How to solve UnknownError: Graph execution error

I am trying to use albumentations for data augmentation, following this notebook. When I try to fit the model, I get an UnknownError: Graph execution error. My augmentation wrapper (simplified) and the full error output are below.
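For reference, my augmentation function follows the notebook's tf.numpy_function pattern, roughly like this (a simplified sketch; my real transform list is longer, but HueSaturationValue is the one that appears in the traceback):

import tensorflow as tf
import albumentations as A

# Simplified reconstruction of the notebook's pattern
transforms = A.Compose([A.HueSaturationValue(p=1.0)])

def aug_fn(image):
    data = {"image": image}
    aug_data = transforms(**data)  # the call that raises in the traceback
    return aug_data["image"].astype("float32")

def process_data(image, label):
    # tf.numpy_function lets albumentations (NumPy/OpenCV) run inside tf.data
    aug_img = tf.numpy_function(func=aug_fn, inp=[image], Tout=tf.float32)
    aug_img.set_shape([224, 224, 3])
    return aug_img, label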

Epoch 1/10
---------------------------------------------------------------------------
UnknownError                              Traceback (most recent call last)
<ipython-input-42-f02a657ecadb> in <module>()
     47     )
     48     model.compile(loss=loss_func,optimizer=optimizer)
---> 49     history = model.fit(train_ds,validation_data=val_ds,class_weight=class_weights,epochs=EPOCHS,callbacks=[mc,es],verbose=1)
     50 
     51     perf[fold+1] = history

1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     53     ctx.ensure_initialized()
     54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55                                         inputs, attrs, num_outputs)
     56   except core._NotOkStatusException as e:
     57     if name is not None:

UnknownError: Graph execution error:

2 root error(s) found.
  (0) UNKNOWN:  error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3, 4>; VDcn = cv::impl::{anonymous}::Set<3>; VDepth = cv::impl::{anonymous}::Set<0, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = (cv::impl::<unnamed>::SizePolicy)2u; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 1

Traceback (most recent call last):

  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/script_ops.py", line 271, in __call__
    ret = func(*args)

  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py", line 642, in wrapper
    return func(*args, **kwargs)

  File "<ipython-input-26-1b9ce6086764>", line 3, in aug_fn
    aug_data = transforms(**data)

  File "/usr/local/lib/python3.7/dist-packages/albumentations/core/composition.py", line 171, in __call__
    data = t(**data)

  File "/usr/local/lib/python3.7/dist-packages/albumentations/core/transforms_interface.py", line 38, in __call__
    res[key] = target_function(arg, **dict(params, **target_dependencies))

  File "/usr/local/lib/python3.7/dist-packages/albumentations/augmentations/transforms.py", line 898, in apply
    return F.shift_hsv(image, hue_shift, sat_shift, val_shift)

  File "/usr/local/lib/python3.7/dist-packages/albumentations/augmentations/functional.py", line 244, in shift_hsv
    img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)

cv2.error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3, 4>; VDcn = cv::impl::{anonymous}::Set<3>; VDepth = cv::impl::{anonymous}::Set<0, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = (cv::impl::<unnamed>::SizePolicy)2u; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 1



	 [[{{node PyFunc}}]]
	 [[IteratorGetNext]]
	 [[IteratorGetNext/_2]]
  (1) UNKNOWN:  error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3, 4>; VDcn = cv::impl::{anonymous}::Set<3>; VDepth = cv::impl::{anonymous}::Set<0, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = (cv::impl::<unnamed>::SizePolicy)2u; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 1

Traceback (most recent call last):

  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/script_ops.py", line 271, in __call__
    ret = func(*args)

  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py", line 642, in wrapper
    return func(*args, **kwargs)

  File "<ipython-input-26-1b9ce6086764>", line 3, in aug_fn
    aug_data = transforms(**data)

  File "/usr/local/lib/python3.7/dist-packages/albumentations/core/composition.py", line 171, in __call__
    data = t(**data)

  File "/usr/local/lib/python3.7/dist-packages/albumentations/core/transforms_interface.py", line 38, in __call__
    res[key] = target_function(arg, **dict(params, **target_dependencies))

  File "/usr/local/lib/python3.7/dist-packages/albumentations/augmentations/transforms.py", line 898, in apply
    return F.shift_hsv(image, hue_shift, sat_shift, val_shift)

  File "/usr/local/lib/python3.7/dist-packages/albumentations/augmentations/functional.py", line 244, in shift_hsv
    img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)

cv2.error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3, 4>; VDcn = cv::impl::{anonymous}::Set<3>; VDepth = cv::impl::{anonymous}::Set<0, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = (cv::impl::<unnamed>::SizePolicy)2u; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 1



	 [[{{node PyFunc}}]]
	 [[IteratorGetNext]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_43969]

From what I can tell from the error, it seems my images have only one channel, which is strange because they actually have 3 channels, and that is what I am passing to the model.

Yes, it seems that OpenCV is receiving a 1-channel image in the color conversion: cv2.cvtColor with COLOR_RGB2HSV expects a 3-channel input, and "'scn' is 1" in the error means the image it got had a single channel.
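If some of your files are grayscale, that would fail exactly like this. One way to rule that out is to force 3 channels at decode time, for example (a minimal sketch; load_image is a hypothetical stand-in for wherever you decode your files):

import tensorflow as tf

def load_image(path):
    raw = tf.io.read_file(path)
    # channels=3 expands grayscale images to RGB at decode time,
    # so the RGB2HSV conversion always sees 3 channels
    img = tf.io.decode_image(raw, channels=3, expand_animations=False)
    img.set_shape([None, None, 3])
    return img

If the images are already decoded as 1-channel tensors, tf.image.grayscale_to_rgb(img) does the same job.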

P.S. You can now also use our new native augmentation at:

Thanks, I guess albumentations doesn't really play well with TensorFlow. I will try out KerasCV.

@Bhack, when I try using RandomAugmentationPipeline I get this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-78-517638ebd92e> in <module>()
      1 dts = create_dataset(dummy_train)
----> 2 dts = dts.map(apply_pipeline,num_parallel_calls=CONFIG.AUTO)
      3 dvs = create_dataset(dummy_val).map(apply_pipeline,num_parallel_calls=CONFIG.AUTO)
      4 test_ds = create_dataset(testing,augment=False,labeled=False)

33 frames
/usr/local/lib/python3.7/dist-packages/keras_cv/layers/preprocessing/mix_up.py in tf___augment(self, inputs)
      6         def tf___augment(self, inputs):
      7             with ag__.FunctionScope('_augment', 'fscope', ag__.STD) as fscope:
----> 8                 raise ag__.converted_call(ag__.ld(ValueError), ('MixUp received a single image to `call`.  The layer relies on combining multiple examples, and as such will not behave as expected.  Please call the layer with 2 or more samples.',), None, fscope)
      9         return tf___augment
     10     return inner_factory

ValueError: in user code:

    File "<ipython-input-69-8b3e47e9314c>", line 9, in apply_pipeline  *
        inputs["images"] = pipeline(inputs["images"])
    File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler  **
        raise e.with_traceback(filtered_tb) from None
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 76, in tf__call
        ag__.if_stmt(ag__.ld(training), if_body_2, else_body_2, get_state_2, set_state_2, ('do_return', 'retval_', 'inputs'), 2)
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 63, in if_body_2
        ag__.if_stmt((ag__.ld(images).shape.rank == 3), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_'), 2)
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 62, in else_body_1
        ag__.if_stmt((ag__.ld(images).shape.rank == 4), if_body, else_body, get_state, set_state, ('do_return', 'retval_'), 2)
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 54, in if_body
        retval_ = ag__.converted_call(ag__.ld(self)._format_output, (ag__.converted_call(ag__.ld(self)._batch_augment, (ag__.ld(inputs),), None, fscope), ag__.ld(is_dict), ag__.ld(use_targets)), None, fscope)
    File "/tmp/__autograph_generated_fileqm866lxs.py", line 12, in tf___batch_augment
        retval_ = ag__.converted_call(ag__.ld(self)._map_fn, (ag__.ld(self)._augment, ag__.ld(inputs)), None, fscope)
    File "/tmp/__autograph_generated_filegnx_otes.py", line 26, in tf___augment
        ag__.for_stmt(ag__.converted_call(ag__.ld(range), (ag__.ld(self).augmentations_per_image,), None, fscope), None, loop_body, get_state, set_state, ('result',), {'iterate_names': '_'})
    File "/tmp/__autograph_generated_filegnx_otes.py", line 23, in loop_body
        result = ag__.converted_call(ag__.ld(tf).cond, ((ag__.ld(skip_augment) > ag__.ld(self).rate), ag__.autograph_artifact((lambda : ag__.ld(inputs))), ag__.autograph_artifact((lambda : ag__.converted_call(ag__.ld(self)._random_choice, (ag__.ld(result),), None, fscope)))), None, fscope)
    File "/tmp/__autograph_generated_filegnx_otes.py", line 23, in <lambda>
        result = ag__.converted_call(ag__.ld(tf).cond, ((ag__.ld(skip_augment) > ag__.ld(self).rate), ag__.autograph_artifact((lambda : ag__.ld(inputs))), ag__.autograph_artifact((lambda : ag__.converted_call(ag__.ld(self)._random_choice, (ag__.ld(result),), None, fscope)))), None, fscope)
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 76, in tf__call
        ag__.if_stmt(ag__.ld(training), if_body_2, else_body_2, get_state_2, set_state_2, ('do_return', 'retval_', 'inputs'), 2)
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 63, in if_body_2
        ag__.if_stmt((ag__.ld(images).shape.rank == 3), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_'), 2)
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 35, in if_body_1
        retval_ = ag__.converted_call(ag__.ld(self)._format_output, (ag__.converted_call(ag__.ld(self)._augment, (ag__.ld(inputs),), None, fscope), ag__.ld(is_dict), ag__.ld(use_targets)), None, fscope)
    File "/tmp/__autograph_generated_filekdazyrx0.py", line 14, in tf___augment
        retval_ = ag__.converted_call(ag__.ld(tf).switch_case, (), dict(branch_index=ag__.ld(selected_op), branch_fns=ag__.ld(branch_fns), default=ag__.autograph_artifact((lambda : ag__.ld(inputs)))), fscope)
    File "/tmp/__autograph_generated_fileuhcqprv5.py", line 18, in call_layer
        retval__1 = ag__.converted_call(ag__.ld(layer), (ag__.ld(inputs),), None, fscope_1)
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 76, in tf__call
        ag__.if_stmt(ag__.ld(training), if_body_2, else_body_2, get_state_2, set_state_2, ('do_return', 'retval_', 'inputs'), 2)
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 63, in if_body_2
        ag__.if_stmt((ag__.ld(images).shape.rank == 3), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_'), 2)
    File "/tmp/__autograph_generated_fileefjo6q64.py", line 35, in if_body_1
        retval_ = ag__.converted_call(ag__.ld(self)._format_output, (ag__.converted_call(ag__.ld(self)._augment, (ag__.ld(inputs),), None, fscope), ag__.ld(is_dict), ag__.ld(use_targets)), None, fscope)
    File "/tmp/__autograph_generated_file5yi92ni5.py", line 8, in tf___augment
        raise ag__.converted_call(ag__.ld(ValueError), ('MixUp received a single image to `call`.  The layer relies on combining multiple examples, and as such will not behave as expected.  Please call the layer with 2 or more samples.',), None, fscope)

    ValueError: Exception encountered when calling layer "random_augmentation_pipeline_11" (type RandomAugmentationPipeline).
    
    in user code:
    
        File "/usr/local/lib/python3.7/dist-packages/keras_cv/layers/preprocessing/base_image_augmentation_layer.py", line 217, in call  *
            return self._format_output(self._augment(inputs), is_dict, use_targets)
        File "/usr/local/lib/python3.7/dist-packages/keras_cv/layers/preprocessing/base_image_augmentation_layer.py", line 264, in _batch_augment  *
            return self._map_fn(self._augment, inputs)
        File "/usr/local/lib/python3.7/dist-packages/keras_cv/layers/preprocessing/random_augmentation_pipeline.py", line 93, in _augment  *
            result = tf.cond(
        File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
            raise e.with_traceback(filtered_tb) from None
        File "/tmp/__autograph_generated_fileefjo6q64.py", line 76, in tf__call
            ag__.if_stmt(ag__.ld(training), if_body_2, else_body_2, get_state_2, set_state_2, ('do_return', 'retval_', 'inputs'), 2)
        File "/tmp/__autograph_generated_fileefjo6q64.py", line 63, in if_body_2
            ag__.if_stmt((ag__.ld(images).shape.rank == 3), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_'), 2)
        File "/tmp/__autograph_generated_fileefjo6q64.py", line 35, in if_body_1
            retval_ = ag__.converted_call(ag__.ld(self)._format_output, (ag__.converted_call(ag__.ld(self)._augment, (ag__.ld(inputs),), None, fscope), ag__.ld(is_dict), ag__.ld(use_targets)), None, fscope)
        File "/tmp/__autograph_generated_filekdazyrx0.py", line 14, in tf___augment
            retval_ = ag__.converted_call(ag__.ld(tf).switch_case, (), dict(branch_index=ag__.ld(selected_op), branch_fns=ag__.ld(branch_fns), default=ag__.autograph_artifact((lambda : ag__.ld(inputs)))), fscope)
        File "/tmp/__autograph_generated_fileuhcqprv5.py", line 18, in call_layer
            retval__1 = ag__.converted_call(ag__.ld(layer), (ag__.ld(inputs),), None, fscope_1)
        File "/tmp/__autograph_generated_fileefjo6q64.py", line 76, in tf__call
            ag__.if_stmt(ag__.ld(training), if_body_2, else_body_2, get_state_2, set_state_2, ('do_return', 'retval_', 'inputs'), 2)
        File "/tmp/__autograph_generated_fileefjo6q64.py", line 63, in if_body_2
            ag__.if_stmt((ag__.ld(images).shape.rank == 3), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_'), 2)
        File "/tmp/__autograph_generated_fileefjo6q64.py", line 35, in if_body_1
            retval_ = ag__.converted_call(ag__.ld(self)._format_output, (ag__.converted_call(ag__.ld(self)._augment, (ag__.ld(inputs),), None, fscope), ag__.ld(is_dict), ag__.ld(use_targets)), None, fscope)
        File "/tmp/__autograph_generated_file5yi92ni5.py", line 8, in tf___augment
            raise ag__.converted_call(ag__.ld(ValueError), ('MixUp received a single image to `call`.  The layer relies on combining multiple examples, and as such will not behave as expected.  Please call the layer with 2 or more samples.',), None, fscope)
    
        ValueError: Exception encountered when calling layer "random_choice_11" (type RandomChoice).
        
        in user code:
        
            File "/usr/local/lib/python3.7/dist-packages/keras_cv/layers/preprocessing/base_image_augmentation_layer.py", line 217, in call  *
                return self._format_output(self._augment(inputs), is_dict, use_targets)
            File "/usr/local/lib/python3.7/dist-packages/keras_cv/layers/preprocessing/random_choice.py", line 92, in _augment  *
                return tf.switch_case(
            File "/tmp/__autograph_generated_fileuhcqprv5.py", line 18, in call_layer
                retval__1 = ag__.converted_call(ag__.ld(layer), (ag__.ld(inputs),), None, fscope_1)
            File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler  **
                raise e.with_traceback(filtered_tb) from None
            File "/tmp/__autograph_generated_fileefjo6q64.py", line 76, in tf__call
                ag__.if_stmt(ag__.ld(training), if_body_2, else_body_2, get_state_2, set_state_2, ('do_return', 'retval_', 'inputs'), 2)
            File "/tmp/__autograph_generated_fileefjo6q64.py", line 63, in if_body_2
                ag__.if_stmt((ag__.ld(images).shape.rank == 3), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_'), 2)
            File "/tmp/__autograph_generated_fileefjo6q64.py", line 35, in if_body_1
                retval_ = ag__.converted_call(ag__.ld(self)._format_output, (ag__.converted_call(ag__.ld(self)._augment, (ag__.ld(inputs),), None, fscope), ag__.ld(is_dict), ag__.ld(use_targets)), None, fscope)
            File "/tmp/__autograph_generated_file5yi92ni5.py", line 8, in tf___augment
                raise ag__.converted_call(ag__.ld(ValueError), ('MixUp received a single image to `call`.  The layer relies on combining multiple examples, and as such will not behave as expected.  Please call the layer with 2 or more samples.',), None, fscope)
        
            ValueError: Exception encountered when calling layer "mix_up_9" (type MixUp).
            
            in user code:
            
                File "/usr/local/lib/python3.7/dist-packages/keras_cv/layers/preprocessing/base_image_augmentation_layer.py", line 217, in call  *
                    return self._format_output(self._augment(inputs), is_dict, use_targets)
                File "/usr/local/lib/python3.7/dist-packages/keras_cv/layers/preprocessing/mix_up.py", line 74, in _augment  *
                    raise ValueError(
            
                ValueError: MixUp received a single image to `call`.  The layer relies on combining multiple examples, and as such will not behave as expected.  Please call the layer with 2 or more samples.
            
            
            Call arguments received by layer "mix_up_9" (type MixUp):
              • inputs={'images': 'tf.Tensor(shape=(224, 224, 3), dtype=float32)'}
              • training=True
        
        
        Call arguments received by layer "random_choice_11" (type RandomChoice):
          • inputs={'images': 'tf.Tensor(shape=(224, 224, 3), dtype=float32)'}
          • training=True
    
    
    Call arguments received by layer "random_augmentation_pipeline_11" (type RandomAugmentationPipeline):
      • inputs=tf.Tensor(shape=(None, 224, 224, 3), dtype=float32)
      • training=True

This is my data pipeline:

def create_dataset(dataset, augment=True, labeled=True):
  if augment:
    # Per-image albumentations map (the KerasCV pipeline map is commented out)
    dataset = dataset.map(augmentation, num_parallel_calls=CONFIG.AUTO)
    # dataset = dataset.map(apply_pipeline, num_parallel_calls=CONFIG.AUTO)
  if labeled:
    dataset = dataset.shuffle(CONFIG.BUFFER_SIZE)
  # Batching happens last, after the per-image augmentation map
  dataset = dataset.batch(CONFIG.BATCH_SIZE).prefetch(CONFIG.AUTO)
  return dataset

and this is how I am calling it:

dts = create_dataset(dummy_train)
dts = dts.map(apply_pipeline, num_parallel_calls=CONFIG.AUTO)

Check this tutorial:

It seems that in your case the MixUp augmentation is receiving only one sample, but this augmentation requires more than one.

Thanks, I was able to solve the issue.

It has been a while, but if memory serves me right, the augmentation required that my dataset be batched already. Essentially, make sure you are applying the augmentation over a batch of images rather than one image at a time; a sketch of what that looks like is below.
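Roughly along these lines (a sketch from memory, reusing the CONFIG names from my pipeline above; here MixUp is called directly on the already-batched dataset, with one-hot float labels, rather than inside RandomAugmentationPipeline):

import tensorflow as tf
import keras_cv

mix_up = keras_cv.layers.MixUp(alpha=0.2)

def apply_mix_up(images, labels):
    # MixUp blends pairs of examples, so it needs a full batch and the
    # matching (one-hot, float) labels passed in together
    out = mix_up({"images": images, "labels": labels}, training=True)
    return out["images"], out["labels"]

ds = (dummy_train
      .shuffle(CONFIG.BUFFER_SIZE)
      .batch(CONFIG.BATCH_SIZE)  # batch BEFORE the augmentation
      .map(apply_mix_up, num_parallel_calls=CONFIG.AUTO)
      .prefetch(CONFIG.AUTO))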

I am also facing the same issue. It would be helpful if anybody could share a solution for this.