Hello,
I tried to train an EfficientDet-Lite2 detection model on custom data following the tutorial. I changed some parameters as follows:
from tflite_model_maker import config, object_detector

train_data, validation_data, test_data = object_detector.DataLoader.from_csv('/home/smurf/efficientDetTrainingDataFinal.csv')

spec = object_detector.EfficientDetSpec(
    model_name='efficientdet-lite2',
    uri='https://tfhub.dev/tensorflow/efficientdet/lite2/feature-vector/1',
    tflite_max_detections=301,
    hparams={'max_instances_per_image': 301, 'autoaugment_policy': None,
             'optimizer': 'adam', 'learning_rate': 0.008, 'lr_warmup_init': 0.0008},
    epochs=120)

model = object_detector.create(train_data, model_spec=spec, batch_size=16,
                               train_whole_model=True, validation_data=validation_data)

# The default export applies full-integer (int8) quantization.
model.export(export_dir='.', tflite_filename='lite2.tflite')
model.export(export_dir='.', tflite_filename='lite2_fp16.tflite',
             quantization_config=config.QuantizationConfig.for_float16())
model.export(export_dir='.', tflite_filename='lite2_dynamic.tflite',
             quantization_config=config.QuantizationConfig.for_dynamic())
When training was done, I tested the models on an image. Both “lite2_fp16.tflite” and “lite2_dynamic.tflite” work fine and give me the desired results, but the int8-quantized model (“lite2.tflite”, the default export) crashes.
It crashes right when I call interpreter.allocate_tensors(), and the only output is:
Aborted (core dumped)
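For reference, my test script follows the standard TFLite inference flow; here is a minimal sketch of it (the image path is a placeholder, and 448x448 is the EfficientDet-Lite2 input size):

import tensorflow as tf

# Load an exported model; 'lite2.tflite' is the default int8 export.
interpreter = tf.lite.Interpreter(model_path='lite2.tflite')
interpreter.allocate_tensors()  # <-- the int8 model aborts here

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Read and resize a test image; the int8 model takes uint8 pixels.
img = tf.io.decode_image(tf.io.read_file('test.jpg'), channels=3)
img = tf.image.resize(img, (448, 448))
img = tf.cast(img, input_details[0]['dtype'])[tf.newaxis, ...]

interpreter.set_tensor(input_details[0]['index'], img.numpy())
interpreter.invoke()
detections = interpreter.get_tensor(output_details[0]['index'])

The fp16 and dynamic-range models run through this exact script without issues; only the int8 one aborts.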
I also tried loading from the checkpoint again, training for a couple more epochs, and exporting; the export finishes without any error, but the model still crashes when I try to run inference.
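(To resume, I recreate the spec with model_dir pointing at the previous run's checkpoint directory; as far as I understand, that is where Model Maker restores weights from. The path below is just an example:)

# Same spec as before, but with model_dir pointing at the earlier run's
# checkpoints so training restores the saved weights (example path).
spec = object_detector.EfficientDetSpec(
    model_name='efficientdet-lite2',
    uri='https://tfhub.dev/tensorflow/efficientdet/lite2/feature-vector/1',
    model_dir='/home/smurf/lite2_checkpoints',
    tflite_max_detections=301,
    hparams={'max_instances_per_image': 301},
    epochs=2)

model = object_detector.create(train_data, model_spec=spec, batch_size=16,
                               train_whole_model=True, validation_data=validation_data)
model.export(export_dir='.', tflite_filename='lite2_resumed.tflite')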
The funny part is that when I train the model for only a few epochs (1 to 8), the conversion completes correctly and the int8 model works.
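For clarity, the int8 conversion I mean is the default export; writing it out explicitly, it should be equivalent to something like this (full-integer quantization calibrated on the training set):

# Explicit int8 export, equivalent (as far as I can tell) to the default:
# full-integer quantization with the training data as representative data.
int8_config = config.QuantizationConfig.for_int8(representative_data=train_data)
model.export(export_dir='.', tflite_filename='lite2_int8.tflite',
             quantization_config=int8_config)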