Input tensor 0 is missing TensorMetadata

This article shows how to add metadata to a tflite model before running it on Android. However, it only shows an example for image classification. I am trying to add metadata for audio classification. Where can I find a guide?

I am getting this error on Android when running a model.tflite that has no metadata.

Error getting native address of native library: task_audio_jni
java.lang.IllegalArgumentException: Error occurred when initializing AudioClassifier: input tensor 0 is missing TensorMetadata.
at org.tensorflow.lite.task.audio.classifier.AudioClassifier.initJniWithModelFdAndOptions(Native Method)
at org.tensorflow.lite.task.audio.classifier.AudioClassifier.access$000(AudioClassifier.java:74)
at org.tensorflow.lite.task.audio.classifier.AudioClassifier$1.createHandle(AudioClassifier.java:143)
at org.tensorflow.lite.task.audio.classifier.AudioClassifier$1.createHandle(AudioClassifier.java:136)
at org.tensorflow.lite.task.core.TaskJniUtils$1.createHandle(TaskJniUtils.java:70)
at org.tensorflow.lite.task.core.TaskJniUtils.createHandleFromLibrary(TaskJniUtils.java:91)
at org.tensorflow.lite.task.core.TaskJniUtils.createHandleFromFdAndOptions(TaskJniUtils.java:66)
at org.tensorflow.lite.task.audio.classifier.AudioClassifier.createFromFileAndOptions(AudioClassifier.java:134)
at org.tensorflow.lite.task.audio.classifier.AudioClassifier.createFromFile(AudioClassifier.java:91)
at com.samsung.classification.MainActivity.startAudioClassification(MainActivity.java:86)
at com.samsung.classification.MainActivity.lambda$showActivitySelectionDialogue$0$com-samsung-classification-MainActivity(MainActivity.java:51)
at com.samsung.classification.MainActivity$ExternalSyntheticLambda0.onClick(Unknown Source:2)
at com.android.internal.app.AlertController$AlertParams$3.onItemClick(AlertController.java:1485)
at android.widget.AdapterView.performItemClick(AdapterView.java:376)
at android.widget.AbsListView.performItemClick(AbsListView.java:1295)
at android.widget.AbsListView$PerformClick.run(AbsListView.java:3571)
at android.widget.AbsListView$3.run(AbsListView.java:4751)
at android.os.Handler.handleCallback(Handler.java:942)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:226)
at android.os.Looper.loop(Looper.java:313)

Hi @hissain

Is your Python code working OK before converting the model to .tflite? What example did you use to train the model? Did you use TF Lite Model Maker or a different approach?
The code for inserting the metadata is the same as the one you have provided. Can you share more info about the normalization and the other details of your model that must be added?

Check out some examples with TF Lite Model Maker, which adds the metadata automatically.

Here is an example of how TF Lite Model Maker adds the parameters inside the model.

You can also see which parameters apply depending on whether it is YamnetSpec or BrowserFftSpec.
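As a rough reference, the audio input parameters differ per spec. The values below are my own summary of what the Model Maker specs expect; verify them against the tflite_model_maker source for your installed version before relying on them:

```python
# Indicative audio input parameters per Model Maker spec (my summary;
# verify against the tflite_model_maker source for your version).
SPEC_AUDIO_PARAMS = {
    "YamnetSpec": {"sample_rate": 16000, "channels": 1},      # YAMNet expects 16 kHz mono
    "BrowserFftSpec": {"sample_rate": 44100, "channels": 1},  # Browser-FFT expects 44.1 kHz mono
}

def required_params(spec_name: str) -> dict:
    """Look up the expected sample rate / channel count for a spec."""
    return SPEC_AUDIO_PARAMS[spec_name]

print(required_params("YamnetSpec"))
```

These are the same numbers you would plug into the AudioProperties metadata fields if you write the metadata by hand.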

It was working OK in the same notebook; the evaluation score was around 90% accuracy. I was trying to install tflite_model_maker; however, it fails to install because of dependency conflicts with tflite_support. Is there any guideline on this?

Please refer to the issue I am facing: python - ERROR: Cannot install tflite-model-maker (The conflict is caused by other modules) - Stack Overflow
@George_Soloupis

It is a known issue. Please refer to http://discuss.ai.google.dev/t/future-of-tflite-model-maker-and-mediapipe-model-maker/17375 or take a look at a recent example I wrote:
Use TensorFlow Lite Model Maker with a custom dataset | by George Soloupis | Medium
Inside you can find a Python notebook with an example of using a conda env inside Colab.

Regards

@George_Soloupis your blog provides examples for Unix systems. I was trying to run the same, but tflite_model_maker keeps installing indefinitely and the storage fills up. Any other suggestions?

@hissain The example is working OK inside a Colab session.
No other suggestions I can think of.

@George_Soloupis can you share any working notebook? I have the same issue in a Colab notebook as well.

Inside the blog post there is a link to a notebook, just before the code snippets.

@George_Soloupis could you please tell me where I can get the /gdrive/MyDrive/JWick/gun_shot_wav.zip file? I was trying to run the Colab and it failed to unzip because I have no zip file in my Drive. Here is my notebook copied from your Git repo.

All you need to use is inside the blog post!

Hi @hissain
In this case you need to add the TensorFlow Lite metadata manually for audio classification. Here is how I did it.

import os

from tflite_support import flatbuffers
from tflite_support import metadata as _metadata
from tflite_support import metadata_schema_py_generated as _metadata_fb

# Creates the metadata for an audio classifier.

# Creates model info.
model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = "model name"
model_meta.description = "Description"
model_meta.version = "v1"
model_meta.author = "Your Name"
model_meta.license = ("Apache License. Version 2.0 "
                      "http://www.apache.org/licenses/LICENSE-2.0.")

# Creates input info.
input_meta = _metadata_fb.TensorMetadataT()
input_meta.name = "name"
input_meta.description = "description"
input_meta.content = _metadata_fb.ContentT()
input_meta.content.contentProperties = _metadata_fb.AudioPropertiesT()
input_meta.content.contentProperties.channels = 1  # number of channels, e.g. 1 for mono
input_meta.content.contentProperties.sampleRate = 16000  # your model's sample rate in Hz
input_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.AudioProperties)
input_stats = _metadata_fb.StatsT()
input_meta.stats = input_stats

# Creates output info.
output_meta = _metadata_fb.TensorMetadataT()
output_meta.name = "name"
output_meta.description = "description"
output_meta.content = _metadata_fb.ContentT()
output_meta.content.contentProperties = _metadata_fb.FeaturePropertiesT()
output_meta.content.contentPropertiesType = (
    _metadata_fb.ContentProperties.FeatureProperties)
output_stats = _metadata_fb.StatsT()
output_stats.max = [1.0]
output_stats.min = [0.0]
output_meta.stats = output_stats

label_file = _metadata_fb.AssociatedFileT()
label_file.name = os.path.basename("label_file location")
label_file.description = "Description"
label_file.type = _metadata_fb.AssociatedFileType.TENSOR_AXIS_LABELS
output_meta.associatedFiles = [label_file]

# Creates subgraph info.
subgraph = _metadata_fb.SubGraphMetadataT()
subgraph.inputTensorMetadata = [input_meta]
subgraph.outputTensorMetadata = [output_meta]
model_meta.subgraphMetadata = [subgraph]
print(subgraph.inputTensorMetadata[0].content.contentProperties.sampleRate)
print(subgraph.inputTensorMetadata[0].content.contentProperties.channels)
b = flatbuffers.Builder(0)
b.Finish(
    model_meta.Pack(b),
    _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
metadata_buf = b.Output()
print(metadata_buf)
populator = _metadata.MetadataPopulator.with_model_file("model location")
populator.load_metadata_buffer(metadata_buf)
populator.load_associated_files(["label file location"])
populator.populate()

Change the file locations, tensor names, and audio parameters as per your model and system.