Instructions to run inference with a TFLite BERT model in Python

Steps to run inference with the TFLite BERT model in Python:

Below are the important steps:

Step 1: Import the prerequisites
Before running inference with the TFLite BERT model, first import the required libraries in Python:
# Only tensorflow, numpy, and the tokenizer are strictly needed for inference;
# the remaining imports mirror the training setup.
import tensorflow as tf
import numpy as np
import pandas as pd
import tensorflow_hub as hub
from tensorflow.keras import layers
from tensorflow.keras.preprocessing import sequence
from keras.preprocessing.text import Tokenizer
import bert
from transformers import DistilBertModel, DistilBertTokenizer
from transformers import DistilBertForSequenceClassification, DistilBertConfig
from transformers import BertTokenizer, BertModel
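
Note that Step 2 below calls tokenizer.encode_plus(...), but the tokenizer object itself is not shown. As a minimal sketch, assuming the model was trained with the standard bert-base-uncased vocabulary from the transformers library, you could create it like this (swap in whichever checkpoint or vocabulary you actually trained with):

# Assumption: bert-base-uncased is only a placeholder; use the tokenizer
# that matches the model you converted to TFLite.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")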

Step 2: Encode the test sentence with the BERT tokenizer
To run inference with the trained TFLite BERT model, you first need to encode the test sentence. You can use the following code:
def prepare_features(seq_1, zero_pad=True, max_seq_length=128):
    # Tokenize, add the special [CLS]/[SEP] tokens, and truncate to max_seq_length
    enc_text = tokenizer.encode_plus(seq_1, add_special_tokens=True,
                                     max_length=max_seq_length, truncation=True)
    if zero_pad:
        # Zero-pad input_ids up to max_seq_length so the shape matches the model input
        while len(enc_text['input_ids']) < max_seq_length:
            enc_text['input_ids'].append(0)
            # enc_text['token_type_ids'].append(0)
            # enc_text['shape'].append(0)
    return enc_text
Note: Make sure max_seq_length is the same value you used when training your model. In my case I used 128, but you can change it.
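
As a quick sanity check (the sentence here is just a hypothetical example), you can confirm that the padded input_ids come back with the expected length:

feats = prepare_features("This is a quick test sentence")
print(len(feats['input_ids']))  # should print 128 (max_seq_length)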

Step 3: Load your trained TFLite model from the directory
interpreter = tf.lite.Interpreter(model_path="/content/drive/MyDrive/Hilo datasets/model.tflite")
interpreter.allocate_tensors()
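
Before feeding any data, it can help to print the interpreter's input details and confirm that the expected shape and dtype match what prepare_features produces; the exact values depend on how your model was converted, but for a 128-token model you would typically see something like this:

print(interpreter.get_input_details()[0]['shape'])   # e.g. [  1 128]
print(interpreter.get_input_details()[0]['dtype'])   # e.g. <class 'numpy.int32'>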

Step 4: Run inference on the sentence with your trained model
sentence = "Tensorflow is a library developed by Google "
output = prepare_features(sentence)
output1 = output['input_ids']
encoded_results = np.asarray(output1, dtype=np.int32)
print(encoded_results)
# Add a batch dimension so the input shape becomes (1, max_seq_length)
a = np.expand_dims(encoded_results, axis=0)
print(a.shape)
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]['index'], a)
interpreter.invoke()

Step 5: Check the results and outputs
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
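
For a classification model, output_data usually holds one row of per-class scores (logits or probabilities). As a sketch, assuming that layout, you can pick the predicted class with argmax and apply a softmax yourself if your model emits raw logits rather than probabilities:

scores = output_data[0]
predicted_class = int(np.argmax(scores))
# Softmax is only needed if the model outputs raw logits
probabilities = np.exp(scores - np.max(scores)) / np.sum(np.exp(scores - np.max(scores)))
print(predicted_class, probabilities)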

I hope this helps everyone who wants to work with a text-classification TFLite model and check the results in Python before deploying to mobile devices.
Thanks