Hi,
I have trained a TFLite BERT model in Python. When I train this model without a GPU, it takes around 8 hours per epoch; with a GPU it takes barely 30 minutes, which is significantly faster. I am researching how we can improve its training speed without a GPU.
For reference, here is the code I have written:
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker.text_classifier import DataLoader
from tflite_model_maker.config import ExportFormat
traindata.to_csv('/content/train(1).csv', index=False)
testdata.to_csv('/content/test(1).csv', index=False)
spec = model_spec.get('bert_classifier')
train_data = DataLoader.from_csv(
    filename='/content/train(1).csv',
    text_column='Cleaned_text',
    label_column='labels',
    model_spec=spec,
    is_training=True)
test_data = DataLoader.from_csv(
    filename='/content/test(1).csv',
    text_column='Cleaned_text',
    label_column='labels',
    model_spec=spec,
    is_training=False)
model = text_classifier.create(train_data, model_spec=spec, epochs=7, batch_size=64)
model.summary()
loss, acc = model.evaluate(test_data)
model.export(export_dir='bert_classifier/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])
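One thing I have been experimenting with: for CPU training, the usual levers in Model Maker are a lighter model spec and a shorter sequence length. This is just a sketch of what I mean — the `'mobilebert_classifier'` spec name and `seq_len` attribute are from the Model Maker library, but the value 128 is an illustrative choice, not tuned for my data:

```python
from tflite_model_maker import model_spec

# MobileBERT is a lighter spec than full BERT; it trades some accuracy
# for a large reduction in compute, which matters most on CPU.
spec = model_spec.get('mobilebert_classifier')

# Shorter sequences cut per-step compute sharply, since self-attention
# cost grows roughly quadratically with sequence length. 128 is an
# illustrative value; the right number depends on your text lengths.
spec.seq_len = 128

# The lightest option of all is 'average_word_vec', if the accuracy
# drop is acceptable for the task.
```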
Can anyone help me optimize this code, so that we have a generalized solution that works without a GPU as well?
Note: the code works fine with a GPU and I am able to train within 30 minutes. I am experimenting without a GPU purely for learning purposes.
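One more thing I checked as part of these experiments (this is an assumption on my part, not something in the code above): whether TensorFlow is actually using all CPU cores. By default it usually does, but it can be confirmed with the thread-configuration API:

```python
import tensorflow as tf

# 0 means "let TensorFlow pick", which normally uses all available cores.
print(tf.config.threading.get_intra_op_parallelism_threads())
print(tf.config.threading.get_inter_op_parallelism_threads())

# To change these, they must be set before any ops have run, e.g.:
# tf.config.threading.set_intra_op_parallelism_threads(8)
```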
Thanks