Hi, I have a question about the documentation found on tensorflow.org involving BERT.
There is one tutorial called “Classify Text with BERT”.
Here is another one called “Fine-tuning a BERT model”.
I was wondering what the difference between the two is when it comes to preprocessing the data. Specifically, the “Classify Text with BERT” tutorial preprocesses the data simply by calling a ready-made preprocessing model from TensorFlow Hub. On the other hand, “Fine-tuning a BERT model” tokenizes and encodes the data in Python by hand, which seems a lot more involved than just using the preprocessing model. Here is roughly how I understand each approach (these are sketches based on my reading of the tutorials, not verbatim tutorial code, and the vocab path in the second part is a placeholder):
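```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # registers the ops the TF Hub preprocessing model needs

# --- Approach 1: TF Hub preprocessing model ("Classify Text with BERT") ---
# I believe this handle is the uncased English preprocessor the tutorial uses.
preprocessor = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder_inputs = preprocessor(tf.constant(["this is such an amazing movie!"]))
# encoder_inputs is a dict of 'input_word_ids', 'input_mask', 'input_type_ids'

# --- Approach 2: manual tokenization ("Fine-tuning a BERT model") ---
from official.nlp.bert import tokenization

# vocab.txt comes from the downloaded BERT checkpoint; the path is a placeholder.
tokenizer = tokenization.FullTokenizer(
    vocab_file="<path-to-checkpoint>/vocab.txt", do_lower_case=True)
tokens = ["[CLS]"] + tokenizer.tokenize("this is such an amazing movie!") + ["[SEP]"]
input_word_ids = tokenizer.convert_tokens_to_ids(tokens)
input_mask = [1] * len(input_word_ids)      # 1 for every real (non-padding) token
input_type_ids = [0] * len(input_word_ids)  # all 0 for a single-sentence input
```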
So basically, I was wondering if there is an actual difference between these two preprocessing methods, and whether there is a reason why one tutorial uses a model while the other actually implements the preprocessing in Python?