Hi @marcocintra ,
A pre-trained model is a model that has been trained on a large dataset and can be used to perform a specific task, such as image classification or object detection.
A checkpointed model is a model that has been saved at a specific point during training. This is useful for resuming training later or for evaluating the model's performance at different points in the training process.
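Assuming you are working with Keras/TensorFlow, here is a minimal sketch of how the two artifacts are usually produced. The toy model, random data, and file names are placeholders I made up for illustration, not anything from your setup:

```python
import os
import numpy as np
import tensorflow as tf

# Toy data and model, just for illustration.
x_train = np.random.rand(100, 8).astype("float32")
y_train = np.random.randint(0, 2, size=(100,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Checkpointing: save the full model (architecture + weights + optimizer state)
# at the end of every epoch so training can be resumed later.
os.makedirs("checkpoints", exist_ok=True)
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="checkpoints/model_epoch_{epoch:02d}.keras",
    save_weights_only=False,
)
model.fit(x_train, y_train, epochs=3, callbacks=[checkpoint_cb])

# Saving the finished model gives you the kind of artifact that is
# usually shared as a "pre-trained model".
model.save("pretrained_model.keras")
# You can also save just the weights, without the architecture or optimizer:
model.save_weights("pretrained.weights.h5")
```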
When you load a pre-trained model, you are loading the model's architecture and weights.
When you load a checkpointed model, you are loading the model's architecture, weights, and the optimizer state (and any other training state that was saved), so training can pick up exactly where it stopped.
To retrain a model, you need to load the model's architecture and weights. You can then compile the model and train it on new data.
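Continuing the hypothetical sketch above, retraining from a saved full model might look like this (again, the file name and data are placeholders):

```python
import numpy as np
import tensorflow as tf

# Loading a full saved model restores the architecture and the weights.
model = tf.keras.models.load_model("pretrained_model.keras")

# The saved model already contains its compile configuration, so you can
# train directly; re-compile only if you want a different optimizer or loss.
x_new = np.random.rand(100, 8).astype("float32")
y_new = np.random.randint(0, 2, size=(100,)).astype("float32")
model.fit(x_new, y_new, epochs=2)
```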
The excerpt from the book that you quoted is also referring to the difference between loading the model and loading the model's weights.
When you load the model, you are loading the model's architecture and weights (and, in Keras, the training configuration and optimizer state if the model was compiled before saving).
When you load the model's weights, you are only loading the weights.
To retrain a model from its weights, you first rebuild the same architecture in code, load the weights into it, compile the model, and then train it on new data.
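A rough sketch of that second route, still using the hypothetical model and file names from above (the architecture has to match the one the weights were saved from):

```python
import numpy as np
import tensorflow as tf

# Rebuild the same architecture in code; the weights file does not contain it.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.load_weights("pretrained.weights.h5")

# Compile with whatever training configuration suits the new run; the
# optimizer starts fresh because only the weights were loaded.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

x_new = np.random.rand(100, 8).astype("float32")
y_new = np.random.randint(0, 2, size=(100,)).astype("float32")
model.fit(x_new, y_new, epochs=2)
```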
Which method you use to retrain the model depends on your needs. If you want to start a fresh training run that reuses the learned weights (for example, fine-tuning with a new optimizer or learning rate), load only the model's weights into a newly built and compiled model. If you want to continue training exactly where it left off, load the full model, since that also restores the optimizer state.
I hope this helps to clarify the difference between loading a model, loading the model's weights, and checkpointing a model.
Thanks.