Fine-tuning a pre-trained model while replacing one of the pre-trained layers with a new PyTorch layer

Hi everyone!

I want to fine-tune a pre-trained BERT model from the official BERT repository, replacing one of its pre-trained dense layers with a custom PyTorch layer.

I’ve been trying to implement this, but I haven’t been able to figure it out. What I have learned so far is that the get_assignment_map_from_checkpoint function computes the union of the current variables and the checkpoint variables, and its output is then passed into tf.train.init_from_checkpoint.
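One approach that might work (sketched below, not tested against the actual BERT repo): since tf.train.init_from_checkpoint only restores the variables listed in the assignment map, you could filter the map returned by get_assignment_map_from_checkpoint to drop the layer you are replacing, so that layer keeps its fresh initialization while everything else loads from the checkpoint. The helper name `filter_assignment_map` and the variable-name prefix `bert/pooler/dense` below are hypothetical placeholders for whichever layer you swap out:

```python
# Hypothetical sketch: drop the replaced layer from the assignment map
# so tf.train.init_from_checkpoint leaves it randomly initialized.

def filter_assignment_map(assignment_map, excluded_prefix):
    """Return a copy of the map without entries whose target variable
    name starts with excluded_prefix (the layer trained from scratch)."""
    return {
        ckpt_name: var_name
        for ckpt_name, var_name in assignment_map.items()
        if not var_name.startswith(excluded_prefix)
    }

# Toy stand-in for what get_assignment_map_from_checkpoint might return:
assignment_map = {
    "bert/encoder/layer_11/output/dense/kernel":
        "bert/encoder/layer_11/output/dense/kernel",
    "bert/pooler/dense/kernel": "bert/pooler/dense/kernel",
    "bert/pooler/dense/bias": "bert/pooler/dense/bias",
}

filtered = filter_assignment_map(assignment_map, "bert/pooler/dense")
# Then restore only the remaining variables, e.g.:
# tf.train.init_from_checkpoint(init_checkpoint, filtered)
```

This keeps the TensorFlow side self-contained; porting the replaced layer itself to PyTorch would be a separate step on top of this.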

Any tips/advice/suggestions would be greatly appreciated. Thank you in advance for your help!