Issue: Official BERT code not working on TF 2.17; model creation complains about symbolic values

Hi!

I am building a model based on BERT, using the official code from the TensorFlow Kaggle page.

When running the code that creates the model, I get an error saying the tensors are symbolic and have no actual values.

The error is raised at this line (see full code below):
encoder_inputs = preprocessor(text_input)

The error is:
A KerasTensor is symbolic: it's a placeholder for a shape an a dtype. It doesn't have any actual numerical value. You cannot convert it to a NumPy array.

The whole model code is as follows:

import tensorflow as tf
import tensorflow_hub as hub

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
preprocessor = hub.KerasLayer(
    "https://kaggle.com/models/tensorflow/bert/TensorFlow2/en-uncased-preprocess/3")
encoder_inputs = preprocessor(text_input)
encoder = hub.KerasLayer(
    "https://www.kaggle.com/models/tensorflow/bert/TensorFlow2/en-uncased-l-12-h-768-a-12/4",
    trainable=True)
outputs = encoder(encoder_inputs)
pooled_output = outputs["pooled_output"]      # [batch_size, 768].
sequence_output = outputs["sequence_output"]  # [batch_size, seq_length, 768].

Hi @kaarle, this is a known issue. As a workaround, could you please try using the legacy Keras (tf-keras) version, which does not produce the error? Please refer to this gist. Thank you.
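
For anyone else hitting this, here is a minimal sketch of that workaround. It assumes the gist relies on the documented TF_USE_LEGACY_KERAS switch and the tf-keras package (the gist itself is not reproduced here):

# pip install tf-keras   (legacy Keras 2 implementation)
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"   # must be set before TensorFlow is imported

import tensorflow as tf
import tensorflow_hub as hub

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
preprocessor = hub.KerasLayer(
    "https://kaggle.com/models/tensorflow/bert/TensorFlow2/en-uncased-preprocess/3")
encoder_inputs = preprocessor(text_input)  # should no longer raise the KerasTensor error

With legacy Keras active, hub.KerasLayer receives Keras 2 symbolic tensors it can trace, so the rest of the snippet above should work unchanged.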