Difficulties adapting text generation example to regression problem

Hello All,

I am a researcher attempting to develop a hybrid CNN-RNN network that generates a sequence of outputs approximating a multiphysics simulation. I have already trained the CNN portion of the network (which handles the instantaneous component of the problem) and am now trying to create a code framework for integrating it with an RNN to predict time-series data.

As a first step, I have attempted to create a simpler, RNN-only model based on the text generation tutorial on the TensorFlow website, which predicts the target output at each timestep from the previous timestep's output. I defined the following class for the initial model to be trained:

import tensorflow as tf
from tensorflow.keras import layers as tfl

class RNNOnlyModel(tf.keras.Model):
  def __init__(self, rnn_units, leaky_alpha=0.2):
    super().__init__()  # note: no `self` argument here

    self.rnn_units = rnn_units
    self.leaky_alpha = leaky_alpha  # stored but currently unused

    # Each timestep carries 2 features; the time dimension is left unspecified.
    self.lstm = tfl.LSTM(rnn_units,
                         return_sequences=True,
                         return_state=True,
                         input_shape=(None, 2))
    self.dense = tfl.Dense(1, activation=tfl.ReLU(), kernel_initializer='he_normal')

  def call(self, inputs, states=None, return_state=False, training=False):
    x = inputs
    if states is None:
      states = self.lstm.get_initial_state(x)
    # An LSTM carries two state tensors; both must be passed between calls.
    x, state_h, state_c = self.lstm(x, initial_state=states, training=training)
    x = self.dense(x, training=training)
    if return_state:
      return x, [state_h, state_c]
    else:
      return x
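
For what it is worth, the freshly constructed model does appear to accept a variable number of timesteps when called eagerly; a quick check along these lines (the small width is chosen just for the test) runs without complaint:

test_model = RNNOnlyModel(8)
for n_steps in (1, 5, 50):
    out = test_model(tf.zeros((1, n_steps, 2)))
    print(out.shape)  # (1, n_steps, 1) in each case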

This model and its training-data pipeline were instantiated using the following code (with N_A = 1024):

x_input = tf.data.Dataset.from_tensor_slices(x_dataset)
y_input = tf.data.Dataset.from_tensor_slices(y_dataset)

train_dataset = tf.data.Dataset.zip((x_input, y_input)).batch(BATCH_SIZE).prefetch(1)

model = RNNOnlyModel(N_A)
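
For reference, x_dataset contains float32 sequences of shape (n_sequences, 676, 2) (676 timesteps, 2 features each) and y_dataset the corresponding per-timestep targets; a synthetic stand-in with the same layout would be something like:

import numpy as np

n_train_sequences = 32  # placeholder count, just for illustration
x_dataset = np.random.rand(n_train_sequences, 676, 2).astype('float32')
y_dataset = np.random.rand(n_train_sequences, 676, 1).astype('float32')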

The model was successfully trained for 100 epochs and saved to disk, roughly as follows (loss and optimizer simplified for brevity):
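
model.compile(optimizer='adam', loss='mae')  # placeholder loss/optimizer
model.fit(train_dataset, epochs=100)
model.save(model_name)  # saved in the SavedModel format

Following this, a separate program was created for sequence generation, using the following one-step model class based on the tutorial: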

class OneStep(tf.keras.Model):
  def __init__(self, model):
    super().__init__()
    self.model = model
    self.reshaper = tfl.Reshape((1, 1))

  @tf.function
  def generate_one_step(self, inputs, states=None):
    # Run the model.
    # predicted_fgr.shape is [batch, timesteps, 1].
    predicted_fgr, states = self.model(inputs=inputs, states=states,
                                       return_state=True)
    # Only use the prediction from the last timestep.
    predicted_fgr = predicted_fgr[:, -1, :]
    predicted_fgr = self.reshaper(predicted_fgr)

    # Return the prediction and the model state.
    return predicted_fgr, states

Finally, I attempted to use the following code for sequence generation:

model = tf.keras.models.load_model(model_name)
one_step_model = OneStep(model)

results = []
MAE_list = []

for i in range(num_sequences):
    # Seed with the first timestep of sequence i.
    x = tf.expand_dims(x_data[i][0], 0)
    states = None
    result = [x_data[i][0][0]]

    for j in range(num_steps):
        pred, states = one_step_model.generate_one_step(x, states=states)
        pred_conv = pred[0]
        result.append(pred_conv)
        # Feed the prediction back in alongside the next timestep's second feature.
        x = tf.expand_dims(tf.concat([pred_conv, x_data[i][j][1]], axis=0), axis=0)

    results.append(result)

    mae = tf.keras.metrics.mean_absolute_error(y_data[i][0:num_steps], result)
    print(mae)
    MAE_list.append(float(tf.math.reduce_mean(mae)))

However, when I attempted to run this code, I received the following error:

ValueError: Could not find matching concrete function to call loaded from the SavedModel. Got:
      Positional arguments (4 total):
        * <tf.Tensor 'inputs:0' shape=(1, 1, 2) dtype=float32>
        * None
        * True
        * False
      Keyword arguments: {}
    
     Expected these arguments to match one of the following 2 option(s):
    
    Option 1:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 676, 2), dtype=tf.float32, name='input_1')
        * None
        * False
        * True
      Keyword arguments: {}
    
    Option 2:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 676, 2), dtype=tf.float32, name='input_1')
        * None
        * False
        * False
      Keyword arguments: {}

Despite the tutorial explicitly stating that the RNN model can make predictions on variable-length input, the model loaded here is hardcoded to accept only inputs whose number of timesteps matches the length of the original training sequences (676, per the signatures above). I have examined other tutorials on one-to-many sequence generation with RNNs, but many of them have the network predict an entire sequence at every step and keep only the first prediction; given the nature of my intended hybrid CNN-RNN model, that approach would be prohibitively wasteful computationally for my problem. Is there any way to enable my RNN model to accept variable-length data during prediction?
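
For completeness, here is a minimal sketch of the asymmetry I am describing (the path is hypothetical, and I am assuming the model is saved with model.save() as in the training sketch above):

# In memory, the subclassed model accepts any number of timesteps:
fresh = RNNOnlyModel(N_A)
fresh(tf.zeros((1, 676, 2)))   # training-shaped input; builds the model
fresh(tf.zeros((1, 1, 2)))     # a single timestep also works

# After a save/load round trip, only the traced shape is accepted:
fresh.save('tmp_model')        # hypothetical path
loaded = tf.keras.models.load_model('tmp_model')
loaded(tf.zeros((1, 676, 2)))  # fine
loaded(tf.zeros((1, 1, 2)))    # raises a ValueError like the one above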