Subclassing keras.Model to create a custom autoregressive LSTM model with multi-column input

Hello. I posted this on SO a few weeks ago, but didn’t get any responses. I’m hoping some of you folks can help me.

I’m trying to create a model to forecast energy grid load (net electricity consumed by the grid) from weather data. In production we won’t have load data available for a standard batch prediction, so we’re trying an autoregressive approach: feed the model the last reported load reading plus the next 24 hours of forecasted weather data, and have it produce a 24-hour (or 72-hour, 168-hour, etc.) load prediction.
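To make the goal concrete, this is roughly the inference behavior I’m after (the shapes and names here are just illustrative, not working code):

# inputs: the last reported load reading plus the next 24 hours of
# forecasted weather; output: the predicted load for those 24 hours.
next_day_load = model(inputs)  # desired shape: (batch, 24, 1)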

I’m working from this tutorial, which recommends subclassing keras.Model to make step-by-step predictions. I believe the Model.fit documentation also recommends subclassing.

The above tutorial creates a subclass of keras.Model called FeedBack and overrides the call() method, which is invoked during both training and prediction.
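For context, the FeedBack class sets up the layers that call() relies on, roughly like this (a close paraphrase of the tutorial; I’ve pulled num_features into the constructor here, and it matches the input width because the model feeds each prediction back in as the next step’s input):

import tensorflow as tf

class FeedBack(tf.keras.Model):
  def __init__(self, units, out_steps, num_features):
    super().__init__()
    self.out_steps = out_steps
    self.units = units
    self.lstm_cell = tf.keras.layers.LSTMCell(units)
    # Wrapping the cell in an RNN simplifies the warmup pass below.
    self.lstm_rnn = tf.keras.layers.RNN(self.lstm_cell, return_state=True)
    # The dense layer maps LSTM output back to the input feature width,
    # so each prediction can serve as the next step's input.
    self.dense = tf.keras.layers.Dense(num_features)

  def warmup(self, inputs):
    # inputs.shape => (batch, time, features)
    x, *state = self.lstm_rnn(inputs)
    # prediction.shape => (batch, features)
    prediction = self.dense(x)
    return prediction, state

The overridden call() then uses warmup() for the first step and loops for the rest: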

def call(self, inputs, training=None):
  # Collect the unrolled predictions in a Python list.
  predictions = []
  # Initialize the LSTM state.
  prediction, state = self.warmup(inputs)

  # Insert the first prediction.
  predictions.append(prediction)

  # Run the rest of the prediction steps.
  for n in range(1, self.out_steps):
    # Use the last prediction as input.
    x = prediction
    # Execute one LSTM step.
    x, state = self.lstm_cell(x, states=state,
                              training=training)
    # Convert the LSTM output to a prediction.
    prediction = self.dense(x)
    # Add the prediction to the output.
    predictions.append(prediction)

  # predictions.shape => (time, batch, features)
  predictions = tf.stack(predictions)
  # predictions.shape => (batch, time, features)
  predictions = tf.transpose(predictions, [1, 0, 2])
  return predictions
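Here’s a quick shape check of that flow, with made-up sizes (batch of 32, a 2-step warmup window, 12 input columns, 24 output steps):

import tensorflow as tf

feedback_model = FeedBack(units=32, out_steps=24, num_features=12)
example_window = tf.zeros([32, 2, 12])       # (batch, time, features)
print(feedback_model(example_window).shape)  # => (32, 24, 12)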

When calling fit(), I pass in datasets for training and validation, both created via keras.utils.timeseries_dataset_from_array().

history = model.fit(dataset_train, epochs=epochs,
                    validation_data=dataset_val,
                    callbacks=[es_callback, modelckpt_callback])

My data is an hourly time series: 11 columns of weather features and 1 target column (load). I’m using a window size of two hours.
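For reference, the dataset construction follows the tutorial’s windowing pattern and looks something like this sketch (the array and variable names are mine, and the 2-hour input plus 24-hour label split is an assumption based on the description above):

input_width, label_width = 2, 24
total_window_size = input_width + label_width

def split_window(window):
  # window.shape => (batch, total_window_size, 12)
  inputs = window[:, :input_width, :]   # the two-hour warmup window
  labels = window[:, input_width:, :]   # the next 24 hours to predict
  return inputs, labels

dataset_train = tf.keras.utils.timeseries_dataset_from_array(
    data=train_array,                   # shape (num_hours, 12)
    targets=None,                       # labels are sliced from each window
    sequence_length=total_window_size,
    shuffle=True,
    batch_size=32).map(split_window)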

My issue is that the prediction calls in the for loop appear to use only previous predictions as input. I don’t understand how they could be accessing the training or validation datasets.

I tried poking around in the PyCharm debugger for a way to access the datasets, but didn’t find anything. I also looked for examples of similar subclassing, but that tutorial is the best I could find.

If a running example is needed, that tutorial goes through dataset creation and the subclass implementation. My hope is that someone can explain how to properly subclass keras.Model (in a similar fashion to that tutorial) to take multi-column input and make autoregressive predictions. The overriding of the call() method is where I’m most confused.