Time series forecasting

In this link, the author shows a tutorial for time series forecasting: Time series forecasting  |  TensorFlow Core. My questions are related to the normalization. The author normalized the validation data (val_df) and test data (test_df) by subtracting the train mean values and dividing by the train standard deviation (see below):
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
As far as I know, for MLPs, RNNs, and most other ANNs we need values between 0 and 1 or between -1 and 1. Why did the author use the training data statistics to normalize the validation and test data? We will not get values that meet those requirements.

Another, more important question: how can I access the labels of the test data to compare them with the predicted data? I only have access to the example inputs and example labels through wide_window.train/val/test. How can I access all labels from the test data? I would like to see further charts, not only the one produced by the plot function with normalized data. I would like to see real temperature data, for instance.

Hi @Cavour_Martinelli ,

The purpose of normalization is to bring your features onto a similar scale, which speeds up training and helps avoid numerical-stability issues. In supervised learning, including time series forecasting, it’s standard practice to normalize the validation and test data using the statistics (mean and standard deviation) of the training data. The training data represents the distribution the model will learn from, and the model should not have prior access to the statistical properties of the validation and test sets, since these are meant to simulate unseen, real-world data.

It’s important to use the training data’s statistics for normalization to avoid data leakage and ensure a fair evaluation of the model’s performance. If each set were normalized independently, the scales might differ, leading to misleading results during model evaluation.
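As a minimal sketch of what this looks like in practice (following the tutorial’s pattern of a chronological 70/20/10 split; the DataFrame contents below are made-up placeholders, not the tutorial’s weather data):

import numpy as np
import pandas as pd

# Placeholder data standing in for the tutorial's weather DataFrame.
df = pd.DataFrame({"T (degC)": 10 + 8 * np.random.randn(1000)})

# Split chronologically into train/validation/test.
n = len(df)
train_df = df[0:int(n * 0.7)]
val_df = df[int(n * 0.7):int(n * 0.9)]
test_df = df[int(n * 0.9):]

# Compute statistics on the training split only; applying them to all three
# splits keeps the scales consistent and avoids leaking information from the
# validation/test sets into preprocessing.
train_mean = train_df.mean()
train_std = train_df.std()

train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std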

While it’s common to normalize data to the range [0, 1] or [-1, 1], especially for certain types of neural networks (like CNNs working with image data), it’s not a strict requirement for all kinds of networks or data. Using the mean and standard deviation for normalization (also known as Z-score normalization) is another common approach. This method transforms the data into a distribution with a mean of 0 and a standard deviation of 1. It works well in many scenarios and is particularly common in time series forecasting.
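Regarding plots in real units: because Z-score normalization is a simple linear transform, it is invertible, so you can map normalized values (or normalized model predictions) back to the original units for plotting. Continuing the sketch above (predictions_norm is a hypothetical array of model outputs in normalized space; here the normalized test column just stands in for it):

import matplotlib.pyplot as plt

# Invert the Z-score transform to get back to physical units (degrees Celsius).
predictions_norm = test_df["T (degC)"].to_numpy()
predictions_degC = predictions_norm * train_std["T (degC)"] + train_mean["T (degC)"]

plt.plot(predictions_degC)
plt.ylabel("T (degC)")
plt.show()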
