LSTM for prediction

Hi folks,
I am using an LSTM for time series forecasting, and after hours of browsing my question is still not answered:

Is there a difference between using windowing vs not?

Question
I have collected data from a force sensor, more specifically the force Fz.
The readings were collected in various configurations 100 times. This results in 100 CSV files, each containing Fz sensor measurements as a time series of varying length.
I then concatenate all the CSV files into one big one, and thus obtain a (5350, 1) dataset.

Initially I turned the data into sequences using the sliding window method (see the function below), where I set TIMESTEPS=50.

import numpy as np

def to_sequence(data, timesteps=1):
    # data has shape (n_samples, n_features); here (5350, 1)
    n_features = data.shape[1]
    x = []
    y = []
    for i in range(len(data) - timesteps):
        # window of `timesteps` consecutive readings is the input
        _x = data[i:(i + timesteps)].reshape(timesteps, n_features)
        # the reading immediately after the window is the target
        _y = data[i + timesteps].reshape(n_features)
        x.append(_x)
        y.append(_y)

    return np.array(x), np.array(y)

After the transformation the data changes from (5350, 1) → (5300, 50, 1).
This makes sense: I have 5300 samples, each with 50 timesteps, and 1 feature.
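For illustration, the shape change can be checked on dummy data of the same size (random values here, just a stand-in for the real Fz readings):

```python
import numpy as np

# Same sliding-window transform as to_sequence above.
def to_sequence(data, timesteps=1):
    n_features = data.shape[1]
    x, y = [], []
    for i in range(len(data) - timesteps):
        x.append(data[i:(i + timesteps)].reshape(timesteps, n_features))
        y.append(data[i + timesteps].reshape(n_features))
    return np.array(x), np.array(y)

data = np.random.rand(5350, 1)   # stand-in for the concatenated Fz data
x, y = to_sequence(data, timesteps=50)
print(x.shape)  # (5300, 50, 1)
print(y.shape)  # (5300, 1)
```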

On the other hand, I am aware that LSTMs have memory cells and gates, and should be able to store memory. Would it be appropriate, instead of the sliding-window preprocessing, to simply reshape the data to (5350, 1, 1), feed it to the LSTM, and then make predictions?
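For reference, the no-window variant is just a reshape (a sketch on dummy data; note that with one timestep per sample, a stateless LSTM resets its hidden state between samples, so this shape only makes sense with a stateful model fed in order):

```python
import numpy as np

data = np.random.rand(5350, 1)   # stand-in for the concatenated Fz data

# (samples, timesteps, features) = (5350, 1, 1): one timestep per sample.
# Carrying memory across samples would require a stateful LSTM
# (e.g. stateful=True in Keras) with the samples presented in order.
data_no_window = data.reshape(-1, 1, 1)
print(data_no_window.shape)  # (5350, 1, 1)
```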

Hi @SimpleStudent ,

LSTMs are designed to capture long-term dependencies. If you choose windowing, the model might not capture dependencies beyond the window size. Conversely, with input of shape (5350, 1, 1) each sample contains only one timestep, so an ordinary (stateless) LSTM would reset its state between samples and see no history at all; that approach only works with a stateful LSTM that carries state across batches. Experimenting with both configurations will help determine the best approach for your particular use case.