I have the following trained TensorFlow time series classification model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Masking, LSTM, Dense

model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(90, 8)))
model.add(LSTM(100, return_sequences=True))
model.add(LSTM(70, return_sequences=True))
model.add(LSTM(70, return_sequences=False))
model.add(Dense(20, activation='relu'))
model.add(Dense(22, activation='softmax'))
- The input to the model has the shape: (Batch_size, 90, 8)
- 90 - time series length
- 8 - number of features
- The output of the model has 22 classes
I want to get the importance of the input features, so I'm using the following code:
import numpy as np

# layer 0 is the Masking layer (no weights); layer 1 is the first LSTM
first_layer_weights = model.layers[1].get_weights()[0]
feature_importance = np.abs(first_layer_weights).sum(axis=1)
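For reference, a Keras LSTM's input kernel has shape `(input_dim, 4 * units)`, because the kernels for the input, forget, cell, and output gates are concatenated along the second axis. A minimal NumPy sketch of what the code above computes, using random stand-in weights rather than the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for model.layers[1].get_weights()[0]: the first LSTM's
# input kernel, shape (input_dim, 4 * units) = (8, 400) here.
first_layer_weights = rng.normal(size=(8, 4 * 100))

# Summing |w| over axis=1 collapses the 400 gate columns,
# leaving one non-negative score per input feature.
feature_importance = np.abs(first_layer_weights).sum(axis=1)

print(feature_importance.shape)  # (8,) -- one value per feature
```

So the result is one aggregate magnitude per feature, summed over all gates and units of the first LSTM.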
Am I right that higher values in
feature_importance
mean the corresponding features are more important than the others?
I want to check if the model gives more attention (higher probability) to some classes.
I'm using this code to get the bias of the last layer (get_weights() on a Dense layer returns [kernel, bias], so index [1] selects the bias vector):
last_layer_weights = model.layers[-1].get_weights()[1]  # bias vector, shape (22,)
feature_importance = np.abs(last_layer_weights)
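To make the indexing concrete: a NumPy sketch with stand-in arrays whose shapes match the final Dense(22) layer above (these are random placeholders, not the trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for model.layers[-1].get_weights() on Dense(22):
# get_weights() returns [kernel, bias].
kernel = rng.normal(size=(20, 22))  # index [0]: shape (input_dim, units)
bias = rng.normal(size=(22,))       # index [1]: shape (units,)

last_layer_weights = bias           # what get_weights()[1] selects
class_bias_magnitude = np.abs(last_layer_weights)

print(class_bias_magnitude.shape)  # (22,) -- one value per class
```

So the code in the question is looking at per-class bias magnitudes, not at the (20, 22) weight matrix.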
Does my reasoning about the features also hold for the last layer: do higher values in
feature_importance
mean those entries are more important, and thus that some classes may be assigned higher probability by the model?
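On that last point, one caveat worth noting: the bias shifts each class's logit additively before the softmax, so, all else being equal, a class with a larger positive bias does get a higher probability. Taking the absolute value, however, conflates large positive and large negative biases, which push the probability in opposite directions. A small demonstration with made-up logits for three hypothetical classes:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

# Identical pre-bias logits for 3 hypothetical classes.
logits = np.zeros(3)

# A large *positive* bias raises a class's probability...
p_pos = softmax(logits + np.array([2.0, 0.0, 0.0]))
# ...while a large *negative* bias lowers it, even though
# np.abs() would score both biases as equally "important".
p_neg = softmax(logits + np.array([-2.0, 0.0, 0.0]))

print(p_pos[0] > 1 / 3)  # True: positive bias boosts class 0
print(p_neg[0] < 1 / 3)  # True: negative bias suppresses class 0
```

So if the goal is to see which classes the model tends to favor, the signed bias values are more informative than their absolute magnitudes.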