Hi, I’m a newbie.
INPUT: a simple sentence of 7 words.
Using tensorflow.keras.layers, I've created an embedding layer whose output shape is (None, 7, 64), as expected. I then encode the output of this layer, but the encoding layer's output shape is (None, 64), so the sequence-length dimension of 7 is lost. Both LSTM and Bidirectional encoders produce the same output shape. Has anyone experienced the same? Any insight or pointers toward fixing the problem would be appreciated.
@Don_Learner
It seems like the issue you’re facing is related to the output shape of the encoding layer when using LSTM or Bidirectional layers in TensorFlow’s Keras. The discrepancy in dimensions is likely due to the nature of these layers.
When you use an LSTM or Bidirectional LSTM layer in Keras, the default behavior is to return only the output of the last time step. This collapses the temporal dimension: instead of a sequence of 7 outputs per input, you get a single 64-dimensional vector, hence the (None, 64) shape.
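For illustration, here is a minimal sketch of that default behavior (the vocabulary size of 1000 is just a placeholder):

import tensorflow as tf
from tensorflow.keras.layers import Embedding, LSTM

x = tf.zeros((1, 7), dtype=tf.int32)             # a batch with one 7-word sentence
h = Embedding(input_dim=1000, output_dim=64)(x)  # shape: (1, 7, 64)
out = LSTM(64)(h)                                # return_sequences defaults to False
print(out.shape)                                 # (1, 64) -- the time axis is gone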
Solution:
If you want the output from each time step, set the return_sequences parameter to True when defining the LSTM or Bidirectional layer.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Bidirectional

vocab_size = 10000  # placeholder: set this to your actual vocabulary size

model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=64, input_length=7))
model.add(LSTM(64, return_sequences=True))  # set return_sequences to True
# or use a Bidirectional LSTM instead:
# model.add(Bidirectional(LSTM(64, return_sequences=True)))
By setting return_sequences=True, the LSTM or Bidirectional LSTM layer returns the output for every time step, so the shape becomes (None, 7, 64) as expected. (Note that the Bidirectional variant concatenates the forward and backward outputs, so its last dimension doubles, giving (None, 7, 128).) Make sure to adjust input_dim and any other parameters according to your specific model. This modification should address the issue you described.
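As a quick sanity check, you can run a dummy batch through the model (the zero tokens here are just placeholders):

import numpy as np

sample = np.zeros((1, 7), dtype="int32")  # one dummy 7-token sentence
print(model(sample).shape)                # (1, 7, 64) once return_sequences=True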
@BadarJaffer thank you so much for your informative response. Fixed.
@Don_Learner Awesome! I am glad I was able to help. Feel free to reach out if you need any help in the future.