LSTM implemented as a Model class (layers in a forward-propagation block)

I am looking for any documentation, tutorial, or article in which LSTMs are implemented with the TensorFlow framework, but where the layers live under a Model class, i.e. "implemented in a block as a forward-propagation function": the layers are defined and instantiated inside the model class. It would be even better if I could find a CNN-LSTM hybrid block, as I need more detail about the input dimensions and the other arguments of LSTM.

What I mostly found is this:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, MaxPooling1D, Flatten,
                                     TimeDistributed, Bidirectional, LSTM)

model = Sequential()
# input_shape belongs on the TimeDistributed wrapper, not the inner Conv1D
model.add(TimeDistributed(Conv1D(128, kernel_size=1, activation='relu'),
                          input_shape=(None, 50, 1)))
model.add(TimeDistributed(MaxPooling1D(2)))
model.add(TimeDistributed(Conv1D(256, kernel_size=1, activation='relu')))
model.add(TimeDistributed(MaxPooling1D(2)))
model.add(TimeDistributed(Conv1D(512, kernel_size=1, activation='relu')))
model.add(TimeDistributed(MaxPooling1D(2)))
model.add(TimeDistributed(Flatten()))
model.add(Bidirectional(LSTM(200, return_sequences=True)))
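In case it helps: this Sequential version expects input of shape (batch, timesteps, 50, 1), and I have been checking the per-layer output shapes with model.summary() (assuming the snippet above):

model.summary()  # should show TimeDistributed(Flatten) -> (None, None, 3072), BiLSTM -> (None, None, 400)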

**What I need is something like this, but for LSTMs (preferably an LSTM-CNN hybrid):**

import tensorflow as tf

class FixedHiddenMLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()
        # Random weight parameters created with tf.constant are not updated
        # during training (i.e., constant parameters)
        self.rand_weight = tf.constant(tf.random.uniform((20, 20)))
        self.dense = tf.keras.layers.Dense(20, activation=tf.nn.relu)

    def call(self, inputs):
        X = self.flatten(inputs)
        # Use the created constant parameters, as well as the `relu` and
        # `matmul` functions
        X = tf.nn.relu(tf.matmul(X, self.rand_weight) + 1)
        # Reuse the fully-connected layer. This is equivalent to sharing
        # parameters with two fully-connected layers
        X = self.dense(X)
        # Control flow
        while tf.reduce_sum(tf.math.abs(X)) > 1:
            X /= 2
        return tf.reduce_sum(X)
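
To make it fully concrete, here is my own rough, untested sketch of what I am after (the input shape (batch, timesteps, 50, 1) and the layer sizes are just placeholders copied from the Sequential version above):

import tensorflow as tf

class CNNLSTM(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Conv1D sees (steps, channels); TimeDistributed applies it to each
        # time step of a (batch, timesteps, steps, channels) input
        self.conv = tf.keras.layers.TimeDistributed(
            tf.keras.layers.Conv1D(128, kernel_size=1, activation='relu'))
        self.pool = tf.keras.layers.TimeDistributed(
            tf.keras.layers.MaxPooling1D(2))
        self.flatten = tf.keras.layers.TimeDistributed(
            tf.keras.layers.Flatten())
        # The LSTM then consumes a (batch, timesteps, features) sequence
        self.lstm = tf.keras.layers.Bidirectional(
            tf.keras.layers.LSTM(200, return_sequences=True))

    def call(self, inputs):
        # inputs assumed to be (batch, timesteps, 50, 1)
        x = self.conv(inputs)   # -> (batch, timesteps, 50, 128)
        x = self.pool(x)        # -> (batch, timesteps, 25, 128)
        x = self.flatten(x)     # -> (batch, timesteps, 3200)
        return self.lstm(x)     # -> (batch, timesteps, 400)

model = CNNLSTM()
print(model(tf.random.uniform((4, 10, 50, 1))).shape)  # (4, 10, 400)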

Any online documentation, tutorial, article, or GitHub implementation would be helpful. Thank you!