Auto-encoder returns flat line

I have the following network,

import tensorflow as tf
from tensorflow.keras import layers, Model

class AnomalyDetector(Model):
    def __init__(self):
        super(AnomalyDetector, self).__init__()
        # Encoder compresses each 1000-sample signal down to 250 units
        self.encoder = tf.keras.Sequential([
            layers.Dense(750, activation="relu"),
            layers.Dense(500, activation="relu"),
            layers.Dense(250, activation="relu")
            ])  # Smallest layer defined here

        # Decoder expands back up to the original 1000 samples
        self.decoder = tf.keras.Sequential([
            layers.Dense(500, activation="relu"),
            layers.Dense(750, activation="relu"),
            layers.Dense(1000, activation="sigmoid")])

    def call(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

autoencoder = AnomalyDetector()
autoencoder.compile(optimizer='adam', loss='mae')

It's trained on 10,000 input signals (each 1,000 samples long) that are all identical apart from random noise added to each one. The data is read in, normalized, and split into training and testing data as follows,

import numpy as np
import pandas as pd

# synth_files is the list of CSV file paths holding the synthetic signals
synth_data = []

for idx, s_file in enumerate(synth_files):

    x = np.array(pd.read_csv(s_file)).flatten()
    x[x < 0] = 0                     # clip negative samples to zero
    x = x / np.linalg.norm(x)        # normalise each signal to unit norm
    synth_data.append(x)

    if idx % 100 == 0:
        print(idx)

synth_data = np.array(synth_data)

# Hold out the last 10% of the signals for testing
normal_train_data = synth_data[:int(len(synth_files) - len(synth_files)/10)]
normal_test_data  = synth_data[int(len(synth_files) - len(synth_files)/10):]
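
The training call itself looks something like this (the epoch count and batch size below are placeholders, not the exact settings I used):

history = autoencoder.fit(
    normal_train_data, normal_train_data,   # input == target for an autoencoder
    epochs=20,                              # placeholder value
    batch_size=128,                         # placeholder value
    shuffle=True,
    validation_data=(normal_test_data, normal_test_data))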

But when I compare an input with its reconstruction, the autoencoder just returns a flat line, e.g.

(OK, so I'm not allowed to post images, which makes this a lot more complicated to explain. Why is that?)

So basically the input is a ramp up, a level section, then a ramp down, all with random noise added, whereas the reconstruction is just a flat line at y=0.
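
For reference, the comparison is produced roughly like this (plot details simplified since I can't attach the figure):

import matplotlib.pyplot as plt

# Run the test signals through the trained autoencoder
reconstructions = autoencoder.predict(normal_test_data)

# Overlay one input signal and its reconstruction
plt.plot(normal_test_data[0], label="input")
plt.plot(reconstructions[0], label="reconstruction")
plt.legend()
plt.show()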

What am I missing here?

Hi @DrBwts,

It sounds like your autoencoder is not learning to properly reconstruct the input signals.

I tried to replicate the code; kindly refer to the attached gist.

Thank you!