Hi, first of all, I am new here, so sorry if I make a mess with the tags and that kind of thing.
I am trying to create an autoencoder neural network in Python with TensorFlow/Keras. It is for reconstructing images of screws, and I am having trouble rebuilding the screw threads because they are very small details. I guess adding more conv layers could help, but I wanted to know if you have any tips for improving this kind of network. For example, should the latent space dimension of the encoder's dense layer be bigger or smaller than the dimension of the last conv feature map? A rough sketch of the kind of network I mean is below.
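To make this concrete, here is a minimal sketch of the architecture I am describing (the image size, number of filters, and `latent_dim` are placeholders, not my exact values):

```python
from tensorflow.keras import layers, models

img_shape = (128, 128, 1)  # grayscale screw images (placeholder size)
latent_dim = 64            # dense bottleneck size -- the dimension I am asking about

# Encoder: a few conv layers followed by a dense latent layer
encoder_input = layers.Input(shape=img_shape)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(encoder_input)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)  # last conv feature map: 16x16x128
x = layers.Flatten()(x)
latent = layers.Dense(latent_dim, activation="relu")(x)

# Decoder: roughly a mirror of the encoder
x = layers.Dense(16 * 16 * 128, activation="relu")(latent)
x = layers.Reshape((16, 16, 128))(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
decoded = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)

autoencoder = models.Model(encoder_input, decoded)
```

My question is whether `latent_dim` should be bigger or smaller than the flattened size of that last 16 × 16 × 128 conv feature map it is compressing.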
If it is helpful, I was using MSE loss and ReLU for the conv layers. However, I have read about Leaky ReLU, adversarial networks, etc.
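For reference, my training setup is basically the following (it reuses the `autoencoder` model from the sketch above; the optimizer, epochs, and the random placeholder data are just illustrative, not my real data):

```python
import numpy as np

# Placeholder data just so the snippet runs; in reality this is my set of screw images
x_train = np.random.rand(8, 128, 128, 1).astype("float32")

# Current setup: plain MSE reconstruction loss, ReLU activations in the conv layers
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=1, batch_size=4)

# Variant I have read about: conv layer with no activation, followed by LeakyReLU
# x = layers.Conv2D(32, 3, strides=2, padding="same")(x)
# x = layers.LeakyReLU(0.2)(x)
```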
Thanks