I am experimenting with autoencoders for a very specific application, but unfortunately cannot go into the specifics of what I am doing yet (fingers crossed I can do so after I make some progress).
Traditionally, an autoencoder is an encoder followed by a decoder. In my setup, shown below, I add a normalization step and a Gaussian noise step after the encoder, before the decoder. However, instead of adding Gaussian noise as in the example code, I want to add "non-random noise" for each sample. By "non-random noise", I mean augmenting the encoder output with a precomputed vector of values. So for each data sample, I want to add that sample's precomputed vector (the vector differs by sample, so it depends on the sample itself) to the encoding as noise.
I am not very familiar with deep learning frameworks, but is there any way to do this in Keras, and if so, how? If not, can I do it using TensorFlow instead? Here is the code I have started with:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Lambda, GaussianNoise
from tensorflow.keras import backend as K

def build_autoencoder(input_dim, encoded_dim, noise_std):
    autoencoder = Sequential()
    # Encoder layers
    autoencoder.add(Dense(encoded_dim, input_shape=(input_dim,), activation='relu'))
    autoencoder.add(Dense(encoded_dim, activation='linear'))
    # Normalization layer: L2-normalize each encoding
    autoencoder.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))
    # Gaussian noise
    # TODO: replace this with non-random noise for each sample
    autoencoder.add(GaussianNoise(noise_std))
    # Decoder layer
    autoencoder.add(Dense(input_dim, activation='sigmoid'))
    return autoencoder
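One approach I have been considering, in case it helps frame the question: switch from `Sequential` to the Keras functional API, pass the precomputed per-sample noise vector in as a second model input, and sum it with the encoding using an `Add` layer. This is only a sketch; the function name `build_autoencoder_with_noise_input` and the layer sizes are illustrative, and I am not sure this is the idiomatic way to do it.

```python
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Lambda, Add
from tensorflow.keras import backend as K

def build_autoencoder_with_noise_input(input_dim, encoded_dim):
    # Main input: the data sample
    x_in = Input(shape=(input_dim,))
    # Second input: the precomputed per-sample "non-random noise" vector
    noise_in = Input(shape=(encoded_dim,))
    # Encoder layers
    h = Dense(encoded_dim, activation='relu')(x_in)
    h = Dense(encoded_dim, activation='linear')(h)
    # Normalization: L2-normalize each encoding
    h = Lambda(lambda t: K.l2_normalize(t, axis=1))(h)
    # Add the precomputed noise vector to the normalized encoding
    h = Add()([h, noise_in])
    # Decoder layer
    out = Dense(input_dim, activation='sigmoid')(h)
    return Model(inputs=[x_in, noise_in], outputs=out)
```

Training would then pass both arrays, e.g. `model.fit([X, noise_vectors], X, ...)`, where `noise_vectors` has one row per sample in `X`.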