
I wrote an implementation of a feedback recurrent autoencoder in Keras. The key difference to a regular autoencoder is that the decoded output is fed back to the input layers of both the encoder and the decoder, where it is merged with the current new input of the autoencoder. Currently, my code looks like this

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    class Autoencoder(Model):
        def __init__(self, latent_dim, shape):
            super(Autoencoder, self).__init__()
            self.latent_dim = latent_dim
            self.shape = shape
            self.ht = 2
            # NumPy buffer holding the last ht decoded frames
            self.buffer = np.repeat(np.zeros(shape, dtype=np.float32), repeats=self.ht, axis=1)
            self.encoder = tf.keras.Sequential([
                layers.Flatten(),
                layers.Dense(np.round(shape[0] * shape[1] * 1.5), activation='swish'),
                layers.Dense(latent_dim, activation='swish'),
            ])
            self.decoder = tf.keras.Sequential([
                layers.Dense(np.round(shape[0] * shape[1] * 1.5), activation='swish'),
                layers.Dense(tf.math.reduce_prod(shape), activation='sigmoid'),
                layers.Reshape(shape)
            ])

        def update_buffer(self, new_element):
            # shift the history one frame to the right, write the new frame in front
            n = self.shape[1]
            self.buffer[:, n:] = self.buffer[:, :-n]
            self.buffer[:, :n] = new_element

        def call(self, x):
            decoded = [None] * x.shape[0]
            for i in range(0, x.shape[0]):
                buffer_tensor = tf.convert_to_tensor(self.buffer)
                # feed the buffered output back into the encoder input
                xin = tf.expand_dims(tf.concat((x[i, :, :], buffer_tensor), axis=1), axis=0)
                encoded = self.encoder(xin)
                buffer_reshaped = tf.reshape(buffer_tensor, [1, self.buffer.shape[0] * self.buffer.shape[1]])
                # feed the buffered output back into the decoder input as well
                decin = tf.concat([encoded, buffer_reshaped], axis=1)
                y = self.decoder(decin)
                decoded[i] = y
                self.update_buffer(tf.squeeze(y))
            out = tf.squeeze(tf.convert_to_tensor(decoded))
            return out

and it runs fine in eager execution mode. However, because training is pretty slow, I'd like to see the performance without eager execution. Unfortunately, training then fails immediately because I use NumPy function calls: I receive the error

 NotImplementedError: Cannot convert a symbolic tf.Tensor (autoencoder_36/Squeeze:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported. 

due to the line self.update_buffer(tf.squeeze(y)) and the corresponding update_buffer method. How do I implement a buffer like mine in TensorFlow so that I can avoid this error? The key issue is the assignment

self.buffer[:, :n] = new_element 

which I cannot easily avoid, as far as my knowledge of Keras goes.
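For reference, the intended FIFO semantics of the buffer can be sketched in plain NumPy, independent of the model (a minimal sketch with made-up sizes: 3 rows, n = 2 columns per frame, a history of 2 frames):

```python
import numpy as np

rows, n, ht = 3, 2, 2  # hypothetical sizes, not taken from the question
buffer = np.zeros((rows, n * ht), dtype=np.float32)

def update_buffer(buffer, new_element):
    # Shift the existing history one frame (n columns) to the right,
    # then write the newest frame into the leftmost n columns.
    buffer[:, n:] = buffer[:, :-n]
    buffer[:, :n] = new_element

update_buffer(buffer, np.ones((rows, n), dtype=np.float32))
update_buffer(buffer, np.full((rows, n), 2.0, dtype=np.float32))
print(buffer[0])  # [2. 2. 1. 1.] -- newest frame in front, previous frame behind it
```

This is exactly the in-place slice assignment that breaks once `buffer` receives a symbolic tensor instead of a NumPy-convertible value.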


1 Answer


I asked ChatGPT and, to my surprise (given previous experience), it solved my question correctly, as far as I can tell at the moment, though I am open to comments. Apparently this works:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    class Autoencoder(Model):
        def __init__(self, latent_dim, shape):
            super(Autoencoder, self).__init__()
            self.latent_dim = latent_dim
            self.shape = shape
            self.ht = 2
            # tf.Variable instead of a NumPy array, so the buffer can be
            # mutated inside the TensorFlow graph
            self.buffer = tf.Variable(
                initial_value=tf.zeros(shape=(shape[0], shape[1] * self.ht), dtype=tf.float32))
            self.encoder = tf.keras.Sequential([
                layers.Flatten(),
                layers.Dense(int(shape[0] * shape[1] * 1.5), activation='swish'),
                layers.Dense(latent_dim, activation='swish'),
            ])
            self.decoder = tf.keras.Sequential([
                layers.Dense(int(shape[0] * shape[1] * 1.5), activation='swish'),
                layers.Dense(tf.math.reduce_prod(shape), activation='sigmoid'),
                layers.Reshape(shape)
            ])

        def update_buffer(self, new_element):
            n = self.shape[1]
            # sliced .assign replaces the NumPy slice assignment
            self.buffer[:, n:].assign(self.buffer[:, :-n])
            self.buffer[:, :n].assign(new_element)

        def call(self, x):
            decoded = []
            for i in range(x.shape[0]):
                buffer_tensor = self.buffer
                xin = tf.expand_dims(tf.concat((x[i, :, :], buffer_tensor), axis=1), axis=0)
                encoded = self.encoder(xin)
                buffer_reshaped = tf.reshape(buffer_tensor, [1, self.buffer.shape[0] * self.buffer.shape[1]])
                decin = tf.concat([encoded, buffer_reshaped], axis=1)
                y = self.decoder(decin)
                decoded.append(y)
                self.update_buffer(tf.squeeze(y))
            out = tf.squeeze(tf.convert_to_tensor(decoded))
            return out

I had tried something similar, but apparently got the assignment in the update_buffer method wrong!
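The key point, isolated from the model, is that a tf.Variable supports sliced assignment via .assign on a slice, and this also works under tf.function where plain NumPy slice assignment would raise the NotImplementedError above (a minimal sketch with made-up sizes, not taken from the model):

```python
import tensorflow as tf

n = 2  # hypothetical frame width; buffer holds a history of 2 frames
buffer = tf.Variable(tf.zeros((3, 2 * n), dtype=tf.float32))

@tf.function  # graph mode: buffer values here are symbolic tensors
def update_buffer(new_element):
    # Sliced .assign mutates the variable in place inside the graph,
    # instead of trying to convert a symbolic tensor to a NumPy array.
    buffer[:, n:].assign(buffer[:, :-n])
    buffer[:, :n].assign(new_element)

update_buffer(tf.ones((3, n)))
update_buffer(tf.fill((3, n), 2.0))
print(buffer.numpy()[0])  # [2. 2. 1. 1.]
```

tf.function adds automatic control dependencies between the two assigns, so the shift is guaranteed to read the old history before the new frame is written in front.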
