Questions tagged [autoencoder]
Autoencoders are a type of neural network that learns a useful encoding for data in an unsupervised manner.
322 questions
4 votes
1 answer
35 views
Possible to Improve Reconstruction Quality and Accuracy with VAE?
I am training a VAE architecture on microscopy images: a dataset of 1000 training images and 253 testing images. Images are resized to 128x128 or 256x256 input from the original resolution, which is ...
7 votes
3 answers
132 views
Misleading representation for autoencoder
I might be mistaken, but based on my current understanding, autoencoders typically consist of two components: an encoder $f_{\theta}(x) = z$ and a decoder $g_\phi(z)=\hat{x}$. The goal during training is to ...
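For reference, the two components as defined in the excerpt can be sketched with a toy linear encoder/decoder in numpy (the weights and dimensions here are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative): 8-dim input, 3-dim latent code.
d_in, d_latent = 8, 3
W_enc = rng.normal(size=(d_latent, d_in))  # encoder parameters (theta)
W_dec = rng.normal(size=(d_in, d_latent))  # decoder parameters (phi)

def f_theta(x):
    """Encoder f_theta(x) = z."""
    return W_enc @ x

def g_phi(z):
    """Decoder g_phi(z) = x_hat."""
    return W_dec @ z

x = rng.normal(size=d_in)
z = f_theta(x)                     # latent code
x_hat = g_phi(z)                   # reconstruction
loss = np.mean((x - x_hat) ** 2)   # typical reconstruction objective
print(z.shape, x_hat.shape, loss)
```

A real autoencoder would use nonlinear layers and train $\theta, \phi$ by gradient descent on the reconstruction loss; the sketch only shows the composition $g_\phi(f_\theta(x)) \approx x$.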
0 votes
0 answers
22 views
Predicting pregnancy codes with transformer
I'm trying to predict pregnancy codes with a basic transformer model architecture. These pregnancy codes run from prg001, prg002 up to prg030: prg001 would be antenatal screening and prg030 ...
0 votes
1 answer
49 views
Why does my reconstructed image appear darker than the original?
I am trying to build an autoencoder that encodes the image into a latent space and then decodes it back to the original image without any changes. I am mainly trying to implement this paper, Universal ...
1 vote
0 answers
255 views
Can you use the Euclidean Distance as a loss function?
While building an auto-encoder that preserves distances, I accidentally used the Euclidean norm as the loss for the difference between the x and z distances that I'm trying to minimize. (I hope you can ...
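A minimal sketch of the setup this question describes (assumed, not taken from the asker's code): a distance-preserving penalty compares pairwise Euclidean distances in input space x against those in latent space z:

```python
import numpy as np

def pairwise_dists(A):
    """Euclidean distances between all rows of A."""
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def distance_preservation_loss(x, z):
    """Mean squared difference between x-space and z-space pairwise distances."""
    return np.mean((pairwise_dists(x) - pairwise_dists(z)) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))  # 5 samples in input space
z = rng.normal(size=(5, 2))  # their latent codes
print(distance_preservation_loss(x, z))  # non-negative scalar
```

Whether to penalize the squared difference (as here) or its absolute value is exactly the kind of choice the question is asking about; both are valid losses with different gradient behavior near zero.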
2 votes
1 answer
125 views
Custom loss function in python
I am trying to implement a custom loss function inspired by https://arxiv.org/pdf/2305.10464.pdf. That is: $L(\mathbf{x}) = (1-y)\,\lVert \mathbf{x}_{\text{true}} - \mathbf{x}_{\text{pred}} \rVert^2 + y \...
2 votes
2 answers
426 views
Autoencoders are fitting anomalies too well
I have a set of ~ 5000 greyscale images with resolution of 64x128. I want to do an unsupervised anomaly detection. As a first try, I chose convolutional autoencoders (AE) and trained an AE model. I ...
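The standard recipe behind this kind of AE-based anomaly detection (sketched here with random stand-in data, not the asker's model) is to score each image by its reconstruction error and flag the largest errors:

```python
import numpy as np

def anomaly_scores(x, x_hat):
    """Per-sample reconstruction error (MSE over all pixels)."""
    return ((x - x_hat) ** 2).reshape(len(x), -1).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.random((100, 64, 128))             # stand-in for the greyscale images
x_hat = x + rng.normal(0, 0.01, x.shape)   # stand-in for AE reconstructions
scores = anomaly_scores(x, x_hat)

# Flag the top 5% of reconstruction errors as anomalies.
threshold = np.quantile(scores, 0.95)
flags = scores > threshold
print(flags.sum())
```

The failure mode the title describes is that a sufficiently expressive AE reconstructs anomalies just as well as normal data, collapsing the score gap; common mitigations include reducing model capacity or the latent dimension.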
0 votes
0 answers
60 views
Losing information while resizing the image in a segmentation task using U-Net
I'm using the U-Net architecture for an image segmentation task. During training I have images of size 256*256. It works very well on segmentation at the same size 256*256 or near to size 256*...
0 votes
1 answer
38 views
Why are some columns of the feature matrix zero after dimensionality reduction?
I am trying to implement a paper in which the ultimate goal is to predict multiple labels for instances (which are genes here). The feature matrix, with a shape of 1236*18930, is built by calculating term ...
0 votes
0 answers
65 views
Trying to train a denoising autoencoder to restore missing information from a binary image
I am building a denoising autoencoder to repaint lanes from a binary image. The input is a binary image that has incomplete lanes, due to vehicles getting in the way. I repaint the lanes manually so ...
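A denoising setup like the one described can be sketched as follows (assumed, not the asker's code): corrupt complete binary lane images to simulate occlusion, and train the autoencoder on (corrupted, clean) pairs:

```python
import numpy as np

def corrupt(lanes, drop_frac, rng):
    """Simulate occlusion by randomly zeroing out a fraction of the pixels."""
    keep_mask = rng.random(lanes.shape) > drop_frac
    return lanes * keep_mask

rng = np.random.default_rng(0)
clean = (rng.random((4, 32, 32)) > 0.9).astype(float)  # sparse binary "lanes"
noisy = corrupt(clean, drop_frac=0.3, rng=rng)

# Training pairs for the denoising autoencoder: input = noisy, target = clean.
print(clean.sum(), noisy.sum())
```

Synthesizing the corruption this way avoids repainting lanes by hand, at the cost of the corruption model not matching real vehicle occlusions exactly.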
0 votes
1 answer
33 views
How to improve the influence of one element of the input on the latent code in an autoencoder?
I am trying to apply an autoencoder for feature extraction with the input like I=[x1,x2,x3,...,xn]. Representing the latent code after encoding as L, I want to improve the influence of one element of ...
1 vote
1 answer
105 views
Not understanding how to evaluate a VAE model?
As I understand it, a VAE is a model for learning P(x) of x (for a final job like image generation). When I train it, it takes x from the dataset, gets mu and var from the encoder, and draws a sample z from mu ...
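The sampling step the question refers to is usually implemented with the reparameterization trick; a minimal numpy sketch (shapes and values illustrative, not from the question):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    In a real framework this keeps z differentiable w.r.t. the encoder
    outputs mu and log_var, which plain sampling would not."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros(4)       # encoder mean output (illustrative)
log_var = np.zeros(4)  # encoder log-variance output (illustrative)
z = reparameterize(mu, log_var, rng)
print(z.shape)
```

For evaluation, a common practice is to report the reconstruction loss plus the KL term (the negative ELBO) on held-out data, rather than P(x) directly, since the exact marginal likelihood is intractable.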
1 vote
1 answer
154 views
Unsupervised Machine Translation System Using Variational Autoencoder Models
I want to work on an unsupervised machine translation system using a variational autoencoder. I did a literature review but didn't find any related work, and most of the work is based on denoising ...
1 vote
1 answer
464 views
numpy in call method: how to run without eager execution?
I wrote an implementation of a feedback recurrent autoencoder in Keras. The key difference from a regular autoencoder is that the decoded output is fed back to the input layers of both the encoder and ...