
Questions tagged [vae]

0 votes
0 answers
36 views

I have a dataset of human vocalisations during development, separated into syllables. I am using a VAE and I obtain a feature vector for each datapoint from the latent space. Does it make sense to use ...
user176315
1 vote
1 answer
39 views

I'm applying VAEs to sections of genomic data (haplotype VCF format, so binary variables), with one model trained on each section. They each have different layer sizes and weights to better fit ...
Whitehot
1 vote
1 answer
105 views

As I understand it, a VAE is a model for learning P(x) over the data x (for final tasks like image generation). During training, it feeds x from the dataset through the encoder to get mu and var, and draws a sample z from mu ...
KEIFTH YANG
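The sampling step this question describes (drawing z from the encoder's mu and var) is usually implemented with the reparameterization trick. A minimal sketch in plain Python, assuming a single latent dimension and that `logvar` is the log of the variance (a common convention, not stated in the post):

```python
import math
import random

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1).

    Writing the sample this way keeps mu and logvar outside the random
    draw, so in an autodiff framework gradients can flow through them.
    """
    std = math.exp(0.5 * logvar)   # sigma = exp(logvar / 2)
    eps = random.gauss(0.0, 1.0)   # noise from the standard normal
    return mu + std * eps
```

With `logvar` very negative, sigma is near zero and the sample collapses to `mu`, which is a handy sanity check.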
1 vote
1 answer
862 views

I'm implementing a VQ-VAE for an LDM for biological time-series data. I trained the VQ-VAE, and the reconstructions work somewhat reasonably, but I have an understanding problem with how a VQ-VAE works. ...
Jackilion
0 votes
1 answer
165 views

I'm really a novice working with these technologies and I'm struggling to design a neural network that is powerful enough to model a spectrogram. For a personal project, I'm working on a spectrogram ...
BOBONA
4 votes
2 answers
947 views

I want to know about how variational autoencoders work. I am currently working in a company and we want to incorporate variational autoencoders for creating synthetic data. I have questions regarding ...
NevMthw
0 votes
1 answer
582 views

I am getting confused about the testing dataset of a VAE. After training the VAE, what should the testing dataset of the VAE be? I understand that during testing the VAE only has the decoder part. Hence, ...
Formal_this
1 vote
1 answer
132 views

Posterior collapse means the variational distribution collapses toward the prior: $\exists i \text{ s.t. } \forall x: q_{\phi}(z_i \mid x) \approx p(z_i)$, so $z_i$ becomes independent of $x$. We would like to avoid it ...
JXuan
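The collapse condition above can be monitored per latent dimension: for a Gaussian encoder and a standard normal prior, the KL term has a closed form, and a dimension whose KL stays near zero over the dataset has collapsed. A minimal sketch, assuming `logvar` is the log-variance of $q_\phi(z_i \mid x)$:

```python
import math

def kl_per_dim(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, 1) ) for one latent dimension:
    #   0.5 * (mu^2 + sigma^2 - log(sigma^2) - 1)
    # This is exactly zero when mu = 0 and sigma = 1, i.e. when the
    # posterior for that dimension matches the prior (collapse).
    return 0.5 * (mu ** 2 + math.exp(logvar) - logvar - 1.0)
```

Averaging this over a batch and flagging dimensions below a small threshold is one simple collapse diagnostic.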
0 votes
0 answers
101 views

I have a convolutional VAE architecture where the target images have pixel values in the range [0, 1]. To synthesize/reconstruct images on this scale, I am using a sigmoid activation function in ...
Arun
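A sigmoid output layer for targets in [0, 1] is typically paired with a binary cross-entropy reconstruction term. A minimal per-pixel sketch in plain Python (the `eps` clamp is an assumption added here for numerical safety, not something from the post):

```python
import math

def sigmoid(x):
    # Squashes a raw logit into (0, 1), matching [0, 1]-scaled pixels.
    return 1.0 / (1.0 + math.exp(-x))

def bce(target, logit):
    # Per-pixel binary cross-entropy against a sigmoid output.
    # eps guards against log(0) when the sigmoid saturates.
    p = sigmoid(logit)
    eps = 1e-12
    return -(target * math.log(p + eps) + (1.0 - target) * math.log(1.0 - p + eps))
```

Summing (or averaging) this over pixels gives the reconstruction half of the usual VAE objective.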
1 vote
0 answers
24 views

I've noticed all semi-supervised VAEs assume discrete (categorical) labels to encourage disentangled representation learning in VAEs. e.g., Kingma, Durk P., et al. "Semi-supervised learning with ...
MerelyLearning
1 vote
1 answer
3k views

I am training a VAE on CelebA-HQ (resized to 256x256). The training is going well: the reconstruction loss is decreasing and reconstructions are also meaningful. But the problem is with the KL divergence ...
RajaParikshat
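One common response when the KL term misbehaves while reconstructions look fine is to weight it with a warm-up schedule (KL annealing). A minimal sketch, where `warmup_steps` and the linear schedule are assumptions for illustration, not from the post:

```python
def beta_schedule(step, warmup_steps=10000):
    # Linear KL warm-up: the weight grows from 0 to 1 over
    # warmup_steps, then stays at 1.
    return min(1.0, step / warmup_steps)

def vae_loss(recon_loss, kl_loss, step):
    # Total objective: reconstruction term plus the annealed KL term.
    return recon_loss + beta_schedule(step) * kl_loss
```

Early in training the model optimizes mostly reconstruction; the KL pressure is phased in gradually, which often stabilizes its trajectory.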
0 votes
1 answer
140 views

I know that autoencoders can be used to generate new data. From what I could understand, the autoencoder uses the original distribution X to learn a Gaussian distribution described by mean and ...
Sparsh Garg
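The generation step being asked about reduces to sampling latent codes from the prior and decoding them. A minimal sketch, where `decoder` is a hypothetical trained callable (not anything from the post):

```python
import random

def generate(decoder, latent_dim, n_samples):
    # New data from a trained VAE: draw z ~ N(0, I) from the prior
    # and map each draw through the (hypothetical) decoder.
    samples = []
    for _ in range(n_samples):
        z = [random.gauss(0.0, 1.0) for _ in range(latent_dim)]
        samples.append(decoder(z))
    return samples
```

Any callable mapping a latent vector to an output can stand in for `decoder` when experimenting with this loop.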
1 vote
0 answers
99 views

I'm searching for a way to compare the mu and sigma values output by the encoder network of a variational autoencoder. In detail, imagine I trained my VAE on the MNIST digits dataset using the official ...
BlackCode
1 vote
0 answers
130 views

From this post we can read that VAEs encode inputs as distributions instead of simple points. What does that mean concretely? If the encoder consists of the weights between the input image and the ...
RandomFellow
1 vote
1 answer
289 views

I followed this Keras documentation guide about autoencoders. At the end of the documentation there is a graph of the latent variable z, but I cannot understand how to interpret the plot, ...
Turned Capacitor
