Talk:Variational autoencoder
Clearly, the latter makes sense, since learning <math>\theta</math> through the probabilistic decoder, as the generative model for the likelihood <math>p_\theta(x\mid z)</math>, is the very goal.
So is there a deeper meaning or sense in parametrizing the prior as <math>p_\theta(z)</math> as well, with the very same parameters <math>\theta</math> as the likelihood, or is it in fact a typo/mistake? <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/46.223.162.38|46.223.162.38]] ([[User talk:46.223.162.38#top|talk]]) 22:11, 11 October 2021 (UTC)</small> <!--Autosigned by SineBot-->
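For reference, the objective under discussion is the evidence lower bound; here is a minimal sketch in the notation above, assuming the common choice of a fixed standard-normal prior, so that the prior carries no learnable parameters:

<math display="block">\mathcal{L}(\theta,\phi;x) \;=\; \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right] \;-\; D_{\mathrm{KL}}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right), \qquad p(z)=\mathcal{N}(0,I).</math>

Under that convention, writing the prior as <math>p_\theta(z)</math> is redundant but not wrong; formulations with a learnable prior do exist, and in those the shared <math>\theta</math> is deliberate rather than a typo.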
 
== The image shows just a normal encoder ==
 
There is an image with a caption saying it is a variational autoencoder, but it is showing just a plain autoencoder.
 
In a different section, there is something described as a "trick", which seems to be the central point that distinguishes autoencoders from variational autoencoders.
 
I'm not sure whether that image should just be removed, or whether it makes sense in the section anyway. [[User:Volker Siegel|Volker Siegel]] ([[User talk:Volker Siegel|talk]]) 14:18, 24 January 2022 (UTC)
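The "trick" referred to above is presumably the article's reparameterization trick. Below is a minimal sketch of the sampling step that separates the two architectures, assuming PyTorch and purely illustrative layer sizes and names (none of them come from the article):

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class VAEBottleneck(nn.Module):
    """Latent sampling step that a plain autoencoder does not have."""

    def __init__(self, hidden_dim: int = 256, latent_dim: int = 32):
        super().__init__()
        self.to_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z | x)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z | x)

    def forward(self, h: torch.Tensor):
        mu = self.to_mu(h)
        logvar = self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)  # noise drawn outside the gradient path
        z = mu + eps * std           # reparameterization: z ~ N(mu, std^2)
        return z, mu, logvar
</syntaxhighlight>

A plain autoencoder's bottleneck maps the hidden activation deterministically to a single code vector, so a figure showing only encoder, code and decoder, with no mean, variance and sampling step, does indeed depict an ordinary autoencoder.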