does this crap blow anyone else's mind? imo bayesian neural networks are the coolest thing since sliced bread.
>Having recovered the latent manifold and assigned it a coordinate system, it becomes trivial to walk from one point to another along the manifold, creatively generating realistic digits all the while
http://blog.fastforwardlabs.com/post/149329060653/under-the-hood-of-the-variational-autoencoder-in
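the "walk" is literally just interpolating between two latent codes and decoding each step. a minimal sketch (the decoder here is a hypothetical stand-in for a trained VAE decoder, not anything from the post):

import numpy as np

# hypothetical: a trained decoder mapping a latent code z in R^20 to a digit image
# decoder = load_trained_decoder()

z_a = np.random.randn(20)            # latent code of one digit
z_b = np.random.randn(20)            # latent code of another
for t in np.linspace(0.0, 1.0, num=10):
    z = (1.0 - t) * z_a + t * z_b    # step along the line between the two codes
    # img = decoder(z)               # each step decodes to a plausible digit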
no, but im mentally handicapped so i dont rly understand it tbqh
What is this?
>>8455247
it's the process of using neural networks to uncover the latent manifold on which some collection of data lies. Once you have said manifold you can do things like hop around on it and sample fake data that closely resembles real data.
for example, here are some generated images of bedrooms from a bedroom manifold.
and here's a paper that improves superresolution techniques by projecting an enhanced image onto a manifold of natural images:
https://arxiv.org/abs/1609.04802
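the "sample fake data" part is equally simple once the net is trained: draw a random latent vector and push it through the generator. sketch only, with generator a hypothetical stand-in for a net trained on bedroom photos:

import numpy as np

# hypothetical: a trained generator mapping R^100 -> bedroom-image pixel space
# generator = load_trained_generator()

for _ in range(5):
    z = np.random.randn(100)     # a random point in the latent space
    # img = generator(z)         # decodes to a novel but plausible bedroom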
>>8455255
This isn't new though, dimension reduction via manifolds has been around for more than a decade. Isomap/SE are well studied and grounded in theory rather than being black boxes, too.
>>8455349
It's never worked this well before to my knowledge.
>>8454950
faking images has never been so easy
>>8455255
dont you have this pic in lower resolution m8 I can still see parts of your point..
>>8455255
What. How does knowledge that it's a manifold help? Manifold isn't some magic word, lol, it's just a type of topological space.
>>8455826
By manifold they just mean some arguably continuous subset of pixel space, which is just [math]\mathbb{R}^n[/math] for some large n depending on the width and height of your images. The neural network parameterizes this subset via joint optimization of a generator network and a discriminator network over your dataset, so that crap like random noise is not on it but nice pictures of bedrooms are.
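to make the dimensions concrete, here's a rough sketch (architecture made up for illustration, real DCGANs use conv layers rather than dense ones):

import torch
import torch.nn as nn

n = 64 * 64 * 3   # pixel-space dimension for 64x64 RGB images
k = 100           # latent dimension, much smaller than n

# the generator is a map R^k -> R^n; its image is the "manifold" in question
G = nn.Sequential(nn.Linear(k, 256), nn.ReLU(), nn.Linear(256, n), nn.Tanh())

z = torch.randn(8, k)   # 8 random latent codes
x = G(z)                # 8 points in pixel space, shape (8, 12288)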
>>8455846
>The neural network parameterizes this space via optimization of a generative network and a discriminator network over your dataset so that crap like random noise is not on it
How does it do it?
>>8455882
By having the generative and discriminator networks compete.
Very roughly (because I don't fully understand it): the generator produces some image, and the discriminator is fed a mix of these generated images and real images from the dataset and has to decide which are real and which are fake. Fooling the discriminator propagates a positive reward signal through the generator and a negative one through the discriminator; correctly classifying generated images does the opposite, rewarding the discriminator and penalizing the generator.
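here's a toy version of that loop in PyTorch so you can see where the signals go (2D points on a circle standing in for "real images", sizes made up):

import math
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))                # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())  # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    theta = torch.rand(64, 1) * 2 * math.pi
    real = torch.cat([theta.cos(), theta.sin()], dim=1)   # "real data": points on a circle
    fake = G(torch.randn(64, 2))                          # generator's attempts

    # discriminator is rewarded for calling real real and fake fake
    opt_D.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # generator is rewarded for fooling the discriminator
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))   # pretend the fakes are real
    g_loss.backward()
    opt_G.step()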
That's for DCGANs. The other common method (the VAE) I'm not sure about, but you can read about it here and in other places:
https://openai.com/blog/generative-models/
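for completeness, a minimal sketch of the VAE objective as I understand it (architecture made up, MNIST-ish sizes assumed):

import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(784, 40)                       # outputs mean and log-variance, 20 numbers each
dec = nn.Sequential(nn.Linear(20, 784), nn.Sigmoid())

x = torch.rand(64, 784)                        # stand-in batch of flattened images
mu, logvar = enc(x).chunk(2, dim=1)            # approximate posterior q(z|x)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
recon = dec(z)

# loss = reconstruction error + KL term pulling q(z|x) toward the prior N(0, I)
recon_loss = F.binary_cross_entropy(recon, x, reduction='sum')
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon_loss + kl
loss.backward()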
>>8454950
Looks interesting. How much CS is one supposed to know to implement/understand this?
>>8455923
Very little computer science. Lots of probability and statistics and the ability to write Python programs.
>>8455894
Yeah, thank you. I think it's the posterior probability being used here:
https://en.wikipedia.org/wiki/Posterior_probability
Although I myself don't fully understand wtf it's supposed to mean (I know the usual Bayes formula ofc) and the explanations I've googled are just fucked up.
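As far as I can tell, in this context it's just Bayes' rule applied to the latent code z given an image x:
[eqn]p(z \mid x) = \frac{p(x \mid z)\, p(z)}{p(x)}[/eqn]
The catch is apparently that the normalizer [math]p(x) = \int p(x \mid z)\, p(z)\, dz[/math] is an intractable integral, which is why the VAE learns an approximate posterior [math]q(z \mid x)[/math] instead of computing the real one.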
>>8455928
damn, I don't really like probability ):
>>8454950
idk who posted pic related originally but it is possibly the best post on 4chan ever
>>8455531
This result was shown in a few papers circa '08, with pictures of hands. In a 2D latent space, moving in a given direction corresponded to either the orientation of the hand or how open/closed it was.
>>8454950
autoencoders are nothing new famalam
>>8456240
>If it's not new it's not good
>>8456240
Vanilla autoencoders don't really work. It's the new Bayesian spin on them that gets the whole enterprise rolling again.
>>8456426
>implying Bayesian autoencoders are any better than convolutional autoencoders