>Deep Dreams
How do I go about generating these on my computer?
Is there some sort of open-source toolkit for this?
Reason I'm asking is that the Google demo (https://deepdreamgenerator.com) is very limited in resolution - vid related
Hey, how about this.
While we wait for somebody knowledgeable about the subject to pop up, I'll take any image you submit and blend it with Emma's portrait. Deal?
Here's one I generated using my defrag map
Here's one generated from the SpaceMonger map of the same drive
>>57635088
my fucking lord she's so perfect
This one was generated using the thumbnails in Windows of all the other "Deep Dreams" I did
>>57635598
The original pic is a screenshot from the Beauty and the Beast trailer in case you're wondering
Here's one I generated using a random "Interesting patterns" image from Google images (fuck stock photo sites, BTW)
Generated this one using the /g/ catalog, unzoomed
Did this one using my task manager CPU mon
Did this one using this >>57635417
Very similar, but not identical
>>57635088
Nice webm, oddly interesting.
>>57636291
Thanks. I'll keep making it longer as time goes on
Just did this one from one random image from my random folder
This one, from some random image out there.
>>57635088
https://www.reddit.com/r/deepdream/comments/3dur8j/running_deep_dream_on_windows_the_easy_way_with/
You can run it on Windows, but doing it on the CPU takes fucking forever and eats a lot of RAM. If you have a compatible GPU (Nvidia only, I think, unless you somehow get CUDA working on AMD) you can cut the rendering time by like 98% or something ridiculous like that (last time I looked this up was July last year, so things might be different now).
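Not the actual setup from that guide (which used Caffe), but here's roughly what the loop looks like if you sketch it in PyTorch instead -- the layer cutoff, step size, and iteration count are just guesses I'd tweak, and I'm skipping ImageNet normalization for brevity:

import torch
import torchvision.transforms as T
from torchvision import models
from PIL import Image

# GPU if available -- this is where the ~98% speedup comes from
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# truncate VGG16 at a mid-level conv block; deeper layers dream bigger features
model = models.vgg16(pretrained=True).features[:23].to(device).eval()

img = Image.open("input.jpg").convert("RGB")
x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0).to(device)
x.requires_grad_(True)

for _ in range(50):
    model.zero_grad()
    loss = model(x).norm()  # maximize activation magnitude at the chosen layer
    loss.backward()
    with torch.no_grad():
        # normalized gradient ascent on the image itself
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)
        x.clamp_(0, 1)
    x.grad.zero_()

T.ToPILImage()(x.detach().squeeze(0).cpu()).save("dream.jpg")

Same principle as the guide, just gradient ascent on the input image instead of on the weights.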
>>57638895
>https://www.reddit.com/r/deepdream/comments/3dur8j/running_deep_dream_on_windows_the_easy_way_with/
Thank you very much for your input.
I'll definitely be looking into it
Another run with a random image
See https://github.com/graphific/DeepDreamVideo
It sorta works by interpolating the hallucinated artifacts between frames, but the models aren't really designed to be temporal, which is why you always get that per-frame flicker.
A more LSTM-ish RNN model (the kind used to recognize sequences of events in video, not just single pictures) would look much better, but afaik nobody has really made one for deep dreaming yet.
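The blending trick is simple enough to sketch. Something like this -- my reading of the idea, not the repo's actual code; deep_dream() is a stand-in for whatever single-frame dreamer you're running, and blend=0.5 is just a guess:

import glob
import numpy as np
from PIL import Image

def dream_video(frame_dir, out_dir, blend=0.5):
    # seed each frame with a fraction of the previous dreamed frame,
    # so the hallucinations carry over instead of flickering per frame
    prev = None
    for i, path in enumerate(sorted(glob.glob(frame_dir + "/*.jpg"))):
        frame = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
        if prev is not None:
            # mix in the last dreamed frame instead of starting from scratch
            frame = (1 - blend) * frame + blend * prev
        prev = np.asarray(deep_dream(frame), dtype=np.float32)  # hypothetical single-frame dreamer
        Image.fromarray(np.uint8(np.clip(prev, 0, 255))).save(
            f"{out_dir}/frame_{i:05d}.jpg")

The blend damps the flicker but doesn't kill it, since the net still has no memory across frames -- that's what the RNN idea would actually fix.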