Turning VQGAN's outputs into grayscale before passing them on to CLIP has interesting effects on the output. Maybe without color as a way to quickly convey a concept, it has to do more with structure (compare to the color version, same prompt).
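The trick is a one-line change in the generation loop: collapse the generator's RGB output to luminance before it reaches CLIP, so color carries no gradient signal. A minimal sketch in NumPy (the `(B, 3, H, W)` layout and the `to_grayscale` helper are my own illustration, not from any particular notebook):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Collapse an RGB image batch (B, 3, H, W) to luminance, then tile
    back to 3 channels so CLIP's expected input shape is unchanged.
    Uses the standard ITU-R BT.601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114]).reshape(1, 3, 1, 1)
    gray = (rgb * weights).sum(axis=1, keepdims=True)
    return np.repeat(gray, 3, axis=1)

# In a VQGAN+CLIP loop the only change is wrapping the decoded image
# (hypothetical function names) before scoring it:
#   image = vqgan_decode(z)
#   loss  = clip_distance(to_grayscale(image), text_embedding)
```

In an actual run you would do this on the differentiable tensor (e.g. in PyTorch) so the gradient flows through the grayscale conversion back into the latents.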
Prompt: A cute little demon baby
Here's what it looks like zoomed in (without compression). The link below has the full 20MB 8600x4770px JPEG that you can save and explore :)
Made me a plant factory :) Using https://t.co/STrYAjIh5l it costs less than a cent per plant to generate these. I'm considering leaving this running overnight and picking the top 1% out tomorrow morning and finding something fun to do with them.
I've been getting into the idea of 'speculative biology' - what would life look like on other planets, or in the far future? I feel like GANs might be a wonderful tool for quick idea-generation here. These early tests have me dreaming of filling alien encyclopedias :)
@tania_rivilis @hicetnunc2000 This idea didn't work out the way I hoped 😂 Not my most flattering self-portraits - maybe the AI resents me for making it work so hard...
Love your painting - hope this qualifies me despite deviating from the rules a little :)
'Makemake After'
When you gaze into the abyss, it gazes back?
Such fun making these even though they take AGES!
Big thanks to @dadsalad2020 who made all these tracks and convinced me to make these videos :) He should have them minted soon and then I'll post links!
Here's a workflow I'm really liking:
1) Grab an un-textured model and render it out (top left)
2) Run this through a GAN (CLIP-guided, so add a prompt like 'spaceship concept art sci-fi')
3) Projection-map this back onto the model geometry
4) Slap it in a scene :)
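The four steps above compose into a simple pipeline. Every function name below is a hypothetical stand-in for an external tool (a 3D renderer like Blender, a VQGAN+CLIP notebook, projection-mapping software), reduced to string placeholders so just the data flow is visible:

```python
def render_untextured(model: str) -> str:
    # Step 1: render the bare, un-textured geometry to an image.
    return f"render({model})"

def clip_guided_gan(render: str, prompt: str) -> str:
    # Step 2: feed the render into a GAN with CLIP guiding it toward the prompt.
    return f"gan({render}, '{prompt}')"

def project_onto_geometry(model: str, texture: str) -> str:
    # Step 3: projection-map the generated image back onto the model geometry.
    return f"project({model}, {texture})"

def texture_with_gan(model: str, prompt: str) -> str:
    render = render_untextured(model)
    texture = clip_guided_gan(render, prompt)
    # Step 4 (placing it in a scene) happens in the 3D tool afterwards.
    return project_onto_geometry(model, texture)

print(texture_with_gan("ship.obj", "spaceship concept art sci-fi"))
```

The key idea is that the GAN never sees the 3D model directly; it only stylizes a 2D render, and the projection step carries that image back onto the geometry.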