Mess around with the prompt, the seed, etc., and you can wander around the latent space while staying somewhat tethered to your composition.
Scribble to Diffusion w/ #ControlNet is kind of a game changer.
A quick thumbnail sketch, a prompt, and you get a very controllable composition.
Running in #stablediffusion v1.5 #aiart
Another thing that's fun about this approach is that I can ask #ChatGPT to transform each scene into a totally different style, re-writing the #midjourney prompts in seconds – in this case turning them into a minimalist style inspired by Hilma af Klint
For the lowest scale, #ChatGPT imagines 'A mind-bending exploration of the very fabric of space-time, where quantum fluctuations give rise to the fabric of the cosmos.'
@TesseractDelta For something like this, I think it would make sense to create a token for each major phase of the animation.
I had #ChatGPT write out bullet points for the overall structure, and then asked it to fine-tune each section with more stylistic expression...
In response to my first Fractal Diffusion animation, user @TesseractDelta planted the seed of an idea: A zoom from the Milky Way down to the scale of atoms and molecules.
This would be a more involved piece. So, I started by asking #ChatGPT ...
#midjourney #stablediffusion
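The phase-per-token idea boils down to a keyframed prompt schedule: each major phase of the zoom gets its own prompt, and each frame looks up the latest phase at or before it. A hypothetical sketch (the frame numbers and prompts are made up for illustration):

```python
# Hypothetical per-phase prompt schedule for a Milky-Way-to-atoms zoom.
# Keys are the frame numbers where each phase begins.
PHASES = {
    0: "the Milky Way galaxy, vast spiral arms",
    120: "a solar system, planets orbiting a star",
    240: "a planet surface, mountains and oceans",
    360: "molecules and atoms, quantum foam",
}

def prompt_for_frame(frame: int) -> str:
    # Use the prompt of the latest phase keyframe at or before this frame.
    keys = sorted(k for k in PHASES if k <= frame)
    return PHASES[keys[-1]]

print(prompt_for_frame(300))  # frame 300 falls in the phase that starts at 240
```

In practice each phase's prompt would be the ChatGPT-refined text for that section, and a render loop would feed `prompt_for_frame(i)` to the sampler frame by frame.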
I also explore the model's latent space with some lower resolution / steps to uncover particularly engaging starting points for animations.
One challenge is that these seeds will express themselves differently at wider aspect ratios and higher step counts.
#aiart
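Why do seeds express themselves differently at wider aspect ratios? A seed only fixes the RNG stream that fills the initial noise latent; reshaping that stream into a wider grid shifts every row, so the spatial pattern (and the image that emerges from it) changes. A minimal numpy sketch of the effect — this is not the actual SD sampler, just the noise-initialization step (SD v1.5 latents are 1/8 of pixel resolution, so 64x64 vs. 64x96 stand in for 512x512 vs. 512x768):

```python
import numpy as np

def init_latent(seed: int, h: int, w: int, channels: int = 4) -> np.ndarray:
    # Diffusion starts from Gaussian noise in latent space;
    # the seed just fixes the RNG state before that noise is drawn.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((channels, h, w))

square = init_latent(seed=1234, h=64, w=64)   # 512x512 at 1/8 scale
wide   = init_latent(seed=1234, h=64, w=96)   # 512x768 at 1/8 scale

# The underlying RNG stream is identical element-for-element...
assert np.allclose(square.ravel(), wide.ravel()[: square.size])

# ...but reshaped into a wider grid, every row is offset, so the
# spatial noise pattern the sampler denoises from is different.
assert not np.allclose(square, wide[:, :, :64])
```

This is why a promising seed found at low resolution/steps is a starting point, not a guarantee — the higher-res, wider render is denoising a genuinely different noise field.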
I'll go into more depth when I post Part II of my Diffusion Secrets✨ series on making #animations with #stablediffusion – but here are a couple of outtakes from my newest render.