@GuyP Here is an example using Stable Diffusion (latent interpolation with composable diffusion).
@CoffeeVectors Yeah, this is possible. You need to interpolate the images in latent space and use composable diffusion. Here is one of the examples from that post done with Stable Diffusion.
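For the latent-interpolation part, a common approach is spherical interpolation (slerp) between two initial noise latents, decoding one frame per step. A minimal numpy sketch; the latent shape (4, 64, 64) and the 10-frame count are just illustrative assumptions, not values from the thread:

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherically interpolate between two latent tensors at fraction t in [0, 1]."""
    v0_flat, v1_flat = v0.ravel(), v1.ravel()
    dot = np.dot(v0_flat, v1_flat) / (np.linalg.norm(v0_flat) * np.linalg.norm(v1_flat))
    dot = np.clip(dot, -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-6:
        # Vectors nearly parallel: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Interpolate between two random initial latents (SD-style 4-channel latent grid)
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 64, 64))
b = rng.standard_normal((4, 64, 64))
frames = [slerp(t, a, b) for t in np.linspace(0.0, 1.0, 10)]
```

Each interpolated latent would then be fed to the diffusion model's denoising loop as the starting noise, so neighbouring frames decode to smoothly related images.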
Saw a bunch of people commenting that the new #willow TV series looks bland, as if it were AI generated. Meanwhile, here's what an AI version actually looks like! 🧵
#aicinema #midjourney
Got #stablediffusion 2.0 running locally.
Here are some generations at 768x768! So far it seems pretty coherent out of the box. Hands are a little better, I think, but it's a bit early to say.
#stablediffusion2
Death Metal diffusion!
Working on feeding album art into #dreambooth. The results are pretty cool so far.
I like how the compositions always work well with a square crop because of the source material.
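Album art is already square, which is why the crops line up so cleanly; for non-square sources, a center square crop is the usual prep step before Dreambooth training. A rough sketch, assuming images as H×W×C numpy arrays (the 768×512 example size is hypothetical):

```python
import numpy as np

def center_square_crop(img):
    """Crop the largest centered square from an H x W x C image array."""
    h, w = img.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return img[top:top + side, left:left + side]

# Example: a 768x512 portrait image becomes a 512x512 square
img = np.zeros((768, 512, 3), dtype=np.uint8)
cropped = center_square_crop(img)
```

In practice the square crop would then be resized to the model's training resolution (e.g. 512×512 for SD 1.x).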
Sci-Fantasy Art Timelapse (AstroKnight III) using #stablediffusion #img2img, #photoshop, and #automatic1111's @Gradio UI.
This is maybe the last image from this series but it was good to experiment with SD 1.5!
Full length video here https://t.co/87dVeYEf0M
So it turns out #Dreambooth works on miniatures! Here's a bust my brother sculpted (first image) and some generated images. Will post more below... 1/n
#minipainting #miniaturepainting #minisculpting #stablediffusion
"Mech Girl" #stablediffusion #img2img
The first piece that kicked off my obsession with Img2Img!
Walkthrough: https://t.co/BWxBi4NfwQ (8/n)
"Heavy Metal Drill" #stablediffusion #img2img
Video Walkthrough: https://t.co/jpCe8b5pv6 (7/n)