The Stable Diffusion 2 depth model has recently become widely available, and it dramatically improves image-to-image, making things like sketching a drawing and using SD to fill in details and shading much more feasible.
Sky Lagoon Remix V2, with depth awareness:
@Slime_BlaXun Here are a few raw generations. None are perfect, but SD 1.5 generations would usually be missing at least one limb and would pose very stiffly.
Posted the previous attempt b/c it was interesting, but it didn't truly capture Giger's aesthetic. Second attempt at X in the style of Giger, using new "Aesthetic Gradients" technique trained on 10 Giger paintings. https://t.co/0qHY1T93pc
Zero. Custom Stable Diffusion model trained on official Zero artwork, with manual touchup and inpainting.
Process:
Left: native 512x512 model output based on text prompt
Right: upscale to double resolution, manually resketch problem areas, and inpaint (mask a selection of the image and force the model to refine it while freezing the rest of the image).
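The mask-and-freeze step above can be sketched as a simple composite: pixels inside the mask take the model's refined output, and everything outside stays untouched. This is a minimal NumPy illustration of that idea, not the actual Stable Diffusion inpainting code; the function name and array layout are my own for this example.

```python
import numpy as np

def composite_inpaint(original, refined, mask):
    """Blend a refined (inpainted) region back into the original image.

    Pixels where mask == 1 take the model-refined values; everywhere
    else the original image is kept frozen. `original` and `refined`
    are HxWxC arrays; `mask` is HxW with values in {0, 1}.
    """
    mask = mask[..., None].astype(original.dtype)  # broadcast over channels
    return refined * mask + original * (1 - mask)

# Toy 2x2 RGB example: refine only the top-left pixel.
original = np.zeros((2, 2, 3), dtype=np.float32)
refined = np.ones((2, 2, 3), dtype=np.float32)
mask = np.array([[1, 0], [0, 0]], dtype=np.float32)

out = composite_inpaint(original, refined, mask)
```

In a real diffusion pipeline this compositing happens at each denoising step rather than once at the end, so the refined region stays consistent with the frozen surroundings.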