#StableDiffusion #WaifuDiffusion #StableDiffusionKawaii
The original waifu from the first proper ecchi, Cutie Honey.
Slowly working out the best settings for training an anime character into an already anime-oriented model. Text encoder trained for a meager 30 steps at 4e-7.
#StableDiffusion2.1 #StableDiffusion #StableDiffusionArt #StableDiffusionKawaii
After using every single token available in a Stable Diffusion prompt (75 of 75), I tried recreating a character from an obscure anime I'm currently watching, w/ #WaifuDiffusion 1.4
#StableDiffusion2.1 #StableDiffusion #StableDiffusionArt
It's a challenge to repeatedly tell myself not to order prints of these. I initially made the Craig Wazowski #Dreambooth model as kind of a joke (Where's my Greg?), but I'm actually really loving the outputs I get from it.
#StableDiffusion2.1 #StableDiffusion #Dreambooth #StableDiffusionArt
"Woman eating salad, stock photo", by Greg Rutkowski.
#StableDiffusion2.1 #StableDiffusion #StableDiffusionArt #Dreambooth
It may be a wide image, but he ain't a wide boi. Really though, I'm surprised that it can even figure out what to do with such a wide frame. I expected cherry-picking to be required, but this was the second try.
#StableDiffusion2.1 #StableDiffusion #StableDiffusionArt #Dreambooth
Since Twitter users love catgirls, here ya go. Didn't know introducing Greg Rutko- I mean, "Craig Wazowski" back into 2.1 would improve anime-style characters.
I think I've more or less nailed the exact settings #DifferentDimensionMe uses to get its results. #StableDiffusion, #anything_v3, Clip skip 2, seed delta 31337, the NovelAI pre/negative prompts plus either anime bishonen/shojo/bishojo, CFG 12, denoise strength of 0.65. Fun shit.
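For anyone who wants to try replicating this, here's a minimal sketch of those settings bundled into kwargs for an img2img call (e.g. with Hugging Face's diffusers library, which takes a `torch.Generator` rather than a raw seed, so the seed would get wrapped before use). The model path and the abbreviated NovelAI-style prompt strings are placeholders, not the exact values.

```python
# Sketch of the #DifferentDimensionMe-style settings as a reusable dict.
# Prompt strings below are paraphrased placeholders for the NovelAI defaults.

DDM_SETTINGS = {
    "model": "anything-v3",    # assumed path/ID for the Anything v3 checkpoint
    "clip_skip": 2,            # stop at the second-to-last CLIP layer
    "seed_delta": 31337,       # offset added to the base seed
    "guidance_scale": 12.0,    # CFG 12
    "strength": 0.65,          # img2img denoising strength
}

def build_img2img_kwargs(base_seed: int, style_tag: str) -> dict:
    """Assemble keyword arguments for an img2img pipeline call."""
    prompt = f"masterpiece, best quality, {style_tag}"  # NovelAI-style positive prefix
    negative = "lowres, bad anatomy, bad hands"         # abbreviated negative prompt
    return {
        "prompt": prompt,
        "negative_prompt": negative,
        "guidance_scale": DDM_SETTINGS["guidance_scale"],
        "strength": DDM_SETTINGS["strength"],
        "clip_skip": DDM_SETTINGS["clip_skip"],
        "seed": base_seed + DDM_SETTINGS["seed_delta"],
    }

# style_tag is whichever of bishonen/shojo/bishojo matches the input photo
kwargs = build_img2img_kwargs(base_seed=1000, style_tag="anime bishojo")
print(kwargs["seed"])  # 32337
```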
So close, but also not close enough. It has either the art style or the character (if either to begin with), but it almost never has both.
Okay, the results from putting Yabuki Joe through #StableDiffusion #Dreambooth onto #anything_v3 ain't so good. Super dodgy looking, with a shitton of cherry-picking needed due to how many are off-model. Guess I'll try training onto a different model. #anythingv3
Someday, I'll know how to work #StableDiffusion #Dreambooth. Here's some Braixens, trained on 12 SFW images for 2400 steps (half 3D, half 2D), with the text encoder trained for 50% of the steps, on top of the base 1.5 SD model, with TheLastBen's Colab notebook.🦊
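That run roughly maps onto diffusers' stock `train_dreambooth.py` script, sketched below. Paths, the "sks" identifier token, and the 1e-6 learning rate are placeholders/assumptions (the post doesn't give the UNet LR), and note that TheLastBen's notebook wraps a patched version of the script: the stock `--train_text_encoder` flag trains the text encoder for the whole run, whereas the notebook lets it stop partway (here, 50% = 1200 of 2400 steps).

```shell
# Config sketch only -- flag names in TheLastBen's notebook differ slightly.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./braixen_images" \
  --instance_prompt="a photo of sks braixen" \
  --resolution=512 \
  --train_text_encoder \
  --max_train_steps=2400 \
  --learning_rate=1e-6 \
  --output_dir="./braixen-dreambooth"
```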
#StableDiffusionArt