Pass six, still not there yet. I'm going to do one more pass of training, then I'm going to call it a (very late) night.
Fourth pass, and things are starting to look mildly more converged than they did before. Only mildly, though.
Pass two of training the model for 600 steps, and I'm not really sure if it's progressing, but it is changing. Same prompts and seeds used.
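(For anyone curious how the pass-to-pass comparison works: a minimal sketch of fixed prompt/seed regeneration, assuming the diffusers library. The checkpoint path, prompts, and seeds below are placeholders, not the actual ones.)

```python
# Hypothetical sketch: regenerate a fixed prompt/seed grid against each
# training checkpoint so outputs stay directly comparable across passes.
# Assumes the diffusers library; path, prompts, and seeds are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./checkpoints/pass-02",  # hypothetical checkpoint directory
    torch_dtype=torch.float16,
).to("cuda")

prompts = ["placeholder prompt one", "placeholder prompt two"]
seeds = [1234, 5678]  # fixed seeds, reused every pass

for i, prompt in enumerate(prompts):
    for seed in seeds:
        gen = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=gen).images[0]
        image.save(f"pass02_p{i}_s{seed}.png")
```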
I tried the Gigachad #StableDiffusion model. The dream of low effort gigachad memes has been realized.
I tried making #StableDiffusion fanart of @ComfyCatboy, the only vtuber I've ever watched. Couldn't get it to do the face stripe, but otherwise it almost worked. #NovelAI #WaifuDiffusion #StableDiffusionKawaii
@Lucky_Eagloh After all of that is done, you can just use the model like any other model. It's REALLY resistant to oversaturation and general crappiness at high CFG values, so try not to go below 12. There's a list of negative-prompt terms that people recommend, and that's also in alt-text.
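(Roughly what that looks like in code, as a sketch using the diffusers library rather than the webui. The model path, prompt, and negative list here are stand-ins, not the recommended list from the alt-text.)

```python
# Sketch of the high-CFG usage described above, assuming the diffusers
# library. Model path, prompt, and negative list are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/the-model",  # hypothetical
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "placeholder prompt",
    negative_prompt="lowres, bad anatomy, blurry",  # stand-in list
    guidance_scale=12,  # CFG; per the advice above, stay at 12 or higher
).images[0]
image.save("out.png")
```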
Starting the second episode (which the fansubbers never got around to subbing), and the translation is definitely better, though letting Whisper do everything unsupervised results in weird interpretations of what lyricless music is "saying". It ain't saying [Music], like YouTube would.
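(The unsupervised setup is nothing fancy; something like this, using the openai-whisper package. The filename and model size are guesses, not what was actually run.)

```python
# Sketch of an unsupervised Whisper pass over a raw episode, assuming
# the openai-whisper package; filename and model size are placeholders.
import whisper

model = whisper.load_model("medium")
result = model.transcribe("episode_02.mkv", task="translate")

# With no supervision, instrumental sections still get "transcribed",
# which is where the odd readings of lyricless music come from.
for seg in result["segments"]:
    print(f"[{seg['start']:7.1f}s -> {seg['end']:7.1f}s] {seg['text']}")
```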
Tried the leaked #stablediffusion NAI model with the hypernetworks, and it's surprising how much of a difference they can make on the same prompt with the same seed. Per-image info in alt-text.
#StableDiffusionKawaii #WaifuDiffusion