If we take image A as input and try to match the high frequency of image B, it basically uses the colors of A to colorize B.
On the other hand, keeping the high frequency preserves the edges and lets us alter the colors by inpainting the low frequency. (left: original; right: after inpainting the low-frequency part)
However, some color leakage is observed.
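A minimal sketch of the frequency swap described above, using a Gaussian low-pass mask in the 2D FFT domain: A's low-frequency band (colors, lighting) is combined with B's high-frequency band (edges, texture). The cutoff sigma and file names are illustrative assumptions, and both images are assumed to have the same size.

```python
import numpy as np
from PIL import Image

def split_frequencies(img, sigma=0.05):
    """Return (low, high) frequency components of an HxWx3 float image."""
    h, w = img.shape[:2]
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    mask = np.exp(-(fx**2 + fy**2) / (2 * sigma**2))   # Gaussian low-pass
    low = np.empty_like(img)
    for c in range(img.shape[2]):                       # filter each channel
        spec = np.fft.fftshift(np.fft.fft2(img[..., c]))
        low[..., c] = np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
    return low, img - low                               # high = residual

a = np.asarray(Image.open("image_a.png").convert("RGB"), dtype=np.float64) / 255
b = np.asarray(Image.open("image_b.png").convert("RGB"), dtype=np.float64) / 255

low_a, _ = split_frequencies(a)           # colors and lighting of A
_, high_b = split_frequencies(b)          # edges and texture of B

combined = np.clip(low_a + high_b, 0, 1)  # A's colors on B's structure
Image.fromarray((combined * 255).astype(np.uint8)).save("combined.png")
```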
The original img2img algorithm can change the input a lot, and I observe the transition happens at around denoise strength 0.5.
Too low a denoise strength does not change the style enough (first image: 0.45), while too high a strength modifies the composition too much (second image: 0.5).
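A hedged sketch of the strength sweep behind this observation, using the standard diffusers img2img pipeline. The model id, prompt, and the exact strength values are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB").resize((512, 512))
prompt = "anime style illustration"   # assumed prompt

# Around strength 0.5 the output flips from "style barely changes"
# to "composition starts to drift".
for strength in (0.40, 0.45, 0.50, 0.55):
    out = pipe(prompt=prompt, image=init_image,
               strength=strength, guidance_scale=7.5).images[0]
    out.save(f"i2i_strength_{strength:.2f}.png")
```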
Experimenting with inpainting in the frequency domain, as an alternative to the current img2img algorithm.
The hope is that by keeping the low-frequency information, we get something closer to the original image.
#stablediffusion #AIart
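One possible reading of this idea, sketched with a spatial Gaussian blur as the low-pass filter: keep the low-frequency band of the original image and take only the high-frequency detail from the img2img output. This is not the exact algorithm used here; file names and the blur radius are assumptions for illustration.

```python
import numpy as np
from PIL import Image, ImageFilter

def low_pass(img, radius=16):
    """Gaussian blur as a cheap low-frequency extractor."""
    return np.asarray(img.filter(ImageFilter.GaussianBlur(radius)),
                      dtype=np.float64)

original = Image.open("original.png").convert("RGB")
generated = Image.open("i2i_output.png").convert("RGB")   # same size as original

low_orig = low_pass(original)                              # keep original colors/layout
high_gen = np.asarray(generated, dtype=np.float64) - low_pass(generated)

merged = np.clip(low_orig + high_gen, 0, 255).astype(np.uint8)
Image.fromarray(merged).save("merged.png")
```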
The full Stable Diffusion model fine-tuned for Bofuri is now online: https://t.co/Sxz7wqKuMj
#防振り #AIイラスト #bofuri #AIart #stablediffusion
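A minimal usage sketch for loading a fine-tuned Stable Diffusion checkpoint with diffusers. The repository id and prompt below are placeholders, not the actual model behind the shortened link.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "user/bofuri-finetune",          # hypothetical repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("maple, bofuri, anime style", guidance_scale=7.5).images[0]
image.save("bofuri_sample.png")
```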
Multi-character scenes, even with characters that never show up together, are possible with LoRA too.
I kind of feel this LoRA trained at lr 1e-4 is less flexible, but there is no objective measure here.
#AIイラスト #AIart #LoRA #stablediffusion #crossover #クロスオーバー
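A hedged sketch of prompting a multi-character scene with a single LoRA in diffusers. The base model, LoRA directory and file name, character tags, and LoRA scale are all assumptions for illustration, not the actual training setup.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical local LoRA trained on multiple characters.
pipe.load_lora_weights("path/to/lora_dir", weight_name="characters_lora.safetensors")

# Both character tags in one prompt, even if they never share a scene
# in the training data.
prompt = "maple and sally, 2girls, standing together, detailed background"
image = pipe(prompt, guidance_scale=7.5,
             cross_attention_kwargs={"scale": 0.8}).images[0]  # LoRA weight
image.save("crossover.png")
```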