#anythingv3
Tried Dream Artist, a next-generation Textual Inversion implementation that claims it can build sufficiently high-precision embeddings from even a single image
https://t.co/QXAJA0Oz5Y
Training material: Sill's standing portrait from the official page of the Rance series
Around 4000 epochs of training and it's already this accurate
Generated image ←→ training source image
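(For context: the core idea shared by textual inversion and variants like Dream Artist is to freeze the generative model and optimize only the embedding of one new token against features from the training image(s). Below is a minimal toy sketch of that optimization; the random linear map standing in for the frozen model, the dimensions, and the learning rate are all illustrative assumptions, not the real Stable Diffusion training loop.)

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM, FEAT_DIM = 16, 8
W = rng.normal(size=(FEAT_DIM, EMB_DIM))     # frozen "model" weights (never updated)
target = rng.normal(size=FEAT_DIM)           # stand-in for features of the training image

embedding = rng.normal(size=EMB_DIM) * 0.01  # the new token's embedding: the only trainable part
lr = 0.01

def loss(e):
    # squared error between the frozen model's output and the image features
    err = W @ e - target
    return float(err @ err)

history = [loss(embedding)]
for _ in range(4000):                        # cf. the ~4000 epochs mentioned above
    grad = 2.0 * W.T @ (W @ embedding - target)
    embedding -= lr * grad                   # gradient step on the embedding only
    history.append(loss(embedding))

print(f"loss: {history[0]:.3f} -> {history[-1]:.6f}")
```

With only a single short vector being trained, even one image gives a usable gradient signal, which is why so few training examples suffice compared with full fine-tuning.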
Same parameters but different styles invoked in the prompt, clockwise from top left:
No embedding → miata8674_gs45000
asutora_gs31800 → (training-in-progress) atawashi_gs22000.
#幽香 #AI #textualinversion https://t.co/ohCrjIRRbL
I soon realized I was invoking it with the wrong name, without the underscore. Even so, I find it still performs worse than my default negatives.
#幽香 #AI #textualinversion https://t.co/Jf2340JfDb
The miata embedding should be okay now, at 45k total steps.
https://t.co/3n45LipS0E
#textualinversion #machinelearning https://t.co/t0kcUjx8K2
23k steps @ high CFG.
Either the UI/VAE updates or the embedding itself doesn't like sitting close to something like "perfect face" in the prompt; it really messes with the result. See the first two images. The second pair are better ones...
#幽香 #textualinversion #NovelAI https://t.co/AxNibT64u4
At 11k steps there's very minimal similarity to miata; conversely, it's very noticeable with realistic art: I didn't highlight the eyes in the prompt at all!
I'm now positive that Google is indeed progressively lowering Colab limits with usage.
#textualinversion #machinelearning https://t.co/FIeOpnIaAy
28k steps is OK and 20k is very close as well, so I'll consider the sketch one to be working just fine and move on to do miata.
Same folder as before:
https://t.co/3n45LiqpQc
#幽香 #メガネ #レミリア・スカーレット #textualinversion #NovelAI https://t.co/TXTDZDS6sw
#NovelAI embedding of the style (mostly in regard to coloring) of a certain artist that used to draw Yuuka:
https://t.co/3n45LipS0E
14k & 30k steps, two Colab days' worth. All images generated on 14k.
#幽香 #textualinversion
@elonmusk The best carbs for your wellbeing Artist: @laser_lew Title: Inversion Excursion Real Fkn Lasers🍽
Should I be working on some grand artistic collection to wow galleries all over the globe?
Am I instead running the arcane model with a textual inversion of my face to become an awesome Arcane Character?
Yes.
Priorities = straight.
how did i do it?👇
#aiia #stablediffusion
w/ text inversion for dalle paint style that doesn't work 100% but does add flavor
@Clink9clfg1 Thank you.
This was done in an environment I tuned to my own taste with model merging, textual inversion, and a hypernetwork, so you won't get the same result, but for reference I've put the shared settings in the alt text. The base is a merged model built mainly from waifudiffusion v1.3 + trinart_char.
@CoffeeVectors 3 dollars. I only did it because I was having such issues getting textual inversion to run on the WebUI. It took just an hour to run the images, and then you get unlimited (as of now) prompt runs using your face, or whoever's face you trained it on.
More examples from fine-tuning the Stable Diffusion model with 10K satellite images via textual inversion, merging the urban landscape with the micro-structure of slime mold.
#StableDiffusion #urbanart #AIArtwork #aiartcommunity #maps #landscape #generativeart #AIart #architecture
Methods like textual inversion that train a new concept from a small image set are what worry me the most; they've opened Pandora's box. Art-style theft is rampant, frankly.
#StableDiffusion