I tried Dream Artist, a next-generation Textual Inversion implementation that can build a surprisingly high-quality embedding from just a single image.

https://t.co/QXAJA0Oz5Y

The training material is the official-site standing illustration of Sill (シィル) from the Rance series.
About 4,000 epochs of training and it's already this accurate.

Generated image ←→ training source image
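(For context, a minimal conceptual sketch of what textual inversion does: every model weight stays frozen and only one new token embedding is optimized against the single training image. The toy model, names, and sizes below are assumptions for illustration, not the Dream Artist code itself.)

import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, dim = 1000, 64
embedding = nn.Embedding(vocab_size + 1, dim)   # one extra row for the new pseudo-word
new_token_id = vocab_size

# Freeze everything that already exists; only the new row is trainable.
embedding.weight.requires_grad_(False)
new_vec = embedding.weight[new_token_id].clone().requires_grad_(True)

decoder = nn.Linear(dim, 3 * 8 * 8)             # stand-in for the frozen generator
for p in decoder.parameters():
    p.requires_grad_(False)

target = torch.rand(3 * 8 * 8)                  # stand-in for the single training image

opt = torch.optim.AdamW([new_vec], lr=5e-3)
for step in range(4000):                        # roughly mirrors the "4,000 epochs" above
    loss = nn.functional.mse_loss(decoder(new_vec), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Copy the learned vector back so the pseudo-word can be used in prompts.
with torch.no_grad():
    embedding.weight[new_token_id] = new_vec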

Same parameters but different styles invoked in the prompt, clockwise from top left:
No embedding → miata8674_gs45000 → asutora_gs31800 → (training in progress) atawashi_gs22000.
https://t.co/ohCrjIRRbL

I soon realized I was calling it wrong, without using the underscore. Nevertheless, I find it's still worse than my default negatives.
https://t.co/Jf2340JfDb
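(Illustrative note on the naming issue: in the AUTOMATIC1111 webui an embedding is triggered by writing its exact filename in the prompt, so dropping the underscore means the token never resolves to the embedding at all. The prompt strings below are assumptions, not the author's exact prompts.)

wrong = "masterpiece, 1girl, asutora gs31800"    # plain words, embedding is ignored
right = "masterpiece, 1girl, asutora_gs31800"    # matches the embedding filename, embedding applied
negatives = "lowres, bad anatomy, blurry"        # stand-in for the author's default negatives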

Hope everyone is doing well!
Still training models, mixing and matching aesthetic embeddings and textual inversions, having a great time!

23k steps @ high CFG.
Either the updates to the UI/VAE or the embedding itself doesn't like it when something like "perfect face" sits close to it in the prompt; it really messes with the result. See the first two images. The second pair are better ones...
https://t.co/AxNibT64u4
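(A one-line sketch of classifier-free guidance, for context: the higher the CFG scale, the harder the sampler is pushed toward the conditioned prediction, which is why a strong phrase like "perfect face" can overpower a trained embedding at high CFG. The function name is illustrative.)

def guided_noise(noise_uncond, noise_cond, cfg_scale):
    # cfg_scale = 1.0 reproduces the plain conditioned prediction;
    # larger values exaggerate whatever the prompt asks for.
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)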

At 11k steps there's only minimal similarity to miata; conversely, it's very noticeable with realistic art: I didn't highlight the eyes in the prompt at all!
I'm now positive that Google is indeed progressively lowering Colab limits with usage.
https://t.co/FIeOpnIaAy

28k steps is OK and 20k is very close as well, so I'll consider the sketch one to be working just fine and move on to do miata.
Same folder as before:
https://t.co/3n45LiqpQc
https://t.co/TXTDZDS6sw

happy birthday, beautiful. i would follow you into an Inversion 🎂🎉

An embedding of the style (mostly in regard to coloring) of a certain artist who used to draw Yuuka:
https://t.co/3n45LipS0E
14k & 30k steps, two Colab days' worth. All images generated on 14k.

The best carbs for your wellbeing Artist: Title: Inversion Excursion Real Fkn Lasers🍽

There were some misses in the beginning. When you train a textual inversion in AUTOMATIC1111, make sure you use LOW CFG when referencing it; otherwise it gets glitchy. 😂 3/4

Should I be working on some grand artistic collection to wow galleries all over the globe?

Am I instead running the arcane model with a textual inversion of my face to become an awesome Arcane Character?

Yes.

Priorities = straight.

how did i do it?👇

w/ text inversion for dalle paint style that doesn't work 100% but does add flavor

Thank you.
This was done in an environment I've tuned to my own taste with model merging, textual inversion, and a hypernetwork, so you probably won't get the same results, but I've put the common settings in the alt text for reference. The main merged model is waifudiffusion v1.3 + trinart_char.
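(A minimal sketch of the kind of weighted checkpoint merge described above: a straight interpolation of matching weight tensors, as in the AUTOMATIC1111 checkpoint merger. The file names and mixing ratio are assumptions; the exact recipe isn't given in the tweet.)

import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    # alpha * A + (1 - alpha) * B for every tensor the two checkpoints share
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k]
            for k in sd_a
            if k in sd_b and torch.is_tensor(sd_a[k]) and sd_a[k].shape == sd_b[k].shape}

wd = torch.load("wd-v1-3.ckpt", map_location="cpu")["state_dict"]        # hypothetical path
tri = torch.load("trinart_char.ckpt", map_location="cpu")["state_dict"]  # hypothetical path
merged = merge_state_dicts(wd, tri, alpha=0.7)                           # ratio is an assumption
torch.save({"state_dict": merged}, "wd13_trinart_merge.ckpt")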

3 dollars. I just did it because I was having such issues getting textual inversion to run on the WebUI. It only took an hour to run the images, and then you get unlimited (as of now) prompt runs using your face, or whoever's face you made it with.

More examples from fine-tuning the Stable Diffusion model with 10K satellite images via textual inversion, merging the urban landscape with the micro-structure of slime mold.

I see, so that's Textual Inversion.

Led by textual inversion, these methods that train a new concept from a small image set are what worry me the most; they've opened Pandora's box. Art-style theft is getting out of hand, frankly.

(Stable Diffusion)
Train Textual Inversion just briefly on a single roughly drawn image, and the generated pictures drift slightly toward it, which might be handy? Art-style gacha.

The images are, in order: the training image → output from the pt at iteration 3 → iteration 18 → iteration 321

The prompt is "1girl, cat ears, (pt name:1.00)"
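(A note on the "(pt name:1.00)" part: that is AUTOMATIC1111-style attention weighting, where 1.00 is neutral strength. The embedding name below is a placeholder, not the author's actual pt file.)

prompt_neutral = "1girl, cat ears, (my_sketch_pt:1.00)"
prompt_stronger = "1girl, cat ears, (my_sketch_pt:1.30)"   # lean harder on the learned style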
