Images of our new SD2 fidelity+coherence research project
made with #stablediffusion #StableDiffusion2
It uses raw SD2 + fine-tunings (TI embeddings)
All made possible thanks to @EMostaque, @StabilityAI and the whole SD community.
More soon.
#AIart #AIArtwork #generativeart
I'm far from an expert 🤓, but I've been training checkpoints, using embeddings and hypernets, and crafting safe prompts for a while across many styles (here you can see the same character in many styles).
Glad to help anyone however I can 😊 Just DM
#stablediffusion #midjourney #novelai
I love Textual inversion in combination with #StableDiffusion 2 🙃. Negative embeddings have become my main focus when designing my text prompts. These images were created by combining Magic Facelift and bad_prompt, among others (Links in the comment).
Embeddings are my new favorite toy in #StableDiffusion2. They work really well for such a tiny add-on. I think they will become just as popular as custom models.
Here is the link to the VikingPunk embedding I used: https://t.co/STn7VDBrRV
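Part of why these embedding files are so tiny is that Textual Inversion learns only one (or a few) new token vectors while the whole model stays frozen. A minimal sketch of that idea, with a toy embedding table and made-up numbers (the `<vikingpunk>` token and dimensions here are illustrative, not the actual file format):

```python
import numpy as np

# A Textual Inversion embedding is just a learned vector in the text
# encoder's token-embedding space; all model weights stay frozen.
# Sketch: append a learned vector for a hypothetical new token
# "<vikingpunk>" to a toy embedding table and look it up like any token.

rng = np.random.default_rng(0)
embed_dim = 768                      # CLIP ViT-L text width used by SD 1.x
vocab = {"a": 0, "photo": 1, "of": 2}
table = rng.normal(size=(len(vocab), embed_dim)).astype(np.float32)

learned = rng.normal(size=(embed_dim,)).astype(np.float32)  # the "embedding file"
vocab["<vikingpunk>"] = table.shape[0]
table = np.vstack([table, learned])  # extend the table with the new token

tokens = ["a", "photo", "of", "<vikingpunk>"]
ids = [vocab[t] for t in tokens]
prompt_embeds = table[ids]           # shape (4, 768), fed to the text encoder
print(prompt_embeds.shape)
```

So the shareable file is roughly one 768-float vector per token, which is why embeddings weigh kilobytes while checkpoints weigh gigabytes.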
A 25,000-step embedding from 3 source images, based on my digital / hand-drawn artworks. Getting closer... and surprisingly, there's literally no art-style guidance or artist reference in the prompt other than myself (and the embeddings).
#WinterWaifuAIChallenge #AIart #AIイラスト #oppai
#StableDiffusion #anythingv3
Using embeddings made by someone in China, I had it draw Nahida from Genshin Impact.
When I use a quality-boosting prompt made by the same person, it starts undressing on its own without being told to. Things that shouldn't be visible keep popping out.
High quality, but hard to control.
Playing around in Waifu Diffusion, which has a stronger grasp of Hatsune Miku, making "mountains that just barely avoid turning into Hatsune Miku." The text_embeddings themselves can be reused, but tuning is tricky. They almost always turn into Miku. #WaifuDiffusion
Some more from first batch.
Lots of optimisation still to do; took about an hour of playing around. Prompts here: https://t.co/4soGak9op0 Negatives matter for 2.0, given how we flatten the distribution of latents with dedupe etc.
Embeddings will make it easier out of the box
President Rance, made with Embeddings via DreamArtist TI.
Somehow it reminds me of "beautiful Gian."
#stablediffusion
#anythingv3
#dreamartist
Output image ←→ Training source image
#anythingv3
I tried Dream Artist, a next-generation Textual Inversion implementation said to create highly accurate Embeddings from even a single image
https://t.co/QXAJA0Oz5Y
The source material is the official standing artwork of Sill from the Rance series
This level of accuracy after only about 4000 epochs of training
Generated image ←→ Training source image
And voilà, it's lewd pictures.
Prompt: 1 girl, solo, nude, chained to wall, whipping marks, red hair, dirty skin, cum, tears, dungeon background. Image by <redacted>
Used embeddings: yes | model: wd 1.3 full | sampling method: DDIM 24-32 steps | Upscaler: R-ESRGAN 4x+ Anime6B
@KaliYuga_ai @GanWeaving @KyrickYoung There are so many variables to take into account, including the length/strength of the prompt and whether Dreambooth or embeddings were used.
Sometimes it's easiest to plot an x/y matrix to see which one you prefer. I was running some this afternoon.
I got too curious, and by doing this I know I'm playing with fire, but I HAD to satiate my curiosity. I feel dirty now. The leaked anime model is really damn good, though the best results likely require using the embeddings files, which don't work on my set-up.
#stablediffusion
Meet #AI’s Multitool: Vector Embeddings
https://t.co/jWl04Ee4Gl @liwaiwaicom
#MachineLearning #DataScience
Cc @helene_wpli @jblefevre60 @YuHelenYu @Xbond49 @mvollmer1 @Fabriziobustama @MarshaCollier @DeepLearn007 @Shi4Tech
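The "multitool" framing in that article comes from the fact that once anything is an embedding vector, one similarity measure works across all of it. A toy illustration with made-up 3-dimensional vectors (real embeddings come from a trained model and have hundreds of dimensions):

```python
import numpy as np

# Cosine similarity: the workhorse for comparing embedding vectors.
# The vectors below are invented for illustration only.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

king  = np.array([0.9, 0.1, 0.8])
queen = np.array([0.85, 0.15, 0.82])
apple = np.array([0.1, 0.9, 0.05])

# Semantically related items land close together in embedding space,
# so their cosine similarity is higher.
print(cosine(king, queen) > cosine(king, apple))
```

The same function compares word vectors, image features, or user histories, which is exactly what makes embeddings a multitool.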
Aw yeah, thanks to the leak I can now train my own embeddings on the novelai model. It's turning out pretty damn nice! Unlimited wife art, fuck yeah.
#naotomori
#友利奈緒
で、復元したpromptをwaifuに突っ込んだ結果がこう。
悪くはないんだけど、画風の再現とまでは行ってない印象
復元したpromptだと、元のembeddingsを再現するのは難しそうですね
Experimenting with Textual Inversion. Trained my own embeddings to change the style of my pieces into something I've tailored myself using Stable Diffusion. This is one of my early generations testing it out. #aiart #aiartcommunity #darkart #aiartists #stablediffusion
Round 1 of the MABe Challenge is over, with winner announcements coming soon, but in the meantime, Round 2 has begun! Your goal is still to produce meaningful unsupervised embeddings of animals' actions, but now you have the raw videos to work with. 1/5
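As a sense of what "unsupervised embeddings of actions" can mean in the simplest case, here is a hypothetical baseline sketch: slice pose trajectories into fixed windows and project them onto principal components. The window length, keypoint count, and 8-dimensional output are all assumptions for illustration, not challenge specifics:

```python
import numpy as np

# Hypothetical baseline: turn pose trajectories into per-window embeddings
# via PCA. All shapes and sizes below are made up for the sketch.

rng = np.random.default_rng(1)
frames, keypoints = 600, 7                         # e.g. 7 tracked body points
poses = rng.normal(size=(frames, keypoints * 2))   # (x, y) per keypoint per frame

win = 30                                           # 30-frame sliding windows
windows = poses[: frames // win * win].reshape(-1, win * keypoints * 2)

# PCA via SVD of the centered window matrix
centered = windows - windows.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embeddings = centered @ vt[:8].T                   # 8-dim embedding per window
print(embeddings.shape)                            # one vector per 30-frame window
```

Real entries would replace PCA with a learned representation, but the input/output contract (frames in, one vector per behavior window out) is the same.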