TBC trio :3c
May turn these into stickers; would anyone be interested?
#tbc #thebrokencode #warriorcats #waca #rootspring #bristlefrost #shadowsight #redesigns
@MesmericNft @knylx_art Dreampunk samurai #studiogreencode #AIart #artoftheday
🌟 Daily Art Showcase: Check out this mesmerizing piece from today's creation! #DailyArt #Studiogreencode #AIart #NFTCommunity
🌅 Rise and shine, beautiful people! A new day awaits, full of opportunities and adventures! ☕️🌞 #GoodMorning #NewDay
#Studiogreencode #AIart
🌅 Good morning, art enthusiasts! We're excited to share another AI-generated masterpiece with you today! #Studiogreencode #AIart #artoftheday #GoodMorningTwitterWorld
🎨🤖 Introducing @Studiogreencode the ultimate NFT and A.I. Artist! Follow my journey as I create innovative digital art! #AIart #nftart #studiogreencode
dynamic-thresholding latents in pixel space.
at sigmas≥1.1: we decode to pixel space, do Imagen-style thresholding, encode to latents.
trained a tiny latent decoder + RGB encoder on VAE outputs (you could call this distillation).
left = CFG 30, usual
right = dynthreshed
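The decode → threshold → encode trick above hinges on the Imagen-style thresholding step. A minimal NumPy sketch of that step (the 99.5 percentile is an assumption from the Imagen paper's suggested range, not necessarily the setting used in this thread; the decode/encode round-trip through the tiny latent decoder and RGB encoder is omitted):

```python
import numpy as np

def dynamic_threshold(x: np.ndarray, percentile: float = 99.5) -> np.ndarray:
    """Imagen-style dynamic thresholding: clip pixel values to the
    percentile of |x|, then rescale back into [-1, 1]."""
    s = np.percentile(np.abs(x), percentile)
    s = max(s, 1.0)  # never shrink values that are already in [-1, 1]
    return np.clip(x, -s, s) / s
```

At high CFG scales (like the CFG 30 comparison above), pixel values overshoot [-1, 1]; this rescaling is what prevents the washed-out, over-saturated look.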
Still alive! This is auto-tuning for the autoencoder, and I've gotten this far. HDR seems to be supported internally too, so I'm exploring whether I can get it to output EXR or HDR!
#StableDiffusionAI #AIArt #AIイラスト
[wd-1-5-beta2](+LoRA)
44 training images. Captions via DD. Batch size = 2. repeats = 10. unet_lr = 1e-6. text_encoder_lr = 5e-8. 20 epochs.
Well, I guess this is about as good as it gets. It feels undertrained, but better that than overfitting? Learning rates are tricky.
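For reference, the step count implied by those hyperparameters works out as follows (a quick sketch; `steps_per_epoch` is a hypothetical helper and assumes kohya-style repeats, where one epoch iterates images × repeats samples):

```python
def steps_per_epoch(images: int, repeats: int, batch_size: int) -> int:
    # kohya-style datasets: one epoch = images * repeats samples
    return images * repeats // batch_size

# Numbers from the post above: 44 images, repeats=10, batch size 2, 20 epochs.
per_epoch = steps_per_epoch(44, 10, 2)  # 220 optimizer steps per epoch
total_steps = per_epoch * 20            # 4400 steps over 20 epochs
```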
In diffusers, am I right in thinking that the SD 2.x text encoder already has its final layer removed, so it's effectively clip skip = 2 automatically?
I've read this issue:
https://t.co/YhX7f0fjBG
Or rather, does wd15 just barely show any difference between clip skip settings?
1: clip skip = 1
2: clip skip = 10
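The clip-skip question above comes down to which hidden layer of the text encoder conditions the UNet. A minimal sketch of the indexing convention (hypothetical helper; in diffusers the chosen layer is typically also passed through the final layer norm, which is omitted here):

```python
def select_clip_layer(hidden_states, clip_skip: int = 1):
    """Pick the clip_skip-th hidden layer from the end.

    `hidden_states` is the tuple a CLIP text encoder returns with
    output_hidden_states=True: clip_skip=1 is the last layer,
    clip_skip=2 the penultimate one, and so on.
    """
    if not 1 <= clip_skip <= len(hidden_states):
        raise ValueError("clip_skip out of range")
    return hidden_states[-clip_skip]
```

If the final transformer layer has already been removed from the checkpoint, clip skip = 1 on that model corresponds to clip skip = 2 on the full encoder, which would explain the small visible difference.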
More improvements coming to #RetroDiffusion!
Should be a nice little update within the week lowering storage requirements and improving generation quality, along with some UI upgrades.
Now featuring a custom trained variational autoencoder for better image quality.
@Monaymaker21 Huge thanks, my friend, for the offer 🫂🫂🫂. I'm trying to mint the 3 pieces, but my internet is slow right now. Means a lot to me. For these 3 pieces, I was planning a 24h auction at 33.3. This collection resonates with the 333 number 🫂 and the encoded message in each one
I edited a whole nice video for this, but adobe media encoder said no so here is the boring non-edited timelapse of the #vtuberprom art featuring @kal_zibeon
Whew, that was a long road!
I was having trouble with jagged output in DrawThings, but when loading the model, loading the same model a second time under Custom Variational Autoencoder did the trick.
Now I can test it locally too.
#CoolJapanDiffusion
This is WD1.4 Anime with its Text Encoder merged 25% toward SDv2.1
Slightly less prone to stiff results... maybe
https://t.co/KWlX8mPzes
#StableDiffusion #WaifuDiffusion #StableDiffusionKawaii
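A 25% merge like the one above is typically a key-by-key weighted average of the two text encoders' state dicts. A minimal sketch (hypothetical helper; a real merge operates on torch tensors and assumes matching keys and shapes):

```python
def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.25) -> dict:
    """Return (1 - alpha) * sd_a + alpha * sd_b, key by key."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}
```

Here `sd_a` would be the WD1.4 Anime text encoder and `sd_b` the SDv2.1 one, with alpha = 0.25.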
The original waifu from the first proper ecchi, Cutie Honey.
Slowly working out the best settings for training an anime character into an already anime-oriented model. Text encoder trained for a meager 30 steps at 4e-7.
Happy New Year!!
This is sudden, but I'm going to try
taking illustration requests, followers only!!
Check the first and second images for details!!
The third and fourth are sample illustrations:
"Wriggle eating mochi" and "Encode-san with a sore back"
Request deadline is Monday, January 9!!
waifu-diffusion 1.4 epoch 1 is out!
supports non-square aspect ratios, triple prompt length.
includes text encoder and VAE.
thanks to the WD team (@haruu1367, @cafeai_labs , salt, +contributors & sponsors) and to NovelAI for sharing their training process.
https://t.co/t8vVqxnTkK