So I drop this crudely fixed image into img2img.
I set the denoise quite low again; the point is to get a variation on this image. It cleans up my fixes to the wings and the logo removal, making them look more natural (in most cases).
1 generation is usually enough
Similar result here... many generations later.
Some things to note... perspective can get all kinds of messed up, so cherry-pick or use an external editor to combine images. The beauty is that with low denoise some features don't change massively, so it's easy to swap parts out.
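Roughly what that cleanup pass looks like in code: a minimal diffusers sketch, where the model ID, prompt, and exact strength value are my own placeholders rather than the author's settings.

```python
# Low-denoise img2img "cleanup" pass: feed in the crudely fixed image and
# let the model re-draw just enough to blend the manual edits in.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("crude_fix.png").convert("RGB").resize((512, 512))

out = pipe(
    prompt="1girl, detailed wings, clean background",
    image=init,
    strength=0.3,        # low denoise: a gentle variation, not a new image
    guidance_scale=7.5,
).images[0]
out.save("cleaned.png")
```

Run it a few times and cherry-pick; because the strength is low, the composition barely moves between generations, which is what makes swapping parts between outputs easy.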
I think I've more or less nailed the exact settings #DifferentDimensionMe uses to get its results. #StableDiffusion, #anything_v3, Clip skip 2, seed delta 31337, the NovelAI pre/negative prompts plus either anime bishonen/shojo/bishojo, CFG 12, denoise strength of 0.65. Fun shit.
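For reference, those settings collected as an AUTOMATIC1111-webui-style img2img API payload. This is just a restatement of the tweet: the prompt strings are abbreviated assumptions, and how the service actually applies the "seed delta" is my guess, not something the webui API exposes.

```python
# Sketch: the reverse-engineered DifferentDimensionMe settings as a payload
# for the webui /sdapi/v1/img2img endpoint.
base_seed = 42  # whatever the service derives per request (assumption)

payload = {
    "init_images": ["<user photo, base64>"],
    "prompt": "masterpiece, best quality, anime bishojo",      # or bishonen/shojo
    "negative_prompt": "lowres, bad anatomy, bad hands, ...",  # NovelAI negatives
    "denoising_strength": 0.65,
    "cfg_scale": 12,
    "seed": base_seed + 31337,                                 # "seed delta 31337"
    "override_settings": {"CLIP_stop_at_last_layers": 2},      # Clip skip 2
}
```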
@RiversHaveWings actually you can get it down to 3 steps, 4 model calls if you skip 0
6.1080, 1.5968, 0.4765, 0.1072
it won't be fully denoised but could you tell?
what if I told you you could denoise latents in float16 but do DPM-Solver++ sampling in float32
left=fp16 Saber
right=mixed-precision Saber
supposedly high-precision sampling helps to converge more stably towards the true image (Karras used fp64)
https://t.co/C0Y0nD9EXS
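A minimal sketch of that mixed-precision split, assuming a k-diffusion-style denoiser (`model(x, sigma) -> denoised estimate`): the UNet runs in float16, while the DPM-Solver++(2M) update math stays in float32. The sigma schedule is the one quoted above with no trailing zero, so (as noted) the result isn't fully denoised; step and model-call counts differ by sampler order, and the stand-in model is only there so the script executes.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for a real fp16 k-diffusion denoiser (e.g. a wrapped SD UNet);
# it just shrinks x toward zero so this sketch runs end to end.
model = lambda x, sigma: x / (1.0 + sigma.to(x.dtype) ** 2).sqrt()

def denoise_fp16(x32, sigma32):
    # fp16 in, fp32 out: the UNet denoises in half precision, but
    # everything the sampler does with the result is float32.
    return model(x32.half(), sigma32.half()).float()

# the schedule from above; no trailing 0.0, so the output stays slightly noisy
sigmas = torch.tensor([6.1080, 1.5968, 0.4765, 0.1072], device=device)

x = torch.randn(1, 4, 64, 64, device=device) * sigmas[0]  # fp32 sampler state
old_denoised = None
for i in range(len(sigmas) - 1):
    denoised = denoise_fp16(x, sigmas[i])
    # DPM-Solver++(2M) update in float32, log-sigma (t = -log sigma) form
    t, t_next = -sigmas[i].log(), -sigmas[i + 1].log()
    h = t_next - t
    if old_denoised is None:
        x = (sigmas[i + 1] / sigmas[i]) * x - (-h).expm1() * denoised
    else:
        h_last = t + sigmas[i - 1].log()
        r = h_last / h
        d = (1 + 1 / (2 * r)) * denoised - (1 / (2 * r)) * old_denoised
        x = (sigmas[i + 1] / sigmas[i]) * x - (-h).expm1() * d
    old_denoised = denoised
```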
The cruelty of uploading my girls in low resolution... Orz
After a Lollypop upscale in the Extras tab, I ran SD Upscale in img2img.
With a denoise around 0.2 you'd see more variation, but given the time and compute cost it's hard to recommend.
#StableDiffusion
#NovelAIDiffusion
#AIイラスト
#AI絵
classifier-free guidance:
ask the model to denoise Gaussian noise.
no condition: model predicts a salad.
shrine maiden condition: model predicts graffiti of faces.
CFG is "what makes shrine maiden different from salad", multiplied by your guidance scale.
repeat this every sampler step.
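That whole recipe in code: a sketch where `unet` stands in for any epsilon-predicting model and the names are illustrative, not from the linked source.

```python
# One sampler step's worth of classifier-free guidance.
import torch

def cfg_denoise(unet, x, sigma, cond, uncond, guidance_scale=7.5):
    # two predictions from the same noisy latents:
    eps_uncond = unet(x, sigma, uncond)  # "salad": unconditional guess
    eps_cond = unet(x, sigma, cond)      # conditioned guess
    # guidance = the direction "what makes shrine maiden different from
    # salad", scaled up and added back onto the unconditional prediction:
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```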
CFG 20, but told to bring its 99.95th-percentile latent magnitude down to match CFG 7.5's.
code here (see CFGDynTheshDenoiser):
https://t.co/1Zn1aefIFL
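A hedged sketch of the idea: denoise with both a reference CFG and a high CFG, then squash the high-CFG latents so their 99.95th-percentile magnitude matches the reference's. This is my reading of the tweet, not a copy of the linked CFGDynTheshDenoiser code.

```python
import torch

def dyn_thresh_cfg(denoise, x, sigma, cond, uncond,
                   cfg_hi=20.0, cfg_ref=7.5, q=0.9995):
    e_u = denoise(x, sigma, uncond)
    e_c = denoise(x, sigma, cond)
    hi = e_u + cfg_hi * (e_c - e_u)    # punchy, but prone to blowing out
    ref = e_u + cfg_ref * (e_c - e_u)  # well-behaved reference
    # per-sample 99.95th percentile of absolute latent values:
    s_hi = torch.quantile(hi.abs().flatten(1), q, dim=1).view(-1, 1, 1, 1)
    s_ref = torch.quantile(ref.abs().flatten(1), q, dim=1).view(-1, 1, 1, 1)
    # clamp the high-CFG result into its own range, then rescale
    # so its percentile lands where the reference's does:
    return hi.clamp(-s_hi, s_hi) * (s_ref / s_hi)
```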
I tried to fix the hips, but she turns into Mega Man.
Sometimes you have to work within the constraints of the noise your seed gives you at high sigmas! The UNet predicts that sigma 3.8167 denoises to a wide cloud of legs, so that's what you're up against.
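If you want to see that forecast yourself, one way is to look at the model's single-step denoised prediction at that sigma. A sketch, again assuming a k-diffusion-style denoiser called `model` (e.g. a `CompVisDenoiser`-wrapped UNet plus your conditioning):

```python
import torch

sigma = torch.tensor(3.8167)
x = torch.randn(1, 4, 64, 64) * sigma  # your seed's latents at this sigma
preview = model(x, sigma)              # one-step denoised estimate; decode it
                                       # through the VAE to see the blurry
                                       # "cloud of legs" the UNet expects
```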
#WaifuDiffusion v1.2 vs v1.3 (epoch 4) img2img comparison
(1) img2img source image (generated with v1.2 at 10 steps)
(2) 4 images generated with v1.2 (50 steps)
(3) 9 images generated with v1.3 (50 steps)
CFG 8, denoise strength 0.75
v1.3's hit rate is noticeably higher overall
The art style may just come down to personal preference at this point
For once, the prompt is included too (in the alt text)
For upscaling, my baseline is img2img at 2x the width and height, CFG 9.5, denoise 0.3-0.4 (it was faster than SD Upscale, higher quality, and broke down less)
#WaifuDiffusion
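That upscaling recipe as a diffusers sketch: resize 2x, then a light img2img pass. The model ID is an assumption; the CFG and denoise values are the ones from the tweet.

```python
# 2x resize + low-denoise img2img, as an alternative to tiled SD Upscale.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "hakurei/waifu-diffusion", torch_dtype=torch.float16
).to("cuda")

lowres = Image.open("gen.png").convert("RGB")
big = lowres.resize((lowres.width * 2, lowres.height * 2), Image.LANCZOS)

upscaled = pipe(
    prompt="<same prompt as the original generation>",
    image=big,
    strength=0.35,       # denoise 0.3-0.4: sharpen details, keep structure
    guidance_scale=9.5,
).images[0]
```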
Not the cleanest thing ever, but I learned a lot about inpainting and denoise putting this together. I'm looking forward to working with new styles now that I'm more familiar with SD.
#stablediffusion #waifudiffusion #AIArtwork
Struck By You
09.20.2022
.
Something simple for today, just playing around with subsurface scattering in #cinema4d using #redshiftrender as always. A little bit of cleanup in photoshop and I used an OptiX Denoiser as well.
.
#c4d #3dart
1: craiyon
2: 512x512, denoise 0.8
3: 1024x1024, denoise 0.8
When Stable Diffusion just won't give you a good composition no matter what, falling back on craiyon seems like a good move.
#craiyon #WaifuDiffusion
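The two-stage redraw of a craiyon composition, sketched in code; it assumes `pipe` is an img2img pipeline like the ones sketched earlier, and the prompt is a placeholder.

```python
# High denoise (0.8) keeps only craiyon's composition and redraws the rest,
# first at 512, then again at 1024.
from PIL import Image

img = Image.open("craiyon.png").convert("RGB")
for size in (512, 1024):
    img = pipe(
        prompt="<your prompt>",
        image=img.resize((size, size)),
        strength=0.8,
    ).images[0]
img.save("final.png")
```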
Everything generated with AI has a "melted" sort of feeling to it. And if you turn up the denoise slider, it just draws a completely different character (in this case it tries to turn my character into Cirno; also, for some reason it doesn't know what a sailor uniform is).
Summer Shade
09.15.2022
.
Just having some fun in #cinema4d using #redshiftrender engine and playing around with some #colortheory. I used an OptiX Denoiser to help clean up some refraction samples. Materials and Gobo used in this scene are from @GSG3D
@_hxzzx @LukeStation_ @blu3s_doodles @OnslaughtRush @funkyfizzz Gone are the days you never used a denoiser. How times have changed dear boy. The growth is maddd 😭
Some test renders using the Intel OIDN denoiser in scenes with heavy DOF. https://t.co/H1WUM9MMyh #arnoldrenderer #Intel