me when i hook up my brain to the ai and we steal art together https://t.co/D7BYmwTDfK
"anger"
for day 5 of #septembAIr
@septembair
look like anyone you know? 🤔
#septembAIr2022
#stablediffusion #imagesynth
@_yogik You'd think, but the models do not link his name to his style.
1: generic character outputs from SD
2: negligible changes from CLIP-guided models in Midjourney - don't mind the black and white lady - she shows up a lot in other "unrecognizeds"
"echo"
for day 4 of #septembAIr
@septembair
#septembAIr2022
#stablediffusion #imagesynth
"treasure"
for day 3 of #septembAIr
@septembair
#septembAIr2022
#stablediffusion #imagesynth
"ethereal"
for day 2 of #septembAIr @septembair #septembAIr2022
#stablediffusion #imagesynth
@haze_long Back when diffusion models were new, orbs would frequently show up in generations unprompted.
Some examples from my work circa Dec 2021/Jan 2022
I didn't expect any single one of these rendered images to have all the desirable local detail, so I made a composite of the details pulled from different images.
My first composite draft didn't include any of the previous images. In true surea.i fashion, it was gloomy ASF. 6/
I then used this image as an "init", an image from which another image is synthesized, resulting in new images with similar compositions, but different details.
Can't remember how many times I did this, but it was a lot. Probably in the ~50x range 5/
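Roughly what that init-image step looks like in code - a minimal sketch written against the Hugging Face diffusers img2img pipeline, which is not necessarily the tool used at the time; the model id, prompt, paths, strength, and seed count here are placeholders, not the actual settings from this thread.

```python
# Sketch of repeating an "init image" run many times: each output keeps a
# similar composition to the init while the details are re-synthesized.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model id
    torch_dtype=torch.float16,
).to("cuda")

# The hand-made composite serves as the init image.
init = Image.open("composite_draft.png").convert("RGB").resize((512, 512))
prompt = "gloomy surreal landscape"  # placeholder prompt

# Repeat the run many times (the thread mentions roughly 50 passes),
# changing the seed each time to get new details on a similar composition.
for seed in range(50):
    out = pipe(
        prompt=prompt,
        image=init,
        strength=0.5,        # lower = stays closer to the init image
        guidance_scale=7.5,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    out.save(f"variation_{seed:02d}.png")
```

One could also feed each output back in as the next init instead of reusing the composite; the thread doesn't say which was done, so the simpler fixed-init loop is shown.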