I enjoy painting, and I can see myself using this a bunch to get nice references. Doesn't matter if an eye is a little weird, the important thing is lighting reference and getting inspiration for colours/composition.
Playing with @aicrumb's doohickey notebook to test it out. The prompt tags make it easy to replicate a specific look I see a lot in recent midjourney etc. images:
Often there's a lot of diversity for a given prompt! These are the first four from `A lizard portrait, oil on canvas in <arcane-style-jv>` (inspired by an actual lizard outside my window)
Story: "A Blind Man Catches a Bird" by Alexander McCall Smith (slightly compressed to fit into 9 tweets)
Art: AI-generated images made by me (@johnowhitakerA) using the 'Majesty Diffusion' notebook v1.6 from @multimodalart and dango233.
The young man saw a small bird in his own trap. As he turned to the blind man's trap, he grew jealous: the bird inside was marvelously colored, and the feathers would make a good gift for his new wife. He opened the traps and swapped the birds before passing them to the blind man.
"Orbs within orbs, concentric circles and ripples of fire (spheres and circles, roundness)"
"A watercolour painting of | planets in space | a desert plant | a sunset landscape | an underwater submarine", imstack+CLOOB. This generation approach really suits watercolour style. Try it yourself: https://t.co/NZry7GElZb
5 minutes of watercolor doodling, 5 minutes editing on my phone, 10 minutes letting a diffusion model chug away, a little cleanup. This sort of hybrid creation is a very satisfying way to get something out quickly, and suits my current attention span :)