Exact same prompt. *Almost* the same training dataset.

For the version on the right, I added a few more pictures to bring in a bit of variability. The results are much better (details, colors)!

Prediction: "dataset engineering" will be more critical than "prompt engineering".

We've just published a whole load of Land Cover Map 1km summaries for both GB and Northern Ireland.

The datasets are now available for 2017, 2018, 2019, 2020 and 2021

https://t.co/XQEAnVY6MG

So DeviantArt is making their own AI graphics tool, DreamUp, that uses art from DeviantArt. You can ask in the image settings for your art not to be included in the dataset. (Though DA is not responsible if a 3rd party still uses it.)

Yep, super impressive, but here's another example (3 out of 4!).
I guess bias always exists in datasets, but this one is quite obvious.

Tried out the model lol
Miku-san goes NARUTO too 🥷

Used  for additional training on a "Captioned Naruto dataset" (narutopedia images converted to text with BLIP)
Trained 30k steps on two A6000s on λGPU
$20 for 12 hours
You can also build the data with the BLIP-captioning Colab I tweeted earlier
Has the bar for fine-tuning come down?
https://t.co/VXSkIBqgoU

I used Naruto anime character pictures from https://t.co/tajQIbWLPf and captioned them using BLIP. Some captions are quite inaccurate but most of them are good enough.

The image + caption dataset is available here: https://t.co/aR6HHoL6Q0
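
For a concrete sense of what that BLIP captioning step can look like, here is a minimal sketch using Hugging Face transformers; the checkpoint name and image folder are assumptions for illustration, not the exact Colab or setup used in this thread.

```python
# Minimal sketch: caption a folder of images with the public BLIP checkpoint.
# "naruto_images" and the model name are illustrative assumptions.
from pathlib import Path

from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

captions = {}
for path in Path("naruto_images").glob("*.png"):  # hypothetical image folder
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    captions[path.name] = processor.decode(out[0], skip_special_tokens=True)

print(captions)
```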

Some more Hypernetwork + Aesthetic Gradient tests before bed.
1. Raw SD
2. Hypernetwork
3. Hypernetwork + Aesthetic Gradient (same dataset)
I like the hypernetwork both with and without AG. A matter of taste, I guess? GN everyone ✨

Making use of the tutorial by , I grabbed a dataset of photos of  and set about training SD.

The smile and the expression come out pretty well...

The hands, those hands!!!

Below I'm posting some more generated images...

A snapshot of my talk .

Q: What kind of image datasets do you use?
A: I work with my own—digitally native, built from scratch with various AI models...I sculpt with a process that’s akin to chiseling a granite block into color, texture, and lighting palettes.

Waifu Diffusion vs Novel Diffusion.

Both were trained on top of SD with the Danbooru dataset, using different training optimizations. Surely Waifu still has plenty of room for improvement, but there is always more hope and possibility with an open model.

Using ML to solve PDEs is a hot topic. With PDEBench we propose a comprehensive and extensible benchmark that includes multiple datasets, (pre-trained) baseline models such as the FNO and U-Net, as well as JAX-based numerical solvers for realistic PDEs. https://t.co/NO5eyCAUAu
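
To give a flavour of what a JAX-based numerical PDE solver looks like, here is a tiny finite-difference sketch for the 1D diffusion equation u_t = ν·u_xx with periodic boundaries. It is a generic illustration of this class of solver, not PDEBench's actual API.

```python
# Illustrative only: explicit finite-difference time stepping for 1D diffusion
# in JAX. Grid size, diffusivity, and step counts are arbitrary assumptions.
import jax
import jax.numpy as jnp

nu, dx, dt = 0.1, 0.01, 2e-5  # diffusivity, grid spacing, time step (dt < dx**2 / (2 * nu))

@jax.jit
def step(u):
    # Second-order central difference for u_xx with periodic boundaries.
    u_xx = (jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1)) / dx**2
    return u + dt * nu * u_xx

x = jnp.linspace(0.0, 1.0, 100, endpoint=False)
u0 = jnp.sin(2.0 * jnp.pi * x)  # initial condition
u_final = jax.lax.fori_loop(0, 5000, lambda i, u: step(u), u0)
```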

"Kind Giants", created for the event by from New York, USA who just started minting a few months ago after being a collector for over a year. He was inspired by friend .

Created using Using a neural network built & trained w/his own datasets.

It's incredible how good AI has gotten; the output is gorgeous ✨. So many ideas for quick iteration and references. It's absolutely stunning and very fun!

However, I feel that datasets should be opt-in only and come with a greater degree of transparency.

Building datasets from thousands of images taken from artists without their consent and feeding them into a meat grinder is not learning, it's theft.

I spend about 20 hours on my art
You're entitled to your own expression, obviously, but I don't qualify AI art as art. When people wager that 'abstract isn't art, anyone can do that', I disagree, but open-source AI that uses OTHER people's art as part of its dataset isn't art...

Loving the plugin that finally plays nicely with and Mac - same dataset but noticeable difference imho...

I wonder how setting up this AI worked, because it clearly knew what "motorcycle batman" meant even though those words should mean almost nothing to an entirely Pokémon-related dataset.

I fine-tuned the original Stable Diffusion on a Pokémon dataset, captioned with BLIP. The captions aren't amazing (see this example), but they're OK. You can get my dataset here: https://t.co/enufYgHG41
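
If you want to inspect an image+caption dataset like this before training, something along these lines works with the 🤗 datasets library; the repo id and column names below are placeholders, not the dataset actually linked above.

```python
# Hedged sketch: pull an image+caption dataset from the Hugging Face Hub and
# peek at a few pairs. "someuser/pokemon-blip-captions" is a placeholder repo
# id, and the "image"/"text" column names are assumptions.
from datasets import load_dataset

ds = load_dataset("someuser/pokemon-blip-captions", split="train")

for example in ds.select(range(3)):
    image = example["image"]   # decoded as a PIL.Image
    caption = example["text"]  # the BLIP-generated caption
    print(image.size, caption)
```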

I wonder why there's such a big disconnect between the results from searching the LAION dataset with CLIP and the results from image generators trained on it.
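
For context on the retrieval side of that comparison, here is a minimal sketch of the CLIP image-text scoring that similarity search over LAION is built on; the checkpoint is the standard public one and the image path and prompts are placeholders, not whatever the particular search tool uses.

```python
# Minimal sketch: score how well candidate captions match an image with CLIP.
# A retrieval index ranks stored images by this kind of similarity, which is a
# different operation from generating new pixels from the text.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image
texts = ["a photo of a motorcycle", "a photo of a cat"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher scores mean CLIP considers the text a closer match to the image.
print(outputs.logits_per_image.softmax(dim=-1))
```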
