We are releasing the weights of our recreated Stanford Alpaca 7B: LLaMA finetuned on a synthetic instruction dataset.
https://t.co/nHT3XjMovb
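For anyone who wants to try it locally, here is a minimal sketch of loading a LLaMA-based finetune with Hugging Face transformers. The local path `./alpaca-7b-recreated` is a placeholder, not the actual release layout; follow the linked repo's instructions to obtain and assemble the weights.

```python
# Minimal sketch: run a LLaMA-based instruction finetune with Hugging Face transformers.
# "./alpaca-7b-recreated" is a hypothetical local path, not the real release name.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("./alpaca-7b-recreated")
model = LlamaForCausalLM.from_pretrained("./alpaca-7b-recreated")

# Alpaca was trained on instruction/response pairs, so prompt in that format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a language model is.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```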
And it's surprisingly good:
(Keep in mind, the following results are from just the smallest 7B model; GPT-3 is 175B.)