Alpaca is an instruction-finetuned LLM based on LLaMA.
The first of many instruction-finetuned versions of LLaMA, Alpaca is an instruction-following model introduced by Stanford researchers. Impressively, for under $600 in total training cost, the researchers demonstrated that in qualitative evaluations Alpaca performed similarly to OpenAI's text-davinci-003, a significantly larger model.
Initial release: 2023-03-13
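Alpaca's instruction-tuning data uses a fixed prompt template (as published in the Stanford Alpaca repository). A minimal sketch of building that prompt in Python; the helper name `build_alpaca_prompt` is illustrative, not part of any released API:

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional context input) with the
    Alpaca prompt template; the wording below follows the template
    released with the Stanford Alpaca training data."""
    if input_text:
        # Variant used when an example carries additional input/context.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Variant used when an example is instruction-only.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(build_alpaca_prompt("Summarize the following text.", "LLaMA is a family of LLMs."))
```

At inference time the model continues the text after `### Response:`, so generation is typically stopped at the next `###` marker.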
StableVicuna is an RLHF finetune of Vicuna using datasets such as the OpenAssistant Conversations Dataset and the GPT4All Prompt Generations dataset.
Initial release: 2023-04-28