Alpaca is an instruction-finetuned LLM based on LLaMA.
The first of many instruction-finetuned versions of LLaMA, Alpaca is an instruction-following model introduced by Stanford researchers. Impressively, with only about $600 of compute spend, the researchers demonstrated that Alpaca performed similarly to OpenAI's text-davinci-003, a significantly larger model, on qualitative benchmarks.
Initial release: 2023-03-13
OpenLLaMA is an effort from OpenLM Research to offer a non-gated version of LLaMA that can be used for both research and commercial applications. As of June 2023, the models are still training, with 3B, 7B, and 13B parameter checkpoints available.
Initial release: 2023-04-28
|Products & Features|
|---|
|3B, 7B, 13B|