Alpaca is an instruction-finetuned LLM based on LLaMA.
The first of many instruction-finetuned versions of LLaMA, Alpaca is an instruction-following model introduced by Stanford researchers. Impressively, for under $600 in total cost, the researchers demonstrated that on qualitative benchmarks Alpaca performed similarly to OpenAI's text-davinci-003, a significantly larger model.
Initial release: 2023-03-13
RedPajama-INCITE is the first family of models trained on the RedPajama base dataset. The goal of the RedPajama-INCITE models is to replicate the LLaMA recipe while making the models fully open source under the Apache license. As of the initial release, the 3B-parameter model was best in class, with the 7B-parameter model still in training. Update as of June 6, 2023: the 7B-parameter model was released, outperforming other open models of the same size.
Initial release: 2023-05-05
|Products & Features|Alpaca|RedPajama-INCITE|
|---|---|---|
|Model Sizes|7B|3B, 7B|