Alpaca is an instruction-finetuned LLM based on LLaMA.
The first of many instruction-finetuned versions of LLaMA, Alpaca is an instruction-following model introduced by Stanford researchers. Impressively, for under $600 in compute and data-generation costs, the researchers demonstrated that in qualitative evaluations Alpaca performed similarly to OpenAI's text-davinci-003, a significantly larger model.
Initial release: 2023-03-13
The StableLM series of language models is Stability AI's entry into the LLM space. Trained on an experimental dataset built on The Pile, the initial release included 3B- and 7B-parameter models, with larger models on the way.
Initial release: 2023-04-19
| Products & Features | Alpaca | StableLM |
|---|---|---|
| License | Noncommercial | CC BY-SA 4.0 |
| Model Sizes | 7B | 3B, 7B |