
Gemma vs. MPT

LLM Comparison


Gemma

Overview

Gemma is a family of lightweight open models from Google, built with the same processes used for the larger Gemini models.


Gemma was first released in February 2024 as a family of open models from Google, in 2B- and 7B-parameter sizes, intended for developers and compute-constrained devices. Despite their small size, Gemma models compare favorably to other models in the same weight class, such as Mistral 7B.


Initial release: 2024-02-21
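As a quick illustration (not part of the original comparison), here is a minimal sketch of loading a Gemma checkpoint with the Hugging Face Transformers library. It assumes transformers and a PyTorch backend are installed, and that you have accepted the terms for the gated google/gemma-7b repository on Hugging Face.

```python
# Minimal sketch: load Gemma 7B and generate a short continuation.
# Assumes access to the gated "google/gemma-7b" repo has been granted;
# "google/gemma-2b" is the smaller variant.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Sanity-check generation on a short prompt.
inputs = tokenizer("Open language models are useful because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```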

MPT

Overview

MPT-7B and MPT-30B are models in MosaicML's Foundation Series, trained on 1T tokens. The developers state that MPT-7B matches the performance of LLaMA-7B while also being open source, and that MPT-30B outperforms the original GPT-3. In addition to the base models, the developers offer MPT-Instruct, MPT-Chat, and MPT-7B-StoryWriter-65k+, the last of which was trained with a context length of 65K tokens.



Initial release: 2023-05-05
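For comparison, the MPT weights are also published on Hugging Face. A minimal sketch follows, again assuming transformers and a PyTorch backend are installed; because MPT uses a custom model class shipped in the Hub repo, trust_remote_code=True is required.

```python
# Minimal sketch: load MPT-7B and generate a short continuation.
# "mosaicml/mpt-30b", "mosaicml/mpt-7b-instruct", "mosaicml/mpt-7b-chat",
# and "mosaicml/mpt-7b-storywriter" are the other published variants.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mosaicml/mpt-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# MPT defines its own architecture code in the repo, so it must be trusted.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("MosaicML's MPT models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```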


Products & Features

                    Gemma      MPT
Instruct Models     Yes        Yes
Coding Capability
Customization
Finetuning          Yes        Yes
Open Source         Yes        Yes
License             Custom     Apache 2.0
Model Sizes         2B, 7B     7B, 30B