meta-llama/Meta-Llama-3-8B

Meta-Llama-3-8B is an 8 billion parameter, auto-regressive language model developed by Meta, utilizing an optimized transformer architecture with Grouped-Query Attention (GQA) for improved inference scalability. Trained on over 15 trillion tokens of publicly available data with an 8k context length, this model is designed for commercial and research use in English. It excels in general language understanding, knowledge reasoning, and reading comprehension, making it suitable for a wide range of natural language generation tasks.

Status: Warm
Visibility: Public
Parameters: 8B
Quantization: FP8
Context length: 8192 tokens
License: llama3
Source: Hugging Face
Access: Gated
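
Below is a minimal sketch of loading and prompting the model locally with the Hugging Face transformers library, using the model id from this card. It assumes gated access to the repository has already been granted (license accepted and an HF token configured, e.g. via `huggingface-cli login`); the bfloat16 dtype and the example prompt are illustrative choices, not part of this listing.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"

# Requires gated-repo access on Hugging Face; an HF token must be configured.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed local dtype; the hosted endpoint serves FP8
    device_map="auto",
)

# Base (non-instruct) model: plain text completion within the 8192-token context.
prompt = "The three primary colors are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is the base pretrained model rather than the instruct variant, it is best used for raw text completion or as a starting point for fine-tuning rather than chat-style prompting.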