vanta-research/wraith-8b

Wraith-8B by VANTA Research is an 8.03-billion-parameter fine-tune of Meta's Llama 3.1 8B Instruct model with a 131,072-token context length. It is optimized for mathematical reasoning, achieving 70% accuracy on GSM8K, a 37% relative improvement over its base model. It is the first model in the VANTA Research Entity Series, designed with a distinctive 'cosmic intelligence' personality to enhance STEM analysis and logical deduction.
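A minimal usage sketch with the Hugging Face `transformers` library, assuming the model is hosted on the Hub under the id `vanta-research/wraith-8b` and uses the standard Llama 3.1 chat template (neither is confirmed beyond the listing above; the system prompt and the sample question are illustrative):

```python
from typing import Dict, List

MODEL_ID = "vanta-research/wraith-8b"  # model id as shown in this listing


def build_messages(question: str) -> List[Dict[str, str]]:
    """Build a chat-format message list for a math reasoning prompt.

    The system prompt here is a placeholder, not an official recommended
    prompt for Wraith-8B.
    """
    return [
        {"role": "system", "content": "You are a careful mathematical reasoner."},
        {"role": "user", "content": question},
    ]


def generate_answer(question: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate an answer to a single question."""
    # Heavy imports are deferred so build_messages() stays usable
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Apply the model's chat template and generate a completion.
    inputs = tokenizer.apply_chat_template(
        build_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Note that calling `generate_answer()` downloads roughly 8 GB of weights on first use; run it only where the model is already cached or a GPU with sufficient memory is available.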

Parameters: 8B
Tensor type: FP8
License: llama3.1