anthracite-org/magnum-v1-72b

Anthracite's Magnum-v1-72b is a 72.7-billion-parameter language model fine-tuned from Qwen2-72B-Instruct, designed to replicate the prose quality of Claude 3 models such as Sonnet and Opus. It was trained on 55 million tokens of high-quality roleplay (RP) data for 1.5 epochs. The model excels at generating nuanced, high-quality prose, making it well suited to creative writing and conversational applications that demand sophisticated language generation.

Status: Warm
Visibility: Public
Parameters: 72.7B
Quantization: FP8
Context length: 131,072 tokens
License: tongyi-qianwen
Model page: Hugging Face
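
A minimal sketch of querying the hosted model through an OpenAI-compatible chat-completions endpoint. The base URL, API key placeholder, and sampling parameters are assumptions for illustration; the model slug follows the listing above, but your provider's exact identifier and endpoint may differ.

```python
# Hypothetical usage sketch -- endpoint and credentials are placeholders,
# not the provider's documented values.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",                      # placeholder credential
)

response = client.chat.completions.create(
    model="anthracite-org/magnum-v1-72b",        # model ID as listed above
    messages=[
        {"role": "system", "content": "You are a vivid, character-driven storyteller."},
        {"role": "user", "content": "Write the opening paragraph of a noir story set in a rain-soaked port city."},
    ],
    max_tokens=512,
    temperature=0.9,  # a higher temperature is a common choice for creative prose
)

print(response.choices[0].message.content)
```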