abacusai/Liberated-Qwen1.5-72B

Liberated-Qwen1.5-72B is a 72.3 billion parameter language model developed by AbacusAI and Eric Hartford, fine-tuned from Qwen/Qwen1.5-72B with a 32768 token context length. It is specifically designed to enhance compliance with system prompts and handle long, multi-turn conversations, addressing a common limitation in open-source models. This model is trained on open-source datasets, including the novel SystemChat dataset, and is released without guardrails or censorship, requiring users to implement their own alignment layers.
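Since the model's main selling point is system-prompt compliance, prompts must place the system message correctly. As a sketch, assuming the model inherits the ChatML-style chat template used by the Qwen1.5 family (the `to_chatml` helper below is illustrative, not part of any official API), a multi-turn conversation would be serialized like this:

```python
def to_chatml(messages):
    """Format a conversation in ChatML, the chat template used by
    Qwen1.5-family models: each turn is wrapped in
    <|im_start|>{role} ... <|im_end|>, with the system prompt first,
    and the string ends with an open assistant turn for generation."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are Liberated. Answer concisely."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from Hugging Face `transformers` produces this formatting automatically from the template shipped with the tokenizer.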

Visibility: Public
Parameters: 72.3B
Quantization: FP8
Context length: 32768 tokens
License: tongyi-qianwen
Weights: Hugging Face
