stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated

The stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated model is an 8-billion-parameter language model built on DeepSeek-R1-Distill-Llama-8B, which DeepSeek produced by fine-tuning Llama-3.1-8B on reasoning traces distilled from DeepSeek-R1. Its 32,768-token context length suits tasks requiring extensive contextual understanding. The "Abliterated" suffix indicates abliteration, a community technique that suppresses a model's built-in refusal behavior by ablating the corresponding direction in its activations; it is a behavioral modification, not an efficiency optimization.
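As a minimal sketch of typical usage, the snippet below loads the repository with the Hugging Face transformers library and runs one chat turn. The dtype, device placement, and prompt are illustrative assumptions rather than details from the model card; R1-distilled models typically emit their reasoning inside `<think>` tags before the final answer.

```python
# Minimal usage sketch, assuming the repo exposes standard
# transformers config/tokenizer files. dtype and device settings
# are illustrative assumptions, not taken from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: ~16 GB of bf16 weights on a single GPU
    device_map="auto",
)

# R1-distilled models usually emit <think>...</think> reasoning first.
messages = [{"role": "user", "content": "Explain FP8 quantization in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```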

Status: Warm
Visibility: Public
Parameters: 8B
Quantization: FP8
Context length: 32,768 tokens
Source: Hugging Face
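The FP8 entry above suggests the hosted endpoint serves the model with FP8 weight quantization. As a hedged sketch, vLLM can approximate a similar setup locally with its standard quantization and max_model_len options; applying them to this particular repository is an assumption, and on-the-fly FP8 requires a GPU with native FP8 support.

```python
# Sketch: serving with FP8 weight quantization and the full
# 32,768-token context via vLLM. The flags are standard vLLM
# options; using them with this repo is an assumption.
from vllm import LLM, SamplingParams

llm = LLM(
    model="stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated",
    quantization="fp8",   # on-the-fly FP8 weight quantization
    max_model_len=32768,  # matches the advertised context length
)

params = SamplingParams(temperature=0.6, max_tokens=1024)
outputs = llm.generate(["Summarize the transformer architecture."], params)
print(outputs[0].outputs[0].text)
```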