mlx-community/DeepSeek-R1-Distill-Qwen-32B-abliterated

The mlx-community/DeepSeek-R1-Distill-Qwen-32B-abliterated model is a 32-billion-parameter language model converted to MLX format from huihui-ai's DeepSeek-R1-Distill-Qwen-32B-abliterated. It is built on the Qwen architecture, whose substantial parameter count suits complex language understanding and generation tasks. The model is primarily intended for deployment and inference within the MLX ecosystem, offering a performant option for MLX-based applications on Apple silicon.
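A minimal inference sketch using the `mlx-lm` package (installable via `pip install mlx-lm`), which is the usual way to load and run mlx-community models; it assumes an Apple silicon machine and will download the model weights on first use, so the prompt below is only illustrative:

```python
# Sketch: load the MLX-converted model and generate a completion with mlx-lm.
# Requires Apple silicon and roughly enough unified memory for a 32B model.
from mlx_lm import load, generate

# Downloads (or reuses a cached copy of) the weights from the Hugging Face Hub.
model, tokenizer = load("mlx-community/DeepSeek-R1-Distill-Qwen-32B-abliterated")

prompt = "Explain the difference between supervised and unsupervised learning."

# Apply the model's chat template so the distilled R1 model sees the
# conversation format it was trained on.
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```

The same package also ships a command-line entry point (`mlx_lm.generate --model mlx-community/DeepSeek-R1-Distill-Qwen-32B-abliterated --prompt "..."`) for quick tests without writing any code.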

Model details:
- Visibility: Public
- Parameters: 32B
- Quantization: FP8
- Context length: 32768 tokens
- Source: Hugging Face