lmstudio-community/Qwen3-1.7B-MLX-bf16

Qwen3-1.7B-MLX-bf16 is a 1.7 billion parameter language model, converted to the MLX format from the original Qwen3-1.7B developed by Qwen. The conversion targets efficient inference on Apple Silicon via the MLX framework, and the model is suitable for general-purpose text generation and understanding tasks on compatible hardware.
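A minimal usage sketch with the `mlx-lm` Python package (an assumption: it is installed via `pip install mlx-lm` and is running on Apple Silicon; the model identifier is taken from this card):

```python
# Sketch: load the BF16 MLX weights and generate text with mlx-lm.
# Assumes Apple Silicon and `pip install mlx-lm`.
from mlx_lm import load, generate

model, tokenizer = load("lmstudio-community/Qwen3-1.7B-MLX-bf16")

# Qwen3 is a chat-tuned model, so apply its chat template before generating.
messages = [{"role": "user", "content": "Explain the MLX framework in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```

On first use, `load` downloads the weights from the Hugging Face Hub; subsequent runs read from the local cache.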

Visibility: Public
Parameters: 2B (1.7B actual)
Precision: BF16
Context length: 32768 tokens
License: apache-2.0
Hosted on: Hugging Face
