Qwen/Qwen3-Coder-30B-A3B-Instruct
Qwen/Qwen3-Coder-30B-A3B-Instruct is a 30.5 billion parameter causal language model developed by the Qwen team, with 3.3 billion parameters activated per token via a Mixture-of-Experts (MoE) architecture with 128 experts. The model is specifically optimized for agentic coding, agentic browser-use, and foundational coding tasks, and natively supports a 262,144-token context length. It offers strong tool-calling capabilities and is designed for repository-scale code understanding.
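As a sketch of how this instruct model can be used for a coding task, the snippet below loads it through the Hugging Face transformers library and generates a completion from a chat-formatted prompt. It assumes a recent transformers release with Qwen3-MoE support and enough GPU memory for the 30.5B checkpoint; the prompt and `max_new_tokens` value are illustrative, not prescribed by the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-Coder-30B-A3B-Instruct"

# Load tokenizer and model; device_map="auto" shards across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Build a chat-formatted prompt (illustrative example task).
messages = [{"role": "user", "content": "Write a quicksort implementation in Python."}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens.
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```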