microsoft/Phi-4-mini-instruct
microsoft/Phi-4-mini-instruct is a 3.8-billion-parameter instruction-tuned decoder-only Transformer model from Microsoft with a 128K-token context length. It was trained on synthetic data and filtered public web data, with an emphasis on high-quality, reasoning-dense content. The model is optimized for memory- and compute-constrained environments and latency-bound scenarios, and it excels at reasoning-heavy tasks such as math and logic.
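As a sketch of how the model might be run locally, the following uses the Hugging Face `transformers` library with its standard chat-template API; the system prompt, generation settings, and example question are illustrative assumptions, not part of the model card.

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-format message list; the system prompt is illustrative."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    # Heavy imports kept inside the guard so the helper above stays importable
    # without the model dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/Phi-4-mini-instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" and torch_dtype="auto" let transformers pick placement
    # and precision, which suits the memory-constrained targets described above.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    inputs = tokenizer.apply_chat_template(
        build_messages("If 3x + 5 = 20, what is x?"),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The long 128K context means prompts can include substantial reference material, though memory use grows with context length, so shorter prompts are preferable on constrained hardware.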