apexion-ai/Nous-1-8B

Nous-1-8B by Apexion AI is an 8-billion-parameter instruction-tuned causal language model based on Qwen3-8B, with a 32k-token context length. It is optimized for advanced reasoning, instruction following, and high-performance deployment across multi-domain applications, and it excels at complex problem-solving, code generation, and general knowledge assistance.

Parameters: 8B
Visibility: Public
Quantization: FP8
Context length: 32,768 tokens
License: anvdl-1.0
Source: Hugging Face
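
Below is a minimal loading and generation sketch using Hugging Face transformers. It assumes the checkpoint is loadable with the standard Auto classes and ships a Qwen3-style tokenizer and chat template; the prompt and generation settings are illustrative only, not an official recommendation.

```python
# Minimal sketch: load apexion-ai/Nous-1-8B with transformers and run one chat turn.
# Assumes the repo exposes a standard tokenizer with a chat template (as Qwen3-based models do).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apexion-ai/Nous-1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread layers across available devices
)

messages = [
    {"role": "user", "content": "Explain the difference between a list and a tuple in Python."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```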