kehanlu/llama-3.2-8B-Instruct
kehanlu/llama-3.2-8B-Instruct is an 8-billion-parameter, instruction-tuned causal language model derived from Meta's Llama-3.2-11B-Vision-Instruct. The cross-attention layers that power the original model's vision capabilities have been removed, leaving a text-only variant. The result is a solid foundation for general-purpose text generation and instruction following that retains the core linguistic strengths of the Llama 3.2 series, with a 32,768-token context length.
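
Below is a minimal usage sketch with the Hugging Face `transformers` library. It assumes the model ships a standard Llama-style chat template and loads with `AutoModelForCausalLM`; the prompt and generation settings are illustrative, not values published with this model.

```python
# Minimal generation sketch using Hugging Face transformers.
# Assumes a standard Llama-style chat template; sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kehanlu/llama-3.2-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps an 8B model within a single-GPU memory budget
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Llama 3.2 release in two sentences."},
]

# Build the prompt with the model's chat template, then generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the vision cross-attention layers have been stripped out, the checkpoint behaves as an ordinary text-only causal LM, so no image processor or multimodal inputs are involved.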