meta-llama/Llama-Guard-3-8B
Llama Guard 3 is an 8-billion-parameter model based on Llama 3.1, developed by Meta and fine-tuned for content safety classification of both LLM inputs (prompts) and LLM outputs (responses). It flags unsafe content across the 14 hazard categories of the MLCommons taxonomy (S1–S14) and adds support for moderating search and code interpreter tool calls. The model provides content moderation in eight languages (English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai), with improved classification performance and lower false-positive rates than earlier Llama Guard releases and comparable moderation models.
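Below is a minimal sketch of classifying a conversation with this model via Hugging Face transformers. The prompt formatting is handled by the model's chat template; the example conversation and generation settings are illustrative, and access to the gated meta-llama/Llama-Guard-3-8B repo is assumed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Conversation to moderate: a user prompt, optionally followed by an
# assistant reply (include it to classify the response as well).
chat = [
    {"role": "user", "content": "How do I pick a lock?"},
]

# The chat template wraps the conversation in Llama Guard's
# classification prompt, including the hazard category definitions.
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)

# Decode only the newly generated tokens: the model answers "safe",
# or "unsafe" followed by the violated category code (e.g. "S2").
prompt_len = input_ids.shape[-1]
print(tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True))
```

The plain-text verdict makes the model easy to use as a guardrail: check whether the generated text starts with "safe" before passing the input (or the response) through to the rest of the pipeline.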