aisingapore/Llama-SEA-LION-v3-8B

Llama-SEA-LION-v3-8B is an 8-billion-parameter multilingual decoder-only large language model developed by AI Singapore, based on the Llama 3.1 architecture. It has undergone continued pre-training on approximately 200 billion tokens covering 11 languages of the Southeast Asian region: Burmese, Chinese, English, Filipino, Indonesian, Khmer, Lao, Malay, Tamil, Thai, and Vietnamese. The model is designed specifically for the Southeast Asian (SEA) region, with strong general language capabilities and constraint-following behavior across these languages, and supports a context length of 32,768 tokens. Its primary strength is its specialized multilingual support and its performance on SEA-specific linguistic tasks.

Status: Warm
Visibility: Public
Parameters: 8B
Serving precision: FP8
Context length: 32768
License: llama3.1
Weights: Hugging Face
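
For a quick start outside the hosted API, the sketch below loads the checkpoint with Hugging Face transformers and generates a short completion. It is a minimal illustration, not Featherless-specific usage: it assumes the transformers, torch, and accelerate packages, enough GPU memory for an 8B model, and any required license acceptance on Hugging Face. Since this is a base (non-instruct) model, it is prompted with plain text to continue rather than a chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/Llama-SEA-LION-v3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # native precision; the FP8 tag above refers to the hosted deployment
    device_map="auto",           # requires the accelerate package
)

# Base model: give it plain text to continue.
prompt = "Ibu negara Malaysia ialah"  # Malay: "The capital of Malaysia is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,  # illustrative values; see the sampler settings below
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```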

Popular Sampler Settings

The most commonly used values among Featherless users; a simplified sampler sketch follows the list.

temperature
This setting influences the sampling randomness. Lower values make the model more deterministic; higher values introduce randomness. Zero is greedy sampling.
top_p
This setting controls the cumulative probability of considered top tokens. Must be in (0, 1]. Set to 1 to consider all tokens.
top_k
This setting limits the number of top tokens to consider. Set to -1 to consider all tokens.
frequency_penalty
This setting penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.
presence_penalty
This setting penalizes new tokens based on their presence in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.
repetition_penalty
This setting penalizes new tokens based on their appearance in the prompt and the generated text. Values > 1 encourage new tokens; values < 1 encourage repetition.
min_p
This setting sets the minimum probability for a token to be considered, relative to the probability of the most likely token. Must be in [0, 1]. Set to 0 to disable.
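
To make the semantics of these settings concrete, below is a simplified NumPy sketch of how a sampler might apply temperature, top_k, top_p, and min_p to one step's logits before drawing a token. It is an illustrative reference, not Featherless's actual implementation: production samplers differ in filter order and numerics, and the frequency, presence, and repetition penalties (omitted here) would adjust the logits based on previously generated tokens before this stage.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sample_token(logits, temperature=1.0, top_k=-1, top_p=1.0, min_p=0.0, rng=None):
    """Pick one token id from raw logits using the filters described above."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64)

    # temperature: 0 means greedy sampling (argmax); higher values flatten the distribution.
    if temperature == 0:
        return int(np.argmax(logits))
    probs = softmax(logits / temperature)

    # top_k: keep only the k most likely tokens (-1 keeps all).
    if top_k > 0:
        kth_largest = np.sort(probs)[-top_k]
        probs = np.where(probs >= kth_largest, probs, 0.0)

    # min_p: drop tokens below min_p times the top token's probability (0 disables).
    if min_p > 0:
        probs = np.where(probs >= min_p * probs.max(), probs, 0.0)

    # top_p: keep the smallest set of top tokens whose cumulative probability >= top_p.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = np.searchsorted(cumulative, top_p) + 1
    probs[order[keep:]] = 0.0

    probs /= probs.sum()  # renormalize over the surviving tokens
    return int(rng.choice(len(probs), p=probs))
```

For example, sample_token(logits, temperature=0.7, top_k=50, top_p=0.9, min_p=0.05) draws from the head of the distribution while discarding the long tail; the argument values here are purely illustrative.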