OPI-PG/Qra-1b

OPI-PG/Qra-1b is a 1.1-billion-parameter causal language model developed by the National Information Processing Institute (OPI) and Gdańsk University of Technology (PG). Initialized from TinyLlama-1.1B weights and further pretrained on a corpus of 90 billion tokens of Polish text, it is specialized for Polish language understanding and generation. It is a foundation (base) model with a context length of 4096 tokens, intended as a starting point for tasks requiring strong Polish language capabilities.
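
As a standard causal language model hosted on Hugging Face, Qra-1b can be loaded with the transformers library. The sketch below is illustrative, not taken from this card: the model id OPI-PG/Qra-1b comes from the title above, while the Polish prompt and the generation settings are assumptions. It assumes torch and transformers are installed (and accelerate, for device_map).

```python
# Minimal sketch: loading Qra-1b via the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OPI-PG/Qra-1b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed below
    device_map="auto",
)

# Qra-1b is a base model, not instruction-tuned, so it continues text
# rather than following commands; prompt it with the start of a passage.
# (Prompt and sampling parameters are illustrative assumptions.)
prompt = "Najważniejszym polskim poetą epoki romantyzmu był"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```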

Status: Warm
Visibility: Public
Parameters: 1.1B
Precision: BF16
Context length: 4096
License: apache-2.0
Source: Hugging Face
