Sao10K/L3-8B-Niitama-v1

Sao10K/L3-8B-Niitama-v1 is an experimental 8-billion-parameter language model by Sao10K that explores novel data shuffling and formatting methods. It belongs to the L3 series, which has shown performance characteristics distinct from its L3.1 counterparts, and is intended for research into how the presentation of training data affects model outcomes.

Status: Warm
Visibility: Public
Parameters: 8B
Quantization: FP8
Context length: 8192
License: cc-by-nc-4.0
Source: Hugging Face
