jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0

jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0 is a 7.6-billion-parameter model built on the Qwen2.5-7B architecture. It is a composite model created by programmatically selecting and merging layers from various Qwen2.5-7B fine-tunes based on their Normalized Effective Rank (NER) scores. This layer-by-layer merging approach aims to combine, from multiple source models, the layers that make the most efficient use of their dimensions, yielding a general-purpose text-generation model with enhanced performance characteristics.
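The exact merge recipe is not published here, but the NER-based selection can be sketched as follows. This is a minimal illustration, assuming NER is computed as the entropy of a layer's normalized singular-value spectrum divided by its maximum attainable entropy, and that the merge keeps, for each layer position, the candidate with the highest score; the function names and the `candidates` layout are hypothetical.

```python
import numpy as np

def normalized_effective_rank(weight: np.ndarray) -> float:
    """Score in [0, 1]: how evenly a layer spreads energy across dimensions."""
    s = np.linalg.svd(weight, compute_uv=False)
    p = s / s.sum()               # normalize singular values to a distribution
    p = p[p > 0]                  # drop exact zeros before taking logs
    entropy = -np.sum(p * np.log(p))
    # Divide by log(min(m, n)), the entropy of a perfectly flat spectrum,
    # so a full-rank, evenly-spread layer scores 1 and a rank-1 layer ~0.
    return float(entropy / np.log(min(weight.shape)))

def pick_best_layers(candidates: dict) -> dict:
    """candidates: {layer_name: {model_name: weight_matrix}} (hypothetical layout).
    For each layer position, keep the source model whose weights score highest."""
    return {
        layer: max(models, key=lambda m: normalized_effective_rank(models[m]))
        for layer, models in candidates.items()
    }
```

For example, an identity matrix (flat spectrum) scores 1.0, while a rank-1 outer product scores near 0, so `pick_best_layers` would select the former.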

Status: Warm
Visibility: Public
Parameters: 7.6B
Quantization: FP8
Context length: 131,072 tokens
License: apache-2.0
Hosted on: Hugging Face