athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit
athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit is an 8-billion-parameter Llama-3.1-Instruct model further pretrained for one epoch on a filtered dataset of Reddit dirty stories. The model aims to address the repetition and token-overconfidence issues observed in base Llama-3.1 models at the 8B scale. It targets niche use cases that need Llama-3.1's logical capabilities while mitigating these common generative pitfalls.
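
A minimal inference sketch using the Hugging Face transformers library is shown below. It assumes the repository id above is available on the Hub and that suitable hardware is present; the sampling settings (temperature, repetition penalty) are illustrative examples, not recommendations from the model author.

```python
# Minimal inference sketch with Hugging Face transformers.
# Assumes the repo id below is reachable on the Hub; generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama-3.1-Instruct models use a chat template; build the prompt with it.
messages = [{"role": "user", "content": "Write a short scene set in a rainy city."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.1,  # example value to counter repetition, not an official recommendation
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```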