mlabonne/NeuralMonarch-7B
NeuralMonarch-7B is a 7-billion-parameter language model by mlabonne, fine-tuned with Direct Preference Optimization (DPO) on preference datasets. It is based on a merge of several 7B models, including OmniTruthyBeagle-7B-v0, NeuBeagle-7B, and NeuralOmniBeagle-7B. The model supports an 8k context window and performs strongly on instruction-following and reasoning tasks, making it suitable for applications that require robust conversational ability and logical processing.
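As a sketch of how a chat history might be turned into a prompt for this model, the helper below formats messages with a Mistral-style `[INST]` template. This is an assumption based on the model's Mistral-derived parent merges, not a confirmed detail of NeuralMonarch-7B; check the model card (or the tokenizer's `apply_chat_template`) for the authoritative template. The `build_prompt` function and the example messages are illustrative, and the resulting string could be passed to any text-generation backend serving mlabonne/NeuralMonarch-7B.

```python
def build_prompt(messages):
    """Format a chat history into a Mistral-style [INST] prompt.

    Assumes NeuralMonarch-7B follows the [INST] chat template of its
    Mistral-based parent models (an assumption; verify on the model card).
    messages: list of {"role": "user" | "assistant", "content": str}.
    """
    parts = ["<s>"]  # BOS token opens the conversation
    for msg in messages:
        if msg["role"] == "user":
            # User turns are wrapped in [INST] ... [/INST]
            parts.append(f"[INST] {msg['content']} [/INST]")
        else:
            # Assistant turns follow the instruction, closed by EOS
            parts.append(f" {msg['content']}</s>")
    return "".join(parts)


prompt = build_prompt([
    {"role": "user", "content": "Explain DPO in one sentence."},
])
print(prompt)
```

In practice, preferring the tokenizer's own `apply_chat_template` over a hand-rolled formatter avoids template drift if the model's chat format differs from this guess.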