mlfoundations-dev/oh-dcft-v3.1-gpt-4o-mini
mlfoundations-dev/oh-dcft-v3.1-gpt-4o-mini is an 8-billion-parameter language model fine-tuned from Meta-Llama-3.1-8B. It was fine-tuned by mlfoundations-dev on the mlfoundations-dev/oh-dcft-v3.1-gpt-4o-mini dataset, reaching a validation loss of 0.6408. The model is intended for general language tasks and supports a 32,768-token context length for processing longer inputs.
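
Below is a minimal usage sketch for loading the model with the Hugging Face `transformers` library. It assumes the checkpoint is hosted on the Hub under the repository ID above and that, as a Meta-Llama-3.1-8B fine-tune, it ships with a standard chat template; the dtype and prompt are illustrative choices, not part of this card.

```python
# Minimal usage sketch (assumptions: Hub-hosted checkpoint, Llama-3.1-style
# chat template, bfloat16-capable hardware).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/oh-dcft-v3.1-gpt-4o-mini"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust for your hardware
    device_map="auto",
)

# Build a single-turn chat prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize the idea of instruction tuning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For inputs approaching the 32,768-token context length, the same loading code applies, but memory use grows with sequence length, so long-context runs may require quantization or a multi-GPU `device_map`.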