abacusai/Dracarys-Llama-3.1-70B-Instruct
Dracarys-Llama-3.1-70B-Instruct is a 70 billion parameter instruction-tuned causal language model developed by Abacus.AI, fine-tuned from Meta-Llama-3.1-70B-Instruct. The model is specifically optimized for coding performance, showing improved scores on LiveCodeBench for both code generation and test output prediction. With a 32,768-token context length, it excels at data science coding tasks, particularly Python with Pandas and NumPy.
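Since the model is fine-tuned from Meta-Llama-3.1-70B-Instruct, prompts are expected to follow the standard Llama 3.1 chat format. A minimal sketch of that formatting is below; the special tokens are the standard Llama 3.1 ones, and in practice `tokenizer.apply_chat_template` from Hugging Face `transformers` produces this layout for you (the helper function name here is illustrative):

```python
def format_llama31_prompt(messages):
    """Render a list of {"role": ..., "content": ...} messages into the
    Llama 3.1 chat format, which Llama 3.1 fine-tunes inherit."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Leave an open assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama31_prompt([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Pandas one-liner to drop duplicate rows."},
])
```

The resulting string can be passed to the model as-is for text completion; running a 70B model locally requires substantial GPU memory, so hosted inference is the more common route.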