arcee-ai/Arcee-SuperNova-v1
Arcee-SuperNova-v1 is a 70-billion-parameter instruction-following language model developed by arcee-ai, built on the Llama-3.1-70B-Instruct architecture with a 32,768-token context length. It is a merge of three components: a distilled version of Llama-3.1-405B-Instruct, a Llama-3.1-70B instruction-tuned on synthetic data, and a DPO-aligned variant. This combination yields strong human-preference alignment and advanced instruction-following capability, making the model suitable for general intelligence tasks and as a base for further RLHF training.
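As a Llama-3.1-family instruct model, it can be driven through the standard Hugging Face `transformers` chat workflow. The sketch below is illustrative, not official usage from this card: it assumes the model is published on the Hub under the id `arcee-ai/Arcee-SuperNova-v1` and that its tokenizer ships a chat template; the system/user prompts are made-up placeholders, and a 70B model will in practice need multi-GPU sharding or quantization.

```python
# Hypothetical usage sketch for Arcee-SuperNova-v1 via transformers.
# Assumption: the Hub id "arcee-ai/Arcee-SuperNova-v1" and a bundled
# chat template; prompts below are placeholders, not from the model card.

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-format message list (role/content dicts)."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]

def chat(user_prompt: str, model_id: str = "arcee-ai/Arcee-SuperNova-v1") -> str:
    """Generate one assistant reply. Imports are deferred so the module
    can be inspected without transformers/torch installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # shard across available GPUs
        torch_dtype="auto",  # use the checkpoint's native dtype
    )
    # Render messages with the model's own chat template and append
    # the assistant header so generation starts at the reply.
    input_ids = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Since the model is DPO-aligned for instruction following, plain chat-template prompting like this is the intended interaction mode; further RLHF fine-tuning would start from the same checkpoint and tokenizer.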