StanfordAIMI/GREEN-RadLlama2-7b

StanfordAIMI/GREEN-RadLlama2-7b is a 7-billion-parameter language model from StanfordAIMI, fine-tuned from RadLLaMA-7b. It specializes in evaluating candidate chest X-ray radiology reports against reference reports. With a context length of 4096 tokens, it is optimized for this medical text-evaluation task.
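As a sketch of how such a report-evaluation model might be invoked, the snippet below loads the checkpoint with the Hugging Face `transformers` library and pairs a reference report with a candidate report in a single prompt. The prompt wording and generation settings are illustrative assumptions, not the model's documented input format; only the model identifier comes from this card.

```python
def build_prompt(reference: str, candidate: str) -> str:
    # Hypothetical instruction format pairing the two reports;
    # the model's actual expected template may differ.
    return (
        "Evaluate the candidate radiology report against the reference.\n"
        f"Reference report: {reference}\n"
        f"Candidate report: {candidate}\n"
        "Evaluation:"
    )

def evaluate(reference: str, candidate: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the prompt helper above stays usable
    # without the (large) model dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "StanfordAIMI/GREEN-RadLlama2-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(reference, candidate), return_tensors="pt")
    inputs = inputs.to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Loading the full 7B checkpoint requires a GPU with sufficient memory (or CPU offloading via `device_map="auto"`); the prompt-construction helper can be exercised independently.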

Status: Warm
Visibility: Public
Parameters: 7B
Precision: FP8
Context length: 4096
License: llama2
Source: Hugging Face