---
license: cc-by-nc-4.0
datasets:
- starmpcc/Asclepius-Synthetic-Clinical-Notes
language:
- en
pipeline_tag: text2text-generation
tags:
- medical
---

# Model Card for Asclepius-Llama2-7B

This is an official model checkpoint for Asclepius-Llama2-7B [(arXiv)](https://arxiv.org/abs/2309.00237).
This model is an enhanced version of Asclepius-7B, obtained by replacing the base model with Llama-2 and increasing the maximum sequence length to 4096.

## UPDATE

### 2024.01.10
- Asclepius-R, the variant of Asclepius trained on MIMIC-III discharge summaries, is now available on [Physionet](https://physionet.org/content/asclepius-r/1.0.0/)!

## Model Details

### Model Description

- **Model type:** Clinical LLM (Large Language Model)
- **Language(s) (NLP):** English
- **License:** CC-BY-NC-SA 4.0
- **Finetuned from model:** Llama2-7B

### Model Sources

- **Repository:** https://github.com/starmpcc/Asclepius
- **Paper:** https://arxiv.org/abs/2309.00237
- **Data:** https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes

## Uses

This model can perform the following eight clinical NLP tasks on clinical notes:

- Named Entity Recognition
- Abbreviation Expansion
- Relation Extraction
- Temporal Information Extraction
- Coreference Resolution
- Paraphrasing
- Summarization
- Question Answering

### Direct Use

[More Information Needed]

### Downstream Use

[More Information Needed]

### Out-of-Scope Use

ONLY USE THIS MODEL FOR RESEARCH PURPOSES!

## How to Get Started with the Model

```python
prompt = """You are an intelligent clinical languge model.
Below is a snippet of patient's discharge summary and a following instruction from healthcare professional.
Write a response that appropriately completes the instruction.
The response should provide the accurate answer to the instruction, while being concise.

[Discharge Summary Begin]
{note}
[Discharge Summary End]

[Instruction Begin]
{question}
[Instruction End]
"""

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("starmpcc/Asclepius-Llama2-7B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("starmpcc/Asclepius-Llama2-7B")

note = "This is a sample note"
question = "What is the diagnosis?"

model_input = prompt.format(note=note, question=question)
input_ids = tokenizer(model_input, return_tensors="pt").input_ids

# Without an explicit limit, generate() falls back to a 20-token max_length, which is
# shorter than the prompt itself; 256 new tokens is an illustrative budget.
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0]))
```

## Training Details

### Training Data

https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes

### Training Procedure

- Initial training was conducted using causal language modeling on synthetic clinical notes.
- It was then fine-tuned with clinical instruction-response pairs (see the sketch after this list).
- For a comprehensive overview of our methods, please refer to our [paper](https://arxiv.org/abs/2309.00237).
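For illustration only, below is a minimal sketch of this two-stage recipe using the Hugging Face `Trainer`. The base model ID, dataset column names (`note`, `question`, `answer`), prompt handling, and every hyperparameter shown are assumptions made for the sketch, not the released training configuration.

```python
# Illustrative two-stage training sketch -- NOT the released Asclepius configuration.
# Column names ("note", "question", "answer") and all hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(BASE, use_fast=False)
tokenizer.pad_token = tokenizer.eos_token  # Llama-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE)

data = load_dataset("starmpcc/Asclepius-Synthetic-Clinical-Notes")["train"]

def build_stage1(batch):
    # Stage 1: plain causal language modeling over the synthetic notes.
    return tokenizer(batch["note"], truncation=True, max_length=4096)

def build_stage2(batch):
    # Stage 2: instruction tuning on note/question/answer triples. The actual recipe
    # formats each example with the prompt template shown above; simple concatenation
    # is used here only to keep the sketch short.
    texts = [
        f"{n}\n\n{q}\n\n{a}{tokenizer.eos_token}"
        for n, q, a in zip(batch["note"], batch["question"], batch["answer"])
    ]
    return tokenizer(texts, truncation=True, max_length=4096)

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

for build_fn, epochs, out_dir in [(build_stage1, 1, "stage1-notes"), (build_stage2, 3, "stage2-instructions")]:
    tokenized = data.map(build_fn, batched=True, remove_columns=data.column_names)
    args = TrainingArguments(
        output_dir=out_dir,
        num_train_epochs=epochs,
        per_device_train_batch_size=1,  # illustrative; the card reports 8x A100 80GB
        learning_rate=2e-5,             # assumption, in line with Alpaca-style defaults
        bf16=True,
    )
    Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```

The same model object is passed through both stages, so instruction tuning continues from the note-pretrained weights, mirroring the order described in the list above.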
#### Training Hyperparameters

- We followed the training configuration used in [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

#### Speeds, Sizes, Times

- Pre-training (1 epoch): 1h 7m on 8x A100 80GB
- Instruction fine-tuning (3 epochs): 6h 47m on 8x A100 80GB

## Citation

**BibTeX:**

```
@misc{kweon2023publicly,
      title={Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes},
      author={Sunjun Kweon and Junu Kim and Jiyoun Kim and Sujeong Im and Eunbyeol Cho and Seongsu Bae and Jungwoo Oh and Gyubok Lee and Jong Hak Moon and Seng Chan You and Seungjin Baek and Chang Hoon Han and Yoon Bin Jung and Yohan Jo and Edward Choi},
      year={2023},
      eprint={2309.00237},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_starmpcc__Asclepius-Llama2-7B).

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 42.38 |
| ARC (25-shot)        | 50.85 |
| HellaSwag (10-shot)  | 76.53 |
| MMLU (5-shot)        | 43.61 |
| TruthfulQA (0-shot)  | 43.31 |
| Winogrande (5-shot)  | 68.27 |
| GSM8K (5-shot)       | 0.3   |
| DROP (3-shot)        | 13.8  |