zhangjh0501 committed
Commit fba89a8 · 1 parent: 5e62b7d

Update README.md
Files changed (1): README.md (+10 -9)
README.md CHANGED
@@ -7,10 +7,9 @@ datasets:
 ---
 # BioMedGPT-LM-7B
 
-In this repo, we present a medical language model named BioMedGPT-LM, which is the first commercial-friendly GPT model in the biomedical domain and has demonstrated superior performance over existing LLMs of the same parameter size.
-We are releasing a 7B model, **BioMedGPT-LM-7B**, which is LLaMA2-7b-chat finetuned on the PMC abstracts and papers from the S2ORC.
-
-
+**BioMedGPT-LM-7B** is the first large generative language model based on Llama2 in the biomedical domain.
+It was fine-tuned from Llama2-7B-Chat on millions of biomedical papers from the [S2ORC corpus](https://github.com/allenai/s2orc/blob/master/README.md).
+Through further fine-tuning, BioMedGPT-LM-7B outperforms or is on par with human experts and significantly larger general-purpose foundation models on several biomedical QA benchmarks.
 
 
 ### Training Details
@@ -24,19 +23,21 @@ The model was trained with the following hyperparameters:
 * Cutoff length: 2048
 * Learning rate: 2e-5
 
-Overview: BioMedGPT-LM-7B was finetuned on over 26 billion tokens highly pertinent to the field of biomedicine.
-The fine-tuning data are extracted from 5.5 million biomedical papers in S2ORC data using PubMed Central (PMC)-ID and PubMed ID as criteria.
+BioMedGPT-LM-7B was fine-tuned on over 26 billion tokens highly pertinent to the field of biomedicine.
+The fine-tuning data are extracted from 5.5 million biomedical papers in the S2ORC corpus, using PubMed Central (PMC) ID and PubMed ID as criteria.
 
 
 ### Model Developers
 PharMolix
 
 ### How to Use
-BioMedGPT-LM-7B is a part of **[BioMedGPT-10B](https://github.com/BioFM/OpenBioMed)**, an open-source version of BioMedGPT. BioMedGPT is a multimodal generative pre-trained transformer (GPT) for biomedicine, which bridges the natural language modality and diverse biomedical data modalities via a single GPT model.
-BioMedGPT aligns different biological modalities with the text modality via BioMedGPT-LM. The details of BioMedGPT-10B and BioMedGPT-LM-7B can be found in the [technical report]().
-![The architecture of BioMedGPT-10B](BioMedGPT-10B.jpeg)
-
+BioMedGPT-LM-7B is the generative language model of **[BioMedGPT-10B](https://github.com/BioFM/OpenBioMed)**, an open-source version of BioMedGPT.
+BioMedGPT is an open multimodal generative pre-trained transformer (GPT) for biomedicine, which bridges the natural language modality and diverse biomedical data modalities via large generative language models.
+
+More technical details of BioMedGPT-LM-7B, BioMedGPT-10B, and BioMedGPT can be found in the [technical report](https://pan.baidu.com/s/1iAMBkuoZnNAylhopP5OgEg?pwd=7a6b).
+
+![The architecture of BioMedGPT-10B](BioMedGPT-10B.jpeg)
 
 
 
 **Intended Use Cases**
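To make the training hunk above more concrete, here is a minimal causal-LM fine-tuning sketch that plugs in the two hyperparameters the card states (cutoff length 2048, learning rate 2e-5). The Hugging Face `Trainer` setup, the placeholder data file, the batch size, and the epoch count are illustrative assumptions, not details from the card:

```python
# Hypothetical fine-tuning sketch. Only the base model (Llama2-7B-Chat),
# the 2048-token cutoff, and the 2e-5 learning rate come from the card;
# everything else is an assumption for illustration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder corpus file; the card's actual data is 5.5M S2ORC papers
# filtered by PMC ID and PubMed ID.
dataset = load_dataset("text", data_files={"train": "s2orc_biomed.txt"})["train"]

def tokenize(batch):
    # Truncate each example to the card's stated cutoff length.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="biomedgpt-lm-7b",
        learning_rate=2e-5,             # from the card
        per_device_train_batch_size=1,  # assumption
        num_train_epochs=1,             # assumption
        bf16=True,                      # assumption
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```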
 
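The rewritten "How to Use" section still lacks runnable code; a minimal loading-and-generation sketch with `transformers` might look like the following. The repo id `PharMolix/BioMedGPT-LM-7B` and the example prompt are assumptions:

```python
# Minimal inference sketch; the repo id below is an assumption based on the
# developer name in the card, not a confirmed Hub location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PharMolix/BioMedGPT-LM-7B"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits a 7B model on one GPU
    device_map="auto",
)

prompt = "Question: What is the mechanism of action of metformin?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the model derives from Llama2-7B-Chat, prompts formatted with the Llama2 chat template may behave better; a plain QA prompt is used here only to keep the sketch short.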