instruction-pretrain
committed on
Commit a8fec69 • 1 Parent(s): 3254bdb
Update README.md
README.md CHANGED
@@ -12,7 +12,7 @@ datasets:
 - WizardLM/WizardLM_evol_instruct_V2_196k
 ---
 # Instruction Pre-Training: Language Models are Supervised Multitask Learners
-This repo contains the **biomedicine model developed from Llama3-8B** in our paper
+This repo contains the **biomedicine model developed from Llama3-8B** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).
 
 We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. ***Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training.** In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning. **In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.**
 
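The README's abstract describes the core mechanism: raw corpus text is augmented with synthesized instruction-response pairs before pre-training. Below is a toy sketch of what one such augmented pre-training example might look like. The pair template and the hard-coded example pair are illustrative assumptions, not the paper's actual formats; in the paper, pairs come from an instruction synthesizer built on open-source models.

```python
# Toy illustration of instruction-augmented pre-training data. The template
# ("Instruction:"/"Response:") and the example pair are assumptions for
# illustration; the paper synthesizes pairs with an instruction synthesizer.
raw_text = (
    "Aspirin irreversibly inhibits cyclooxygenase (COX) enzymes, "
    "reducing prostaglandin synthesis."
)

# One synthesized instruction-response pair grounded in the raw text.
pairs = [
    {
        "instruction": "Which enzymes does aspirin irreversibly inhibit?",
        "response": "Aspirin irreversibly inhibits cyclooxygenase (COX) enzymes.",
    },
]

# An augmented example: the raw text followed by its instruction-response
# pairs, so the language-modeling objective also covers supervised Q/A text.
augmented_example = raw_text + "\n\n" + "\n\n".join(
    f"Instruction: {p['instruction']}\nResponse: {p['response']}" for p in pairs
)
print(augmented_example)
```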
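For readers landing on this commit, trying the model the README describes would typically go through `transformers`. This is a minimal sketch, not taken from the README itself: the Hub repo id `instruction-pretrain/medicine-Llama3-8B` and the plain question-answer prompt are assumptions.

```python
# Minimal usage sketch. Assumptions (not stated in this diff): the Hub repo id
# is "instruction-pretrain/medicine-Llama3-8B" and a plain Q/A prompt works.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "instruction-pretrain/medicine-Llama3-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: What is the mechanism of action of aspirin?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```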