---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
---

## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines

Thanks to [declare-lab](https://huggingface.co/declare-lab) for the training [repository](https://github.com/declare-lab/flan-alpaca), which contains code for extending the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
synthetic instruction tuning to existing instruction-tuned models such as [Flan-T5](https://arxiv.org/abs/2210.11416).
The pretrained models and demos are available on Hugging Face 🤗:

| Model                                                                     | Parameters | Training GPUs        |
|---------------------------------------------------------------------------|------------|----------------------|
| [Flan-Alpaca-Base](https://huggingface.co/declare-lab/flan-alpaca-base)   | 220M       | 1x A6000             |
| [Flan-Alpaca-Large](https://huggingface.co/declare-lab/flan-alpaca-large) | 770M       | 1x A6000             |
| [Flan-Alpaca-XL](https://huggingface.co/declare-lab/flan-alpaca-xl)       | 3B         | 1x A6000             |
| [Flan-Alpaca-XXL](https://huggingface.co/declare-lab/flan-alpaca-xxl)     | 11B        | 4x A6000 (FSDP)      |
| [Flan-Alpaca-UL2](https://huggingface.co/0-hero/flan-alpaca-ul2)          | 20B        | 4x A100 (80G) (FSDP) |

### Why?

[Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) represents an exciting new direction
to approximate the performance of large language models (LLMs) like ChatGPT cheaply and easily.
Concretely, the Alpaca authors leverage an LLM such as GPT-3 to generate instructions as synthetic training data.
The synthetic data, which covers more than 50k tasks, can then be used to finetune a smaller model.
However, the original implementation is less accessible due to licensing constraints of the
underlying [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) model.
Furthermore, users have noted [potential noise](https://github.com/tloen/alpaca-lora/issues/65) in the synthetic
dataset. Hence, it may be better to explore a fully accessible model that is already trained on high-quality (but
less diverse) instructions such as [Flan-T5](https://arxiv.org/abs/2210.11416).

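To make the recipe concrete, here is a minimal sketch of the idea, not the declare-lab training code: format the [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) synthetic instructions and finetune a Flan-T5 checkpoint with the Hugging Face `Seq2SeqTrainer`. The prompt template, sequence lengths, and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch only (not the declare-lab training code):
# finetune a Flan-T5 checkpoint on the Alpaca synthetic instructions.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Each Alpaca record has "instruction", "input", and "output" fields.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

def preprocess(example):
    # Assumed prompt template: instruction, then optional input context.
    prompt = example["instruction"]
    if example["input"]:
        prompt += "\n" + example["input"]
    model_inputs = tokenizer(prompt, truncation=True, max_length=512)
    labels = tokenizer(text_target=example["output"], truncation=True, max_length=256)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="flan-alpaca-sketch", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```
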
### Usage

```python
from transformers import pipeline

# The pipeline task (text2text-generation) is inferred from the model config.
prompt = "Write an email about an alpaca that likes flan"
model = pipeline(model="0-hero/flan-alpaca-ul2")
model(prompt, max_length=128, do_sample=True)
```
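Note that Flan-Alpaca-UL2 has 20B parameters, so the call above may not fit in memory on a single smaller GPU. A minimal sketch, assuming `accelerate` and a recent `transformers` are installed, that loads the checkpoint in half precision and shards it across the available devices:

```python
import torch
from transformers import pipeline

# device_map="auto" (via accelerate) spreads the 20B checkpoint across
# the available GPUs and CPU RAM; bfloat16 roughly halves memory vs. float32.
model = pipeline(
    model="0-hero/flan-alpaca-ul2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model("Write an email about an alpaca that likes flan", max_length=128, do_sample=True)
```
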

README forked from [declare-lab/flan-alpaca-xxl](https://huggingface.co/declare-lab/flan-alpaca-xxl).