dfurman committed
Commit a833a3c · 1 Parent(s): 85698cd

Update README.md

Files changed (1): README.md (+117, -1)

README.md CHANGED

---
license: unknown
library_name: peft
tags:
- llama-2
datasets:
- ehartford/dolphin
- garage-bAInd/Open-Platypus
inference: false
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-hf
---

# llama-2-7b-instruct-peft 🦙

This instruction model was built via parameter-efficient QLoRA finetuning of [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the first 5k rows of [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Finetuning was executed on 4x RTX A6000 GPUs (48 GB each) for roughly 32 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.

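The finetuning code itself is still marked as coming (see the links below); as a stopgap, here is a minimal sketch of pulling the two 5k-row slices with the 🤗 `datasets` library. The split name, the default dataset config, and the absence of any prompt formatting are assumptions, not details of the actual training run.

```python
# Illustrative sketch only -- not the finetuning code used for this model.
from datasets import load_dataset

# Assumes each dataset loads with its default config and exposes a "train" split.
dolphin = load_dataset("ehartford/dolphin", split="train").select(range(5000))
platypus = load_dataset("garage-bAInd/Open-Platypus", split="train").select(range(5000))

print(len(dolphin), len(platypus))  # 5000 5000
```
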
### Benchmark metrics

| Metric              | Value  |
|---------------------|--------|
| MMLU (5-shot)       | Coming |
| ARC (25-shot)       | Coming |
| HellaSwag (10-shot) | Coming |
| TruthfulQA (0-shot) | Coming |
| Avg.                | Coming |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as Hugging Face's [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

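Once the scores land, they should be reproducible with the harness. A rough sketch of scoring one task from Python, assuming a recent `lm-eval` release that exposes `simple_evaluate` (the leaderboard pins a specific harness version, which may differ slightly):

```python
# Rough sketch of scoring ARC with EleutherAI's lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-2-7b-hf",  # placeholder; point at the finetuned model instead
    tasks=["arc_challenge"],
    num_fewshot=25,  # ARC is reported 25-shot in the table above
)
print(results["results"]["arc_challenge"])
```
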
### Helpful links

* Model license: coming
* Basic usage: coming
* Finetuning code: coming
* Loss curves: coming
* Runtime stats: coming

## Loss curve

![loss curve](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/sep_12_23_9_20_00_log_loss_curves_Llama-2-7b-instruct.png)

The above loss curve was generated from the run's private wandb.ai log.

### Example prompts and responses

Example 1:

**User**:

> You are a helpful assistant. Write me a numbered list of things to do in New York City.\n

**llama-2-7b-instruct-peft**:

coming

<br>

Example 2:

**User**:

> You are a helpful assistant. Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.\n

**llama-2-7b-instruct-peft**:

coming

<br>

Example 3:

**User**:

> You are a helpful assistant. Tell me a recipe for vegan banana bread.\n

**llama-2-7b-instruct-peft**:

coming

<br>

## Limitations and biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)._

This model can produce factually incorrect output and should not be relied on to produce factually accurate information.
This model was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

## How to use

coming

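Until the official snippet is published, here is a minimal illustrative sketch, assuming the adapter in this repo loads through `peft`'s `AutoPeftModelForCausalLM`; the repo id below is a placeholder, and the tokenizer is taken from the base model rather than from this repo.

```python
# Illustrative sketch only -- not the official usage snippet for this model.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

peft_repo = "llama-2-7b-instruct-peft"  # placeholder repo id for the adapter weights

# Load the Llama-2-7b base weights plus the LoRA adapter; 4-bit loading mirrors training.
model = AutoPeftModelForCausalLM.from_pretrained(
    peft_repo,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True,
    device_map="auto",
)
# The adapter repo may not ship a tokenizer, so take it from the base model.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "You are a helpful assistant. Write me a numbered list of things to do in New York City.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
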
### Runtime tests

coming

## Acknowledgements

This model was finetuned by Daniel Furman on Sep 10, 2023, and is for research applications only.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## meta-llama/Llama-2-7b-hf citation

```
coming
```

## Training procedure
 
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16

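For reference, the config above corresponds roughly to the following `transformers.BitsAndBytesConfig`; a minimal sketch, assuming a transformers build with bitsandbytes support installed (the `llm_int8_*` fields listed above are library defaults and are omitted).

```python
# Sketch of the 4-bit (QLoRA-style) quantization config listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_8bit stays False
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16
)
# Typically passed as quantization_config=bnb_config to
# AutoModelForCausalLM.from_pretrained(...) when loading the base model.
```

The nf4 quant type with bfloat16 compute matches the standard QLoRA recipe referenced in the model description above.
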
 
### Framework versions
 
- PEFT 0.6.0.dev0