---
license: cc-by-nc-4.0
datasets:
- yahma/alpaca-cleaned
language:
- en
pipeline_tag: text-generation
tags:
- llama-2
- alpaca
---


# Model Card for Llama-2-7b-alpaca-cleaned

<!-- Provide a quick summary of what the model is/does. -->

This model checkpoint is Llama-2-7b fine-tuned on the [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) with the original Alpaca fine-tuning hyperparameters.

## Model Details

### Model Description

This model checkpoint is Llama-2-7b fine-tuned on the [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) with the original Alpaca fine-tuning hyperparameters. \
The original Alpaca model was fine-tuned from LLaMA on the Alpaca dataset by researchers at Stanford University.


- **Developed by:** NEU Human-centered AI Lab
- **Shared by:** NEU Human-centered AI Lab
- **Model type:** Text generation
- **Language(s) (NLP):** English
- **License:** cc-by-nc-4.0 (to comply with the alpaca-cleaned dataset)
- **Finetuned from model:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
35
+
36
+ ### Model Sources
37
+
38
+ <!-- Provide the basic links for the model. -->
39
+
40
+ - **Repository:** https://huggingface.co/meta-llama/Llama-2-7b
41
+
42
+
43
## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The model is intended for research use only, in English, in line with the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca). \
It has been fine-tuned on the [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) for assistant-like chat and general natural language generation tasks. \
Use of this model must also comply with the restrictions of [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b).


### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Out-of-scope use of this model should also comply with the restrictions of the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca) and [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b).

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

## How to Get Started with the Model

Use the code below to get started with the model.

```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
model = AutoModelForCausalLM.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
```
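
For a quick end-to-end check, the sketch below generates a response with an Alpaca-style prompt. The prompt template follows the original Stanford Alpaca format, which matches how the fine-tuning data is laid out; the example instruction and the generation settings are illustrative only, not recommended values from this card.

```
# Minimal generation sketch using an Alpaca-style prompt (illustrative settings)
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
model = AutoModelForCausalLM.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")

# Alpaca prompt template for instructions with no additional input
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the tokens generated after the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```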
## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

We use the [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned), a cleaned version of the original [alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca) created by researchers at Stanford University.
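
For reference, the training data can be pulled directly from the Hub with the `datasets` library; this is a minimal sketch, and the field names shown follow the Alpaca schema used by the cleaned dataset.

```
# Minimal sketch: inspect the alpaca-cleaned training data
from datasets import load_dataset

dataset = load_dataset("yahma/alpaca-cleaned", split="train")
print(len(dataset))  # number of instruction/response pairs
print(dataset[0])    # fields: "instruction", "input", "output"
```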

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

We follow the same training procedure and mostly the same hyperparameters used to fine-tune the original Alpaca model from LLaMA. The procedure is described in the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca).


#### Training Hyperparameters

```
--bf16 True \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True
```
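
These command-line flags are standard Hugging Face `Trainer` arguments (the Stanford Alpaca `train.py` parses them with `HfArgumentParser`). The sketch below is a rough Python equivalent expressed as `TrainingArguments`; the output directory is a placeholder, and a multi-GPU launch (e.g. via `torchrun`) is assumed for the FSDP options to take effect.

```
# Rough Python equivalent of the CLI flags above, as Hugging Face TrainingArguments.
# The output_dir is a placeholder, not a path from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./llama-2-7b-alpaca-cleaned",  # placeholder
    bf16=True,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    evaluation_strategy="no",
    save_strategy="steps",
    save_steps=2000,
    save_total_limit=1,
    learning_rate=2e-5,
    weight_decay=0.0,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    logging_steps=1,
    fsdp="full_shard auto_wrap",
    fsdp_transformer_layer_cls_to_wrap="LlamaDecoderLayer",
    tf32=True,
)
```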

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

N/A

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

N/A

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

N/A

### Results

N/A

#### Summary

N/A

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

Please cite the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca):

```
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```


## Model Card Authors

Northeastern Human-centered AI Lab

## Model Card Contact
