Update README.md
README.md (changed)
---
tags:
- generated_from_trainer
- code
- coding
- phi-2
- phi2
model-index:
- name: phi-2-coder
  results: []
license: apache-2.0
language:
- code
thumbnail: https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
---

<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png" alt="llama-2 coder logo">
</div>


# Phi-2 Coder 👩‍💻
**Phi-2** fine-tuned on the **CodeAlpaca 20k instructions dataset** using the **QLoRA** method with the [PEFT](https://github.com/huggingface/peft) library.

## Model description 🧠

[Phi-2](https://huggingface.co/microsoft/phi-2)

Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source consisting of various synthetic NLP texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased nearly state-of-the-art performance among models with fewer than 13 billion parameters.

## Training and evaluation data 📚

[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following examples, used for fine-tuning the Code Alpaca model.

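The dataset can be pulled directly from the Hub with 🤗 `datasets`. The snippet below is a small illustrative sketch, not the original training code; it assumes the dataset's `prompt`/`completion` columns and renders each record into the same `Instruct: ... Output:` template used in the usage example further down.

```py
# Illustrative only: load CodeAlpaca_20K and build "Instruct: ... Output:" prompts.
# Column names ("prompt", "completion") are assumed from the dataset card.
from datasets import load_dataset

dataset = load_dataset("HuggingFaceH4/CodeAlpaca_20K")

def to_prompt(example):
    return {"text": f"Instruct: {example['prompt']}\nOutput: {example['completion']}"}

train_data = dataset["train"].map(to_prompt)
print(train_data[0]["text"])
```
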
### LoRA config

```py
config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=[
        "Wqkv",
        "fc1",
        "fc2",
        "out_proj"
    ],
    bias="none",
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

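For reference, a `LoraConfig` like this is normally attached to a 4-bit quantized base model, which is what QLoRA means in practice. The sketch below shows one common way to do that with `bitsandbytes` and PEFT; it is an assumed workflow, not the exact training script behind this card.

```py
# Assumed QLoRA-style setup: frozen 4-bit base model + LoRA adapters via PEFT.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize the frozen base weights to 4 bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)

base_model = prepare_model_for_kbit_training(base_model)
model = get_peft_model(base_model, config)  # `config` is the LoraConfig defined above
model.print_trainable_parameters()          # only the LoRA adapters are trainable
```
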
### Training hyperparameters ⚙

```py
per_device_train_batch_size=4,
gradient_accumulation_steps=32,
num_train_epochs=2,
learning_rate=2.5e-5,
optim="paged_adamw_8bit",
seed=66,
load_best_model_at_end=True,
save_strategy="steps",
save_steps=50,
evaluation_strategy="steps",
eval_steps=50,
```

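These keyword arguments correspond to 🤗 `transformers.TrainingArguments`. The sketch below shows how they would typically be wired into a `Trainer` run; the `output_dir` and the tokenized dataset variables are placeholders, since the card does not include the full training script.

```py
# Assumed wiring of the hyperparameters above into a standard Trainer run.
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-2-coder",               # placeholder output directory
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,
    num_train_epochs=2,
    learning_rate=2.5e-5,
    optim="paged_adamw_8bit",
    seed=66,
    load_best_model_at_end=True,
    save_strategy="steps",
    save_steps=50,
    evaluation_strategy="steps",
    eval_steps=50,
)

trainer = Trainer(
    model=model,                            # PEFT-wrapped model from the sketch above
    args=training_args,
    train_dataset=tokenized_train,          # hypothetical tokenized train/eval splits
    eval_dataset=tokenized_eval,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
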
### Training results 🗒️

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 50   | 0.624400      | 0.600070        |
| 100  | 0.634100      | 0.592757        |
| 150  | 0.545800      | 0.586652        |
| 200  | 0.572500      | 0.577525        |
| 250  | 0.528000      | 0.590118        |


### HumanEval results 📊

WIP


### Example of usage 👩‍💻
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/phi-2-coder"

tokenizer = AutoTokenizer.from_pretrained(model_id, add_bos_token=True, trust_remote_code=True, use_fast=False)

# device_map="auto" places the fp16 weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16, device_map="auto")

def generate(
        instruction,
        max_new_tokens=128,
        temperature=0.1,
        top_p=0.75,
        top_k=40,
        num_beams=2,
        **kwargs,
):
    prompt = "Instruct: " + instruction + "\nOutput:"
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")

    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=max_new_tokens,
            # sampling knobs only take effect if do_sample=True is passed via **kwargs
            temperature=temperature,
            top_p=top_p,
            top_k=top_k,
            num_beams=num_beams,
            eos_token_id=tokenizer.eos_token_id,
            use_cache=True,
            early_stopping=True,
            **kwargs,
        )
    output = tokenizer.decode(generation_output[0])
    return output.split("\nOutput:")[1].lstrip("\n")

instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```
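
If the fp16 checkpoint does not fit on your GPU, the same repo can usually be loaded with 4-bit quantization at inference time. This is an optional variant, not part of the original example:

```py
# Optional: 4-bit inference load via bitsandbytes (assumes a CUDA GPU is available).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

model = AutoModelForCausalLM.from_pretrained(
    "mrm8488/phi-2-coder",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
```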