bnjmnmarie committed
Commit 25edbfc
1 Parent(s): 86fd779

Update README.md

Files changed (1)
  1. README.md +73 -14
README.md CHANGED
@@ -1,21 +1,80 @@
 ---
 library_name: peft
 ---
- ## Training procedure
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: float16
- ### Framework versions
-
- - PEFT 0.5.0

 ---
 library_name: peft
+ license: mit
+ language:
+ - en
+ - vi
+ datasets:
+ - kaitchup/opus-Vietnamese-to-English
+ tags:
+ - translation
 ---
+ # Model Card for Llama-2-7b-mt-Vietnamese-to-English
+
+ This is an adapter for Meta's Llama 2 7B, fine-tuned to translate Vietnamese text into English.
+
+ ## Model Details
+
+ ### Model Description
+
+ - **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
+ - **Model type:** LoRA adapter for Llama 2 7B
+ - **Language(s) (NLP):** Vietnamese, English
+ - **License:** MIT
+
+ ## Uses
+
+ This adapter must be loaded on top of Llama 2 7B. It was fine-tuned with QLoRA, so for optimal results the base model should be loaded with the same quantization configuration that was used during fine-tuning.
+ You can use the following code to load the base model and the adapter:
+ ```
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ from peft import PeftModel
+
+ base_model = "meta-llama/Llama-2-7b-hf"
+
+ # Same 4-bit NF4 configuration as used for QLoRA fine-tuning
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.float16,
+     bnb_4bit_use_double_quant=True,
+ )
+
+ # Load the quantized base model on the first GPU
+ model = AutoModelForCausalLM.from_pretrained(
+     base_model, device_map={"": 0}, quantization_config=bnb_config
+ )
+ tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)
+
+ # Load the translation adapter on top of the quantized base model
+ model = PeftModel.from_pretrained(model, "kaitchup/Llama-2-7b-mt-Vietnamese-to-English")
+ ```
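+
+ Note that the `BitsAndBytesConfig` above (4-bit NF4, double quantization, float16 compute dtype) mirrors the `bitsandbytes` configuration listed in the removed training-procedure section, so the adapter is applied to the same quantized weights it was fine-tuned against.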
+
+ Then, run the model as follows:
+
+ ```
+ my_text = ""  # put the Vietnamese text to translate here
+
+ # The fine-tuning prompt format is "<Vietnamese text> ###><English translation>"
+ prompt = my_text + " ###>"
+
+ tokenized_input = tokenizer(prompt, return_tensors="pt")
+ input_ids = tokenized_input["input_ids"].cuda()
+
+ generation_output = model.generate(
+     input_ids=input_ids,
+     num_beams=10,
+     return_dict_in_generate=True,
+     output_scores=True,
+     max_new_tokens=130,
+ )
+ for seq in generation_output.sequences:
+     output = tokenizer.decode(seq, skip_special_tokens=True)
+     # Keep only the text generated after the "###>" marker
+     print(output.split("###>")[1].strip())
+ ```
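+
+ For convenience, the generation step can be wrapped into a small helper. This is a minimal sketch, not part of the original card: the `translate` function is hypothetical and assumes `model` and `tokenizer` were created as shown above.
+
+ ```
+ def translate(vi_text, max_new_tokens=130):
+     # Hypothetical helper: builds the "<source> ###>" prompt, generates
+     # with beam search, and returns the text after the "###>" marker.
+     prompt = vi_text + " ###>"
+     input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].cuda()
+     output_ids = model.generate(input_ids=input_ids, num_beams=10, max_new_tokens=max_new_tokens)
+     output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
+     return output.split("###>")[1].strip()
+
+ # Example: print(translate("Xin chào thế giới"))
+ ```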
+
+ ## Model Card Contact
+
+ [The Kaitchup](https://kaitchup.substack.com/)