---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
---

## Model Description

This model is a fine-tuned variant of Qwen2.5-Coder-7B-Instruct, optimized for code-related tasks by training on a specialized code dataset. Fine-tuning was performed with the Hugging Face Transformers library, applying Low-Rank Adaptation (LoRA) via the Parameter-Efficient Fine-Tuning (PEFT) library; a minimal sketch of this setup appears at the end of this section.

- Intended Use: The fine-tuned weights are ready for immediate use, making the model well suited to developers and researchers working on code comprehension, code generation, and related applications.
- Performance: Expect improved performance on coding-specific tasks relative to the base model; the actual gain depends on the task and the input data.

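For orientation, the sketch below shows what a LoRA/PEFT setup of this kind typically looks like. The rank, alpha, dropout, target modules, and output path used here are illustrative assumptions, not the configuration actually used to train this model.

```python
# Minimal sketch of a LoRA/PEFT fine-tuning setup. All hyperparameters are
# assumptions for illustration, not this model's actual training config.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    'Qwen/Qwen2.5-Coder-7B-Instruct',
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=8,                    # assumed adapter rank
    lora_alpha=16,          # assumed scaling factor
    lora_dropout=0.05,      # assumed dropout on adapter layers
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights train
# ...train on the code dataset (e.g. with the Transformers Trainer), then save
# the adapter so it can be loaded with PeftModel as shown in the Use section:
model.save_pretrained('Code-AiHelper')
```
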
## Use

The base model Qwen2.5-Coder-7B-Instruct must be downloaded and loaded first; the LoRA adapter is then applied on top, and the input is supplied in the format `## Problem Description: {} ## Test Cases: {} ## Error Code: {}`, with the placeholders filled by the problem statement, the test cases, and the erroneous code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from peft import PeftModel

model_path = 'Qwen/Qwen2.5-Coder-7B-Instruct'
lora_path = 'Code-AiHelper'

# Load the base model, then attach the fine-tuned LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, model_id=lora_path)

# System prompt (in Chinese, as used during fine-tuning). Translation:
# "You are an experienced Python programming expert and technical consultant,
# skilled at analyzing Python problems and student-written code. Your task is
# to understand the problem requirements and test cases, analyze the student's
# code, find potential syntax or logic errors, give the specific error
# locations and fix suggestions, and help the student improve the code in a
# professional yet accessible way. Return your answer in markdown format."
system_prompt = '你是一位经验丰富的Python编程专家和技术顾问,擅长分析Python题目和学生编写的代码。你的任务是理解题目要求和测试样例,分析学生代码,找出潜在的语法或逻辑错误,提供具体的错误位置和修复建议,并用专业且易懂的方式帮助学生改进代码。请以markdown格式返回你的答案。'
# User prompt template: "## Problem Description: {} ## Test Cases: {} ## Error Code: {}".
# Fill the {} placeholders before use (see the example after this block).
user_prompt = '''## 题目描述:{} ## 测试样例:{} ## 错误代码:{}'''
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt}
]

# Apply the chat template, tokenize, and generate a response.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to('cuda')
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=1024
)
# Strip the prompt tokens, keeping only the newly generated ones.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```
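
Note that the `{}` placeholders in `user_prompt` are left unfilled above; in practice, format them with a concrete problem before building `messages`. The problem, test case, and buggy code below are made-up examples for illustration:

```python
# Hypothetical inputs; substitute a real problem, its test cases, and the
# student's buggy code.
problem = "Read an integer n and print the sum 1 + 2 + ... + n."
tests = "Input: 3 -> Output: 6"
bad_code = "n = int(input())\nprint(sum(range(n)))"  # bug: range(n) excludes n

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt.format(problem, tests, bad_code)},
]
```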