Update README.md

Our goal is to translate texts of unlimited length at high speed.
It is trained to output translated text (Japanese/English) in response to user input after being given an initial system prompt-like text (Japanese/English).
Additionally, using apply_chat_template removes the need to write prompt templates by hand, a process that is prone to errors.

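For reference, the conversation format used by both sample scripts below is sketched here; the prompt string is shortened to a placeholder (the full system prompt appears in the scripts), and the sample sentence is taken from the Japanese example further down.

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("webbigdata/gemma-2-2b-jpn-it-translate")

# The first user turn carries the system-prompt-like text plus the translation
# instruction, the assistant acknowledges with "OK", and later user turns carry
# the sentences to translate.
messages = [
    {"role": "user", "content": "You are a highly skilled professional translator. (system-prompt-like text, shortened)\n\nTranslate Japanese to English."},
    {"role": "assistant", "content": "OK"},
    {"role": "user", "content": "こんにちは。私は田中です。"},
]

# apply_chat_template renders the Gemma-2 chat markup, so no prompt template
# has to be written by hand.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```
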
### 日英翻訳用サンプルコード Japanese-English Translation sample script.

```
pip install -U transformers
```

```
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def get_torch_dtype():
    if torch.cuda.is_available():
        device = torch.device("cuda")
        prop = torch.cuda.get_device_properties(device)
        # Ampere GPUs (Compute Capability 8.0 and above), e.g. the L4, support bfloat16; the T4 does not.
        if prop.major >= 8:
            return torch.bfloat16
    return torch.float16


model_name = "webbigdata/gemma-2-2b-jpn-it-translate"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=get_torch_dtype(),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.unk_token

system_prompt = "You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.\n\n"
instruct = """Translate Japanese to English.\nWhen translating, please use the following hints:\n[writing_style: casual]"""

# Function to split the text into sentences
def split_sentences(text):
    # ... (unchanged lines omitted in this diff) ...

    # Initialize context with system prompt
    context = deque([
        {"role": "user", "content": system_prompt + instruct},
        {"role": "assistant", "content": "OK"}
    ], maxlen=6)  # Maximum 6 elements (3 user, 3 assistant)

# ... (unchanged lines omitted in this diff; the Japanese sample text assigned to text starts in the omitted section) ...
とてもきれいな家でした。時間が経つのがあっという間でした。夕方になり、私は家に帰りました。夕食にはカレーを作りました。カレーはとても美味しかったです。今日一日、とても楽しかったです。"""
translated = main(text)
print(translated)
```

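main() returns a list with one entry per detected sentence (see the result section of the English-Japanese example below), so reassembling it into running text is a one-liner; a small sketch, not part of the original script:

```
translated = main(text)           # list of per-sentence translations
print("\n".join(translated))      # join them back into a single block of text
```
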
### 英日翻訳用サンプルコード English-Japanese Translation sample script.

```
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def get_torch_dtype():
    if torch.cuda.is_available():
        device = torch.device("cuda")
        prop = torch.cuda.get_device_properties(device)
        # Ampere GPUs (Compute Capability 8.0 and above), e.g. the L4, support bfloat16; the T4 does not.
        if prop.major >= 8:
            return torch.bfloat16
    return torch.float16

model_name = "webbigdata/gemma-2-2b-jpn-it-translate"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=get_torch_dtype(),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.unk_token

system_prompt = "You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.\n\n"
instruct = """Translate English to Japanese.\nWhen translating, please use the following hints:\n[writing_style: business]"""

# Function to split English sentences
def split_sentences(text):
    sentences = []
    # Split by newlines, periods, exclamation marks, question marks, or two or more consecutive spaces
    pattern = r'(?:\r?\n|\.|\!|\?|(?:\s{2,}))'
    splits = re.split(pattern, text)

    for split in splits:
        split = split.strip()
        if split:
            sentences.append(split)

    return sentences

# Function to translate a sentence
def translate_sentence(sentence, previous_context):
    if sentence.strip() == '':
        return sentence

    messages = previous_context + [
        {"role": "user", "content": sentence}
    ]

    inputs = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt",
    ).to("cuda")

    translation = ""
    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs,
            num_beams=3, max_new_tokens=1200, do_sample=True, temperature=0.5, top_p=0.3
        )
    full_output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
    translation = full_output.split('\nmodel\n')[-1].strip()
    return translation

from collections import deque

# Main processing function
def main(text):
    sentences = split_sentences(text)
    translated_sentences = []

    context = deque([
        {"role": "user", "content": system_prompt + instruct},
        {"role": "assistant", "content": "OK"}
    ], maxlen=6)

    for i, sentence in enumerate(sentences):
        if i == 0:
            # First sentence: pass the full context, including the system-prompt/"OK" pair
            translation_context = list(context)
        else:
            # Later sentences: drop the first user/assistant pair from the context sent to the model
            translation_context = list(context)[2:]

        translated_sentence = translate_sentence(sentence, translation_context)
        translated_sentences.append(translated_sentence)

        # Record the sentence and its translation in the rolling context
        context.append({"role": "user", "content": sentence})
        context.append({"role": "assistant", "content": translated_sentence})

    return translated_sentences

# Sample English text for translation (business context)
text = """Dear valued clients and partners,

I hope this email finds you well. I am writing to provide you with an important update regarding our company's recent developments and future plans.

Firstly, I am pleased to announce that our Q3 financial results have exceeded expectations, with a 15% increase in revenue compared to the same period last year. This success is largely attributed to the launch of our new product line and the expansion of our services into emerging markets.

In light of this growth, we are planning to implement several strategic initiatives in the coming months:

1. Expansion of our R&D department: We will be investing significantly in research and development to maintain our competitive edge in the market.

2. Sustainability efforts: We are committed to reducing our carbon footprint by 30% over the next five years. This includes transitioning to renewable energy sources and implementing eco-friendly practices across all our operations.

3. Digital transformation: We will be upgrading our IT infrastructure to enhance efficiency and provide better service to our clients.

Additionally, we are excited to announce our upcoming annual conference, which will be held virtually this year due to ongoing global health concerns. The conference will take place on November 15-16, 2024, and will feature keynote speeches from industry leaders, interactive workshops, and networking opportunities.

We value your continued support and partnership. If you have any questions or would like further information about any of these initiatives, please don't hesitate to reach out to your account manager or contact our customer support team.

Thank you for your trust in our company. We look forward to achieving new milestones together.

Best regards,
John Smith
CEO, XYZ Corporation"""

translated = main(text)
print(translated)
```

結果 result
```
['貴社にご愛顧いただき、誠にありがとうございます。', 'このメールがご健在であることを心よりお祈り申し上げます。', '弊社の最近の進展と今後の計画について、重要なお知らせをご提供いたします。', 'まず、第3四半期の収益が予想を上回ったことをお知らせいたします。昨年の同時期と比較して、売上高が15%増加しました。', 'この成功は、新製品ラインの発売と、新興市場へのサービスの拡大が大きく貢献しています。', 'この成長を踏まえ、今後の数ヶ月にわたって、いくつかの戦略的イニシアティブを実施する予定です。', '1', 'R&D部門の拡大:市場での競争力を維持するために、大幅に研究開発に投資する予定です。', '2', 'サステナビリティの取り組み:次の5年間で、炭素排出量を30%削減することを目指しています。', 'これは、再生可能エネルギー源への移行と、すべての事業活動における環境にやさしい実践の導入を含むものです。', '3', 'デジタルトランスフォーメーション: 私たちのITインフラを強化し、効率を向上させ、より良いサービスを提供する', 'さらに、今年は新型コロナウイルス感染症の懸念が続くため、オンラインで開催されますが、毎年恒例の年次カンファレンスをお知らせいたします。', 'カンファレンスは2024年11月15日~16日に開催され、業界のリーダーによるキーノートスピーチ、インタラクティブワークショップ、ネットワークングの機会が盛りだくさんです。', '引き続きご支援とご協力を賜りますようお願い申し上げます。', 'これらのイニシアチブについてご質問がある場合や、さらに詳しい情報をご希望の場合は、ご担当マネジャーにご連絡するか、弊社のカスタマーサポートチームにご連絡ください。', '弊社の信頼を賜り、誠にありがとうございます。', '共に新たな目標を達成できることを楽しみにしています。', 'ご清栄のこととお慶び申し上げます。', 'ジョン・スミス', 'XYZ株式会社のCEO']
```
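As the output above shows, the numbered-list markers in the source ("1.", "2.", "3.") come out as standalone '1', '2', '3' entries, because split_sentences splits on every period. A small, hypothetical post-processing helper (not part of the original script) that folds such numeric-only entries into the entry that follows them might look like this:

```
def merge_numeric_markers(items):
    """Fold entries that are only digits (e.g. '1') into the entry that follows them."""
    merged = []
    pending = ""
    for item in items:
        if item.strip().isdigit():
            pending = item.strip() + ". "
        else:
            merged.append(pending + item)
            pending = ""
    if pending:  # a trailing bare number with nothing after it
        merged.append(pending.rstrip())
    return merged

print(merge_numeric_markers(translated))
```

Merging could equally be done on the source segments before translation, which would keep each marker and its sentence in a single request; this sketch only tidies the returned list.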


制限 Limitations

空行を渡すと、翻訳をせずに元文をそのまま出力してしまう現象が確認されています。
When an empty line is passed in, the model has been observed to return the original text unchanged instead of a translation.

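A minimal workaround sketch (not part of the original scripts, and assuming the translate_sentence helper defined above): skip blank input before it reaches the model.

```
def translate_or_skip(sentence, context):
    # The model tends to echo blank input back, so do not send it at all.
    if not sentence.strip():
        return ""
    return translate_sentence(sentence, context)
```
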
<!-- Provide a longer summary of what this model is. -->