---
inference: false
language:
- ja
- en
---
# A new version has been released

2024/03/04  
[webbigdata/C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter)  
The GPU memory requirement has increased to 8.1 GB. However, it still runs on the free version of Colab, and performance is much improved!  

2023/10/21  
[ALMA-7B-Ja-V2](https://huggingface.co/webbigdata/ALMA-7B-Ja-V2)  
Overall performance has been improved.  

Below is a description of the old version. We urge you to try the newer version above.  


# webbigdata/ALMA-7B-Ja-GPTQ-Ja-En

The original ALMA model, [ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) (26.95 GB), is a translation model built on a new paradigm.  

[ALMA-7B-Ja](https://huggingface.co/webbigdata/ALMA-7B-Ja) (13.3 GB) is a machine translation model that applies ALMA's training method to Japanese-English translation.  

This model is a GPTQ-quantized version of it that reduces model size (3.9 GB) and memory usage, though likely at some cost in translation quality.  
Its translation ability for languages other than Japanese and English has also deteriorated significantly.  

[Free Colab Sample](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_GPTQ_Ja_En_Free_Colab_sample.ipynb)  

If you want to translate an entire file at once, try the Colab notebook below.  
[ALMA_7B_Ja_GPTQ_Ja_En_batch_translation_sample](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_GPTQ_Ja_En_batch_translation_sample.ipynb)
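
For quick local testing outside Colab, a minimal sketch along the following lines should work. This is not the notebooks' exact code: the `transformers`/`auto-gptq` loading path and the ALMA-style prompt template are assumptions, so adjust them to match the Colab samples if they differ.

```python
# Minimal local-inference sketch, assuming `pip install transformers auto-gptq optimum`
# and a CUDA GPU. The prompt template below follows the ALMA examples and may
# need adjusting to match the Colab notebooks.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "webbigdata/ALMA-7B-Ja-GPTQ-Ja-En"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = ("Translate this from Japanese to English:\n"
          "Japanese: 今日はいい天気ですね。\n"
          "English:")
inputs = tok(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, num_beams=5)

# Print only the newly generated tokens, not the prompt.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Beam search (`num_beams=5`) tends to improve quality but increases memory use; see the note below if `generate` fails.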

If you encounter the error below:
```RuntimeError: probability tensor contains either `inf`, `nan` or element < 0```  
it means you are running out of memory. Decrease `num_beams` or the number of tokens you generate.  
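
For example, with the `generate` call from the sketch above, you could fall back to a cheaper decode (the values here are illustrative):

```python
# Cheaper decoding for low-memory situations: greedy search instead of
# beam search, and a shorter output (values are illustrative).
out = model.generate(**inputs, max_new_tokens=64, num_beams=1)
```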


**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance. 
Please find more details in their [paper](https://arxiv.org/abs/2309.11674).
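
To make the two-step recipe concrete, here is a rough sketch of what it could look like with the Hugging Face `Trainer`. This is purely illustrative, not the authors' training code: the file names, hyperparameters, and prompt format are placeholders.

```python
# Rough illustration of ALMA's two-step recipe (NOT the authors' code).
# Assumes `monolingual.txt` holds raw target-language text and
# `parallel.jsonl` holds {"ja": ..., "en": ...} pairs (placeholder files).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # ALMA starts from a LLaMA-2 base model
tok = AutoTokenizer.from_pretrained(base)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token  # LLaMA-style tokenizers ship without one
model = AutoModelForCausalLM.from_pretrained(base)

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=512)

collator = DataCollatorForLanguageModeling(tok, mlm=False)

# Step 1: continued fine-tuning on monolingual text (causal LM objective).
mono = load_dataset("text", data_files="monolingual.txt")["train"]
mono = mono.map(tokenize, batched=True, remove_columns=["text"])
Trainer(model=model,
        args=TrainingArguments(output_dir="stage1", num_train_epochs=1),
        train_dataset=mono, data_collator=collator).train()

# Step 2: fine-tuning on a small set of high-quality parallel pairs,
# rendered into a translation prompt.
def to_prompt(ex):
    return {"text": ("Translate this from Japanese to English:\n"
                     f"Japanese: {ex['ja']}\nEnglish: {ex['en']}")}

para = load_dataset("json", data_files="parallel.jsonl")["train"]
para = para.map(to_prompt)
para = para.map(tokenize, batched=True, remove_columns=["ja", "en", "text"])
Trainer(model=model,
        args=TrainingArguments(output_dir="stage2", num_train_epochs=1),
        train_dataset=para, data_collator=collator).train()
```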
```
@misc{xu2023paradigm,
      title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models}, 
      author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
      year={2023},
      eprint={2309.11674},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## About this work
- **This work was done by :** [webbigdata](https://webbigdata.jp/).