---
license: cc-by-nd-4.0
language:
- en
- taw
metrics:
- bleu
base_model:
- repleeka/eng-tagin-nmt
pipeline_tag: translation
library_name: transformers
tags:
- tawra (Digaro Mishmi)
- english
- NMT
---
# Model Card for eng-taw-nmt


Digaro Mishmi, also known as Tawra, Taoran, Taraon, or Darang, is a member of the Digarish language family, spoken by the Mishmi people in northeastern Arunachal Pradesh, India, and in parts of Zayü County, Tibet, China. The language has several autonyms, including tɑ31 rɑŋ53 or da31 raŋ53 in Arunachal Pradesh, and tɯŋ53 in China, where its speakers are officially known as the Deng (登). It is spoken mainly in the Anjaw district of Arunachal Pradesh (in the Hayuliang, Changlagam, and Goiliang circles), as well as in the Dibang Valley district and parts of Assam. Although Ethnologue, citing the 2001 census, estimated around 35,000 native speakers, Digaro Mishmi remains critically under-resourced in terms of computational linguistics and digital preservation.
*Source: Wikipedia*


## Model Details

### Model Description

- **Developed by:** Tungon Dugi
- **Affiliation:** National Institute of Technology Arunachal Pradesh, India  
- **Email:** [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected])
- **Model type:** Translation
- **Language(s) (NLP):** English (en) and Tawra (taw)
- **Finetuned from model:** repleeka/eng-tagin-nmt


### Direct Use

This model can be used for translation and text-to-text generation.


## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("repleeka/eng-taw-nmt")
model = AutoModelForSeq2SeqLM.from_pretrained("repleeka/eng-taw-nmt")

# Translate an English sentence to Tawra (example input; output quality
# depends on the model's current state of training)
inputs = tokenizer("Good morning.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

### Training Data

[English-Tawra Corpus](#)

## Evaluation

The model achieved the following metrics after 10 training epochs:

| Metric             | Value           |
|--------------------|-----------------|
| BLEU Score         | 0.25157         |
| Evaluation Runtime | 644.278 seconds |

The BLEU score indicates promising early results for an extremely low-resource language pair, and the model is usable for English-to-Tawra translation on text similar to the English-Tawra Corpus. This model represents a significant step forward for Tawra language resources, enabling English-to-Tawra translation in NLP applications.
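For context on the metric above: BLEU combines modified n-gram precisions with a brevity penalty. The sketch below is a minimal, unsmoothed pure-Python version for a single sentence pair with uniform weights; it is illustrative only, and real evaluations should use an established implementation such as `sacrebleu`.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Unsmoothed sentence-level BLEU (uniform weights, whitespace tokens)."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # any zero precision collapses the geometric mean
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish hypotheses shorter than the reference
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * geo_mean

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 3))  # → 1.0
```

A score of 0.25157 on this 0–1 scale (roughly 25 on the common 0–100 scale) is a reasonable starting point for a first NMT model on an extremely low-resource pair.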

#### Summary

The `eng_taw_nmt` model is currently in its early phase of development. To enhance its performance, it requires a more substantial dataset and improved training resources. This would facilitate better generalization and accuracy in translating between English and Tawra, addressing the challenges faced by this extremely low-resource language. As the model evolves, ongoing efforts will be necessary to refine its capabilities further.