# XLM-RoBERTa Azerbaijani NER Model

[![Hugging Face Model](https://img.shields.io/badge/Hugging%20Face-Model-blue)](https://huggingface.co/IsmatS/xlm-roberta-az-ner)

This model is a fine-tuned version of **XLM-RoBERTa** for Named Entity Recognition (NER) in the Azerbaijani language. It recognizes the entity types most commonly found in Azerbaijani text, such as personal names, locations, organizations, and dates, and is intended for tasks that require accurate entity extraction.
6
+
7
+ ## Model Details
8
+
9
+ - **Base Model**: `xlm-roberta-base`
10
+ - **Fine-tuned on**: [Azerbaijani Named Entity Recognition Dataset](https://huggingface.co/datasets/LocalDoc/azerbaijani-ner-dataset)
11
+ - **Task**: Named Entity Recognition (NER)
12
+ - **Language**: Azerbaijani (az)
13
+ - **Dataset**: Custom Azerbaijani NER dataset with entity tags such as `PERSON`, `LOCATION`, `ORGANISATION`, `DATE`, etc.
14
+
15
+ ### Data Source
16
+
17
+ The model was trained on the [Azerbaijani NER Dataset](https://huggingface.co/datasets/LocalDoc/azerbaijani-ner-dataset), which provides annotated data with 25 distinct entity types specifically for the Azerbaijani language. This dataset is an invaluable resource for improving NLP tasks in Azerbaijani, including entity recognition and language understanding.
18
+
19
+ ### Entity Types
20
+ The model recognizes the following entities:
21
+ - **PERSON**: Names of people
22
+ - **LOCATION**: Geographical locations
23
+ - **ORGANISATION**: Companies, institutions
24
+ - **DATE**: Dates and periods
25
+ - **MONEY**: Monetary values
26
+ - **TIME**: Time expressions
27
+ - **GPE**: Countries, cities, states
28
+ - **FACILITY**: Buildings, landmarks, etc.
29
+ - **EVENT**: Events and occurrences
30
+ - **...and more**
31
+
32
+ For the full list of entities, please refer to the dataset description.
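Token classifiers of this kind typically emit BIO-style tags (e.g. `B-PERSON`, `I-PERSON`, `O`) that are then merged into entity spans. As a minimal, illustrative sketch of that grouping step (the exact tag strings here are assumptions based on the entity list above, not necessarily the model's label set):

```python
def group_bio_tags(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new entity begins
            if current:
                entities.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(token)      # continuation of the open entity
        else:                             # "O" or an inconsistent tag closes it
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(words)) for etype, words in entities]

tokens = ["Bakı", "şəhərində", "İlham", "Əliyev", "."]
tags   = ["B-LOCATION", "O", "B-PERSON", "I-PERSON", "O"]
print(group_bio_tags(tokens, tags))
# → [('LOCATION', 'Bakı'), ('PERSON', 'İlham Əliyev')]
```

In practice the `transformers` pipeline shown below performs this grouping for you when `aggregation_strategy="simple"` is set.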

## Performance Metrics

### Epoch-wise Performance

| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 |
|-------|---------------|-----------------|-----------|----------|----------|
| 1 | 0.323100 | 0.275503 | 0.775799 | 0.694886 | 0.733117 |
| 2 | 0.272500 | 0.262481 | 0.739266 | 0.739900 | 0.739583 |
| 3 | 0.248600 | 0.252498 | 0.751478 | 0.741152 | 0.746280 |
| 4 | 0.236800 | 0.249968 | 0.754882 | 0.741449 | 0.748105 |
| 5 | 0.223800 | 0.252187 | 0.764390 | 0.740460 | 0.752235 |
| 6 | 0.218600 | 0.249887 | 0.756352 | 0.741646 | 0.748927 |
| 7 | 0.209700 | 0.250748 | 0.760696 | 0.739438 | 0.749916 |

### Detailed Classification Report (Epoch 7)

This table summarizes the precision, recall, and F1-score for each entity type, calculated on the validation dataset.

| Entity Type | Precision | Recall | F1-Score | Support |
|----------------|-----------|--------|----------|---------|
| ART | 0.54 | 0.20 | 0.29 | 1857 |
| DATE | 0.52 | 0.47 | 0.50 | 880 |
| EVENT | 0.69 | 0.35 | 0.47 | 96 |
| FACILITY | 0.69 | 0.69 | 0.69 | 1170 |
| LAW | 0.60 | 0.61 | 0.60 | 1122 |
| LOCATION | 0.77 | 0.82 | 0.80 | 9132 |
| MONEY | 0.61 | 0.57 | 0.59 | 540 |
| ORGANISATION | 0.69 | 0.68 | 0.69 | 544 |
| PERCENTAGE | 0.79 | 0.82 | 0.81 | 3591 |
| PERSON | 0.87 | 0.83 | 0.85 | 7037 |
| PRODUCT | 0.83 | 0.85 | 0.84 | 2808 |
| TIME | 0.55 | 0.51 | 0.53 | 1569 |

**Overall Metrics**:

- **Micro Average**: Precision = 0.76, Recall = 0.74, F1-Score = 0.75
- **Macro Average**: Precision = 0.68, Recall = 0.62, F1-Score = 0.64
- **Weighted Average**: Precision = 0.75, Recall = 0.74, F1-Score = 0.74
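The gap between the micro and macro averages above reflects how they are computed: micro averaging pools true/false positives across all entity types (so frequent types like LOCATION and PERSON dominate), macro averaging takes the unweighted mean of per-type scores (so rare, harder types like EVENT pull it down), and weighted averaging weights each type by its support. A small sketch of the micro/macro distinction, using made-up counts rather than this model's actual confusion statistics:

```python
def micro_macro(per_class):
    """per_class: list of (true_pos, false_pos, false_neg) per entity type."""
    tps = sum(tp for tp, _, _ in per_class)
    fps = sum(fp for _, fp, _ in per_class)
    fns = sum(fn for _, _, fn in per_class)
    micro_p = tps / (tps + fps)            # pooled over all classes
    micro_r = tps / (tps + fns)
    precs = [tp / (tp + fp) for tp, fp, _ in per_class]
    recs  = [tp / (tp + fn) for tp, _, fn in per_class]
    macro_p = sum(precs) / len(precs)      # unweighted mean over classes
    macro_r = sum(recs) / len(recs)
    return (micro_p, micro_r), (macro_p, macro_r)

# Two hypothetical classes: one large and accurate, one small and poor.
micro, macro = micro_macro([(90, 10, 10), (1, 1, 3)])
print("micro:", [round(x, 3) for x in micro])  # dominated by the large class
print("macro:", [round(x, 3) for x in macro])  # dragged down by the small class
```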

## Usage

You can use this model with the Hugging Face `transformers` library to perform NER on Azerbaijani text. Here's an example:

### Installation

Make sure you have the `transformers` library and a backend such as PyTorch installed:

```bash
pip install transformers torch
```

### Inference Example

Load the model and tokenizer, then run the NER pipeline on Azerbaijani text:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the model and tokenizer
model_name = "IsmatS/xlm-roberta-az-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Set up the NER pipeline
nlp_ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

# Example sentence
sentence = "Bakı şəhərində Azərbaycan Respublikasının prezidenti İlham Əliyev."
entities = nlp_ner(sentence)

# Display entities
for entity in entities:
    print(f"Entity: {entity['word']}, Label: {entity['entity_group']}, Score: {entity['score']}")
```

### Sample Output

```json
[
  {
    "entity_group": "PERSON",
    "score": 0.99,
    "word": "İlham Əliyev",
    "start": 34,
    "end": 46
  },
  {
    "entity_group": "LOCATION",
    "score": 0.98,
    "word": "Bakı",
    "start": 0,
    "end": 4
  }
]
```
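Pipeline output like the above can include low-confidence spans, so it is often worth filtering on the `score` field before downstream use. A small illustrative sketch over an already-computed result list (the 0.90 threshold is an arbitrary choice for the example, not a recommendation tied to this model):

```python
def filter_entities(entities, min_score=0.90):
    """Keep only entity spans whose aggregate score clears the threshold."""
    return [e for e in entities if e["score"] >= min_score]

entities = [
    {"entity_group": "PERSON",   "score": 0.99, "word": "İlham Əliyev"},
    {"entity_group": "LOCATION", "score": 0.98, "word": "Bakı"},
    {"entity_group": "DATE",     "score": 0.42, "word": "il"},  # likely spurious
]
print(filter_entities(entities))
# keeps the PERSON and LOCATION spans, drops the low-confidence DATE
```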

## Training Details

- **Training Data**: This model was fine-tuned on the [Azerbaijani NER Dataset](https://huggingface.co/datasets/LocalDoc/azerbaijani-ner-dataset) with 25 entity types.
- **Training Framework**: Hugging Face `transformers`
- **Optimizer**: AdamW
- **Epochs**: 8
- **Batch Size**: 64
- **Evaluation Metric**: F1-score
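Fine-tuning a token classifier on a subword tokenizer like XLM-RoBERTa's requires aligning word-level NER labels to subword tokens; the usual convention is to label only the first subword of each word and mask the rest with `-100` so they are ignored by the loss. A minimal sketch of that alignment, operating on a `word_ids` sequence of the kind `transformers` fast tokenizers return (this helper is illustrative, not the exact training code used for this model):

```python
IGNORE = -100  # label id ignored by PyTorch's cross-entropy loss

def align_labels(word_ids, word_labels):
    """Map word-level label ids onto subword tokens.

    word_ids has one entry per subword token: None marks special tokens
    like <s> and </s>, and repeated indices mark subwords of one word.
    """
    aligned, previous = [], None
    for wid in word_ids:
        if wid is None:              # special token: always ignored
            aligned.append(IGNORE)
        elif wid != previous:        # first subword of a new word gets the label
            aligned.append(word_labels[wid])
        else:                        # continuation subword: ignored
            aligned.append(IGNORE)
        previous = wid
    return aligned

# Hypothetical example: "Bakı" split into two subwords, sentence wrapped
# in special tokens; 5 and 0 stand in for label ids like B-LOCATION and O.
print(align_labels([None, 0, 0, 1, None], [5, 0]))
# → [-100, 5, -100, 0, -100]
```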

## Limitations

- The model is trained specifically for the Azerbaijani language and may not generalize well to other languages.
- Certain rare entities may be misclassified due to limited training data in those categories.

## Citation

If you use this model in your research or application, please consider citing:

```bibtex
@misc{ismats_az_ner_2024,
  title={XLM-RoBERTa Azerbaijani NER Model},
  author={Ismat Samadov},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/IsmatS/xlm-roberta-az-ner}
}
```

## License

This model is available under the [MIT License](https://opensource.org/licenses/MIT).