---
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: accent
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: dev
    num_bytes: 387850434.764
    num_examples: 3437
  - name: dev_clean
    num_bytes: 399259723.816
    num_examples: 3428
  - name: test
    num_bytes: 397670702.349
    num_examples: 3437
  - name: test_clean
    num_bytes: 378342487.48
    num_examples: 3477
  - name: train
    num_bytes: 3121855664.292
    num_examples: 27692
  - name: train_clean
    num_bytes: 3117730545.272
    num_examples: 27648
  download_size: 7793287789
  dataset_size: 7802709557.973
configs:
- config_name: default
  data_files:
  - split: dev
    path: data/dev-*
  - split: dev_clean
    path: data/dev_clean-*
  - split: test
    path: data/test-*
  - split: test_clean
    path: data/test_clean-*
  - split: train
    path: data/train-*
  - split: train_clean
    path: data/train_clean-*
license: cc0-1.0
task_categories:
- automatic-speech-recognition
language:
- cy
size_categories:
- 10K<n<100K
---
[See below for English](https://huggingface.co/datasets/cymen-arfor/lleisiau-arfor/blob/main/README.md#voices-of-arfor)

# Lleisiau ARFOR

Cafodd y set ddata hon ei chreu gan Cymen fel rhan o brosiect a ariannwyd gan [ARFOR](https://www.rhaglenarfor.cymru/index.html) ar y cyd â’r [Uned Technolegau Iaith](https://huggingface.co/techiaith) ym Mhrifysgol Bangor.   

Nod y prosiect oedd casglu llawer iawn o ddata llafar Cymraeg o ansawdd uchel, ynghyd â’u trawsgrifiadau cyfatebol, gan ganolbwyntio’n benodol ar iaith anffurfiol, sgyrsiol a digymell o ardal Arfor. Bydd y set ddata sy’n deillio ohoni wedyn yn cael ei defnyddio i wella technoleg adnabod llais yng Nghymru, ac i sicrhau bod y Gymraeg ar gael gyda'r datblygiadau technolegol diweddaraf.   

Er mwyn cyflawni hyn, aeth swyddog y prosiect ati i gael caniatâd i ddefnyddio podlediadau sydd eisoes yn bodoli, yn ogystal â recordio digwyddiadau cyhoeddus a sgyrsiau anffurfiol rhwng gwirfoddolwyr. Mae’r holl ddata wedi cael ei anonymeiddio, ac mae wedi'i ryddhau o dan drwydded agored (CC0).

Mae arddull y trawsgrifiadau'n dilyn yn fras ganllawiau [Banc Trawsgrifiadau](https://huggingface.co/datasets/techiaith/banc-trawsgrifiadau-bangor)’r Uned Technolegau Iaith, yn enwedig o ran atalnodi a fformatio'r data ble mae'n wahanol iawn i Gymraeg Safonol.   

Mae’r set ddata yn cynnwys tair rhan, sef `test`, `train` a `dev`, yn ogystal â fersiwn glân (`clean`) ar gyfer pob un o’r rhaniadau data hynny. Mae’r rhan `train` yn cynnwys 80% o’r data ac mae `test` a `dev` yn cynnwys 10% yr un. Yn y fersiynau glân, mae’r holl anodiadau ieithyddol a'r nodau arbennig wedi cael eu tynnu, er mwyn lleihau’r angen am fformatio data. Fodd bynnag, bydd dal yr opsiwn gennych i ddefnyddio'r rhaniadau data wedi’u hanodi’n llawn er mwyn creu set ddata wedi'i phersonoleiddio.

Mae'r anodiadau yn cynnwys gwybodaeth fel:  
- Geiriau ac ymadroddion Saesneg, wedi'u hamlygu gyda sêr. Er enghraifft: \*spooky*.  
- dau ddewis gwahanol ar gyfer trawsysgrifio rhifau wedi’u gwahanu gan y nod bibell | ac wedi’u hamgylchynu gan gromfachau cyrliog, er enghraifft: {dau|2}
- seiniau paraieithyddol, fel \<chwerthin>   
- geiriau a synau llenwi, fel “yy” ac “yym”

Dyma enghraifft o’r data:   

```
path	sentence	accent	language
file30436.wav	{GPT|en} {pedwar|4} os 'di o yn rhan o'r meddalwedd dach chi'n iwsio.	Gogledd Orllewin	cy
file1726.wav 	Trwy'r ymgyrch *Black Lives Matter* wnaeth bobl ifanc, a lot o bobl ifanc sylwi...	Gogledd Orllewin	cy
file10784.wav	<ochneidio> A dwi'n bron â cael digon!	Canolbarth	cy
```

Mae’r set ddata yn cynnwys pedair colofn: `path`, `sentence`, `accent`, `language`.

| Colofn| Disgrifiad |
| ------ | ------ |
| `path`| Llwybr neu enw'r ffeil yn y ffolder 'clips'|
| `sentence`| Y trawsgrifiad|
| `accent`| Acen y siaradwr. Naill ai: `Gogledd Orllewin`, `Gogledd Ddwyrain`, `Canolbarth`, `De Ddwyrain`, `De Orllewin`, `Patagonia`|
| `language`| Iaith y segment cyfan. Naill ai: `en`, os yw pob un o'r geiriau yn Saesneg, neu `cy`, os oes o leiaf un gair Cymraeg yn y segment|

Os oes gennych chi unrhyw gwestiynau am y set ddata hon, cysylltwch â [email protected]

---

# Voices of ARFOR

This dataset was created at Cymen as part of a project funded by [ARFOR](https://www.rhaglenarfor.cymru/index.en.html) in collaboration with the [Language Technologies Unit](https://huggingface.co/techiaith) at Bangor University. 

The goal of the project was to collect a large amount of high-quality Welsh speech data, together with corresponding transcriptions, with a particular focus on informal, conversational and spontaneous speech from the Arfor area. The resulting dataset will then be used to improve Welsh speech recognition technology and to ensure the availability of the Welsh language in the latest technological advancements.

To achieve this, the project officer obtained permission to use already existing podcasts and to record meetings, public events and conversations between volunteers. All of the data has been anonymised and is being released under an open (CC0) license.

The transcription style loosely follows the guidelines of the Language Technologies Unit’s [Banc Trawsgrifiadau](https://huggingface.co/datasets/techiaith/banc-trawsgrifiadau-bangor), particularly in punctuation and data formatting, while diverging with regard to formalising spelling and improving readability.

The dataset consists of three splits, `test`, `train` and `dev`, as well as a `clean` version of each of those splits. The `train` split contains 80% of the data, while `test` and `dev` contain 10% each. In the clean versions, all linguistic annotations and special characters have been removed to minimise the need for data formatting, although the fully annotated splits can still be used to customise the dataset.
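Using the example counts from the metadata header at the top of this card (annotated splits), the stated proportions can be checked with a short sketch:

```python
# Split sizes taken from the dataset metadata above (annotated splits).
splits = {"train": 27692, "test": 3437, "dev": 3437}
total = sum(splits.values())
for name, count in splits.items():
    print(f"{name}: {count / total:.1%}")
# train ≈ 80.1%; test and dev ≈ 9.9% each
```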

Annotations include information such as:
- English or other foreign language words and segments indicated by asterisks, for example \*spooky*
- two different options for transcribing numbers separated by the pipe character | and surrounded by curly brackets, for example {dau|2} 
- paralinguistic sounds, such as \<chwerthin> 
- filler words and sounds, such as “yy” and “yym”
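As a rough illustration, annotations of the kinds listed above could be stripped with a few regular expressions. This is a minimal, unofficial sketch, not the tool used to produce the `clean` splits; in particular, the choice to keep the spelled-out alternative from `{word|digit}` pairs is an assumption.

```python
import re

def clean_sentence(text: str) -> str:
    """Strip the linguistic annotations described above, keeping plain text."""
    # {dau|2} -> dau : keep the first (spelled-out) alternative (assumption)
    text = re.sub(r"\{([^|}]*)\|[^}]*\}", r"\1", text)
    # *Black Lives Matter* -> Black Lives Matter : drop the asterisk markers
    text = text.replace("*", "")
    # <chwerthin> -> removed : paralinguistic sounds
    text = re.sub(r"<[^>]*>", "", text)
    # "yy" / "yym" -> removed : filler sounds
    text = re.sub(r"\b(yy|yym)\b,?", "", text)
    # collapse any leftover whitespace
    return re.sub(r"\s+", " ", text).strip()

print(clean_sentence("<ochneidio> A dwi'n bron â cael digon!"))
# -> A dwi'n bron â cael digon!
```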

This is an example of the data: 

```
path	sentence	accent	language
file30436.wav	{GPT|en} {pedwar|4} os 'di o yn rhan o'r meddalwedd dach chi'n iwsio.	Gogledd Orllewin	cy
file1726.wav 	Trwy'r ymgyrch *Black Lives Matter* wnaeth bobl ifanc, a lot o bobl ifanc sylwi...	Gogledd Orllewin	cy
file10784.wav	<ochneidio> A dwi'n bron â cael digon!	Canolbarth	cy
```
The dataset consists of four columns: `path`, `sentence`, `accent` and `language`.

| Column| Description |
| ------ | ------ |
| `path`| The path or file name in the 'clips' folder|
| `sentence`| The transcription|
| `accent`| The accent of the speaker. Either: `Gogledd Orllewin`, `Gogledd Ddwyrain`, `Canolbarth`, `De Ddwyrain`, `De Orllewin`, `Patagonia`|
| `language`| The language of the entire segment. Either: `en`, if all of the words are English, or `cy`, if at least one word in the segment is Welsh|
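A row in this column layout can be read with the Python standard library. The in-memory row below is a hypothetical example for illustration, not the actual on-disk file layout:

```python
import csv
import io

# A single hypothetical tab-separated row in the column layout described above.
row = "file10784.wav\t<ochneidio> A dwi'n bron â cael digon!\tCanolbarth\tcy"
reader = csv.DictReader(
    io.StringIO(row),
    fieldnames=["path", "sentence", "accent", "language"],
    delimiter="\t",
)
record = next(reader)
print(record["accent"], record["language"])  # -> Canolbarth cy
```

In practice, with the `datasets` library installed, a split would typically be loaded directly from the Hub, e.g. `load_dataset("cymen-arfor/lleisiau-arfor", split="train_clean")`.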

If you have any questions about this dataset please contact [email protected]