
KazParC

Kazakh Parallel Corpus (KazParC) is a parallel corpus designed for machine translation across Kazakh, English, Russian, and Turkish. It is the first and largest publicly available corpus of its kind, comprising 372,164 parallel sentences that cover a range of domains and were developed with the assistance of human translators.

Data Sources and Domains

The data were collected from a variety of sources, which are categorised into five broad domains:

| Domain | # lines | % | # EN tokens | % | # KK tokens | % | # RU tokens | % | # TR tokens | % |
|---|---|---|---|---|---|---|---|---|---|---|
| Mass media | 120,547 | 32.4 | 1,817,276 | 28.3 | 1,340,346 | 28.6 | 1,454,430 | 29.0 | 1,311,985 | 28.5 |
| General | 94,988 | 25.5 | 844,541 | 13.1 | 578,236 | 12.3 | 618,960 | 12.3 | 608,020 | 13.2 |
| Legal documents | 77,183 | 20.8 | 2,650,626 | 41.3 | 1,925,561 | 41.0 | 1,991,222 | 39.7 | 1,880,081 | 40.8 |
| Education and science | 46,252 | 12.4 | 522,830 | 8.1 | 392,348 | 8.4 | 444,786 | 8.9 | 376,484 | 8.2 |
| Fiction | 32,932 | 8.9 | 589,001 | 9.2 | 456,385 | 9.7 | 510,168 | 10.2 | 433,968 | 9.4 |
| Total | 371,902 | 100 | 6,424,274 | 100 | 4,692,876 | 100 | 5,019,566 | 100 | 4,610,538 | 100 |
In the per-pair tables below, the two values in the # sents, # tokens, and # types columns correspond to the first and second language of the pair, respectively.

| Pair | # lines | # sents | # tokens | # types |
|---|---|---|---|---|
| KK↔EN | 363,594 | 362,230 / 361,087 | 4,670,789 / 6,393,381 | 184,258 / 59,062 |
| KK↔RU | 363,482 | 362,230 / 362,748 | 4,670,593 / 4,996,031 | 184,258 / 183,204 |
| KK↔TR | 362,150 | 362,230 / 361,660 | 4,668,852 / 4,586,421 | 184,258 / 175,145 |
| EN↔RU | 363,456 | 361,087 / 362,748 | 6,392,301 / 4,994,310 | 59,062 / 183,204 |
| EN↔TR | 362,392 | 361,087 / 361,660 | 6,380,703 / 4,579,375 | 59,062 / 175,145 |
| RU↔TR | 363,324 | 362,748 / 361,660 | 4,999,850 / 4,591,847 | 183,204 / 175,145 |

Synthetic Corpus

To make our parallel corpus more extensive, we carried out web crawling to gather a total of 1,797,066 sentences from English-language websites. These sentences were then automatically translated into Kazakh, Russian, and Turkish using the Google Translate service. We refer to this collection of data as 'SynC' (Synthetic Corpus).

| Pair | # lines | # sents | # tokens | # types |
|---|---|---|---|---|
| KK↔EN | 1,787,050 | 1,782,192 / 1,781,019 | 26,630,960 / 35,291,705 | 685,135 / 300,556 |
| KK↔RU | 1,787,448 | 1,782,192 / 1,777,500 | 26,654,195 / 30,241,895 | 685,135 / 672,146 |
| KK↔TR | 1,791,425 | 1,782,192 / 1,782,257 | 26,726,439 / 27,865,860 | 685,135 / 656,294 |
| EN↔RU | 1,784,513 | 1,781,019 / 1,777,500 | 35,244,800 / 30,175,611 | 300,556 / 672,146 |
| EN↔TR | 1,788,564 | 1,781,019 / 1,782,257 | 35,344,188 / 27,806,708 | 300,556 / 656,294 |
| RU↔TR | 1,788,027 | 1,777,500 / 1,782,257 | 30,269,083 / 27,816,210 | 672,146 / 656,294 |

Data Splits

KazParC

We first created a test set by randomly selecting 250 unique rows from each of the sources outlined in Data Sources and Domains. The remaining data were divided into language pairs and split 80/20 into training and validation sets, while maintaining the domain distribution in both.

| Pair | Split | # lines | # sents | # tokens | # types |
|---|---|---|---|---|---|
| KK↔EN | Train | 290,877 | 286,958 / 286,197 | 3,693,263 / 5,057,687 | 164,766 / 54,311 |
| KK↔EN | Valid | 72,719 | 72,426 / 72,403 | 920,482 / 1,259,827 | 83,057 / 32,063 |
| KK↔EN | Test | 4,750 | 4,750 / 4,750 | 57,044 / 75,867 | 17,475 / 9,729 |
| KK↔RU | Train | 290,785 | 286,943 / 287,215 | 3,689,799 / 3,945,741 | 164,995 / 165,882 |
| KK↔RU | Valid | 72,697 | 72,413 / 72,439 | 923,750 / 988,374 | 82,958 / 87,519 |
| KK↔RU | Test | 4,750 | 4,750 / 4,750 | 57,044 / 61,916 | 17,475 / 18,804 |
| KK↔TR | Train | 289,720 | 286,694 / 286,279 | 3,691,751 / 3,626,361 | 164,961 / 157,460 |
| KK↔TR | Valid | 72,430 | 72,211 / 72,190 | 920,057 / 904,199 | 82,698 / 80,885 |
| KK↔TR | Test | 4,750 | 4,750 / 4,750 | 57,044 / 55,861 | 17,475 / 17,284 |
| EN↔RU | Train | 290,764 | 286,185 / 287,261 | 5,058,530 / 3,950,362 | 54,322 / 165,701 |
| EN↔RU | Valid | 72,692 | 72,377 / 72,427 | 1,257,904 / 982,032 | 32,208 / 87,541 |
| EN↔RU | Test | 4,750 | 4,750 / 4,750 | 75,867 / 61,916 | 9,729 / 18,804 |
| EN↔TR | Train | 289,913 | 285,967 / 286,288 | 5,048,274 / 3,621,531 | 54,224 / 157,369 |
| EN↔TR | Valid | 72,479 | 72,220 / 72,219 | 1,256,562 / 901,983 | 32,269 / 80,838 |
| EN↔TR | Test | 4,750 | 4,750 / 4,750 | 75,867 / 55,861 | 9,729 / 17,284 |
| RU↔TR | Train | 290,899 | 287,241 / 286,475 | 3,947,809 / 3,626,436 | 165,482 / 157,470 |
| RU↔TR | Valid | 72,725 | 72,455 / 72,362 | 990,125 / 909,550 | 87,831 / 80,962 |
| RU↔TR | Test | 4,750 | 4,750 / 4,750 | 61,916 / 55,861 | 18,804 / 17,284 |
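For illustration only, a domain-stratified 80/20 split like the one described above can be approximated with pandas and scikit-learn. This is a minimal sketch under assumed inputs (a CSV of aligned pairs with a "domain" column and a hypothetical file name), not the script used to produce the released splits.

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical input: aligned sentence pairs for one language pair, with a "domain" column.
df = pd.read_csv("kazparc_pairs_kk_en.csv")  # assumed file name, for illustration only

# 80/20 split, stratified so that the domain distribution is preserved
# in both the training and validation portions.
train_df, valid_df = train_test_split(
    df,
    test_size=0.2,
    stratify=df["domain"],
    random_state=42,
)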

SynC

We divided the synthetic corpus into training and validation sets with a 90/10 ratio.

| Pair | Split | # lines | # sents | # tokens | # types |
|---|---|---|---|---|---|
| KK↔EN | Train | 1,608,345 | 1,604,414 / 1,603,426 | 23,970,260 / 31,767,617 | 650,144 / 286,372 |
| KK↔EN | Valid | 178,705 | 178,654 / 178,639 | 2,660,700 / 3,524,088 | 208,838 / 105,517 |
| KK↔RU | Train | 1,608,703 | 1,604,468 / 1,600,643 | 23,992,148 / 27,221,583 | 650,170 / 642,604 |
| KK↔RU | Valid | 178,745 | 178,691 / 178,642 | 2,662,047 / 3,020,312 | 209,188 / 235,642 |
| KK↔TR | Train | 1,612,282 | 1,604,793 / 1,604,822 | 24,053,671 / 25,078,688 | 650,384 / 626,724 |
| KK↔TR | Valid | 179,143 | 179,057 / 179,057 | 2,672,768 / 2,787,172 | 209,549 / 221,773 |
| EN↔RU | Train | 1,606,061 | 1,603,199 / 1,600,372 | 31,719,781 / 27,158,101 | 286,645 / 642,686 |
| EN↔RU | Valid | 178,452 | 178,419 / 178,379 | 3,525,019 / 3,017,510 | 104,834 / 235,069 |
| EN↔TR | Train | 1,609,707 | 1,603,636 / 1,604,545 | 31,805,393 / 25,022,782 | 286,387 / 626,740 |
| EN↔TR | Valid | 178,857 | 178,775 / 178,796 | 3,538,795 / 2,783,926 | 105,641 / 221,372 |
| RU↔TR | Train | 1,609,224 | 1,600,605 / 1,604,521 | 27,243,278 / 25,035,274 | 642,797 / 626,587 |
| RU↔TR | Valid | 178,803 | 178,695 / 178,750 | 3,025,805 / 2,780,936 | 235,970 / 221,792 |

Corpus Structure

The corpus is organised into two groups, distinguished by file prefix: files "01" through "19" carry the "kazparc" prefix, while files "20" through "32" carry the "sync" prefix.

├── kazparc
   ├── 01_kazparc_all_entries.csv
   ├── 02_kazparc_train_kk_en.csv
   ├── 03_kazparc_train_kk_ru.csv
   ├── 04_kazparc_train_kk_tr.csv
   ├── 05_kazparc_train_en_ru.csv
   ├── 06_kazparc_train_en_tr.csv
   ├── 07_kazparc_train_ru_tr.csv
   ├── 08_kazparc_valid_kk_en.csv
   ├── 09_kazparc_valid_kk_ru.csv
   ├── 10_kazparc_valid_kk_tr.csv
   ├── 11_kazparc_valid_en_ru.csv
   ├── 12_kazparc_valid_en_tr.csv
   ├── 13_kazparc_valid_ru_tr.csv
   ├── 14_kazparc_test_kk_en.csv
   ├── 15_kazparc_test_kk_ru.csv
   ├── 16_kazparc_test_kk_tr.csv
   ├── 17_kazparc_test_en_ru.csv
   ├── 18_kazparc_test_en_tr.csv
   ├── 19_kazparc_test_ru_tr.csv
├── sync
   ├── 20_sync_all_entries.csv
   ├── 21_sync_train_kk_en.csv
   ├── 22_sync_train_kk_ru.csv
   ├── 23_sync_train_kk_tr.csv
   ├── 24_sync_train_en_ru.csv
   ├── 25_sync_train_en_tr.csv
   ├── 26_sync_train_ru_tr.csv
   ├── 27_sync_valid_kk_en.csv
   ├── 28_sync_valid_kk_ru.csv
   ├── 29_sync_valid_kk_tr.csv
   ├── 30_sync_valid_en_ru.csv
   ├── 31_sync_valid_en_tr.csv
   ├── 32_sync_valid_ru_tr.csv

KazParC files

  • File "01" contains the original, unprocessed text data for the four languages considered within KazParC.
  • Files "02" through "19" represent pre-processed texts divided into language pairs for training (Files "02" to "07"), validation (Files "08" to "13"), and testing (Files "14" to "19"). Language pairs are indicated within the filenames using two-letter language codes (e.g., kk_en).

SynC files

  • File "20" contains raw, unprocessed text data for the four languages.
  • Files "21" to "32" contain pre-processed text divided into language pairs for training (Files "21" to "26") and validation (Files "27" to "32") purposes.
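If you prefer to work with these CSV files directly rather than through the datasets loader shown in How to Use, one option is to download a local copy of the repository with the huggingface_hub library. This is an optional sketch, not part of the official instructions, and the local directory name is just an example.

from huggingface_hub import snapshot_download

# Download the whole dataset repository (both the kazparc and sync folders)
# into a local directory and return the path to that directory.
local_path = snapshot_download(
    repo_id="issai/kazparc",
    repo_type="dataset",
    local_dir="kazparc_data",  # example target directory
)
print(local_path)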

Data Fields

In both "01" and "20", each line consists of specific components:

  • id: the unique line identifier
  • kk: the sentence in Kazakh
  • en: the sentence in English
  • ru: the sentence in Russian
  • tr: the sentence in Turkish
  • domain: the domain of the sentence

For the other files, the fields are:

  • id: the unique line identifier
  • source_lang: the source language code
  • target_lang: the target language code
  • domain: the domain of the sentence
  • pair: the language pair
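As an illustrative check, the fields can be inspected with pandas. This sketch assumes the CSV files have been downloaded locally with the layout shown in Corpus Structure; the expected column lists simply mirror the field descriptions above.

import pandas as pd

# Raw KazParC entries: expected columns are id, kk, en, ru, tr, domain.
raw = pd.read_csv("kazparc/01_kazparc_all_entries.csv")
print(raw.columns.tolist())

# A pre-processed pair file: expected columns are id, source_lang, target_lang, domain, pair.
pair = pd.read_csv("kazparc/14_kazparc_test_kk_en.csv")
print(pair.head())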

How to Use

To load the subsets of KazParC separately:

from datasets import load_dataset

kazparc_raw = load_dataset("issai/kazparc", "kazparc_raw")  # unprocessed KazParC entries
kazparc = load_dataset("issai/kazparc", "kazparc")          # pre-processed KazParC splits
sync_raw = load_dataset("issai/kazparc", "sync_raw")        # unprocessed SynC entries
sync = load_dataset("issai/kazparc", "sync")                # pre-processed SynC splits
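As a quick follow-up, each object returned by load_dataset is a DatasetDict whose splits can be listed before use, and any split can be converted to pandas. The split name "train" below is only an example; check the printed output for the actual split names of each configuration.

# Inspect the available splits and their sizes for each configuration.
print(kazparc)
print(sync)

# Convert one split to a pandas DataFrame for analysis;
# adjust the split name to what the print() output reports.
df = kazparc["train"].to_pandas()
print(df.head())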