---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: extra
        path: data/extra-*
dataset_info:
  features:
    - name: tokens
      sequence: string
    - name: ner_tags
      sequence: int64
    - name: langs
      sequence: string
    - name: spans
      sequence: string
  splits:
    - name: train
      num_bytes: 538947
      num_examples: 473
    - name: extra
      num_bytes: 11497
      num_examples: 109
  download_size: 140314
  dataset_size: 550444
license: mit
task_categories:
  - token-classification
language:
  - ug
size_categories:
  - n<1K
---

# Uyghur NER dataset

## Description

This dataset is in WikiAnn format. It was assembled from named entities parsed from Wikipedia, Wiktionary, and DBpedia. For some words, additional case forms were generated with Apertium-uig, and some location names were translated using the Google Translate API.

The dataset is divided into two splits: `train` and `extra`. The `train` split contains full sentences, while `extra` contains only isolated named entities.

Tags: `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6)
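
For convenience, the tag ids can be mapped back to label strings with a plain Python dict. This is a minimal sketch built from the tag list above; the names `ID2LABEL` and `LABEL2ID` are illustrative, not part of the dataset:

```python
# Mapping between integer ner_tags values and IOB2 label strings,
# following the tag list above.
ID2LABEL = {
    0: "O",
    1: "B-PER", 2: "I-PER",
    3: "B-ORG", 4: "I-ORG",
    5: "B-LOC", 6: "I-LOC",
}
LABEL2ID = {label: idx for idx, label in ID2LABEL.items()}
```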

## Data example

```python
{
    'tokens': ['قاراماي', 'شەھىرى', '«مەملىكەت', 'بويىچە', 'مىللەتل…'],
    'ner_tags': [5, 0, 0, 0, 0],
    'langs': ['ug', 'ug', 'ug', 'ug', 'ug'],
    'spans': ['LOC: قاراماي']
}
```

## Usage with the `datasets` library

```python
from datasets import load_dataset

dataset = load_dataset("codemurt/uyghur_ner_dataset")
```
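
Once loaded, the splits are available as `dataset["train"]` and `dataset["extra"]`. A short sketch of decoding one example, assuming the tag order listed above:

```python
# Label order follows the tag list above (an assumption, not read from the dataset).
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

# Print each token of the first training example alongside its decoded tag.
example = dataset["train"][0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(token, labels[tag_id])
```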