---
dataset_info:
- config_name: default
  features:
  - name: utterance
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 924830
    num_examples: 11514
  download_size: 347436
  dataset_size: 924830
- config_name: intents
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: tags
    sequence: 'null'
  - name: regexp_full_match
    sequence: 'null'
  - name: regexp_partial_match
    sequence: 'null'
  - name: description
    dtype: 'null'
  splits:
  - name: intents
    num_bytes: 2266
    num_examples: 60
  download_size: 3945
  dataset_size: 2266
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: intents
  data_files:
  - split: intents
    path: intents/intents-*
task_categories:
- text-classification
language:
- ru
---

# Russian MASSIVE

This is a text classification dataset. It is intended for machine learning research and experimentation.

This dataset was obtained by reformatting another publicly available dataset to make it compatible with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html).

## Usage

It is intended to be used with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
from autointent import Dataset

massive_ru = Dataset.from_datasets("AutoIntent/massive_ru")
```
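
The two configurations described in the metadata above can also be loaded directly with the Hugging Face `datasets` library. This is a minimal sketch; the config and split names are taken from the dataset card metadata, not from the AutoIntent documentation:

```python
from datasets import load_dataset

# "default" config: utterance/label records in the "train" split.
utterances = load_dataset("AutoIntent/massive_ru", split="train")

# "intents" config: intent metadata in the "intents" split.
intents = load_dataset("AutoIntent/massive_ru", "intents", split="intents")

print(utterances[0])  # {"utterance": "...", "label": ...}
print(intents[0])     # {"id": 0, "name": "...", ...}
```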

## Source

This dataset was taken from the Russian subset of `mteb/amazon_massive_intent` and converted with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
from autointent import Dataset
from datasets import load_dataset


def convert_massive(massive_train):
    # Map each intent name to a contiguous integer id.
    intent_names = sorted(massive_train.unique("label"))
    name_to_id = dict(zip(intent_names, range(len(intent_names)), strict=False))
    n_classes = len(intent_names)

    # Collect utterances per class so that records of the same intent stay together.
    classwise_utterance_records = [[] for _ in range(n_classes)]
    intents = [
        {
            "id": i,
            "name": name,
        }
        for i, name in enumerate(intent_names)
    ]

    for batch in massive_train.iter(batch_size=16, drop_last_batch=False):
        for txt, name in zip(batch["text"], batch["label"], strict=False):
            intent_id = name_to_id[name]
            target_list = classwise_utterance_records[intent_id]
            target_list.append({"utterance": txt, "label": intent_id})

    # Flatten the per-class lists into a single list of utterance records.
    utterances = [rec for lst in classwise_utterance_records for rec in lst]
    return Dataset.from_dict({"intents": intents, "train": utterances})


massive = load_dataset("mteb/amazon_massive_intent", "ru")
massive_converted = convert_massive(massive["train"])
```
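
As an optional sanity check (a hypothetical snippet, not part of the original conversion script), the source split can be compared against the sizes listed in the metadata above before conversion:

```python
train = massive["train"]

# The Russian train split should contain 11,514 utterances across 60 intents,
# matching the split sizes declared in the dataset card metadata.
assert train.num_rows == 11514
assert len(set(train["label"])) == 60
```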