---
language:
- ru
- en
size_categories:
- 100M<n<1B
tags:
- not-for-all-audiences
pretty_name: DaruLM
---
# DaruLM dataset for LLM adaptation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
## Description
A growing collection of texts from various domains for Russian LLM adaptation, extracted from other Hugging Face datasets and open resources.
**Usage of this dataset is possible only for scientific purposes on a non-commercial basis.**
**Credits:** Initial datasets were provided by Ilya Gusev.

**NOTICE:** Some domain splits are based on vocabulary statistics and may be noisy.

**Current domains** (passed via the `domains` argument of `load_dataset`):
| | | | |
|------------|------------|------------|----------------|
| accounting | antique | aphorisms | art |
| biography | biology | buriy | business |
| cinema | computers | design | dramaturgy |
| economics | enwiki | essay | fantasy |
| gazeta | geography | guidebooks | habr |
| history | humor | language | law |
| lenta | literature | medicine | military |
| music | ods-tass | philosophy | pikabu |
| politic | prose | psychology | reference |
| religion | science | sociology | taiga-fontanka |
| textbook | wiki | UNDEFINED | |
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
import datasets
# Stream the habr and textbook domains
for example in datasets.load_dataset('dichspace/darulm', domains=["habr","textbook"], split="train", streaming=True):
print(example.keys())
print(example)
break
```
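Because a streaming dataset is a plain Python iterable, standard tools such as `itertools.islice` let you sample a fixed number of examples without downloading the full split. The sketch below is a minimal illustration, not part of the official loader; the `take` helper is a hypothetical name introduced here, and the record fields printed depend on the dataset's schema:

```python
import itertools

def take(stream, n):
    """Collect the first n records from any iterable."""
    return list(itertools.islice(stream, n))

if __name__ == "__main__":
    # Assumes `pip install datasets` from the prerequisites above.
    import datasets

    stream = datasets.load_dataset(
        "dichspace/darulm",
        domains=["habr", "textbook"],
        split="train",
        streaming=True,
    )
    # Inspect a handful of examples without materializing the split.
    for example in take(stream, 3):
        print(example)
```

The same pattern works for any domain combination from the table above.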