---
license: cc-by-nc-nd-4.0
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
language:
- tr
tags:
- medical
pretty_name: MedTurkQuAD
size_categories:
- 1K<n<10K
dataset_info:
total_examples: 8200
total_paragraphs: 875
source_articles: 618
source_datasets:
- original
paperswithcode_id: medturkquad-medical-turkish-question
---
# MedTurkQuAD: Medical Turkish Question-Answering Dataset
MedTurkQuAD is a dataset specifically designed for question-answering (QA) tasks in the medical domain in Turkish. It contains context paragraphs derived from medical texts, paired with questions and answers related to specific diseases or medical issues.
For more details about the dataset, methodology, and experiments, you can refer to the corresponding [research paper](https://ieeexplore.ieee.org/abstract/document/10711128).
---
## Dataset Overview
- **Number of Paragraphs**: 875
- **Number of QA Pairs**: 8,200
- **Sources**: 618 medical articles (110 from Wikipedia, 508 from theses in medicine)
- **Language**: Turkish
### Dataset Structure
The dataset is divided into three subsets for training, validation, and testing:
| Split | Number of Paragraphs | Number of QA Pairs |
|--------------|-----------------------|---------------------|
| Training     | 700                   | 6,560               |
| Validation | 87 | 820 |
| Testing | 88 | 820 |
---
## How to Use
This dataset can be used with libraries such as [🤗 Datasets](https://huggingface.co/docs/datasets) or [pandas](https://pandas.pydata.org/). Below are examples of how to load it:
```python
from datasets import load_dataset

# Load all splits (train/validation/test) from the Hugging Face Hub
ds = load_dataset("incidelen/MedTurkQuAD")
```
```python
import pandas as pd

# Reading directly from the Hub via the hf:// protocol requires the
# huggingface_hub package (which provides the hf:// filesystem for pandas).
splits = {'train': 'train.json', 'validation': 'validation.json', 'test': 'test.json'}
df = pd.read_json("hf://datasets/incidelen/MedTurkQuAD/" + splits["train"])
```
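Once loaded with 🤗 Datasets, individual records can be inspected like any other QA dataset. The sketch below assumes a SQuAD-style schema with `context`, `question`, and `answers` fields; check `ds["train"].features` to confirm the exact column names before relying on them.
```python
from datasets import load_dataset

ds = load_dataset("incidelen/MedTurkQuAD")

# Print the schema to verify the available columns.
print(ds["train"].features)

# Inspect one training example (field names assume a SQuAD-style layout).
example = ds["train"][0]
print(example["question"])       # the Turkish medical question
print(example["context"][:200])  # beginning of the context paragraph
print(example["answers"])        # answer text(s) and start position(s)
```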
---
## Citation
If you use this dataset, please cite the following paper:
```
@INPROCEEDINGS{10711128,
author={İncidelen, Mert and Aydoğan, Murat},
booktitle={2024 8th International Artificial Intelligence and Data Processing Symposium (IDAP)},
title={Developing Question-Answering Models in Low-Resource Languages: A Case Study on Turkish Medical Texts Using Transformer-Based Approaches},
year={2024},
volume={},
number={},
pages={1-4},
keywords={Training;Adaptation models;Natural languages;Focusing;Encyclopedias;Transformers;Data models;Internet;Online services;Text processing;Natural Language Processing;Medical Domain;BERTurk;Question-Answering},
doi={10.1109/IDAP64064.2024.10711128}}
```
---
## Acknowledgments
Special thanks to [maydogan](https://huggingface.co/maydogan) for their contributions and support in the development of this dataset.
---