Update README.md
README.md CHANGED
@@ -1,5 +1,11 @@
-
-
+---
+task_categories:
+- question-answering
+language:
+- hu
+size_categories:
+- 10K<n<100K
+---
 # MILQA Hungarian question-answer benchmark database

 MILQA is a Hungarian machine reading comprehension, specifically, question answering (QA) benchmark database. In English, the most basic resource for the task is the Stanford Question Answering Dataset (SQuAD). The database was largely built following the principles of SQuAD 2.0, and is therefore characterized by the following:
@@ -83,5 +89,4 @@ Stroudsburg (PA), USA: Association for Computational Linguistics (2023) pp. 188-
 doi = "10.18653/v1/2023.law-1.19",
 pages = "188--198",
 }
-```
-
+```
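The YAML block added in this commit is the dataset card metadata that the Hugging Face Hub reads (task category, language, size bucket). As a minimal sketch of how a SQuAD 2.0-style dataset such as MILQA could then be loaded and inspected with the `datasets` library — assuming a hypothetical Hub repo id and SQuAD-style field names (`question`, `context`, `answers`), neither of which is specified in this diff:

```python
# Minimal sketch, not the official loading recipe for this dataset.
# The repo id below is a placeholder and the field names assume the
# SQuAD 2.0 schema; check the actual MILQA card for the real values.
from datasets import load_dataset

REPO_ID = "org-name/milqa"  # hypothetical repo id, substitute the real one

ds = load_dataset(REPO_ID)
print(ds)  # shows the available splits and features

sample = ds["train"][0]            # split name may differ
print(sample["question"])          # Hungarian question
print(sample["context"][:200])     # supporting paragraph (truncated)

# In the SQuAD 2.0 convention, unanswerable questions carry an empty
# answers["text"] list; answerable ones list the gold answer spans.
print(sample["answers"])
```

If the card's `size_categories: 10K<n<100K` bucket is accurate, the whole dataset fits comfortably in memory, so plain `load_dataset` without streaming is enough for quick inspection.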