---
dataset_info:
  features:
  - name: query
    dtype: string
  - name: answer
    dtype: string
  - name: text
    dtype: string
  - name: choices
    sequence: string
  - name: gold
    dtype: int64
  splits:
  - name: train
    num_bytes: 474769
    num_examples: 267
  - name: validation
    num_bytes: 74374
    num_examples: 48
  - name: test
    num_bytes: 387629
    num_examples: 225
  download_size: 358300
  dataset_size: 936772
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- question-answering
language:
- el
tags:
- finance
- qa
pretty_name: Plutus QA
size_categories:
- n<1K
---
# Dataset Card for Plutus QA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/collections/TheFinAI/plutus-benchmarking-greek-financial-llms-67bc718fb8d897c65f1e87db
- **Repository:** https://huggingface.co/datasets/TheFinAI/plutus-QA
- **Paper:** Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance
- **Leaderboard:** https://huggingface.co/spaces/TheFinAI/Open-Greek-Financial-LLM-Leaderboard#/
- **Model:** https://huggingface.co/spaces/TheFinAI/plutus-8B-instruct
### Dataset Summary
Plutus QA is a question-answering dataset for the financial domain in Greek. Each entry pairs a query with an answer, supporting context text, a set of multiple-choice options, and a gold label index marking the correct choice. By combining textual context with a multiple-choice format, the dataset benchmarks the ability of large language models to answer financial questions in a low-resource language setting.
### Supported Tasks
- **Task:** Question Answering
- **Evaluation Metrics:** Accuracy
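Because each instance carries a `gold` index into its `choices`, accuracy reduces to comparing a model's predicted choice index against that label. The snippet below is a minimal sketch; the `accuracy` helper and the example prediction values are illustrative, not part of the official evaluation code:

```python
def accuracy(predictions, golds):
    """Fraction of items where the predicted choice index equals the gold index."""
    assert len(predictions) == len(golds)
    correct = sum(int(p == g) for p, g in zip(predictions, golds))
    return correct / len(golds)

# Hypothetical predicted indices for three items whose gold labels are [2, 0, 1]:
print(accuracy([2, 0, 3], [2, 0, 1]))  # -> 0.666...
```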
### Languages
- Greek
## Dataset Structure
### Data Instances
Each instance in the dataset consists of the following five fields:
- **query:** A question or prompt regarding financial matters.
- **answer:** The corresponding answer text paired with the query.
- **text:** Additional context or background information to support the query.
- **choices:** A sequence field containing multiple-choice answer options.
- **gold:** An integer field representing the index of the correct answer in the choices.
### Data Fields
- **query:** String – Represents the financial question or prompt.
- **answer:** String – Contains the answer aligned with the given query.
- **text:** String – Provides extra contextual information related to the query.
- **choices:** Sequence of strings – Lists all available answer options.
- **gold:** Int64 – Denotes the index of the correct answer from the choices.
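The splits and fields described above can be inspected with the Hugging Face `datasets` library. This is a minimal sketch assuming the repository id `TheFinAI/plutus-QA` linked in the Dataset Description:

```python
from datasets import load_dataset

# Load all three splits (train / validation / test) from the Hub.
dataset = load_dataset("TheFinAI/plutus-QA")

# Inspect one training instance: query, answer, text, choices, gold.
example = dataset["train"][0]
print(example["query"])
print(example["choices"])  # list of answer options
print(example["gold"])     # index of the correct option within `choices`

# Split sizes should match the counts reported in this card.
for split in ("train", "validation", "test"):
    print(split, len(dataset[split]))
```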
### Data Splits
The dataset is organized into three splits:
- **Train:** 267 instances (474,769 bytes)
- **Validation:** 48 instances (74,374 bytes)
- **Test:** 225 instances (387,629 bytes)
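For multiple-choice evaluation, an instance can be rendered into a single prompt from its `text`, `query`, and `choices` fields. The template and the sample row below are only an illustration, not the prompt format used in the Plutus paper:

```python
def build_prompt(example):
    """Render one Plutus QA instance as a lettered multiple-choice prompt."""
    letters = "ABCDEFGHIJ"
    options = "\n".join(
        f"{letters[i]}. {choice}" for i, choice in enumerate(example["choices"])
    )
    return (
        f"{example['text']}\n\n"
        f"Question: {example['query']}\n"
        f"{options}\n"
        "Answer with the letter of the correct option."
    )

# Usage with a hypothetical instance:
row = {
    "text": "…context passage…",
    "query": "…question…",
    "choices": ["option A", "option B", "option C"],
    "gold": 1,
}
print(build_prompt(row))
```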
## Dataset Creation
### Curation Rationale
The Plutus QA dataset was created to enable the evaluation of large language models on question-answering tasks within the financial domain for Greek language texts. Its design—including multiple-choice answers with additional context—aims to reflect complex financial decision-making and reasoning processes in a low-resource language environment.
### Source Data
#### Initial Data Collection and Normalization
The source data is derived from Greek university financial exams. Standardization and normalization procedures were applied to ensure consistency across queries, choices, and textual context.
#### Who are the Source Language Producers?
The source texts were produced by the authors of Greek university financial exams.
### Annotations
#### Annotation Process
Annotations were performed by domain experts proficient in both finance and linguistics. The process included verifying the correct answer for each query and marking the corresponding correct index among provided choices.
#### Who are the Annotators?
A team of financial analysts and linguists collaborated to ensure the annotations are accurate and reflective of real-world financial reasoning.
### Personal and Sensitive Information
This dataset is curated to exclude any personally identifiable information (PII) and only contains public financial text data necessary for question-answering tasks.
## Considerations for Using the Data
### Social Impact of Dataset
The Plutus QA dataset is instrumental in enhancing automated question-answering systems in the financial sector, particularly for Greek language applications. Improved QA systems can support better financial decision-making, increase the efficiency of financial services, and contribute to academic research in financial natural language processing.
### Discussion of Biases
- The domain-specific financial language may limit generalizability to non-financial question-answering tasks.
- Annotation subjectivity could introduce biases in determining the correct answer among multiple choices.
- The dataset's focus on Greek financial documents may not fully represent other financial or multilingual contexts.
### Other Known Limitations
- Pre-processing may be required to handle variations in question and answer formats.
- The dataset is specialized for the financial domain and may need adaptation for different QA tasks or domains.
## Additional Information
### Dataset Curators
- Xueqing Peng
- Triantafillos Papadopoulos
- Efstathia Soufleri
- Polydoros Giannouris
- Ruoyu Xiang
- Yan Wang
- Lingfei Qian
- Jimin Huang
- Qianqian Xie
- Sophia Ananiadou
The research is supported by NaCTeM, Archimedes RC, and The Fin AI.
### Licensing Information
- **License:** Apache License 2.0
### Citation Information
If you use this dataset in your research, please consider citing it as follows:
```bibtex
@misc{peng2025plutusbenchmarkinglargelanguage,
title={Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance},
author={Xueqing Peng and Triantafillos Papadopoulos and Efstathia Soufleri and Polydoros Giannouris and Ruoyu Xiang and Yan Wang and Lingfei Qian and Jimin Huang and Qianqian Xie and Sophia Ananiadou},
year={2025},
eprint={2502.18772},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.18772},
}
```