
What it is:

Each dataset in this delivery is made up of query clusters that test an aspect of the consistency of an LLM's knowledge about a particular domain. All the questions in a cluster are meant to be answered either 'yes' or 'no'. When the answers vary within a cluster, the knowledge is said to be inconsistent. When all the questions in a cluster are answered 'no' where the expected answer is 'yes' (or vice versa), the knowledge is said to be incomplete (i.e., the LLM may not have been trained on that particular domain). In our experience, incomplete clusters are rare (less than 3%), meaning that the LLMs we have tested know about the domains included here (see below for a list of the individual datasets). Inconsistent clusters, by contrast, can account for 6%-20% of the total.
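
The consistency and incompleteness labels above can be computed mechanically from a model's answers. The sketch below is ours, not part of the dataset; the function name and signature are illustrative:

```python
def classify_cluster(answers, expected):
    """Classify an LLM's yes/no answers to one query cluster.

    answers:  list of 'yes'/'no' strings, one per question in the cluster
    expected: the cluster's expected_answer field ('yes' or 'no')
    """
    unique = set(answers)
    if len(unique) > 1:
        return "inconsistent"  # answers vary within the cluster
    if unique == {expected}:
        return "consistent"    # every answer matches the expected one
    return "incomplete"        # uniformly wrong: the domain may be unknown

classify_cluster(["yes", "yes", "no", "yes"], "yes")  # -> "inconsistent"
classify_cluster(["no", "no", "no", "no"], "yes")     # -> "incomplete"
```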

The image below indicates the types of edges the query clusters are designed to test. It is worth noting that these correspond to common-sense axioms about conceptualization, such as the fact that subConceptOf is transitive (4) or that subconcepts inherit the properties of their parent concepts (5). These axioms are listed in the accompanying paper (see below).

[figure: types of edges tested by the query clusters]

How it is made:

The questions and clusters are automatically generated from a knowledge graph, starting from seed concepts and properties. In our case, we have used Wikidata, a well-known knowledge graph. The result is an RDF/OWL subgraph that can be queried and reasoned over using Semantic Web technology. The figure below summarizes the steps involved. The last two steps refer to a possible use case for this dataset: using in-context learning to improve LLM performance on the dataset.
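
As a rough illustration of the extraction step, a subConceptOf subgraph could be pulled from the Wikidata SPARQL endpoint along these lines. The seed QNode (Q746549, "dish") and the query shape are our assumptions for this sketch; the actual extraction pipeline is described in the accompanying paper.

```python
SEED = "Q746549"  # top concept of the Dishes dataset (assumption for this sketch)

# Build a SPARQL query that walks subclass-of (P279) edges under the seed.
query = f"""
SELECT ?sub ?subLabel ?sup ?supLabel WHERE {{
  ?sub wdt:P279* wd:{SEED} .   # everything under the seed via subclass-of
  ?sub wdt:P279 ?sup .         # the direct parent edges to materialize
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""

# The query can then be POSTed to https://query.wikidata.org/sparql (e.g.
# with urllib.request) and the returned (sub, sup) pairs stored as an
# RDF/OWL subgraph for reasoning and question generation.
```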

[figure: pipeline for generating the datasets from Wikidata]

Types of query clusters

There are different types of query clusters depending on what aspect of the knowledge graph and its deductive closure they capture:

Edge clusters test a single edge using different questions. For example, to test the edge ('orthopedic pediatric surgeon', IsA, 'orthopedic surgeon'), the positive or 'edge_yes' cluster (expected answer 'yes') is:

  "is 'orthopedic pediatric surgeon' a subconcept of 'orthopedic surgeon' ?",
  "is 'orthopedic pediatric surgeon' a type of 'orthopedic surgeon' ?",
  "is every kind of 'orthopedic pediatric surgeon' also a kind of 'orthopedic surgeon' ?",
  "is 'orthopedic pediatric surgeon' a subcategory of 'orthopedic surgeon' ?"

There are also inverse edge clusters (with questions like "is 'orthopedic surgeon' a subconcept of 'orthopedic pediatric surgeon' ?") and negative or 'edge_no' clusters (with questions like "is 'orthopedic pediatric surgeon' a subconcept of 'dermatologist' ?").
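
Assuming the four paraphrase templates implied by the example above, an 'edge_yes' cluster can be generated as in the sketch below. The helper is hypothetical, not the actual generator used to build the dataset:

```python
# Four paraphrase templates for one IsA edge (inferred from the example).
EDGE_TEMPLATES = [
    "is '{sub}' a subconcept of '{sup}' ?",
    "is '{sub}' a type of '{sup}' ?",
    "is every kind of '{sub}' also a kind of '{sup}' ?",
    "is '{sub}' a subcategory of '{sup}' ?",
]

def edge_cluster(source, target, expected="yes"):
    """Build one edge cluster; inverse clusters swap source and target,
    and 'edge_no' clusters pair the source with an unrelated concept
    (both with expected answer 'no')."""
    return {
        "source": source,
        "target": target,
        "expected_answer": expected,
        "questions": [t.format(sub=source, sup=target) for t in EDGE_TEMPLATES],
    }
```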

Hierarchy clusters measure the consistency of a given path, including n-hop virtual edges (in the graph's deductive closure). For example, the path ('orthopedic surgeon', 'surgeon', 'medical specialist', 'medical occupation') is tested by the cluster below:

  "is 'orthopedic surgeon' a subconcept of 'surgeon' ?",
  "is 'orthopedic surgeon' a type of 'surgeon' ?",
  "is every kind of 'orthopedic surgeon' also a kind of 'surgeon' ?",
  "is 'orthopedic surgeon' a subcategory of 'surgeon' ?",
  "is 'orthopedic surgeon' a subconcept of 'medical specialist' ?",
  "is 'orthopedic surgeon' a type of 'medical specialist' ?",
  "is every kind of 'orthopedic surgeon' also a kind of 'medical specialist' ?",
  "is 'orthopedic surgeon' a subcategory of 'medical specialist' ?",
  "is 'orthopedic surgeon' a subconcept of 'medical_occupation' ?",
  "is 'orthopedic surgeon' a type of 'medical_occupation' ?",
  "is every kind of 'orthopedic surgeon' also a kind of 'medical_occupation' ?",
  "is 'orthopedic surgeon' a subcategory of 'medical_occupation' ?"
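
A hierarchy cluster, then, is the expansion of a path: the first concept is paired with each of its ancestors, so n-hop edges in the deductive closure are tested alongside the direct one. The sketch below is hypothetical and, for brevity, uses only two of the four paraphrase templates:

```python
TEMPLATES = [
    "is '{sub}' a subconcept of '{sup}' ?",
    "is '{sub}' a type of '{sup}' ?",
]

def hierarchy_cluster(path):
    """Expand a root-to-ancestor path into one block of questions per hop."""
    root, ancestors = path[0], path[1:]
    questions = [t.format(sub=root, sup=a)
                 for a in ancestors       # one block per ancestor (hop)
                 for t in TEMPLATES]
    return {"expected_answer": "yes", "source": root, "questions": questions}
```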

Property inheritance clusters test the most basic property of conceptualization. If an orthopedic surgeon is a type of surgeon, we expect all the properties of surgeons, e.g., having to be board certified, having attended medical school, or working in the field of surgery, to be inherited by orthopedic surgeons. The example below tests the latter:

  "is 'orthopedic surgeon' a subconcept of 'surgeon' ?",
  "is 'orthopedic surgeon' a type of 'surgeon' ?",
  "is every kind of 'orthopedic surgeon' also a kind of 'surgeon' ?",
  "is 'orthopedic surgeon' a subcategory of 'surgeon' ?",
  "is the following statement true? 'orthopedic surgeon works on the field of  surgery' ",
  "is the following statement true? 'surgeon works on the field of  surgery' ",
  "is it accurate to say that  'orthopedic surgeon works on the field of  surgery'? ",
  "is it accurate to say that  'surgeon works on the field of  surgery'? "
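
The pattern above can be sketched as edge questions for (child, IsA, parent) plus paired statement questions asking the same property of both parent and child. The helper and templates below are illustrative (template wording mirrors the examples, minor spacing aside):

```python
EDGE_TEMPLATES = [
    "is '{sub}' a subconcept of '{sup}' ?",
    "is '{sub}' a type of '{sup}' ?",
]
STATEMENT_TEMPLATES = [
    "is the following statement true? '{c} {p}'",
    "is it accurate to say that '{c} {p}'?",
]

def inheritance_cluster(child, parent, prop):
    questions = [t.format(sub=child, sup=parent) for t in EDGE_TEMPLATES]
    for t in STATEMENT_TEMPLATES:
        # the property must hold of the child AND of the parent it inherits from
        questions.append(t.format(c=child, p=prop))
        questions.append(t.format(c=parent, p=prop))
    return {"expected_answer": "yes", "questions": questions}
```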

List of datasets

To show the versatility of our approach, we have constructed similar datasets in the domains below. We test one property inheritance per dataset. The main Wikidata QNode (the node corresponding to the top concept) and PNode (the node corresponding to the property) are indicated in the table below.

| domain | top concept | WD concept | main property | WD property |
|---|---|---|---|---|
| Academic Disciplines | "Academic Discipline" | https://www.wikidata.org/wiki/Q11862829 | "has use" | https://www.wikidata.org/wiki/Property:P366 |
| Dishes | "Dish" | https://www.wikidata.org/wiki/Q746549 | "has parts" | https://www.wikidata.org/wiki/Property:P527 |
| Financial products | "Financial product" | https://www.wikidata.org/wiki/Q15809678 | "used by" | https://www.wikidata.org/wiki/Property:P1535 |
| Home appliances | "Home appliance" | https://www.wikidata.org/wiki/Q212920 | "has use" | https://www.wikidata.org/wiki/Property:P366 |
| Medical specialties | "Medical specialty" | https://www.wikidata.org/wiki/Q930752 | "field of occupation" | https://www.wikidata.org/wiki/Property:P425 |
| Music genres | "Music genre" | https://www.wikidata.org/wiki/Q188451 | "practiced by" | https://www.wikidata.org/wiki/Property:P3095 |
| Natural disasters | "Natural disaster" | https://www.wikidata.org/wiki/Q8065 | "has cause" | https://www.wikidata.org/wiki/Property:P828 |
| Software | "Software" | https://www.wikidata.org/wiki/Q7397 | "studied in" | https://www.wikidata.org/wiki/Property:P7397 |

The size and configuration of each dataset are listed below:

| domain | edges_yes | edges_no | edges_inv | hierarchies | property hierarchies |
|---|---|---|---|---|---|
| Academic Disciplines | 52 | 308 | 52 | 30 | 1 |
| Dishes | 197 | 519 | 197 | 62 | 121 |
| Financial products | 112 | 433 | 108 | 40 | 32 |
| Home appliances | 58 | 261 | 58 | 31 | 13 |
| Medical specialties | 122 | 386 | 114 | 55 | 63 |
| Music genres | 490 | 807 | 488 | 212 | 139 |
| Natural disasters | 45 | 225 | 44 | 21 | 22 |
| Software | 80 | 572 | 79 | 114 | 4 |

Want to know more?

For background and motivation on this dataset, please see https://arxiv.org/abs/2405.20163, also published in COLM 2024:

@inproceedings{Uceda_2024_1,
  title={Reasoning about concepts with LLMs: Inconsistencies abound},
  author={Rosario Uceda Sosa and Karthikeyan Natesan Ramamurthy and Maria Chang and Moninder Singh},
  booktitle={Proc.\ 1st Conference on Language Modeling (COLM 24)},
  year={2024}
}

Questions? Comments?

Please contact [email protected], [email protected], [email protected] or [email protected]
