Dataset Card for Achieve the Core

This repository includes Common Core math standards, their descriptions, and metadata obtained from Achieve the Core.

Example of a math standard:

{
  "id": "K.CC.B.4",
  "description": "Understand the relationship between numbers and quantities; connect counting to cardinality.",
  "source": "Achieve the Core",
  "level": "Standard",
  "cluster_type": "major cluster",
  "aspects": [],
  "parent": "K.CC.B",
  "children": ["K.CC.B.4c", "K.CC.B.4b", "K.CC.B.4a"],
  "connections": {"progress to": ["1.OA.C.5", "K.CC.B.5"], "progress from": [], "related": ["K.CC.A.2", "K.CC.C.6", "K.CC.A.1"]},
  "modeling": false
}

See MathFish for more details on uses of this data.

This data can be used to evaluate language models' abilities to assess whether math problems enable students to learn specific skills/concepts. Code to support this can be found in this GitHub repository.
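
As a minimal loading sketch (not the official code from that repository), and assuming the standards.jsonl file described under Dataset Structure below has been downloaded locally, the records can be parsed line by line and filtered by the metadata fields shown above:

import json

# Each line of standards.jsonl is one JSON object (a grade level, domain,
# cluster, standard, or sub-standard).
records = []
with open("standards.jsonl", encoding="utf-8") as f:
    for line in f:
        if line.strip():
            records.append(json.loads(line))

# Example: keep only standards that belong to a major cluster.
major_cluster_standards = [
    r for r in records
    if r.get("level") == "Standard" and r.get("cluster_type") == "major cluster"
]
print(f"{len(records)} records, {len(major_cluster_standards)} major-cluster standards")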

Dataset Details

Dataset Description

  • Curated by: Lucy Li, Tal August, Rose E. Wang, Luca Soldaini, Courtney Allison, Kyle Lo
  • Funded by: The Gates Foundation
  • Language(s) (NLP): English
  • License: ODC-By 1.0

Dataset Sources

Dataset Structure

This repository includes two key files: domain_groups.json and standards.jsonl.

We created domain_groups.json because the "domains" we evaluate in our tagging task do not map one-to-one onto the K-8 domains and high school (HS) categories in the Common Core State Standards (CCSS). Some HS categories are equivalent or similar to a K-8 domain, and some differences among K-8 domains are difficult to explain in a brief domain-level description. Thus, a "domain" in our paper sometimes groups multiple actual CCSS domains/categories. We mostly retain the original CCSS K-8 domains and HS categories, with the following exceptions: we group OA (Operations & Algebraic Thinking), EE (Expressions & Equations), and A (HS Algebra) into Operations & Algebra; S (HS Statistics & Probability) and SP (K-8 Statistics & Probability) into Statistics & Probability; and NS (K-8 The Number System) and N (HS Number and Quantity) into Number Systems and Quantity. Since CCSS and Achieve the Core do not provide brief descriptions of domains, we worked with a curriculum specialist to write them.
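
For illustration only, and assuming domain_groups.json maps each grouped domain name to the CCSS domain/category codes it covers (the actual file layout may differ, so check the file first), the grouping could be applied to K-8 standard ids like this:

import json

# Assumed structure of domain_groups.json (verify against the actual file):
# {"Operations & Algebra": ["OA", "EE", "A"],
#  "Statistics & Probability": ["SP", "S"], ...}
with open("domain_groups.json", encoding="utf-8") as f:
    domain_groups = json.load(f)

# Invert to a CCSS code -> grouped-domain lookup.
code_to_group = {
    code: group for group, codes in domain_groups.items() for code in codes
}

def grouped_domain(standard_id: str) -> str:
    """Map a K-8 id such as '6.EE.A.1' to the paper's grouped domain."""
    domain_code = standard_id.split(".")[1]  # e.g. '6.EE.A.1' -> 'EE'
    return code_to_group.get(domain_code, domain_code)

print(grouped_domain("6.EE.A.1"))  # 'Operations & Algebra' under the assumed layout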

Within standards.jsonl, each line is a standard, sub-standard, cluster, domain, or grade level:

{
  id: '', # e.g. 'K.OA.A.1'
  description: '', # description of the standard from Achieve the Core
  source: 'Achieve the Core',
  level: '', # one of Grade, HS Category, Domain, Cluster, Standard, Sub-standard
  cluster_type: '', # e.g. major cluster, additional cluster, minor cluster
  aspects: [], # a list containing items such as "Application", "conceptual understanding", "Procedural Skill and Fluency"
  parent: '', # id of the parent entry, e.g. 'K.CC.B' for the standard 'K.CC.B.4'
  children: [], # ids of child entries, e.g. the sub-standards of a standard
  connections: {'progress to': [], 'progress from': [], 'related': []}, # standard-level Achieve the Core connections
  modeling: False # True or False depending on whether the standard is a "modeling" standard
}
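
A minimal sketch (not the repository's official tooling) of indexing standards.jsonl by id and following a standard's Achieve the Core connections:

import json

# Build an id -> record index over standards.jsonl.
index = {}
with open("standards.jsonl", encoding="utf-8") as f:
    for line in f:
        if line.strip():
            record = json.loads(line)
            index[record["id"]] = record

def connected_ids(standard_id):
    """Ids this standard 'progresses to' plus its related standards."""
    connections = index[standard_id].get("connections")
    if not isinstance(connections, dict):  # connections are populated at the standard level
        return []
    return connections.get("progress to", []) + connections.get("related", [])

# Example using the K.CC.B.4 record shown earlier in this card.
for other_id in connected_ids("K.CC.B.4"):
    other = index.get(other_id)
    if other is not None:
        print(other_id, "-", other["description"])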

Citation

@misc{lucy2024evaluatinglanguagemodelmath,
      title={Evaluating Language Model Math Reasoning via Grounding in Educational Curricula}, 
      author={Li Lucy and Tal August and Rose E. Wang and Luca Soldaini and Courtney Allison and Kyle Lo},
      year={2024},
      eprint={2408.04226},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.04226}, 
}

Dataset Card Contact

[email protected]
