# 🪐 spaCy Project: Dataset builder to HuggingFace Hub
This project contains utility scripts for uploading a dataset to the HuggingFace Hub. We want to keep the spaCy dependencies separate from the loading script, so we parse the spaCy files independently.
The process goes like this: we download the raw corpus from Google Cloud Storage (GCS), convert the spaCy files into a readable IOB format, and parse that using our loading script (i.e., `tlunified-ner.py`). We're also shipping the IOB file so that it's easier to access.
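
To illustrate the conversion step, here is a minimal sketch of turning a serialized spaCy `DocBin` into IOB-tagged lines. The file paths and the blank `tl` pipeline are illustrative assumptions, not taken from the project's actual scripts.

```python
# Minimal sketch: convert a serialized spaCy DocBin into IOB format.
# Paths and the blank "tl" pipeline are illustrative assumptions.
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("tl")  # blank Tagalog pipeline; only its vocab is needed
doc_bin = DocBin().from_disk("corpus/train.spacy")  # hypothetical input path

with open("corpus/train.iob", "w", encoding="utf-8") as f:
    for doc in doc_bin.get_docs(nlp.vocab):
        for token in doc:
            # Join the IOB marker with the entity label, e.g. "B-PER", "I-LOC";
            # tokens outside any entity get a plain "O".
            if token.ent_iob_ in ("B", "I"):
                tag = f"{token.ent_iob_}-{token.ent_type_}"
            else:
                tag = "O"
            f.write(f"{token.text}\t{tag}\n")
        f.write("\n")  # blank line separates documents
```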
## 📋 project.yml
The `project.yml` defines the data assets required by the project, as well as the available commands and workflows. For details, see the [spaCy projects documentation](https://spacy.io/usage/projects).
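
As a rough picture of its structure, the relevant parts of a `project.yml` for this workflow might look like the sketch below; the script names, paths, and asset URL are hypothetical placeholders, not the project's actual values.

```yaml
title: "Dataset builder to HuggingFace Hub"

assets:
  - dest: "assets/corpus.tar.gz"
    url: ""  # placeholder; the real GCS URL lives in the actual project.yml
    description: "Annotated TLUnified corpora in spaCy format."

commands:
  - name: "setup-data"
    help: "Prepare the Tagalog corpora used for training various spaCy components"
    script:
      - "python scripts/convert_to_iob.py"  # hypothetical script name
    deps:
      - "assets/corpus.tar.gz"
    outputs:
      - "corpus/train.iob"  # hypothetical output path
  - name: "upload-to-hf"
    help: "Upload dataset to HuggingFace Hub"
    script:
      - "python scripts/upload_to_hf.py"  # hypothetical script name
    deps:
      - "corpus/train.iob"
```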
## ⏯ Commands
The following commands are defined by the project. They can be executed using `spacy project run [name]`. Commands are only re-run if their inputs have changed.
| Command | Description |
| --- | --- |
| `setup-data` | Prepare the Tagalog corpora used for training various spaCy components. |
| `upload-to-hf` | Upload the dataset to the HuggingFace Hub. |
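
To give a sense of what the `upload-to-hf` step involves, here is a hedged sketch using the `datasets` library; the loading-script invocation and the repository ID are assumptions, not the project's actual upload code.

```python
# Sketch of an upload step: parse the IOB data via the loading script,
# then push the result to the HuggingFace Hub.
from datasets import load_dataset

dataset = load_dataset("tlunified-ner.py")  # the loading script shipped with this project
dataset.push_to_hub("your-username/tlunified-ner")  # hypothetical repo ID
```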
## 🗂 Assets
The following assets are defined by the project. They can be fetched by running `spacy project assets` in the project directory (see the end-to-end example after the table).
| File | Source | Description |
| --- | --- | --- |
| `assets/corpus.tar.gz` | URL | Annotated TLUnified corpora in spaCy format with train, dev, and test splits. |
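
Putting the pieces together, an end-to-end run from the project directory might look like this (the command order is an assumption based on the descriptions above):

```sh
python -m spacy project assets            # fetch assets/corpus.tar.gz
python -m spacy project run setup-data    # convert the spaCy files to IOB
python -m spacy project run upload-to-hf  # push the dataset to the Hub
```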