---
dataset_info:
  features:
  - name: input_id_x
    sequence: int64
  - name: input_id_y
    sequence: int64
  splits:
  - name: train
    num_bytes: 59707798880
    num_examples: 17070828
  download_size: 4618055901
  dataset_size: 59707798880
task_categories:
- text-generation
- translation
pretty_name: tokenizedproteins
size_categories:
- 10M<n<100M
---
# Dataset Card for "fulldataset"

[More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
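The split metadata above implies an average of roughly 3.5 KB per training example across the two `int64` token-id sequences. A minimal back-of-envelope check, using only the figures from the card (the per-feature breakdown is an assumption; the card does not report per-feature sizes):

```python
# Figures taken from the `train` split metadata in the card above.
num_bytes = 59_707_798_880
num_examples = 17_070_828

bytes_per_example = num_bytes / num_examples
print(f"{bytes_per_example:.1f} bytes per example")  # ≈ 3497.6

# Each example holds two int64 sequences (input_id_x, input_id_y),
# so at 8 bytes per id this suggests on the order of a few hundred
# token ids per example (a rough estimate, ignoring Arrow overhead).
approx_ids_per_example = bytes_per_example / 8
print(f"~{approx_ids_per_example:.0f} token ids per example")
```

This is only a sanity check on the reported sizes; the dataset itself is stored in Arrow format, so actual per-example overhead will differ slightly.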