---
dataset_info:
  features:
  - name: input_id_x
    sequence: int64
  - name: input_id_y
    sequence: int64
  splits:
  - name: train
    num_bytes: 59707798880
    num_examples: 17070828
  download_size: 4618055901
  dataset_size: 59707798880
task_categories:
- text-generation
- translation
pretty_name: tokenizedproteins
size_categories:
- 10M<n<100M
---
# Dataset Card for "fulldataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
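
## Usage

The metadata above describes a single `train` split of roughly 17M examples, each with two `int64` token sequences (`input_id_x` and `input_id_y`). A minimal loading sketch is shown below; it assumes the repository id is `adrianhenkel/full-tokenized-512` (inferred from the file path, not stated in the card) and uses streaming because the full dataset is about 60 GB on disk.

```python
from datasets import load_dataset

# Assumed repo id; adjust if the dataset lives under a different namespace.
REPO_ID = "adrianhenkel/full-tokenized-512"

# Stream the train split to avoid downloading ~60 GB up front.
ds = load_dataset(REPO_ID, split="train", streaming=True)

# Each example has two int64 token sequences: input_id_x and input_id_y.
for example in ds.take(3):
    print(len(example["input_id_x"]), len(example["input_id_y"]))
```

For full local access, drop `streaming=True`; the download size listed above is about 4.6 GB (compressed), expanding to roughly 60 GB once prepared.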