---
dataset_info:
  features:
  - name: input_id_x
    sequence: int8
  - name: input_id_y
    sequence: int8
  splits:
  - name: train
    num_bytes: 7582970656
    num_examples: 17070828
  download_size: 4615653058
  dataset_size: 7582970656
---

# Dataset Card for "tokenized-total-512-reduced"

This dataset contains truncated, tokenized protein sequences paired with their corresponding 3Di structure sequences, as described in the [Foldseek](https://www.nature.com/articles/s41587-023-01773-0) paper. Redundancy reduction and sequence filtering were performed by [Dr. Michael Heinzinger](https://scholar.google.com/citations?user=yXtPl58AAAAJ&hl=en) and [Prof. Dr. Martin Steinegger](https://github.com/martin-steinegger).

The tokenizer used to encode the sequences can be found [here](https://huggingface.co/adrianhenkel/lucid-prot-tokenizer).

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
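The feature list above can be read as follows: each example pairs the int8 token IDs of an amino-acid sequence (`input_id_x`) with the int8 token IDs of its 3Di structure string (`input_id_y`), both truncated to a fixed maximum length (512, per the dataset name). The sketch below illustrates that record layout with toy data; the truncation length and helper are assumptions for illustration, not code from this repository.

```python
# Illustrative sketch of the record layout described in the card.
# MAX_LEN = 512 is assumed from the dataset name "tokenized-total-512-reduced".

MAX_LEN = 512

def truncate(token_ids, max_len=MAX_LEN):
    """Truncate a list of token IDs to at most max_len entries."""
    return token_ids[:max_len]

# Toy token IDs standing in for a tokenized amino-acid sequence and
# its 3Di structure sequence (real values come from the linked tokenizer).
toy_protein_ids = list(range(600))
toy_3di_ids = list(range(600))

example = {
    "input_id_x": truncate(toy_protein_ids),  # amino-acid token IDs
    "input_id_y": truncate(toy_3di_ids),      # 3Di structure token IDs
}

print(len(example["input_id_x"]), len(example["input_id_y"]))
```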