---
dataset_info:
  features:
    - name: seq
      dtype: string
    - name: label
      dtype: float64
  splits:
    - name: train
      num_bytes: 3131241
      num_examples: 7124
    - name: valid
      num_bytes: 337164
      num_examples: 760
    - name: test
      num_bytes: 862758
      num_examples: 1971
  download_size: 4297980
  dataset_size: 4331163
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: valid
        path: data/valid-*
      - split: test
        path: data/test-*
license: apache-2.0
task_categories:
  - text-classification
tags:
  - chemistry
  - biology
size_categories:
  - 1K<n<10K
---

# Dataset Card for the Optimal pH Dataset

## Dataset Summary

An enzyme functions normally only within a specific pH range of its surrounding environment, and reacting at its optimal pH greatly boosts its catalytic ability. This dataset maps protein sequences to the optimal pH of the corresponding enzyme.

## Dataset Structure

### Data Instances

For each instance, there is a string representing the protein sequence and a float value indicating the optimal pH for the given enzyme's catalytic activity. See the optimal pH dataset viewer to explore more examples.

```python
{'seq': 'MEHVIDNFDNIDKCLKCGKPIKVVKLKYIKKKIENIPNSHLINFKYCSKCKRENVIENL',
 'label': 6.5}
```
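
The example above can be reproduced locally with the 🤗 `datasets` library. The snippet below is a minimal sketch; the repository ID `Bo1015/optimal_ph` is an assumption about where this card is hosted and should be adjusted to the actual Hub path.

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# The repo ID "Bo1015/optimal_ph" is an assumption; replace it with
# the actual Hub path of this dataset.
from datasets import load_dataset

ds = load_dataset("Bo1015/optimal_ph")

print(ds)                # splits: train / valid / test
example = ds["train"][0]
print(example["seq"])    # protein sequence (string)
print(example["label"])  # optimal pH (float)
```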

The mean values for `seq` (sequence length) and `label` are provided below:

| Feature               | Mean |
|:----------------------|-----:|
| seq (sequence length) |  427 |
| label (optimal pH)    |  7.2 |
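
The table above can be reproduced with a short script. This sketch assumes `ds` has already been loaded as in the previous snippet, and that the averages are taken over the training split (the card does not state which split they cover).

```python
# Reproduce the mean statistics above. Assumes `ds` from the
# previous snippet; whether the card's means cover the train split
# or the whole dataset is an assumption.
import statistics

train = ds["train"]
mean_len = statistics.mean(len(s) for s in train["seq"])
mean_ph = statistics.mean(train["label"])
print(f"mean sequence length: {mean_len:.0f}")  # expected ~427
print(f"mean optimal pH:      {mean_ph:.1f}")   # expected ~7.2
```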

### Data Fields

- `seq`: a string containing the protein sequence.
- `label`: a float value indicating the optimal pH for the given enzyme's catalytic activity.

### Data Splits

The Optimal pH dataset has three splits: train, valid, and test. The statistics of each split are given below.

| Dataset Split | Number of Instances |
|:--------------|--------------------:|
| Train         |               7,124 |
| Valid         |                 760 |
| Test          |               1,971 |
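
Because the label is a continuous pH value, the task is effectively sequence regression despite the `text-classification` tag. As an illustrative sanity check (not part of the original card), a constant-mean baseline can be scored on the valid split:

```python
# Illustrative constant-mean baseline: always predict the mean
# training-set pH and measure mean absolute error on the valid split.
# Assumes `ds` from the loading snippet above.
import statistics

train_mean = statistics.mean(ds["train"]["label"])
mae = statistics.mean(abs(y - train_mean) for y in ds["valid"]["label"])
print(f"constant-mean baseline MAE (valid): {mae:.2f} pH units")
```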

## Source Data

### Initial Data Collection and Normalization

The dataset is collected from EpHod, a deep learning method that predicts an enzyme's optimal catalytic pH from its protein sequence alone.

## Licensing Information

The dataset is released under the Apache-2.0 License.

## Citation

If you find our work useful, please consider citing the following paper:

```bibtex
@misc{chen2024xtrimopglm,
  title={xTrimoPGLM: unified 100B-scale pre-trained transformer for deciphering the language of protein},
  author={Chen, Bo and Cheng, Xingyi and Li, Pan and Geng, Yangli-ao and Gong, Jing and Li, Shen and Bei, Zhilei and Tan, Xu and Wang, Boyan and Zeng, Xin and others},
  year={2024},
  eprint={2401.06199},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  note={arXiv preprint arXiv:2401.06199}
}
```