lhallee committed on
Commit 62047ba · verified · 1 Parent(s): 2dcab0f

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -87,12 +87,11 @@ You can load the weights from the ESM package instead of transformers by replaci
 We employ linear probing techniques on various PLMs and standard datasets, similar to our previous [paper](https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1), to assess the intrinsic correlation between pooled hidden states and valuable properties. ESMC (and thus ESM++) performs very well.
 
 The plot below showcases performance normalized between the negative control (random vector embeddings) and the best performer. Classification task scores are averaged between MCC and F1 (or F1max for multilabel), and regression tasks are averaged between Spearman rho and R2.
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2bd3bdb7cbd214b658c48/88EweoycOIhJBk2RvGCKn.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2bd3bdb7cbd214b658c48/dcvZhbtR_fJ6KSFYgI6l6.png)
 
 ## Inference speeds
 We look at various ESM models and their throughput on an H100. Adding efficient batching between ESMC and ESM++ significantly improves the throughput. ESM++ small is even faster than ESM2-35M with long sequences!
 The most gains will be seen with PyTorch > 2.5 on Linux machines.
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2bd3bdb7cbd214b658c48/GWm69UubVABjsmAjcuN6N.png)
 
 ### Citation
 If you use any of this implementation or work, please cite it (as well as the ESMC preprint). Bibtex for both coming soon.
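
For readers who want to try the probing setup described in the hunk above, here is a minimal sketch of linear probing on mean-pooled hidden states. The checkpoint ID, loading pattern, pooling choice, probe, and toy data are illustrative assumptions, not the exact setup from the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, matthews_corrcoef

# Hypothetical checkpoint; substitute whichever PLM you are probing.
model_id = "Synthyra/ESMplusplus_small"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()

@torch.no_grad()
def embed(seqs):
    """Mean-pool the last hidden state over non-padding tokens."""
    batch = tokenizer(seqs, padding=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state             # (B, L, D)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, L, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Toy placeholder sequences and labels; use a standard labeled dataset in practice.
train_seqs, y_train = ["MKTAYIAKQR", "MSDNELNNAV"], [0, 1]
test_seqs, y_test = ["MKKLLPTAAA", "MSEQNNTEMT"], [0, 1]

probe = LogisticRegression(max_iter=1000).fit(embed(train_seqs), y_train)
pred = probe.predict(embed(test_seqs))
print("MCC:", matthews_corrcoef(y_test, pred))
print("F1 :", f1_score(y_test, pred, average="macro"))
```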
 
87
  We employ linear probing techniques on various PLMs and standard datasets, similar our previous [paper](https://www.biorxiv.org/content/10.1101/2024.07.30.605924v1), to access the intrinsic correlation between pooled hidden states and valuable properties. ESMC (and thus ESM++) perform very well.
88
 
89
  The plot below showcases performance normalized between the negative control (random vector embeddings) and the best performer. Classification task scores are averaged between MCC and F1 (or F1max for multilabel) and regression tasks are averaged between Spearman rho and R2.
90
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2bd3bdb7cbd214b658c48/dcvZhbtR_fJ6KSFYgI6l6.png)
91
 
92
  ## Inference speeds
93
  We look at various ESM models and their throughput on an H100. Adding efficient batching between ESMC and ESM++ significantly improves the throughput. ESM++ small is even faster than ESM2-35M with long sequences!
94
  The most gains will be seen with PyTorch > 2.5 on linux machines.
 
95
 
96
  ### Citation
97
  If you use any of this implementation or work please cite it (as well as the ESMC preprint). Bibtex for both coming soon.
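
And a rough way to time batched inference like the throughput comparison in the hunk: a sketch assuming a CUDA device and a transformers-style model/tokenizer (for example, the pair loaded in the probing sketch above); batch size and sequence set are placeholders, and a real benchmark would average over repeated trials.

```python
import time
import torch

@torch.no_grad()
def sequences_per_second(model, tokenizer, seqs, batch_size=32, device="cuda"):
    """Time batched forward passes and return sequences per second."""
    model = model.to(device).eval()
    # Warmup pass so one-time setup (kernel autotuning, etc.) is not timed.
    warm = tokenizer(seqs[:batch_size], padding=True, return_tensors="pt").to(device)
    model(**warm)
    torch.cuda.synchronize()

    start = time.perf_counter()
    for i in range(0, len(seqs), batch_size):
        batch = tokenizer(seqs[i:i + batch_size], padding=True, return_tensors="pt").to(device)
        model(**batch)
    torch.cuda.synchronize()
    return len(seqs) / (time.perf_counter() - start)

# Example usage with the model/tokenizer from the probing sketch:
# print(sequences_per_second(model, tokenizer, train_seqs * 64))
```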