---
license: apache-2.0
language:
- en
model-index:
- name: rttl-ai/SentyBert
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      type: sst2
      name: sst2
      config: default
      split: validation
    metrics:
    - type: f1
      value: 0.9992
      name: F1 Macro
    - type: accuracy
      value: 0.9992
      name: Accuracy
datasets:
- sst2
- sst
---
# rttl-ai/SentyBert

## Model Details
**Model Description:** This model is a checkpoint of [bert-large-uncased](https://huggingface.co/bert-large-uncased) fine-tuned on SST-2.

It reaches an accuracy of 99.92% on the SST-2 dev set.
- **Developed by:** rttl-ai
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Resources for more information:**
  - The model was further pre-trained with task-adaptive pre-training ([TAPT](https://arxiv.org/pdf/2004.10964.pdf)) using an increased masking rate, no corruption strategy, and whole-word masking (WWM), following [this paper](https://aclanthology.org/2023.eacl-main.217.pdf); a sketch of this masking setup appears after this list.
  - It was then fine-tuned on sst with subtrees,
  - and finally fine-tuned on sst2 (see the data-loading sketch below).
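A minimal sketch of the TAPT masking setup, using the stock `transformers` whole-word-masking collator. The 40% masking rate and the example sentence are illustrative assumptions, not the recorded training configuration; note also that the stock collator keeps BERT's 80/10/10 corruption scheme, whereas a no-corruption variant replaces every selected token with `[MASK]`.

```python
# Sketch of TAPT-style whole-word masking with an increased masking rate.
# The 0.4 rate and the example sentence are illustrative assumptions.
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")

# Whole-word masking selects all WordPiece pieces of a word together;
# mlm_probability raises the masking rate above BERT's 15% default.
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.4)

batch = collator([tokenizer("a gorgeous, witty, seductive movie.")])
print(batch["input_ids"])  # selected words replaced by [MASK] ids
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```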
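The two fine-tuning stages draw on SST's phrase-level (subtree) annotations and the sentence-level SST-2 task. A minimal data-loading sketch, assuming the Hub's `sst` dictionary config for subtrees and a 0.5 threshold to binarize SST's continuous sentiment scores (the threshold is an assumption; recent `datasets` versions may also need `trust_remote_code=True` for the script-backed `sst` loader):

```python
# Sketch of the data behind the two fine-tuning stages. Binarizing SST's
# continuous sentiment scores at 0.5 is an illustrative assumption.
from datasets import load_dataset

# Stage 1 data: SST subtrees. The "dictionary" config maps every phrase
# (subtree) of the treebank to a sentiment score in [0, 1].
sst_subtrees = load_dataset("sst", "dictionary")
stage1 = sst_subtrees.map(lambda ex: {"label": int(ex["label"] > 0.5)})

# Stage 2 data: the sentence-level binary SST-2 task.
sst2 = load_dataset("sst2")
```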
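## How to Use

The model should work with the stock `transformers` text-classification pipeline; the label names returned come from the model's config, so the output below is only an example.

```python
from transformers import pipeline

# Standard text-classification pipeline; fetches the model from the Hub.
classifier = pipeline("text-classification", model="rttl-ai/SentyBert")

print(classifier("a gorgeous, witty, seductive movie."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}] -- labels depend on the config
```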