---
tags:
- gender-bias
- bert
---
# Model Card for `mabel-bert-base-uncased`
## Model Description
This is the model for MABEL, as described in our paper, "[MABEL: Attenuating Gender Bias using Textual Entailment Data](https://arxiv.org/abs/2210.14975)". MABEL is trained from an underlying `bert-base-uncased` backbone, and demonstrates a good bias-performance tradeoff across a suite of intrinsic and extrinsic bias metrics.
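Since MABEL keeps the standard `bert-base-uncased` architecture, it can be loaded like any BERT encoder. A minimal sketch, assuming the checkpoint is hosted at the repo id `princeton-nlp/mabel-bert-base-uncased` (inferred from this card's title):

```python
# Sketch: load MABEL as a standard BERT encoder via Hugging Face transformers.
# The repo id below is assumed from this model card's title.
from transformers import AutoModel, AutoTokenizer

repo_id = "princeton-nlp/mabel-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

# Encode a sentence and inspect the contextual embeddings.
inputs = tokenizer("The doctor finished her shift.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```

Because the backbone is unchanged, the resulting embeddings are drop-in replacements for `bert-base-uncased` representations in downstream tasks.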