
ALBERT Large (CDA) - Counterfactual Data Augmentation

Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. The model is pre-trained from scratch on Wikipedia with counterfactual data augmentation (CDA). Word substitutions for the augmentation are determined using the word lists provided at corefBias (Zhao et al., 2018).
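As a rough illustration of what CDA does here: training sentences are duplicated with gendered words swapped according to the published word pairs, so the model sees both variants. A minimal sketch, assuming a hypothetical WORD_PAIRS dictionary standing in for the actual Zhao et al. (2018) lists:

```python
import re

# Hypothetical subset of gendered word pairs; the real lists are
# published in the corefBias repository (Zhao et al., 2018).
WORD_PAIRS = {
    "he": "she", "she": "he",
    "his": "her", "her": "his",
    "man": "woman", "woman": "man",
    "father": "mother", "mother": "father",
}

def counterfactual_augment(sentence: str) -> str:
    """Return a copy of the sentence with each gendered word swapped."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        replacement = WORD_PAIRS[word.lower()]
        # Preserve the capitalization of the original token.
        return replacement.capitalize() if word[0].isupper() else replacement

    pattern = re.compile(r"\b(" + "|".join(WORD_PAIRS) + r")\b", re.IGNORECASE)
    return pattern.sub(swap, sentence)

print(counterfactual_augment("He told his father about the plan."))
# She told her mother about the plan.
```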

Disclaimer: The team releasing ALBERT did not write a model card for this model, so this model card was written by the FairNLP team.
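The checkpoint can be loaded with the transformers library in the usual way. A minimal feature-extraction sketch, assuming the fairnlp/albert-cda model identifier used by this repository:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed model id for this repository.
tokenizer = AutoTokenizer.from_pretrained("fairnlp/albert-cda")
model = AutoModel.from_pretrained("fairnlp/albert-cda")

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden-state vector per input token; the hidden size should be
# 1024 if this is an ALBERT-large checkpoint as the title suggests.
print(outputs.last_hidden_state.shape)
```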

BibTeX entry and citation info

@misc{zari,
      title={Measuring and Reducing Gendered Correlations in Pre-trained Models},
      author={Kellie Webster and Xuezhi Wang and Ian Tenney and Alex Beutel and Emily Pitler and Ellie Pavlick and Jilin Chen and Slav Petrov},
      year={2020},
      eprint={2010.06032},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}