This is a model checkpoint for "Should You Mask 15% in Masked Language Modeling?" (code).

The original checkpoint is available at princeton-nlp/efficient_mlm_m0.15-801010. Unfortunately, that checkpoint depends on code that isn't part of the official transformers library. Additionally, the checkpoint contains unused weights due to a bug.

This checkpoint fixes the unused-weights issue and uses the `RobertaPreLayerNorm` model from the `transformers` library.
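Because the checkpoint uses the standard `RobertaPreLayerNorm` architecture, it should load with a stock `transformers` release that includes that model (no custom code needed). A minimal sketch, assuming a placeholder repository id for this checkpoint:

```python
from transformers import AutoTokenizer, RobertaPreLayerNormForMaskedLM

# Placeholder repository id; substitute the actual Hub id of this checkpoint.
model_id = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = RobertaPreLayerNormForMaskedLM.from_pretrained(model_id)

# Fill in the masked token, as with any RoBERTa-style MLM checkpoint.
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
logits = model(**inputs).logits

mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = logits[0, mask_index].argmax(-1).item()
print(tokenizer.decode(predicted_id))
```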
