Divyasreepat committed · Commit 2cfd25f · 1 Parent(s): 81c03b9
Update README.md with new model card content
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: keras-hub
 ---
-
+### Model Overview
 ELECTRA is a pretraining approach for language models published by Google. Two transformer models are trained: a generator and a discriminator. The generator replaces tokens in a sequence and is trained as a masked language model; the discriminator is trained to discern which tokens have been replaced. This pretraining method is more efficient than comparable approaches such as masked language modeling alone, especially for small models.
 
 Weights are released under the [MIT License](https://opensource.org/license/mit). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
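
For readers of the updated card, a minimal usage sketch follows. It is not part of the commit above: `ElectraBackbone` is keras-hub's ELECTRA encoder class, and the preset string `electra_small_discriminator_uncased_en` is an assumption based on keras-hub's naming convention, so verify it against the presets published for this model before running.

```python
import numpy as np
import keras_hub

# A minimal sketch, not part of the commit above. The preset string is an
# assumption based on keras-hub's ELECTRA naming convention; check the
# presets published for this model before running.
backbone = keras_hub.models.ElectraBackbone.from_preset(
    "electra_small_discriminator_uncased_en"
)

# keras-hub backbones take a dict of dense int tensors.
features = {
    "token_ids": np.ones((1, 12), dtype="int32"),
    "segment_ids": np.zeros((1, 12), dtype="int32"),
    "padding_mask": np.ones((1, 12), dtype="int32"),
}
outputs = backbone(features)  # encoder outputs; see the keras-hub docs for exact keys
```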