# DistilBERT with word2vec token embeddings

This model uses a word2vec token embedding matrix with 256k entries. The word2vec embeddings were trained for 3 epochs on 100GB of data from C4, MSMARCO, News, Wikipedia, and S2ORC. The model was then trained on this dataset with MLM for 1M steps (batch size 64), during which the token embeddings were NOT updated (i.e., the embedding matrix stayed frozen at its word2vec initialization).
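The snippet below is a minimal sketch of the frozen-embedding setup described above, assuming the model loads through the Hugging Face `transformers` masked-LM API; the checkpoint id is a placeholder, not this model's actual id.

```python
# Sketch: load a DistilBERT masked-LM model and freeze its token embedding
# matrix so MLM training leaves the (word2vec-initialized) embeddings
# unchanged, matching the description above.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "distilbert-base-uncased"  # placeholder; substitute this model's id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Freeze the token embedding matrix: gradients are not computed for it,
# so an optimizer over model.parameters() will never update it.
model.get_input_embeddings().weight.requires_grad = False

# Standard MLM forward pass: mask a token and predict it.
text = "Paris is the [MASK] of France."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```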