qanastek committed on
Commit 97db79e
1 Parent(s): 559a80c

Update README.md

Files changed (1):
  1. README.md (+11 -8)
README.md CHANGED
@@ -116,15 +116,18 @@ You just need to change the name of the model to `Dr-BERT/DrBERT-7GB` in any of
 
 # Citation BibTeX
 
+
 ```bibtex
-@misc{labrak2023drbert,
-  title={DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains},
-  author={Yanis Labrak and Adrien Bazoge and Richard Dufour and Mickael Rouvier and Emmanuel Morin and Béatrice Daille and Pierre-Antoine Gourraud},
-  year={2023},
-  eprint={2304.00958},
-  archivePrefix={arXiv},
-  primaryClass={cs.CL}
+@inproceedings{labrak2023drbert,
+  title = "DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains",
+  author = "Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Rouvier, Mickael and Morin, Emmanuel and Daille, Béatrice and Gourraud, Pierre-Antoine",
+  booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL'23), Long Paper",
+  month = jul,
+  year = "2023",
+  address = "Toronto, Canada",
+  publisher = "Association for Computational Linguistics",
+  abstract = "In recent years, pre-trained language models (PLMs) achieve the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general domain data, specialized ones have emerged to more effectively treat specific domains. In this paper, we propose an original study of PLMs in the medical domain on French language. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks. In particular, we show that we can take advantage of already existing biomedical PLMs in a foreign language by further pre-training them on our targeted data. Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under free license on which these models are trained.",
 }
-
+```
 
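For reference, the context line of this hunk says you only need to change the model name to `Dr-BERT/DrBERT-7GB`. Below is a minimal sketch of what that looks like with the generic `transformers` Auto classes and a fill-mask query; the masked-LM head and the French example sentence are assumptions for illustration, not part of this commit.

```python
# Minimal sketch (assumption, not part of this commit): load Dr-BERT/DrBERT-7GB
# with the generic transformers Auto classes, assuming the checkpoint ships a
# masked-language-modeling head, as RoBERTa-style French models typically do.
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_name = "Dr-BERT/DrBERT-7GB"  # swap in any other DrBERT checkpoint name here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Use tokenizer.mask_token so the sketch works regardless of the mask token string.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
sentence = f"Le patient souffre d'une {tokenizer.mask_token} chronique."
for prediction in fill_mask(sentence):
    print(prediction["token_str"], prediction["score"])
```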