Prazzwal07 committed on
Update README.md
README.md CHANGED
@@ -244,27 +244,19 @@ RoBERTa architecture with 12 transformer layers, hidden size of 768, 12 attention heads
 
 ---
 
-## Citation
-
-<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
-**BibTeX:**
-
+## Citation
 ```
-@misc{
-
-
-
-
+@misc{thapa2024developmentpretrainedtransformerbasedmodels,
+      title={Development of Pre-Trained Transformer-based Models for the Nepali Language},
+      author={Prajwal Thapa and Jinu Nyachhyon and Mridul Sharma and Bal Krishna Bal},
+      year={2024},
+      eprint={2411.15734},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2411.15734},
 }
 ```
 
-**APA:**
-
-```
-Thapa, P., Nyachhyon, J., Sharma, M., & Bal, B. K. (2024). Development of pre-trained transformer-based models for the Nepali language. Manuscript submitted for publication to COLING 2025.
-```
-
 ---
 
 ## Model Card Contact
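The hunk context above mentions the model's RoBERTa architecture (12 transformer layers, hidden size 768, 12 attention heads). As a minimal sketch of what that configuration looks like in the Hugging Face `transformers` library: the `intermediate_size` and `vocab_size` values below are placeholders assumed for illustration and are not stated in this diff, nor is any specific checkpoint name.

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# RoBERTa-base-style configuration matching the architecture described in the
# model card context: 12 transformer layers, hidden size 768, 12 attention heads.
config = RobertaConfig(
    num_hidden_layers=12,
    hidden_size=768,
    num_attention_heads=12,
    intermediate_size=3072,  # assumed standard 4x hidden size; not stated in the diff
    vocab_size=50265,        # placeholder; the Nepali tokenizer's vocabulary size is not given here
)

# Instantiate an untrained masked-LM with this configuration (illustrative only;
# the released checkpoint would normally be loaded with from_pretrained instead).
model = RobertaForMaskedLM(config)
print(model.config)
```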