meliksahturker committed
Commit 1342f46
Parent(s): 2829d03
Update README.md

README.md CHANGED
@@ -17,13 +17,12 @@ VBART is the first sequence-to-sequence LLM pre-trained on Turkish corpora from
 The model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation when fine-tuned.
 It outperforms its multilingual counterparts, albeit being much smaller than other implementations.
 
-This repository contains pre-trained TensorFlow and Safetensors weights of
+This repository contains pre-trained TensorFlow and Safetensors weights of VBART-Medium-Base.
 
 - **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
 - **Model type:** Transformer encoder-decoder based on mBART architecture
 - **Language(s) (NLP):** Turkish
 - **License:** CC BY-NC-SA 4.0
-- **Finetuned from:** VBART-Large
 - **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)
 
 ## Training Details
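Since the commit completes the sentence naming the checkpoint, a minimal loading sketch may be useful context. It is an assumption, not part of this diff: the repo id `vngrs-ai/VBART-Medium-Base` is hypothetical here, and the generic `transformers` seq2seq auto classes are assumed to apply given the mBART-based architecture stated in the card.

```python
# Minimal sketch (assumptions, not from the commit): load the
# Safetensors weights via Hugging Face transformers. The repo id
# below and the use of the generic seq2seq auto classes are assumed
# from the mBART-based architecture noted in the card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vngrs-ai/VBART-Medium-Base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The card describes a pre-trained base model (the "Finetuned from"
# entry was removed in this commit), so it would need task-specific
# fine-tuning before summaries or paraphrases are useful.
inputs = tokenizer("Örnek Türkçe metin.", return_tensors="pt")  # "Example Turkish text."
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same weights reportedly exist in TensorFlow form as well; with `transformers`, that would typically mean the TF model classes with `from_pt`/native TF loading, but since the diff does not show usage code, the PyTorch-style sketch above is only one plausible path.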