jorge-henao committed
Commit a71ef62 · 1 Parent(s): 12513a6
Update README.md
README.md CHANGED
@@ -21,12 +21,10 @@ To address this issue, we are currently working on optimizing ways to integrate
This model is an open-source chat model fine-tuned with [LoRA](https://github.com/microsoft/LoRA), inspired by the [Baize project](https://github.com/project-baize/baize-chatbot/tree/main/). It was trained with the Baize datasets and the ask2democracy-cfqa-salud-pension dataset, which contains almost 4k instructions for answering questions based on a context relevant to citizen concerns and public debate in Spanish.
-Two model variations were trained during the Hackathon Somos NLP 2023:
-- A conversational style focused model
-- A generative context focused model
-
-This model variation is focused on a more conversational way of asking questions. See the Pre-processing dataset section.
-There is another model variation that is more focused on source-based retrieval augmented generation: [Baizemocracy-RAGfocused](https://huggingface.co/hackathon-somos-nlp-2023/baizemocracy-lora-7B-cfqa).
+Two model variations were trained during the Hackathon Somos NLP 2023:
+- A conversational style focused model: focused on a more conversational way of asking questions; see the Pre-processing dataset section.
+- A generative context focused model: this variation is more focused on source-based retrieval augmented generation, [Baizemocracy-RAGfocused](https://huggingface.co/hackathon-somos-nlp-2023/baizemocracy-lora-7B-cfqa).
+
Testing is a work in progress; we decided to share both model variations with the community in order to involve more people in experimenting with what works better and finding other possible use cases.
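
For quick experimentation, a minimal sketch of how one of these LoRA variants could be loaded with PEFT and Transformers is shown below. The base checkpoint, prompt format, and generation settings are assumptions not stated in this diff, so check the model card before relying on them.

```python
# Minimal sketch: attach the LoRA adapter to a base causal LM with PEFT.
# BASE_MODEL is an assumption (Baize-era LoRAs typically target LLaMA-7B);
# verify the actual base checkpoint in the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "decapoda-research/llama-7b-hf"  # assumed base, not stated in this README
ADAPTER = "hackathon-somos-nlp-2023/baizemocracy-lora-7B-cfqa"  # RAG-focused variant linked above

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER)  # merge-free LoRA loading

# Hypothetical context-based question in Spanish; the real prompt template
# may differ (see the Pre-processing dataset section of the README).
prompt = "Contexto: ...\nPregunta: ¿Qué cambios propone la reforma pensional?\nRespuesta:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```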