Commit d92331d
Parent(s): dd3a543

Update README.md
README.md CHANGED
@@ -6,6 +6,10 @@ license: apache-2.0
 </h1>
 <hr>
 
+**Developed by:**
+- 🇨🇴 [Jorge Henao](https://huggingface.co/jorge-henao)
+- 🇨🇴 [David Torres](https://github.com/datorresb)
+
 ## About Ask2Democracy project
 This model was developed as part of the Ask2Democracy project during the 2023 Somos NLP Hackathon. Our focus during the hackathon was on enhancing generative capabilities in Spanish by training an open-source model for this purpose, which is intended to be incorporated into the space demo.
 However, we encountered performance limitations due to the model's large size, which caused issues when running it on limited hardware. Specifically, we observed an inference time of approximately 70 seconds when using a GPU.
@@ -26,11 +30,6 @@ There is another model variation, more focused on source-based augmented retrieval
 
 Testing is a work in progress; we decided to share both model variations with the community in order to involve more people in experimenting with what works better and in finding other possible use cases.
 
-
-**Developed by:**
-- 🇨🇴 [Jorge Henao](https://huggingface.co/jorge-henao)
-- 🇨🇴 [David Torres](https://github.com/datorresb)
-
 ## Training Parameters
 
 - Base Model: [LLaMA-7B](https://arxiv.org/pdf/2302.13971.pdf)
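
The README's ~70-second GPU inference figure can be reproduced with a simple timing harness. A minimal sketch, assuming the released checkpoint loads as a standard causal LM from the Hugging Face Hub; the repo ID, prompt, and generation length below are placeholders, not the project's actual settings:

```python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo ID -- substitute the actual Ask2Democracy checkpoint.
MODEL_ID = "your-org/ask2democracy-model"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",          # requires `accelerate`; places weights on the GPU
)

prompt = "¿Qué propone la reforma en materia de salud?"  # "What does the reform propose on health?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.perf_counter()
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
elapsed = time.perf_counter() - start

print(tokenizer.decode(output[0], skip_special_tokens=True))
print(f"Generation took {elapsed:.1f} s")
```

Wall-clock time scales roughly linearly with `max_new_tokens`, so a long answer on modest hardware can plausibly reach the latency the authors describe.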