---
license: apache-2.0
language:
- pt
base_model: meta-llama/Llama-2-7b
pipeline_tag: text-generation
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Clinical-BR-LlaMA-2-7B-GGUF
This is a quantized version of [pucpr-br/Clinical-BR-LlaMA-2-7B](https://huggingface.co/pucpr-br/Clinical-BR-LlaMA-2-7B), created using llama.cpp.

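The GGUF files can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python; the quant filename pattern, context size, and generation parameters are assumptions, so check the repository's file list for the exact variant you want.

```python
# Minimal sketch: loading a GGUF file from this repo with llama-cpp-python.
# The quant filename glob (Q4_K_M) is an assumption; pick a file that exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Clinical-BR-LlaMA-2-7B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant variant
    n_ctx=2048,               # context window (assumed)
)

# Clinical-note style prompt in Portuguese ("Patient presents with...")
out = llm(
    "Paciente apresenta quadro de",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```
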
# Original Model Card

# MED-LLM-BR: Medical Large Language Models for Brazilian Portuguese
MED-LLM-BR is a collaborative project between [HAILab](https://github.com/HAILab-PUCPR) and [Comsentimento](https://www.comsentimento.com.br/) that aims to develop multiple medical LLMs for the Portuguese language, including base models and task-specific models in different sizes.

## Introduction
Clinical-BR-LlaMA-2-7B is a fine-tuned language model specifically designed for generating clinical notes in Portuguese. It builds on the strengths of LlaMA 2 7B, adapting it through targeted fine-tuning to meet the unique demands of clinical text generation. By focusing on the nuances and complexities of medical language in Portuguese, Clinical-BR-LlaMA-2-7B aims to support healthcare professionals with contextually accurate and relevant clinical documentation.

## Fine-Tuning Approach
To improve memory efficiency and reduce computational demands, we applied LoRA with 16-bit precision to the q_proj and v_proj projections. We configured LoRA with R = 8, Alpha = 16, and Dropout = 0.1, allowing the model to adapt effectively while preserving output quality. For optimization, the AdamW optimizer was used with β1 = 0.9 and β2 = 0.999, balancing fast convergence with training stability. This tuning process supports robust generation of accurate and contextually appropriate clinical text in Portuguese.

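As a rough illustration of the setup described above, the following sketch reproduces the quoted LoRA and AdamW values with Hugging Face PEFT. Everything else (the base checkpoint id taken from the card's `base_model` field, the learning rate, and the absence of a training loop) is an assumption, not the authors' actual training code.

```python
# Sketch of the LoRA configuration described in the card, using Hugging Face PEFT.
# Values r=8, lora_alpha=16, lora_dropout=0.1, target modules q_proj/v_proj,
# and AdamW betas (0.9, 0.999) come from the text; the rest is assumed.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b",     # from the card's base_model field; the -hf variant may be needed
    torch_dtype=torch.float16,   # 16-bit precision, as described
)

lora_config = LoraConfig(
    r=8,                                  # R = 8
    lora_alpha=16,                        # Alpha = 16
    lora_dropout=0.1,                     # Dropout = 0.1
    target_modules=["q_proj", "v_proj"],  # projections named in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# AdamW with the betas quoted above; the learning rate is a placeholder assumption.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
```
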
## Data
The fine-tuning of Clinical-BR-LlaMA-2-7B used 2.4 GB of text from three clinical datasets. The SemClinBr project provided diverse clinical narratives from Brazilian hospitals, while the BRATECA dataset contributed admission notes from various departments in 10 hospitals. Additionally, data from Lopes et al., 2019 added neurology-focused texts from European Portuguese medical journals. Together, these datasets improved the model's ability to generate accurate clinical notes in Portuguese.

## Provisional Citation
```bibtex
@inproceedings{pinto2024clinicalLLMs,
  title     = {Developing Resource-Efficient Clinical LLMs for Brazilian Portuguese},
  author    = {João Gabriel de Souza Pinto and Andrey Rodrigues de Freitas and Anderson Carlos Gomes Martins and Caroline Midori Rozza Sawazaki and Caroline Vidal and Lucas Emanuel Silva e Oliveira},
  booktitle = {Proceedings of the 34th Brazilian Conference on Intelligent Systems (BRACIS)},
  year      = {2024},
  note      = {In press},
}
```