PATTARA TIPAKSORN committed
Commit 80fe1ed
Parent: 25bc5ad

Update README.md

Files changed (1): README.md (+10 −5)
README.md CHANGED
@@ -65,13 +65,18 @@ print(response[0])
  Additional details are required.
 
  ## Limitations and Future Work
- At present, our model remains in the experimental research phase and is not yet fully suitable for practical applications as an assistant. Future work will focus on upgrading the language model to a newer version ([OpenThaiLLM-DoodNiLT-V1.0.0-Beta-7B](https://huggingface.co/nectec/OpenThaiLLM-DoodNiLT-V1.0.0-Beta-7B)) and curating more refined and robust datasets to improve performance. Additionally, we aim to prioritize the safety and reliability of the model's outputs.
-
- ## Citation
- Additional details are required.
+ At present, our model remains in the experimental research phase and is not yet fully suitable for practical applications as an assistant. Future work will focus on upgrading the language model to a newer version ([Pathumma-llm-text-1.0.0](https://huggingface.co/nectec/Pathumma-llm-text-1.0.0)) and curating more refined and robust datasets to improve performance. Additionally, we aim to prioritize the safety and reliability of the model's outputs.
 
  ## Acknowledgements
  We are grateful to ThaiSC, the NSTDA Supercomputer Centre, for providing the LANTA supercomputer used for model training and fine-tuning. We also thank the SALMONN team for making their code publicly available, and Typhoon Audio at SCB 10X for publishing their Hugging Face project, source code, and technical paper, which served as a valuable guide for us. Many other open-source projects have contributed valuable information, code, data, and model weights; we are grateful to them all.
 
  ## Pathumma Audio Team
- *Pattara Tipkasorn*, Wayupuk Sommuang, Oatsada Chatthong, *Kwanchiva Thangthai*
+ *Pattara Tipkasorn*, Wayupuk Sommuang, Oatsada Chatthong, *Kwanchiva Thangthai*
+
+ ## Citation
+ @misc{tipkasorn2024pathumma,
+     author = {Pattara Tipkasorn and Wayupuk Sommuang and Oatsada Chatthong and Kwanchiva Thangthai},
+     note = {\href{https://huggingface.co/nectec/Pathumma-llm-audio-1.0.0}{Pathumma-Audio}},
+     publisher = {Hugging Face},
+     year = {2024},
+ }