---
license: mit
language:
- ml
---
# MalayaLLM: Gemma [മലയാളം/Malayalam]
## Introducing the Developer
Learn about the developer behind this model and follow their contributions to the field: https://www.linkedin.com/in/vishnu-prasad-j/
## Model description
The MalayaLLM models build upon the original Gemma model, extended and customized for Malayalam.
- Model type: A 7B Gemma model fine-tuned on Malayalam tokens.
- Language(s): Malayalam and English
- Datasets: CohereForAI/aya_dataset
- Source Model: MalayaLLM_Gemma_7B_Base_V1
- Instruct Model: MalayaLLM_Gemma_7B_Instruct_V1
- Training Precision: float16
- Github Repo: MalayaLLM-Gemma
## Model Update
The latest Gemma-2-9B trained model is available here: MalayaLLM:Gemma-2-9B
## How to run GGUF
### llama.cpp Web Server
- The web server is a lightweight HTTP server that can be used to serve local models and easily connect them to existing clients.
### Building llama.cpp
- To build llama.cpp locally, follow the instructions provided in the build documentation.
### Running llama.cpp as a Web Server
- Once you have built llama.cpp, you can run it as a web server. Below is an example of how to start the server:

```shell
llama-server.exe -m gemma_7b_instruction.Q4_K_M.gguf -ngl 42 -c 128 -n 100
```
### Accessing the Web UI
- After starting the server, you can access the basic web UI via your browser at the following address: http://localhost:8080
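Beyond the browser UI, the llama.cpp server also exposes an HTTP completion API that can be called programmatically. Below is a minimal Python sketch that builds a request body for the server's `/completion` endpoint, assuming the server started above is listening on http://localhost:8080. The prompt string follows Gemma's instruction-turn format; the helper names (`gemma_prompt`, `completion_request`) are illustrative and not part of this model card.

```python
import json

def gemma_prompt(user_message: str) -> str:
    # Gemma instruction-tuned models expect this turn format:
    # a user turn followed by an opening model turn for generation.
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

def completion_request(prompt: str, n_predict: int = 100) -> str:
    # JSON body for llama.cpp's /completion endpoint;
    # n_predict caps the number of generated tokens (cf. -n above).
    return json.dumps({"prompt": prompt, "n_predict": n_predict})

body = completion_request(gemma_prompt("Translate to Malayalam: Hello"))
print(body)
```

The resulting JSON body can then be POSTed to http://localhost:8080/completion with any HTTP client.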
## Made Using UNSLOTH
Thanks to Unsloth, fine-tuning large language models (LLMs) has become much easier and more efficient.