---
language:
- en
- es
---
Model Card for Carpincho-13b
This is Carpincho-13B, an instruction-tuned LLM based on LLaMA-13B (https://huggingface.co/decapoda-research/llama-13b-hf). It is trained to answer in colloquial Argentine Spanish.
Model Details
The model is provided in ggml format, for use with llama.cpp, which runs LLM inference on the CPU only (https://github.com/ggerganov/llama.cpp).
Usage
Clone the llama.cpp repository:
git clone https://github.com/ggerganov/llama.cpp
Enter the repository directory and compile the tool:
cd llama.cpp
make
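If your machine has several cores, a parallel build is usually faster. A minimal sketch, assuming GNU make and the nproc utility are available:
make -j$(nproc)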
Download the file carpincho-13b-ggml-model-q4_0.bin into the llama.cpp directory and run this command:
./main -m ./carpincho-13b-ggml-model-q4_0.bin -i -ins -t 4
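For reference, here is the same command with each flag annotated; the descriptions reflect the options of llama.cpp's main example at the time of writing and may change in newer versions:
# -m   path to the ggml model file to load
# -i   run in interactive mode
# -ins run in instruction mode (your input is wrapped in an instruction prompt)
# -t 4 number of CPU threads to use for inference
./main -m ./carpincho-13b-ggml-model-q4_0.bin -i -ins -t 4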
Change -t 4 to the number of physical CPU cores you have.
This model requires at least 8GB of free RAM. No GPU is needed to run llama.cpp.
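If you are unsure how many physical cores or how much free RAM you have, the following commands work on most Linux systems (they are only illustrative; availability varies by distribution):
# physical cores = "Core(s) per socket" x "Socket(s)"
lscpu | grep -E 'Core\(s\) per socket|Socket\(s\)'
# confirm that at least ~8GB of RAM is free before loading the model
free -h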
Model Description
- Developed by: Alfredo Ortega (@ortegaalfredo)
- Model type: 13B LLM
- Language(s) (NLP): English and colloquial Argentine Spanish
- License: Free for non-commercial use, but I'm not the police.
- Finetuned from model: https://huggingface.co/decapoda-research/llama-13b-hf
Model Sources
- Repository: https://huggingface.co/decapoda-research/llama-13b-hf
- Paper: https://arxiv.org/abs/2302.13971
Uses
This is a generic LLM chatbot that can be used to interact directly with humans.
Bias, Risks, and Limitations
This bot is uncensored and may provide shocking answers. It also reflects biases present in the training material.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Model Card Contact
Contact the creator at @ortegaalfredo on Twitter or GitHub.