How to reproduce

# Prerequisites
apt update -y
apt install -y git git-lfs python3 python3-pip curl pkg-config libssl-dev
python3 -m pip install numpy==1.25.0 sentencepiece==0.1.99
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh && source "$HOME/.cargo/env"
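
Before cloning, it can help to confirm the toolchain installed above is actually in place; the exact version output will vary, but each command should succeed. The `git lfs install` step is included here as a precaution (it is usually required so the fp16 weights download on clone rather than as LFS pointer files), and is a no-op if the filters are already configured:

# Sanity-check the toolchain (optional)
git lfs install
git lfs version
python3 -c "import numpy, sentencepiece; print(numpy.__version__, sentencepiece.__version__)"
rustc --version && cargo --version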

# Clone repositories
git clone https://huggingface.co/OpenBuddy/openbuddy-openllama-7b-v5-fp16 # Commit hash 1fedac68b34952eecec849a5938b778d6004d632
git clone https://github.com/ggerganov/llama.cpp # Commit hash 16b9cd193965769089881bb8ec012fccca7b37b6
git clone --recurse-submodules https://github.com/rustformers/llm.git # Commit hash 3becd728c0d6eeb2d649f86158c7018d5aaaba40
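
The comments above record the commits that were verified. If the upstream repositories have moved on since then, the checkouts can be pinned to those exact revisions (a sketch using the hashes noted above):

# Pin each repository to the verified commit (optional)
git -C openbuddy-openllama-7b-v5-fp16 checkout 1fedac68b34952eecec849a5938b778d6004d632
git -C llama.cpp checkout 16b9cd193965769089881bb8ec012fccca7b37b6
git -C llm checkout 3becd728c0d6eeb2d649f86158c7018d5aaaba40
git -C llm submodule update --init --recursive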

# Build ggml model
cd llama.cpp/
python3 convert.py ../openbuddy-openllama-7b-v5-fp16/
cd ../llm/
cargo build --release
cargo run --release llama quantize \
  ../openbuddy-openllama-7b-v5-fp16/ggml-model-f16.bin \
  ../openbuddy-openllama-7b-v5-fp16/openbuddy-openllama-7b-v5-q4_0.bin \
  q4_0
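
After quantization, you can confirm the output file was written and, optionally, run a short test prompt. The `llama infer` invocation below follows the llm CLI as documented around this commit; the subcommand and flags are an assumption, so adjust them if the interface differs:

# Verify the quantized model and run a quick test (optional)
ls -lh ../openbuddy-openllama-7b-v5-fp16/openbuddy-openllama-7b-v5-q4_0.bin
cargo run --release -- llama infer \
  -m ../openbuddy-openllama-7b-v5-fp16/openbuddy-openllama-7b-v5-q4_0.bin \
  -p "Hello, my name is"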

(The commit hashes above were verified on 2023/06/19.)
