
QuantFactory/Replete-LLM-Qwen2-7b_Beta-Preview-GGUF

This is a quantized version of Replete-AI/Replete-LLM-Qwen2-7b_Beta-Preview, created using llama.cpp.
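
If you want to try one of these GGUF files locally from Python, a minimal sketch using the llama-cpp-python bindings (not part of this card, just one convenient way to load GGUF files) might look like the following; the .gguf filename is a placeholder for whichever quantization you download.

# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The .gguf filename below is a placeholder; use whichever quantization you download.
from llama_cpp import Llama

llm = Llama(
    model_path="Replete-LLM-Qwen2-7b_Beta-Preview.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,           # 8K context, as mentioned later in this card
    n_gpu_layers=-1,      # offload every layer to the GPU if one is available
    chat_format="chatml", # this model uses the ChatML prompt template
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GGUF quantization in one sentence."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])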

Original Model Card

Replete-LLM-Qwen2-7b_Beta-Preview


Thank you to TensorDock for sponsoring Replete-LLM. You can check out their website for cloud compute rentals below.


This is a preview of our flagship model, Replete-LLM. This version of the model has only been trained for 1 epoch on the dataset (linked below). The final model will be trained for a full 5 epochs using QLoRA and Unsloth.

Model card:

Replete-LLM is Replete-AI's flagship model. We take pride in releasing a fully open-source, low-parameter, and competitive AI model that not only surpasses its predecessor Qwen2-7B-Instruct in performance, but also competes with (if not surpasses) other flagship models such as gemma-2-9b-it and Meta-Llama-3.1-8B-Instruct in overall performance across all fields and categories.

Replete-LLM-Qwen2-7b is a versatile model fine-tuned to perform well across a wide range of tasks. The following types of generations were included in the fine-tuning process:

  • Science: (General, Physical Reasoning)

  • Social Media: (Reddit, Twitter)

  • General Knowledge: (Character-Codex), (Famous Quotes), (Steam Video Games), (How-To? Explanations)

  • Cooking: (Cooking Preferences, Recipes)

  • Writing: (Poetry, Essays, General Writing)

  • Medicine: (General Medical Data)

  • History: (General Historical Data)

  • Law: (Legal Q&A)

  • Role-Play: (Couple-RP, Roleplay Conversations)

  • News: (News Generation)

  • Coding: (3 million rows of coding data in over 100 coding languages)

  • Math: (Math data from TIGER-Lab/MathInstruct)

  • Function Calling: (Function calling data from "glaiveai/glaive-function-calling-v2")

  • General Instruction: (All of teknium/OpenHermes-2.5 fully filtered and uncensored)

At Replete-AI, we hope you use our open-source model locally for your work and enjoyment rather than paying companies like OpenAI and Anthropic, or anyone else who charges fees for using AI models. We believe in complete freedom and openness in AI usage for everyone. So please enjoy our model, and look out for the final release within a few weeks.


You can find our highest-quality quantization that runs in under 10 GB of VRAM with 8K context below.


Prompt Template: ChatML

<|im_start|>system
{}<|im_end|>
<|im_start|>user
{}<|im_end|>
<|im_start|>assistant
{}
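
If your inference stack does not apply a chat template for you, filling this template in Python is just string formatting; here is a small sketch (stopping generation at <|im_end|> is left to your runtime).

# Build a ChatML prompt for this model by hand.
# Generation should be stopped when the model emits <|im_end|>.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize what QLoRA does in two sentences.",
)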

Want to know the secret sauce of how this model was made? Find the write-up below.

Continuous Fine-tuning Without Loss Using Lora and Mergekit

https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
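
The linked document has the full method; one building block such a LoRA-plus-mergekit workflow relies on, folding a trained LoRA adapter back into the base weights before merging models, can be sketched with peft roughly as follows. The adapter path and output directory are placeholders, not the authors' actual checkpoints.

# Rough sketch: merge a trained LoRA adapter back into its base model with peft.
# Paths are placeholders; this illustrates one step, not the authors' exact pipeline.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B-Instruct", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")

# Load the LoRA adapter on top of the base model, then bake it into the weights
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path
merged = model.merge_and_unload()

# Save the merged full-weight model (this is what a tool like mergekit would consume)
merged.save_pretrained("merged-model")      # placeholder output dir
tokenizer.save_pretrained("merged-model")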


The code to fine-tune this AI model can be found below:

  • https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing

  • Note: this model in particular was fine-tuned on an H100 rented from Tensordock.com, using their PyTorch OS image. To use the Unsloth code with TensorDock, you need to run the following code (below) to reinstall the drivers before Unsloth will work. After running the code below, your virtual machine will reboot and you will have to SSH back into it; then you can run the normal Unsloth code in order.

# Check Current Size
!df -h /dev/shm

# Increase Size Temporarily
!sudo mount -o remount,size=16G /dev/shm

# Increase Size Permanently
!echo "tmpfs /dev/shm tmpfs defaults,size=16G 0 0" | sudo tee -a /etc/fstab

# Remount /dev/shm
!sudo mount -o remount /dev/shm


# Verify the Changes
!df -h /dev/shm

# Check the installed CUDA toolkit and the CUDA version PyTorch was built against
!nvcc --version
!python -c "import torch; print(torch.version.cuda)"

# Distributed-training / NCCL debug environment variables.
# NOTE: in a notebook, every "!" line runs in its own shell, so these exports do not
# persist between cells; set them with %env or inside the training script instead.
!export TORCH_DISTRIBUTED_DEBUG=DETAIL
!export NCCL_DEBUG=INFO
!export NCCL_DEBUG_SUBSYS=ALL
!export NCCL_P2P_LEVEL=NVL
!export TORCHELASTIC_ERROR_FILE=/PATH/TO/torcherror.log

# Put the CUDA toolkit binaries and libraries on the path
!export PATH=/usr/local/cuda/bin:$PATH
!export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
# Remove the existing NVIDIA driver and CUDA packages
!sudo apt-get remove --purge -y '^nvidia-.*'
!sudo apt-get remove --purge -y '^cuda-.*'
!sudo apt-get autoremove -y
!sudo apt-get autoclean -y

# Install a known-good driver and CUDA toolkit
!sudo apt-get update -y
!sudo apt-get install -y nvidia-driver-535 cuda-12-1

# Add the graphics-drivers PPA and upgrade to the newest available driver
!sudo apt-get install -y software-properties-common
!sudo add-apt-repository ppa:graphics-drivers/ppa -y
!sudo apt-get update -y
!latest_driver=$(apt-cache search '^nvidia-driver-[0-9]' | grep -oP 'nvidia-driver-\K[0-9]+' | sort -n | tail -1) && sudo apt-get install -y nvidia-driver-$latest_driver

# Reboot so the new driver takes effect (SSH back in afterwards)
!sudo reboot
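
Once the drivers are reinstalled and the VM is back up, the fine-tuning itself runs from the Colab linked above. As a rough sketch of what an Unsloth + QLoRA setup generally looks like (the dataset name, sequence length, and hyperparameters below are illustrative placeholders, not the exact settings used for this model):

# Minimal Unsloth QLoRA sketch (assumes: pip install unsloth trl datasets).
# Names marked "placeholder" are illustrative, not the authors' actual values.
# Argument names follow the trl versions used in Unsloth's example notebooks.
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit for QLoRA
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2-7B-Instruct",  # base model this card fine-tunes
    max_seq_length=8192,                  # placeholder sequence length
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,
)

dataset = load_dataset("your/instruct-dataset", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column holding pre-formatted ChatML text
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,     # the preview above was trained for 1 epoch
        learning_rate=2e-4,
        bf16=torch.cuda.is_bf16_supported(),
        fp16=not torch.cuda.is_bf16_supported(),
        output_dir="outputs",
    ),
)
trainer.train()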

Join the Replete-AI Discord! We are a great and loving community!

GGUF
Model size: 7.62B params
Architecture: qwen2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
