
🧪 Part of an Experiment

This model investigates the effect of changing LoRA rank on an otherwise identical tune. The learning rate was also increased from 8e-6 to 2e-5.

Dumpling-Qwen2.5-7B-1k-r64-2e-5

Fine-tuned from nbeerbower/EVA-abliterated-Qwen2.5-7B.

Method

ORPO tune with QLoRA on 2x RTX 3090 for 2 epochs, using the configs below.

import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# QLoRA config: load the base model in 4-bit NF4 with double quantization
torch_dtype = torch.bfloat16  # BF16, matching the published tensor type
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)

# LoRA config: rank-64 adapters on all attention and MLP projections
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj']
)
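
For context, a minimal sketch of how these configs could feed into TRL's ORPOTrainer. Only the learning rate (2e-5), epoch count (2), and the two configs above come from this card; the dataset name is a placeholder and the batch-size settings are assumptions, not the exact script used.

# ORPO training loop (illustrative sketch)
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "nbeerbower/EVA-abliterated-Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,  # QLoRA config from above
    torch_dtype=torch.bfloat16,
)

# ORPO expects a preference dataset with "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("your/preference-dataset", split="train")  # placeholder

orpo_args = ORPOConfig(
    learning_rate=2e-5,               # per the card
    num_train_epochs=2,               # per the card
    per_device_train_batch_size=1,    # assumed; not stated in the card
    gradient_accumulation_steps=8,    # assumed
    bf16=True,
    output_dir="./dumpling-qwen2.5-7b-1k-r64-2e-5",
)

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # use tokenizer= on older TRL versions
    peft_config=peft_config,     # LoRA config from above
)
trainer.train()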