Fun fact: you can get any DeepSeek-R1-Qwen model **abliterated** by applying one of these LoRA adapters (GGUF available!): ngxson/extracted-lora-mergekit-677d5c3eea0b6a7661201846
Check out my collection of pre-made GGUF LoRA adapters! They let you use both the normal and the abliterated versions of popular models like Llama, Qwen, etc. without doubling your VRAM usage: ngxson/gguf_lora_collection
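The VRAM saving can be sketched with back-of-the-envelope numbers. The sizes below are illustrative assumptions (a ~7B model at Q8_0 is roughly 8 GB; an extracted LoRA adapter is typically well under 100 MB), not measurements of any specific model:

```python
# Illustrative sizes only (assumptions, not measured values).
base_model_gb = 8.0   # one copy of the base weights in VRAM
adapter_gb = 0.1      # one GGUF LoRA adapter

# Keeping two full models (normal + abliterated) vs. one base + adapter:
two_full_models = 2 * base_model_gb
base_plus_adapter = base_model_gb + adapter_gb

saving = two_full_models - base_plus_adapter
print(f"{two_full_models:.1f} GB vs {base_plus_adapter:.1f} GB "
      f"(saves {saving:.1f} GB)")
```

Since the adapter only stores a low-rank delta on top of the shared base weights, adding more adapter variants costs megabytes, not gigabytes.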
Extracted LoRA (mergekit): PEFT-compatible LoRA adapters produced by mergekit-extract-lora
Extracted LoRA - GGUF version: redirection to the ggml-org collection
- ngxson/LoRA-Qwen2.5-1.5B-Instruct-abliterated
- ngxson/LoRA-Qwen2.5-3B-Instruct-abliterated
- ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
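A minimal sketch of what such an extracted adapter encodes, using toy 2x2 matrices in plain Python (no real model weights involved): mergekit-extract-lora factors the weight difference between a fine-tuned model and its base into two low-rank matrices A and B, so the tuned weight is approximately W_base + (alpha / r) * B @ A.

```python
# Toy illustration of a rank-1 LoRA update (assumed shapes/values,
# not taken from any real adapter).
def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

W_base = [[1.0, 0.0], [0.0, 1.0]]  # base weight matrix (toy)
B = [[1.0], [2.0]]                 # low-rank factor, shape (2, 1)
A = [[0.5, 0.5]]                   # low-rank factor, shape (1, 2)
scale = 1.0                        # alpha / r

BA = matmul(B, A)                  # low-rank delta: [[0.5, 0.5], [1.0, 1.0]]
W_merged = [[W_base[i][j] + scale * BA[i][j] for j in range(2)]
            for i in range(2)]
print(W_merged)                    # [[1.5, 0.5], [1.0, 2.0]]
```

Because B and A together are far smaller than the full delta matrix for real model dimensions, the adapter file stays tiny while still recovering most of the fine-tune's effect.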
MiniThinky: extra small reasoning models (my first attempt at building reasoning models)
Llama 3.2 Reasoning WebGPU: a small but powerful reasoning LLM that runs in your browser
- ngxson/MiniThinky-v2-1B-Llama-3.2 (Text Generation)
- ngxson/MiniThinky-v2-1B-Llama-3.2-Q8_0-GGUF
- ngxson/MiniThinky-1B-Llama-3.2 (Text Generation)