---
library_name: transformers
tags:
  - generated_from_trainer
  - llama-cpp
  - gguf-my-repo
license: apache-2.0
language:
  - en
base_model: EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0
datasets:
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
  - Nopm/Opus_WritingStruct
  - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
  - Gryphe/Sonnet3.5-Charcard-Roleplay
  - Gryphe/ChatGPT-4o-Writing-Prompts
  - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
  - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
  - nothingiisreal/Reddit-Dirty-And-WritingPrompts
  - allura-org/Celeste-1.x-data-mixture
  - cognitivecomputations/dolphin-2.9.3
model-index:
  - name: EVA-Qwen2.5-1.5B-FFT-v0.0
    results: []
---

TEST! Very fast for CPU/edge use. Work in progress; so far it performs well for editing and for responses with sound reasoning and good clarity.

# fuzzy-mittenz/Eva-E_Swarmth-Q2.5-1.5B-v0.0-Q5_K_S-GGUF

![eva-e](eva-e.png)

This model was converted to GGUF format from [EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-1.5B-v0.0) using llama.cpp. Refer to the original model card for more details on the model.
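For reference, a conversion of this kind is typically done with llama.cpp's HF-to-GGUF converter followed by its quantization tool. The sketch below is illustrative only; the local paths and output file names are assumptions, not the exact commands used for this repo.

```bash
# Assumes llama.cpp is installed and the original model has been downloaded
# to ./EVA-Qwen2.5-1.5B-v0.0 (illustrative path).

# Convert the Hugging Face checkpoint to a full-precision (F16) GGUF file
python convert_hf_to_gguf.py ./EVA-Qwen2.5-1.5B-v0.0 \
  --outfile eva-qwen2.5-1.5b-v0.0-f16.gguf --outtype f16

# Quantize the F16 GGUF down to Q5_K_S
llama-quantize eva-qwen2.5-1.5b-v0.0-f16.gguf \
  eva-qwen2.5-1.5b-v0.0-q5_k_s.gguf Q5_K_S
```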

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.
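For example, using llama.cpp's built-in Hugging Face download flags (the `--hf-file` name below is an assumption about how the quant in this repo is named; check the repo's file listing if it differs):

```bash
# Run a one-off prompt with the CLI
llama-cli --hf-repo fuzzy-mittenz/Eva-E_Swarmth-Q2.5-1.5B-v0.0-Q5_K_S-GGUF \
  --hf-file eva-qwen2.5-1.5b-v0.0-q5_k_s.gguf \
  -p "The meaning to life and the universe is"
```

```bash
# Or serve the model over an OpenAI-compatible HTTP endpoint
llama-server --hf-repo fuzzy-mittenz/Eva-E_Swarmth-Q2.5-1.5B-v0.0-Q5_K_S-GGUF \
  --hf-file eva-qwen2.5-1.5b-v0.0-q5_k_s.gguf \
  -c 2048
```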