---
pipeline_tag: conversational
tags:
  - vicuna
  - llama
  - text-generation-inference
---

Converted for use with llama.cpp

- 4-bit quantized
- Needs ~10GB of CPU RAM
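
A minimal sketch of running the model with the llama.cpp CLI. The model filename and the Vicuna-style prompt are assumptions; substitute the actual .bin file from this repo and adjust thread/token counts for your machine:

```sh
# Sketch: run the 4-bit quantized model on CPU with llama.cpp.
# The binary (./main) and model filename are assumptions; point -m at the
# .bin file downloaded from this repo.
./main -m ./ggml-vicuna-4bit.bin -t 8 -n 256 \
  -p "### Human: Hello, who are you?\n### Assistant:"
```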
