---
base_model: ValiantLabs/Esper-70b
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
quantized_by: brooketh
tags:
- esper
- dev-ops
- developer
- code
- code-instruct
- valiant
- valiant-labs
- code-llama
- llama
- llama-2
- llama-2-chat
- 70b
---
**The official library of GGUF format models for use in the local AI chat app, Faraday.dev.**

Download Faraday at [faraday.dev](https://faraday.dev) to get started.

Request additional models at r/LLM_Quants.
***
# Esper 70b
- **Creator:** [ValiantLabs](https://huggingface.co/ValiantLabs/)
- **Original:** [Esper 70b](https://huggingface.co/ValiantLabs/Esper-70b)
- **Date Created:** 2024-03-12
- **Trained Context:** 4096 tokens
- **Description:** Esper 70b is a CodeLlama-based assistant with a DevOps focus, specializing in scripting-language code, Terraform files, Dockerfiles, YAML, and more. Not recommended for roleplay.

## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Faraday.dev. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be run efficiently on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantization levels. Quantization reduces the precision of the model weights by changing the number of bits used for each weight; for example, a 70b model stored at 16 bits per weight needs roughly 140 GB for its weights alone, while a 4-bit quantization brings that down to roughly 35-40 GB.

***
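Faraday.dev loads GGUF files and manages the CPU/GPU split automatically, but the same files can also be opened directly with the llama-cpp-python bindings. The sketch below is illustrative only: the filename `Esper-70b.Q4_K_M.gguf` is a hypothetical local path, and `n_gpu_layers` is the knob that controls how many layers are offloaded to the GPU.

```python
from llama_cpp import Llama

# Load a quantized GGUF file and split work between CPU and GPU.
llm = Llama(
    model_path="Esper-70b.Q4_K_M.gguf",  # path to the downloaded GGUF file (assumed name)
    n_ctx=4096,                          # matches the model's trained context length
    n_gpu_layers=40,                     # layers offloaded to the GPU; 0 runs entirely on CPU
)

# Simple completion-style call; a chat app like Faraday.dev wraps this kind of inference in its UI.
output = llm(
    "Write a Dockerfile for a small Python web service.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```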