MaziyarPanahi committed
Commit d13e1c6 · verified · 1 parent: d43409c

Upload folder using huggingface_hub (#1)


- 5df8b734793b9337a759ac0dee878637e990f6b5d935c526969314bef4a6bb73 (2a2294192e23b19c0db8bfdf807c74502bfd08e3)
- 28c5b68f26136f1629503c61da6bae27b389cc8358879a2d8cb70c3d7972ad38 (edb12695a9576b72450a3f927437686860bea0cc)
- bc0a828c36dc35a28ea9c111b24c6281286817f49c1f2c4864f8dc00541e98b7 (4b4bf76a0afe75dd3706b26ac48e87400462aeaf)
- 01bf964517536b461d2dc981bdf6cb2454e4bdfe8a943b8c04376810fd6662cb (553db9f78e08d8efa079c1dd3cec12532c037a9c)
- 68cd55d7bc022ef6b571e92a88d26e88ca3b763d0e6ad82b26dd11efc9e2f790 (0fc3ce97a46586c9952a2c62f3de312dce5af86c)
- a297b29b9fc991f213eee1f1b32f55b10c882298c113f78fe70877ead2ecdc5c (b91e70ba5d15ab230d845d4a1ecbb26019b65b77)

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Lacerta-Opus-14B-Elite8.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Lacerta-Opus-14B-Elite8.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Lacerta-Opus-14B-Elite8.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Lacerta-Opus-14B-Elite8.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Lacerta-Opus-14B-Elite8.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ Lacerta-Opus-14B-Elite8-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
Lacerta-Opus-14B-Elite8-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:934ef1efb8a7d89bdc02e6ce5085f3c69e4df5c3592ba3d03605d2081f837ef0
+ size 8563586
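Each of the ADDED entries in this commit is a Git LFS pointer stub rather than the binary itself: a three-line text file recording the LFS spec version, the SHA-256 of the real object, and its size in bytes. The `.gitattributes` rules above are what route these files through LFS. A minimal sketch of how such a pointer text can be produced locally (the path assumes a hypothetical local copy of the file):

```python
import hashlib
import os

def lfs_pointer(path: str, chunk_size: int = 1 << 20) -> str:
    """Build the three-line Git LFS pointer text for a local file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so multi-gigabyte GGUF files never sit in memory.
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{digest.hexdigest()}\n"
        f"size {os.path.getsize(path)}\n"
    )

# Hypothetical local copy; prints a pointer like the one above.
print(lfs_pointer("Lacerta-Opus-14B-Elite8-GGUF_imatrix.dat"))
```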
Lacerta-Opus-14B-Elite8.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f9388bb5a90f741abdd1ab44e95147dfd18cdcfb9934e05af03b8143a490308
+ size 10505784352
Lacerta-Opus-14B-Elite8.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87e9704668752e39d122efa741ab9a75f11b2589d78eda43603df886bb905746
+ size 10263464992
Lacerta-Opus-14B-Elite8.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec80efd51e350bdd917077f1962a18ec74d8b20980f50869c89a77ea8f0e4cb7
+ size 12121323616
Lacerta-Opus-14B-Elite8.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:48303a3ce10d06a83f66e213fd3a0c80d51aca92c9cd827c02f0f93d07589e9e
+ size 15697247968
Lacerta-Opus-14B-Elite8.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9907eae89b0453b3b8cebb259c966888e48cd8dd901a394ffd87fe02742a6f2
+ size 29539535712
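A side note on the sizes recorded above: since fp16 stores roughly two bytes per weight, the file sizes alone let you estimate the parameter count and the effective bits per weight of each quant. A back-of-the-envelope sketch (approximate, since GGUF metadata overhead is ignored):

```python
# File sizes in bytes, copied from the LFS pointers in this commit.
sizes = {
    "Q5_K_S": 10_263_464_992,
    "Q5_K_M": 10_505_784_352,
    "Q6_K":   12_121_323_616,
    "Q8_0":   15_697_247_968,
    "fp16":   29_539_535_712,
}

# fp16 stores ~2 bytes per weight, so the fp16 size implies the parameter
# count (slightly high, since metadata overhead is ignored).
n_params = sizes["fp16"] / 2
print(f"~{n_params / 1e9:.2f}B parameters")

for name, size in sizes.items():
    print(f"{name}: ~{size * 8 / n_params:.2f} bits/weight")
```

The results land near the nominal llama.cpp figures (about 5.7 bits for Q5_K_M, 6.6 for Q6_K, 8.5 for Q8_0), which makes this a quick way to confirm a download was not truncated.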
README.md ADDED
@@ -0,0 +1,45 @@
+ ---
+ base_model: prithivMLmods/Lacerta-Opus-14B-Elite8
+ inference: false
+ model_creator: prithivMLmods
+ model_name: Lacerta-Opus-14B-Elite8-GGUF
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ ---
+ # [MaziyarPanahi/Lacerta-Opus-14B-Elite8-GGUF](https://huggingface.co/MaziyarPanahi/Lacerta-Opus-14B-Elite8-GGUF)
+ - Model creator: [prithivMLmods](https://huggingface.co/prithivMLmods)
+ - Original model: [prithivMLmods/Lacerta-Opus-14B-Elite8](https://huggingface.co/prithivMLmods/Lacerta-Opus-14B-Elite8)
+
+ ## Description
+ [MaziyarPanahi/Lacerta-Opus-14B-Elite8-GGUF](https://huggingface.co/MaziyarPanahi/Lacerta-Opus-14B-Elite8-GGUF) contains GGUF-format model files for [prithivMLmods/Lacerta-Opus-14B-Elite8](https://huggingface.co/prithivMLmods/Lacerta-Opus-14B-Elite8).
+
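Since the repo was uploaded with huggingface_hub, single files can be fetched the same way. A minimal sketch (the Q5_K_M quant is just an example choice):

```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file into the local Hugging Face cache and returns its path.
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Lacerta-Opus-14B-Elite8-GGUF",
    filename="Lacerta-Opus-14B-Elite8.Q5_K_M.gguf",
)
print(model_path)
```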
+ ### About GGUF
+
+ GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server (see the sketch after this list).
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available in beta as of November 27th, 2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework focused on performance (including GPU support) and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
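To make the list above concrete, here is a minimal llama-cpp-python sketch for loading one of the quants from this repo; the context size, prompt, and sampling settings are illustrative, not from the original card:

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx is an illustrative context-window choice.
llm = Llama(
    model_path="Lacerta-Opus-14B-Elite8.Q5_K_M.gguf",
    n_ctx=4096,
)

# Run a single completion and print the generated text.
out = llm("Briefly explain the GGUF file format.", max_tokens=128)
print(out["choices"][0]["text"])
```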
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.