zackli4ai committed on
Commit ae9f583
1 Parent(s): bdbedc2

Update README.md

Files changed (1): README.md (+135 −3)

README.md CHANGED
@@ -1,3 +1,135 @@
- ---
- license: cc-by-4.0
- ---
---
license: cc-by-nc-4.0
base_model: google/gemma-2b
model-index:
- name: Octopus-V2-2B
  results: []
tags:
- function calling
- on-device language model
- android
inference: false
spaces: false
language:
- en
---

# Quantized Octopus V2: On-device language model for super agent

This repo includes two types of quantized models, **GGUF** and **AWQ**, for our Octopus V2 model at [NexaAIDev/Octopus-v2](https://huggingface.co/NexaAIDev/Octopus-v2).

# GGUF Quantization

## Run with [Ollama](https://github.com/ollama/ollama)

```bash
ollama run NexaAIDev/octopus-v2-Q4_K_M
```

Input example:

```python
def get_trending_news(category=None, region='US', language='en', max_results=5):
    """
    Fetches trending news articles based on category, region, and language.

    Parameters:
    - category (str, optional): News category to filter by; defaults to None for all categories.
    - region (str, optional): ISO 3166-1 alpha-2 country code for region-specific news; defaults to 'US'.
    - language (str, optional): ISO 639-1 language code for article language; defaults to 'en'.
    - max_results (int, optional): Maximum number of articles to return; defaults to 5.

    Returns:
    - list[str]: A list of strings, each representing an article. Each string contains the article's heading and URL.
    """
```
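A function description like the one above is combined with the user query into a single prompt string. As a sketch, here is a small helper that assembles a prompt in the style of the AWQ example in this README; the exact template the model was trained on may differ, so treat the wording as an assumption:

```python
def build_prompt(function_doc: str, query: str) -> str:
    """Assemble a function-calling prompt: instruction, function
    description, user query, and a trailing 'Response:' cue.
    Illustrative only; not an official Octopus V2 template."""
    instruction = (
        "Below is the query from the users, please call the correct "
        "function and generate the parameters to call the function."
    )
    return f"{instruction}\n\n{function_doc}\n\nQuery: {query} \n\nResponse:"


prompt = build_prompt(
    "def get_trending_news(category=None, region='US', language='en', max_results=5): ...",
    "Show me the top tech headlines in the UK",
)
print(prompt)
```

The same string can then be passed to `ollama run` or to the AWQ inference script below.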

# AWQ Quantization

Input Python example:

```python
import time

import numpy as np
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer


def inference(input_text):
    tokens = tokenizer(
        input_text,
        return_tensors='pt'
    ).input_ids.cuda()

    start_time = time.time()
    generation_output = model.generate(
        tokens,
        do_sample=True,
        temperature=0.7,
        top_p=0.95,
        top_k=40,
        max_new_tokens=512
    )
    end_time = time.time()

    res = tokenizer.decode(generation_output[0])
    # Keep only the text generated after the prompt
    res = res.split(input_text)
    latency = end_time - start_time
    output_tokens = tokenizer.encode(res[-1])
    num_output_tokens = len(output_tokens)
    throughput = num_output_tokens / latency

    return {"output": res[-1], "latency": latency, "throughput": throughput}


model_id = "path/to/Octopus-v2-AWQ"
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=False)

prompts = ["Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: Can you take a photo using the back camera and save it to the default location? \n\nResponse:"]

avg_throughput = []
for prompt in prompts:
    out = inference(prompt)
    avg_throughput.append(out["throughput"])
    print("nexa model result:\n", out["output"])

print("avg throughput:", np.mean(avg_throughput))
```
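The latency/throughput bookkeeping in the script above is independent of the model and easy to factor out. A minimal, model-free sketch of the same arithmetic (token count and timestamps here are placeholder values):

```python
def throughput_stats(num_output_tokens: int, start_time: float, end_time: float) -> dict:
    """Compute decoding latency (seconds) and throughput (tokens/s),
    mirroring the measurement in the AWQ example above."""
    latency = end_time - start_time
    if latency <= 0:
        raise ValueError("end_time must be after start_time")
    return {"latency": latency, "throughput": num_output_tokens / latency}


stats = throughput_stats(128, start_time=10.0, end_time=12.0)
print(stats)  # {'latency': 2.0, 'throughput': 64.0}
```

This is how the tokens-per-second figures in the table below were measured in spirit; the exact benchmark setup (hardware, prompt length) is not documented here.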

## Quantized GGUF & AWQ Models

| Name                   | Quant method | Bits | Size     | Throughput (t/s) | Use Cases                             |
| ---------------------- | ------------ | ---- | -------- | ---------------- | ------------------------------------- |
| Octopus-v2-AWQ         | AWQ          | 4    | 3.00 GB  | 63.83            | fast, high quality, recommended       |
| Octopus-v2-Q2_K.gguf   | Q2_K         | 2    | 1.16 GB  | 57.81            | fast but high loss, not recommended   |
| Octopus-v2-Q3_K.gguf   | Q3_K         | 3    | 1.38 GB  | 57.81            | strongly not recommended              |
| Octopus-v2-Q3_K_S.gguf | Q3_K_S       | 3    | 1.19 GB  | 52.13            | strongly not recommended              |
| Octopus-v2-Q3_K_M.gguf | Q3_K_M       | 3    | 1.38 GB  | 58.67            | moderate loss, not recommended        |
| Octopus-v2-Q3_K_L.gguf | Q3_K_L       | 3    | 1.47 GB  | 56.92            | not recommended                       |
| Octopus-v2-Q4_0.gguf   | Q4_0         | 4    | 1.55 GB  | 68.80            | moderate speed, recommended           |
| Octopus-v2-Q4_1.gguf   | Q4_1         | 4    | 1.68 GB  | 68.09            | moderate speed, recommended           |
| Octopus-v2-Q4_K.gguf   | Q4_K         | 4    | 1.63 GB  | 64.70            | moderate speed, recommended           |
| Octopus-v2-Q4_K_S.gguf | Q4_K_S       | 4    | 1.56 GB  | 62.16            | fast and accurate, highly recommended |
| Octopus-v2-Q4_K_M.gguf | Q4_K_M       | 4    | 1.63 GB  | 64.74            | fast, recommended                     |
| Octopus-v2-Q5_0.gguf   | Q5_0         | 5    | 1.80 GB  | 64.80            | fast, recommended                     |
| Octopus-v2-Q5_1.gguf   | Q5_1         | 5    | 1.92 GB  | 63.42            | large; prefer Q4                      |
| Octopus-v2-Q5_K.gguf   | Q5_K         | 5    | 1.84 GB  | 61.28            | large, recommended                    |
| Octopus-v2-Q5_K_S.gguf | Q5_K_S       | 5    | 1.80 GB  | 62.16            | large, recommended                    |
| Octopus-v2-Q5_K_M.gguf | Q5_K_M       | 5    | 1.71 GB  | 61.54            | large, recommended                    |
| Octopus-v2-Q6_K.gguf   | Q6_K         | 6    | 2.06 GB  | 55.94            | very large, not recommended           |
| Octopus-v2-Q8_0.gguf   | Q8_0         | 8    | 2.67 GB  | 56.35            | very large, not recommended           |
| Octopus-v2-f16.gguf    | f16          | 16   | 5.02 GB  | 36.27            | extremely large                       |
| Octopus-v2.gguf        |              |      | 10.00 GB |                  |                                       |

_Quantized with llama.cpp_
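To make the size/speed trade-off in the table concrete, here is a small sketch that picks the highest-throughput variant fitting a disk budget. The figures are copied from a few rows of the table above; this is an illustrative helper, not an official selection tool:

```python
# (name, bits, size_gb, tokens_per_sec) — values copied from the table above
VARIANTS = [
    ("Octopus-v2-AWQ", 4, 3.00, 63.83),
    ("Octopus-v2-Q4_0.gguf", 4, 1.55, 68.80),
    ("Octopus-v2-Q4_K_M.gguf", 4, 1.63, 64.74),
    ("Octopus-v2-Q5_K_M.gguf", 5, 1.71, 61.54),
    ("Octopus-v2-Q8_0.gguf", 8, 2.67, 56.35),
]


def fastest_under(size_budget_gb: float):
    """Return the highest-throughput variant whose file size fits
    the budget, or None if nothing fits."""
    candidates = [v for v in VARIANTS if v[2] <= size_budget_gb]
    return max(candidates, key=lambda v: v[3], default=None)


print(fastest_under(2.0)[0])  # Octopus-v2-Q4_0.gguf
```

Note that throughput alone ignores accuracy: per the table, Q4_K_S and Q4_K_M trade a little speed for better quality than Q4_0.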
131
+
132
+
133
+ **Acknowledgement**:
134
+ We sincerely thank our community members, [Mingyuan](https://huggingface.co/ThunderBeee), [Zoey](https://huggingface.co/ZY6), [Brian](https://huggingface.co/JoyboyBrian), [Perry](https://huggingface.co/PerryCheng614), [Qi](https://huggingface.co/qiqiWav), [David](https://huggingface.co/Davidqian123) for their extraordinary contributions to this quantization effort.
135
+