schroneko committed · commit 583ceaf · verified · 1 parent: b0003db

Upload README.md with huggingface_hub

Files changed (1): README.md (+54 −0)
README.md ADDED
---
license: other
license_name: llm-jp-3-172b-instruct3-tou
license_link: https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3/raw/main/LICENSE
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
tags:
- mlx
base_model: llm-jp/llm-jp-3-172b-instruct3
---

# schroneko/llm-jp-3-172b-instruct3-Q4-mlx

The model [schroneko/llm-jp-3-172b-instruct3-Q4-mlx](https://huggingface.co/schroneko/llm-jp-3-172b-instruct3-Q4-mlx) was converted to MLX format from [llm-jp/llm-jp-3-172b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3) using mlx-lm version **0.20.5**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and its tokenizer.
model, tokenizer = load("schroneko/llm-jp-3-172b-instruct3-Q4-mlx")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in a user turn
# and add the generation prompt; otherwise the raw string is used as-is.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
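The `hasattr`/`chat_template` guard above falls back to the raw prompt string when a tokenizer defines no chat template. A minimal, runnable sketch of that branching, using a hypothetical stand-in tokenizer (not the real mlx-lm one, which requires downloading the model):

```python
class StubTokenizer:
    """Hypothetical stand-in mimicking the two attributes the guard checks."""

    def __init__(self, chat_template):
        self.chat_template = chat_template

    def apply_chat_template(self, messages, tokenize=False, add_generation_prompt=True):
        # Toy template: wrap each turn in role markers.
        text = "".join(f"<|{m['role']}|>{m['content']}" for m in messages)
        return text + "<|assistant|>" if add_generation_prompt else text


def build_prompt(tokenizer, prompt):
    """Same branching as the snippet above: use the chat template if present."""
    if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
        messages = [{"role": "user", "content": prompt}]
        prompt = tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
    return prompt


print(build_prompt(StubTokenizer("{%...%}"), "hello"))  # templated path
print(build_prompt(StubTokenizer(None), "hello"))       # raw-prompt fallback
```

With the toy template the first call yields a role-marked prompt, while the second passes `"hello"` through unchanged, which is exactly what the real snippet does for tokenizers without a chat template.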