awni committed
Commit 54c5e53
1 Parent(s): 3181180

Update README.md

Files changed (1):
  1. README.md +40 -9

README.md CHANGED
@@ -16,28 +16,59 @@ These are pre-converted weights and ready to be used in the example scripts.
 
  # Quick start for LLMs
 
- Check out the MLX examples repo:
 
  ```
- git clone git@github.com:ml-explore/mlx-examples.git
- cd mlx-examples/hf_llm
  ```
 
- Install the requirements:
 
  ```
- pip install -r requirements.txt
  ```
 
- Generate:
 
  ```
- python generate.py --hf-path mistralai/Mistral-7B-v0.1 --prompt "hello"
  ```
 
- To upload a new model (for example a 4-bit quantized Mistral-7B), do:
 
  ```
- python convert.py --hf-path mistralai/Mistral-7B-v0.1 -q --upload-name mistral-v0.1-4bit
  ```
 
 
  # Quick start for LLMs
 
+ Install `mlx-lm`:
 
  ```
+ pip install mlx-lm
  ```
 
+ You can use `mlx-lm` from the command line. For example:
 
  ```
+ python -m mlx_lm.generate --model mistralai/Mistral-7B-v0.1 --prompt "hello"
  ```
 
+ This will download a Mistral 7B model from the Hugging Face Hub and generate
+ text using the given prompt.
+
+ For a full list of options run:
+
+ ```
+ python -m mlx_lm.generate --help
+ ```
+
+ To quantize a model from the command line run:
+
+ ```
+ python -m mlx_lm.convert --hf-path mistralai/Mistral-7B-v0.1 -q
+ ```
+
+ For more options run:
 
  ```
+ python -m mlx_lm.convert --help
  ```
 
+ You can upload new models to Hugging Face by specifying `--upload-repo` to
+ `convert`. For example, to upload a quantized Mistral-7B model to the
+ [MLX Hugging Face community](https://huggingface.co/mlx-community) you can do:
 
  ```
+ python -m mlx_lm.convert \
+     --hf-path mistralai/Mistral-7B-v0.1 \
+     -q \
+     --upload-repo mlx-community/my-4bit-mistral
  ```
 
+ For more details on the API, check out the full [README](https://github.com/ml-explore/mlx-examples/tree/main/llms).
+
+ ### Other Examples:
+
+ For more examples, visit the [MLX Examples](https://github.com/ml-explore/mlx-examples) repo. The repo includes examples of:
+
+ - Parameter-efficient fine-tuning with LoRA
+ - Speech recognition with Whisper
+ - Image generation with Stable Diffusion
+
+ and many other examples of different machine learning applications and algorithms.
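
The new quick start points to the full README for the `mlx-lm` Python API. For context, here is a minimal sketch of what using that API looks like, assuming the `load` and `generate` helpers described in the linked README (exact argument names may vary between `mlx-lm` versions):

```python
# Minimal sketch of the mlx-lm Python API referenced above.
# Assumes the `load` and `generate` helpers described in the linked README;
# argument names may differ between mlx-lm versions.
from mlx_lm import load, generate

# Download (or reuse a cached copy of) the model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("mistralai/Mistral-7B-v0.1")

# Generate a completion for the prompt; verbose=True streams tokens as they are produced.
text = generate(model, tokenizer, prompt="hello", verbose=True)
print(text)
```

A quantized model uploaded with `--upload-repo` (for example, the `mlx-community/my-4bit-mistral` repo used above) can be used the same way by passing that repo name to `load` or to `--model` on the command line.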