dev-slx committed on
Commit f9e3420 · verified · 1 Parent(s): 1894e9a

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -35,16 +35,16 @@ In this version, we employed our new, improved decomposable ELM techniques on a
 There are three ELM Turbo slices derived from the `Meta-Llama-3.1-8B-Instruct` model:
 1. `slicexai/Llama3.1-elm-turbo-3B-instruct` (3B params)
 2. `slicexai/Llama3.1-elm-turbo-4B-instruct`(4B params)
-3. `slicexai/Llama3.1-elm-turbo-6B-instruct` (6B params)
+**3. `slicexai/Llama3.1-elm-turbo-6B-instruct` (6B params)**

 Make sure to update your transformers installation via pip install --upgrade transformers.

-Example - To run the `slicexai/Llama3.1-elm-turbo-4B-instruct`
+Example - To run the `slicexai/Llama3.1-elm-turbo-6B-instruct`
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
 import torch

-elm_turbo_model = "slicexai/Llama3.1-elm-turbo-4B-instruct"
+elm_turbo_model = "slicexai/Llama3.1-elm-turbo-6B-instruct"
 model = AutoModelForCausalLM.from_pretrained(
 elm_turbo_model,
 device_map="cuda",
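The hunk above truncates the loading snippet mid-call. For orientation, here is a minimal self-contained sketch of how it plausibly continues; the helper name `load_elm_turbo` and the `torch_dtype` choice are my assumptions, not part of the commit. The heavyweight imports live inside the function so the module can be loaded without `transformers`/`torch` installed.

```python
# Hedged sketch completing the truncated from_pretrained(...) call above.
# Loading requires a CUDA device and downloads the model weights.

elm_turbo_model = "slicexai/Llama3.1-elm-turbo-6B-instruct"


def load_elm_turbo(model_id: str = elm_turbo_model):
    """Load an ELM Turbo slice and its tokenizer onto a CUDA device."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="cuda",           # matches the README snippet
        torch_dtype=torch.bfloat16,  # assumption: bf16 to reduce memory use
    )
    return model, tokenizer
```

Usage would then be the familiar tokenize → `model.generate(...)` → decode loop from the transformers API.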