How do I run this model on a MacBook Pro M3 Pro?
I have installed the required dependencies, but when I try to run the model with the command: mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
it fails with the error below. How should I run this model? Please help.
Traceback (most recent call last):
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/bin/mistral-chat", line 10, in <module>
sys.exit(mistral_chat())
^^^^^^^^^^^^^^
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/mistral_inference/main.py", line 179, in mistral_chat
fire.Fire(interactive)
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/fire/core.py", line 143, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/fire/core.py", line 477, in _Fire
component, remaining_args = _CallAndUpdateTrace(
^^^^^^^^^^^^^^^^^^^^
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/mistral_inference/main.py", line 64, in interactive
transformer = Transformer.from_folder(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/mistral_inference/model.py", line 409, in from_folder
return model.to(device=device, dtype=dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1173, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 779, in _apply
module._apply(fn)
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 804, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1159, in convert
return t.to(
^^^^^
File "/Users/dt237131/Documents/HACKATHON/PycharmProjects/Project JTT/.venv/lib/python3.12/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
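For context: the traceback ends because the loader tries to move the model to a CUDA device, and macOS arm64 builds of PyTorch ship without CUDA support. A minimal sketch (assuming only that PyTorch is installed) to confirm which backends this torch build actually offers:

```python
import torch

# macOS arm64 wheels of PyTorch are built without CUDA, which is why
# .to(device="cuda") raises "Torch not compiled with CUDA enabled".
print("CUDA available:", torch.cuda.is_available())

# On Apple Silicon, the GPU is exposed through the MPS backend instead.
print("MPS available:", torch.backends.mps.is_available())
```

If CUDA shows as unavailable here, the traceback above is expected: `from_folder` in the traceback calls `model.to(device=device, ...)`, which suggests it takes a `device` argument that could be pointed at "cpu" or "mps" instead, though whether mistral-chat itself exposes a flag for that is version-dependent.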