Could we run an XOR-converted model using OpenAssistant?

#7
by pevogam

Hi all,

The XOR conversion process is fully documented, so one can obtain the weights that way and run the model with various minimalistic setups. But how can we actually run such a converted model with OpenAssistant itself?
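
For context, by a "minimalistic setup" I mean loading the converted weights directly with transformers, roughly like the sketch below. The local path and the prompt format are just my assumptions about where the converted weights live and how the SFT checkpoint expects prompts, so adjust both as needed.

```python
# Standalone sanity check that the XOR-converted weights load outside OpenAssistant.
# The directory path is wherever the conversion script wrote the merged weights
# (adjust to your own location); requires transformers >= 4.28 and accelerate.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "/data/oasst-sft-7-llama-30b-converted"  # local, XOR-converted weights

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Prompt format assumed from the OASST SFT models; adjust if your checkpoint differs.
prompt = "<|prompter|>Hello, who are you?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```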

I have placed the converted weights in open-assistant-repo/data/models--OpenAssistant--oasst-sft-7-llama-30b and tried to export MODEL_CONFIG_NAME=OA_SFT_Llama_30B_7, but I suppose I either need to edit the model config file to include an XOR version (a sketch of what I guess that edit might look like follows after the log below) or do something else, since I get the rather expected error:

open-assistant-inference-worker-1  | /bin/bash: /opt/miniconda/envs/text-generation/lib/libtinfo.so.6: no version information available (required by /bin/bash)
open-assistant-inference-worker-1  | Downloading model OpenAssistant/oasst-sft-7-llama-30b
open-assistant-inference-worker-1  | Traceback (most recent call last):
open-assistant-inference-worker-1  |   File "/worker/download_model.py", line 17, in <module>
open-assistant-inference-worker-1  |     transformers.LlamaTokenizer.from_pretrained(model_id)
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/transformers-4.27.0.dev0-py3.9.egg/transformers/utils/import_utils.py", line 1112, in __getattr__
open-assistant-inference-worker-1  |     raise AttributeError(f"module {self.__name__} has no attribute {name}")
open-assistant-inference-worker-1  | AttributeError: module transformers has no attribute LlamaTokenizer
open-assistant-inference-worker-1  | Starting worker server on port 8300
open-assistant-inference-worker-1  | Starting worker
open-assistant-inference-worker-1  | 2023-05-18T06:35:01.641548Z  INFO text_generation_launcher: Args { model_id: "OpenAssistant/oasst-sft-7-llama-30b", revision: None, sharded: None, num_shard: Some(1), quantize: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 4096, max_total_tokens: 8192, max_batch_size: 32, max_waiting_tokens: 20, port: 8300, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: Some("/data"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None }
open-assistant-inference-worker-1  | 2023-05-18T06:35:01.641844Z  INFO text_generation_launcher: Starting shard 0
open-assistant-inference-worker-1  | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
open-assistant-inference-worker-1  | 2023-05-18 06:35:02.671 | INFO     | __main__:main:25 - Inference protocol version: 1
open-assistant-inference-worker-1  | 2023-05-18 06:35:02.671 | WARNING  | __main__:main:28 - Model config: model_id='OpenAssistant/oasst-sft-7-llama-30b' max_input_length=1024 max_total_length=1792 quantized=False
open-assistant-inference-worker-1  | Traceback (most recent call last):
open-assistant-inference-server-1  | INFO:     Will watch for changes in these directories: ['/opt/inference/server']
open-assistant-inference-server-1  | INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
open-assistant-inference-server-1  | INFO:     Started reloader process [7] using WatchFiles
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
open-assistant-inference-worker-1  |     response.raise_for_status()
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
open-assistant-inference-worker-1  |     raise HTTPError(http_error_msg, response=self)
open-assistant-inference-worker-1  | requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b/resolve/main/tokenizer_config.json
open-assistant-inference-worker-1  | 
open-assistant-inference-worker-1  | The above exception was the direct cause of the following exception:
open-assistant-inference-worker-1  | 
open-assistant-inference-worker-1  | Traceback (most recent call last):
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/transformers/utils/hub.py", line 417, in cached_file
open-assistant-inference-worker-1  |     resolved_file = hf_hub_download(
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
open-assistant-inference-worker-1  |     return fn(*args, **kwargs)
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1195, in hf_hub_download
open-assistant-inference-worker-1  |     metadata = get_hf_file_metadata(
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
open-assistant-inference-worker-1  |     return fn(*args, **kwargs)
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1541, in get_hf_file_metadata
open-assistant-inference-worker-1  |     hf_raise_for_status(r)
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 291, in hf_raise_for_status
open-assistant-inference-worker-1  |     raise RepositoryNotFoundError(message, response) from e
open-assistant-inference-worker-1  | huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6465c717-62334eb312470f792fff2835)
open-assistant-inference-worker-1  | 
open-assistant-inference-worker-1  | Repository Not Found for url: https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b/resolve/main/tokenizer_config.json.
open-assistant-inference-worker-1  | Please make sure you specified the correct `repo_id` and `repo_type`.
open-assistant-inference-worker-1  | If you are trying to access a private or gated repo, make sure you are authenticated.
open-assistant-inference-worker-1  | Invalid username or password.
open-assistant-inference-worker-1  | 
open-assistant-inference-worker-1  | During handling of the above exception, another exception occurred:
open-assistant-inference-worker-1  | 
open-assistant-inference-worker-1  | Traceback (most recent call last):
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/runpy.py", line 196, in _run_module_as_main
open-assistant-inference-worker-1  |     return _run_code(code, main_globals, None,
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/runpy.py", line 86, in _run_code
open-assistant-inference-worker-1  |     exec(code, run_globals)
open-assistant-inference-worker-1  |   File "/worker/__main__.py", line 132, in <module>
open-assistant-inference-worker-1  |     main()
open-assistant-inference-worker-1  |   File "/worker/__main__.py", line 36, in main
open-assistant-inference-worker-1  |     tokenizer: transformers.PreTrainedTokenizer = transformers.AutoTokenizer.from_pretrained(model_config.model_id)
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 643, in from_pretrained
open-assistant-inference-worker-1  |     tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 487, in get_tokenizer_config
open-assistant-inference-worker-1  |     resolved_config_file = cached_file(
open-assistant-inference-worker-1  |   File "/opt/miniconda/envs/worker/lib/python3.10/site-packages/transformers/utils/hub.py", line 433, in cached_file
open-assistant-inference-worker-1  |     raise EnvironmentError(
open-assistant-inference-worker-1  | OSError: OpenAssistant/oasst-sft-7-llama-30b is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
open-assistant-inference-worker-1  | If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
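
My current guess at the fix is that the model config entry behind OA_SFT_Llama_30B_7 needs to point at the locally converted weights instead of the inaccessible Hub repo. The field names below come from the "Model config:" line in the log; the class definition, file layout, and path are only illustrative, since I have not located the exact config file yet.

```python
# Hypothetical sketch of the config change I have in mind -- not the actual
# OpenAssistant source. Field names mirror the "Model config:" line in the log.
from dataclasses import dataclass

@dataclass
class ModelConfig:
    model_id: str
    max_input_length: int = 1024
    max_total_length: int = 1792
    quantized: bool = False

MODEL_CONFIGS = {
    # Point the worker at the local, XOR-converted weights instead of
    # OpenAssistant/oasst-sft-7-llama-30b on the Hub (path mirrors where
    # I placed the converted weights in the repo's data directory).
    "OA_SFT_Llama_30B_7": ModelConfig(
        model_id="/data/models--OpenAssistant--oasst-sft-7-llama-30b",
    ),
}
```

Is that roughly the right direction, or is there a supported way to point the inference worker at locally converted weights?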
