Oobabooga cannot determine model type

#2
by Dingus123 - opened

Fresh install of Oobabooga (on two different machines) cannot determine the model type. No matter what is chosen within the UI, it blanks out bits, groupsize, and model type, then crashes the UI. If run from the command line, it just states that the model type can't be determined and that --model_type= should be added to the command line, but the model type is unknown.

ERROR: Can't determine model type from model name. Please specify it manually using --model_type argument

Any ideas?
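For reference, the flag the error refers to is passed when launching the web UI. A hypothetical invocation is below; note that "mpt" was not an accepted --model_type value at the time, since Ooba only recognized llama, opt, and gptj:

    python server.py --model OccamRazor_mpt-7b-storywriter-4bit-128g --model_type mpt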

Same here, except my ooba does not crash.

Whether I try to load the model inside the web UI or from the command line, the result is the same.

Win 11

Same here. I suppose it's because it's not related to LLaMA or any other previously existing type of model. I'm sure they'll do something about it soon. The pace at which AI is moving is amazing.

I believe so, since the GitHub page says the only model_types it currently supports are LLaMA, OPT, and GPT-J. mpt-7b-storywriter-4bit-128g is MPT, which is not supported.

Same problem here.

There are additional arguments that need to be updated for this to work in Ooba, although I'm not sure whether quantizing it to 4-bit with group size 128 changes anything: https://www.youtube.com/watch?v=O9Y_ZdsuKWQ

It can be run in KoboldAI (4bit-fork). I don't use ooba, personally.

Can you provide some short instructions for setting up this model with KoboldAI? I've installed KoboldAI (with the link provided) and loaded the model, but the responses are completely unusable. I'm assuming there are some settings I need to adjust.

Same issue here.

Same issue here.

Yes, there was an issue with the init device. I changed the model and removed the unused Python files; it is now working properly for me. Let me know if it also works for you.

It's definitely loading faster for me with the updated files, but the responses are still incoherent.

Prompt: Can you write a short story about a boy that takes a flight to the moon?
Response: I'm not sure when I'll be back," he said. "We're going on another trip together." He was still in his shirt and jacket but had changed his mind again: this time it's okay for him to go there! This is what we do with our big-eyed friend here—he has been working hard all day long!"

Yes, there was an issue with the init device. I changed the model and removed the unused Python files; it is now working properly for me. Let me know if it also works for you.

It seems we need those files:
models/OccamRazor_mpt-7b-storywriter-4bit-128g does not appear to have a file named configuration_mpt.py.
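If the local folder is simply missing that file, one way to fetch it is with huggingface_hub. A minimal sketch, assuming configuration_mpt.py is hosted either in this repo or in the upstream mosaicml/mpt-7b-storywriter repo (adjust repo_id and local_dir to your setup):

    from huggingface_hub import hf_hub_download

    # Download the custom MPT config code next to the quantized weights.
    # The repo_id and target folder here are assumptions; point them at
    # wherever configuration_mpt.py actually lives.
    hf_hub_download(
        repo_id="OccamRazor/mpt-7b-storywriter-4bit-128g",
        filename="configuration_mpt.py",
        local_dir="models/OccamRazor_mpt-7b-storywriter-4bit-128g",
    )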

It can be run in KoboldAI (4bit-fork). I don't use ooba, personally.

I tried that fork; it does not work. It says the model is not recognized or that config.json is missing.

INFO | __main__:do_connect:3545 - Client connected!
Exception in thread Thread-12:
Traceback (most recent call last):
  File "B:\python\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "B:\python\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "B:\python\lib\site-packages\socketio\server.py", line 731, in _handle_event_internal
    r = server._trigger_event(data[0], namespace, sid, *data[1:])
  File "B:\python\lib\site-packages\socketio\server.py", line 756, in _trigger_event
    return self.handlers[namespace][event](*args)
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 282, in _handler
    return self._handle_event(handler, message, namespace, sid,
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 828, in _handle_event
    ret = handler(*args)
  File "aiserver.py", line 469, in g
    return f(*a, **k)
  File "aiserver.py", line 3955, in get_message
    get_model_info(msg['data'], directory=msg['path'])
  File "aiserver.py", line 1550, in get_model_info
    layer_count = get_layer_count(model, directory=directory)
  File "aiserver.py", line 1596, in get_layer_count
    model_config = AutoConfig.from_pretrained(model.replace('/', '_'), revision=args.revision, cache_dir="cache")
  File "B:\python\lib\site-packages\transformers\models\auto\configuration_auto.py", line 779, in from_pretrained
    raise ValueError(
ValueError: Loading D:\KoboldAI\models\OccamRazor_mpt-7b-storywriter-4bit-128g requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.
WARNING | __main__:load_model:2259 - No model type detected, assuming Neo (If this is a GPT2 model use the other menu option or --model GPT2Custom)
INIT | Searching | GPU support
INIT | Found | GPU support
INIT | Starting | Transformers
WARNING | __main__:device_config:840 - --breakmodel_gpulayers is malformatted. Please use the --help option to see correct usage of --breakmodel_gpulayers. Defaulting to all layers on device 0.
INIT | Info | Final device configuration:
DEVICE ID | LAYERS | DEVICE NAME
Exception in thread Thread-13:
Traceback (most recent call last):
  File "B:\python\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "B:\python\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "B:\python\lib\site-packages\socketio\server.py", line 731, in _handle_event_internal
    r = server._trigger_event(data[0], namespace, sid, *data[1:])
  File "B:\python\lib\site-packages\socketio\server.py", line 756, in _trigger_event
    return self.handlers[namespace][event](*args)
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 282, in _handler
    return self._handle_event(handler, message, namespace, sid,
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 828, in _handle_event
    ret = handler(*args)
  File "aiserver.py", line 469, in g
    return f(*a, **k)
  File "aiserver.py", line 3918, in get_message
    load_model(use_gpu=msg['use_gpu'], gpu_layers=msg['gpu_layers'], disk_layers=msg['disk_layers'], online_model=msg['online_model'])
  File "aiserver.py", line 2526, in load_model
    device_config(model_config)
  File "aiserver.py", line 907, in device_config
    device_list(n_layers, primary=breakmodel.primary_device)
  File "aiserver.py", line 805, in device_list
    print(f"{row_color}{colors.YELLOW + '->' + row_color if i == selected else ' '} {'(primary)' if i == primary else ' '*9} {i:3} {sep_color}|{row_color} {gpu_blocks[i]:3} {sep_color}|{row_color} {name}{colors.END}")
TypeError: unsupported format string passed to NoneType.__format__
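That closing TypeError is what Python raises when a format spec is applied to None; since {gpu_blocks[i]:3} is the only formatted field in that print that could plausibly be None, this suggests the device-layer table never got filled in after the malformed --breakmodel_gpulayers fallback. A minimal reproduction:

    >>> "{:3}".format(None)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported format string passed to NoneType.__format__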

Yes, there was an issue with the init device. I changed the model and removed the unused Python files; it is now working properly for me. Let me know if it also works for you.

It seems we need those files:
models/OccamRazor_mpt-7b-storywriter-4bit-128g does not appear to have a file named configuration_mpt.py.

That is a separate issue addressed here: https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g/discussions/5

Can you provide some short instructions for setting up this model with KoboldAI? I've installed KoboldAI (with the link provided) and loaded the model, but the responses are completely unusable. I'm assuming there are some settings I need to adjust.

Also a separate issue addressed here: https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g/discussions/4


The traceback you posted is also a separate issue, addressed here: https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g/discussions/3

You need to set the option trust_remote_code=True.
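In plain transformers that flag goes on the from_pretrained call; the traceback shows KoboldAI making that call in aiserver.py's get_layer_count, so the fork has to pass it there unless it exposes a setting for it. A minimal sketch of the call itself:

    from transformers import AutoConfig

    # trust_remote_code=True lets transformers execute configuration_mpt.py
    # shipped with the model; read that code before enabling this.
    config = AutoConfig.from_pretrained(
        "models/OccamRazor_mpt-7b-storywriter-4bit-128g",
        trust_remote_code=True,
    )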

We shouldn't mix every issue in one thread. Let's get back on topic.

How do you set trust_remote_code=True in KoboldAI?

The best way to figure that out would be to ask this in the right thread, as already mentioned and linked before ;)
