BulkCalcResults / error.log
Uploaded by meg (HF staff) via huggingface_hub, commit 95c243d (verified)
Error executing job with overrides: ['backend.model=NousResearch/Hermes-3-Llama-3.1-8B', 'backend.processor=NousResearch/Hermes-3-Llama-3.1-8B']
Traceback (most recent call last):
  File "/optimum-benchmark/optimum_benchmark/cli.py", line 65, in benchmark_cli
    benchmark_report: BenchmarkReport = launch(experiment_config=experiment_config)
  File "/optimum-benchmark/optimum_benchmark/experiment.py", line 102, in launch
    raise error
  File "/optimum-benchmark/optimum_benchmark/experiment.py", line 90, in launch
    report = launcher.launch(run, experiment_config.benchmark, experiment_config.backend)
  File "/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 47, in launch
    while not process_context.join():
  File "/opt/conda/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 189, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 76, in _wrap
    fn(i, *args)
  File "/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 63, in entrypoint
    worker_output = worker(*worker_args)
  File "/optimum-benchmark/optimum_benchmark/experiment.py", line 55, in run
    backend: Backend = backend_factory(backend_config)
  File "/optimum-benchmark/optimum_benchmark/backends/pytorch/backend.py", line 81, in __init__
    self.load_model_with_no_weights()
  File "/optimum-benchmark/optimum_benchmark/backends/pytorch/backend.py", line 246, in load_model_with_no_weights
    self.load_model_from_pretrained()
  File "/optimum-benchmark/optimum_benchmark/backends/pytorch/backend.py", line 204, in load_model_from_pretrained
    self.pretrained_model = self.automodel_class.from_pretrained(
  File "/opt/conda/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3738, in from_pretrained
    state_dict = load_state_dict(resolved_archive_file)
  File "/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py", line 556, in load_state_dict
    return safe_load_file(checkpoint_file)
  File "/opt/conda/lib/python3.9/site-packages/safetensors/torch.py", line 315, in load_file
    result[k] = f.get_tensor(k)
  File "/opt/conda/lib/python3.9/site-packages/torch/utils/_device.py", line 79, in __torch_function__
    return func(*args, **kwargs)
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
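The root cause above is a CUDA out-of-memory error raised while safetensors materializes the checkpoint's tensors on the GPU during from_pretrained. A back-of-the-envelope estimate (a sketch only: the ~8.03B parameter count for Hermes-3-Llama-3.1-8B is an assumption, and activation/KV-cache/allocator overhead is ignored) shows why weights alone can exhaust a smaller GPU:

```python
# Rough GPU memory needed just for the weights of an ~8B-parameter model.
# NOTE: the 8.03B figure is an assumed approximation for Hermes-3-Llama-3.1-8B,
# not read from the checkpoint; real usage is higher due to allocator overhead.
params = 8_030_000_000
bytes_per_param = {"float32": 4, "float16/bfloat16": 2, "int8": 1}

for dtype, nbytes in bytes_per_param.items():
    gib = params * nbytes / 1024**3
    print(f"{dtype:>18}: ~{gib:.1f} GiB of weights")
```

So even in half precision the weights need roughly 15 GiB, which alone overflows many common GPUs. As the log itself suggests, rerunning with CUDA_LAUNCH_BLOCKING=1 (and HYDRA_FULL_ERROR=1 for the full Hydra trace) gives a more precise failure point; loading in a reduced dtype or with `device_map="auto"` (transformers + accelerate) to offload layers to CPU are common mitigations, though whether they fit this benchmark's configuration is not shown in the log.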