Dataset Preview
The full dataset viewer is not available because dataset generation failed; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 620, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1886, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 639, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


Columns:
  config (dict): name (string), backend (dict), scenario (dict), launcher (dict), environment (dict), print_report (bool), log_report (bool)
  report (dict): overall (dict), warmup (dict), train (dict)
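Given this schema, one way to sidestep the zero-field-struct problem when publishing such benchmark rows is to serialize each top-level dict to a JSON string, so every column has a plain string type that Parquet (and the viewer) can always handle. A hypothetical sketch with an abbreviated row (the actual rows carry the full config and report shown below):

```python
import json

# Hypothetical benchmark row mirroring the schema above; empty dicts
# such as model_kwargs are what break Parquet struct inference.
row = {
    "config": {
        "name": "cuda_training_transformers_fill-mask_google-bert/bert-base-uncased",
        "backend": {"name": "pytorch", "model_kwargs": {}},
        "print_report": True,
        "log_report": True,
    },
    "report": {"overall": {"latency": {"unit": "s"}}},
}

# Store each top-level column as a JSON string instead of a nested struct.
flat = {key: json.dumps(value) for key, value in row.items()}

# The nested structure survives a round-trip.
restored = {key: json.loads(value) for key, value in flat.items()}
```

The trade-off is that consumers must call `json.loads` on each cell instead of getting typed nested columns.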
{ "name": "cuda_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 
66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1304.457216, "max_global_vram": 3176.660992, "max_process_vram": 0, "max_reserved": 2520.776704, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "values": [ 0.3364556884765625, 0.04540927886962891, 0.04431769561767578, 0.044455936431884766, 0.04404633712768555 ], "count": 5, "total": 0.5146849365234375, "mean": 0.1029369873046875, "p50": 0.044455936431884766, "p90": 0.2200371246337891, "p95": 0.27824640655517574, "p99": 0.32481383209228515, "stdev": 0.11676025383848125, "stdev_": 113.42886254566366 }, "throughput": { "unit": "samples/s", "value": 97.14681050845776 }, "energy": { "unit": "kWh", "cpu": 0.000008402035351388153, "ram": 0.000004584910515179145, "gpu": 0.00001437112260800101, "total": 0.00002735806847456831 }, "efficiency": { "unit": "samples/kWh", "value": 365522.8807288009 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 1304.457216, "max_global_vram": 3176.660992, "max_process_vram": 0, "max_reserved": 2520.776704, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "values": [ 0.3364556884765625, 0.04540927886962891 ], "count": 2, "total": 0.3818649673461914, "mean": 0.1909324836730957, "p50": 0.1909324836730957, "p90": 0.3073510475158691, "p95": 0.3219033679962158, "p99": 0.33354522438049317, "stdev": 0.1455232048034668, "stdev_": 76.21710146117609 }, "throughput": { "unit": "samples/s", "value": 20.94981389782047 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1304.457216, "max_global_vram": 3176.660992, "max_process_vram": 0, "max_reserved": 2520.776704, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "values": [ 0.04431769561767578, 0.044455936431884766, 0.04404633712768555 ], "count": 3, "total": 0.1328199691772461, "mean": 0.04427332305908203, "p50": 0.04431769561767578, "p90": 0.04442828826904297, "p95": 0.044442112350463865, "p99": 0.044453171615600584, "stdev": 0.000170136397178338, "stdev_": 0.3842864854560245 }, 
"throughput": { "unit": "samples/s", "value": 135.52179022101183 }, "energy": null, "efficiency": null } }
{ "name": "cuda_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "model": "google-bert/bert-base-uncased", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": 
"Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1063.8336, "max_global_vram": 3169.32096, "max_process_vram": 0, "max_reserved": 2520.776704, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 5, "total": 0.7448790740966797, "mean": 0.14897581481933594, "stdev": 0.2054173633207176, "p50": 0.04632883071899414, "p90": 0.35471870422363283, "p95": 0.4572641067504882, "p99": 0.5393004287719726, "values": [ 0.5598095092773437, 0.04708249664306641, 0.04632883071899414, 0.04576665496826172, 0.04589158248901367 ] }, "throughput": { "unit": "samples/s", "value": 67.1249894630687 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1063.8336, "max_global_vram": 3169.32096, "max_process_vram": 0, "max_reserved": 2520.776704, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 2, "total": 0.6068920059204101, "mean": 0.3034460029602051, "stdev": 0.25636350631713867, "p50": 0.3034460029602051, "p90": 0.508536808013916, "p95": 0.5341731586456299, "p99": 0.554682239151001, "values": [ 0.5598095092773437, 0.04708249664306641 ] }, "throughput": { "unit": "samples/s", "value": 13.181916917602548 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1063.8336, "max_global_vram": 3169.32096, "max_process_vram": 0, "max_reserved": 2520.776704, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 3, "total": 0.13798706817626955, "mean": 0.045995689392089846, "stdev": 0.00024102431292157434, "p50": 0.04589158248901367, "p90": 0.04624138107299805, "p95": 0.0462851058959961, "p99": 0.04632008575439454, "values": [ 0.04632883071899414, 0.04576665496826172, 0.04589158248901367 ] }, "throughput": { "unit": "samples/s", "value": 130.4470066499722 }, "energy": null, "efficiency": null } }
{ "name": "cuda_training_transformers_fill-mask_hf-internal-testing/tiny-random-BertModel", "backend": { "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "model": "hf-internal-testing/tiny-random-BertModel", "processor": "hf-internal-testing/tiny-random-BertModel", "task": "fill-mask", "library": "transformers", "model_type": "bert", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", 
"cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1294.82752, "max_global_vram": 681.050112, "max_process_vram": 0, "max_reserved": 25.165824, "max_allocated": 19.652608 }, "latency": { "unit": "s", "values": [ 0.3042181091308594, 0.014543871879577636, 0.013530112266540528, 0.013849599838256836, 0.013922304153442382 ], "count": 5, "total": 0.36006399726867677, "mean": 0.07201279945373536, "p50": 0.013922304153442382, "p90": 0.1883484142303467, "p95": 0.246283261680603, "p99": 0.2926311396408081, "stdev": 0.1161031193492677, "stdev_": 161.22567131119266 }, "throughput": { "unit": "samples/s", "value": 138.86420297303542 }, "energy": { "unit": "kWh", "cpu": 0.000006509765044444057, "ram": 0.0000035414303178393125, "gpu": 0.000008136673176000137, "total": 0.000018187868538283507 }, "efficiency": { "unit": "samples/kWh", "value": 549817.0376012492 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 1294.82752, "max_global_vram": 681.050112, "max_process_vram": 0, "max_reserved": 25.165824, "max_allocated": 19.652608 }, "latency": { "unit": "s", "values": [ 0.3042181091308594, 0.014543871879577636 ], "count": 2, "total": 0.318761981010437, "mean": 0.1593809905052185, "p50": 0.1593809905052185, "p90": 0.2752506854057312, "p95": 0.2897343972682953, "p99": 0.3013213667583466, "stdev": 0.1448371186256409, "stdev_": 90.87477632465747 }, "throughput": { "unit": "samples/s", "value": 25.097095879003405 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1294.82752, "max_global_vram": 681.050112, "max_process_vram": 0, "max_reserved": 25.165824, "max_allocated": 19.652608 }, "latency": { "unit": "s", "values": [ 0.013530112266540528, 0.013849599838256836, 0.013922304153442382 ], "count": 3, "total": 0.041302016258239746, "mean": 0.013767338752746583, "p50": 0.013849599838256836, "p90": 0.013907763290405273, "p95": 0.013915033721923827, "p99": 0.013920850067138672, "stdev": 0.00017035019553829535, "stdev_": 1.237350214138593 }, 
"throughput": { "unit": "samples/s", "value": 435.8140747283494 }, "energy": null, "efficiency": null } }
{ "name": "cuda_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, 
"cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1727.619072, "max_global_vram": 2618.81856, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "values": [ 0.3276912536621094, 0.039946239471435545, 0.03980799865722656, 0.040357887268066404, 0.040559616088867184 ], "count": 5, "total": 0.4883629951477051, "mean": 0.09767259902954102, "p50": 0.040357887268066404, "p90": 0.21283859863281251, "p95": 0.2702649261474609, "p99": 0.31620598815917966, "stdev": 0.11500964746293131, "stdev_": 117.75016596839684 }, "throughput": { "unit": "samples/s", "value": 102.38285966953235 }, "energy": { "unit": "kWh", "cpu": 0.000009605099461805368, "ram": 0.000005244525621049992, "gpu": 0.00001969779353600161, "total": 0.00003454741861885697 }, "efficiency": { "unit": "samples/kWh", "value": 289457.22719039023 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 1727.619072, "max_global_vram": 2618.81856, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "values": [ 0.3276912536621094, 0.039946239471435545 ], "count": 2, "total": 0.36763749313354493, "mean": 0.18381874656677247, "p50": 0.18381874656677247, "p90": 0.298916752243042, "p95": 0.31330400295257566, "p99": 0.3248138035202026, "stdev": 0.14387250709533692, "stdev_": 78.26868030735645 }, "throughput": { "unit": "samples/s", "value": 21.760566181137534 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1727.619072, "max_global_vram": 2618.81856, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "values": [ 0.03980799865722656, 0.040357887268066404, 0.040559616088867184 ], "count": 3, "total": 0.12072550201416016, "mean": 0.040241834004720055, "p50": 0.040357887268066404, "p90": 0.04051927032470703, "p95": 0.0405394432067871, "p99": 0.040555581512451165, "stdev": 0.0003176302471286543, "stdev_": 0.7893036065190238 }, "throughput": { "unit": "samples/s", "value": 149.09857237859106 }, "energy": null, "efficiency": null } }
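The summary statistics in each `latency` object are derived from its `values` array. A minimal stdlib sketch that reproduces them from the "overall" section of the record above: the mean is `total / count`, percentiles use linear interpolation over the sorted samples (numpy's default method), `stdev` is the population standard deviation, and `stdev_` is the coefficient of variation in percent. The `percentile` helper is illustrative, not part of optimum-benchmark's API.

```python
import math

# Latency samples (seconds) from the "overall" section of the record above.
values = [
    0.3276912536621094,
    0.039946239471435545,
    0.03980799865722656,
    0.040357887268066404,
    0.040559616088867184,
]

def percentile(samples, q):
    """Linear-interpolation percentile over sorted samples (numpy's default method)."""
    s = sorted(samples)
    k = (len(s) - 1) * q / 100
    lo = math.floor(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (k - lo) * (s[hi] - s[lo])

mean = sum(values) / len(values)                                      # "mean"
p50 = percentile(values, 50)                                          # "p50"
p90 = percentile(values, 90)                                          # "p90"
stdev = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values)) # population "stdev"
cv_percent = stdev / mean * 100                                       # "stdev_" (CV in %)

print(mean, p50, p90, stdev, cv_percent)
```

Note how the single slow first step (0.327 s vs. ~0.040 s for the rest) inflates the mean and pushes `stdev_` above 100%, which is why the "train" row, computed only over the post-warmup steps, is the more representative figure.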
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_image-classification_google/vit-base-patch16-224
{ "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1727.619072, "max_global_vram": 2618.81856, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "values": [ 0.3276912536621094, 0.039946239471435545, 0.03980799865722656, 0.040357887268066404, 0.040559616088867184 ], "count": 5, "total": 0.4883629951477051, "mean": 0.09767259902954102, "p50": 0.040357887268066404, "p90": 0.21283859863281251, "p95": 0.2702649261474609, "p99": 0.31620598815917966, "stdev": 0.11500964746293131, "stdev_": 117.75016596839684 }, "throughput": { "unit": "samples/s", "value": 102.38285966953235 }, "energy": { "unit": "kWh", "cpu": 0.000009605099461805368, "ram": 0.000005244525621049992, "gpu": 0.00001969779353600161, "total": 0.00003454741861885697 }, "efficiency": { "unit": "samples/kWh", "value": 289457.22719039023 } }
{ "memory": { "unit": "MB", "max_ram": 1727.619072, "max_global_vram": 2618.81856, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "values": [ 0.3276912536621094, 0.039946239471435545 ], "count": 2, "total": 0.36763749313354493, "mean": 0.18381874656677247, "p50": 0.18381874656677247, "p90": 0.298916752243042, "p95": 0.31330400295257566, "p99": 0.3248138035202026, "stdev": 0.14387250709533692, "stdev_": 78.26868030735645 }, "throughput": { "unit": "samples/s", "value": 21.760566181137534 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1727.619072, "max_global_vram": 2618.81856, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "values": [ 0.03980799865722656, 0.040357887268066404, 0.040559616088867184 ], "count": 3, "total": 0.12072550201416016, "mean": 0.040241834004720055, "p50": 0.040357887268066404, "p90": 0.04051927032470703, "p95": 0.0405394432067871, "p99": 0.040555581512451165, "stdev": 0.0003176302471286543, "stdev_": 0.7893036065190238 }, "throughput": { "unit": "samples/s", "value": 149.09857237859106 }, "energy": null, "efficiency": null }
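The three rows above (overall, warmup, train) report the same five step latencies: with the scenario configured as `max_steps: 5` and `warmup_steps: 2`, the "warmup" row covers the first two measurements and the "train" row the remaining three. A small sketch of that slicing, using the values from the "overall" row:

```python
# The five step latencies from the "overall" row; warmup/train are slices of them.
overall = [
    0.3276912536621094,
    0.039946239471435545,
    0.03980799865722656,
    0.040357887268066404,
    0.040559616088867184,
]
warmup_steps = 2  # from the scenario config

warmup = overall[:warmup_steps]  # first two steps, including one-time CUDA warm-up cost
train = overall[warmup_steps:]   # steady-state steps behind the "train" row

print(sum(warmup), sum(train))
```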
{ "name": "cuda_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "model": "google/vit-base-patch16-224", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1446.354944, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1756.126208 }, "latency": { "unit": "s", "count": 5, "total": 0.48406525039672854, "mean": 0.09681305007934571, "stdev": 0.1110534407118009, "p50": 0.04146995162963867, "p90": 0.20794796142578126, "p95": 0.26343380432128904, "p99": 0.3078224786376953, "values": [ 0.3189196472167969, 0.04103168106079102, 0.041490432739257815, 0.04146995162963867, 0.04115353775024414 ] }, "throughput": { "unit": "samples/s", "value": 103.29185984538483 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1446.354944, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1756.126208 }, "latency": { "unit": "s", "count": 2, "total": 0.3599513282775879, "mean": 0.17997566413879396, "stdev": 0.13894398307800293, "p50": 0.17997566413879396, "p90": 0.2911308506011963, "p95": 0.30502524890899657, "p99": 0.31614076755523685, "values": [ 0.3189196472167969, 0.04103168106079102 ] }, "throughput": { "unit": "samples/s", "value": 22.225227055782792 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1446.354944, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1756.126208 }, "latency": { "unit": "s", "count": 3, "total": 0.12411392211914063, "mean": 0.041371307373046874, "stdev": 0.00015421321911462263, "p50": 0.04146995162963867, "p90": 0.04148633651733399, "p95": 0.0414883846282959, "p99": 0.041490023117065435, "values": [ 0.041490432739257815, 0.04146995162963867, 0.04115353775024414 ] }, "throughput": { "unit": "samples/s", "value": 145.02804917180256 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cuda_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1342.947328, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "values": [ 0.3525038146972656, 0.047110145568847656, 0.04701696014404297, 0.04772351837158203, 0.046860286712646484 ], "count": 5, "total": 0.5412147254943848, "mean": 0.10824294509887696, "p50": 0.047110145568847656, "p90": 0.2305916961669922, "p95": 0.29154775543212885, "p99": 0.34031260284423825, "stdev": 0.12213078611962705, "stdev_": 112.83025051476929 }, "throughput": { "unit": "samples/s", "value": 92.38477381473014 }, "energy": { "unit": "kWh", "cpu": 0.00000866679555902768, "ram": 0.000004729876788655486, "gpu": 0.000013611122000000558, "total": 0.000027007794347683724 }, "efficiency": { "unit": "samples/kWh", "value": 370263.4828770322 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 1342.947328, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "values": [ 0.3525038146972656, 0.047110145568847656 ], "count": 2, "total": 0.3996139602661133, "mean": 0.19980698013305664, "p50": 0.19980698013305664, "p90": 0.3219644477844238, "p95": 0.33723413124084467, "p99": 0.34944987800598143, "stdev": 0.152696834564209, "stdev_": 76.42217226971961 }, "throughput": { "unit": "samples/s", "value": 20.019320633024414 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1342.947328, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "values": [ 0.04701696014404297, 0.04772351837158203, 0.046860286712646484 ], "count": 3, "total": 0.14160076522827147, "mean": 0.04720025507609049, "p50": 0.04701696014404297, "p90": 0.047582206726074217, "p95": 0.04765286254882812, "p99": 0.047709387207031245, "stdev": 0.00037549078846480644, "stdev_": 0.7955270323422744 }, "throughput": { "unit": "samples/s", "value": 127.11795710272185 }, "energy": null, "efficiency": null } }
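The energy fields in these records are internally consistent: `total` is the sum of the `cpu`, `ram`, and `gpu` components, and the `efficiency` value (samples/kWh) matches `samples / total` if one assumes samples = `max_steps` × `per_device_train_batch_size` = 5 × 2 = 10. That sample-count formula is inferred from the numbers, not taken from optimum-benchmark's documentation. A quick check against the roberta-base record above:

```python
# Energy components (kWh) from the "overall" section of the record above.
cpu = 0.00000866679555902768
ram = 0.000004729876788655486
gpu = 0.000013611122000000558
total = 0.000027007794347683724  # reported "total"

# Assumption (inferred, not documented): samples = max_steps * per_device_train_batch_size.
samples = 5 * 2

assert abs((cpu + ram + gpu) - total) < 1e-12  # components add up to the reported total
efficiency = samples / total  # samples/kWh
print(efficiency)
```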
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_multiple-choice_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1342.947328, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "values": [ 0.3525038146972656, 0.047110145568847656, 0.04701696014404297, 0.04772351837158203, 0.046860286712646484 ], "count": 5, "total": 0.5412147254943848, "mean": 0.10824294509887696, "p50": 0.047110145568847656, "p90": 0.2305916961669922, "p95": 0.29154775543212885, "p99": 0.34031260284423825, "stdev": 0.12213078611962705, "stdev_": 112.83025051476929 }, "throughput": { "unit": "samples/s", "value": 92.38477381473014 }, "energy": { "unit": "kWh", "cpu": 0.00000866679555902768, "ram": 0.000004729876788655486, "gpu": 0.000013611122000000558, "total": 0.000027007794347683724 }, "efficiency": { "unit": "samples/kWh", "value": 370263.4828770322 } }
{ "memory": { "unit": "MB", "max_ram": 1342.947328, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "values": [ 0.3525038146972656, 0.047110145568847656 ], "count": 2, "total": 0.3996139602661133, "mean": 0.19980698013305664, "p50": 0.19980698013305664, "p90": 0.3219644477844238, "p95": 0.33723413124084467, "p99": 0.34944987800598143, "stdev": 0.152696834564209, "stdev_": 76.42217226971961 }, "throughput": { "unit": "samples/s", "value": 20.019320633024414 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1342.947328, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "values": [ 0.04701696014404297, 0.04772351837158203, 0.046860286712646484 ], "count": 3, "total": 0.14160076522827147, "mean": 0.04720025507609049, "p50": 0.04701696014404297, "p90": 0.047582206726074217, "p95": 0.04765286254882812, "p99": 0.047709387207031245, "stdev": 0.00037549078846480644, "stdev_": 0.7955270323422744 }, "throughput": { "unit": "samples/s", "value": 127.11795710272185 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1093.496832, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 5, "total": 0.8026234703063965, "mean": 0.16052469406127928, "stdev": 0.22240891148008993, "p50": 0.04907724761962891, "p90": 0.38326721343994147, "p95": 0.49430444412231433, "p99": 0.5831342286682129, "values": [ 0.6053416748046875, 0.05015552139282226, 0.04897484970092773, 0.04907417678833008, 0.04907724761962891 ] }, "throughput": { "unit": "samples/s", "value": 62.295711313940046 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1093.496832, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 2, "total": 0.6554971961975098, "mean": 0.3277485980987549, "stdev": 0.27759307670593264, "p50": 0.3277485980987549, "p90": 0.549823059463501, "p95": 0.5775823671340942, "p99": 0.5997898132705688, "values": [ 0.6053416748046875, 0.05015552139282226 ] }, "throughput": { "unit": "samples/s", "value": 12.204476306546239 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1093.496832, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 3, "total": 0.14712627410888673, "mean": 0.049042091369628914, "stdev": 0.000047563564546161045, "p50": 0.04907417678833008, "p90": 0.04907663345336914, "p95": 0.049076940536499025, "p99": 0.04907718620300293, "values": [ 0.04897484970092773, 0.04907417678833008, 0.04907724761962891 ] }, "throughput": { "unit": "samples/s", "value": 122.34388527149387 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cuda_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1329.053696, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "values": [ 0.34298776245117185, 0.04679270553588867, 0.04601036834716797, 0.0463185920715332, 0.04643635177612305 ], "count": 5, "total": 0.5285457801818848, "mean": 0.10570915603637696, "p50": 0.04643635177612305, "p90": 0.2245097396850586, "p95": 0.28374875106811515, "p99": 0.3311399601745605, "stdev": 0.1186395674859121, "stdev_": 112.23206383852454 }, "throughput": { "unit": "samples/s", "value": 94.59918492357247 }, "energy": { "unit": "kWh", "cpu": 0.000008532537268055446, "ram": 0.0000046570323626979856, "gpu": 0.00001633917973799942, "total": 0.00002952874936875285 }, "efficiency": { "unit": "samples/kWh", "value": 338653.01490153663 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 1329.053696, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "values": [ 0.34298776245117185, 0.04679270553588867 ], "count": 2, "total": 0.3897804679870605, "mean": 0.19489023399353025, "p50": 0.19489023399353025, "p90": 0.3133682567596435, "p95": 0.32817800960540766, "p99": 0.340025811882019, "stdev": 0.14809752845764157, "stdev_": 75.9902255864488 }, "throughput": { "unit": "samples/s", "value": 20.524373736104128 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1329.053696, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "values": [ 0.04601036834716797, 0.0463185920715332, 0.04643635177612305 ], "count": 3, "total": 0.13876531219482421, "mean": 0.046255104064941405, "p50": 0.0463185920715332, "p90": 0.046412799835205076, "p95": 0.046424575805664066, "p99": 0.046433996582031255, "stdev": 0.0001796079353700373, "stdev_": 0.3882986299584813 }, "throughput": { "unit": "samples/s", "value": 129.71541457513746 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_text-classification_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1329.053696, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "values": [ 0.34298776245117185, 0.04679270553588867, 0.04601036834716797, 0.0463185920715332, 0.04643635177612305 ], "count": 5, "total": 0.5285457801818848, "mean": 0.10570915603637696, "p50": 0.04643635177612305, "p90": 0.2245097396850586, "p95": 0.28374875106811515, "p99": 0.3311399601745605, "stdev": 0.1186395674859121, "stdev_": 112.23206383852454 }, "throughput": { "unit": "samples/s", "value": 94.59918492357247 }, "energy": { "unit": "kWh", "cpu": 0.000008532537268055446, "ram": 0.0000046570323626979856, "gpu": 0.00001633917973799942, "total": 0.00002952874936875285 }, "efficiency": { "unit": "samples/kWh", "value": 338653.01490153663 } }
{ "memory": { "unit": "MB", "max_ram": 1329.053696, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "values": [ 0.34298776245117185, 0.04679270553588867 ], "count": 2, "total": 0.3897804679870605, "mean": 0.19489023399353025, "p50": 0.19489023399353025, "p90": 0.3133682567596435, "p95": 0.32817800960540766, "p99": 0.340025811882019, "stdev": 0.14809752845764157, "stdev_": 75.9902255864488 }, "throughput": { "unit": "samples/s", "value": 20.524373736104128 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1329.053696, "max_global_vram": 3384.27904, "max_process_vram": 0, "max_reserved": 2728.394752, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "values": [ 0.04601036834716797, 0.0463185920715332, 0.04643635177612305 ], "count": 3, "total": 0.13876531219482421, "mean": 0.046255104064941405, "p50": 0.0463185920715332, "p90": 0.046412799835205076, "p95": 0.046424575805664066, "p99": 0.046433996582031255, "stdev": 0.0001796079353700373, "stdev_": 0.3882986299584813 }, "throughput": { "unit": "samples/s", "value": 129.71541457513746 }, "energy": null, "efficiency": null }
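The latency blocks in these rows carry both the raw per-step `values` and derived summary statistics (`total`, `mean`, `p50`/`p90`/`p95`/`p99`, `stdev`, `stdev_`). The derived figures can be reproduced from the raw samples alone; a minimal stdlib-only sketch, assuming linear-interpolation percentiles (NumPy's default method) and population standard deviation (ddof=0), applied to the three `train`-phase samples from the row above:

```python
import math

def percentile(sorted_vals, q):
    """Percentile with linear interpolation between closest ranks
    (NumPy's default 'linear' method)."""
    pos = (q / 100) * (len(sorted_vals) - 1)
    lo, hi = math.floor(pos), math.ceil(pos)
    return sorted_vals[lo] + (pos - lo) * (sorted_vals[hi] - sorted_vals[lo])

def summarize(values):
    vals = sorted(values)
    n = len(vals)
    total = sum(vals)
    mean = total / n
    var = sum((v - mean) ** 2 for v in vals) / n  # population variance (ddof=0)
    stdev = math.sqrt(var)
    return {
        "count": n,
        "total": total,
        "mean": mean,
        "p50": percentile(vals, 50),
        "p90": percentile(vals, 90),
        "stdev": stdev,
        "stdev_": 100 * stdev / mean,  # stdev as a percentage of the mean
    }

# Raw per-step train latencies (seconds) from the row above
train_latencies = [0.04601036834716797, 0.0463185920715332, 0.04643635177612305]
stats = summarize(train_latencies)
```

Running this reproduces the row's reported `mean`, `p50`, `p90`, `stdev`, and `stdev_` to floating-point precision, which confirms the percentile method and ddof assumption for these reports.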
{ "name": "cuda_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": 
"Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1080.1152, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 5, "total": 0.778464241027832, "mean": 0.1556928482055664, "stdev": 0.21129125718980005, "p50": 0.05020159912109375, "p90": 0.36730080566406254, "p95": 0.4727875488281249, "p99": 0.557176943359375, "values": [ 0.5782742919921875, 0.050840576171875, 0.04969574356079102, 0.049452030181884765, 0.05020159912109375 ] }, "throughput": { "unit": "samples/s", "value": 64.22902602948513 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1080.1152, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 2, "total": 0.6291148681640625, "mean": 0.31455743408203124, "stdev": 0.26371685791015625, "p50": 0.31455743408203124, "p90": 0.5255309204101563, "p95": 0.5519026062011718, "p99": 0.5729999548339844, "values": [ 0.5782742919921875, 0.050840576171875 ] }, "throughput": { "unit": "samples/s", "value": 12.716278703357135 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1080.1152, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 3, "total": 0.14934937286376954, "mean": 0.049783124287923176, "stdev": 0.0003121857804388603, "p50": 0.04969574356079102, "p90": 0.05010042800903321, "p95": 0.050151013565063476, "p99": 0.0501914820098877, "values": [ 0.04969574356079102, 0.049452030181884765, 0.05020159912109375 ] }, "throughput": { "unit": "samples/s", "value": 120.52276922795568 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cuda_training_transformers_text-generation_hf-internal-testing/tiny-random-LlamaForCausalLM", "backend": { "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "model": "hf-internal-testing/tiny-random-LlamaForCausalLM", "processor": "hf-internal-testing/tiny-random-LlamaForCausalLM", "task": "text-generation", "library": "transformers", "model_type": "llama", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, 
"environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1392.361472, "max_global_vram": 699.92448, "max_process_vram": 0, "max_reserved": 44.040192, "max_allocated": 42.173952 }, "latency": { "unit": "s", "values": [ 0.2745579528808594, 0.009665535926818849, 0.00901632022857666, 0.008461312294006347, 0.008477696418762207 ], "count": 5, "total": 0.3101788177490235, "mean": 0.0620357635498047, "p50": 0.00901632022857666, "p90": 0.16860098609924318, "p95": 0.22157946949005122, "p99": 0.26396225620269775, "stdev": 0.10626200774652632, "stdev_": 171.29152873441316 }, "throughput": { "unit": "samples/s", "value": 161.19733888616707 }, "energy": { "unit": "kWh", "cpu": 0.000006200777884723231, "ram": 0.0000033791103984226146, "gpu": 0.000008102228704000031, "total": 0.000017682116987145876 }, "efficiency": { "unit": "samples/kWh", "value": 565543.142106206 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 1392.361472, "max_global_vram": 699.92448, "max_process_vram": 0, "max_reserved": 44.040192, "max_allocated": 42.173952 }, "latency": { "unit": "s", "values": [ 0.2745579528808594, 0.009665535926818849 ], "count": 2, "total": 0.2842234888076782, "mean": 0.1421117444038391, "p50": 0.1421117444038391, "p90": 0.24806871118545534, "p95": 0.2613133320331573, "p99": 0.27190902871131895, "stdev": 0.13244620847702027, "stdev_": 93.19863677181228 }, "throughput": { "unit": "samples/s", "value": 28.14686440434645 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1392.361472, "max_global_vram": 699.92448, "max_process_vram": 0, "max_reserved": 44.040192, "max_allocated": 42.173952 }, "latency": { "unit": "s", "values": [ 0.00901632022857666, 0.008461312294006347, 0.008477696418762207 ], "count": 3, "total": 0.025955328941345217, "mean": 0.008651776313781738, "p50": 0.008477696418762207, "p90": 0.00890859546661377, "p95": 0.008962457847595215, "p99": 0.00900554775238037, "stdev": 0.0002578582417356606, "stdev_": 2.980408096368702 }, 
"throughput": { "unit": "samples/s", "value": 693.4992055264277 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_text-generation_hf-internal-testing/tiny-random-LlamaForCausalLM
{ "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "model": "hf-internal-testing/tiny-random-LlamaForCausalLM", "processor": "hf-internal-testing/tiny-random-LlamaForCausalLM", "task": "text-generation", "library": "transformers", "model_type": "llama", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1392.361472, "max_global_vram": 699.92448, "max_process_vram": 0, "max_reserved": 44.040192, "max_allocated": 42.173952 }, "latency": { "unit": "s", "values": [ 0.2745579528808594, 0.009665535926818849, 0.00901632022857666, 0.008461312294006347, 0.008477696418762207 ], "count": 5, "total": 0.3101788177490235, "mean": 0.0620357635498047, "p50": 0.00901632022857666, "p90": 0.16860098609924318, "p95": 0.22157946949005122, "p99": 0.26396225620269775, "stdev": 0.10626200774652632, "stdev_": 171.29152873441316 }, "throughput": { "unit": "samples/s", "value": 161.19733888616707 }, "energy": { "unit": "kWh", "cpu": 0.000006200777884723231, "ram": 0.0000033791103984226146, "gpu": 0.000008102228704000031, "total": 0.000017682116987145876 }, "efficiency": { "unit": "samples/kWh", "value": 565543.142106206 } }
{ "memory": { "unit": "MB", "max_ram": 1392.361472, "max_global_vram": 699.92448, "max_process_vram": 0, "max_reserved": 44.040192, "max_allocated": 42.173952 }, "latency": { "unit": "s", "values": [ 0.2745579528808594, 0.009665535926818849 ], "count": 2, "total": 0.2842234888076782, "mean": 0.1421117444038391, "p50": 0.1421117444038391, "p90": 0.24806871118545534, "p95": 0.2613133320331573, "p99": 0.27190902871131895, "stdev": 0.13244620847702027, "stdev_": 93.19863677181228 }, "throughput": { "unit": "samples/s", "value": 28.14686440434645 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1392.361472, "max_global_vram": 699.92448, "max_process_vram": 0, "max_reserved": 44.040192, "max_allocated": 42.173952 }, "latency": { "unit": "s", "values": [ 0.00901632022857666, 0.008461312294006347, 0.008477696418762207 ], "count": 3, "total": 0.025955328941345217, "mean": 0.008651776313781738, "p50": 0.008477696418762207, "p90": 0.00890859546661377, "p95": 0.008962457847595215, "p99": 0.00900554775238037, "stdev": 0.0002578582417356606, "stdev_": 2.980408096368702 }, "throughput": { "unit": "samples/s", "value": 693.4992055264277 }, "energy": null, "efficiency": null }
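Rows whose `overall` report includes an `energy` block also report an `efficiency` value in samples/kWh. The numbers are consistent with `total` being the sum of the `cpu`, `ram`, and `gpu` components and `efficiency` being the processed sample count divided by that total; the sample count of 10 used below is an assumption (max_steps 5 × per_device_train_batch_size 2 from the scenario config). A sketch using the figures from the tiny-random-LlamaForCausalLM `overall` block above:

```python
# Energy components (kWh) from the `overall` report above
energy_kwh = {
    "cpu": 0.000006200777884723231,
    "ram": 0.0000033791103984226146,
    "gpu": 0.000008102228704000031,
}
total_kwh = sum(energy_kwh.values())

# Assumed sample count: max_steps (5) x per_device_train_batch_size (2)
samples = 5 * 2
efficiency = samples / total_kwh  # samples/kWh
```

Both the summed `total` and the derived `efficiency` match the values reported in the row, supporting the assumed sample count.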
{ "name": "cuda_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, 
"system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1357.1072, "max_global_vram": 3566.731264, "max_process_vram": 0, "max_reserved": 2910.846976, "max_allocated": 2523.776 }, "latency": { "unit": "s", "values": [ 0.36077261352539064, 0.04537036895751953, 0.04449280166625977, 0.04430950546264648, 0.04501708984375 ], "count": 5, "total": 0.5399623794555665, "mean": 0.1079924758911133, "p50": 0.04501708984375, "p90": 0.2346117156982422, "p95": 0.29769216461181636, "p99": 0.3481565237426758, "stdev": 0.1263906284945815, "stdev_": 117.0365133789679 }, "throughput": { "unit": "samples/s", "value": 92.59904375266666 }, "energy": { "unit": "kWh", "cpu": 0.000008615995320833115, "ram": 0.000004702194379110486, "gpu": 0.000014464456015999758, "total": 0.00002778264571594336 }, "efficiency": { "unit": "samples/kWh", "value": 359936.9225754261 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 1357.1072, "max_global_vram": 3566.731264, "max_process_vram": 0, "max_reserved": 2910.846976, "max_allocated": 2523.776 }, "latency": { "unit": "s", "values": [ 0.36077261352539064, 0.04537036895751953 ], "count": 2, "total": 0.40614298248291014, "mean": 0.20307149124145507, "p50": 0.20307149124145507, "p90": 0.3292323890686035, "p95": 0.34500250129699706, "p99": 0.35761859107971194, "stdev": 0.15770112228393557, "stdev_": 77.65793283924161 }, "throughput": { "unit": "samples/s", "value": 19.69749655919914 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1357.1072, "max_global_vram": 3566.731264, "max_process_vram": 0, "max_reserved": 2910.846976, "max_allocated": 2523.776 }, "latency": { "unit": "s", "values": [ 0.04449280166625977, 0.04430950546264648, 0.04501708984375 ], "count": 3, "total": 0.13381939697265624, "mean": 0.04460646565755208, "p50": 0.04449280166625977, "p90": 0.044912232208251954, "p95": 0.044964661026000975, "p99": 0.04500660408020019, "stdev": 0.00029984278245194344, "stdev_": 0.6721957860411177 }, "throughput": { "unit": 
"samples/s", "value": 134.50964813178766 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_text-generation_openai-community/gpt2
{ "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1357.1072, "max_global_vram": 3566.731264, "max_process_vram": 0, "max_reserved": 2910.846976, "max_allocated": 2523.776 }, "latency": { "unit": "s", "values": [ 0.36077261352539064, 0.04537036895751953, 0.04449280166625977, 0.04430950546264648, 0.04501708984375 ], "count": 5, "total": 0.5399623794555665, "mean": 0.1079924758911133, "p50": 0.04501708984375, "p90": 0.2346117156982422, "p95": 0.29769216461181636, "p99": 0.3481565237426758, "stdev": 0.1263906284945815, "stdev_": 117.0365133789679 }, "throughput": { "unit": "samples/s", "value": 92.59904375266666 }, "energy": { "unit": "kWh", "cpu": 0.000008615995320833115, "ram": 0.000004702194379110486, "gpu": 0.000014464456015999758, "total": 0.00002778264571594336 }, "efficiency": { "unit": "samples/kWh", "value": 359936.9225754261 } }
{ "memory": { "unit": "MB", "max_ram": 1357.1072, "max_global_vram": 3566.731264, "max_process_vram": 0, "max_reserved": 2910.846976, "max_allocated": 2523.776 }, "latency": { "unit": "s", "values": [ 0.36077261352539064, 0.04537036895751953 ], "count": 2, "total": 0.40614298248291014, "mean": 0.20307149124145507, "p50": 0.20307149124145507, "p90": 0.3292323890686035, "p95": 0.34500250129699706, "p99": 0.35761859107971194, "stdev": 0.15770112228393557, "stdev_": 77.65793283924161 }, "throughput": { "unit": "samples/s", "value": 19.69749655919914 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1357.1072, "max_global_vram": 3566.731264, "max_process_vram": 0, "max_reserved": 2910.846976, "max_allocated": 2523.776 }, "latency": { "unit": "s", "values": [ 0.04449280166625977, 0.04430950546264648, 0.04501708984375 ], "count": 3, "total": 0.13381939697265624, "mean": 0.04460646565755208, "p50": 0.04449280166625977, "p90": 0.044912232208251954, "p95": 0.044964661026000975, "p99": 0.04500660408020019, "stdev": 0.00029984278245194344, "stdev_": 0.6721957860411177 }, "throughput": { "unit": "samples/s", "value": 134.50964813178766 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "openai-community/gpt2", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", 
"processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1107.173376, "max_global_vram": 3563.585536, "max_process_vram": 0, "max_reserved": 2915.04128, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 5, "total": 0.8139294586181639, "mean": 0.16278589172363278, "stdev": 0.2273662362653359, "p50": 0.04927385711669922, "p90": 0.3902293930053711, "p95": 0.5038737297058105, "p99": 0.594789199066162, "values": [ 0.61751806640625, 0.049296382904052735, 0.04860006332397461, 0.0492410888671875, 0.04927385711669922 ] }, "throughput": { "unit": "samples/s", "value": 61.4303849929289 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1107.173376, "max_global_vram": 3563.585536, "max_process_vram": 0, "max_reserved": 2915.04128, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 2, "total": 0.6668144493103026, "mean": 0.3334072246551513, "stdev": 0.2841108417510986, "p50": 0.3334072246551513, "p90": 0.5606958980560303, "p95": 0.5891069822311401, "p99": 0.6118358495712279, "values": [ 0.61751806640625, 0.049296382904052735 ] }, "throughput": { "unit": "samples/s", "value": 11.99734050195603 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1107.173376, "max_global_vram": 3563.585536, "max_process_vram": 0, "max_reserved": 2915.04128, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 3, "total": 0.1471150093078613, "mean": 0.04903833643595377, "stdev": 0.00031019448743967263, "p50": 0.0492410888671875, "p90": 0.04926730346679687, "p95": 0.049270580291748044, "p99": 0.04927320175170898, "values": [ 0.04860006332397461, 0.0492410888671875, 0.04927385711669922 ] }, "throughput": { "unit": "samples/s", "value": 122.35325331307405 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cuda_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, 
"cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1438.158848, "max_global_vram": 4604.821504, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3703.54432 }, "latency": { "unit": "s", "values": [ 0.45478912353515627, 0.1893631286621094, 0.0835420150756836, 0.08369971466064453, 0.08313139343261719 ], "count": 5, "total": 0.894525375366211, "mean": 0.17890507507324221, "p50": 0.08369971466064453, "p90": 0.34861872558593754, "p95": 0.4017039245605468, "p99": 0.44417208374023437, "stdev": 0.1439111886122783, "stdev_": 80.43996994124527 }, "throughput": { "unit": "samples/s", "value": 55.895563588154694 }, "energy": { "unit": "kWh", "cpu": 0.000012832551090971836, "ram": 0.000007012865461790179, "gpu": 0.00002287779608000154, "total": 0.00004272321263276355 }, "efficiency": { "unit": "samples/kWh", "value": 234064.7948448335 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 1438.158848, "max_global_vram": 4604.821504, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3703.54432 }, "latency": { "unit": "s", "values": [ 0.45478912353515627, 0.1893631286621094 ], "count": 2, "total": 0.6441522521972657, "mean": 0.32207612609863284, "p50": 0.32207612609863284, "p90": 0.4282465240478516, "p95": 0.4415178237915039, "p99": 0.4521348635864258, "stdev": 0.13271299743652343, "stdev_": 41.20547494286531 }, "throughput": { "unit": "samples/s", "value": 12.41942409222544 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1438.158848, "max_global_vram": 4604.821504, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3703.54432 }, "latency": { "unit": "s", "values": [ 0.0835420150756836, 0.08369971466064453, 0.08313139343261719 ], "count": 3, "total": 0.25037312316894533, "mean": 0.08345770772298178, "p50": 0.0835420150756836, "p90": 0.08366817474365235, "p95": 0.08368394470214843, "p99": 0.08369656066894532, "stdev": 0.0002395524324600902, "stdev_": 0.2870345220302817 }, "throughput": 
{ "unit": "samples/s", "value": 71.89270067080668 }, "energy": null, "efficiency": null } }
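The latency blocks in the record above report summary statistics derived from the raw per-step values. As a sanity check, the figures in the `train` section can be reproduced with population standard deviation and linear-interpolation percentiles (NumPy's default percentile method); a minimal stdlib-only sketch, using the three train-step latencies copied from the record:

```python
# Raw per-step "train" latencies copied from the record above (seconds).
values = [0.0835420150756836, 0.08369971466064453, 0.08313139343261719]

def summarize(values):
    """Recompute the summary statistics reported in the benchmark record."""
    n = len(values)
    mean = sum(values) / n
    s = sorted(values)

    def percentile(p):
        # Linear interpolation between closest ranks (NumPy's default method).
        k = (n - 1) * p / 100
        lo, hi = int(k), min(int(k) + 1, n - 1)
        return s[lo] + (k - lo) * (s[hi] - s[lo])

    # Population (not sample) standard deviation matches the reported "stdev".
    stdev = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return {"count": n, "total": sum(values), "mean": mean,
            "p50": percentile(50), "p90": percentile(90), "stdev": stdev}

stats = summarize(values)
print(stats)
```

The recomputed `mean`, `p50`, `p90`, and `stdev` agree with the record's `train` block, which suggests (though the source does not state it) that the harness uses NumPy-style percentiles and population stdev.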
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_token-classification_microsoft/deberta-v3-base
{ "name": "pytorch", "version": "2.5.1+cu124", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
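The viewer error shown in the page header traces back to rows like this one: fields such as `model_kwargs`, `processor_kwargs`, and `torch_compile_config` are empty objects, which Arrow infers as a struct type with no child fields and cannot write to Parquet. A minimal stdlib-only sketch of the usual workaround, serializing empty-dict fields to JSON strings so Arrow infers a plain string column (the row content here is abbreviated; the helper name is illustrative, not part of any library):

```python
import json

# Abbreviated copy of the backend row above; the empty objects are
# what trigger ArrowNotImplementedError during Parquet conversion.
row = {
    "name": "pytorch",
    "model": "microsoft/deberta-v3-base",
    "model_kwargs": {},          # childless struct -> Parquet write fails
    "torch_compile_config": {},  # same problem
}

def stringify_empty_dicts(record):
    """Replace empty-dict values with their JSON string form so Arrow
    infers a string column instead of a struct with no child fields."""
    return {k: (json.dumps(v) if isinstance(v, dict) and not v else v)
            for k, v in record.items()}

fixed = stringify_empty_dicts(row)
print(fixed["model_kwargs"])  # the string "{}", which Parquet can store
```

The alternative suggested by the error message itself is to add a dummy child field to the struct; stringifying keeps the schema simpler when the dicts are always empty.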
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.248768, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.47.0", "transformers_commit": null, "accelerate_version": "1.2.0", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.12", "timm_commit": null, "peft_version": "0.14.0", "peft_commit": null }
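The efficiency figure in the first record's `overall` block is consistent with samples per kWh over the full run, where the sample count follows from the scenario config above (5 steps at batch size 2); a sketch checking that, with all figures copied from the records:

```python
# Figures copied from the scenario config and the "overall" energy block above.
max_steps = 5
per_device_train_batch_size = 2
gradient_accumulation_steps = 1

energy_kwh = {
    "cpu": 0.000012832551090971836,
    "ram": 0.000007012865461790179,
    "gpu": 0.00002287779608000154,
}
total_kwh = sum(energy_kwh.values())  # matches the reported "total"

samples = max_steps * per_device_train_batch_size * gradient_accumulation_steps
efficiency = samples / total_kwh  # samples/kWh, matches reported "efficiency"
print(total_kwh, efficiency)
```

This is a consistency check on the published numbers, not the harness's documented formula; the energy components do sum to the reported total, and 10 samples over that energy reproduces the reported 234064.79 samples/kWh.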
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1438.158848, "max_global_vram": 4604.821504, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3703.54432 }, "latency": { "unit": "s", "values": [ 0.45478912353515627, 0.1893631286621094, 0.0835420150756836, 0.08369971466064453, 0.08313139343261719 ], "count": 5, "total": 0.894525375366211, "mean": 0.17890507507324221, "p50": 0.08369971466064453, "p90": 0.34861872558593754, "p95": 0.4017039245605468, "p99": 0.44417208374023437, "stdev": 0.1439111886122783, "stdev_": 80.43996994124527 }, "throughput": { "unit": "samples/s", "value": 55.895563588154694 }, "energy": { "unit": "kWh", "cpu": 0.000012832551090971836, "ram": 0.000007012865461790179, "gpu": 0.00002287779608000154, "total": 0.00004272321263276355 }, "efficiency": { "unit": "samples/kWh", "value": 234064.7948448335 } }
{ "memory": { "unit": "MB", "max_ram": 1438.158848, "max_global_vram": 4604.821504, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3703.54432 }, "latency": { "unit": "s", "values": [ 0.45478912353515627, 0.1893631286621094 ], "count": 2, "total": 0.6441522521972657, "mean": 0.32207612609863284, "p50": 0.32207612609863284, "p90": 0.4282465240478516, "p95": 0.4415178237915039, "p99": 0.4521348635864258, "stdev": 0.13271299743652343, "stdev_": 41.20547494286531 }, "throughput": { "unit": "samples/s", "value": 12.41942409222544 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1438.158848, "max_global_vram": 4604.821504, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3703.54432 }, "latency": { "unit": "s", "values": [ 0.0835420150756836, 0.08369971466064453, 0.08313139343261719 ], "count": 3, "total": 0.25037312316894533, "mean": 0.08345770772298178, "p50": 0.0835420150756836, "p90": 0.08366817474365235, "p95": 0.08368394470214843, "p99": 0.08369656066894532, "stdev": 0.0002395524324600902, "stdev_": 0.2870345220302817 }, "throughput": { "unit": "samples/s", "value": 71.89270067080668 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "model": "microsoft/deberta-v3-base", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": 
"Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1140.764672, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 5, "total": 1.0522736740112304, "mean": 0.21045473480224608, "stdev": 0.2521626361507063, "p50": 0.08427519989013672, "p90": 0.46303764648437507, "p95": 0.5889081359863281, "p99": 0.6896045275878907, "values": [ 0.7147786254882813, 0.08542617797851562, 0.08360550689697266, 0.08418816375732421, 0.08427519989013672 ] }, "throughput": { "unit": "samples/s", "value": 47.51615595342393 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1140.764672, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 2, "total": 0.8002048034667969, "mean": 0.40010240173339845, "stdev": 0.3146762237548828, "p50": 0.40010240173339845, "p90": 0.6518433807373047, "p95": 0.683311003112793, "p99": 0.7084851010131836, "values": [ 0.7147786254882813, 0.08542617797851562 ] }, "throughput": { "unit": "samples/s", "value": 9.997440611879489 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1140.764672, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 3, "total": 0.2520688705444336, "mean": 0.08402295684814454, "stdev": 0.0002973125946472148, "p50": 0.08418816375732421, "p90": 0.08425779266357422, "p95": 0.08426649627685547, "p99": 0.08427345916748047, "values": [ 0.08360550689697266, 0.08418816375732421, 0.08427519989013672 ] }, "throughput": { "unit": "samples/s", "value": 71.40905563278207 }, "energy": null, "efficiency": null } }
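The two records above describe the same benchmark under different stacks (PyTorch 2.5.1+cu124 / transformers 4.47.0 vs PyTorch 2.2.2 / transformers 4.40.2), so their steady-state `train` latencies can be compared directly; a minimal sketch using the reported means:

```python
# Mean per-step "train" latencies copied from the two records above (seconds).
mean_torch_2_5_1 = 0.08345770772298178  # pytorch 2.5.1+cu124 run
mean_torch_2_2_2 = 0.08402295684814454  # pytorch 2.2.2 run

speedup = mean_torch_2_2_2 / mean_torch_2_5_1
print(f"2.5.1 run is {speedup:.4f}x the per-step speed of the 2.2.2 run")
```

The difference (under 1%) is well within the run-to-run noise implied by the reported stdevs, so no conclusion about the PyTorch versions should be drawn from these two runs alone.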
null
null
null
null
null
null
null
null
null
null

