[2025-01-15 18:16:51,423 I 527828 527828] (raylet) main.cc:180: Setting cluster ID to: c6dd68455e4477d20070064a9a50d1e2164d023ecaa38bf3a6635eb3
[2025-01-15 18:16:51,432 I 527828 527828] (raylet) main.cc:289: Raylet is not set to kill unknown children.
[2025-01-15 18:16:51,433 I 527828 527828] (raylet) io_service_pool.cc:35: IOServicePool is running with 1 io_service.
[2025-01-15 18:16:51,433 I 527828 527828] (raylet) main.cc:419: Setting node ID node_id=431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed
[2025-01-15 18:16:51,434 I 527828 527828] (raylet) store_runner.cc:32: Allowing the Plasma store to use up to 2.14748GB of memory.
[2025-01-15 18:16:51,434 I 527828 527828] (raylet) store_runner.cc:48: Starting object store with directory /dev/shm, fallback /tmp/ray, and huge page support disabled
[2025-01-15 18:16:51,434 I 527828 527857] (raylet) dlmalloc.cc:154: create_and_mmap_buffer(2147483656, /dev/shm/plasmaXXXXXX)
[2025-01-15 18:16:51,436 I 527828 527857] (raylet) store.cc:564: Plasma store debug dump:
Current usage: 0 / 2.14748 GB
- num bytes created total: 0
0 pending objects of total size 0MB
- objects spillable: 0
- bytes spillable: 0
- objects unsealed: 0
- bytes unsealed: 0
- objects in use: 0
- bytes in use: 0
- objects evictable: 0
- bytes evictable: 0
- objects created by worker: 0
- bytes created by worker: 0
- objects restored: 0
- bytes restored: 0
- objects received: 0
- bytes received: 0
- objects errored: 0
- bytes errored: 0
[2025-01-15 18:16:51,440 I 527828 527828] (raylet) grpc_server.cc:134: ObjectManager server started, listening on port 44185.
[2025-01-15 18:16:51,443 I 527828 527828] (raylet) worker_killing_policy.cc:101: Running GroupByOwner policy.
[2025-01-15 18:16:51,443 I 527828 527828] (raylet) memory_monitor.cc:47: MemoryMonitor initialized with usage threshold at 94999994368 bytes (0.95 system memory), total system memory bytes: 99999997952
[2025-01-15 18:16:51,443 I 527828 527828] (raylet) node_manager.cc:287: Initializing NodeManager node_id=431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed
[2025-01-15 18:16:51,444 I 527828 527828] (raylet) grpc_server.cc:134: NodeManager server started, listening on port 35913.
[2025-01-15 18:16:51,453 I 527828 527896] (raylet) agent_manager.cc:77: Monitor agent process with name dashboard_agent/424238335
[2025-01-15 18:16:51,453 I 527828 527898] (raylet) agent_manager.cc:77: Monitor agent process with name runtime_env_agent
[2025-01-15 18:16:51,453 I 527828 527828] (raylet) event.cc:493: Ray Event initialized for RAYLET
[2025-01-15 18:16:51,453 I 527828 527828] (raylet) event.cc:324: Set ray event level to warning
[2025-01-15 18:16:51,456 I 527828 527828] (raylet) raylet.cc:134: Raylet of id, 431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed started. Raylet consists of node_manager and object_manager. node_manager address: 192.168.0.2:35913 object_manager address: 192.168.0.2:44185 hostname: 0cd925b1f73b
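The two sizing figures above are simple arithmetic on each other: the Plasma store cap of 2.14748GB is 2 GiB (2147483648 bytes) printed in decimal gigabytes, and the memory monitor threshold is roughly 95% of the reported total system memory (the logged byte count differs slightly from a plain 0.95 multiplication, presumably due to internal rounding). A minimal Python sketch of that arithmetic, using only values taken from this log:

    # Sketch only: reproduces the sizing arithmetic visible in the log above.
    # Ray's internal rounding may differ slightly from this plain multiplication.
    object_store_bytes = 2 * 1024**3          # 2147483648 bytes
    print(object_store_bytes / 1e9)           # 2.147483648 -> logged as "2.14748GB"

    total_system_memory = 99_999_997_952      # from memory_monitor.cc:47
    usage_fraction = 0.95                     # default usage threshold
    print(int(total_system_memory * usage_fraction))
    # ~94999998054, close to the logged threshold of 94999994368 bytes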
[2025-01-15 18:16:51,458 I 527828 527828] (raylet) node_manager.cc:525: [state-dump] NodeManager:
[state-dump] Node ID: 431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed
[state-dump] Node name: 192.168.0.2
[state-dump] InitialConfigResources: {CPU: 200000, object_store_memory: 21474836480000, GPU: 20000, node:192.168.0.2: 10000, node:__internal_head__: 10000, accelerator_type:A40: 10000, memory: 863880515590000}
[state-dump] ClusterTaskManager:
[state-dump] ========== Node: 431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed =================
[state-dump] Infeasible queue length: 0
[state-dump] Schedule queue length: 0
[state-dump] Dispatch queue length: 0
[state-dump] num_waiting_for_resource: 0
[state-dump] num_waiting_for_plasma_memory: 0
[state-dump] num_waiting_for_remote_node_resources: 0
[state-dump] num_worker_not_started_by_job_config_not_exist: 0
[state-dump] num_worker_not_started_by_registration_timeout: 0
[state-dump] num_tasks_waiting_for_workers: 0
[state-dump] num_cancelled_tasks: 0
[state-dump] cluster_resource_scheduler state:
[state-dump] Local id: -8328701162829391839 Local resources: {"total":{object_store_memory: [21474836480000], memory: [863880515590000], accelerator_type:A40: [10000], node:__internal_head__: [10000], GPU: [10000, 10000], node:192.168.0.2: [10000], CPU: [200000]}}, "available": {object_store_memory: [21474836480000], memory: [863880515590000], accelerator_type:A40: [10000], node:__internal_head__: [10000], GPU: [10000, 10000], node:192.168.0.2: [10000], CPU: [200000]}}, "labels":{"ray.io/node_id":"431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed",} is_draining: 0 is_idle: 1 Cluster resources: node id: -8328701162829391839{"total":{accelerator_type:A40: 10000, node:__internal_head__: 10000, node:192.168.0.2: 10000, object_store_memory: 21474836480000, memory: 863880515590000, GPU: 20000, CPU: 200000}}, "available": {accelerator_type:A40: 10000, node:__internal_head__: 10000, node:192.168.0.2: 10000, object_store_memory: 21474836480000, memory: 863880515590000, GPU: 20000, CPU: 200000}}, "labels":{"ray.io/node_id":"431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed",}, "is_draining": 0, "draining_deadline_timestamp_ms": -1} { "placment group locations": [], "node to bundles": []}
[state-dump] Waiting tasks size: 0
[state-dump] Number of executing tasks: 0
[state-dump] Number of pinned task arguments: 0
[state-dump] Number of total spilled tasks: 0
[state-dump] Number of spilled waiting tasks: 0
[state-dump] Number of spilled unschedulable tasks: 0
[state-dump] Resource usage {
[state-dump] }
[state-dump] Backlog Size per scheduling descriptor :{workerId: num backlogs}:
[state-dump]
[state-dump] Running tasks by scheduling class:
[state-dump] ==================================================
[state-dump]
[state-dump] ClusterResources:
[state-dump] LocalObjectManager:
[state-dump] - num pinned objects: 0
[state-dump] - pinned objects size: 0
[state-dump] - num objects pending restore: 0
[state-dump] - num objects pending spill: 0
[state-dump] - num bytes pending spill: 0
[state-dump] - num bytes currently spilled: 0
[state-dump] - cumulative spill requests: 0
[state-dump] - cumulative restore requests: 0
[state-dump] - spilled objects pending delete: 0
[state-dump]
[state-dump] ObjectManager:
[state-dump] - num local objects: 0
[state-dump] - num unfulfilled push requests: 0
[state-dump] - num object pull requests: 0
[state-dump] - num chunks received total: 0
[state-dump] - num chunks received failed (all): 0
[state-dump] - num chunks received failed / cancelled: 0
[state-dump] - num chunks received failed / plasma error: 0
[state-dump] Event stats:
[state-dump] Global stats: 0 total (0 active)
[state-dump] Queueing time: mean = -nan s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] Execution time: mean = -nan s, total = 0.000 s
[state-dump] Event stats:
[state-dump] PushManager:
[state-dump] - num pushes in flight: 0
[state-dump] - num chunks in flight: 0
[state-dump] - num chunks remaining: 0
[state-dump] - max chunks allowed: 409
[state-dump] OwnershipBasedObjectDirectory:
[state-dump] - num listeners: 0
[state-dump] - cumulative location updates: 0
[state-dump] - num location updates per second: 140586630528440000.000
[state-dump] - num location lookups per second: 140586630528416000.000
[state-dump] - num locations added per second: 0.000
[state-dump] - num locations removed per second: 0.000
[state-dump] BufferPool:
[state-dump] - create buffer state map size: 0
[state-dump] PullManager:
[state-dump] - num bytes available for pulled objects: 2147483648
[state-dump] - num bytes being pulled (all): 0
[state-dump] - num bytes being pulled / pinned: 0
[state-dump] - get request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - wait request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - task request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - first get request bundle: N/A
[state-dump] - first wait request bundle: N/A
[state-dump] - first task request bundle: N/A
[state-dump] - num objects queued: 0
[state-dump] - num objects actively pulled (all): 0
[state-dump] - num objects actively pulled / pinned: 0
[state-dump] - num bundles being pulled: 0
[state-dump] - num pull retries: 0
[state-dump] - max timeout seconds: 0
[state-dump] - max timeout request is already processed. No entry.
[state-dump]
[state-dump] WorkerPool:
[state-dump] - registered jobs: 0
[state-dump] - process_failed_job_config_missing: 0
[state-dump] - process_failed_rate_limited: 0
[state-dump] - process_failed_pending_registration: 0
[state-dump] - process_failed_runtime_env_setup_failed: 0
[state-dump] - num PYTHON workers: 0
[state-dump] - num PYTHON drivers: 0
[state-dump] - num PYTHON pending start requests: 0
[state-dump] - num PYTHON pending registration requests: 0
[state-dump] - num object spill callbacks queued: 0
[state-dump] - num object restore queued: 0
[state-dump] - num util functions queued: 0
[state-dump] - num idle workers: 0
[state-dump] TaskDependencyManager:
[state-dump] - task deps map size: 0
[state-dump] - get req map size: 0
[state-dump] - wait req map size: 0
[state-dump] - local objects map size: 0
[state-dump] WaitManager:
[state-dump] - num active wait requests: 0
[state-dump] Subscriber:
[state-dump] Channel WORKER_OBJECT_EVICTION
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] Channel WORKER_REF_REMOVED_CHANNEL
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] Channel WORKER_OBJECT_LOCATIONS_CHANNEL
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] num async plasma notifications: 0
[state-dump] Remote node managers:
[state-dump] Event stats:
[state-dump] Global stats: 27 total (13 active)
[state-dump] Queueing time: mean = 1.452 ms, max = 10.889 ms, min = 30.657 us, total = 39.203 ms
[state-dump] Execution time: mean = 1.084 ms, total = 29.255 ms
[state-dump] Event stats:
[state-dump] PeriodicalRunner.RunFnPeriodically - 11 total (2 active, 1 running), Execution time: mean = 168.579 us, total = 1.854 ms, Queueing time: mean = 3.554 ms, max = 10.889 ms, min = 30.657 us, total = 39.093 ms
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 1.341 ms, total = 1.341 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.GCTaskFailureReason - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.deadline_timer.record_metrics - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] RayletWorkerPool.deadline_timer.kill_idle_workers - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.ScheduleAndDispatchTasks - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode.OnReplyReceived - 1 total (0 active), Execution time: mean = 303.912 us, total = 303.912 us, Queueing time: mean = 77.189 us, max = 77.189 us, min = 77.189 us, total = 77.189 us
[state-dump] NodeManager.deadline_timer.spill_objects_when_over_threshold - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig.OnReplyReceived - 1 total (0 active), Execution time: mean = 21.748 ms, total = 21.748 ms, Queueing time: mean = 33.189 us, max = 33.189 us, min = 33.189 us, total = 33.189 us
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode - 1 total (0 active), Execution time: mean = 2.028 ms, total = 2.028 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.deadline_timer.flush_free_objects - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ClusterResourceManager.ResetRemoteNodeView - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig - 1 total (0 active), Execution time: mean = 1.980 ms, total = 1.980 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] MemoryMonitor.CheckIsMemoryUsageAboveThreshold - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.deadline_timer.debug_state_dump - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] DebugString() time ms: 0
[state-dump]
[state-dump]
[2025-01-15 18:16:51,460 I 527828 527828] (raylet) accessor.cc:762: Received notification for node, IsAlive = 1 node_id=431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed
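The resource totals in the state dump above are printed in the raylet's internal fixed-point representation, which appears to scale every quantity by 10,000 (CPU: 200000, GPU: 20000, object_store_memory: 21474836480000, and so on). A small sketch, under that scaling assumption, that converts them back to the human-readable values a `ray status` report would show:

    # Sketch only: decodes the fixed-point resource totals from the state dump,
    # assuming a scaling factor of 10,000 per resource unit.
    SCALE = 10_000

    raw_totals = {
        "CPU": 200000,
        "GPU": 20000,
        "object_store_memory": 21474836480000,
        "memory": 863880515590000,
    }

    for name, raw in raw_totals.items():
        value = raw / SCALE
        if name in ("memory", "object_store_memory"):
            print(f"{name}: {value / 1024**3:.2f} GiB")   # bytes -> GiB
        else:
            print(f"{name}: {value:g}")

    # Prints: CPU: 20, GPU: 2, object_store_memory: 2.00 GiB, memory: 80.46 GiB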
[2025-01-15 18:16:51,604 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527935, the token is 0
[2025-01-15 18:16:51,608 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527936, the token is 1
[2025-01-15 18:16:51,610 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527937, the token is 2
[2025-01-15 18:16:51,612 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527938, the token is 3
[2025-01-15 18:16:51,614 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527939, the token is 4
[2025-01-15 18:16:51,616 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527940, the token is 5
[2025-01-15 18:16:51,618 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527941, the token is 6
[2025-01-15 18:16:51,620 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527942, the token is 7
[2025-01-15 18:16:51,622 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527943, the token is 8
[2025-01-15 18:16:51,624 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527944, the token is 9
[2025-01-15 18:16:51,627 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527945, the token is 10
[2025-01-15 18:16:51,630 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527946, the token is 11
[2025-01-15 18:16:51,632 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527947, the token is 12
[2025-01-15 18:16:51,635 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527948, the token is 13
[2025-01-15 18:16:51,637 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527949, the token is 14
[2025-01-15 18:16:51,639 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527950, the token is 15
[2025-01-15 18:16:51,642 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527951, the token is 16
[2025-01-15 18:16:51,644 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527952, the token is 17
[2025-01-15 18:16:51,647 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527953, the token is 18
[2025-01-15 18:16:51,650 I 527828 527828] (raylet) worker_pool.cc:501: Started worker process with pid 527954, the token is 19
[2025-01-15 18:16:52,360 I 527828 527857] (raylet) object_store.cc:35: Object store current usage 8e-09 / 2.14748 GB.
[2025-01-15 18:16:52,511 I 527828 527828] (raylet) worker_pool.cc:692: Job 01000000 already started in worker pool.
[2025-01-15 18:16:53,097 I 527828 527828] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-15 18:16:53,098 I 527828 527828] (raylet) node_manager.cc:1586: Driver (pid=527562) is disconnected. worker_id=01000000ffffffffffffffffffffffffffffffffffffffffffffffff job_id=01000000
[2025-01-15 18:16:53,104 I 527828 527828] (raylet) worker_pool.cc:692: Job 01000000 already started in worker pool.
[2025-01-15 18:16:53,204 I 527828 527828] (raylet) main.cc:454: received SIGTERM. Existing local drain request = None
[2025-01-15 18:16:53,204 I 527828 527828] (raylet) main.cc:255: Raylet graceful shutdown triggered, reason = EXPECTED_TERMINATION, reason message = received SIGTERM
[2025-01-15 18:16:53,204 I 527828 527828] (raylet) main.cc:258: Shutting down...
[2025-01-15 18:16:53,204 I 527828 527828] (raylet) accessor.cc:510: Unregistering node node_id=431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed
[2025-01-15 18:16:53,206 I 527828 527828] (raylet) accessor.cc:523: Finished unregistering node info, status = OK node_id=431ce0c1466ea23c73e34d584e0d0f56775d4ba822bfbc5a6eadc0ed
[2025-01-15 18:16:53,212 I 527828 527828] (raylet) agent_manager.cc:112: Killing agent dashboard_agent/424238335, pid 527895.
[2025-01-15 18:16:53,224 I 527828 527896] (raylet) agent_manager.cc:79: Agent process with name dashboard_agent/424238335 exited, exit code 0.
[2025-01-15 18:16:53,224 I 527828 527828] (raylet) agent_manager.cc:112: Killing agent runtime_env_agent, pid 527897.
[2025-01-15 18:16:53,230 I 527828 527898] (raylet) agent_manager.cc:79: Agent process with name runtime_env_agent exited, exit code 0.
[2025-01-15 18:16:53,231 I 527828 527828] (raylet) io_service_pool.cc:47: IOServicePool is stopped.
[2025-01-15 18:16:53,337 I 527828 527828] (raylet) stats.h:120: Stats module has shutdown.
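Read end to end, this is one short head-node session: the raylet starts, prestarts 20 Python workers to match the node's 20 CPUs, the driver for job 01000000 connects and disconnects about a second later, and a SIGTERM (as sent by `ray stop` or by whatever process launched the cluster) triggers a graceful drain and shutdown. As an illustration only, a driver session shaped like the node in this log might be started as below; the values are inferred from the log, not taken from whatever script actually produced it:

    # Sketch only: a local Ray session roughly matching this node
    # (20 CPUs, 2 GPUs, a 2 GiB object store). The real launch command
    # behind this log is unknown; these numbers are inferred from it.
    import ray

    ray.init(
        num_cpus=20,
        num_gpus=2,
        object_store_memory=2 * 1024**3,  # "Allowing the Plasma store to use up to 2.14748GB"
    )
    print(ray.cluster_resources())        # expect CPU: 20.0, GPU: 2.0, ...

    ray.shutdown()  # ends the session; the raylet above exits after receiving SIGTERM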