[2025-01-10 23:12:28,384 I 50136 50136] (raylet) main.cc:180: Setting cluster ID to: dc63ec7d4f2695faf284f4494bda099531bef1598f2f452a1cec6669
[2025-01-10 23:12:28,391 I 50136 50136] (raylet) main.cc:289: Raylet is not set to kill unknown children.
[2025-01-10 23:12:28,391 I 50136 50136] (raylet) io_service_pool.cc:35: IOServicePool is running with 1 io_service.
[2025-01-10 23:12:28,391 I 50136 50136] (raylet) main.cc:419: Setting node ID node_id=df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0
[2025-01-10 23:12:28,392 I 50136 50136] (raylet) store_runner.cc:32: Allowing the Plasma store to use up to 2.14748GB of memory.
[2025-01-10 23:12:28,392 I 50136 50136] (raylet) store_runner.cc:48: Starting object store with directory /dev/shm, fallback /tmp/ray, and huge page support disabled
[2025-01-10 23:12:28,392 I 50136 50165] (raylet) dlmalloc.cc:154: create_and_mmap_buffer(2147483656, /dev/shm/plasmaXXXXXX)
[2025-01-10 23:12:28,393 I 50136 50165] (raylet) store.cc:564: Plasma store debug dump: 
Current usage: 0 / 2.14748 GB
- num bytes created total: 0
0 pending objects of total size 0MB
- objects spillable: 0
- bytes spillable: 0
- objects unsealed: 0
- bytes unsealed: 0
- objects in use: 0
- bytes in use: 0
- objects evictable: 0
- bytes evictable: 0

- objects created by worker: 0
- bytes created by worker: 0
- objects restored: 0
- bytes restored: 0
- objects received: 0
- bytes received: 0
- objects errored: 0
- bytes errored: 0

[2025-01-10 23:12:28,396 I 50136 50136] (raylet) grpc_server.cc:134: ObjectManager server started, listening on port 36649.
[2025-01-10 23:12:28,399 I 50136 50136] (raylet) worker_killing_policy.cc:101: Running GroupByOwner policy.
[2025-01-10 23:12:28,399 I 50136 50136] (raylet) memory_monitor.cc:47: MemoryMonitor initialized with usage threshold at 94999994368 bytes (0.95 system memory), total system memory bytes: 99999997952
[2025-01-10 23:12:28,399 I 50136 50136] (raylet) node_manager.cc:287: Initializing NodeManager node_id=df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0
[2025-01-10 23:12:28,400 I 50136 50136] (raylet) grpc_server.cc:134: NodeManager server started, listening on port 34821.
[2025-01-10 23:12:28,407 I 50136 50204] (raylet) agent_manager.cc:77: Monitor agent process with name dashboard_agent/424238335
[2025-01-10 23:12:28,407 I 50136 50206] (raylet) agent_manager.cc:77: Monitor agent process with name runtime_env_agent
[2025-01-10 23:12:28,407 I 50136 50136] (raylet) event.cc:493: Ray Event initialized for RAYLET
[2025-01-10 23:12:28,407 I 50136 50136] (raylet) event.cc:324: Set ray event level to warning
[2025-01-10 23:12:28,408 I 50136 50136] (raylet) raylet.cc:134: Raylet of id, df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0 started. Raylet consists of node_manager and object_manager. node_manager address: 192.168.0.2:34821 object_manager address: 192.168.0.2:36649 hostname: 0cd925b1f73b
[2025-01-10 23:12:28,410 I 50136 50136] (raylet) node_manager.cc:525: [state-dump] NodeManager:
[state-dump] Node ID: df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0
[state-dump] Node name: 192.168.0.2
[state-dump] InitialConfigResources: {CPU: 200000, memory: 849738305540000, node:192.168.0.2: 10000, accelerator_type:A40: 10000, GPU: 20000, object_store_memory: 21474836480000, node:__internal_head__: 10000}
[state-dump] ClusterTaskManager:
[state-dump] ========== Node: df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0 =================
[state-dump] Infeasible queue length: 0
[state-dump] Schedule queue length: 0
[state-dump] Dispatch queue length: 0
[state-dump] num_waiting_for_resource: 0
[state-dump] num_waiting_for_plasma_memory: 0
[state-dump] num_waiting_for_remote_node_resources: 0
[state-dump] num_worker_not_started_by_job_config_not_exist: 0
[state-dump] num_worker_not_started_by_registration_timeout: 0
[state-dump] num_tasks_waiting_for_workers: 0
[state-dump] num_cancelled_tasks: 0
[state-dump] cluster_resource_scheduler state: 
[state-dump] Local id: 652636538163867454 Local resources: {"total":{node:__internal_head__: [10000], accelerator_type:A40: [10000], node:192.168.0.2: [10000], CPU: [200000], memory: [849738305540000], GPU: [10000, 10000], object_store_memory: [21474836480000]}}, "available": {node:__internal_head__: [10000], accelerator_type:A40: [10000], node:192.168.0.2: [10000], CPU: [200000], memory: [849738305540000], GPU: [10000, 10000], object_store_memory: [21474836480000]}}, "labels":{"ray.io/node_id":"df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0",} is_draining: 0 is_idle: 1 Cluster resources: node id: 652636538163867454{"total":{GPU: 20000, node:__internal_head__: 10000, memory: 849738305540000, node:192.168.0.2: 10000, accelerator_type:A40: 10000, object_store_memory: 21474836480000, CPU: 200000}}, "available": {GPU: 20000, node:__internal_head__: 10000, memory: 849738305540000, node:192.168.0.2: 10000, accelerator_type:A40: 10000, object_store_memory: 21474836480000, CPU: 200000}}, "labels":{"ray.io/node_id":"df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0",}, "is_draining": 0, "draining_deadline_timestamp_ms": -1} { "placement group locations": [], "node to bundles": []}
[state-dump] Waiting tasks size: 0
[state-dump] Number of executing tasks: 0
[state-dump] Number of pinned task arguments: 0
[state-dump] Number of total spilled tasks: 0
[state-dump] Number of spilled waiting tasks: 0
[state-dump] Number of spilled unschedulable tasks: 0
[state-dump] Resource usage {
[state-dump] }
[state-dump] Backlog Size per scheduling descriptor :{workerId: num backlogs}:
[state-dump] 
[state-dump] Running tasks by scheduling class:
[state-dump] ==================================================
[state-dump] 
[state-dump] ClusterResources:
[state-dump] LocalObjectManager:
[state-dump] - num pinned objects: 0
[state-dump] - pinned objects size: 0
[state-dump] - num objects pending restore: 0
[state-dump] - num objects pending spill: 0
[state-dump] - num bytes pending spill: 0
[state-dump] - num bytes currently spilled: 0
[state-dump] - cumulative spill requests: 0
[state-dump] - cumulative restore requests: 0
[state-dump] - spilled objects pending delete: 0
[state-dump] 
[state-dump] ObjectManager:
[state-dump] - num local objects: 0
[state-dump] - num unfulfilled push requests: 0
[state-dump] - num object pull requests: 0
[state-dump] - num chunks received total: 0
[state-dump] - num chunks received failed (all): 0
[state-dump] - num chunks received failed / cancelled: 0
[state-dump] - num chunks received failed / plasma error: 0
[state-dump] Event stats:
[state-dump] Global stats: 0 total (0 active)
[state-dump] Queueing time: mean = -nan s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] Execution time:  mean = -nan s, total = 0.000 s
[state-dump] Event stats:
[state-dump] PushManager:
[state-dump] - num pushes in flight: 0
[state-dump] - num chunks in flight: 0
[state-dump] - num chunks remaining: 0
[state-dump] - max chunks allowed: 409
[state-dump] OwnershipBasedObjectDirectory:
[state-dump] - num listeners: 0
[state-dump] - cumulative location updates: 0
[state-dump] - num location updates per second: 139814652100024000.000
[state-dump] - num location lookups per second: 139814652100000000.000
[state-dump] - num locations added per second: 0.000
[state-dump] - num locations removed per second: 0.000
[state-dump] BufferPool:
[state-dump] - create buffer state map size: 0
[state-dump] PullManager:
[state-dump] - num bytes available for pulled objects: 2147483648
[state-dump] - num bytes being pulled (all): 0
[state-dump] - num bytes being pulled / pinned: 0
[state-dump] - get request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - wait request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - task request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - first get request bundle: N/A
[state-dump] - first wait request bundle: N/A
[state-dump] - first task request bundle: N/A
[state-dump] - num objects queued: 0
[state-dump] - num objects actively pulled (all): 0
[state-dump] - num objects actively pulled / pinned: 0
[state-dump] - num bundles being pulled: 0
[state-dump] - num pull retries: 0
[state-dump] - max timeout seconds: 0
[state-dump] - max timeout request is already processed. No entry.
[state-dump] 
[state-dump] WorkerPool:
[state-dump] - registered jobs: 0
[state-dump] - process_failed_job_config_missing: 0
[state-dump] - process_failed_rate_limited: 0
[state-dump] - process_failed_pending_registration: 0
[state-dump] - process_failed_runtime_env_setup_failed: 0
[state-dump] - num PYTHON workers: 0
[state-dump] - num PYTHON drivers: 0
[state-dump] - num PYTHON pending start requests: 0
[state-dump] - num PYTHON pending registration requests: 0
[state-dump] - num object spill callbacks queued: 0
[state-dump] - num object restore queued: 0
[state-dump] - num util functions queued: 0
[state-dump] - num idle workers: 0
[state-dump] TaskDependencyManager:
[state-dump] - task deps map size: 0
[state-dump] - get req map size: 0
[state-dump] - wait req map size: 0
[state-dump] - local objects map size: 0
[state-dump] WaitManager:
[state-dump] - num active wait requests: 0
[state-dump] Subscriber:
[state-dump] Channel WORKER_OBJECT_LOCATIONS_CHANNEL
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] Channel WORKER_REF_REMOVED_CHANNEL
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] Channel WORKER_OBJECT_EVICTION
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] num async plasma notifications: 0
[state-dump] Remote node managers: 
[state-dump] Event stats:
[state-dump] Global stats: 27 total (13 active)
[state-dump] Queueing time: mean = 1.084 ms, max = 7.977 ms, min = 10.832 us, total = 29.266 ms
[state-dump] Execution time:  mean = 802.487 us, total = 21.667 ms
[state-dump] Event stats:
[state-dump] 	PeriodicalRunner.RunFnPeriodically - 11 total (2 active, 1 running), Execution time: mean = 144.927 us, total = 1.594 ms, Queueing time: mean = 2.658 ms, max = 7.977 ms, min = 24.441 us, total = 29.242 ms
[state-dump] 	ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig.OnReplyReceived - 1 total (0 active), Execution time: mean = 16.732 ms, total = 16.732 ms, Queueing time: mean = 12.747 us, max = 12.747 us, min = 12.747 us, total = 12.747 us
[state-dump] 	ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.deadline_timer.flush_free_objects - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig - 1 total (0 active), Execution time: mean = 1.156 ms, total = 1.156 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	RayletWorkerPool.deadline_timer.kill_idle_workers - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.deadline_timer.spill_objects_when_over_threshold - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.deadline_timer.record_metrics - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode.OnReplyReceived - 1 total (0 active), Execution time: mean = 203.966 us, total = 203.966 us, Queueing time: mean = 10.832 us, max = 10.832 us, min = 10.832 us, total = 10.832 us
[state-dump] 	NodeManager.GCTaskFailureReason - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 781.790 us, total = 781.790 us, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	MemoryMonitor.CheckIsMemoryUsageAboveThreshold - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.ScheduleAndDispatchTasks - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode - 1 total (0 active), Execution time: mean = 1.199 ms, total = 1.199 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ClusterResourceManager.ResetRemoteNodeView - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.deadline_timer.debug_state_dump - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] DebugString() time ms: 1
[state-dump] 
[state-dump] 
[2025-01-10 23:12:28,411 I 50136 50136] (raylet) accessor.cc:762: Received notification for node, IsAlive = 1 node_id=df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0
[2025-01-10 23:12:28,524 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50243, the token is 0
[2025-01-10 23:12:28,527 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50244, the token is 1
[2025-01-10 23:12:28,529 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50245, the token is 2
[2025-01-10 23:12:28,532 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50246, the token is 3
[2025-01-10 23:12:28,534 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50247, the token is 4
[2025-01-10 23:12:28,536 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50248, the token is 5
[2025-01-10 23:12:28,537 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50249, the token is 6
[2025-01-10 23:12:28,539 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50250, the token is 7
[2025-01-10 23:12:28,541 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50251, the token is 8
[2025-01-10 23:12:28,542 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50252, the token is 9
[2025-01-10 23:12:28,544 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50253, the token is 10
[2025-01-10 23:12:28,546 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50254, the token is 11
[2025-01-10 23:12:28,548 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50255, the token is 12
[2025-01-10 23:12:28,549 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50256, the token is 13
[2025-01-10 23:12:28,551 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50257, the token is 14
[2025-01-10 23:12:28,553 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50258, the token is 15
[2025-01-10 23:12:28,555 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50259, the token is 16
[2025-01-10 23:12:28,557 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50260, the token is 17
[2025-01-10 23:12:28,559 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50261, the token is 18
[2025-01-10 23:12:28,561 I 50136 50136] (raylet) worker_pool.cc:501: Started worker process with pid 50262, the token is 19
[2025-01-10 23:12:29,178 I 50136 50165] (raylet) object_store.cc:35: Object store current usage 8e-09 / 2.14748 GB.
[2025-01-10 23:12:29,365 I 50136 50136] (raylet) worker_pool.cc:692: Job 01000000 already started in worker pool.
[2025-01-10 23:12:38,403 W 50136 50159] (raylet) metric_exporter.cc:105: [1] Export metrics to agent failed: RpcError: RPC Error message: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:60285: Failed to connect to remote host: Connection refused; RPC Error details: . This won't affect Ray, but you can lose metrics from the cluster.
[2025-01-10 23:12:47,131 I 50136 50136] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-10 23:12:47,131 I 50136 50136] (raylet) node_manager.cc:1586: Driver (pid=49857) is disconnected. worker_id=01000000ffffffffffffffffffffffffffffffffffffffffffffffff job_id=01000000
[2025-01-10 23:12:47,135 I 50136 50136] (raylet) worker_pool.cc:692: Job 01000000 already started in worker pool.
[2025-01-10 23:12:47,239 I 50136 50136] (raylet) main.cc:454: received SIGTERM. Existing local drain request = None
[2025-01-10 23:12:47,239 I 50136 50136] (raylet) main.cc:255: Raylet graceful shutdown triggered, reason = EXPECTED_TERMINATION, reason message = received SIGTERM
[2025-01-10 23:12:47,239 I 50136 50136] (raylet) main.cc:258: Shutting down...
[2025-01-10 23:12:47,239 I 50136 50136] (raylet) accessor.cc:510: Unregistering node node_id=df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0
[2025-01-10 23:12:47,240 I 50136 50136] (raylet) accessor.cc:523: Finished unregistering node info, status = OK node_id=df7ee78b4bc3ac4f066532a9ba0bb0c580f959d04b14b9c1859f7fa0
[2025-01-10 23:12:47,243 I 50136 50136] (raylet) agent_manager.cc:112: Killing agent dashboard_agent/424238335, pid 50203.
[2025-01-10 23:12:47,253 I 50136 50204] (raylet) agent_manager.cc:79: Agent process with name dashboard_agent/424238335 exited, exit code 0.
[2025-01-10 23:12:47,253 I 50136 50136] (raylet) agent_manager.cc:112: Killing agent runtime_env_agent, pid 50205.
[2025-01-10 23:12:47,263 I 50136 50206] (raylet) agent_manager.cc:79: Agent process with name runtime_env_agent exited, exit code 0.
[2025-01-10 23:12:47,264 I 50136 50136] (raylet) io_service_pool.cc:47: IOServicePool is stopped.
[2025-01-10 23:12:47,311 I 50136 50136] (raylet) stats.h:120: Stats module has shutdown.