[2025-01-15 18:18:04,021 I 536593 536593] (raylet) main.cc:180: Setting cluster ID to: a461c5a60dd8717dbff63b0f8b8483a2c52b91cc9483133e8ee1f369
[2025-01-15 18:18:04,030 I 536593 536593] (raylet) main.cc:289: Raylet is not set to kill unknown children.
[2025-01-15 18:18:04,030 I 536593 536593] (raylet) io_service_pool.cc:35: IOServicePool is running with 1 io_service.
[2025-01-15 18:18:04,031 I 536593 536593] (raylet) main.cc:419: Setting node ID node_id=49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360
[2025-01-15 18:18:04,031 I 536593 536593] (raylet) store_runner.cc:32: Allowing the Plasma store to use up to 2.14748GB of memory.
[2025-01-15 18:18:04,031 I 536593 536593] (raylet) store_runner.cc:48: Starting object store with directory /dev/shm, fallback /tmp/ray, and huge page support disabled
[2025-01-15 18:18:04,031 I 536593 536622] (raylet) dlmalloc.cc:154: create_and_mmap_buffer(2147483656, /dev/shm/plasmaXXXXXX)
[2025-01-15 18:18:04,033 I 536593 536622] (raylet) store.cc:564: Plasma store debug dump:
Current usage: 0 / 2.14748 GB
- num bytes created total: 0
0 pending objects of total size 0MB
- objects spillable: 0
- bytes spillable: 0
- objects unsealed: 0
- bytes unsealed: 0
- objects in use: 0
- bytes in use: 0
- objects evictable: 0
- bytes evictable: 0
- objects created by worker: 0
- bytes created by worker: 0
- objects restored: 0
- bytes restored: 0
- objects received: 0
- bytes received: 0
- objects errored: 0
- bytes errored: 0
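For reference, the 2.14748 GB cap above is 2,147,483,648 bytes (2 GiB), the object store size this node was started with. A minimal sketch of how that cap is typically set, assuming the Python API (the parameter is real; the value simply mirrors this log):

import ray

# Hedged sketch: object_store_memory is the ray.init() knob behind the
# "Allowing the Plasma store to use up to 2.14748GB of memory" line above.
ray.init(object_store_memory=2 * 1024**3)  # 2 GiB = 2,147,483,648 bytes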
[2025-01-15 18:18:05,037 I 536593 536593] (raylet) grpc_server.cc:134: ObjectManager server started, listening on port 33067.
[2025-01-15 18:18:05,040 I 536593 536593] (raylet) worker_killing_policy.cc:101: Running GroupByOwner policy.
[2025-01-15 18:18:05,041 I 536593 536593] (raylet) memory_monitor.cc:47: MemoryMonitor initialized with usage threshold at 94999994368 bytes (0.95 system memory), total system memory bytes: 99999997952
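The MemoryMonitor threshold above is 94,999,994,368 bytes, i.e. roughly 0.95 × 99,999,997,952 bytes of total system memory. That fraction is controlled by Ray's documented RAY_memory_usage_threshold environment variable; a minimal sketch, assuming the variable is set before Ray starts:

import os

# Hedged sketch: RAY_memory_usage_threshold must be set before ray.init() so
# the raylet's MemoryMonitor picks it up; 0.95 reproduces the "usage threshold
# at ... (0.95 system memory)" line above.
os.environ["RAY_memory_usage_threshold"] = "0.95"

import ray
ray.init()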
[2025-01-15 18:18:05,041 I 536593 536593] (raylet) node_manager.cc:287: Initializing NodeManager node_id=49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360
[2025-01-15 18:18:05,042 I 536593 536593] (raylet) grpc_server.cc:134: NodeManager server started, listening on port 46477.
[2025-01-15 18:18:05,049 I 536593 536686] (raylet) agent_manager.cc:77: Monitor agent process with name dashboard_agent/424238335
[2025-01-15 18:18:05,049 I 536593 536688] (raylet) agent_manager.cc:77: Monitor agent process with name runtime_env_agent
[2025-01-15 18:18:05,050 I 536593 536593] (raylet) event.cc:493: Ray Event initialized for RAYLET
[2025-01-15 18:18:05,050 I 536593 536593] (raylet) event.cc:324: Set ray event level to warning
[2025-01-15 18:18:05,052 I 536593 536593] (raylet) raylet.cc:134: Raylet of id, 49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360 started. Raylet consists of node_manager and object_manager. node_manager address: 192.168.0.2:46477 object_manager address: 192.168.0.2:33067 hostname: 0cd925b1f73b
[2025-01-15 18:18:05,055 I 536593 536593] (raylet) node_manager.cc:525: [state-dump] NodeManager:
[state-dump] Node ID: 49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360
[state-dump] Node name: 192.168.0.2
[state-dump] InitialConfigResources: {node:192.168.0.2: 10000, memory: 856446742530000, node:__internal_head__: 10000, accelerator_type:A40: 10000, GPU: 20000, object_store_memory: 21474836480000, CPU: 200000}
[state-dump] ClusterTaskManager:
[state-dump] ========== Node: 49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360 =================
[state-dump] Infeasible queue length: 0
[state-dump] Schedule queue length: 0
[state-dump] Dispatch queue length: 0
[state-dump] num_waiting_for_resource: 0
[state-dump] num_waiting_for_plasma_memory: 0
[state-dump] num_waiting_for_remote_node_resources: 0
[state-dump] num_worker_not_started_by_job_config_not_exist: 0
[state-dump] num_worker_not_started_by_registration_timeout: 0
[state-dump] num_tasks_waiting_for_workers: 0
[state-dump] num_cancelled_tasks: 0
[state-dump] cluster_resource_scheduler state:
[state-dump] Local id: -8491188467377325818 Local resources: {"total":{node:__internal_head__: [10000], accelerator_type:A40: [10000], CPU: [200000], memory: [856446742530000], GPU: [10000, 10000], object_store_memory: [21474836480000], node:192.168.0.2: [10000]}}, "available": {node:__internal_head__: [10000], accelerator_type:A40: [10000], CPU: [200000], memory: [856446742530000], GPU: [10000, 10000], object_store_memory: [21474836480000], node:192.168.0.2: [10000]}}, "labels":{"ray.io/node_id":"49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360",} is_draining: 0 is_idle: 1 Cluster resources: node id: -8491188467377325818{"total":{GPU: 20000, memory: 856446742530000, node:__internal_head__: 10000, accelerator_type:A40: 10000, node:192.168.0.2: 10000, object_store_memory: 21474836480000, CPU: 200000}}, "available": {GPU: 20000, memory: 856446742530000, node:__internal_head__: 10000, accelerator_type:A40: 10000, node:192.168.0.2: 10000, object_store_memory: 21474836480000, CPU: 200000}}, "labels":{"ray.io/node_id":"49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360",}, "is_draining": 0, "draining_deadline_timestamp_ms": -1} { "placement group locations": [], "node to bundles": []}
[state-dump] Waiting tasks size: 0
[state-dump] Number of executing tasks: 0
[state-dump] Number of pinned task arguments: 0
[state-dump] Number of total spilled tasks: 0
[state-dump] Number of spilled waiting tasks: 0
[state-dump] Number of spilled unschedulable tasks: 0
[state-dump] Resource usage {
[state-dump] }
[state-dump] Backlog Size per scheduling descriptor :{workerId: num backlogs}:
[state-dump]
[state-dump] Running tasks by scheduling class:
[state-dump] ==================================================
[state-dump]
[state-dump] ClusterResources:
[state-dump] LocalObjectManager:
[state-dump] - num pinned objects: 0
[state-dump] - pinned objects size: 0
[state-dump] - num objects pending restore: 0
[state-dump] - num objects pending spill: 0
[state-dump] - num bytes pending spill: 0
[state-dump] - num bytes currently spilled: 0
[state-dump] - cumulative spill requests: 0
[state-dump] - cumulative restore requests: 0
[state-dump] - spilled objects pending delete: 0
[state-dump]
[state-dump] ObjectManager:
[state-dump] - num local objects: 0
[state-dump] - num unfulfilled push requests: 0
[state-dump] - num object pull requests: 0
[state-dump] - num chunks received total: 0
[state-dump] - num chunks received failed (all): 0
[state-dump] - num chunks received failed / cancelled: 0
[state-dump] - num chunks received failed / plasma error: 0
[state-dump] Event stats:
[state-dump] Global stats: 0 total (0 active)
[state-dump] Queueing time: mean = -nan s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] Execution time: mean = -nan s, total = 0.000 s
[state-dump] Event stats:
[state-dump] PushManager:
[state-dump] - num pushes in flight: 0
[state-dump] - num chunks in flight: 0
[state-dump] - num chunks remaining: 0
[state-dump] - max chunks allowed: 409
[state-dump] OwnershipBasedObjectDirectory:
[state-dump] - num listeners: 0
[state-dump] - cumulative location updates: 0
[state-dump] - num location updates per second: 70137860726492000.000
[state-dump] - num location lookups per second: 70137860726480000.000
[state-dump] - num locations added per second: 0.000
[state-dump] - num locations removed per second: 0.000
[state-dump] BufferPool:
[state-dump] - create buffer state map size: 0
[state-dump] PullManager:
[state-dump] - num bytes available for pulled objects: 2147483648
[state-dump] - num bytes being pulled (all): 0
[state-dump] - num bytes being pulled / pinned: 0
[state-dump] - get request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - wait request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - task request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - first get request bundle: N/A
[state-dump] - first wait request bundle: N/A
[state-dump] - first task request bundle: N/A
[state-dump] - num objects queued: 0
[state-dump] - num objects actively pulled (all): 0
[state-dump] - num objects actively pulled / pinned: 0
[state-dump] - num bundles being pulled: 0
[state-dump] - num pull retries: 0
[state-dump] - max timeout seconds: 0
[state-dump] - max timeout request is already processed. No entry.
[state-dump]
[state-dump] WorkerPool:
[state-dump] - registered jobs: 0
[state-dump] - process_failed_job_config_missing: 0
[state-dump] - process_failed_rate_limited: 0
[state-dump] - process_failed_pending_registration: 0
[state-dump] - process_failed_runtime_env_setup_failed: 0
[state-dump] - num PYTHON workers: 0
[state-dump] - num PYTHON drivers: 0
[state-dump] - num PYTHON pending start requests: 0
[state-dump] - num PYTHON pending registration requests: 0
[state-dump] - num object spill callbacks queued: 0
[state-dump] - num object restore queued: 0
[state-dump] - num util functions queued: 0
[state-dump] - num idle workers: 0
[state-dump] TaskDependencyManager:
[state-dump] - task deps map size: 0
[state-dump] - get req map size: 0
[state-dump] - wait req map size: 0
[state-dump] - local objects map size: 0
[state-dump] WaitManager:
[state-dump] - num active wait requests: 0
[state-dump] Subscriber:
[state-dump] Channel WORKER_OBJECT_LOCATIONS_CHANNEL
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] Channel WORKER_REF_REMOVED_CHANNEL
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] Channel WORKER_OBJECT_EVICTION
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] num async plasma notifications: 0
[state-dump] Remote node managers:
[state-dump] Event stats:
[state-dump] Global stats: 28 total (13 active)
[state-dump] Queueing time: mean = 1.236 ms, max = 9.296 ms, min = 25.177 us, total = 34.594 ms
[state-dump] Execution time: mean = 36.725 ms, total = 1.028 s
[state-dump] Event stats:
[state-dump] PeriodicalRunner.RunFnPeriodically - 11 total (2 active, 1 running), Execution time: mean = 166.716 us, total = 1.834 ms, Queueing time: mean = 3.114 ms, max = 9.296 ms, min = 25.177 us, total = 34.259 ms
[state-dump] NodeManager.deadline_timer.flush_free_objects - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ClusterResourceManager.ResetRemoteNodeView - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig.OnReplyReceived - 1 total (0 active), Execution time: mean = 1.020 s, total = 1.020 s, Queueing time: mean = 86.835 us, max = 86.835 us, min = 86.835 us, total = 86.835 us
[state-dump] RayletWorkerPool.deadline_timer.kill_idle_workers - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 1.540 ms, total = 1.540 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.deadline_timer.debug_state_dump - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.ScheduleAndDispatchTasks - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.GCTaskFailureReason - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.deadline_timer.spill_objects_when_over_threshold - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] NodeManager.deadline_timer.record_metrics - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig - 1 total (0 active), Execution time: mean = 1.792 ms, total = 1.792 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode - 1 total (0 active), Execution time: mean = 2.553 ms, total = 2.553 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode.OnReplyReceived - 1 total (0 active), Execution time: mean = 314.492 us, total = 314.492 us, Queueing time: mean = 102.054 us, max = 102.054 us, min = 102.054 us, total = 102.054 us
[state-dump] ObjectManager.UpdateAvailableMemory - 1 total (0 active), Execution time: mean = 2.516 us, total = 2.516 us, Queueing time: mean = 146.311 us, max = 146.311 us, min = 146.311 us, total = 146.311 us
[state-dump] MemoryMonitor.CheckIsMemoryUsageAboveThreshold - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] DebugString() time ms: 0
[state-dump]
[state-dump]
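A note on the numbers in the state dump: Ray's scheduler tracks resources as fixed-point integers, assumed here to use 10,000 units per whole resource. Under that assumption, InitialConfigResources decodes to 20 CPUs, 2 GPUs (the two GPU: [10000, 10000] slots), about 85.6 GB of memory, and a 2,147,483,648-byte object store, matching the plasma cap logged earlier. A small decoding sketch:

SCALE = 10_000  # assumed fixed-point scaling of Ray's internal resource units
raw = {
    "CPU": 200_000,                             # -> 20 CPUs
    "GPU": 20_000,                              # -> 2 GPUs
    "memory": 856_446_742_530_000,              # -> 85,644,674,253 bytes (~85.6 GB)
    "object_store_memory": 21_474_836_480_000,  # -> 2,147,483,648 bytes (2 GiB)
}
for name, units in raw.items():
    print(f"{name}: {units // SCALE}")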
[2025-01-15 18:18:05,057 I 536593 536593] (raylet) accessor.cc:762: Received notification for node, IsAlive = 1 node_id=49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360
[2025-01-15 18:18:05,125 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536725, the token is 0
[2025-01-15 18:18:05,128 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536726, the token is 1
[2025-01-15 18:18:05,130 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536727, the token is 2
[2025-01-15 18:18:05,132 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536728, the token is 3
[2025-01-15 18:18:05,135 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536729, the token is 4
[2025-01-15 18:18:05,137 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536730, the token is 5
[2025-01-15 18:18:05,139 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536731, the token is 6
[2025-01-15 18:18:05,141 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536732, the token is 7
[2025-01-15 18:18:05,143 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536733, the token is 8
[2025-01-15 18:18:05,145 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536734, the token is 9
[2025-01-15 18:18:05,147 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536735, the token is 10
[2025-01-15 18:18:05,148 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536736, the token is 11
[2025-01-15 18:18:05,151 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536737, the token is 12
[2025-01-15 18:18:05,153 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536738, the token is 13
[2025-01-15 18:18:05,156 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536739, the token is 14
[2025-01-15 18:18:05,158 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536740, the token is 15
[2025-01-15 18:18:05,161 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536741, the token is 16
[2025-01-15 18:18:05,163 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536742, the token is 17
[2025-01-15 18:18:05,166 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536743, the token is 18
[2025-01-15 18:18:05,168 I 536593 536593] (raylet) worker_pool.cc:501: Started worker process with pid 536744, the token is 19
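The twenty worker processes above (tokens 0 through 19) are the raylet prestarting roughly one Python worker per CPU, consistent with the 20-CPU node in the state dump. A hedged sketch of a session with this shape (all values mirror this log, nothing more):

import ray

# Hedged sketch: a node with 20 CPUs, 2 GPUs, and a 2 GiB object store, as in
# this log; Ray prestarts about one worker per CPU once a job begins.
ray.init(num_cpus=20, num_gpus=2, object_store_memory=2 * 1024**3)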
[2025-01-15 18:18:05,903 I 536593 536622] (raylet) object_store.cc:35: Object store current usage 8e-09 / 2.14748 GB.
[2025-01-15 18:18:06,075 I 536593 536593] (raylet) worker_pool.cc:692: Job 01000000 already started in worker pool.
[2025-01-15 18:18:06,174 I 536593 536593] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-15 18:18:06,174 I 536593 536593] (raylet) node_manager.cc:1586: Driver (pid=536329) is disconnected. worker_id=01000000ffffffffffffffffffffffffffffffffffffffffffffffff job_id=01000000
[2025-01-15 18:18:06,180 I 536593 536593] (raylet) worker_pool.cc:692: Job 01000000 already started in worker pool.
[2025-01-15 18:18:06,325 I 536593 536593] (raylet) main.cc:454: received SIGTERM. Existing local drain request = None
[2025-01-15 18:18:06,325 I 536593 536593] (raylet) main.cc:255: Raylet graceful shutdown triggered, reason = EXPECTED_TERMINATION, reason message = received SIGTERM
[2025-01-15 18:18:06,325 I 536593 536593] (raylet) main.cc:258: Shutting down...
[2025-01-15 18:18:06,325 I 536593 536593] (raylet) accessor.cc:510: Unregistering node node_id=49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360
[2025-01-15 18:18:06,327 I 536593 536593] (raylet) accessor.cc:523: Finished unregistering node info, status = OK node_id=49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360
[2025-01-15 18:18:06,332 I 536593 536593] (raylet) agent_manager.cc:112: Killing agent dashboard_agent/424238335, pid 536685.
[2025-01-15 18:18:06,346 I 536593 536686] (raylet) agent_manager.cc:79: Agent process with name dashboard_agent/424238335 exited, exit code 0.
[2025-01-15 18:18:06,346 I 536593 536593] (raylet) agent_manager.cc:112: Killing agent runtime_env_agent, pid 536687.
[2025-01-15 18:18:06,356 I 536593 536688] (raylet) agent_manager.cc:79: Agent process with name runtime_env_agent exited, exit code 0.
[2025-01-15 18:18:06,357 I 536593 536593] (raylet) io_service_pool.cc:47: IOServicePool is stopped.
[2025-01-15 18:18:06,534 I 536593 536593] (raylet) stats.h:120: Stats module has shutdown.
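The tail of the log is a normal teardown in order: the driver disconnects, the raylet receives SIGTERM and unregisters its node from the GCS, the dashboard and runtime-env agents exit, and the stats module shuts down last. A minimal sketch of what produces this sequence, assuming the session was started from Python:

import ray

ray.init()
# ... driver work ...
ray.shutdown()  # the driver disconnects; Ray then terminates the raylet,
                # which unregisters the node and kills both agents, as logged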