The kube-scheduler assigns a node to a Pod when the Pod is unscheduled—meaning it exists in the API server but has no spec.nodeName set. The event that triggers scheduling is therefore: a new Pod is created and has no assigned node, which is option D.
Kubernetes scheduling is declarative and event-driven. The scheduler continuously watches for Pods that are in a “Pending” unscheduled state. When it sees one, it runs a scheduling cycle: filtering nodes that cannot run the Pod (insufficient resources based on requests, taints/tolerations, node selectors/affinity rules, topology spread constraints), then scoring the remaining feasible nodes to pick the best candidate. Once selected, the scheduler “binds” the Pod to that node by updating the Pod’s spec.nodeName. After that, kubelet on the chosen node takes over to pull images and start containers.
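The filter-then-score cycle above can be sketched in a few lines. This is an illustrative simplification, not the real kube-scheduler plugins: node capacities are given in millicores, and the scoring rule (prefer the node with the most unrequested CPU) stands in for the scheduler's actual weighted scoring plugins.

```python
# Illustrative sketch of the scheduler's filter -> score -> bind cycle.
# The data shapes and the pick-most-free scoring rule are assumptions for
# this example, not the real kube-scheduler's plugin framework.

def schedule(pod_request_mcpu, nodes):
    """Return the name of the best node for a pod, or None if none fit."""
    # Filtering: drop nodes whose free CPU (capacity minus the sum of
    # requests already bound there) cannot satisfy the pod's request.
    feasible = [n for n in nodes
                if n["capacity_mcpu"] - n["requested_mcpu"] >= pod_request_mcpu]
    if not feasible:
        return None  # the Pod stays Pending until some node becomes feasible

    # Scoring: prefer the node with the most unrequested CPU remaining.
    best = max(feasible, key=lambda n: n["capacity_mcpu"] - n["requested_mcpu"])

    # "Binding" here is just recording the decision; the real scheduler
    # issues a Binding API call that sets the Pod's spec.nodeName.
    best["requested_mcpu"] += pod_request_mcpu
    return best["name"]

nodes = [
    {"name": "node-a", "capacity_mcpu": 2000, "requested_mcpu": 1800},
    {"name": "node-b", "capacity_mcpu": 4000, "requested_mcpu": 1000},
]
print(schedule(500, nodes))   # node-b: most free CPU after filtering
print(schedule(5000, nodes))  # None: no node fits, Pod would stay Pending
```

Note that the decision is made against *declared* requests, which is exactly why option C (live CPU load) is not a trigger.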
Option A (Pod crashes) does not directly cause scheduling. If a container crashes, kubelet may restart it on the same node according to the restart policy. If the Pod itself is replaced (e.g., by a controller like a Deployment creating a new Pod), that new Pod will be scheduled because it’s unscheduled—but the crash event itself isn’t the scheduler’s trigger. Option B (new node added) might create more capacity and affect future scheduling decisions, but it does not by itself trigger assigning a particular Pod; scheduling still happens because there are unscheduled Pods. Option C (CPU load high) is not a scheduling trigger; scheduling is based on declared requests and constraints, not instantaneous node CPU load (that’s a common misconception).
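To make the requests-versus-live-load distinction concrete, here is a minimal Pod manifest sketch (the name and image are illustrative). The scheduler acts on the declared `resources.requests` and on the absence of `spec.nodeName`, nothing else in this file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo            # illustrative name
spec:
  # No spec.nodeName here: the Pod is unscheduled, so kube-scheduler
  # will run a scheduling cycle for it and bind it to a node.
  containers:
  - name: app
    image: nginx:1.27   # illustrative image
    resources:
      requests:
        cpu: "250m"     # filtering/scoring use this declared request,
        memory: "128Mi" # not the node's instantaneous CPU utilization
```

Once the scheduler binds the Pod, `kubectl get pod demo -o jsonpath='{.spec.nodeName}'` would show the chosen node.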
So the correct, Kubernetes-architecture answer is D: kube-scheduler assigns nodes to Pods that are newly created (or otherwise pending) and have no assigned node.