The correct answer is D (the kubelet) because the kubelet is the node agent responsible for actually running Pods on each node. Kubernetes can orchestrate workloads across many nodes because every worker node (and every control-plane node that runs workloads) runs a kubelet that continuously watches the API server for PodSpecs assigned to that node and then ensures the containers described by those PodSpecs are started and kept running. In other words, the kube-scheduler decides where a Pod should run (sets spec.nodeName), but the kubelet is what makes the Pod run on that chosen node.
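To make the "watches the API server for PodSpecs assigned to that node" part concrete, here is a minimal client-go sketch that lists the Pods bound to one node using the same spec.nodeName field selector the kubelet's watch is scoped by. The node name worker-1 and the use of the default kubeconfig path are assumptions for illustration, not part of the question.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative node name; substitute a node from your own cluster.
	nodeName := "worker-1"

	// Load credentials from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List Pods in all namespaces whose spec.nodeName matches this node,
	// i.e. the Pods the scheduler has bound here and the kubelet must run.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s -> %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```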
The kubelet integrates with the container runtime (via CRI) to pull images, create sandboxes, start containers, and manage their lifecycle. It also reports node and Pod status back to the control plane, executes liveness/readiness/startup probes, mounts volumes, and performs local housekeeping that keeps the node aligned with the declared desired state. This node-level reconciliation loop is a key Kubernetes pattern: the control plane declares intent, and the kubelet enforces it on the node.
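The reconciliation pattern itself can be shown with a toy loop: observe desired state, compare it with actual state, and act on the difference. This is only a sketch in the spirit of the kubelet's sync loop; the types and names are invented for illustration and do not reflect the real kubelet or CRI APIs.

```go
package main

import (
	"fmt"
	"time"
)

// podState maps a pod name to whether it should be (or is) running.
type podState map[string]bool

// reconcile drives actual state toward desired state:
// start anything that should run but isn't, stop anything that shouldn't.
func reconcile(desired, actual podState) {
	for name := range desired {
		if !actual[name] {
			fmt.Println("starting containers for", name) // in reality, via the CRI
			actual[name] = true
		}
	}
	for name := range actual {
		if !desired[name] {
			fmt.Println("stopping containers for", name)
			delete(actual, name)
		}
	}
}

func main() {
	desired := podState{"web-0": true, "cache-0": true} // declared intent
	actual := podState{"old-job": true}                 // what the node is running

	for i := 0; i < 3; i++ { // the real loop runs continuously, not three times
		reconcile(desired, actual)
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("node state now matches intent:", actual)
}
```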
Option C (API server) is critical but does not run Pods; it is the control plane’s front door for storing and serving cluster state. Option A (“node server”) is not a Kubernetes component. Option B (etcd static pods) is a misunderstanding: etcd is the datastore for Kubernetes state and may run as static Pods in some installations, but it is not the mechanism that runs user workloads across nodes.
So, Kubernetes runs Pods “across the fleet” because each node has a kubelet that can realize scheduled PodSpecs locally and keep them healthy over time.
=========