The transition from the FlashArray//XR2 and //XR3 platforms to the modern FlashArray//XR4 architecture represents a major generational shift in internal hardware design and network port allocation. Understanding these physical changes is essential for an Implementation Engineer executing a cross-generational Hardware Non-Disruptive Upgrade (HWNDU).
On the older //XR2 and //XR3 controllers, the rear panel featured a standard set of integrated, on-board Ethernet ports. Specifically, eth0 and eth1 were 1GbE Base-T ports dedicated to Management, while eth2 and eth3 were embedded 10/25GbE optical ports assigned by default to Replication (and frequently repurposed for basic iSCSI when replication was not needed).
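To make the pre-upgrade baseline concrete, the minimal sketch below records each interface and the services bound to it, so the existing eth0/eth1 and eth2/eth3 roles can be captured before any hardware is swapped. It assumes the legacy `purestorage` REST 1.x Python SDK; the array address, API token, and the exact response field names (`name`, `services`, `enabled`) are illustrative assumptions rather than guaranteed output.

```python
# Sketch: pre-upgrade inventory of the interface-to-service mapping on an
# existing //XR2 or //XR3 array. Assumes the legacy `purestorage` REST 1.x
# Python SDK; the array address, API token, and the response field names
# ('name', 'services', 'enabled') are illustrative, not guaranteed.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="REPLACE-ME")

for iface in array.list_network_interfaces():
    name = iface.get("name", "?")                           # e.g. 'ct0.eth2' (assumed naming)
    services = ",".join(iface.get("services", [])) or "-"   # e.g. 'replication'
    print(f"{name:<12} services={services:<24} enabled={iface.get('enabled')}")

array.invalidate_cookie()  # close the REST session
```

The same inventory can of course be taken from the Purity GUI or CLI; the scripted form is shown only because its output is easy to diff before and after the controller swap.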
With the introduction of the FlashArray//XR4, the controller sled was entirely redesigned to maximize modularity and take advantage of PCIe Gen 4 bandwidth. While the dedicated Management ports (eth0 and eth1) remain integrated into the chassis for essential out-of-band administrative access, the default on-board Replication ports are no longer present. Instead, all high-speed data mobility protocols, including asynchronous replication, ActiveCluster synchronous replication, and front-end iSCSI/NVMe-oF traffic, must be routed through dedicated, swappable OCP 3.0 network mezzanine cards or standard PCIe host bus adapters. Therefore, during an NDU to an //XR4, the engineer must ensure that the new controllers are equipped with the appropriate expansion cards to which the replication links can be migrated, since those cables can no longer simply be plugged into ports on the controller's motherboard.
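As a follow-on illustration, the sketch below (under the same SDK and field-name assumptions as the previous example) flags any replication-bearing interface that is still mapped to the old on-board eth2/eth3 names, which would indicate the replication links have not yet been moved to the new expansion-card ports. The interface names in `LEGACY_ONBOARD` (for example `ct0.eth2`) are hypothetical examples of the per-controller naming.

```python
# Sketch: post-upgrade check that replication is no longer bound to the old
# on-board eth2/eth3 ports on the //XR4 controllers. Same SDK and field-name
# assumptions as the previous sketch; the LEGACY_ONBOARD names are examples.
import purestorage

LEGACY_ONBOARD = {"ct0.eth2", "ct0.eth3", "ct1.eth2", "ct1.eth3"}

array = purestorage.FlashArray("flasharray.example.com", api_token="REPLACE-ME")

# Collect every interface that currently advertises the replication service.
replication_ifaces = [
    iface for iface in array.list_network_interfaces()
    if "replication" in iface.get("services", [])
]

if not replication_ifaces:
    print("WARNING: no interface currently advertises the replication service")

for iface in replication_ifaces:
    name = iface.get("name", "?")
    if name in LEGACY_ONBOARD:
        print(f"WARNING: replication still mapped to legacy on-board port {name}")
    else:
        print(f"OK: replication carried on {name}")

array.invalidate_cookie()  # close the REST session
```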