When AI adoption spans multiple business units, the primary risk is fragmented delivery: inconsistent standards, duplicated spend, uneven risk controls, and misalignment with enterprise strategy. Establishing an AI council (D) is the best approach because it creates a cross-functional governance mechanism that aligns AI initiatives to business priorities while enforcing Responsible AI practices consistently.
An AI council typically includes senior stakeholders from business leadership, IT, security, legal, compliance, privacy, data, and HR. Its role is to define AI principles and guardrails, approve high-impact use cases, set policy for data usage and access, establish evaluation and monitoring requirements, and coordinate change management and training. The council also enables portfolio management (deciding which projects to prioritize, reuse, or stop) so that AI investments map to measurable business outcomes.
The other options are weaker. Option A encourages siloed deployments and inconsistent risk management. Option B centralizes accountability too narrowly in IT; Responsible AI requires ownership broader than a single function. Option C can add delivery capacity, but it does not replace internal governance: vendors still need direction, controls, and oversight from the organization.