The correct answer is A because responsibility and fairness in ML begin with detecting bias in the training data. A balanced representation of all demographic groups gives the model a fair basis to learn from, which is critical in regulated industries such as finance.
From AWS documentation:
"A key principle of responsible AI is building models that do not propagate or amplify bias. Fairness begins with training data. Reviewing and augmenting data for representation is essential."
Explanation of other options:
B. The number of hidden layers affects model capacity, not fairness; adding layers does not inherently make a model more responsible.
C. Keeping decisions opaque violates explainability principles in responsible AI.
D. A static dataset can become outdated and may not reflect real-world shifts, which limits fairness assessment over time (illustrated in the sketch below).
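To show why a static dataset undermines ongoing fairness assessment, the sketch below flags distribution drift between training-time data and current data with a two-sample Kolmogorov–Smirnov test from scipy.stats. The income figures are synthetic and purely illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
baseline = rng.normal(50_000, 15_000, size=5_000)  # applicant income at training time
current  = rng.normal(58_000, 15_000, size=5_000)  # applicant income today, shifted upward

# Two-sample KS test: a small p-value means the distributions have drifted,
# so fairness metrics computed on the static training set may no longer hold.
stat, p_value = ks_2samp(baseline, current)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); re-assess fairness on fresh data")
```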
Referenced AWS AI/ML Documents and Study Guides:
Amazon SageMaker Clarify Documentation – Bias Detection and Explainability
AWS Responsible AI Guidelines
AWS ML Specialty Study Guide – Fairness and Governance