The financial institution needs its AI-driven loan approval decisions to be explainable for security and audit purposes. Explainability is the ability to understand and interpret how a model arrives at its decisions. Model complexity directly affects explainability: simpler models (e.g., logistic regression) are inherently more interpretable, while complex models (e.g., deep neural networks) are harder to explain and often behave like "black boxes."
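As a concrete illustration of the interpretable end of the spectrum, the short Python sketch below uses scikit-learn with entirely synthetic, hypothetical loan features (the feature names and data are placeholders, not part of the question) to show how a simple logistic regression exposes its decision logic directly through its coefficients.

# Illustrative sketch only: a logistic regression trained on synthetic "loan" data
# makes its decision logic auditable through its coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: [income, debt_to_income_ratio, credit_score]
X = rng.normal(size=(200, 3))
# Synthetic approval rule: higher income and credit score help, higher debt ratio hurts
y = (0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.3, size=200)) > 0

model = LogisticRegression().fit(X, y)

# Each coefficient is a direct, reviewable statement of how a feature pushes
# the approval decision up or down (in log-odds terms).
for name, coef in zip(["income", "debt_to_income_ratio", "credit_score"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

A deep neural network trained on the same data would offer no comparable, directly readable summary of its reasoning, which is why such models typically need additional explainability tooling.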
Exact Extract from AWS AI Documents:
From the Amazon SageMaker Developer Guide:
"Model complexity affects the explainability of AI solutions. Simpler models, such as linear regression, are inherently more interpretable, while complex models, such as deep neural networks, may require additional tools like SageMaker Clarify to provide insights into their decision-making processes."
(Source: Amazon SageMaker Developer Guide, Explainability with SageMaker Clarify)
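To illustrate the tooling mentioned in the extract: for a complex model whose internal reasoning is not directly readable, SageMaker Clarify can compute SHAP feature attributions via a processing job. The sketch below is a hedged outline, not a definitive implementation; the S3 paths, model name, and column headers are placeholders, and it assumes a model has already been deployed in SageMaker and that an execution role is available.

# Hedged sketch: running a SageMaker Clarify explainability job with SHAP.
# Bucket paths, headers, and the model name are hypothetical placeholders.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = sagemaker.get_execution_role()

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/loan-data/train.csv",  # placeholder
    s3_output_path="s3://example-bucket/clarify-output/",          # placeholder
    label="approved",
    headers=["approved", "income", "debt_to_income_ratio", "credit_score"],
    dataset_type="text/csv",
)

model_config = clarify.ModelConfig(
    model_name="loan-approval-model",  # placeholder for an existing SageMaker model
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

# SHAP attributions estimate how much each feature contributed to each prediction.
shap_config = clarify.SHAPConfig(
    baseline=[[50000, 0.3, 650]],  # hypothetical baseline row
    num_samples=100,
    agg_method="mean_abs",
)

clarify_processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)

The resulting report (written to the output S3 path) gives auditors per-feature attributions for individual loan decisions, which is how explainability can be retrofitted onto a less interpretable model.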
Detailed Explanation:
Option A: Model complexity. This is the correct answer. The complexity of the model directly influences how easily its decisions can be explained, a critical factor for audit and security purposes in loan approvals.
Option B: Training time. Training time refers to how long it takes to train the model, which does not directly impact the explainability of its decisions.
Option C: Number of hyperparameters. While hyperparameters affect model performance, they do not directly relate to explainability. A model with many hyperparameters might still be explainable if it is a simple model.
Option D: Deployment time. Deployment time refers to the time taken to deploy the model to production, which is unrelated to the explainability of its decisions.
References:
Amazon SageMaker Developer Guide: Explainability with SageMaker Clarify (https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-explainability.html)
AWS AI Practitioner Learning Path: Module on Responsible AI and Explainability
AWS Documentation: Explainable AI (https://aws.amazon.com/machine-learning/responsible-ai/)