Comprehensive and Detailed Explanation (from AWS AI documentation):
According to MLOps best practices, once an ML model is deployed to production (especially an open-source pre-trained model), the organization must continuously monitor its outputs to ensure that:
Predictions remain accurate over time
Performance does not degrade due to data drift or concept drift
Outputs remain aligned with business and ethical expectations
AWS MLOps guidance emphasizes monitoring in production as a mandatory step for maintaining model reliability and governance.
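For illustration, here is a minimal sketch of how such monitoring could be set up with Amazon SageMaker Model Monitor. The endpoint name, S3 URIs, and schedule name are placeholder assumptions, and it presumes data capture is already enabled on the endpoint:

```python
# Minimal sketch: scheduling data-quality monitoring for a deployed
# SageMaker endpoint with SageMaker Model Monitor.
# Assumptions: "my-nlp-endpoint", the S3 URIs, and the schedule name
# are placeholders; data capture is already enabled on the endpoint.
import sagemaker
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

# IAM role of the current SageMaker environment (e.g., a notebook instance).
role = sagemaker.get_execution_role()

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# Step 1: compute baseline statistics and constraints from the training
# data, so production traffic has a reference to be compared against.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/baseline/train.csv",  # placeholder
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/baseline-results",       # placeholder
    wait=True,
)

# Step 2: compare captured endpoint traffic against the baseline every
# hour; reported violations surface data drift in production inputs.
monitor.create_monitoring_schedule(
    monitor_schedule_name="nlp-data-drift-monitor",        # placeholder
    endpoint_input="my-nlp-endpoint",                      # placeholder
    output_s3_uri="s3://my-bucket/monitoring-reports",     # placeholder
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
    enable_cloudwatch_metrics=True,
)
```

Each scheduled run writes a violations report to the output S3 path and can emit CloudWatch metrics, which is how drift-related degradation becomes visible and actionable.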
Why the other options are incorrect:
A (Hyperparameter tuning) is an optional, model-dependent step performed before deployment, not an ongoing production requirement.
B (Labeling data) is required only when training or fine-tuning a model, not when using a pre-trained model directly.
D (Feature engineering) is less relevant for modern pre-trained NLP models, which learn representations directly from raw text.
AWS AI document references:
MLOps Best Practices on AWS
Amazon SageMaker Model Monitor
Operationalizing Machine Learning on AWS