The correct answer is C: Amazon Bedrock Guardrails provides configurable, out-of-the-box safety mechanisms to control the behavior of LLMs in generative AI applications. Guardrails can be configured with word filters (denylists), content filters, denied topics, and sensitive-information filters, all without retraining the model.
From AWS documentation:
"Amazon Bedrock Guardrails allows developers to define safety and responsible AI policies directly in the model inference layer, making it easy to prevent harmful, biased, or unsafe outputs with minimal configuration."
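As a sketch of how this looks in practice, the snippet below assembles the arguments for a `bedrock-runtime` `converse()` call with a `guardrailConfig` attached. The guardrail ID, version, and model ID are placeholders: a real guardrail must first be created in the console or via the Bedrock control-plane `create_guardrail` API.

```python
# Hedged sketch: attaching an existing Bedrock guardrail at inference time.
# All identifiers below are hypothetical placeholders.

def build_converse_request(model_id: str, prompt: str,
                           guardrail_id: str, guardrail_version: str) -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call.

    The guardrailConfig entry is what applies the configured safety
    policies to both the prompt and the model's response.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,    # placeholder guardrail ID
            "guardrailVersion": guardrail_version,  # e.g. "DRAFT" or "1"
        },
    }

request = build_converse_request(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",
    prompt="Summarize our refund policy.",
    guardrail_id="gr-EXAMPLE12345",
    guardrail_version="1",
)
print(sorted(request))  # → ['guardrailConfig', 'messages', 'modelId']
```

With real AWS credentials, these arguments would be passed as `boto3.client("bedrock-runtime").converse(**request)`; content that violates the guardrail's policies is intercepted and replaced with the guardrail's configured blocked message, with no model retraining involved.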
Explanation of other options:
A. Bedrock playgrounds are interactive environments for testing prompts and models but do not provide production-grade safety enforcement.
B. SageMaker Clarify focuses on bias detection and explainability for trained ML models; it does not apply guardrails to LLM outputs.
D. SageMaker JumpStart is a hub for discovering, fine-tuning, and deploying pre-trained models; it does not enforce safety policies on LLM responses.
Referenced AWS AI/ML Documents and Study Guides:
Amazon Bedrock Documentation – Guardrails Overview
AWS Responsible AI Whitepaper
AWS Certified ML Specialty Study Guide – Safety in Generative AI