The ecommerce company’s chatbot, powered by AI, automates customer order submissions and is accessible 24/7 via the website. Prompt injection is an AI system input vulnerability where malicious users craft inputs to manipulate the chatbot’s behavior, such as bypassing safeguards or accessing unauthorized information. This vulnerability must be resolved before the chatbot is made available to ensure security.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Prompt injection is a vulnerability in AI systems, particularly chatbots, where malicious inputs can manipulate the model’s behavior, potentially leading to unauthorized actions or harmful outputs. Implementing guardrails and input validation can mitigate this risk."
(Source: AWS Bedrock User Guide, Security Best Practices)
Detailed Explanation:
Option A: Data leakageData leakage refers to the unintended exposure of sensitive data during model training or inference, not an input vulnerability affecting a chatbot’s operation.
Option B: Prompt injectionThis is the correct answer. Prompt injection is a critical input vulnerability for chatbots, where malicious prompts can exploit the AI to produce harmful or unauthorized responses, a risk that must be addressed before launch.
Option C: Large language model (LLM) hallucinationsLLM hallucinations refer to the model generating incorrect or ungrounded responses, which is an output issue, not an input vulnerability.
Option D: Concept driftConcept drift occurs when the data distribution changes over time, affecting model performance. It is not an input vulnerability but a long-term performance issue.
[References:, AWS Bedrock User Guide: Security Best Practices (https://docs.aws.amazon.com/bedrock/latest/userguide/security.html), AWS AI Practitioner Learning Path: Module on AI Security and Vulnerabilities, AWS Documentation: Securing AI Systems (https://aws.amazon.com/security/), , , , , ]