Correct Option: A. Adversarial attacks
Adversarial attacks are specifically designed to deceive AI and machine learning models by feeding them crafted inputs that result in incorrect outputs. These attacks are highly effective against AI models, especially in areas like fraud detection, where accuracy is critical.
From CSA Security Guidance v4.0 – Domain 13: Security as a Service (SecaaS) and related AI-focused security discussions:
“AI models are vulnerable to adversarial inputs, where attackers introduce subtle perturbations to input data that are imperceptible to humans but cause the AI system to make wrong decisions. These attacks degrade the accuracy and reliability of machine learning models.”
— CSA Guidance on AI Security (in Security as a Service domain)
Adversarial ML is a well-recognized field of AI security in which the attacker intentionally corrupts or manipulates input data to lower the model’s performance or bias its output.
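To make the idea concrete, here is a minimal sketch of an evasion-style adversarial perturbation against a toy fraud detector. The model, weights, features, and epsilon value below are all illustrative assumptions, not part of the CSA guidance: a logistic-regression scorer stands in for a fraud-detection model, and the perturbation steps against the sign of the model’s gradient (the idea behind FGSM-style attacks).

```python
import numpy as np

# Toy "fraud detector": logistic regression with fixed, illustrative weights.
w = np.array([2.0, -1.0, 0.5])
b = -0.1

def predict(x):
    """Return the model's fraud probability for feature vector x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.4, -0.2, 0.3])   # hypothetical transaction features
print(predict(x))                 # scores above 0.5 -> flagged as fraud

# FGSM-style perturbation: step each feature against the gradient of the
# score. For logistic regression the gradient w.r.t. x is simply w, so
# moving opposite to sign(w) lowers the fraud score.
eps = 0.5                         # bound on the per-feature change
x_adv = x - eps * np.sign(w)
print(predict(x_adv))             # same transaction now scores below 0.5
```

The attacker never touches the model itself; a small, bounded change to each input feature is enough to flip the decision, which is exactly the accuracy degradation the guidance describes.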
Why the Other Options Are Incorrect:
B. DDoS attacks ➤ Affects availability, not accuracy. A DDoS attack can cause downtime but does not interfere with the model’s predictions.
C. Third-party services ➤ May introduce supply chain or dependency risks, but they do not directly impact the AI model’s accuracy unless they are involved in the training data pipeline.
D. Jailbreak attack ➤ More relevant to LLMs (Large Language Models) and chatbots than to structured AI fraud detection models.