
A company has trained an ML model in Amazon SageMaker. The company needs to host the model to provide inferences in a production environment.

The model must be highly available and must respond with minimum latency. The size of each request will be between 1 KB and 3 MB. The model will receive unpredictable bursts of requests during the day. The inferences must adapt proportionally to the changes in demand.

How should the company deploy the model into production to meet these requirements?

A. Create a SageMaker real-time inference endpoint. Configure auto scaling. Configure the endpoint to serve the existing model. (A configuration sketch follows the options.)

B. Deploy the model on an Amazon Elastic Container Service (Amazon ECS) cluster. Use ECS scheduled scaling based on the CPU utilization of the ECS cluster.

C. Install the SageMaker Operator on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Deploy the model in Amazon EKS. Set horizontal pod autoscaling to scale replicas based on the memory metric.

D. Use Spot Instances with a Spot Fleet behind an Application Load Balancer (ALB) for inferences. Use the ALBRequestCountPerTarget metric as the metric for auto scaling.
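
For context on option A: auto scaling for a SageMaker real-time endpoint is configured through Application Auto Scaling, which can track invocations per instance so capacity grows and shrinks in proportion to demand. Below is a minimal boto3 sketch, assuming a hypothetical endpoint named my-endpoint with a production variant named AllTraffic; the instance limits, target value, and cooldowns are illustrative assumptions, not prescribed values.

```python
import boto3

# Application Auto Scaling manages scaling for SageMaker endpoint variants.
autoscaling = boto3.client("application-autoscaling")

# Hypothetical endpoint and production variant names (assumptions).
resource_id = "endpoint/my-endpoint/variant/AllTraffic"

# Register the variant as a scalable target (1 to 4 instances here).
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Target-tracking policy on invocations per instance, so the endpoint
# adds or removes instances as the request rate bursts and subsides.
autoscaling.put_scaling_policy(
    PolicyName="InvocationsTargetTracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # invocations per instance per minute (assumption)
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)
```

Tracking SageMakerVariantInvocationsPerInstance rather than CPU or memory ties scaling directly to request volume, which matches the requirement that inferences adapt proportionally to unpredictable bursts of demand.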
