An ML engineer is configuring auto scaling for an inference component of a model that runs behind an Amazon SageMaker AI endpoint. The ML engineer configures SageMaker AI auto scaling with a target tracking scaling policy set to 100 invocations per model per minute. The SageMaker AI endpoint scales appropriately during normal business hours. However, the ML engineer notices that at the start of each business day, there are zero instances available to handle requests, which causes delays in processing.

The ML engineer must ensure that the SageMaker AI endpoint can handle incoming requests at the start of each business day.

Which solution will meet this requirement?

A. Reduce the SageMaker AI auto scaling cooldown period to the minimum supported value. Add an auto scaling lifecycle hook to scale the SageMaker AI instances.

B. Change the target metric to CPU utilization.

C. Modify the scaling policy target value to one.

D. Apply a step scaling policy that scales based on an Amazon CloudWatch alarm. Apply a second CloudWatch alarm and scaling policy to scale the minimum number of instances from zero to one at the start of each business day.
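For context, the scenario describes a target tracking scaling policy attached to an inference component. Below is a minimal sketch of that baseline configuration using the boto3 Application Auto Scaling client; the inference component name, capacity limits, and policy name are hypothetical placeholders, and the policy shown is only the setup the question starts from, not the answer.

```python
# Minimal sketch (assumptions: boto3 credentials/region are configured;
# "my-inference-component", the capacity limits, and the policy name are placeholders).
import boto3

aas = boto3.client("application-autoscaling")

# Register the inference component's copy count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId="inference-component/my-inference-component",
    ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
    MinCapacity=0,   # scaling in to zero copies is what leaves no capacity at the start of the day
    MaxCapacity=4,
)

# Target tracking policy: keep average invocations per copy per minute near 100,
# matching the "100 invocations per model per minute" target in the scenario.
aas.put_scaling_policy(
    PolicyName="invocations-per-copy-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId="inference-component/my-inference-component",
    ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerInferenceComponentInvocationsPerCopy"
        },
    },
)
```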
