A company has set up and deployed its machine learning (ML) model into production with an endpoint using Amazon SageMaker hosting services. The ML team has configured automatic scaling for its SageMaker instances to support workload changes. During testing, the team notices that additional instances are being launched before the new instances are ready. This behavior needs to change as soon as possible. How can the ML team solve this issue?
A) Decrease the cooldown period for the scale-in activity. Increase the configured maximum capacity of instances.
B) Replace the current endpoint with a multi-model endpoint using SageMaker.
C) Set up Amazon API Gateway and AWS Lambda to trigger the SageMaker inference endpoint.
D) Increase the cooldown period for the scale-out activity.
Correct Answer: D

Increasing the cooldown period for the scale-out activity makes Application Auto Scaling wait longer after launching instances before evaluating the metric again. This gives newly provisioned instances time to become ready and start serving traffic, so the service does not launch additional instances while the previous ones are still starting up. Decreasing the scale-in cooldown (A) affects removal of instances, not launches; a multi-model endpoint (B) and an API Gateway/Lambda front end (C) do not change the auto scaling behavior.
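As a minimal sketch of how answer D could be applied: SageMaker endpoint variants scale through Application Auto Scaling, where a target-tracking policy's `ScaleOutCooldown` controls the wait between scale-out activities. The endpoint name (`my-endpoint`), variant name (`AllTraffic`), and the specific cooldown and target values below are illustrative assumptions, not values from the question.

```python
# Hypothetical example: raise the scale-out cooldown on a SageMaker
# endpoint variant so new instances have time to become ready before
# another scale-out activity is triggered.

# Resource ID format required by Application Auto Scaling for SageMaker:
# "endpoint/<endpoint-name>/variant/<variant-name>" (names are placeholders).
resource_id = "endpoint/my-endpoint/variant/AllTraffic"

# Target-tracking scaling policy configuration. ScaleOutCooldown is the
# number of seconds to wait after a scale-out before another scale-out
# can start; increasing it (here to 600s) prevents launching additional
# instances while earlier ones are still provisioning.
policy_config = {
    "TargetValue": 1000.0,  # example: invocations per instance
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
    },
    "ScaleOutCooldown": 600,  # increased from a shorter default, e.g. 300
    "ScaleInCooldown": 300,
}

# Applying the policy requires AWS credentials and a live endpoint,
# so the boto3 call is shown but not executed here:
# import boto3
# client = boto3.client("application-autoscaling")
# client.put_scaling_policy(
#     PolicyName="variant-invocations-scaling",
#     ServiceNamespace="sagemaker",
#     ResourceId=resource_id,
#     ScalableDimension="sagemaker:variant:DesiredInstanceCount",
#     PolicyType="TargetTrackingScaling",
#     TargetTrackingScalingPolicyConfiguration=policy_config,
# )
```

The same change can be made in the console by editing the variant's auto scaling configuration; the key point is that the scale-out cooldown, not the scale-in cooldown, governs how quickly successive launches occur.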