A Machine Learning Specialist is designing a scalable data storage solution for Amazon SageMaker. An existing TensorFlow-based model is implemented as a train.py script and relies on static training data currently stored as TFRecords. Which method of providing training data to Amazon SageMaker would meet the business requirements with the LEAST development overhead?
A) Use Amazon SageMaker script mode and use train.py unchanged. Point the Amazon SageMaker training invocation to the local path of the data without reformatting the training data.
B) Use Amazon SageMaker script mode and use train.py unchanged. Put the TFRecord data into an Amazon S3 bucket. Point the Amazon SageMaker training invocation to the S3 bucket without reformatting the training data.
C) Rewrite the train.py script to add a section that converts TFRecords to protobuf and ingests the protobuf data instead of TFRecords.
D) Prepare the data in the format accepted by Amazon SageMaker. Use AWS Glue or AWS Lambda to reformat and store the data in an Amazon S3 bucket.
Correct Answer: B

SageMaker script mode runs the existing train.py unchanged, and a TensorFlow script-mode container can consume TFRecord files directly, so staging the data in Amazon S3 and pointing the training invocation at the bucket requires no code changes and no data reformatting. Option A does not work because SageMaker training jobs read input from Amazon S3 (or EFS/FSx for Lustre), not from a local path. Options C and D both add unnecessary conversion work: protobuf recordIO is required only by some SageMaker built-in algorithms, not by a custom TensorFlow script.
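As an illustrative sketch of option B, the invocation below is modeled as a plain dictionary mirroring the arguments one would pass to a script-mode TensorFlow estimator in the SageMaker Python SDK. The bucket name, prefix, instance type, and framework versions are placeholder assumptions, not values from the question:

```python
# Hedged sketch of option B: train.py is used as-is; the TFRecord files are
# uploaded to S3 and the training invocation simply points at that S3 prefix.
# All concrete values below are illustrative placeholders.

def build_training_invocation(bucket: str, prefix: str) -> dict:
    """Assemble the kind of arguments typically passed to a SageMaker
    TensorFlow estimator in script mode (sagemaker.tensorflow.TensorFlow)."""
    return {
        "entry_point": "train.py",        # existing script, unchanged
        "framework_version": "2.11",      # placeholder TF version
        "py_version": "py39",             # placeholder Python version
        "instance_type": "ml.m5.xlarge",  # placeholder instance type
        "instance_count": 1,
        # The only data-related step: an S3 URI per input channel.
        # train.py keeps reading TFRecords; nothing is reformatted.
        "inputs": {"training": f"s3://{bucket}/{prefix}/"},
    }


invocation = build_training_invocation("my-training-bucket", "tfrecords")
print(invocation["inputs"]["training"])  # the S3 URI handed to the training job
```

In the real SDK, these keyword arguments go to the `TensorFlow(...)` estimator constructor and the `inputs` mapping to its `fit()` call; the dictionary form here just makes the shape of the invocation visible without an AWS session.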