You have a data pipeline with a Cloud Dataflow job that aggregates and writes time series metrics to Cloud Bigtable. This data feeds a dashboard used by thousands of users across the organization. You need to support additional concurrent users and reduce the amount of time required to write the data. Which two actions should you take? (Choose two.)
A) Configure your Cloud Dataflow pipeline to use local execution
B) Increase the maximum number of Cloud Dataflow workers by setting maxNumWorkers in PipelineOptions
C) Increase the number of nodes in the Cloud Bigtable cluster
D) Modify your Cloud Dataflow pipeline to use the Flatten transform before writing to Cloud Bigtable
E) Modify your Cloud Dataflow pipeline to use the CoGroupByKey transform before writing to Cloud Bigtable
Correct Answer: B, C (Verified)
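Both correct choices add capacity where the bottleneck is: raising maxNumWorkers lets Dataflow autoscale to more workers so the write phase finishes faster, and adding Cloud Bigtable nodes increases write throughput and serving capacity for the dashboard's concurrent readers. Below is a minimal sketch of choice B using the Apache Beam Java SDK; the MetricsPipeline class name, the cap of 50 workers, and the omitted read/aggregate/write transforms are placeholders, not part of the original question.

```java
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class MetricsPipeline {
  public static void main(String[] args) {
    // Parse standard pipeline flags and view them as Dataflow-specific options.
    DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(DataflowPipelineOptions.class);

    // Choice B: raise the autoscaling ceiling so Dataflow can add workers
    // and reduce the time spent writing to Cloud Bigtable.
    // (50 is an illustrative value, not from the question.)
    options.setMaxNumWorkers(50);

    Pipeline pipeline = Pipeline.create(options);
    // ... existing read, aggregation, and BigtableIO write transforms go here ...
    pipeline.run();
  }
}
```

Choice C is applied outside the pipeline, for example by resizing the cluster with `gcloud bigtable clusters update` and a higher `--num-nodes` value, which raises both write throughput for the Dataflow job and read capacity for the dashboard users.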