You need to create a data pipeline that copies time-series transaction data so that your data science team can query it from within BigQuery for analysis. Every hour, thousands of transactions are updated with a new status. The size of the initial dataset is 1.5 PB, and it will grow by 3 TB per day. The data is heavily structured, and your data science team will build machine learning models based on this data. You want to maximize performance and usability for your data science team. Which two strategies should you adopt? (Choose two.)
A) Denormalize the data as much as possible.
B) Preserve the structure of the data as much as possible.
C) Use BigQuery UPDATE to further reduce the size of the dataset.
D) Develop a data pipeline where status updates are appended to BigQuery instead of updated.
E) Copy a daily snapshot of transaction data to Cloud Storage and store it as an Avro file. Use BigQuery's support for external data sources to query.
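For context on what the append-only pipeline described in option D might look like in practice, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, and table names (and the three-column status_events schema) are hypothetical assumptions for illustration; this is not a confirmation of the correct answer.

```python
# Sketch of an append-only status pipeline: each status change is
# inserted as a new row, and the latest status per transaction is
# resolved at query time instead of with UPDATE statements.
from google.cloud import bigquery

client = bigquery.Client()  # assumes default project credentials

# Hypothetical table with schema: transaction_id, status, updated_at.
TABLE_ID = "my-project.transactions.status_events"

def append_status_update(transaction_id: str, status: str, updated_at: str) -> None:
    """Append a status change as a new row rather than mutating in place."""
    rows = [{
        "transaction_id": transaction_id,
        "status": status,
        "updated_at": updated_at,
    }]
    errors = client.insert_rows_json(TABLE_ID, rows)  # streaming insert
    if errors:
        raise RuntimeError(f"Insert failed: {errors}")

# Analysts recover the current state of each transaction with a
# window function over the append-only history.
LATEST_STATUS_SQL = f"""
SELECT transaction_id, status, updated_at
FROM (
  SELECT *, ROW_NUMBER() OVER (
    PARTITION BY transaction_id ORDER BY updated_at DESC
  ) AS rn
  FROM `{TABLE_ID}`
)
WHERE rn = 1
"""

def latest_statuses():
    return list(client.query(LATEST_STATUS_SQL).result())
```

The design trade-off this sketch illustrates: at petabyte scale, per-row UPDATE operations in BigQuery are costly and subject to DML limits, whereas appending rows keeps ingestion cheap and preserves the full status history, which is also useful as training data for machine learning models.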
Correct Answer: