Deck 10: AWS Certified Solutions Architect - Professional (SAP-C01)
1
A company is storing data on Amazon Simple Storage Service (S3). The company's security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? (Choose 3)
A) Use Amazon S3 server-side encryption with AWS Key Management Service managed keys.
B) Use Amazon S3 server-side encryption with customer-provided keys.
C) Use Amazon S3 server-side encryption with EC2 key pair.
D) Use Amazon S3 bucket policies to restrict access to the data at rest.
E) Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key.
F) Use SSL to encrypt the data while in transit to Amazon S3.
Answer: A, B, E
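For context, a minimal boto3 sketch of how options A and B look in practice; the bucket and key names are placeholders, and client-side encryption (option E) would simply encrypt the body locally before the upload:

```python
import os
import boto3

s3 = boto3.client("s3")

# Option A: SSE-KMS - S3 encrypts the object with an AWS KMS-managed key.
s3.put_object(
    Bucket="example-bucket", Key="kms-object",
    Body=b"sensitive data",
    ServerSideEncryption="aws:kms",
)

# Option B: SSE-C - the caller supplies (and manages) the key on every request.
customer_key = os.urandom(32)  # 256-bit key kept outside AWS
s3.put_object(
    Bucket="example-bucket", Key="ssec-object",
    Body=b"sensitive data",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```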
2
Your company has an on-premises multi-tier PHP web application which recently experienced downtime due to a large burst in web traffic caused by a company announcement. Over the coming days, you are expecting similar announcements to drive similar unpredictable bursts, and are looking for ways to quickly improve your infrastructure's ability to handle unexpected increases in traffic. The application currently consists of two tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which hosts a Linux server running a MySQL database. Which scenario below will provide full site functionality, while helping to improve the ability of your application in the short timeframe required?
A) Failover environment: Create an S3 bucket and configure it for website hosting. Migrate your DNS to Route53 using zone file import, and leverage Route53 DNS failover to failover to the S3 hosted website.
B) Hybrid environment: Create an AMI, which can be used to launch web servers in EC2. Create an Auto Scaling group, which uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.
C) Offload traffic from on-premises environment: Set up a CloudFront distribution, and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache.
D) Migrate to AWS: Use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group, which uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and setup replication between the RDS instance and on-premises MySQL server to migrate the database.
Answer: C
3
You require the ability to analyze a large amount of data, which is stored on Amazon S3, using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost-efficient way to reduce the runtime of the job?
A) Create more, smaller files on Amazon S3.
B) Add additional cc2 8xlarge instances by introducing a task group.
C) Use smaller instances that have higher aggregate I/O performance.
D) Create fewer, larger files on Amazon S3.
Answer: C
4
You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)
A) An AWS Direct Connect link between the VPC and the network housing the internal services.
B) An Internet Gateway to allow a VPN connection.
C) An Elastic IP address on the VPC instance
D) An IP address space that does not conflict with the one on-premises
E) Entries in Amazon Route 53 that allow the instance to resolve its dependencies' IP addresses
F) A VM Import of the current virtual machine
5
What does elasticity mean to AWS?
A) The ability to scale computing resources up easily, with minimal friction and down with latency.
B) The ability to scale computing resources up and down easily, with minimal friction.
C) The ability to provision cloud computing resources in expectation of future demand.
D) The ability to recover from business continuity events with minimal friction.
6
You are designing Internet connectivity for your VPC. The Web servers must be available on the Internet. The application must have a highly available architecture. Which alternatives should you consider? (Choose 2)
A) Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT instance public IP address.
B) Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your Web servers. Configure a Route53 CNAME record to your CloudFront distribution.
C) Place all your web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name.
D) Assign EIPs to all web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover.
E) Configure ELB with an EIP. Place all your Web servers behind ELB. Configure a Route53 A record that points to the EIP.
7
Your company is storing millions of sensitive transactions across thousands of 100-GB files that must be encrypted in transit and at rest. Analysts concurrently depend on subsets of files, which can consume up to 5 TB of space, to generate simulations that can be used to steer business decisions. You are required to design an AWS solution that can cost-effectively accommodate the long-term storage and in-flight subsets of data. Which approach can satisfy these objectives?
A) Use Amazon Simple Storage Service (S3) with server-side encryption, and run simulations on subsets in ephemeral drives on Amazon EC2.
B) Use Amazon S3 with server-side encryption, and run simulations on subsets in-memory on Amazon EC2.
C) Use HDFS on Amazon EMR, and run simulations on subsets in ephemeral drives on Amazon EC2.
D) Use HDFS on Amazon Elastic MapReduce (EMR), and run simulations on subsets in-memory on Amazon Elastic Compute Cloud (EC2).
E) Store the full data set in encrypted Amazon Elastic Block Store (EBS) volumes, and regularly capture snapshots that can be cloned to EC2 workstations.
8
Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; and persist the results of the analytic processing for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?
A) Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift cluster.
B) Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR.
C) Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance.
D) Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.
9
A customer is deploying an SSL-enabled web application to AWS and would like to implement a separation of roles between the EC2 service administrators, who are entitled to log in to instances and make API calls, and the security officers, who will maintain and have exclusive access to the application's X.509 certificate that contains the private key. Which configuration option would satisfy these requirements?
A) Upload the certificate on an S3 bucket owned by the security officers and accessible only by EC2 Role of the web servers.
B) Configure the web servers to retrieve the certificate upon boot from a CloudHSM that is managed by the security officers.
C) Configure system permissions on the web servers to restrict access to the certificate only to the authorized security officers.
D) Configure IAM policies authorizing access to the certificate store only to the security officers and terminate SSL on an ELB.
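As a side note on option D, a policy of roughly this shape, attached only to the security officers' IAM group, would scope the IAM server certificate store to them; the account ID, actions list and resource path are illustrative, not the question's official answer:

```python
# Sketch of an IAM policy limiting the server certificate store to the
# security officers' group; account ID and resource path are placeholders.
cert_store_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:UploadServerCertificate",
            "iam:GetServerCertificate",
            "iam:ListServerCertificates",
            "iam:DeleteServerCertificate",
        ],
        "Resource": "arn:aws:iam::123456789012:server-certificate/*",
    }],
}
```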
10
Your team has a Tomcat-based Java application you need to deploy into development, test and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools, and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following:
A) Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets.
B) Create your RDS instance separately and add its IP address to your application's DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC's IP address block.
C) Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
D) Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.
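For reference, granting database access by security-group membership (the mechanism option C describes) looks roughly like this in boto3; both group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Permit MySQL traffic to the RDS instance's security group only from
# members of a client security group, rather than from an IP range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0rdsplaceholder",          # RDS instance's security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0clientplaceholder"}],
    }],
)
```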
11
A large real-estate brokerage is exploring the option of adding a cost-effective location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute range. The existing mobile app has 5 million users across the US. Which one of the following architectural suggestions would you make to the customer?
A) The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
B) Use AWS Direct Connect or VPN to establish connectivity with mobile carriers. EC2 instances will receive the mobile application's location through the carrier connection; RDS will be used to store and retrieve relevant offers. EC2 instances will communicate with mobile carriers to push alerts back to the mobile application.
C) The mobile application will send device location using SQS. EC2 instances will retrieve the relevant offers from DynamoDB. AWS Mobile Push will be used to send offers to the mobile application.
D) The mobile application will send device location using AWS Mobile Push. EC2 instances will retrieve the relevant offers from DynamoDB. EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
12
You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route 53 Latency-Based Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions Route 53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers)
A) Latency resource record sets cannot be used in combination with weighted resource record sets.
B) You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
C) The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
D) One of the two working web servers in the other region did not pass its HTTP health check.
E) You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example com in the region where you disabled the servers.
13
You are designing the network infrastructure for an application server in Amazon VPC. Users will access all application instances from the Internet, as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link. How would you design routing to meet the above requirements?
A) Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
B) Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
C) Configure a single routing table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use this routing table across all subnets in the VPC.
D) Configure two routing tables: one that has a default route via the Internet gateway, and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet.
14
Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) into one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements?
A) Send all the log events to Amazon SQS, set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
B) Send all the log events to Amazon Kinesis, develop a client process to apply heuristics on the logs.
C) Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics to the logs.
D) Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, use EMR to apply heuristics on the logs.
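To make the Kinesis approach concrete, a minimal producer/consumer sketch; the stream name and shard ID are placeholders, and at the time this question was written a stream retained records for 24 hours by default, which covers the 12-hour replay requirement:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Producer: every log event becomes a record on the stream.
kinesis.put_record(
    StreamName="consolidated-logs",
    Data=json.dumps({"source": "web", "event": "GET /index.html 200"}),
    PartitionKey="web",  # keys group related events onto the same shard
)

# Consumer: a client process reads the stream and applies the heuristics.
it = kinesis.get_shard_iterator(
    StreamName="consolidated-logs",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",  # start from the oldest retained record
)["ShardIterator"]
batch = kinesis.get_records(ShardIterator=it, Limit=100)
for record in batch["Records"]:
    print(json.loads(record["Data"]))
```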
15
The IT infrastructure that AWS provides complies with the following IT security standards:
A) SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70 Type II), SOC 2 and SOC 3
B) FISMA, DIACAP, and FedRAMP
C) PCI DSS Level 1, ISO 27001, ITAR and FIPS 140-2
D) HIPAA, Cloud Security Alliance (CSA) and Motion Picture Association of America (MPAA)
E) All of the above
16
Your company runs a customer-facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three availability zones (AZs), which architecture provides high availability?
A) A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ.
B) A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the two other AZs.
C) A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
D) A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
17
You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.
A) Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account.
B) Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
C) Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.
D) Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts
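For context, the cross-account pattern option C describes boils down to an STS AssumeRole call from the Master account; the account ID, role name and instance ID below are placeholders:

```python
import boto3

# An admin in the Master account assumes a role defined in the Dev account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/MasterAccountAdmins",
    RoleSessionName="budget-cleanup",
)["Credentials"]

# The temporary credentials carry only the permissions the Dev-account
# role grants, e.g. stopping or terminating resources.
dev_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
dev_ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])
```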
18
You have a periodic image analysis application that takes some files as input, analyzes them, and for each input file writes some output data to a text file. The number of input files per day is high and concentrated in a few hours of the day. Currently you have a server on EC2 with a large EBS volume that hosts the input data and the results. It takes almost 20 hours per day to complete the process. What services could be used to reduce the elaboration time and improve the availability of the solution?
A) S3 to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
B) EBS with Provisioned IOPS (PIOPS) to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
C) S3 to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
D) EBS with Provisioned IOPS (PIOPS) to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
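To illustrate the SQS-based pattern these options describe, a minimal worker loop; the queue name and handler are hypothetical, and the Auto Scaling group would scale on the queue's ApproximateNumberOfMessagesVisible CloudWatch metric:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="elaboration-jobs")["QueueUrl"]

def process(body: str) -> None:
    """Hypothetical handler: fetch the input file from S3, write results back."""
    print("processing", body)

# Each worker in the Auto Scaling group long-polls for a job, processes
# the referenced file, then deletes the message only on success.
while True:
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```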
19
You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes), for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all. What is the problem and a valid solution?
A) The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput.
B) Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and filesystem to use 64KB blocks to increase throughput.
C) The standard EBS Instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.
D) Larger storage volumes support higher Provisioned IOPS rates; increase the provisioned volume storage of each of the 6 EBS volumes to 1TB.
E) RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
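A back-of-the-envelope check, using only the figures stated in the question, of why the instance's EBS-Optimized link is the suspect bottleneck:

```python
# Aggregate bandwidth the six volumes could demand vs. the instance link.
io_size_bytes = 16 * 1024                       # 16 KB per I/O
demand_mbps = 24_000 * io_size_bytes * 8 / 1e6  # ~3,146 Mbps at 24,000 IOPS
link_mbps = 500                                 # EC2-to-EBS throughput given
print(f"demand ~{demand_mbps:.0f} Mbps vs link {link_mbps} Mbps")
# The link saturates far below 24,000 IOPS, so adding volumes cannot raise
# measured IOPS; only a larger EBS-Optimized pipe would.
```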
20
You've been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access. Which approach provides a cost-effective, scalable mitigation to this kind of attack?
A) Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC.
B) Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.
C) Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.
D) Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.
21
You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
A) Log clicks in weblogs by URL, store them in Amazon S3, and then analyze with Elastic MapReduce
B) Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
C) Write click events directly to Amazon Redshift and then analyze with SQL
D) Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL.
22
You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)
A) Route 53 Record Sets
B) IAM Roles
C) Elastic IP Addresses (EIP)
D) EC2 Key Pairs
E) Launch configurations
F) Security Groups
23

A) Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup.
B) Implement fault tolerance against EC2 instance failure since messages would remain in SQS and work can continue with recovery of EC2 instances; implement fault tolerance against SQS failure by backing up messages to S3.
C) Implement message passing between EC2 instances within a batch by exchanging messages through SQS.
D) Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness.
E) Handle high priority jobs before lower priority jobs by assigning a priority metadata field to SQS messages.
24
A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer's end, however the customer is unable to connect from EC2 instances inside its VPC to servers residing in its datacenter. Which of the following options provide a viable solution to remedy this situation? (Choose 2)
A) Add a route to the route table with an IPsec VPN connection as the target.
B) Enable route propagation to the virtual private gateway (VGW).
C) Enable route propagation to the customer gateway (CGW).
D) Modify the route table of all instances using the 'route' command.
E) Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment.
25
A customer has a 10 Gbps AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way?
A) Use Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS.
B) Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.
C) Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.
D) Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.
26
You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability?
A) File a change request to implement Alias Resource support in the application. Use Route 53 Alias Resource Records to distribute load on two application servers in different AZs.
B) File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs.
C) File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP Listener and Cross-Zone Load Balancing enabled, and two application servers in different AZs.
D) File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.
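For reference, enabling Proxy Protocol on a classic ELB (option D's mechanism for preserving the client IP over a TCP listener) is a two-call operation; the load balancer name and back-end port are placeholders:

```python
import boto3

elb = boto3.client("elb")  # classic Elastic Load Balancing API

# Define a ProxyProtocol policy...
elb.create_load_balancer_policy(
    LoadBalancerName="legacy-app-elb",
    PolicyName="EnableProxyProtocol",
    PolicyTypeName="ProxyProtocolPolicyType",
    PolicyAttributes=[{"AttributeName": "ProxyProtocol", "AttributeValue": "true"}],
)
# ...and attach it to the back-end port the application servers listen on.
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName="legacy-app-elb",
    InstancePort=8080,
    PolicyNames=["EnableProxyProtocol"],
)
```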
27
You need a persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minutes timeframe. Each traced call can be either active or terminated. An external application needs to know each minute the list of currently active calls. Usually there are a few calls/second, but once per month there is a periodic peak up to 1000 calls/second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project. What database implementation would better fit this scenario, keeping costs as low as possible?
A) Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can equal to "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table.
B) Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or 'TERMINATED". In this way the SQL query is optimized by the use of the Index.
C) Use RDS Multi-AZ with two tables, one for "ACTIVE_CALLS" and one for "TERMINATED_CALLS". In this way the "ACTIVE_CALLS" table is always small and effective to access.
D) Use DynamoDB with a "Calls" table and a Global Secondary Index on a "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.
A) Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can equal to "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table.
B) Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or 'TERMINATED". In this way the SQL query is optimized by the use of the Index.
C) Use RDS Multi-AZ with two tables, one for "ACTIVE_CALLS" and one for "TERMINATED_CALLS". In this way the "ACTIVE_CALLS" table is always small and effective to access.
D) Use DynamoDB with a "Calls" table and a Global Secondary Index on a "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.
Unlock Deck
Unlock for access to all 871 flashcards in this deck.
Unlock Deck
k this deck
28
You are designing a data leak prevention solution for your VPC environment. You want your VPC Instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the internet. Which of the following options would you consider?
A) Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
B) Implement security groups and configure outbound rules to only permit traffic to software depots.
C) Move all your instances into private VPC subnets, remove default routes from all routing tables, and add specific routes to the software depots and distributions only.
D) Implement network access control lists to allow specific destinations, with an implicit deny-all rule.
29
A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central Data Warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible?
A) Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
B) Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
C) Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
D) Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
30
You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28. You initially deploy two web servers, two application servers, two database servers and one NAT instance for a total of seven EC2 instances. The web, application and database servers are deployed across two availability zones (AZs). You also deploy an ELB in front of the two web servers, and use Route 53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load. Unfortunately, some of these new instances fail to launch. Which of the following could be the root cause? (Choose 2 answers)
A) AWS reserves the first and the last private IP address in each subnet's CIDR block so you do not have enough addresses left to launch all of the new EC2 instances
B) The Internet Gateway (IGW) of your VPC has scaled-up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches
C) The ELB has scaled-up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches
D) AWS reserves one IP address in each subnet's CIDR block for Route53 so you do not have enough addresses left to launch all of the new EC2 instances
E) AWS reserves the first four and the last IP address in each subnet's CIDR block so you do not have enough addresses left to launch all of the new EC2 instances
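The address arithmetic behind options C and E can be checked directly:

```python
import ipaddress

# A /28 contains 16 addresses; AWS reserves the first four and the last
# in every subnet, so at most 11 remain usable.
block = ipaddress.ip_network("10.0.0.0/28")
usable = block.num_addresses - 5
print(usable)  # 11

# Seven instances plus the ELB nodes (which also consume subnet IPs)
# leave too few free addresses to double every tier.
```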
31
Which is a valid Amazon Resource name (ARN) for IAM?
A) aws:iam::123456789012:instance-profile/Webserver
B) arn:aws:iam::123456789012:instance-profile/Webserver
C) 123456789012:aws:iam::instance-profile/Webserver
D) arn:aws:iam::123456789012::instance-profile/Webserver
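The general ARN layout is arn:partition:service:region:account-id:resource; IAM ARNs leave the region field empty, which is why a valid IAM ARN contains "::" before the account ID. A quick check:

```python
# Split a candidate ARN into its six colon-delimited fields.
arn = "arn:aws:iam::123456789012:instance-profile/Webserver"
prefix, partition, service, region, account, resource = arn.split(":", 5)
assert (prefix, partition, service) == ("arn", "aws", "iam")
assert region == ""  # IAM is a global service, so the region is blank
print(account, resource)  # 123456789012 instance-profile/Webserver
```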
32
Your company has HQ in Tokyo and branch offices all over the world, and is using logistics software with a multi-regional deployment on AWS in Japan, Europe and USA. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database. In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements?
A) For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
B) For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
C) For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
D) For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region
E) Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process
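A minimal boto3 sketch of the cross-region read-replica idea from option A (all identifiers are hypothetical; a cross-region replica references its source instance by its full ARN):

    import boto3

    # Create a replica in the HQ region (Tokyo) of a master running in
    # Europe, so the hourly batch reads local replicas, not remote masters.
    rds = boto3.client("rds", region_name="ap-northeast-1")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="eu-logistics-replica",
        SourceDBInstanceIdentifier=(
            "arn:aws:rds:eu-west-1:123456789012:db:eu-logistics-master"
        ),
    )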
33
You are responsible for a web application that consists of an Elastic Load Balancing (ELB) load balancer in front of an Auto Scaling group of Amazon Elastic Compute Cloud (EC2) instances. For a recent deployment of a new version of the application, a new Amazon Machine Image (AMI) was created, and the Auto Scaling group was updated with a new launch configuration that refers to this new AMI. During the deployment, you received complaints from users that the website was responding with errors. All instances passed the ELB health checks. What should you do in order to avoid errors for future deployments? (Choose 2)
A) Add an Elastic Load Balancing health check to the Auto Scaling group. Set a short period for the health checks to operate as soon as possible in order to prevent premature registration of the instance to the load balancer.
B) Enable EC2 instance CloudWatch alerts to change the launch configuration's AMI to the previous one. Gradually terminate instances that are using the new AMI.
C) Set the Elastic Load Balancing health check configuration to target a part of the application that fully tests application health and returns an error if the tests fail.
D) Create a new launch configuration that refers to the new AMI, and associate it with the group. Double the size of the group, wait for the new instances to become healthy, and reduce back to the original size. If new instances do not become healthy, associate the previous launch configuration.
E) Increase the Elastic Load Balancing Unhealthy Threshold to a higher value to prevent an unhealthy instance from going into service behind the load balancer.
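A minimal sketch of the "deep" health check that option C describes, using Flask purely for illustration (the dependency check is a hypothetical placeholder):

    from flask import Flask

    app = Flask(__name__)

    def dependencies_ok():
        # Hypothetical placeholder: verify DB connectivity, required config,
        # application version, etc., rather than returning 200 unconditionally.
        return True

    @app.route("/health")
    def health():
        if not dependencies_ok():
            return "unhealthy", 500  # ELB takes the instance out of service
        return "ok", 200

    if __name__ == "__main__":
        app.run(port=8080)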
34
Your company previously configured a heavily used, dynamically routed VPN connection between your on-premises data center and AWS. You recently provisioned a DirectConnect connection and would like to start using the new connection. After configuring DirectConnect settings in the AWS Console, which of the following options will provide the most seamless transition for your users?
A) Delete your existing VPN connection to avoid routing loops, configure your DirectConnect router with the appropriate settings, and verify network traffic is leveraging DirectConnect.
B) Configure your DirectConnect router with a higher BGP priority than your VPN router, verify network traffic is leveraging DirectConnect, and then delete your existing VPN connection.
C) Update your VPC route tables to point to the DirectConnect connection, configure your DirectConnect router with the appropriate settings, verify network traffic is leveraging DirectConnect, and then delete the VPN connection.
D) Configure your DirectConnect router, update your VPC route tables to point to the DirectConnect connection, configure your VPN connection with a higher BGP priority, and verify network traffic is leveraging the DirectConnect connection.
35
You are implementing a URL whitelisting system for a company that wants to restrict outbound HTTPS connections to specific domains from their EC2-hosted applications. You deploy a single EC2 instance running proxy software and configure it to accept traffic from all subnets and EC2 instances in the VPC. You configure the proxy to only pass through traffic to domains that you define in its whitelist configuration. You have a nightly maintenance window of 10 minutes where all instances fetch new software updates. Each update is about 200MB in size and there are 500 instances in the VPC that routinely fetch updates. After a few days you notice that some machines are failing to successfully download some, but not all, of their updates within the maintenance window. The download URLs used for these updates are correctly listed in the proxy's whitelist configuration and you are able to access them manually using a web browser on the instances. What might be happening? (Choose 2)
A) You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time.
B) You are running the proxy on a sufficiently-sized EC2 instance in a private subnet and its network throughput is being throttled by a NAT running on an undersized EC2 instance.
C) The route table for the subnets containing the affected EC2 instances is not configured to direct network traffic for the software update locations to the proxy.
D) You have not allocated enough storage to the EC2 instance running the proxy so the network buffer is filling up, causing some requests to fail.
E) You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed network throughput through the Internet Gateway (IGW).
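A back-of-the-envelope Python calculation, using the numbers from the question, of the sustained throughput the single proxy (and any NAT in front of it) must carry during the window:

    instances = 500
    update_mb = 200
    window_seconds = 10 * 60

    total_mb = instances * update_mb            # 100,000 MB in 10 minutes
    mbytes_per_sec = total_mb / window_seconds  # ~167 MB/s
    gbits_per_sec = mbytes_per_sec * 8 / 1000   # ~1.33 Gbit/s

    print(round(gbits_per_sec, 2))  # well beyond an undersized instance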
36
An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data, and synchronize only the modified elements. Which design would you choose to meet these requirements?
A) Use AWS data Pipeline to schedule a DynamoDB cross region copy once a day, create a "Lastupdated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.
B) Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.
C) Use AWS data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region.
D) Also send each write into an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the writes in the second region.
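A boto3 sketch of the incremental filter from option A (table and attribute names are hypothetical): scan only the items whose "Lastupdated" timestamp falls within the last day, then hand them to the scheduled cross-region copy.

    import time

    import boto3
    from boto3.dynamodb.conditions import Attr

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table("AppData")

    cutoff = int(time.time()) - 24 * 3600
    response = table.scan(FilterExpression=Attr("Lastupdated").gte(cutoff))

    for item in response["Items"]:
        # Only the modified items are copied to the second region.
        print(item)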
37
A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPSec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose 2)
A) Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
B) The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
C) Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
D) The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) Security Token Service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
E) The application authenticates against the IAM Security Token Service using the LDAP credentials. The application uses those temporary AWS security credentials to access the appropriate S3 bucket.
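A boto3 sketch of the identity-broker flow in options B and C: after LDAP authentication succeeds, the broker exchanges the user's mapped IAM role for temporary credentials (role ARN and session name are hypothetical placeholders):

    import boto3

    sts = boto3.client("sts")

    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ldap-user-role",
        RoleSessionName="alice",
    )["Credentials"]

    # Calls made with these short-lived credentials are limited to what
    # the role's policy allows, e.g. a per-user S3 keyspace.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )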
38
A company is building a voting system for a popular TV show. Viewers will watch the performances, then visit the show's website to vote for their favorite performer. It is expected that in a short period of time after the show has finished the site will receive millions of visitors. The visitors will first log in to the site using their Amazon.com credentials and then submit their vote. After the voting is completed the page will display the vote totals. The company needs to build the site such that it can handle the rapid influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use?
A) Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login With Amazon service to authenticate the user, then process the user's vote and store the result into a multi-AZ Relational Database Service instance.
B) Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login With Amazon service to authenticate the user; use IAM Roles to gain permissions to a DynamoDB table to store the user's vote.
C) Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login With Amazon service to authenticate the user, then process the user's vote and store the result into a DynamoDB table, using IAM Roles for EC2 instances to gain permissions to the DynamoDB table.
D) Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login With Amazon service to authenticate the user, then process the user's vote and store the result into an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result into a DynamoDB table.
39
Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if it is required, you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
A) A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
B) Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
C) Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
D) A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
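A boto3 sketch of the transcode step in option C (pipeline and preset IDs are hypothetical placeholders):

    import boto3

    et = boto3.client("elastictranscoder", region_name="us-east-1")

    et.create_job(
        PipelineId="1111111111111-abcde1",       # hypothetical pipeline ID
        Input={"Key": "uploads/training-2016-01.mp4"},
        Outputs=[{
            "Key": "hls/training-2016-01",
            "PresetId": "1351620000001-200050",  # hypothetical HLS preset ID
            "SegmentDuration": "10",
        }],
    )
    # The HLS output lands in the pipeline's S3 output bucket, where a
    # lifecycle rule can archive originals and CloudFront serves the segments.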
40
Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account to streamline data capture. Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named Score Data. When a user saves their game, the progress data will be stored to the Game State S3 bucket. What is the best approach for storing data to DynamoDB and S3?
A) Use an EC2 Instance that is launched with an EC2 role providing access to the Score Data DynamoDB table and the GameState S3 bucket that communicates with the mobile app via web services.
B) Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation.
C) Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile app with access to the Score Data DynamoDB table and the Game State S3 bucket.
D) Use an IAM user with access credentials assigned a role providing access to the Score Data DynamoDB table and the Game State S3 bucket for distribution with the mobile app.
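A boto3 sketch of option B's web identity federation: the mobile app trades the token from the social identity provider for temporary AWS credentials (role ARN and token are hypothetical placeholders):

    import boto3

    sts = boto3.client("sts")

    creds = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/GamePlayerRole",
        RoleSessionName="player-42",
        WebIdentityToken="<token from the social identity provider>",
    )["Credentials"]

    # The app signs its DynamoDB and S3 calls with these short-lived
    # credentials, so no long-term keys ship inside the app.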
41
What is the maximum write throughput I can provision for a single DynamoDB table?
A) 1,000 write capacity units
B) 100,000 write capacity units
C) DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first.
D) 10,000 write capacity units
42
In the context of AWS IAM, identify a true statement about user passwords (login profiles).
A) They must contain Unicode characters.
B) They can contain any Basic Latin (ASCII) characters.
C) They must begin and end with a forward slash (/).
D) They cannot contain Basic Latin (ASCII) characters.
43
You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic MapReduce job is periodically analyzing the logs on S3 to build a usage dashboard that you share with your CIO. You recently improved overall performance of the website using CloudFront for dynamic content delivery, with your website as the origin. After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard?
A) Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job.
B) Turn on CloudTrail and use trail log files on S3 as input of the Elastic MapReduce job.
C) Change your log collection process to use CloudWatch ELB metrics as input of the Elastic MapReduce job.
D) Use Elastic Beanstalk's "Rebuild Environment" option to update log delivery to the Elastic MapReduce job.
E) Use Elastic Beanstalk's "Restart App server(s)" option to update log delivery to the Elastic MapReduce job.
44
Which AWS instance address has the following characteristic? "If you stop an instance, its Elastic IP address is unmapped, and you must remap it when you restart the instance."
A) Both A and B
B) None of these
C) VPC Addresses
D) EC2 Addresses
45
You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?
A) Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
B) Create an IAM user for the application with permissions that allow list access to the S3 bucket, launch the instance as the IAM user, and retrieve the IAM user's credentials from the EC2 instance user data.
C) Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata.
D) Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
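A boto3 sketch of this flow under option C: the instance role's credentials are picked up automatically from instance metadata, the object's existence is verified, then a pre-signed URL is generated (bucket and key are hypothetical):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket, key = "private-downloads", "reports/q3.pdf"

    try:
        s3.head_object(Bucket=bucket, Key=key)  # verify the file exists
    except ClientError:
        raise SystemExit("object not found")

    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=300,  # link stays valid for 5 minutes
    )
    print(url)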
46
Which of the following are characteristics of Amazon VPC subnets? (Choose 2)
A) Each subnet spans at least 2 Availability Zones to provide a high-availability environment.
B) Each subnet maps to a single Availability Zone.
C) CIDR block mask of /25 is the smallest range supported.
D) By default, all subnets can route between each other, whether they are private or public.
E) Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
47
Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority. How should you implement such a system?
A) Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.
B) Use Route 53 latency based-routing to send high priority tasks to the closest transformation instances.
C) Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.
D) Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.
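A boto3 sketch of option C's polling order: check the high-priority queue first and fall back to the default queue only when it is empty (queue URLs are hypothetical):

    import boto3

    sqs = boto3.client("sqs")
    HIGH = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-high"
    DEFAULT = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-default"

    def next_task():
        for queue_url in (HIGH, DEFAULT):
            msgs = sqs.receive_message(
                QueueUrl=queue_url, MaxNumberOfMessages=1
            ).get("Messages", [])
            if msgs:
                return queue_url, msgs[0]
        return None, None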
48
A 3-tier e-commerce web application is currently deployed on-premises, and will be migrated to AWS for greater scalability and elasticity. The web tier currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes. Which AWS storage and database architecture meets the requirements of the application?
A) Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
B) Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
C) Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
D) Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
49
Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. What is the monthly charge for using the public data sets?
A) A one-time charge of $10 for all the datasets.
B) $1 per dataset per month
C) $10 per month for all the datasets
D) There is no charge for using the public data sets
50
A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The main web application runs best on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week. Recently, a new chat feature has been implemented in Node.js and waits to be integrated in the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application life cycle tool to simplify management of the application and reduce the deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?
A) Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe
B) Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe
C) Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe
D) Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes
51
You want to use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC). What criterion must be met for this to be possible?
A) The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access only the public AWS CodeDeploy endpoint.
B) The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access only the public Amazon S3 service endpoint.
C) The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access the public AWS CodeDeploy and Amazon S3 service endpoints.
D) It is not currently possible to use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC).
52
You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable and secure. How would you design a solution to meet the above requirements?
A) Set up an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public-facing application on a server in front of the database to manage security and access credentials.
B) Set up a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access.
C) Set up an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials.
D) Store the user preference data in S3. Set up a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.
53
How can an EBS volume that is currently attached to an EC2 instance be migrated from one Availability Zone to another?
A) Detach the volume and attach it to another EC2 instance in the other AZ.
B) Simply create a new volume in the other AZ and specify the original volume as the source.
C) Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ.
D) Detach the volume, then use the ec2-migrate-volume command to move it to another AZ.
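A boto3 sketch of option C (IDs are hypothetical): snapshot the volume, wait for the snapshot to complete, then create a new volume from it in the target AZ:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    snap = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="migrate to us-east-1b",
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    new_vol = ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone="us-east-1b",  # the destination AZ
    )
    print(new_vol["VolumeId"])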
54
You are designing a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets and smartphones. Supported access platforms are Windows, MacOS, iOS and Android. Separate sticky session and SSL certificate setups are required for different platform types. Which of the following describes the most cost-effective and performance-efficient architecture setup?
A) Set up a hybrid architecture to handle session state and SSL certificates on-prem, with separate EC2 instance groups running web applications for different platform types in a VPC.
B) Set up one ELB for all platforms to distribute load among multiple instances under it. Each EC2 instance implements all functionality for a particular platform.
C) Set up two ELBs. The first ELB handles SSL certificates for all platforms and the second ELB handles session stickiness for all platforms. For each ELB, run separate EC2 instance groups to handle the web application for each platform.
D) Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.
55
By default, Amazon Cognito maintains the last-written version of the data. You can override this behavior and resolve data conflicts programmatically. In addition, ________ synchronization allows you to use Amazon Cognito to send a silent notification to all devices associated with an identity to notify them that new data is available.
A) get
B) post
C) pull
D) push
56
When you put objects in Amazon S3, what is the indication that an object was successfully stored?
A) An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
B) Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.
C) A success code is inserted into the S3 object metadata.
D) Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.
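A boto3 sketch of option A's check (bucket name is hypothetical; the ETag-equals-MD5 comparison holds for single-part, non-KMS uploads):

    import hashlib

    import boto3

    s3 = boto3.client("s3")
    body = b"hello world"

    resp = s3.put_object(Bucket="my-bucket", Key="greeting.txt", Body=body)

    status = resp["ResponseMetadata"]["HTTPStatusCode"]  # expect 200
    etag = resp["ETag"].strip('"')
    assert status == 200 and etag == hashlib.md5(body).hexdigest()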
57
After launching an instance that you intend to serve as a NAT (Network Address Translation) device in a public subnet, you modify your route tables to have the NAT device be the target of internet-bound traffic from your private subnet. When you try to make an outbound connection to the internet from an instance in the private subnet, you are not successful. Which of the following steps could resolve the issue?
A) Disabling the Source/Destination Check attribute on the NAT instance
B) Attaching an Elastic IP address to the instance in the private subnet
C) Attaching a second Elastic Network Interface (ENI) to the NAT instance, and placing it in the private subnet
D) Attaching a second Elastic Network Interface (ENI) to the instance in the private subnet, and placing it in the public subnet
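A boto3 sketch of option A (instance ID is hypothetical): a NAT instance forwards traffic that is neither sourced from nor addressed to itself, so the source/destination check must be disabled:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",
        SourceDestCheck={"Value": False},
    )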
58
An IAM user is trying to perform an action on an object belonging to some other root account's bucket. Which of the below mentioned options will AWS S3 not verify?
A) The object owner has provided access to the IAM user
B) Permission provided by the parent of the IAM user on the bucket
C) Permission provided by the bucket owner to the IAM user
D) Permission provided by the parent of the IAM user
59
Within the IAM service a GROUP is regarded as a:
A) A collection of AWS accounts
B) It's the group of EC2 machines that gain the permissions specified in the GROUP.
C) There's no GROUP in IAM, but only USERS and RESOURCES.
D) A collection of users.
60
The ________ service is targeted at organizations with multiple users or systems that use AWS products such as Amazon EC2, Amazon SimpleDB, and the AWS Management Console.
A) Amazon RDS
B) AWS Integrity Management
C) AWS Identity and Access Management
D) Amazon EMR
61
In IAM, which of the following is true of temporary security credentials?
A) Once you issue temporary security credentials, they cannot be revoked.
B) None of these are correct.
C) Once you issue temporary security credentials, they can be revoked only when the virtual MFA device is used.
D) Once you issue temporary security credentials, they can be revoked.
62
You have subscribed to the AWS Business and Enterprise support plan. Your business has a backlog of problems, and you need about 20 of your IAM users to open technical support cases. How many users can open technical support cases under the AWS Business and Enterprise support plan?
A) 5 users
B) 10 users
C) Unlimited
D) 1 user
63
Does Amazon RDS API provide actions to modify DB instances inside a VPC and associate them with DB Security Groups?
A) Yes, Amazon does this but only for MySQL RDS.
B) Yes
C) No
D) Yes, Amazon does this but only for Oracle RDS.
64
The Statement element, of an AWS IAM policy, contains an array of individual statements. Each individual statement is a(n) _________ block enclosed in braces { }.
A) XML
B) JavaScript
C) JSON
D) AJAX
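For illustration, a minimal policy built in Python whose Statement array holds one such block enclosed in braces:

    import json

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::example-bucket",
            }
        ],
    }
    print(json.dumps(policy, indent=2))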
65
The MySecureData company has five branches across the globe. They want to expand their data centers such that their web server will be in AWS and each branch would have its own database in the local data center. Based on the user login, the company wants to connect to the data center. How can the MySecureData company implement this scenario with the AWS VPC?
A) Create five VPCs with the public subnet for the app server and setup the VPN gateway for each VPN to connect them individually.
B) Use the AWS VPN CloudHub to communicate with multiple VPN connections.
C) Use the AWS CloudGateway to communicate with multiple VPN connections.
D) It is not possible to connect different data centers from a single VPC.
66
How many cg1.4xlarge on-demand instances can a user run in one region without taking any limit increase approval from AWS?
A) 20
B) 2
C) 5
D) 10
67
A user has configured an EBS volume with PIOPS. The user is not experiencing the optimal throughput. Which of the following could not be a factor affecting the I/O performance of that EBS volume?
A) EBS bandwidth of dedicated instance exceeding the PIOPS
B) EBS volume size
C) EC2 bandwidth
D) Instance type is not EBS optimized
68
What types of identities do Amazon Cognito identity pools support?
A) They support both authenticated and unauthenticated identities.
B) They support only unauthenticated identities.
C) They support neither authenticated nor unauthenticated identities.
D) They support only authenticated identities.
69
The two policies that you attach to an IAM role are the access policy and the trust policy. The trust policy identifies who can assume the role and grants permission to the AWS Lambda service principal by adding the _______ action.
A) aws:AssumeAdmin
B) lambda:InvokeAsync
C) sts:InvokeAsync
D) sts:AssumeRole
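For illustration, a trust policy (built in Python) that grants the Lambda service principal permission to assume the role via sts:AssumeRole:

    import json

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "lambda.amazonaws.com"},
                "Action": "sts:AssumeRole",
            }
        ],
    }
    print(json.dumps(trust_policy, indent=2))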
70
How many g2.2xlarge on-demand instances can a user run in one region without taking any limit increase approval from AWS?
A) 20
B) 2
C) 5
D) 10
71
What bandwidths does AWS Direct Connect currently support?
A) 10Mbps and 100Mbps
B) 10Gbps and 100Gbps
C) 100Mbps and 1Gbps
D) 1Gbps and 10 Gbps
72
A customer has a website which shows all the deals available across the market. The site generally experiences a load requiring 5 large EC2 instances. However, a week before the Thanksgiving vacation they encounter a load of almost 20 large instances. The load during that period varies over the day based on office timings. Which of the below mentioned solutions is cost effective while also helping the website achieve better performance?
A) Setup to run 10 instances during the pre-vacation period and only scale up during the office time by launching 10 more instances using the AutoScaling schedule.
B) Keep only 10 instances running and manually launch 10 instances every day during office hours.
C) During the pre-vacation period setup 20 instances to run continuously.
D) During the pre-vacation period setup a scenario where the organization has 15 instances running and 5 instances to scale up and down using Auto Scaling based on the network I/O policy.
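A boto3 sketch of the scheduled-scaling idea in option A (group name and times are hypothetical):

    import boto3

    asg = boto3.client("autoscaling")

    # Scale up for office hours on weekdays...
    asg.put_scheduled_update_group_action(
        AutoScalingGroupName="deals-web-asg",
        ScheduledActionName="office-hours-scale-up",
        Recurrence="0 9 * * 1-5",   # 09:00 UTC, Monday-Friday
        DesiredCapacity=20,
    )
    # ...and back down after hours.
    asg.put_scheduled_update_group_action(
        AutoScalingGroupName="deals-web-asg",
        ScheduledActionName="after-hours-scale-down",
        Recurrence="0 19 * * 1-5",  # 19:00 UTC, Monday-Friday
        DesiredCapacity=10,
    )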
73
When does an AWS Data Pipeline terminate the AWS Data Pipeline-managed compute resources?
A) AWS Data Pipeline terminates AWS Data Pipeline-managed compute resources every 2 hours.
B) When the final activity that uses the resources is running
C) AWS Data Pipeline terminates AWS Data Pipeline-managed compute resources every 12 hours.
D) When the final activity that uses the resources has completed successfully or failed
74
An organization is planning to host an application on the AWS VPC. The organization wants dedicated instances. However, an AWS consultant advised the organization not to use dedicated instances with VPC as the design has a few limitations. Which of the below mentioned statements is not a limitation of dedicated instances with VPC?
A) All instances launched with this VPC will always be dedicated instances and the user cannot use a default tenancy model for them.
B) It does not support the AWS RDS with a dedicated tenancy VPC.
C) The user cannot use Reserved Instances with a dedicated tenancy model.
D) The EBS volume will not be on the same tenant hardware as the EC2 instance though the user has configured dedicated tenancy.
75
A user is planning to host a web server as well as an app server on a single EC2 instance that is part of the public subnet of a VPC. How can the user set up the instance to have two separate public IPs and a separate security group for each of the web server and the app server?
A) Launch the VPC with two separate subnets and make the instance a part of both subnets.
B) Launch a VPC instance with two network interfaces. Assign a separate security group and Elastic IP to each of them.
C) Launch a VPC instance with two network interfaces. Assign a separate security group to each, and AWS will assign a separate public IP to them.
D) Launch a VPC with an ELB that redirects requests to separate VPC instances in the public subnet.
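A minimal boto3 sketch of option B's approach, with hypothetical subnet, AMI, and security-group IDs; the point is that each Elastic IP must be associated explicitly, because AWS does not auto-assign a second public IP:

import boto3

ec2 = boto3.client("ec2")

# Second interface carrying the app-server security group.
eni = ec2.create_network_interface(
    SubnetId="subnet-EXAMPLE",
    Groups=["sg-app-EXAMPLE"],
)
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

# Launch with the primary interface (web security group) at device
# index 0 and attach the app interface at device index 1.
run = ec2.run_instances(
    ImageId="ami-EXAMPLE",
    InstanceType="m3.large",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {"DeviceIndex": 0, "SubnetId": "subnet-EXAMPLE",
         "Groups": ["sg-web-EXAMPLE"]},
        {"DeviceIndex": 1, "NetworkInterfaceId": eni_id},
    ],
)

# One Elastic IP per interface, associated once the instance is up.
primary_eni = run["Instances"][0]["NetworkInterfaces"][0]["NetworkInterfaceId"]
for interface_id in (primary_eni, eni_id):
    allocation = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        AllocationId=allocation["AllocationId"],
        NetworkInterfaceId=interface_id,
    )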
76
What is the default maximum number of VPCs allowed per region?
A) 5
B) 10
C) 100
D) 15
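For a programmatic check, the current value can be read from Service Quotas. A minimal boto3 sketch, where the quota code L-F678F1CE ("VPCs per Region") is my assumption from the published quota list:

import boto3

# "service-quotas" is the boto3 client for AWS Service Quotas.
sq = boto3.client("service-quotas")

# Assumed quota code for "VPCs per Region"; the default is 5 and it is
# a soft limit that can be raised on request.
quota = sq.get_service_quota(ServiceCode="vpc", QuotaCode="L-F678F1CE")
print(quota["Quota"]["Value"])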
77
The CFO of a company wants to allow one of his employees to view only the AWS usage report page. Which of the IAM policy statements below gives the user access to the AWS usage report page?
A) "Effect": "Allow", "Action": ["Describe"], "Resource": "Billing"
B) "Effect": "Allow", "Action": ["aws-portal:ViewBilling"], "Resource": "*"
C) "Effect": "Allow", "Action": ["aws-portal:ViewUsage"], "Resource": "*"
D) "Effect": "Allow", "Action": ["AccountUsage"], "Resource": "*"
A) "Effect": "Allow", "Action": ["Describe"], "Resource": "Billing"
B) "Effect": "Allow", "Action": ["aws-portal: ViewBilling"], "Resource": "*"
C) "Effect": "Allow", "Action": ["aws-portal: ViewUsage"], "Resource": "*"
D) "Effect": "Allow", "Action": ["AccountUsage], "Resource": "*"
Unlock Deck
Unlock for access to all 871 flashcards in this deck.
Unlock Deck
k this deck
78
A user is configuring MySQL RDS with PIOPS. What is the minimum DB storage size the user must provision?
A) 1 TB
B) 50 GB
C) 5 GB
D) 100 GB
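A minimal boto3 sketch of provisioning such an instance, with an illustrative identifier, class, and credentials; note that the IOPS value must also stay within the allowed ratio to the allocated storage:

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="mysql-piops",   # illustrative name
    Engine="mysql",
    DBInstanceClass="db.m3.large",
    AllocatedStorage=100,                 # GB of provisioned storage
    StorageType="io1",                    # provisioned-IOPS storage
    Iops=1000,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",      # placeholder, never hard-code
)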
79
A user has created a MySQL RDS instance with PIOPS. Which of the statements below will help the user understand the advantage of PIOPS?
A) The user can achieve additional dedicated capacity for the EBS I/O with an enhanced RDS option
B) It uses a standard EBS volume with optimized configuration stacks
C) It uses optimized EBS volumes and optimized configuration stacks
D) It provides a dedicated network bandwidth between EBS and RDS
80
Doug has created a VPC with CIDR 10.201.0.0/16 in his AWS account. In this VPC he has created a public subnet with CIDR block 10.201.31.0/24. While launching a new EC2 instance from the console, he is not able to assign the private IP address 10.201.31.6 to this instance. What is the most likely reason for this issue?
A) Private IP address 10.201.31.6 is currently assigned to another interface.
B) Private IP address 10.201.31.6 is reserved by Amazon for IP networking purposes.
C) Private IP address 10.201.31.6 is blocked via ACLs in Amazon infrastructure as a part of platform security.
D) Private IP address 10.201.31.6 is not part of the associated subnet's IP address range.
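The subnet arithmetic can be checked with the standard library alone: AWS reserves the first four addresses and the last address of every subnet, and 10.201.31.6 is neither reserved nor outside the subnet's range, which leaves an existing assignment as the likely culprit:

import ipaddress

subnet = ipaddress.ip_network("10.201.31.0/24")
addresses = list(subnet)

# AWS reserves the network address, the next three, and the broadcast.
reserved = addresses[:4] + [addresses[-1]]
print([str(ip) for ip in reserved])
# ['10.201.31.0', '10.201.31.1', '10.201.31.2',
#  '10.201.31.3', '10.201.31.255']

# 10.201.31.6 is inside the subnet and not among the reserved addresses.
print(ipaddress.ip_address("10.201.31.6") in subnet)  # True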