Notes: Hi all, AWS Certified DevOps Engineer Professional Practice Exam Part 5 will familiarize you with the types of questions you may encounter on the certification exam and help you determine your readiness, or whether you need more preparation and/or experience. Successful completion of the practice exam does not guarantee you will pass the certification exam, as the actual exam is longer and covers a wider range of topics. We highly recommend taking the AWS Certified DevOps Engineer Professional Actual Exam Version, which includes real questions with highlighted answers collected from the actual exam. It will help you pass the exam more easily.
For PDF Version:
Part 1: https://www.awslagi.com/aws-certified-devops-professional-pdf/
Part 2: https://www.awslagi.com/aws-certified-devops-professional-practice-exam-part-2
Part 3: https://www.awslagi.com/aws-certified-devops-professional-practice-exam-part-3
Part 4: https://www.awslagi.com/aws-certified-devops-professional-practice-exam-part-4
Part 5: https://www.awslagi.com/aws-certified-devops-professional-practice-exam-part-5
Part 6: https://www.awslagi.com/aws-certified-devops-professional-practice-exam-part-6
For Audio Version: https://www.youtube.com/playlist?list=PLRfkgcv2GPKMuC_Bskj9G4IhDuu-sNNjo
341. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it’s very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?
A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad-hoc MapReduce analysis and write new queries when needed.
C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.
Answer: D
342. When thinking of AWS Elastic Beanstalk’s model, which is true?
A. Applications have many deployments, deployments have many environments.
B. Environments have many applications, applications have many deployments.
C. Applications have many environments, environments have many deployments.
D. Deployments have many environments, environments have many applications.
Answer: C
343. You work for a company that automatically tags photographs using artificial neural networks (ANNs), which run on GPUs using C++. You receive millions of images at a time, but only 3 times per day on average. These images are loaded in a batch into an AWS S3 bucket you control, and then the customer publishes a JSON-formatted manifest into another S3 bucket you also control. Each image takes 10 milliseconds to process using a full GPU. Your neural network software requires 5 minutes to bootstrap. Image tags are JSON objects, and you must publish them to an S3 bucket. Which of these is the best system architecture for this system?
A. Create an OpsWorks Stack with two Layers. The first contains lifecycle scripts for launching and bootstrapping an HTTP API on G2 instances for ANN image processing, and the second has an always-on instance which monitors the S3 manifest bucket for new files. When a new file is detected, request instances to boot on the ANN layer. When the instances are booted and the HTTP APIs are up, submit processing requests to individual instances.
B. Make an S3 notification configuration which publishes to AWS Lambda on the manifest bucket. Make the Lambda create a CloudFormation Stack which contains the logic to construct an autoscaling worker tier of EC2 G2 instances with the ANN code on each instance. Create an SQS queue of the images in the manifest. Tear the stack down when the queue is empty.
C. Deploy your ANN code to AWS Lambda as a bundled binary for the C++ extension. Make an S3 notification configuration on the manifest, which publishes to another AWS Lambda running controller code. This controller code publishes all the images in the manifest to AWS Kinesis. Your ANN code Lambda Function uses the Kinesis as an Event Source. The system automatically scales when the stream contains image events.
D. Create an Auto Scaling, Load Balanced Elastic Beanstalk worker tier Application and Environment. Deploy the ANN code to G2 instances in this tier. Set the desired capacity to 1. Make the code periodically check S3 for new manifests. When a new manifest is detected, push all of the images in the manifest into the SQS queue associated with the Elastic Beanstalk worker tier.
Answer: B
344. You are designing a system which needs, at minimum, 8 m4.large instances operating to service traffic. When designing for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to survive the failure of a full Availability Zone. How should you distribute the servers to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1's AZs a through f, inclusive.
A. 3 servers in each of AZs a through d, inclusive.
B. 8 servers in each of AZs a and b.
C. 2 servers in each of AZs a through e, inclusive.
D. 4 servers in each of AZs a through c, inclusive.
Answer: C
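For intuition on answer C, the arithmetic can be checked directly: with s servers in each of n AZs, losing one full AZ must still leave at least the 8 required instances. A minimal sketch (the option labels mirror the choices above):

```python
required = 8  # minimum instances needed to service traffic

# (servers per AZ, number of AZs) for each answer choice.
layouts = {"A": (3, 4), "B": (8, 2), "C": (2, 5), "D": (4, 3)}

for label, (per_az, azs) in sorted(layouts.items()):
    survivors = per_az * (azs - 1)  # capacity left after one AZ fails
    total = per_az * azs            # total instances you pay for
    print(f"{label}: total={total}, after AZ loss={survivors}, "
          f"survives={survivors >= required}")
# C is the cheapest layout (10 instances) that still leaves 8 after an AZ loss.
```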
345. You need CloudFormation to create a Route53 record automatically on every launch of a template, but only when not running in production. How should you implement this?
A. Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record only when the environment is not production.
B. Create two templates, one with the Route53 record value and one with a null value for the record. Use the one without it when deploying to production.
C. Use a Parameter for environment, and add a Condition on the Route53 Resource in the template to create the record with a null string when environment is production.
D. Create two templates, one with the Route53 record and one without it. Use the one without it when deploying to production.
Answer: A
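For reference, answer A's pattern looks roughly like this sketch, which builds the template body in Python and checks it with boto3's validate_template; the Environment parameter name, zone, and record values are hypothetical:

```python
import json
import boto3

# The Route53 record carries a Condition, so it is only created when the
# "Environment" parameter is anything other than "production".
template = {
    "Parameters": {
        "Environment": {"Type": "String", "Default": "staging"}
    },
    "Conditions": {
        "NotProduction": {
            "Fn::Not": [{"Fn::Equals": [{"Ref": "Environment"}, "production"]}]
        }
    },
    "Resources": {
        "DnsRecord": {
            "Type": "AWS::Route53::RecordSet",
            "Condition": "NotProduction",  # skipped entirely in production
            "Properties": {
                "HostedZoneName": "example.com.",
                "Name": "api.example.com.",
                "Type": "CNAME",
                "TTL": "300",
                "ResourceRecords": ["elb.example.com"],
            },
        }
    },
}

boto3.client("cloudformation").validate_template(TemplateBody=json.dumps(template))
```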
346. What is web identity federation?
A. Use of an identity provider like Google or Facebook to become an AWS IAM User.
B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
C. Use of AWS IAM User tokens to log in as a Google or Facebook user.
D. Use of AWS STS Tokens to log in as a Google or Facebook user.
Answer: B
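A minimal boto3 sketch of the exchange described in answer B; the role ARN is hypothetical, and the ID token would come from the provider's login flow:

```python
import boto3

# AssumeRoleWithWebIdentity is unsigned: the caller trades an OIDC token
# from Google, Facebook, or Amazon for temporary AWS credentials.
sts = boto3.client("sts")

oidc_token = "<ID token obtained from the identity provider>"  # placeholder

resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppRole",  # hypothetical role
    RoleSessionName="web-user-session",
    WebIdentityToken=oidc_token,
    DurationSeconds=3600,
)

creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
```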
347. You have been asked to de-risk deployments at your company. Specifically, the CEO is concerned about outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass. You already use Docker to get high consistency between Staging and Production for the application environment on your EC2 instances. How do you further de-risk the rest of the execution environment, since in AWS, there are many service components you may use beyond EC2 virtual machines?
A. Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity.
B. Use AWS Config to force the Staging and Production stacks to have configuration parity. Any differences will be detected for you so you are aware of risks.
C. Use AMIs to ensure the whole machine, including the kernel of the virtual machines, is consistent, since Docker uses Linux Container (LXC) technology, and we need to make sure the container environment is consistent.
D. Use AWS ECS and Docker clustering. This will make sure that the AMIs and machine sizes are the same across both environments.
Answer: A
348. You are creating a new API for video game scores. Reads are 100 times more common than writes, and the top 1% of scores are read 100 times more frequently than the rest of the scores. What’s the best design for this system, using DynamoDB?
A. DynamoDB table with 100x higher read than write throughput, with CloudFront caching.
B. DynamoDB table with roughly equal read and write throughput, with CloudFront caching.
C. DynamoDB table with 100x higher read than write throughput, with ElastiCache caching.
D. DynamoDB table with roughly equal read and write throughput, with ElastiCache caching.
Answer: D
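A rough cache-aside sketch of answer D, using redis-py against a hypothetical ElastiCache endpoint and DynamoDB table; the hot top 1% of scores are served from the cache, so the table's read throughput can stay close to its write throughput:

```python
import json
import boto3
import redis

cache = redis.Redis(host="scores.abc123.cfg.use1.cache.amazonaws.com")  # hypothetical
table = boto3.resource("dynamodb").Table("GameScores")  # hypothetical table

def get_score(score_id: str) -> dict:
    cached = cache.get(score_id)
    if cached is not None:
        return json.loads(cached)  # cache hit: no DynamoDB read consumed
    item = table.get_item(Key={"ScoreId": score_id}).get("Item", {})
    cache.setex(score_id, 60, json.dumps(item, default=str))  # cache for 60s
    return item
```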
349. You were just hired as a DevOps Engineer for a startup. Your startup uses AWS for 100% of their infrastructure. They currently have no automation at all for deployment, and they have had many failures while trying to deploy to production. The company has told you deployment process risk mitigation is the most important thing now, and you have a lot of budget for tools and AWS resources. Their stack:
A. Model the stack in AWS Elastic Beanstalk as a single Application with multiple Environments. Use Elastic Beanstalk’s Rolling Deploy option to progressively roll out application code changes when promoting across environments.
B. Model the stack in 3 CloudFormation templates: Data layer, compute layer, and networking layer. Write stack deployment and integration testing automation following Blue-Green methodologies.
C. Model the stack in AWS OpsWorks as a single Stack, with 1 compute layer and its associated ELB. Use Chef and App Deployments to automate Rolling Deployment.
D. Model the stack in 1 CloudFormation template, to ensure consistency and dependency graph resolution. Write deployment and integration testing automation following Rolling Deployment methodologies.
Answer: B
350. What is the scope of an EBS snapshot?
A. Availability Zone
B. Placement Group
C. Region
D. VPC
Answer: C
351. Your system uses a multi-master, multi-region DynamoDB configuration spanning two regions to achieve high availability. For the first time since launching your system, one of the AWS Regions in which you operate went down for 3 hours, and the failover worked correctly. However, after recovery, your users are experiencing strange bugs, in which users on different sides of the globe see different data. What is a likely design issue that was not accounted for when launching?
A. The system does not have Lambda Functor Repair Automatons, to perform table scans and check for corrupted partition blocks inside the Table in the recovered Region.
B. The system did not implement DynamoDB Table Defragmentation for restoring partition performance in the Region that experienced an outage, so data is served stale.
C. The system did not include repair logic and request replay buffering logic for post-failure, to resynchronize data to the Region that was unavailable for a number of hours.
D. The system did not use DynamoDB Consistent Read requests, so the requests in different areas are not utilizing consensus across Regions at runtime.
Answer: C
352. You run operations for a company that processes digital wallet payments at a very high volume. One second of downtime, during which you drop payments or are otherwise unavailable, loses you on average USD 100. You balance the financials of the transaction system once per day. Which database setup is best suited to address this business risk?
A. A multi-AZ RDS deployment with synchronous replication to multiple standbys and read-replicas for fast failover and ACID properties.
B. A multi-region, multi-master, active-active RDS configuration using database-level ACID design principles with database trigger writes for replication.
C. A multi-region, multi-master, active-active DynamoDB configuration using application control-level BASE design principles with change-stream write queue buffers for replication.
D. A multi-AZ DynamoDB setup with changes streamed to S3 via AWS Kinesis, for highly durable storage and BASE properties.
Answer: C
353. When thinking of DynamoDB, which of the following is true of Local Secondary Index key properties?
A. Either the partition key or the sort key can be different from the table, but not both.
B. Only the sort key can be different from the table.
C. The partition key and sort key can be different from the table.
D. Only the partition key can be different from the table.
Answer: B
354. Which deployment method, when using AWS Auto Scaling Groups and Auto Scaling Launch Configurations, enables the shortest time to live for individual servers?
A. Pre-baking AMIs with all code and configuration on deploys.
B. Using a Dockerfile bootstrap on instance launch.
C. Using UserData bootstrapping scripts.
D. Using AWS EC2 Run Commands to dynamically SSH into fleets.
Answer: A
355. Which of these techniques enables the fastest possible rollback times in the event of a failed deployment?
A. Rolling; Immutable
B. Rolling; Mutable
C. Canary or A/B
D. Blue-Green
Answer: D
356. Which of the following are not valid sources for OpsWorks custom cookbook repositories?
A. HTTP(S)
B. Git
C. AWS EBS
D. Subversion
Answer: C
357. You are building a deployment system on AWS. You will deploy new code by bootstrapping instances in a private subnet in a VPC at runtime using UserData scripts pointing to an S3 zip file object, where your code is stored. An ELB in a public subnet has network interfaces and connectivity to the instances. Requests from users of the system are routed to the ELB via a Route53 A Record Alias. You do not use any VPC endpoints. Which is a risk of using this approach?
A. Route53 Alias records do not always update dynamically with ELB network changes after deploys.
B. If the NAT routing for the private subnet fails, deployments fail.
C. Kernel changes to the base AMI may render the code inoperable.
D. The instances cannot be in a private subnet if the ELB is in a public one.
Answer: B
358. Which major database engine requires a bring-your-own license (BYOL)?
A. PostgreSQL
B. MariaDB
C. MySQL
D. Oracle
Answer: D
359. What is the maximum supported single-volume throughput on EBS?
A. 320MiB/s
B. 160MiB/s
C. 40MiB/s
D. 640MiB/s
Answer: A
360. When a user is detaching an EBS volume from a running instance and attaching it to a new instance, which of the below mentioned options should be followed to avoid file system damage?
A. Unmount the volume first
B. Stop all the I/O of the volume before processing
C. Take a snapshot of the volume before detaching
D. Force Detach the volume to ensure that all the data stays intact
Answer: A
361. A user is creating a new EBS volume from an existing snapshot. The snapshot size shows 10 GB. Can the user create a volume of 30 GB from that snapshot?
A. Provided the original volume has set the change size attribute to true
B. Yes
C. Provided the snapshot has the modify size attribute set as true
D. No
Answer: B
362. How long are the messages kept on an SQS queue by default?
A. If a message is not read, it is never deleted
B. 2 weeks
C. 1 day
D. 4 days
Answer: D
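Both this 4-day default and the 14-day maximum asked about in question 365 can be confirmed with boto3; the queue name is illustrative:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]

# Freshly created queues default to 345600 seconds (4 days).
attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["MessageRetentionPeriod"]
)
print(attrs["Attributes"]["MessageRetentionPeriod"])  # "345600"

# Raise retention to the 1209600-second (14-day) maximum.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"MessageRetentionPeriod": "1209600"},
)
```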
363. A user has attached an EBS volume to a running Linux instance as a “/dev/sdf” device. The user is unable to see the attached device when he runs the command “df -h”. What is the possible reason for this?
A. The volume is not in the same AZ of the instance
B. The volume is not formatted
C. The volume is not attached as a root device
D. The volume is not mounted
Answer: D
364. When using Amazon SQS how much data can you store in a message?
A. 8 KB
B. 2 KB
C. 16 KB
D. 4 KB
Answer: A
365. What is the maximum time messages can be stored in SQS?
A. 14 days
B. one month
C. 4 days
D. 7 days
Answer: A
366. A user has created a new EBS volume from an existing snapshot. The user mounts the volume on the instance to which it is attached. Which of the below mentioned options is a required step before the user can mount the volume?
A. Run a cyclic check on the device for data consistency
B. Create the file system of the volume
C. Resize the volume as per the original snapshot size
D. No step is required. The user can directly mount the device
Answer: D
367. You need your CI system to build AMIs with your code pre-installed on every new code push. You need to do this as cheaply as possible. How do you do this?
A. Bid on spot instances just above the asking price as soon as new commits come in, perform all instance configuration and setup, then create an AMI based on the spot instance.
B. Have the CI launch a new on-demand EC2 instance when new commits come in, perform all instance configuration and setup, then create an AMI based on the on-demand instance.
C. Purchase a Light Utilization Reserved Instance to save money on the continuous integration machine. Use these credits whenever you create AMIs on instances.
D. When the CI instance receives commits, attach a new EBS volume to the CI machine. Perform all setup on this EBS volume so you do not need a new EC2 instance to create the AMI.
Answer: A
368. When thinking of DynamoDB, which of the following is true of Global Secondary Index key properties?
A. The partition key and sort key can be different from the table.
B. Only the partition key can be different from the table.
C. Either the partition key or the sort key can be different from the table, but not both.
D. Only the sort key can be different from the table.
Answer: A
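A create_table sketch contrasting the two index types from questions 353 and 368; the table and attribute names are illustrative. The LSI must reuse the table's partition key, while the GSI may define entirely different keys:

```python
import boto3

boto3.client("dynamodb").create_table(
    TableName="GameScores",
    AttributeDefinitions=[
        {"AttributeName": "GameId", "AttributeType": "S"},
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "Score", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "GameId", "KeyType": "HASH"},
        {"AttributeName": "UserId", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "ByScore",  # LSI: same HASH key, different RANGE key
        "KeySchema": [
            {"AttributeName": "GameId", "KeyType": "HASH"},
            {"AttributeName": "Score", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    GlobalSecondaryIndexes=[{
        "IndexName": "ByUser",  # GSI: both keys differ from the table's
        "KeySchema": [
            {"AttributeName": "UserId", "KeyType": "HASH"},
            {"AttributeName": "Score", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```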
369. You need to process long-running jobs once and only once. How might you do this?
A. Use an SNS queue and set the visibility timeout to long enough for jobs to process.
B. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.
C. Use an SQS queue and set the visibility timeout to long enough for jobs to process.
D. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.
Answer: C
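A minimal boto3 sketch of answer C; the queue URL and job handler are hypothetical. Since SQS delivery is at-least-once, the handler should also be idempotent:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical

def run_long_job(body: str) -> None:
    ...  # hypothetical long-running, idempotent work

resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    VisibilityTimeout=3600,  # hide the message for the job's worst-case runtime
)

for msg in resp.get("Messages", []):
    run_long_job(msg["Body"])
    # Delete only after success; on failure the message reappears once the
    # visibility timeout expires and another worker can retry it.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```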
370. You are getting a lot of empty receive requests when using Amazon SQS. This is creating a lot of unnecessary network load on your instances. What can you do to reduce this load?
A. Subscribe your queue to an SNS topic instead.
B. Use as long of a poll as possible, instead of short polls.
C. Alter your visibility timeout to be shorter.
D. Use sqsd on your EC2 instances.
Answer: B
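In boto3 terms, long polling just means a nonzero WaitTimeSeconds on ReceiveMessage (20 seconds is the maximum); the queue URL here is hypothetical:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical

# Long poll: block up to 20s until a message arrives instead of returning
# immediately, which eliminates most empty receives and their network load.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
print(len(resp.get("Messages", [])), "messages received")
```

Setting the ReceiveMessageWaitTimeSeconds queue attribute enables long polling for every consumer by default.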
371. You need to know when you spend $1000 or more on AWS. What is the easiest way to get that notification?
A. AWS CloudWatch Events tied to API calls, when certain thresholds are exceeded, publish to SNS.
B. Scrape the billing page periodically and pump into Kinesis.
C. AWS CloudWatch Metrics + Billing Alarm + Lambda event subscription. When a threshold is exceeded, email the manager.
D. Scrape the billing page periodically and publish to SNS.
Answer: C
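A sketch of the billing-alarm half of answer C; the SNS topic ARN is hypothetical, and billing metrics only appear in us-east-1 once the "Receive Billing Alerts" preference is enabled:

```python
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="monthly-spend-over-1000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,  # billing data is only published a few times per day
    EvaluationPeriods=1,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical
)
```

The topic can then invoke the Lambda function that emails the manager.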
372. You need to grant a vendor access to your AWS account. They need to be able to read protected messages in a private S3 bucket at their leisure. They also use AWS. What is the best way to accomplish this?
A. Create an IAM User with API Access Keys. Grant the User permissions to access the bucket. Give the vendor the AWS Access Key ID and AWS Secret Access Key for the User.
B. Create an EC2 Instance Profile on your account. Grant the associated IAM role full access to the bucket. Start an EC2 instance with this Profile and give SSH access to the instance to the vendor.
C. Create a cross-account IAM Role with permission to access the bucket, and grant permission to use the Role to the vendor AWS account.
D. Generate a signed S3 GET URL and a signed S3 PUT URL, both with wildcard values and 2-year durations. Pass the URLs to the vendor.
Answer: C
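Answer C's role might be provisioned roughly as follows; the vendor account ID, role name, and bucket name are all hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")
VENDOR_ACCOUNT_ID = "999988887777"  # hypothetical vendor account

# Trust policy: only principals in the vendor's account may assume the role.
iam.create_role(
    RoleName="VendorS3ReadRole",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Grant the role read-only access to the single private bucket.
iam.put_role_policy(
    RoleName="VendorS3ReadRole",
    PolicyName="ReadPrivateBucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-private-bucket",
                "arn:aws:s3:::my-private-bucket/*",
            ],
        }],
    }),
)
```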
373. Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 400 requests per second, and failure rates increased dramatically. During normal operation, your requests last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and table primary keys are designed correctly. What is the most likely issue?
A. Your API Gateway deployment is throttling your requests.
B. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization.
C. You did not request a limit increase on concurrent Lambda function executions.
D. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.
Answer: C
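Answer C follows from Little's law: concurrent executions equal arrival rate times average duration. The default account-level concurrency limit at the time this question was written was around 100 (it is much higher today), so a sustained 200 would be throttled:

```python
rate = 400      # sustained requests per second
duration = 0.5  # average request duration in seconds

concurrent_executions = rate * duration
print(concurrent_executions)  # 200.0 concurrent Lambda executions needed
```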
374. Why are more frequent snapshots of EBS Volumes faster?
A. Blocks in EBS Volumes are allocated lazily, since while logically separated from other EBS Volumes, Volumes often share the same physical hardware. Snapshotting the first time forces full block range allocation, so the second snapshot doesn’t need to perform the allocation phase and is faster.
B. The snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot.
C. AWS provides more disk throughput for burst capacity during snapshots if the drive has been pre-warmed by snapshotting and reading all blocks.
D. The drive is pre-warmed, so block access is more rapid for volumes when every block on the device has already been read at least one time.
Answer: B
375. For AWS CloudFormation, which stack state refuses UpdateStack calls?
A. UPDATE_ROLLBACK_FAILED
B. UPDATE_ROLLBACK_COMPLETE
C. UPDATE_COMPLETE
D. CREATE_COMPLETE
Answer: A
376. You need to migrate 10 million records in one hour into DynamoDB. All records are 1.5KB in size. The data is evenly distributed across the partition key. How many write capacity units should you provision during this batch load?
A. 6667
B. 4166
C. 5556
D. 2778
Answer: C
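The arithmetic behind answer C: write capacity units are consumed per 1 KB of item size, rounded up, so each 1.5 KB record costs 2 WCUs:

```python
import math

records = 10_000_000
item_size_kb = 1.5
window_seconds = 60 * 60  # one hour

wcu_per_item = math.ceil(item_size_kb / 1.0)  # 1.5 KB -> 2 WCUs per write
provisioned = math.ceil(records * wcu_per_item / window_seconds)
print(provisioned)  # 5556
```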
377. Your CTO thinks your AWS account was hacked. What is the only way to know for certain whether there was unauthorized access and what the intruders did, assuming they are very sophisticated AWS engineers doing everything they can to cover their tracks?
A. Use CloudTrail Log File Integrity Validation.
B. Use AWS Config SNS Subscriptions and process events in real time.
C. Use CloudTrail backed up to AWS S3 and Glacier.
D. Use AWS Config Timeline forensics.
Answer: A
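For reference, a sketch of enabling log file integrity validation on a new trail with boto3 (the bucket name is hypothetical); the verification itself is a CLI operation:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Validation writes signed SHA-256 digest files alongside the logs, so any
# tampering with delivered log files becomes detectable.
cloudtrail.create_trail(
    Name="audit-trail",
    S3BucketName="my-cloudtrail-logs",  # hypothetical bucket
    EnableLogFileValidation=True,
    IsMultiRegionTrail=True,
)

# Verify later from the CLI, for example:
#   aws cloudtrail validate-logs --trail-arn <trail-arn> --start-time <timestamp>
```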
378. Which of these is not a Pseudo Parameter in AWS CloudFormation?
A. AWS::StackName
B. AWS::AccountId
C. AWS::StackArn
D. AWS::NotificationARNs
Answer: C
379. What is the scope of an EBS volume?
A. VPC
B. Region
C. Placement Group
D. Availability Zone
Answer: D
380. You are experiencing performance issues writing to a DynamoDB table. Your system tracks high scores for video games on a marketplace. Your most popular game experiences all of the performance issues. What is the most likely problem?
A. DynamoDB’s vector clock is out of sync, because of the rapid growth in request for the most popular game.
B. You selected the Game ID or equivalent identifier as the primary partition key for the table.
C. Users of the most popular video game each perform more read and write requests than average.
D. You did not provision enough read or write throughput to the table.
Answer: B
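A common mitigation for the hot key in answer B is write sharding: suffix the popular game's partition key with a random shard number so its writes spread across partitions. A sketch under hypothetical names (table, keys, and shard count are illustrative):

```python
import random
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("HighScores")  # hypothetical table
SHARDS = 10

def put_score(game_id: str, user_id: str, score: int) -> None:
    # One hot game's writes now land on 10 logical partition keys.
    shard_key = f"{game_id}#{random.randrange(SHARDS)}"
    table.put_item(Item={"GameShard": shard_key, "UserId": user_id, "Score": score})

def top_scores(game_id: str) -> list:
    # Reads must fan out across all shards and merge the results.
    items = []
    for shard in range(SHARDS):
        resp = table.query(
            KeyConditionExpression=Key("GameShard").eq(f"{game_id}#{shard}")
        )
        items.extend(resp["Items"])
    return sorted(items, key=lambda i: i["Score"], reverse=True)
```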
381. You meet once per month with your operations team to review the past month's data. During the meeting, you realize that 3 weeks ago your monitoring system, which pings over HTTP from outside AWS, recorded a large spike in latency on your 3-tier web service API. You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out what happened?
A. Check your CloudTrail log history around the spike's time for any API calls that caused slowness.
B. Review CloudWatch Metrics graphs to determine which component(s) slowed the system down.
C. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.
D. Analyze your logs to detect bursts in traffic at that time.
Answer: B
382. Which of these is not an intrinsic function in AWS CloudFormation?
A. Fn::Split
B. Fn::FindInMap
C. Fn::Select
D. Fn::GetAZs
Answer: A
383. For AWS CloudFormation, which is true?
A. Custom resources using SNS have a default timeout of 3 minutes.
B. Custom resources using SNS do not need a ServiceToken property.
C. Custom resources using Lambda and Code.ZipFile allow inline Node.js resource composition.
D. Custom resources using Lambda do not need a ServiceToken property.
Answer: C
384. Your API requires the ability to stay online during AWS regional failures. Your API does not store any state, it only aggregates data from other sources – you do not have a database. What is a simple but effective way to achieve this uptime goal?
A. Use a CloudFront distribution to serve up your API. Even if the region your API is in goes down, the edge locations CloudFront uses will be fine.
B. Use an ELB and a cross-zone ELB deployment to create redundancy across data centers. Even if a region fails, the other AZ will stay online.
C. Create a Route53 Weighted Round Robin record, and if one region goes down, have that region redirect to the other region.
D. Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.
Answer: D
385. You are designing an enterprise data storage system. Your data management software system requires mountable disks and a real filesystem, so you cannot use S3 for storage. You need persistence, so you will be using AWS EBS Volumes for your system. The system needs storage that is as low-cost as possible; access is infrequent, does not require high throughput, and consists mostly of sequential reads. Which is the most appropriate EBS Volume Type for this scenario?
A. gp1
B. io1
C. standard
D. gp2
Answer: C
386. You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model that is unsupported by CloudFormation. How should you overcome this challenge?
A. Use a CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling.
B. Submit a ticket to the AWS Forums. AWS extends CloudFormation Resource Types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually 1 day, and they complete requests within a week or two.
C. Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment.
D. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda.
Answer: D
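Answer D's Lambda-backed provider reduces to handling Create, Update, and Delete requests and PUTting a status document to the pre-signed ResponseURL that CloudFormation includes in each event. A bare-bones sketch of that documented protocol, with the actual resource logic stubbed out:

```python
import json
import urllib.request

def handler(event, context):
    request_type = event["RequestType"]  # "Create", "Update", or "Delete"
    physical_id = event.get("PhysicalResourceId", "my-custom-resource")

    try:
        if request_type == "Create":
            pass  # call the otherwise-unsupported API to create the resource
        elif request_type == "Update":
            pass  # apply property changes
        elif request_type == "Delete":
            pass  # tear the resource down
        status = "SUCCESS"
    except Exception:
        status = "FAILED"

    # The stack operation blocks until this response reaches the pre-signed
    # S3 URL (or the custom resource times out).
    body = json.dumps({
        "Status": status,
        "PhysicalResourceId": physical_id,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": {},
    }).encode()
    req = urllib.request.Request(
        event["ResponseURL"], data=body, method="PUT",
        headers={"Content-Type": ""},  # S3 pre-signed URL expects no content type
    )
    urllib.request.urlopen(req)
```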
387. You run a 2000-engineer organization. You are about to begin using AWS at a large scale for the first time. You want to integrate with your existing identity management system running on Microsoft Active Directory, because your organization is a power-user of Active Directory. How should you manage your AWS identities in the simplest manner?
A. Use a large AWS Directory Service Simple AD.
B. Use a large AWS Directory Service AD Connector.
C. Use a Sync Domain running on AWS Directory Service.
D. Use an AWS Directory Sync Domain running on AWS Lambda
Answer: B
388. When thinking of AWS OpsWorks, which of the following is not an instance type you can allocate in a stack layer?
A. 24/7 instances
B. Spot instances
C. Time-based instances
D. Load-based instances
Answer: B
389. Which of these is not a CloudFormation Helper Script?
A. cfn-signal
B. cfn-hup
C. cfn-request
D. cfn-get-metadata
Answer: C
390. Your team wants to begin practicing continuous delivery using CloudFormation, to enable automated builds and deploys of whole, versioned stacks or stack layers. You have a 3-tier, mission-critical system. Which of the following is NOT a best practice for using CloudFormation in a continuous delivery environment?
A. Use the AWS CloudFormation ValidateTemplate call before publishing changes to AWS.
B. Model your stack in one template, so you can leverage CloudFormation’s state management and dependency resolution to propagate all changes.
C. Use CloudFormation to create brand new infrastructure for all stateless resources on each push, and run integration tests on that set of infrastructure.
D. Parameterize the template and use Mappings to ensure your template works in multiple Regions.
Answer: B
391. You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport mechanism for API call events?
A. AWS SQS
B. AWS Lambda
C. AWS Kinesis
D. AWS SNS
Answer: C
392. You are building a Ruby on Rails application for internal, non-production use which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?
A. AWS CloudFormation
B. AWS OpsWorks
C. AWS ELB + EC2 with CLI Push
D. AWS Elastic Beanstalk
Answer: D