SAA-C03 Amazon Web Services AWS Certified Solutions Architect - Associate (SAA-C03) Free Practice Exam Questions (2025 Updated)

Prepare effectively for your Amazon Web Services SAA-C03 AWS Certified Solutions Architect - Associate (SAA-C03) certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2025, ensuring you have the most current resources to build confidence and succeed on your first attempt.

A company wants to enhance its ecommerce order-processing application that is deployed on AWS. The application must process each order exactly once without affecting the customer experience during unpredictable traffic surges.

Which solution will meet these requirements?

A.

Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Put all the orders in the SQS queue. Configure an AWS Lambda function as the target to process the orders.

B.

Create an Amazon Simple Notification Service (Amazon SNS) standard topic. Publish all the orders to the SNS standard topic. Configure the application as a notification target.

C.

Create a flow by using Amazon AppFlow. Send the orders to the flow. Configure an AWS Lambda function as the target to process the orders.

D.

Configure AWS X-Ray in the application to track the order requests. Configure the application to process the orders by pulling the orders from Amazon CloudWatch.
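
Note: the exactly-once requirement in this scenario maps to SQS FIFO semantics. As a minimal boto3 sketch (the queue name and attribute values are illustrative, not part of the question), a FIFO queue with content-based deduplication can be created like this:

    import boto3

    sqs = boto3.client("sqs")
    # FIFO queue names must end in ".fifo"; content-based deduplication
    # hashes the message body so duplicate order submissions are dropped.
    response = sqs.create_queue(
        QueueName="orders.fifo",  # hypothetical name
        Attributes={
            "FifoQueue": "true",
            "ContentBasedDeduplication": "true",
        },
    )
    print(response["QueueUrl"])

A Lambda function can then be attached as the queue's event source so that each order is processed in order, once.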

A company is building a serverless application to process orders from an ecommerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.

Which solution will meet these requirements?

A.

Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use an AWS Lambda function to process the orders.

B.

Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to receive orders. Use an AWS Lambda function to process the orders.

C.

Use an Amazon Simple Queue Service (Amazon SQS) standard queue to receive orders. Use AWS Batch jobs to process the orders.

D.

Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use AWS Batch jobs to process the orders.

A company wants to migrate an application to AWS. The application runs on Docker containers behind an Application Load Balancer (ALB). The application stores data in a PostgreSQL database. The cloud-based solution must use AWS WAF to inspect all application traffic. The application experiences most traffic on weekdays. There is significantly less traffic on weekends.

Which solution will meet these requirements in the MOST cost-effective way?

A.

Use a Network Load Balancer (NLB). Create a web access control list (web ACL) in AWS WAF that includes the necessary rules. Attach the web ACL to the NLB. Run the application on Amazon Elastic Container Service (Amazon ECS). Use Amazon RDS for PostgreSQL as the database.

B.

Create a web access control list (web ACL) in AWS WAF that includes the necessary rules. Attach the web ACL to the ALB. Run the application on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon RDS for PostgreSQL as the database.

C.

Create a web access control list (web ACL) in AWS WAF that includes the necessary rules. Attach the web ACL to the ALB. Run the application on Amazon Elastic Container Service (Amazon ECS). Use Amazon Aurora Serverless as the database.

D.

Use a Network Load Balancer (NLB). Create a web access control list (web ACL) in AWS WAF that has the necessary rules. Attach the web ACL to the NLB. Run the application on Amazon Elastic Container Service (Amazon ECS). Use Amazon Aurora Serverless as the database.

A company runs an application on Amazon EC2 instances across multiple Availability Zones in the same AWS Region. The EC2 instances share an Amazon Elastic File System (Amazon EFS) volume that is mounted on all the instances. The EFS volume stores a variety of files such as installation media, third-party files, interface files, and other one-time files.

The company accesses some EFS files frequently and needs to retrieve the files quickly. The company accesses other files rarely. The EFS volume is multiple terabytes in size. The company needs to optimize storage costs for Amazon EFS.

Which solution will meet these requirements with the LEAST effort?

A.

Move the files to Amazon S3. Set up a lifecycle policy to move the files to S3 Glacier Flexible Retrieval.

B.

Apply a lifecycle policy to the EFS files to move the files to EFS Infrequent Access.

C.

Move the files to Amazon Elastic Block Store (Amazon EBS) Cold HDD Volumes (sc1).

D.

Move the files to Amazon S3. Set up a lifecycle policy to move the rarely used files to S3 Glacier Deep Archive.

An ecommerce company runs an application that uses an Amazon DynamoDB table in a single AWS Region. The company wants to deploy the application to a second Region. The company needs to support multi-active replication with low latency reads and writes to the existing DynamoDB table in both Regions.

Which solution will meet these requirements in the MOST operationally efficient way?

A.

Create a DynamoDB global secondary index (GSI) for the existing table. Create a new table in the second Region. Convert the existing DynamoDB table to a global table. Specify the new table as the secondary table.

B.

Enable Amazon DynamoDB Streams for the existing table. Create a new table in the second Region. Create a new application that uses the DynamoDB Streams Kinesis Adapter and the Amazon Kinesis Client Library (KCL). Configure the new application to read data from the DynamoDB table in the first Region and to write the data to the new table in the second Region.

C.

Convert the existing DynamoDB table to a global table. Choose the appropriate second Region to achieve active-active write capabilities in both Regions.

D.

Enable Amazon DynamoDB Streams for the existing table. Create a new table in the second Region. Create an AWS Lambda function in the first Region that reads data from the table in the first Region and writes the data to the new table in the second Region. Set a DynamoDB stream as the input trigger for the Lambda function.

A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website traffic is increasing. The company wants to minimize the website hosting costs.

Which solution will meet these requirements?

A.

Move the website to an Amazon S3 bucket. Configure an Amazon CloudFront distribution for the S3 bucket.

B.

Move the website to an Amazon S3 bucket. Configure an Amazon ElastiCache cluster for the S3 bucket.

C.

Move the website to AWS Amplify. Configure an ALB to resolve to the Amplify website.

D.

Move the website to AWS Amplify. Configure EC2 instances to cache the website.

A company is building a new application that uses multiple serverless architecture components. The application architecture includes an Amazon API Gateway REST API and AWS Lambda functions to manage incoming requests.

The company needs a service to send messages that the REST API receives to multiple target Lambda functions for processing. The service must filter messages so each target Lambda function receives only the messages the function needs.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Send the requests from the REST API to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe multiple Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target Lambda functions to poll the SQS queues.

B.

Send the requests from the REST API to a set of Amazon EC2 instances that are configured to process messages. Configure the instances to filter messages and to invoke the target Lambda functions.

C.

Send the requests from the REST API to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.

D.

Send the requests from the REST API to multiple Amazon Simple Queue Service (Amazon SQS) queues. Configure the target Lambda functions to poll the SQS queues.
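
Note: per-target message filtering is what SNS subscription filter policies provide. A sketch of subscribing an SQS queue to a topic with a filter policy (all ARNs and the attribute name are illustrative):

    import boto3
    import json

    sns = boto3.client("sns")
    # Only messages whose "event_type" attribute equals "payment" reach this
    # queue, so the Lambda function polling it sees only the messages it needs.
    sns.subscribe(
        TopicArn="arn:aws:sns:us-east-1:123456789012:orders",        # hypothetical
        Protocol="sqs",
        Endpoint="arn:aws:sqs:us-east-1:123456789012:payment-jobs",  # hypothetical
        Attributes={"FilterPolicy": json.dumps({"event_type": ["payment"]})},
    )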

A genomics research company is designing a scalable architecture for a loosely coupled workload. Tasks in the workload are independent and can be processed in parallel. The architecture needs to minimize management overhead and provide automatic scaling based on demand.

Which solution will meet these requirements?

A.

Use a cluster of Amazon EC2 instances. Use AWS Systems Manager to manage the workload.

B.

Implement a serverless architecture that uses AWS Lambda functions.

C.

Use AWS ParallelCluster to deploy a dedicated high-performance cluster.

D.

Implement vertical scaling for each workload task.

A company is building an ecommerce application that uses a relational database to store customer data and order history. The company also needs a solution to store 100 GB of product images. The company expects the traffic flow for the application to be predictable.

Which solution will meet these requirements MOST cost-effectively?

A.

Use Amazon RDS for MySQL for the database. Store the product images in an Amazon S3 bucket.

B.

Use Amazon DynamoDB for the database. Store the product images in an Amazon S3 bucket.

C.

Use Amazon RDS for MySQL for the database. Store the product images in an Amazon Aurora MySQL database.

D.

Create three Amazon EC2 instances. Install MongoDB software on the instances to use as the database. Store the product images in an Amazon RDS for MySQL database with a Multi-AZ deployment.

A company wants to create an Amazon EMR cluster that multiple teams will use. The company wants to ensure that each team's big data workloads can access only the AWS services that each team needs to interact with. The company does not want the workloads to have access to Instance Metadata Service Version 2 (IMDSv2) on the cluster's underlying EC2 instances.

Which solution will meet these requirements?

A.

Configure interface VPC endpoints for each AWS service that the teams need. Use the required interface VPC endpoints to submit the big data workloads.

B.

Create EMR runtime roles. Configure the cluster to use the runtime roles. Use the runtime roles to submit the big data workloads.

C.

Create an EC2 IAM instance profile that has the required permissions for each team. Use the instance profile to submit the big data workloads.

D.

Create an EMR security configuration that has the EnableApplicationScoped IAM Role option set to false. Use the security configuration to submit the big data workloads.

A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

B.

Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

C.

Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule's target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete.Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.

D.

Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

A company wants to migrate an on-premises data center to AWS. The data center hosts an SFTP server that stores its data on an NFS-based file system. The server holds 200 GB of data that needs to be transferred. The server must be hosted on an Amazon EC2 instance that uses an Amazon Elastic File System (Amazon EFS) file system.

Which combination of steps should a solutions architect take to automate this task? (Select TWO.)

A.

Launch the EC2 instance into the same Availability Zone as the EFS file system.

B.

Install an AWS DataSync agent in the on-premises data center.

C.

Create a secondary Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instance for the data.

D.

Manually use an operating system copy command to push the data to the EC2 instance.

E.

Use AWS DataSync to create a suitable location configuration for the on-premises SFTP server.

A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants to ensure that its data transfer costs remain as low as possible.

Which solution will meet these requirements?

A.

Configure the Requester Pays feature on the company's S3 bucket.

B.

Configure S3 Cross-Region Replication from the company’s S3 bucket to one of the marketing firm's S3 buckets.

C.

Configure cross-account access for the marketing firm so that the marketing firm has access to the company’s S3 bucket.

D.

Configure the company's S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm's S3 buckets.

An application allows users at a company's headquarters to access product data. The product data is stored in an Amazon RDS MySQL DB instance. The operations team has isolated an application performance slowdown and wants to separate read traffic from write traffic. A solutions architect needs to optimize the application's performance quickly.

What should the solutions architect recommend?

A.

Change the existing database to a Multi-AZ deployment. Serve the read requests from the primary Availability Zone.

B.

Change the existing database to a Multi-AZ deployment. Serve the read requests from the secondary Availability Zone.

C.

Create read replicas for the database. Configure the read replicas with half of the compute and storage resources as the source database.

D.

Create read replicas for the database. Configure the read replicas with the same compute and storage resources as the source database.

An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24 hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store the order data in Amazon S3.

B.

Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.

C.

Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL.

D.

Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.

A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.

What should a solutions architect do to transmit and process the clickstream data?

A.

Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.

B.

Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.

C.

Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.

D.

Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.

A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the compute and memory attributes of the DB instance.

Which solution meets these requirements MOST cost-effectively?

A.

Stop the DB instance when tests are completed. Restart the DB instance when required.

B.

Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.

C.

Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.

D.

Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required.

A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1-year-old as quickly as possible. A delay in retrieving older files is acceptable.

Which solution will meet these requirements MOST cost-effectively?

A.

Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval.

B.

Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select.

C.

Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata from Amazon S3.

D.

Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year. Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.
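
Note: the lifecycle transitions referenced in these options are plain S3 Lifecycle rules. A minimal sketch of moving objects to S3 Glacier Deep Archive after 1 year (the bucket and rule names are hypothetical):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="call-transcripts",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-after-1-year",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
            }]
        },
    )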

A company has an AWS Glue extract, transform, and load (ETL) job that runs every day at the same time. The job processes XML data that is in an Amazon S3 bucket.

New data is added to the S3 bucket every day. A solutions architect notices that AWS Glue is processing all the data during each run.

What should the solutions architect do to prevent AWS Glue from reprocessing old data?

A.

Edit the job to use job bookmarks.

B.

Edit the job to delete data after the data is processed.

C.

Edit the job by setting the NumberOfWorkers field to 1.

D.

Use a FindMatches machine learning (ML) transform.
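
Note: job bookmarks are switched on through the job's --job-bookmark-option argument, which makes AWS Glue track already-processed data between runs. A sketch, assuming a hypothetical job named daily-xml-etl:

    import boto3

    glue = boto3.client("glue")
    job = glue.get_job(JobName="daily-xml-etl")["Job"]  # hypothetical job name
    args = dict(job.get("DefaultArguments", {}))
    args["--job-bookmark-option"] = "job-bookmark-enable"  # skip old data on each run
    glue.update_job(
        JobName="daily-xml-etl",
        JobUpdate={
            "Role": job["Role"],
            "Command": job["Command"],
            "DefaultArguments": args,
        },
    )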

A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS services and follows the AWS Well-Architected Framework.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use the EC2 serial console to directly access the terminal interface of each instance for administration.

B.

Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a remote SSH session.

C.

Create an administrative SSH key pair. Load the public key into each EC2 instance. Deploy a bastion host in a public subnet to provide a tunnel for administration of each instance.

D.

Establish an AWS Site-to-Site VPN connection. Instruct administrators to use their local on-premises machines to connect directly to the instances by using SSH keys across the VPN tunnel.

A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity.

Which architecture offers the HIGHEST availability?

A.

Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.

B.

Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.

C.

Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.

D.

Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.

A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally by using AWS Organizations. The company's security team needs a single sign-on (SSO) solution across all the company's accounts. The company must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory.

Which solution will meet these requirements?

A.

Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.

B.

Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.

C.

Use AWS Directory Service. Create a two-way trust relationship with the company's self-managed Microsoft Active Directory.

D.

Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.

A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location.

What should a solutions architect do to meet these requirements?

A.

Move the catalog to Amazon ElastiCache for Redis.

B.

Deploy a larger EC2 instance with a larger instance store.

C.

Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.

D.

Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.

A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.

Which solution will meet these requirements?

A.

Configure an S3 interface endpoint.

B.

Configure an S3 gateway endpoint.

C.

Create an S3 bucket in a private subnet.

D.

Create an S3 bucket in the same Region as the EC2 instance.
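
Note: a gateway endpoint keeps S3 API traffic on the AWS network by adding a route-table entry, with no per-hour endpoint charge. A sketch with hypothetical VPC and route table IDs:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234567890def",            # hypothetical VPC
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],  # route tables of the app subnets
    )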

An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3 bucket. The company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS) standard queue. The SQS queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users through email.

Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are invoking the Lambda function more than once, resulting in multiple email messages.

What should the solutions architect do to resolve this issue with the LEAST operational overhead?

A.

Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.

B.

Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.

C.

Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window timeout.

D.

Modify the Lambda function to delete each message from the SQS queue immediately after the message is read before processing.
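
Note: a message becomes visible again, and is delivered a second time, when its visibility timeout expires before the consumer finishes. A sketch of raising the timeout on an existing queue (the URL and value are illustrative; the value should exceed the function timeout plus the batch window):

    import boto3

    sqs = boto3.client("sqs")
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/image-events",
        Attributes={"VisibilityTimeout": "180"},  # seconds
    )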

A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.

Which solution meets these requirements with the LEAST amount of operational overhead?

A.

Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.

B.

Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.

C.

Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.

D.

Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
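
Note: the aws:PrincipalOrgID condition key mentioned in option A attaches to the bucket policy itself. A sketch with a hypothetical bucket name and organization ID:

    import boto3
    import json

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowOrgMembersOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::project-reports/*",  # hypothetical bucket
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        }],
    }
    boto3.client("s3").put_bucket_policy(
        Bucket="project-reports", Policy=json.dumps(policy)
    )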

A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.

Which solution will meet these requirements?

A.

Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.

B.

Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.

C.

Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.

D.

Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage.

A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control.

Which solution will satisfy these requirements?

A.

Configure Amazon EFS storage and set the Active Directory domain for authentication.

B.

Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.

C.

Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.

D.

Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.

A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner's AWS account. The AMI is backed by Amazon Elastic Block Store (Amazon EBS) and uses a customer managed customer master key (CMK) to encrypt EBS volume snapshots.

What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner's AWS account?

A.

Make the encrypted AMI and snapshots publicly available. Modify the CMK's key policy to allow the MSP Partner's AWS account to use the key.

B.

Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the CMK's key policy to allow the MSP Partner's AWS account to use the key.

C.

Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the CMK's key policy to trust a new CMK that is owned by the MSP Partner for encryption.

D.

Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner's AWS account. Encrypt the S3 bucket with a CMK that is owned by the MSP Partner. Copy and launch the AMI in the MSP Partner's AWS account.
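
Note: the launchPermission mechanism referenced in options B and C is an image attribute, and the encrypted snapshots behind the AMI carry their own share permission. A sketch with hypothetical IDs and partner account number:

    import boto3

    ec2 = boto3.client("ec2")
    # Share the AMI with one specific account only (no public access).
    ec2.modify_image_attribute(
        ImageId="ami-0abc1234567890def",                         # hypothetical AMI
        LaunchPermission={"Add": [{"UserId": "111122223333"}]},  # partner account
    )
    # The EBS snapshot behind the AMI must be shared the same way; the CMK's
    # key policy must also grant the partner account permission to use the key.
    ec2.modify_snapshot_attribute(
        SnapshotId="snap-0abc1234567890def",  # hypothetical snapshot
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=["111122223333"],
    )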

A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.

Which storage option meets these requirements?

A.

S3 Standard

B.

S3 Intelligent-Tiering

C.

S3 Standard-Infrequent Access (S3 Standard-IA)

D.

S3 One Zone-Infrequent Access (S3 One Zone-IA)

A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis.

The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.

What is the MOST operationally efficient solution that meets these requirements?

A.

Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.

B.

Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.

C.

Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.

D.

Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of each message, and analyze the message data as needed. If a message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.

A company receives 10 TB of instrumentation data each day from several machines located at a single factory. The data consists of JSON files stored on a storage area network (SAN) in an on-premises data center located within the factory. The company wants to send this data to Amazon S3 where it can be accessed by several additional systems that provide critical near-real-time analytics. A secure transfer is important because the data is considered sensitive.

Which solution offers the MOST reliable data transfer?

A.

AWS DataSync over the public internet

B.

AWS DataSync over AWS Direct Connect

C.

AWS Database Migration Service (AWS DMS) over the public internet

D.

AWS Database Migration Service (AWS DMS) over AWS Direct Connect

A company needs to keep user transaction data in an Amazon DynamoDB table.

The company must retain the data for 7 years.

What is the MOST operationally efficient solution that meets these requirements?

A.

Use DynamoDB point-in-time recovery to back up the table continuously.

B.

Use AWS Backup to create backup schedules and retention policies for the table.

C.

Create an on-demand backup of the table by using the DynamoDB console. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.

D.

Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function. Configure the Lambda function to back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.

A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.

The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.

What should a solutions architect do to meet these requirements?

A.

Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on-premises file data to FSx for Windows File Server. Reconfigure the workloads to use FSx for Windows File Server on AWS.

B.

Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3 File Gateway. Reconfigure the on-premises workloads and the cloud workloads to use the S3 File Gateway.

C.

Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon S3. Reconfigure the workloads to use either Amazon S3 directly or the S3 File Gateway, depending on each workload's location.

D.

Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-premises workloads to use the FSx File Gateway.

A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS.

What should a solutions architect do to meet this requirement?

A.

Update the ALB's network ACL to accept only HTTPS traffic.

B.

Create a rule that replaces the HTTP in the URL with HTTPS.

C.

Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.

D.

Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
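
Note: the redirect in option C is expressed as a listener action on port 80. A sketch with a hypothetical load balancer ARN:

    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                        "loadbalancer/app/web/0123456789abcdef",  # hypothetical ARN
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Port": "443",
                "StatusCode": "HTTP_301",  # permanent redirect to HTTPS
            },
        }],
    )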

An Amazon EC2 administrator created the following policy, which is associated with an IAM group that contains several users.

What is the effect of this policy?

A.

Users can terminate an EC2 instance in any AWS Region except us-east-1.

B.

Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region.

C.

Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.

D.

Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.

A company is running a popular social media website. The website gives users the ability to upload images to share with other users. The company wants to make sure that the images do not contain inappropriate content. The company needs a solution that minimizes development effort.

What should a solutions architect do to meet these requirements?

A.

Use Amazon Comprehend to detect inappropriate content. Use human review for low-confidence predictions.

B.

Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions.

C.

Use Amazon SageMaker to detect inappropriate content. Use ground truth to label low-confidence predictions.

D.

Use AWS Fargate to deploy a custom machine learning model to detect inappropriate content. Use ground truth to label low-confidence predictions.

A solutions architect is developing a multiple-subnet VPC architecture. The solution will consist of six subnets in two Availability Zones. The subnets are defined as public, private, and dedicated for databases. Only the Amazon EC2 instances running in the private subnets should be able to access a database.

Which solution meets these requirements?

A.

Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.

B.

Create a security group that denies ingress from the security group used by instances in the public subnets. Attach the security group to an Amazon RDS DB instance.

C.

Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance.

D.

Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.

A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized workload.

What should a solutions architect do to meet these requirements?

A.

Use Amazon EC2 instances, and install Docker on the instances.

B.

Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes.

C.

Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.

D.

Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).

A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in size.

Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation.

What should a solutions architect do to meet these requirements with the LEAST development effort?

A.

Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger an S3 Lifecycle policy to remove the objects that contain PII.

B.

Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.

C.

Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.

D.

Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a notification to the administrators and trigger an S3 Lifecycle policy to remove the objects that contain PII.

A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days the files are rarely accessed.

The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage issues.

Which solution will meet these requirements?

A.

Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.

B.

Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.

C.

Create an Amazon FSx for Windows File Server file system to extend the company's storage space.

D.

Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.

A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs to be removed from the website and the data must be sent to multiple target systems.

Which design should a solutions architect recommend?

A.

Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to consume.

B.

Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) FIFO queue for the targets to consume.

C.

Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.

D.

Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.

A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.

What should the solutions architect do to meet this requirement?

A.

Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.

B.

Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.

C.

Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.

D.

Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.

A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.

Which solution meets these requirements and is the MOST operationally efficient?

A.

Server-side encryption with customer-provided keys (SSE-C)

B.

Server-side encryption with Amazon S3 managed keys (SSE-S3)

C.

Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation

D.

Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation
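
Note: with SSE-KMS, every use of the key is recorded in AWS CloudTrail, and yearly rotation can be automated on the key itself. A sketch with a hypothetical bucket name:

    import boto3

    kms = boto3.client("kms")
    s3 = boto3.client("s3")

    key_id = kms.create_key(Description="S3 data key")["KeyMetadata"]["KeyId"]
    kms.enable_key_rotation(KeyId=key_id)  # automatic yearly rotation

    s3.put_bucket_encryption(
        Bucket="confidential-data",  # hypothetical bucket
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                },
            }]
        },
    )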

A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC.

Which combination of steps should a solutions architect take to accomplish this? (Select TWO.)

A.

Configure a VPC gateway endpoint for Amazon S3 within the VPC.

B.

Create a bucket policy to make the objects in the S3 bucket public.

C.

Create a bucket policy that limits access to only the application tier running in the VPC.

D.

Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.

E.

Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.

An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the image in its compressed form in a different S3 bucket.

A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically.

Which combination of actions will meet these requirements? (Choose two.)

A.

Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket.

B.

Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue.

C.

Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text file in memory and use the text file to keep track of the images that were processed.

D.

Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue, log the file name in a text file on the EC2 instance and invoke the Lambda function.

E.

Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner's email address for further processing.
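
Note: the S3-to-SQS wiring described in option A is a bucket notification configuration. A sketch with hypothetical names (the queue's access policy must already allow s3.amazonaws.com to send messages):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_notification_configuration(
        Bucket="uploaded-images",  # hypothetical bucket
        NotificationConfiguration={
            "QueueConfigurations": [{
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-uploads",
                "Events": ["s3:ObjectCreated:*"],  # fire on every new upload
            }]
        },
    )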

A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase the application's elasticity and availability.

The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company's development team pulls a full export of the production database to populate a database in the staging environment. During this period, users experience unacceptable application latency. The development team is unable to use the staging environment until the procedure completes.

A solutions architect must recommend a replacement architecture that alleviates the application latency issue. The replacement architecture also must give the development team the ability to continue using the staging environment without delay.

Which solution meets these requirements?

A.

Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.

B.

Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on demand.

C.

Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database.

D.

Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.

A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.

Which action meets these requirements for storing and retrieving location data?

A.

Use Amazon Athena with Amazon S3.

B.

Use Amazon API Gateway with AWS Lambda.

C.

Use Amazon QuickSight with Amazon Redshift.

D.

Use Amazon API Gateway with Amazon Kinesis Data Analytics.

A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events.

A solutions architect needs to design a solution that stores customer data that is created during database upgrades

Which solution will meet these requirements?

A.

Provision an Amazon RDS Proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS Proxy.

B.

Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database.

C.

Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to the database.

D.

Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.

A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located in the same AWS Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce these costs.

How can the solutions architect meet this requirement?

A.

Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.

B.

Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.

C.

Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.

D.

Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.

A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.

What should a solutions architect propose to ensure users see all of their documents at once?

A.

Copy the data so both EBS volumes contain all the documents.

B.

Configure the Application Load Balancer to direct a user to the server with the documents.

C.

Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.

D.

Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

A company has registered its domain name with Amazon Route 53. The company uses Amazon API Gateway in the ca-central-1 Region as a public interface for its backend microservice APIs. Third-party services consume the APIs securely. The company wants to design its API Gateway URL with the company's domain name and corresponding certificate so that the third-party services can use HTTPS.

Which solution will meet these requirements?

A.

Create stage variables in API Gateway with Name="Endpoint-URL" and Value="Company Domain Name" to overwrite the default URL. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM).

B.

Create Route 53 DNS records with the company's domain name. Point the alias record to the Regional API Gateway stage endpoint. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region.

C.

Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the API Gateway endpoint. Configure Route 53 to route traffic to the API Gateway endpoint.

D.

Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region. Attach the certificate to the API Gateway APIs. Create Route 53 DNS records with the company's domain name. Point an A record to the company's domain name.

A company is developing a two-tier web application on AWS. The company's developers have deployed the application on an Amazon EC2 instance that connects directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application. The company must also implement a solution to automatically rotate the database credentials on a regular basis.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Store the database credentials in the instance metadata. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and instance metadata at the same time.

B.

Store the database credentials in a configuration file in an encrypted Amazon S3 bucket. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and the credentials in the configuration file at the same time. Use S3 Versioning to ensure the ability to fall back to previous values.

C.

Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required permission to the EC2 role to grant access to the secret.

D.

Store the database credentials as encrypted parameters in AWS Systems Manager Parameter Store. Turn on automatic rotation for the encrypted parameters. Attach the required permission to the EC2 role to grant access to the encrypted parameters.
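
Note: Secrets Manager rotation is configured on the secret, and the application fetches credentials at runtime instead of hardcoding them. A sketch with hypothetical secret and rotation-function names:

    import boto3

    sm = boto3.client("secretsmanager")
    sm.rotate_secret(
        SecretId="prod/app/db-credentials",  # hypothetical secret name
        RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-rotator",
        RotationRules={"AutomaticallyAfterDays": 30},
    )

    # At runtime, the application reads the current credentials:
    secret = sm.get_secret_value(SecretId="prod/app/db-credentials")["SecretString"]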

A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple processing to transform the data and save the data in JSON format for later analysis.

Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files. On other days, users will upload a few files or no files.

Which solution meets these requirements with the LEAST operational overhead?

A.

Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an Amazon Aurora DB cluster.

B.

Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.

C.

Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.

D.

Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in Amazon Aurora DB cluster.
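To illustrate the event-driven pattern in option C, here is a hedged sketch of a Lambda handler subscribed to the SQS queue. The table name and the transform logic are assumptions; each SQS record body carries an S3 event notification once the bucket's notification configuration targets the queue.

import json
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("processed-files")  # placeholder table name

def handler(event, context):
    # Each SQS record body carries an S3 event notification.
    for sqs_record in event["Records"]:
        s3_event = json.loads(sqs_record["body"])
        for s3_record in s3_event.get("Records", []):  # test events have no "Records"
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

            # Placeholder one-time transform: wrap the raw text as JSON for later analysis.
            item = {"object_key": key, "data": json.dumps({"raw": body})}
            table.put_item(Item=item)

Because Lambda scales with the queue depth and costs nothing when idle, this combination handles both the high-volume days and the empty days with no capacity management.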

A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.

What should a solutions architect do to meet these requirements?

A.

Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.

B.

Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.

C.

Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.

D.

Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
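For option A, once a single distribution fronts both the S3 origin (static data) and the ALB origin (dynamic data), the Route 53 side is one alias record. A minimal sketch, assuming the distribution already exists; the hosted zone ID and domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront aliases.

import boto3

r53 = boto3.client("route53")

# Alias the company's domain to the CloudFront distribution.
r53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder: the company's hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",                       # placeholder domain
            "Type": "A",
            "AliasTarget": {
                "DNSName": "d111111abcdef8.cloudfront.net",  # placeholder distribution
                "HostedZoneId": "Z2FDTNDATAQYW2",            # CloudFront's fixed zone ID
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)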

A company recently created a disaster recovery site in a different AWS Region. The company needs to transfer large amounts of data back and forth between NFS file systems in the two Regions on a periodic basis.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Use AWS DataSync.

B.

Use AWS Snowball devices.

C.

Set up an SFTP server on Amazon EC2.

D.

Use AWS Database Migration Service (AWS DMS).
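A hedged sketch of what option A looks like with boto3, once a DataSync agent is deployed in front of each NFS file system. The hostnames, agent ARNs, account ID, and schedule below are all placeholders.

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")  # assumed Region

# Register each NFS file system as a DataSync location.
source = datasync.create_location_nfs(
    ServerHostname="nfs.primary.example.com",   # placeholder
    Subdirectory="/export/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-src"]},
)
dest = datasync.create_location_nfs(
    ServerHostname="nfs.dr.example.com",        # placeholder
    Subdirectory="/export/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-dst"]},
)

# One managed task handles the periodic transfer; there are no servers to operate.
datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=dest["LocationArn"],
    Name="dr-nfs-sync",
    Schedule={"ScheduleExpression": "cron(0 2 ? * SUN *)"},  # placeholder weekly schedule
)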

A company runs a web application on Amazon EC2 instances in multiple Availability Zones. The EC2 instances are in private subnets. A solutions architect implements an internet-facing Application Load Balancer (ALB) and specifies the EC2 instances as the target group. However, the internet traffic is not reaching the EC2 instances.

How should the solutions architect reconfigure the architecture to resolve this issue?

A.

Replace the ALB with a Network Load Balancer. Configure a NAT gateway in a public subnet to allow internet traffic.

B.

Move the EC2 instances to public subnets. Add a rule to the EC2 instances’ security groups to allow outbound traffic to 0.0.0.0/0.

C.

Update the route tables for the EC2 instances’ subnets to send 0.0.0.0/0 traffic through the internet gateway route. Add a rule to the EC2 instances’ security groups to allow outbound traffic to 0.0.0.0/0.

D.

Create public subnets in each Availability Zone. Associate the public subnets with the ALB. Update the route tables for the public subnets with a route to the private subnets.
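Option D in code form, roughly: create a public subnet per Availability Zone, give those subnets a default route to the internet gateway, and attach them to the ALB while the instances stay in the private subnets. A sketch with placeholder IDs and CIDRs:

import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

VPC_ID = "vpc-0123456789abcdef0"   # placeholder
IGW_ID = "igw-0123456789abcdef0"   # placeholder
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc"  # placeholder

# One public subnet per Availability Zone that the private instances use.
public_subnets = []
for az, cidr in [("us-east-1a", "10.0.100.0/24"), ("us-east-1b", "10.0.101.0/24")]:
    subnet = ec2.create_subnet(VpcId=VPC_ID, AvailabilityZone=az, CidrBlock=cidr)
    public_subnets.append(subnet["Subnet"]["SubnetId"])

# A route table whose default route targets the internet gateway makes the subnets public.
rt = ec2.create_route_table(VpcId=VPC_ID)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt, DestinationCidrBlock="0.0.0.0/0", GatewayId=IGW_ID)
for subnet_id in public_subnets:
    ec2.associate_route_table(RouteTableId=rt, SubnetId=subnet_id)

# The ALB lives in the public subnets; the EC2 targets stay in the private subnets.
elbv2.set_subnets(LoadBalancerArn=ALB_ARN, Subnets=public_subnets)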

A company needs a backup strategy for its three-tier stateless web application. The web application runs on Amazon EC2 instances in an Auto Scaling group with a dynamic scaling policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for PostgreSQL. The web application does not require temporary local storage on the EC2 instances. The company's recovery point objective (RPO) is 2 hours.

The backup strategy must maximize scalability and optimize resource utilization for this environment.

Which solution will meet these requirements?

A.

Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database every 2 hours to meet the RPO.

B.

Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the RPO.

C.

Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.

D.

Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
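For the RDS half of the AMI-plus-automated-backups approach in option C, enabling automated backups and performing a point-in-time restore look roughly like this. The instance identifiers, retention period, and restore time are assumptions.

from datetime import datetime, timezone
import boto3

rds = boto3.client("rds")

# Enable automated backups; continuous transaction logs then allow point-in-time
# recovery to within roughly the last five minutes, comfortably inside a 2-hour RPO.
rds.modify_db_instance(
    DBInstanceIdentifier="web-app-db",   # placeholder instance name
    BackupRetentionPeriod=7,             # assumed retention in days
    ApplyImmediately=True,
)

# In a recovery scenario, restore to a chosen point in time as a new instance.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="web-app-db",
    TargetDBInstanceIdentifier="web-app-db-restored",
    RestoreTime=datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc),  # placeholder time
)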

A company plans to use Amazon ElastiCache for its multi-tier web application. A solutions architect creates a Cache VPC for the ElastiCache cluster and an App VPC for the application's Amazon EC2 instances. Both VPCs are in the us-east-1 Region.

The solutions architect must implement a solution to provide the application's EC2 instances with access to the ElastiCache cluster.

Which solution will meet these requirements MOST cost-effectively?

A.

Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the ElastiCache cluster's security group to allow inbound connections from the application's security group.

B.

Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the ElastiCache cluster's security group to allow inbound connections from the application's security group.

C.

Create a peering connection between the VPCs. Add a route table entry for the peering connection in both VPCs. Configure an inbound rule for the peering connection's security group to allow inbound connections from the application's security group.

D.

Create a Transit VPC. Update the VPC route tables in the Cache VPC and the App VPC to route traffic through the Transit VPC. Configure an inbound rule for the Transit VPC's security group to allow inbound connections from the application's security group.
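The peering setup in option A, sketched with placeholder VPC, route table, and security group IDs; the CIDRs and the Redis port (6379) are assumptions about the specific environment and engine. Same-Region VPC peering carries no hourly charge, which is what makes it cheaper than operating a Transit VPC.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Peer the App VPC with the Cache VPC.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-app0123456789abcd",        # placeholder App VPC
    PeerVpcId="vpc-cache0123456789ab",    # placeholder Cache VPC
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the other VPC's CIDR through the peering connection.
ec2.create_route(RouteTableId="rtb-app00000000000000",
                 DestinationCidrBlock="10.1.0.0/16", VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-cache0000000000000",
                 DestinationCidrBlock="10.0.0.0/16", VpcPeeringConnectionId=pcx_id)

# Allow the app security group into the cache security group on the cache port.
ec2.authorize_security_group_ingress(
    GroupId="sg-cache000000000000",       # placeholder ElastiCache security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 6379, "ToPort": 6379,
        "UserIdGroupPairs": [{"GroupId": "sg-app0000000000000"}],  # placeholder app SG
    }],
)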

A transaction processing company has weekly scripted batch jobs that run on Amazon EC2 instances. The EC2 instances are in an Auto Scaling group. The number of transactions can vary, but the baseline CPU utilization that is noted on each run is at least 60%. The company needs to provision the capacity 30 minutes before the jobs run.

Currently, engineers complete this task by manually modifying the Auto Scaling group parameters. The company does not have the resources to analyze the required capacity trends for the Auto Scaling group counts. The company needs an automated way to modify the Auto Scaling group's capacity.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Create a dynamic scaling policy for the Auto Scaling group. Configure the policy to scale based on the CPU utilization metric with a target value of 60%.

B.

Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum capacity. Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.

C.

Create a predictive scaling policy for the Auto Scaling group. Configure the policy to scale based on forecast. Set the scaling metric to CPU utilization. Set the target value for the metric to 60%. In the policy, set the instances to pre-launch 30 minutes before the jobs run.

D.

Create an Amazon EventBridge event to invoke an AWS Lambda function when the CPU utilization metric value for the Auto Scaling group reaches 60%. Configure the Lambda function to increase the Auto Scaling group's desired capacity and maximum capacity by 20%.
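Option C maps almost one-to-one onto a predictive scaling policy: the target value matches the observed 60% baseline, and SchedulingBufferTime is the "pre-launch" knob, expressed in seconds. A sketch with a placeholder group name:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-workers",   # placeholder Auto Scaling group
    PolicyName="weekly-batch-forecast",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 60.0,  # keep average CPU at the observed 60% baseline
            "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"},
        }],
        "Mode": "ForecastAndScale",    # act on the forecast, not just report it
        "SchedulingBufferTime": 1800,  # launch instances 30 minutes ahead of forecast need
    },
)

Because the service builds the forecast from historical CloudWatch data, no one has to analyze the capacity trends manually, which is what keeps the operational overhead low.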
