

SAP-C02 Amazon Web Services AWS Certified Solutions Architect - Professional Free Practice Exam Questions (2025 Updated)

Prepare effectively for your Amazon Web Services SAP-C02 AWS Certified Solutions Architect - Professional certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2025, ensuring you have the most current resources to build confidence and succeed on your first attempt.


A company has a data lake in Amazon S3 that needs to be accessed by hundreds of applications across many AWS accounts. The company's information security policy states that the S3 bucket must not be accessed over the public internet and that each application should have the minimum permissions necessary to function.

To meet these requirements, a solutions architect plans to use an S3 access point that is restricted to specific VPCs for each application.

Which combination of steps should the solutions architect take to implement this solution? (Select TWO.)

A.

Create an S3 access point for each application in the AWS account that owns the S3 bucket. Configure each access point to be accessible only from the application's VPC. Update the bucket policy to require access from an access point.

B.

Create an interface endpoint for Amazon S3 in each application's VPC. Configure the endpoint policy to allow access to an S3 access point. Create a VPC gateway attachment for the S3 endpoint.

C.

Create a gateway endpoint for Amazon S3 in each application's VPC. Configure the endpoint policy to allow access to an S3 access point. Specify the route table that is used to access the access point.

D.

Create an S3 access point for each application in each AWS account and attach the access points to the S3 bucket. Configure each access point to be accessible only from the application's VPC. Update the bucket policy to require access from an access point.

E.

Create a gateway endpoint for Amazon S3 in the data lake's VPC. Attach an endpoint policy to allow access to the S3 bucket. Specify the route table that is used to access the bucket.
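
For study purposes, here is a minimal boto3 sketch of the mechanics behind option A: a VPC-restricted access point plus a bucket policy that delegates access control to access points owned by the bucket's account. All names, IDs, and the account number are hypothetical.

```python
import json
import boto3

ACCOUNT_ID = "111122223333"            # hypothetical account
BUCKET = "example-data-lake-bucket"    # hypothetical bucket

s3control = boto3.client("s3control", region_name="us-east-1")

# Create an access point scoped to one application's VPC.
s3control.create_access_point(
    AccountId=ACCOUNT_ID,
    Name="app1-access-point",
    Bucket=BUCKET,
    VpcConfiguration={"VpcId": "vpc-0abc1234def567890"},
)

# Bucket policy that allows access only through access points
# that belong to the bucket owner's account.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {
            "StringEquals": {"s3:DataAccessPointAccount": ACCOUNT_ID}
        },
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```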

Question:

A company is migrating a containerized Kubernetes app with manifest files to AWS. What is the easiest migration path?

A.

App Runner + open-source repo

B.

Amazon EKS with managed node groups and Aurora

C.

ECS on EC2 + task definitions

D.

Rebuild Kubernetes cluster on EC2 manually

An application is using an Amazon RDS for MySQL Multi-AZ DB instance in the us-east-1 Region. After a failover test, the application lost the connections to the database and could not re-establish the connections. After a restart of the application, the application re-established the connections.

A solutions architect must implement a solution so that the application can re-establish connections to the database without requiring a restart.

Which solution will meet these requirements?

A.

Create an Amazon Aurora MySQL Serverless v1 DB instance. Migrate the RDS DB instance to the Aurora Serverless v1 DB instance. Update the connection settings in the application to point to the Aurora reader endpoint.

B.

Create an RDS proxy. Configure the existing RDS endpoint as a target. Update the connection settings in the application to point to the RDS proxy endpoint.

C.

Create a two-node Amazon Aurora MySQL DB cluster. Migrate the RDS DB instance to the Aurora DB cluster. Create an RDS proxy. Configure the existing RDS endpoint as a target. Update the connection settings in the application to point to the RDS proxy endpoint.

D.

Create an Amazon S3 bucket. Export the database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Configure Amazon Athena to use the S3 bucket as a data store. Install the latest Open Database Connectivity (ODBC) driver for the application. Update the connection settings in the application to point to the Athena endpoint.
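
To make the RDS Proxy pattern in options B and C concrete, here is a hedged boto3 sketch that creates a proxy in front of an existing MySQL instance and registers the instance as its target; the proxy endpoint then replaces the instance endpoint in the application's connection settings. All identifiers and ARNs are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Create the proxy. RDS Proxy reads credentials from Secrets Manager.
rds.create_db_proxy(
    DBProxyName="app-mysql-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
)

# Register the existing DB instance as the proxy's target.
rds.register_db_proxy_targets(
    DBProxyName="app-mysql-proxy",
    DBInstanceIdentifiers=["app-mysql-instance"],
)
```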

A company is developing an application that will display financial reports. The company needs a solution that can store financial information that comes from multiple systems. The solution must provide the reports through a web interface and must serve the data with less than 500 milliseconds of latency to end users. The solution also must be highly available and must have an RTO of 30 seconds.

Which solution will meet these requirements?

A.

Use an Amazon Redshift cluster to store the data. Use a static website that is hosted on Amazon S3 with backend APIs that are served by an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to provide the reports to the application.

B.

Use Amazon S3 to store the data. Use Amazon Athena to provide the reports to the application. Use AWS App Runner to serve the application to view the reports.

C.

Use Amazon DynamoDB to store the data. Use an embedded Amazon QuickSight dashboard with Direct Query datasets to provide the reports to the application.

D.

Use Amazon Keyspaces (for Apache Cassandra) to store the data. Use AWS Elastic Beanstalk to provide the reports to the application.

A company has developed an application that is running Windows Server on VMware vSphere VMs that the company hosts on premises. The application data is stored in a proprietary format that must be read through the application. The company manually provisioned the servers and the application.

As part of its disaster recovery plan, the company wants the ability to host its application on AWS temporarily if the company's on-premises environment becomes unavailable. The company wants the application to return to on-premises hosting after a disaster recovery event is complete. The RPO is 5 minutes.

Which solution meets these requirements with the LEAST amount of operational overhead?

A.

Configure AWS DataSync. Replicate the data to Amazon Elastic Block Store (Amazon EBS) volumes. When the on-premises environment is unavailable, use AWS CloudFormation templates to provision Amazon EC2 instances and attach the EBS volumes.

B.

Configure AWS Elastic Disaster Recovery. Replicate the data to replication Amazon EC2 instances that are attached to Amazon Elastic Block Store (Amazon EBS) volumes. When the on-premises environment is unavailable, use Elastic Disaster Recovery to launch EC2 instances that use the replicated volumes.

C.

Provision an AWS Storage Gateway file gateway. Replicate the data to an Amazon S3 bucket. When the on-premises environment is unavailable, use AWS Backup to restore the data to Amazon Elastic Block Store (Amazon EBS) volumes and launch Amazon EC2 instances from these EBS volumes.

D.

Provision an Amazon FSx for Windows File Server file system on AWS. Replicate the data to the file system. When the on-premises environment is unavailable, use AWS CloudFormation templates to provision Amazon EC2 instances and use AWS CloudFormation Init commands to mount the Amazon FSx file shares.

A flood monitoring agency has deployed more than 10,000 water-level monitoring sensors. Sensors send continuous data updates, and each update is less than 1 MB in size. The agency has a fleet of on-premises application servers. These servers receive updates from the sensors, convert the raw data into a human-readable format, and write the results to an on-premises relational database server. Data analysts then use simple SQL queries to monitor the data.

The agency wants to increase overall application availability and reduce the effort that is required to perform maintenance tasks. These maintenance tasks, which include updates and patches to the application servers, cause downtime. While an application server is down, data is lost from sensors because the remaining servers cannot handle the entire workload.

The agency wants a solution that optimizes operational overhead and costs. A solutions architect recommends the use of AWS IoT Core to collect the sensor data.

What else should the solutions architect recommend to meet these requirements?

A.

Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to .csv format, and insert it into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.

B.

Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to Apache Parquet format and save it to an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.

C.

Send the sensor data to an Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) application to convert the data to .csv format and store it in an Amazon S3 bucket. Import the data into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.

D.

Send the sensor data to an Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) application to convert the data to Apache Parquet format and store it in an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
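
Options A and B both rely on a Lambda function that transforms Kinesis Data Firehose records. The sketch below shows the standard Firehose transformation contract (base64-encoded records in, status-tagged records out), using a simple CSV line as the transformed payload; a Parquet conversion would typically use Firehose's built-in record format conversion or a columnar library instead. The sensor payload field names are hypothetical.

```python
import base64
import json

def lambda_handler(event, context):
    """Kinesis Data Firehose transformation handler (sketch).

    Firehose invokes the function with a batch of records; each record's
    data is base64 encoded. Every record must be returned with a result
    of Ok, Dropped, or ProcessingFailed.
    """
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Hypothetical reshaping of a raw sensor reading into a CSV row.
        row = f'{payload["sensor_id"]},{payload["timestamp"]},{payload["level_m"]}\n'
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(row.encode()).decode(),
        })
    return {"records": output}
```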

A company is developing a web application that runs on Amazon EC2 instances in an Auto Scaling group behind a public-facing Application Load Balancer (ALB). Only users from a specific country are allowed to access the application. The company needs the ability to log the access requests that have been blocked. The solution should require the least possible maintenance.

Which solution meets these requirements?

A.

Create an IPSet containing a list of IP ranges that belong to the specified country. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from an IP range in the IPSet. Associate the rule with the web ACL. Associate the web ACL with the ALB.

B.

Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from the specified country. Associate the rule with the web ACL. Associate the web ACL with the ALB.

C.

Configure AWS Shield to block any requests that do not originate from the specified country. Associate AWS Shield with the ALB.

D.

Create a security group rule that allows ports 80 and 443 from IP ranges that belong to the specified country. Associate the security group with the ALB.
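
The geographic match capability referenced in option B is a first-class AWS WAF statement, so no IP set has to be maintained by hand. A minimal boto3 sketch, with a hypothetical ACL name and the US standing in for the allowed country; the web ACL would still need to be associated with the ALB (associate_web_acl) and have logging enabled to capture blocked requests.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="geo-restrict-acl",
    Scope="REGIONAL",  # REGIONAL scope is required for an ALB
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "block-other-countries",
        "Priority": 0,
        # Block anything that is NOT from the allowed country.
        "Statement": {
            "NotStatement": {
                "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}}
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "blockOtherCountries",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "geoRestrictAcl",
    },
)
```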

A company runs applications in hundreds of production AWS accounts. The company uses AWS Organizations with all features enabled and has a centralized backup operation that uses AWS Backup.

The company is concerned about ransomware attacks. To address this concern, the company has created a new policy that all backups must be resilient to breaches of privileged-user credentials in any production account.

Which combination of steps will meet this new requirement? (Select THREE.)

A.

Implement cross-account backup with AWS Backup vaults in designated non-production accounts.

B.

Add an SCP that restricts the modification of AWS Backup vaults.

C.

Implement AWS Backup Vault Lock in compliance mode.

D.

Configure the backup frequency, lifecycle, and retention period to ensure that at least one backup always exists in the cold tier.

E.

Configure AWS Backup to write all backups to an Amazon S3 bucket in a designated non-production account. Ensure that the S3 bucket has S3 Object Lock enabled.

F.

Implement least privilege access for the IAM service role that is assigned to AWS Backup.
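
Option C refers to AWS Backup Vault Lock. A hedged boto3 sketch of enabling it in compliance mode on a hypothetical vault; once the cooling-off window expires, the lock and the minimum retention can no longer be removed, even by privileged users.

```python
import boto3

backup = boto3.client("backup")

# Compliance-mode Vault Lock: after ChangeableForDays elapses, recovery
# points cannot be deleted before MinRetentionDays, even by the root user.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="central-backup-vault",  # hypothetical vault name
    MinRetentionDays=30,
    MaxRetentionDays=365,
    ChangeableForDays=3,  # cooling-off window before the lock is immutable
)
```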

A company runs a proprietary stateless ETL application on an Amazon EC2 Linux instance. The application is a Linux binary, and the source code cannot be modified. The application is single-threaded, uses 2 GB of RAM, and is highly CPU intensive. The application is scheduled to run every 4 hours and runs for up to 20 minutes. A solutions architect wants to revise the architecture for the solution.

Which strategy should the solutions architect use?

A.

Use AWS Lambda to run the application. Use Amazon CloudWatch Logs to invoke the Lambda function every 4 hours.

B.

Use AWS Batch to run the application. Use an AWS Step Functions state machine to invoke the AWS Batch job every 4 hours.

C.

Use AWS Fargate to run the application. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke the Fargate task every 4 hours.

D.

Use Amazon EC2 Spot Instances to run the application. Use AWS CodeDeploy to deploy and run the application every 4 hours.
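
For reference, the scheduling half of option C looks roughly like this in boto3: an EventBridge rule with a 4-hour rate expression that launches a Fargate task. All ARNs, names, and subnet IDs are hypothetical.

```python
import boto3

events = boto3.client("events")

# Rule that fires every 4 hours.
events.put_rule(Name="etl-every-4-hours", ScheduleExpression="rate(4 hours)")

# Target the ECS cluster with a Fargate task definition.
events.put_targets(
    Rule="etl-every-4-hours",
    Targets=[{
        "Id": "etl-fargate-task",
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/etl-cluster",
        "RoleArn": "arn:aws:iam::111122223333:role/ecsEventsRole",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/etl:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0aaa1111"],
                    "AssignPublicIp": "ENABLED",
                }
            },
        },
    }],
)
```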

A VPC spans three Availability Zones, each with public and private subnets. One NAT gateway and one internet gateway exist. Private EC2 instances must connect to the internet.

Which solution meets this requirement?

A.

Add two more NAT gateways (one per AZ). Configure each private subnet to use its AZ’s NAT gateway.

B.

Add two more NAT gateways and configure public subnets.

C.

Add internet gateways per AZ and route private subnets.

D.

Add internet gateways per AZ and configure public subnets.
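
Option A's per-AZ pattern can be sketched in boto3 as follows: for each additional Availability Zone, allocate an Elastic IP, create a NAT gateway in that AZ's public subnet, and point the private subnet's route table at it. Subnet and route table IDs are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create a NAT gateway in AZ b's public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub2222",  # public subnet in AZ b
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Default route for AZ b's private subnet now stays within the AZ.
ec2.create_route(
    RouteTableId="rtb-0priv2222",  # route table of AZ b's private subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```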

A company runs an application on AWS. The application uses an Amazon Aurora MySQL database that is encrypted with the default AWS managed AWS KMS key.

The company must implement a solution to rotate the database encryption key every 180 days. The solution must provide a notification if the encryption key is noncompliant with this standard.

Which solution will meet these requirements?

A.

Configure the rotation period for the existing AWS managed KMS key to be 180 days. Implement the cmk-backing-key-rotation-enabled AWS Config managed rule for the existing KMS key. Configure AWS Config to use Amazon SNS to notify the security team if key rotation is noncompliant.

B.

Create a new AWS managed KMS key with automatic rotation set for 180 days. Take a snapshot of the database. Restore the snapshot to a new Aurora cluster that uses the new KMS key. Create an AWS Config custom rule that uses an AWS Lambda function to validate the key rotation period. Configure AWS Config to use Amazon SES to notify the security team if key encryption is noncompliant.

C.

Create a new customer managed KMS key with automatic rotation set for 180 days. Take a snapshot of the database. Restore the snapshot to a new Aurora cluster that uses the new KMS key. Create an AWS Config custom rule that uses an AWS Lambda function to validate the key rotation period. Configure AWS Config to use Amazon SNS to notify the security team if key encryption is noncompliant.

D.

Create a new customer managed KMS key with automatic rotation set for 180 days. Update the database to use the new KMS key for encryption. Implement the cmk-backing-key-rotation-enabled AWS Config managed rule for the new KMS key. Configure AWS Config to use Amazon SES to notify the security team if key rotation is noncompliant.
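
The distinction that drives this question is that only customer managed KMS keys accept a configurable rotation period; AWS managed keys do not. A minimal sketch, assuming a recent boto3 version that supports the RotationPeriodInDays parameter on EnableKeyRotation:

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key and set a 180-day rotation period.
key = kms.create_key(Description="Aurora encryption key")
kms.enable_key_rotation(
    KeyId=key["KeyMetadata"]["KeyId"],
    RotationPeriodInDays=180,
)
```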

A company has VPC flow logs enabled for its NAT gateway. The company is seeing Action = ACCEPT for inbound traffic that comes from the public IP address 198.51.100.2 destined for a private Amazon EC2 instance.

A solutions architect must determine whether the traffic represents unsolicited inbound connections from the internet. The first two octets of the VPC CIDR block are 203.0.

Which set of steps should the solutions architect take to meet these requirements?

A.

Open the AWS CloudTrail console. Select the log group that contains the NAT gateway's elastic network interface and the private instance's elastic network interface. Run a query to filter with the destination address set as "like 203.0" and the source address set as "like 198.51.100.2". Run the stats command to filter the sum of bytes transferred by the source address and the destination address.

B.

Open the Amazon CloudWatch console. Select the log group that contains the NAT gateway's elastic network interface and the private instance's elastic network interface. Run a query to filter with the destination address set as "like 203.0" and the source address set as "like 198.51.100.2". Run the stats command to filter the sum of bytes transferred by the source address and the destination address.

C.

Open the AWS CloudTrail console. Select the log group that contains the NAT gateway's elastic network interface and the private instance's elastic network interface. Run a query to filter with the destination address set as "like 198.51.100.2" and the source address set as "like 203.0". Run the stats command to filter the sum of bytes transferred by the source address and the destination address.

D.

Open the Amazon CloudWatch console. Select the log group that contains the NAT gateway's elastic network interface and the private instance's elastic network interface. Run a query to filter with the destination address set as "like 198.51.100.2" and the source address set as "like 203.0". Run the stats command to filter the sum of bytes transferred by the source address and the destination address.
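
VPC flow logs delivered to CloudWatch Logs are queried with CloudWatch Logs Insights, which is the mechanic behind options B and D. A sketch of running such a query from boto3, with a hypothetical log group name; the field names follow the default VPC Flow Logs format.

```python
import time
import boto3

logs = boto3.client("logs")

QUERY = """fields @timestamp, srcAddr, dstAddr, bytes
| filter dstAddr like '203.0' and srcAddr like '198.51.100.2'
| stats sum(bytes) as bytesTransferred by srcAddr, dstAddr"""

query_id = logs.start_query(
    logGroupName="/vpc/flow-logs",         # hypothetical log group
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=QUERY,
)["queryId"]

# Poll until the query finishes, then print the aggregated rows.
result = logs.get_query_results(queryId=query_id)
while result["status"] in ("Scheduled", "Running"):
    time.sleep(1)
    result = logs.get_query_results(queryId=query_id)
print(result["results"])
```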

A team collects and routes behavioral data for an entire company. The company runs a Multi-AZ VPC environment with public subnets, private subnets, and an internet gateway. Each public subnet also contains a NAT gateway. Most of the company's applications read from and write to Amazon Kinesis Data Streams. Most of the workloads are in private subnets.

A solutions architect must review the infrastructure. The solutions architect needs to reduce costs and maintain the function of the applications. The solutions architect uses Cost Explorer and notices that the cost in the EC2-Other category is consistently high. A further review shows that NatGateway-Bytes charges are increasing the cost in the EC2-Other category.

What should the solutions architect do to meet these requirements?

A.

Enable VPC Flow Logs. Use Amazon Athena to analyze the logs for traffic that can be removed. Ensure that security groups are blocking traffic that is responsible for high costs.

B.

Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that applications have the correct IAM permissions to use the interface VPC endpoint.

C.

Enable VPC Flow Logs and Amazon Detective. Review Detective findings for traffic that is not related to Kinesis Data Streams. Configure security groups to block that traffic.

D.

Add an interface VPC endpoint for Kinesis Data Streams to the VPC. Ensure that the VPC endpoint policy allows traffic from the applications.
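
An interface VPC endpoint for Kinesis Data Streams, as in options B and D, keeps the traffic on the AWS network and removes the per-byte NAT gateway data processing charge. A boto3 sketch with hypothetical IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Interface endpoint for Kinesis Data Streams in the workload VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0priv1111", "subnet-0priv2222"],
    SecurityGroupIds=["sg-0kinesis443"],   # must allow TCP 443 from the apps
    PrivateDnsEnabled=True,                # existing SDK endpoints keep working
)
```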

A company is running a serverless application that consists of several AWS Lambda functions and Amazon DynamoDB tables. The company has created new functionality that requires the Lambda functions to access an Amazon Neptune DB cluster. The Neptune DB cluster is located in three subnets in a VPC.

Which of the possible solutions will allow the Lambda functions to access the Neptune DB cluster and DynamoDB tables? (Select TWO.)

A.

Create three public subnets in the Neptune VPC, and route traffic through an internet gateway. Host the Lambda functions in the three new public subnets.

B.

Create three private subnets in the Neptune VPC, and route internet traffic through a NAT gateway. Host the Lambda functions in the three new private subnets.

C.

Host the Lambda functions outside the VPC. Update the Neptune security group to allow access from the IP ranges of the Lambda functions.

D.

Host the Lambda functions outside the VPC. Create a VPC endpoint for the Neptune database, and have the Lambda functions access Neptune over the VPC endpoint.

E.

Create three private subnets in the Neptune VPC. Host the Lambda functions in the three new isolated subnets. Create a VPC endpoint for DynamoDB, and route DynamoDB traffic to the VPC endpoint.
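
The pattern in option E combines a VPC-attached Lambda function with a gateway endpoint for DynamoDB. A hedged boto3 sketch, with hypothetical function, subnet, and VPC identifiers:

```python
import boto3

lam = boto3.client("lambda")
ec2 = boto3.client("ec2")

# Attach the function to the Neptune VPC's private subnets.
lam.update_function_configuration(
    FunctionName="graph-query-fn",
    VpcConfig={
        "SubnetIds": ["subnet-0a1", "subnet-0b2", "subnet-0c3"],
        "SecurityGroupIds": ["sg-0lambda"],
    },
)

# Gateway endpoint so the VPC-attached function can still reach DynamoDB
# without internet egress.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0neptune",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0priv"],
)
```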

A company that uses AWS Organizations allows developers to experiment on AWS. As part of the landing zone that the company has deployed, developers use their company email address to request an account. The company wants to ensure that developers are not launching costly services or running services unnecessarily. The company must give developers a fixed monthly budget to limit their AWS costs.

Which combination of steps will meet these requirements? (Choose three.)

A.

Create an SCP to set a fixed monthly account usage limit. Apply the SCP to the developer accounts.

B.

Use AWS Budgets to create a fixed monthly budget for each developer's account as part of the account creation process.

C.

Create an SCP to deny access to costly services and components. Apply the SCP to the developer accounts.

D.

Create an IAM policy to deny access to costly services and components. Apply the IAM policy to the developer accounts.

E.

Create an AWS Budgets alert action to terminate services when the budgeted amount is reached. Configure the action to terminate all services.

F.

Create an AWS Budgets alert action to send an Amazon Simple Notification Service (Amazon SNS) notification when the budgeted amount is reached. Invoke an AWS Lambda function to terminate all services.
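
Option B's fixed monthly budget can be created programmatically during account provisioning. A boto3 sketch that sets a 100 USD monthly cost budget with an SNS notification at 100 percent of the budgeted amount; the account ID, amount, and topic ARN are hypothetical.

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "developer-monthly-budget",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 100.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts",
        }],
    }],
)
```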

A company is running a three-tier web application in an on-premises data center. The frontend is a PHP application that is served by an Apache web server. The middle tier is a monolithic Java SE application. The storage tier is a 60 TB PostgreSQL database.

The three-tier web application recently crashed and became unresponsive. The database also reached capacity because of read operations. The company wants to migrate to AWS to resolve these issues and improve scalability,

Which combination of steps will meet these requirements with the LEAST development effort? (Select THREE.)

A.

Configure an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer to host the web server. Use Amazon EFS for the frontend static assets.

B.

Host the static single-page application on Amazon S3. Use an Amazon CloudFront distribution to serve the application.

C.

Create a Docker container to run the Java SE application. Use AWS Fargate to host the container.

D.

Create an AWS Elastic Beanstalk environment for Java to host the Java SE application.

E.

Migrate the PostgreSQL database to an Amazon EC2 instance that is larger than the on-premises PostgreSQL database.

F.

Use AWS DMS to replatform the PostgreSQL database to an Amazon Aurora PostgreSQL database. Use Aurora Auto Scaling for read replicas.

A software as a service (SaaS) company provides a case management solution to customers. As part of the solution, the company uses a standalone Simple Mail Transfer Protocol (SMTP) server to send email messages from an application. The application also stores an email template for acknowledgement email messages that populates customer data before the application sends the email message to the customer.

The company plans to migrate this messaging functionality to the AWS Cloud and needs to minimize operational overhead.

Which solution will meet these requirements MOST cost-effectively?

A.

Set up an SMTP server on Amazon EC2 instances by using an AMI from the AWS Marketplace. Store the email template in an Amazon S3 bucket. Create an AWS Lambda function to retrieve the template from the S3 bucket and to merge the customer data from the application with the template. Use an SDK in the Lambda function to send the email message.

B.

Set up Amazon Simple Email Service (Amazon SES) to send email messages. Store the email template in an Amazon S3 bucket. Create an AWS Lambda function to retrieve the template from the S3 bucket and to merge the customer data from the application with the template. Use an SDK in the Lambda function to send the email message.

C.

Set up an SMTP server on Amazon EC2 instances by using an AMI from the AWS Marketplace. Store the email template in Amazon Simple Email Service (Amazon SES) with parameters for the customer data. Create an AWS Lambda function to call the SES template and to pass customer data to replace the parameters. Use the AWS Marketplace SMTP server to send the email message.

D.

Set up Amazon Simple Email Service (Amazon SES) to send email messages. Store the email template on Amazon SES with parameters for the customer data. Create an AWS Lambda function to call the SendTemplatedEmail API operation and to pass customer data to replace the parameters and the email destination.
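
Option D maps to the SES template APIs: the template is stored once in SES, and SendTemplatedEmail merges the per-customer data at send time. A minimal boto3 sketch with hypothetical template fields and addresses:

```python
import json
import boto3

ses = boto3.client("ses")

# One-time setup: store the template in SES.
ses.create_template(Template={
    "TemplateName": "CaseAcknowledgement",
    "SubjectPart": "We received case {{caseId}}",
    "TextPart": "Hello {{name}}, your case {{caseId}} was received.",
})

# Per message: SES merges the customer data into the stored template.
ses.send_templated_email(
    Source="support@example.com",
    Destination={"ToAddresses": ["customer@example.com"]},
    Template="CaseAcknowledgement",
    TemplateData=json.dumps({"name": "Alex", "caseId": "12345"}),
)
```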

A solutions architect must analyze a company's Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to determine whether the company is using resources efficiently. The company is running several large, high-memory EC2 instances to host database clusters that are deployed in active/passive configurations. The utilization of these EC2 instances varies by the applications that use the databases, and the company has not identified a pattern.

The solutions architect must analyze the environment and take action based on the findings.

Which solution meets these requirements MOST cost-effectively?

A.

Create a dashboard by using AWS Systems Manager OpsCenter. Configure visualizations for Amazon CloudWatch metrics that are associated with the EC2 instances and their EBS volumes. Review the dashboard periodically and identify usage patterns. Right-size the EC2 instances based on the peaks in the metrics.

B.

Turn on Amazon CloudWatch detailed monitoring for the EC2 instances and their EBS volumes. Create and review a dashboard that is based on the metrics. Identify usage patterns. Right-size the EC2 instances based on the peaks in the metrics.

C.

Install the Amazon CloudWatch agent on each of the EC2 instances. Turn on AWS Compute Optimizer, and let it run for at least 12 hours. Review the recommendations from Compute Optimizer, and right-size the EC2 instances as directed.

D.

Sign up for the AWS Enterprise Support plan. Turn on AWS Trusted Advisor. Wait 12 hours. Review the recommendations from Trusted Advisor, and right-size the EC2 instances as directed.
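
Option C's Compute Optimizer workflow is almost entirely managed; after opt-in, the findings can be read through the API. A hedged boto3 sketch:

```python
import boto3

co = boto3.client("compute-optimizer")

# Opt in. Compute Optimizer needs a metrics history before it returns
# findings (the question stipulates at least 12 hours of agent data).
co.update_enrollment_status(status="Active")

# Later: review findings and candidate instance types.
for rec in co.get_ec2_instance_recommendations()["instanceRecommendations"]:
    print(rec["instanceArn"], rec["finding"],
          [o["instanceType"] for o in rec["recommendationOptions"]])
```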

A company has a latency-sensitive trading platform that uses Amazon DynamoDB as a storage backend. The company configured the DynamoDB table to use on-demand capacity mode. A solutions architect needs to design a solution to improve the performance of the trading platform. The new solution must ensure high availability for the trading platform.

Which solution will meet these requirements with the LEAST latency?

A.

Create a two-node DynamoDB Accelerator (DAX) cluster. Configure an application to read and write data by using DAX.

B.

Create a three-node DynamoDB Accelerator (DAX) cluster. Configure an application to read data by using DAX and to write data directly to the DynamoDB table.

C.

Create a three-node DynamoDB Accelerator (DAX) cluster. Configure an application to read data directly from the DynamoDB table and to write data by using DAX.

D.

Create a single-node DynamoDB Accelerator (DAX) cluster. Configure an application to read data by using DAX and to write data directly to the DynamoDB table.
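
The read-through/write-direct split described in options B and D looks roughly like the sketch below, assuming the amazon-dax-client package and its boto3-style resource interface. The cluster endpoint, table name, and item shapes are hypothetical.

```python
# Requires: pip install amazon-dax-client
from decimal import Decimal

import boto3
from amazondax import AmazonDaxClient

# Reads go through the DAX cache; writes go straight to DynamoDB.
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://trading-dax.abc123.dax-clusters.us-east-1.amazonaws.com"
)
reads = dax.Table("trades")
writes = boto3.resource("dynamodb").Table("trades")

writes.put_item(Item={"tradeId": "t-1002", "price": Decimal("101.5")})
item = reads.get_item(Key={"tradeId": "t-1002"}).get("Item")
```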

A company has mounted sensors to collect information about environmental parameters such as humidity and light throughout all the company's factories. The company needs to stream and analyze the data in the AWS Cloud in real time. If any of the parameters fall out of acceptable ranges, the factory operations team must receive a notification immediately.

Which solution will meet these requirements?

A.

Stream the data to an Amazon Kinesis Data Firehose delivery stream. Use AWS Step Functions to consume and analyze the data in the Kinesis Data Firehose delivery stream. Use Amazon Simple Notification Service (Amazon SNS) to notify the operations team.

B.

Stream the data to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster. Set up a trigger in Amazon MSK to invoke an AWS Fargate task to analyze the data. Use Amazon Simple Email Service (Amazon SES) to notify the operations team.

C.

Stream the data to an Amazon Kinesis data stream. Create an AWS Lambda function to consume the Kinesis data stream and to analyze the data. Use Amazon Simple Notification Service (Amazon SNS) to notify the operations team.

D.

Stream the data to an Amazon Kinesis Data Analytics application. Use an automatically scaled and containerized service in Amazon Elastic Container Service (Amazon ECS) to consume and analyze the data. Use Amazon Simple Email Service (Amazon SES) to notify the operations team.
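
Option C's consumer can be a Lambda function subscribed to the Kinesis data stream. A sketch of such a handler that checks readings against acceptable ranges and publishes to SNS; the topic ARN, limits, and payload fields are all hypothetical.

```python
import base64
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ops-alerts"  # hypothetical

# Acceptable ranges (illustrative values only).
LIMITS = {"humidity": (20.0, 80.0), "light": (100.0, 10000.0)}

def lambda_handler(event, context):
    """Consume a Kinesis event batch and alert on out-of-range readings."""
    for record in event["Records"]:
        reading = json.loads(base64.b64decode(record["kinesis"]["data"]))
        for param, (low, high) in LIMITS.items():
            value = reading.get(param)
            if value is not None and not low <= value <= high:
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject=f"{param} out of range at {reading.get('factory')}",
                    Message=json.dumps(reading),
                )
```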

A company runs a software-as-a-service (SaaS) application on AWS. The application uses AWS Lambda functions that connect to an Amazon RDS for MySQL database. During periods of peak demand, the application experiences a much higher workload than normal, and users report slow response times because of the large number of database connections. The company wants to improve the scalability and availability of the database.

Which solution meets these requirements?

A.

Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource utilization hits a threshold.

B.

Migrate the database to Amazon Aurora, and add a read replica. Add a database connection pool outside of the Lambda handler function.

C.

Migrate the database to Amazon Aurora and add a read replica. Use Amazon Route 53 weighted records.

D.

Migrate the database to Amazon Aurora and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools.

Question:

A company is migrating a large on-premises Oracle database (with stored procedures) to AWS. The solution must use managed services, be highly available, and enable a fast migration with minimal downtime.

Which solution meets these requirements?

A.

Use AWS DMS to replicate data to RDS for Oracle. Store database files in S3.

B.

Use backup and restore into EC2-hosted Oracle cluster.

C.

Use DMS to move data to DynamoDB. Recreate stored procedures in Lambda.

D.

Use DMS to migrate to Amazon Aurora PostgreSQL. Use AWS SCT to convert stored procedures.

A company wants to migrate an application to Amazon EC2 from VMware Infrastructure that runs in an on-premises data center. A solutions architect must preserve the software and configuration settings during the migration.

What should the solutions architect do to meet these requirements?

A.

Configure the AWS DataSync agent to start replicating the data store to Amazon FSx for Windows File Server. Use the SMB share to host the VMware data store. Use VM Import/Export to move the VMs to Amazon EC2.

B.

Use the VMware vSphere client to export the application as an image in Open Virtualization Format (OVF). Create an Amazon S3 bucket to store the image in the destination AWS Region. Create and apply an IAM role for VM Import. Use the AWS CLI to run the EC2 import command.

C.

Configure AWS Storage Gateway for files service to export a Common Internet File System (CIFS) share. Create a backup copy to the shared folder. Sign in to the AWS Management Console and create an AMI from the backup copy. Launch an EC2 instance that is based on the AMI.

D.

Create a managed-instance activation for a hybrid environment in AWS Systems Manager. Download and install Systems Manager Agent on the on-premises VM. Register the VM with Systems Manager to be a managed instance. Use AWS Backup to create a snapshot of the VM and create an AMI. Launch an EC2 instance that is based on the AMI.
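
The import step referenced in option B can also be driven from boto3 rather than the AWS CLI. A sketch, assuming the exported image has already been uploaded to Amazon S3 and the vmimport service role exists; the bucket and key are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Import the exported VM image from S3 as an AMI.
task = ec2.import_image(
    Description="On-premises application server",
    DiskContainers=[{
        "Format": "OVA",
        "UserBucket": {"S3Bucket": "vm-export-bucket", "S3Key": "app-server.ova"},
    }],
)
print(task["ImportTaskId"])  # poll with describe_import_image_tasks
```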

A company is running a two-tier web-based application in an on-premises data center. The application layer consists of a single server running a stateful application. The application connects to a PostgreSQL database running on a separate server. The application’s user base is expected to grow significantly, so the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing.

Which solution will provide a consistent user experience that will allow the application and database tiers to scale?

A.

Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.

B.

Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.

C.

Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.

D.

Enable Aurora Auto Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
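
Aurora Auto Scaling for Aurora Replicas, as referenced in options A and C, is configured through Application Auto Scaling on the read replica count dimension. A boto3 sketch with a hypothetical cluster identifier and target values:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Make the cluster's replica count a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:app-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Track average reader CPU at 60 percent.
aas.put_scaling_policy(
    PolicyName="reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId="cluster:app-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```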

A solutions architect is auditing the security setup of an AWS Lambda function for a company. The Lambda function retrieves the latest changes from an Amazon Aurora database. The Lambda function and the database run in the same VPC. Lambda environment variables are providing the database credentials to the Lambda function.

The Lambda function aggregates data and makes the data available in an Amazon S3 bucket that is configured for server-side encryption with AWS KMS managed encryption keys (SSE-KMS). The data must not travel across the internet. If any database credentials become compromised, the company needs a solution that minimizes the impact of the compromise.

What should the solutions architect recommend to meet these requirements?

A.

Enable IAM database authentication on the Aurora DB cluster. Change the IAM role for the Lambda function to allow the function to access the database by using IAM database authentication. Deploy a gateway VPC endpoint for Amazon S3 in the VPC.

B.

Enable IAM database authentication on the Aurora DB cluster. Change the IAM role for the Lambda function to allow the function to access the database by using IAM database authentication. Enforce HTTPS on the connection to Amazon S3 during data transfers.

C.

Save the database credentials in AWS Systems Manager Parameter Store. Set up password rotation on the credentials in Parameter Store. Change the IAM role for the Lambda function to allow the function to access Parameter Store. Modify the Lambda function to retrieve the credentials from Parameter Store. Deploy a gateway VPC endpoint for Amazon S3 in the VPC.

D.

Save the database credentials in AWS Secrets Manager. Set up password rotation on the credentials in Secrets Manager. Change the IAM role for the Lambda function to allow the function to access Secrets Manager. Modify the Lambda function to retrieve the credentials from Secrets Manager. Enforce HTTPS on the connection to Amazon S3 during data transfers.
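
Options C and D both move the credentials out of Lambda environment variables into a managed store that the function reads at runtime. A minimal sketch of the Secrets Manager variant, with a hypothetical secret name and JSON shape:

```python
import json
import boto3

# Resolve credentials at runtime instead of baking them into Lambda
# environment variables; rotation then takes effect automatically.
secrets = boto3.client("secretsmanager")

def get_db_credentials():
    secret = secrets.get_secret_value(SecretId="prod/aurora/app-user")
    creds = json.loads(secret["SecretString"])
    return creds["username"], creds["password"]
```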

A company needs to gather data from an experiment in a remote location that does not have internet connectivity. During the experiment, sensors that are connected to a local network will generate 6 TB of data in a proprietary format over the course of 1 week. The sensors can be configured to upload their data files to an FTP server periodically, but the sensors do not have their own FTP server. The sensors also do not support other protocols. The company needs to collect the data centrally and move the data to object storage in the AWS Cloud as soon as possible after the experiment.

Which solution will meet these requirements?

A.

Order an AWS Snowball Edge Compute Optimized device. Connect the device to the local network. Configure AWS DataSync with a target bucket name, and upload the data over NFS to the device. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.

B.

Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Create a shell script that periodically downloads data from each sensor. After the experiment, return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.

C.

Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Install and configure an FTP server on the EC2 instance. Configure the sensors to upload data to the EC2 instance. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.

D.

Order an AWS Snowcone device. Connect the device to the local network. Configure the device to use Amazon FSx. Configure the sensors to upload data to the device. Configure AWS DataSync on the device to synchronize the uploaded data with an Amazon S3 bucket. Return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.

A company wants to run a custom network analysis software package to inspect traffic as traffic leaves and enters a VPC. The company has deployed the solution by using AWS CloudFormation on three Amazon EC2 instances in an Auto Scaling group. All network routing has been established to direct traffic to the EC2 instances.

Whenever the analysis software stops working, the Auto Scaling group replaces an instance. The network routes are not updated when the instance replacement occurs.

Which combination of steps will resolve this issue? (Select THREE.)

A.

Create alarms based on EC2 status check metrics that will cause the Auto Scaling group to replace the failed instance.

B.

Update the CloudFormation template to install the Amazon CloudWatch agent on the EC2 instances. Configure the CloudWatch agent to send process metrics for the application.

C.

Update the CloudFormation template to install AWS Systems Manager Agent on the EC2 instances. Configure Systems Manager Agent to send process metrics for the application.

D.

Create an alarm for the custom metric in Amazon CloudWatch for the failure scenarios. Configure the alarm to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.

E.

Create an AWS Lambda function that responds to the Amazon Simple Notification Service (Amazon SNS) message to take the instance out of service. Update the network routes to point to the replacement instance.

F.

In the CloudFormation template, write a condition that updates the network routes when a replacement instance is launched.
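
Option E's remediation step ultimately comes down to an EC2 route replacement. A sketch of a Lambda handler that repoints the inspection route at the replacement instance's network interface; the route table ID and the event payload field are hypothetical, and in practice the ENI would be resolved from the Auto Scaling group or the SNS message.

```python
import boto3

ec2 = boto3.client("ec2")

ROUTE_TABLE_ID = "rtb-0abc1234"   # hypothetical inspection route table
INSPECTED_CIDR = "0.0.0.0/0"

def lambda_handler(event, context):
    """Repoint the inspection route at a healthy replacement instance."""
    new_eni_id = event["new_eni_id"]  # hypothetical payload field
    ec2.replace_route(
        RouteTableId=ROUTE_TABLE_ID,
        DestinationCidrBlock=INSPECTED_CIDR,
        NetworkInterfaceId=new_eni_id,
    )
```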

A company has a critical application in which the data tier is deployed in a single AWS Region. The data tier uses an Amazon DynamoDB table and an Amazon Aurora MySQL DB cluster. The current Aurora MySQL engine version supports a global database. The application tier is already deployed in two Regions.

Company policy states that critical applications must have application tier components and data tier components deployed across two Regions. The RTO and RPO must be no more than a few minutes each. A solutions architect must recommend a solution to make the data tier compliant with company policy.

Which combination of steps will meet these requirements? (Choose two.)

A.

Add another Region to the Aurora MySQL DB cluster

B.

Add another Region to each table in the Aurora MySQL DB cluster

C.

Set up scheduled cross-Region backups for the DynamoDB table and the Aurora MySQL DB cluster

D.

Convert the existing DynamoDB table to a global table by adding another Region to its configuration

E.

Use Amazon Route 53 Application Recovery Controller to automate database backup and recovery to the secondary Region
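
Options A and D correspond to two managed multi-Region features: adding a secondary Region to the Aurora cluster through a global database, and converting the DynamoDB table to a global table. A hedged boto3 sketch with hypothetical identifiers; engine versions and secondary instance setup are omitted for brevity.

```python
import boto3

# DynamoDB: convert the existing table to a global table by adding a
# replica Region.
boto3.client("dynamodb").update_table(
    TableName="critical-app-table",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# Aurora MySQL: wrap the existing cluster in a global database, then add
# a headless secondary cluster in the other Region.
rds = boto3.client("rds")
rds.create_global_cluster(
    GlobalClusterIdentifier="critical-app-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:critical-app",
)
boto3.client("rds", region_name="us-west-2").create_db_cluster(
    DBClusterIdentifier="critical-app-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="critical-app-global",
)
```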

A company wants to use AWS for disaster recovery for an on-premises application. The company has hundreds of Windows-based servers that run the application. All the servers mount a common share.

The company has an RTO of 15 minutes and an RPO of 5 minutes. The solution must support native failover and fallback capabilities.

Which solution will meet these requirements MOST cost-effectively?

A.

Create an AWS Storage Gateway File Gateway. Schedule daily Windows server backups. Save the data to Amazon S3. During a disaster, recover the on-premises servers from the backup. During failback, run the on-premises servers on Amazon EC2 instances.

B.

Create a set of AWS CloudFormation templates to create infrastructure. Replicate all data to Amazon Elastic File System (Amazon EFS) by using AWS DataSync. During a disaster, use AWS CodePipeline to deploy the templates to restore the on-premises servers. Fail back the data by using DataSync.

C.

Create an AWS Cloud Development Kit (AWS CDK) pipeline to stand up a multi-site active-active environment on AWS. Replicate data into Amazon S3 by using the s3 sync command. During a disaster, swap DNS endpoints to point to AWS. Fail back the data by using the s3 sync command.

D.

Use AWS Elastic Disaster Recovery to replicate the on-premises servers. Replicate data to an Amazon FSx for Windows File Server file system by using AWS DataSync. Mount the file system to AWS servers. During a disaster, fail over the on-premises servers to AWS. Fail back to new or existing servers by using Elastic Disaster Recovery.

A company wants to use AWS IAM Identity Center (AWS Single Sign-On) to manage employee access to AWS services. The company uses AWS Organizations to manage its AWS accounts.

Each employee has their own IAM user. Each IAM user is a member of at least one IAM group. Each IAM group has an attached policy that allows members to assume specific roles across the accounts. The roles contain appropriate policies for the expected activities of each group of users in each account. All relevant accounts exist inside a single OU.

The company has already created new users and groups in IAM Identity Center to match the permissions that exist in IAM.

How should the company use IAM Identity Center to implement the existing permissions?

A.

For each group, create policies in each account. Give the policies the same name in each account. Create a new permission set. Add the name of the new policies to the permission set. Assign user access to the AWS accounts in IAM Identity Center.

B.

For each group, create a new permission set. Attach the relevant existing IAM roles in each account to the permission set. Create a new customer managed policy that allows the group to assume the roles. Assign user access to the AWS accounts in IAM Identity Center.

C.

For each group, create a new permission set. Create policies in each account. Give each policy a unique name. Set the path of each policy to match the name of the permission set. Assign user access to the AWS accounts in IAM Identity Center.

D.

Add the OU to the accounts configuration in IAM Identity Center. For each group, create policies in each account. Create a new permission set. Add the new policies to the permission set as customer managed policies. Attach each new policy to the correct account in the account configuration in IAM Identity Center.
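
The customer-managed-policy mechanism that options A, C, and D build on works because IAM Identity Center references such policies by name and path, and the reference is resolved separately in every target account. A boto3 sketch with hypothetical instance and policy names:

```python
import boto3

sso = boto3.client("sso-admin")
INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-example"  # hypothetical

# Create a permission set for one group of users.
ps_arn = sso.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="developers",
)["PermissionSet"]["PermissionSetArn"]

# Reference a customer managed policy by name and path; a policy with
# this name must already exist in every account the set is assigned to.
sso.attach_customer_managed_policy_reference_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=ps_arn,
    CustomerManagedPolicyReference={"Name": "developers-policy", "Path": "/"},
)
```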
