
SAP-C02 Amazon Web Services AWS Certified Solutions Architect - Professional Free Practice Exam Questions (2025 Updated)

Prepare effectively for your Amazon Web Services SAP-C02 AWS Certified Solutions Architect - Professional certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2025, ensuring you have the most current resources to build confidence and succeed on your first attempt.


A company is migrating to the cloud. It wants to evaluate the configurations of virtual machines in its existing data center environment to ensure that it can size new Amazon EC2 instances accurately. The company wants to collect metrics, such as CPU, memory, and disk utilization, and it needs an inventory of what processes are running on each instance. The company would also like to monitor network connections to map communications between servers.

Which solution would enable the collection of this data MOST cost-effectively?

A.

Use AWS Application Discovery Service and deploy the data collection agent to each virtual machine in the data center.

B.

Configure the Amazon CloudWatch agent on all servers within the local environment and publish metrics to Amazon CloudWatch Logs.

C.

Use AWS Application Discovery Service and enable agentless discovery in the existing virtualization environment.

D.

Enable AWS Application Discovery Service in the AWS Management Console and configure the corporate firewall to allow scans over a VPN.
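
For reference, a minimal boto3 sketch of the agent-based collection that option A describes, assuming the Application Discovery Service agents are already installed and registered on the virtual machines; the Region is illustrative:

```python
import boto3

# AWS Application Discovery Service uses the "discovery" boto3 client.
discovery = boto3.client("discovery", region_name="us-east-1")

# List the agents that have registered from the on-premises data center.
agents = discovery.describe_agents()["agentsInfo"]
agent_ids = [a["agentId"] for a in agents]
print(f"Registered agents: {agent_ids}")

# Begin collecting CPU, memory, disk, process, and network-connection data.
if agent_ids:
    discovery.start_data_collection_by_agent_ids(agentIds=agent_ids)
```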

A company with several AWS accounts is using AWS Organizations and service control policies (SCPs). An Administrator created the following SCP and has attached it to an organizational unit (OU) that contains AWS account 1111-1111-1111:

Developers working in account 1111-1111-1111 complain that they cannot create Amazon S3 buckets. How should the Administrator address this problem?

A.

Add s3:CreateBucket with an "Allow" effect to the SCP.

B.

Remove the account from the OU, and attach the SCP directly to account 1111-1111-1111.

C.

Instruct the Developers to add Amazon S3 permissions to their IAM entities.

D.

Remove the SCP from account 1111-1111-1111.
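
Because the SCP document itself is not shown above, the following is only an illustrative boto3 sketch of creating and attaching an SCP that does not filter out s3:CreateBucket; the policy content, names, and OU ID are assumptions. Note that SCPs never grant access on their own; they only bound what IAM policies in the member accounts can allow.

```python
import json
import boto3

org = boto3.client("organizations")

# Illustrative SCP that leaves s3:CreateBucket within the permission boundary.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="allow-s3-actions",
    Description="Example SCP that permits S3 bucket creation",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to the OU that contains account 1111-1111-1111 (placeholder ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-example-id",
)
```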

A company runs AWS workloads that are integrated with software as a service (SaaS) applications. The company needs to analyze the SaaS applications to identify unused licenses. Which solution will meet this requirement with the LEAST operational overhead?

A.

Use AWS License Manager automated discovery to retrieve audit logs from the SaaS applications. Use Amazon Athena to analyze the data and to identify unused SaaS licenses.

B.

Create an AWS Lambda function to retrieve audit logs from the SaaS applications and to store the data in Amazon S3. Use Amazon EMR to analyze the data and to identify unused SaaS licenses.

C.

Use AWS AppFabric to ingest audit logs from the SaaS applications into Amazon S3. Use Amazon Athena to analyze the data and to identify unused SaaS licenses.

D.

Use AWS App Runner to ingest audit logs from the SaaS applications into Amazon S3. Use Amazon EMR to analyze the data and to identify unused SaaS licenses.
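
As an illustration of the analysis step that options A and C share, here is a hedged boto3 sketch that runs an Athena query over audit logs stored in S3; the database, table, column names, and output bucket are hypothetical:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical table over normalized SaaS audit logs delivered to S3.
query = """
SELECT user_name, MAX(event_time) AS last_activity
FROM saas_audit_logs
GROUP BY user_name
HAVING MAX(event_time) < date_add('day', -90, current_date)
"""

# Users with no activity in 90 days are candidates for unused licenses.
athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "appfabric_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```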

An online survey company runs its application in the AWS Cloud. The application is distributed and consists of microservices that run in an automatically scaled Amazon Elastic Container Service (Amazon ECS) cluster. The ECS cluster is a target for an Application Load Balancer (ALB). The ALB is a custom origin for an Amazon CloudFront distribution.

The company has a survey that contains sensitive data. The sensitive data must be encrypted when it moves through the application. The application's data-handling microservice is the only microservice that should be able to decrypt the data.

Which solution will meet these requirements?

A.

Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to the data-handling microservice. Create a field-level encryption profile and a configuration. Associate the KMS key and the configuration with the CloudFront cache behavior.

B.

Create an RSA key pair that is dedicated to the data-handling microservice. Upload the public key to the CloudFront distribution. Create a field-level encryption profile and a configuration. Add the configuration to the CloudFront cache behavior.

C.

Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to the data-handling microservice. Create a Lambda@Edge function. Program the function to use the KMS key to encrypt the sensitive data.

D.

Create an RSA key pair that is dedicated to the data-handling microservice. Create a Lambda@Edge function. Program the function to use the private key of the RSA key pair to encrypt the sensitive data.
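
For context on the CloudFront field-level encryption setup in options A and B, a minimal boto3 sketch of uploading an RSA public key to CloudFront; the file name and identifiers are placeholders:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# PEM-encoded public half of the RSA key pair; only the data-handling
# microservice would hold the private key needed to decrypt the fields.
with open("public_key.pem") as f:
    encoded_key = f.read()

response = cloudfront.create_public_key(
    PublicKeyConfig={
        "CallerReference": "survey-fle-key-2025",  # any unique string
        "Name": "survey-field-encryption-key",
        "EncodedKey": encoded_key,
        "Comment": "Field-level encryption for sensitive survey fields",
    }
)
print(response["PublicKey"]["Id"])
```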

A company is developing a solution to analyze images. The solution uses a 50 TB reference dataset and analyzes images up to 1 TB in size. The solution spreads requests across an Auto Scaling group of Amazon EC2 Linux instances in a VPC. The EC2 instances are attached to shared Amazon EBS io2 volumes in each Availability Zone. The EBS volumes store the reference dataset.

During testing, multiple parallel analyses led to numerous disk errors, which caused job failures. The company wants the solution to provide seamless data reading for all instances.

Which solution will meet these requirements MOST cost-effectively?

A.

Create a new EBS volume for each EC2 instance. Copy the data from the shared volume to the new EBS volume regularly. Update the application to reference the new EBS volume.

B.

Move all the reference data to an Amazon S3 bucket. Install Mountpoint for Amazon S3 on the EC2 instances. Create gateway endpoints for Amazon S3 in the VPC. Replace the EBS mount point with the S3 mount point.

C.

Move all the reference data to an Amazon S3 bucket. Create an Amazon S3 backed Multi-AZ Amazon EFS volume. Mount the EFS volume on the EC2 instances. Replace the EBS mount point with the EFS mount point.

D.

Upgrade the instances to local storage. Copy the data from the shared EBS volume to the local storage regularly. Update the application to reference the local storage.
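
As a sketch of the networking piece in option B, the following boto3 call creates an S3 gateway endpoint so that Mountpoint for Amazon S3 traffic stays on the AWS network instead of crossing NAT gateways; the VPC and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoints for S3 carry no data processing charge, which keeps
# reads of the 50 TB reference dataset cheap.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```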

A company has deployed an application on AWS Elastic Beanstalk. The application uses Amazon Aurora for the database layer. An Amazon CloudFront distribution serves web requests and includes the Elastic Beanstalk domain name as the origin server. The distribution is configured with an alternate domain name that visitors use when they access the application.

Each week, the company takes the application out of service for routine maintenance. During the time that the application is unavailable, the company wants visitors to receive an informational message instead of a CloudFront error message.

A solutions architect creates an Amazon S3 bucket as the first step in the process.

Which combination of steps should the solutions architect take next to meet the requirements? (Choose three.)

A.

Upload static informational content to the S3 bucket.

B.

Create a new CloudFront distribution. Set the S3 bucket as the origin.

C.

Set the S3 bucket as a second origin in the original CloudFront distribution. Configure the distribution and the S3 bucket to use an origin access identity (OAI).

D.

During the weekly maintenance, edit the default cache behavior to use the S3 origin. Revert the change when the maintenance is complete.

E.

During the weekly maintenance, create a cache behavior for the S3 origin on the new distribution. Set the path pattern to *. Set the precedence to 0. Delete the cache behavior when the maintenance is complete.

F.

During the weekly maintenance, configure Elastic Beanstalk to serve traffic from the S3 bucket.
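
For illustration, a hedged boto3 sketch of the weekly origin switch that option D describes; the distribution ID and origin IDs are placeholders:

```python
import boto3

cloudfront = boto3.client("cloudfront")
DISTRIBUTION_ID = "E1EXAMPLE"  # placeholder

# Fetch the current configuration together with its ETag, point the default
# cache behavior at the S3 maintenance origin, and push the change back.
result = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config, etag = result["DistributionConfig"], result["ETag"]

config["DefaultCacheBehavior"]["TargetOriginId"] = "maintenance-s3-origin"
cloudfront.update_distribution(
    Id=DISTRIBUTION_ID, DistributionConfig=config, IfMatch=etag
)
# Reverting after maintenance is the same call with the original origin ID.
```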


A company is migrating its on-premises file transfer solution to AWS Transfer Family. The current system includes an SFTP server, a transformation application, and a messaging server. Transformations run every 5 minutes and notify the messaging server when complete.

The company wants to simplify the solution and reduce operational overhead.

Which solution will meet these requirements?

A.

Use Amazon EFS and a cron job to perform the transformations. Notify using SNS.

B.

Use Amazon EMR to perform the transformations and notify via SNS.

C.

Use Amazon S3 as storage with AWS Glue triggered by S3 events for transformations, and notify via SQS.

D.

Use Amazon EFS with a time-based AWS Glue job every 5 minutes.
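
One possible wiring for the event-driven transformation in option C is an S3-triggered Lambda function that starts the AWS Glue job and notifies through SQS; the job name and queue URL below are hypothetical:

```python
import boto3

glue = boto3.client("glue")
sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/transform-done"  # placeholder

def handler(event, context):
    """Invoked by an S3 event when AWS Transfer Family writes a new object."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Start the transformation job for the newly uploaded file.
        run = glue.start_job_run(
            JobName="file-transformation-job",  # hypothetical job name
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
        # SQS replaces the on-premises messaging server.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=f"Started Glue run {run['JobRunId']} for s3://{bucket}/{key}",
        )
```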

An AWS customer has a web application that runs on premises. The web application fetches data from a third-party API that is behind a firewall. The third party accepts only one public CIDR block in each client's allow list.

The customer wants to migrate their web application to the AWS Cloud. The application will be hosted on a set of Amazon EC2 instances behind an Application Load Balancer (ALB) in a VPC. The ALB is located in public subnets. The EC2 instances are located in private subnets. NAT gateways provide internet access to the private subnets.

How should a solutions architect ensure that the web application can continue to call the third-party API after the migration?

A.

Associate a block of customer-owned public IP addresses with the VPC. Enable public IP addressing for public subnets in the VPC.

B.

Register a block of customer-owned public IP addresses in the AWS account. Create Elastic IP addresses from the address block and assign them to the NAT gateways in the VPC.

C.

Create Elastic IP addresses from the block of customer-owned IP addresses. Assign the static Elastic IP addresses to the ALB.

D.

Register a block of customer-owned public IP addresses in the AWS account. Set up AWS Global Accelerator to use Elastic IP addresses from the address block. Set the ALB as the accelerator endpoint.
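
As a sketch of option B, the following boto3 calls allocate an Elastic IP from a customer-owned (BYOIP) pool and attach it to a new NAT gateway; the pool and subnet IDs are placeholders, and the BYOIP range is assumed to be already provisioned and advertised:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP from the customer-owned BYOIP pool (placeholder ID).
allocation = ec2.allocate_address(
    Domain="vpc",
    PublicIpv4Pool="ipv4pool-ec2-0123456789abcdef0",
)

# Create the NAT gateway with the BYOIP-backed Elastic IP so outbound calls
# to the third-party API originate from the allow-listed CIDR block.
ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)
```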

A company is designing an AWS environment for a manufacturing application. The application has been successful with customers, and the application's user base has increased. The company has connected the AWS environment to the company's on-premises data center through a 1 Gbps AWS Direct Connect connection. The company has configured BGP for the connection.

The company must update the existing network connectivity solution to ensure that the solution is highly available, fault tolerant, and secure.

Which solution will meet these requirements MOST cost-effectively?

A.

Add a dynamic private IP AWS Site-to-Site VPN as a secondary path to secure data in transit and provide resilience for the Direct Connect connection. Configure MACsec to encrypt traffic inside the Direct Connect connection.

B.

Provision another Direct Connect connection between the company's on-premises data center and AWS to increase the transfer speed and provide resilience. Configure MACsec to encrypt traffic inside the Direct Connect connection.

C.

Configure multiple private VIFs. Load balance data across the VIFs between the on-premises data center and AWS to provide resilience.

D.

Add a static AWS Site-to-Site VPN as a secondary path to secure data in transit and to provide resilience for the Direct Connect connection.
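
For reference, a hedged boto3 sketch of the static Site-to-Site VPN secondary path that option D describes; the device IP, gateway ID, and CIDR block are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Register the on-premises VPN device (public IP is a placeholder).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000
)

# Static (non-BGP) VPN attached to the existing virtual private gateway,
# acting as a secondary path alongside the Direct Connect connection.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId="vgw-0123456789abcdef0",  # placeholder
    Options={"StaticRoutesOnly": True},
)

# Static route for the on-premises network through the VPN tunnel.
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnection"]["VpnConnectionId"],
    DestinationCidrBlock="10.0.0.0/16",
)
```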

A company has a solution that analyzes weather data from thousands of weather stations. The weather stations send the data over an Amazon API Gateway REST API that has an AWS Lambda function integration. The Lambda function calls a third-party service for data pre-processing. The third-party service gets overloaded and fails the pre-processing, causing a loss of data.

A solutions architect must improve the resiliency of the solution. The solutions architect must ensure that no data is lost and that data can be processed later if failures occur.

What should the solutions architect do to meet these requirements?

A.

Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the queue as the dead-letter queue for the API.

B.

Create two Amazon Simple Queue Service (Amazon SQS) queues: a primary queue and a secondary queue. Configure the secondary queue as the dead-letter queue for the primary queue. Update the API to use a new integration to the primary queue. Configure the Lambda function as the invocation target for the primary queue.

C.

Create two Amazon EventBridge event buses: a primary event bus and a secondary event bus. Update the API to use a new integration to the primary event bus. Configure an EventBridge rule to react to all events on the primary event bus. Specify the Lambda function as the target of the rule. Configure the secondary event bus as the failure destination for the Lambda function.

D.

Create a custom Amazon EventBridge event bus. Configure the event bus as the failure destination for the Lambda function.
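
As an illustration of option B's queue topology, a boto3 sketch that creates the primary queue with the secondary queue as its dead-letter queue and subscribes the Lambda function; names and ARNs are placeholders:

```python
import json
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Secondary queue acts as the dead-letter queue for failed messages.
dlq_url = sqs.create_queue(QueueName="weather-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Primary queue buffers API Gateway payloads; after 3 failed processing
# attempts a message moves to the dead-letter queue instead of being lost.
sqs.create_queue(
    QueueName="weather-primary",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)

# Make the existing Lambda function poll the primary queue (ARN placeholder).
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:weather-primary",
    FunctionName="weather-preprocessor",
    BatchSize=10,
)
```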

A company has an application in the AWS Cloud. The application runs on a fleet of 20 Amazon EC2 instances. The EC2 instances are persistent and store data on multiple attached Amazon Elastic Block Store (Amazon EBS) volumes.

The company must maintain backups in a separate AWS Region. The company must be able to recover the EC2 instances and their configuration within 1 business day, with loss of no more than 1 day's worth of data. The company has limited staff and needs a backup solution that optimizes operational efficiency and cost. The company already has created an AWS CloudFormation template that can deploy the required network configuration in a secondary Region.

Which solution will meet these requirements?

A.

Create a second CloudFormation template that can recreate the EC2 instances in the secondary Region. Run daily multivolume snapshots by using AWS Systems Manager Automation runbooks. Copy the snapshots to the secondary Region. In the event of a failure, launch the CloudFormation templates, restore the EBS volumes from snapshots, and transfer usage to the secondary Region.

B.

Use Amazon Data Lifecycle Manager (Amazon DLM) to create daily multivolume snapshots of the EBS volumes. In the event of a failure, launch the CloudFormation template and use Amazon DLM to restore the EBS volumes and transfer usage to the secondary Region.

C.

Use AWS Backup to create a scheduled daily backup plan for the EC2 instances. Configure the backup task to copy the backups to a vault in the secondary Region. In the event of a failure, launch the CloudFormation template, restore the instance volumes and configurations from the backup vault, and transfer usage to the secondary Region.

D.

Deploy EC2 instances of the same size and configuration to the secondary Region. Configure AWS DataSync daily to copy data from the primary Region to the secondary Region. In the event of a failure, launch the CloudFormation template and transfer usage to the secondary Region.
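
For context, a hedged boto3 sketch of the backup plan in option C, with a daily rule that copies recovery points to a vault in a secondary Region; vault names, the schedule, lifecycle, and ARNs are assumptions:

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Daily backup rule that copies each recovery point to a vault in the
# secondary Region (destination vault ARN is a placeholder).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-daily-cross-region",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # once per day
                "Lifecycle": {"DeleteAfterDays": 14},
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:111122223333:"
                            "backup-vault:secondary-vault"
                        )
                    }
                ],
            }
        ],
    }
)
print(plan["BackupPlanId"])
```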

A software-as-a-service (SaaS) provider exposes APIs through an Application Load Balancer (ALB). The ALB connects to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that is deployed in the us-east-1 Region. The exposed APIs use a few non-standard REST methods: LINK, UNLINK, LOCK, and UNLOCK.

Users outside the United States are reporting long and inconsistent response times for these APIs. A solutions architect needs to resolve this problem with a solution that minimizes operational overhead.

Which solution meets these requirements?

A.

Add an Amazon CloudFront distribution. Configure the ALB as the origin.

B.

Add an Amazon API Gateway edge-optimized API endpoint to expose the APIs. Configure the ALB as the target.

C.

Add an accelerator in AWS Global Accelerator. Configure the ALB as the origin.

D.

Deploy the APIs to two additional AWS Regions: eu-west-1 and ap-southeast-2. Add latency-based routing records in Amazon Route 53.
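
As a sketch of the accelerator setup in option C, the following boto3 calls create an accelerator, a TCP listener, and an endpoint group that fronts the ALB; the ALB ARN is a placeholder:

```python
import boto3

# Global Accelerator is a global service; its API lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="saas-api", IpAddressType="IPV4", Enabled=True
)

# TCP pass-through preserves the non-standard REST methods (LINK, LOCK, ...)
# that HTTP-aware front ends such as CloudFront would reject.
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {"EndpointId": "arn:aws:elasticloadbalancing:..."}  # ALB ARN placeholder
    ],
)
```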

A company migrated to AWS and uses AWS Business Support. The company wants to monitor the cost-effectiveness of Amazon EC2 instances. The EC2 instances have tags for department, business unit, and environment. Development EC2 instances have high cost but low utilization.

The company needs to detect and stop any underutilized development EC2 instances. Instances are underutilized if they had 10% or less average CPU utilization and 5 MB or less network I/O for at least 4 of the past 14 days.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Configure Amazon CloudWatch dashboards to monitor EC2 instance utilization based on tags for department, business unit, and environment. Create an Amazon EventBridge rule that invokes an AWS Lambda function to stop underutilized development EC2 instances.

B.

Configure AWS Systems Manager to track EC2 instance utilization and report underutilized instances to Amazon CloudWatch. Filter the CloudWatch data by tags for department, business unit, and environment. Create an Amazon EventBridge rule that invokes an AWS Lambda function to stop underutilized EC2 instances.

C.

Create an Amazon EventBridge rule to detect low utilization of EC2 instances reported by AWS Trusted Advisor. Configure the rule to invoke a Lambda function that filters the data by tags for department, business unit, and environment and stops underutilized development EC2 instances.

D.

Create an AWS Lambda function to run daily to retrieve utilization data for all EC2 instances. Save the data to an Amazon DynamoDB table. Create a QuickSight dashboard that uses the DynamoDB table as a data source to identify and stop underutilized development EC2 instances.
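
One way to wire option C is a Lambda function invoked by an EventBridge rule on Trusted Advisor check events; the event shape and tag keys below are approximations for illustration, not guaranteed field names:

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Invoked by an EventBridge rule that matches Trusted Advisor
    'Low Utilization Amazon EC2 Instances' check events.
    The event detail structure here is an approximation."""
    instance_id = event["detail"]["check-item-detail"]["Instance ID"]

    # Only stop instances tagged as development resources.
    tags = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [instance_id]}]
    )["Tags"]
    tag_map = {t["Key"]: t["Value"] for t in tags}
    if tag_map.get("environment") == "development":
        ec2.stop_instances(InstanceIds=[instance_id])
```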

A company that uses AWS Organizations allows developers to experiment on AWS. As part of the landing zone that the company has deployed, developers use their company email address to request an account. The company wants to ensure that developers are not launching costly services or running services unnecessarily. The company must give developers a fixed monthly budget to limit their AWS costs.

Which combination of steps will meet these requirements? (Choose three.)

A.

Create an SCP to set a fixed monthly account usage limit. Apply the SCP to the developer accounts.

B.

Use AWS Budgets to create a fixed monthly budget for each developer's account as part of the account creation process.

C.

Create an SCP to deny access to costly services and components. Apply the SCP to the developer accounts.

D.

Create an IAM policy to deny access to costly services and components. Apply the IAM policy to the developer accounts.

E.

Create an AWS Budgets alert action to terminate services when the budgeted amount is reached. Configure the action to terminate all services.

F.

Create an AWS Budgets alert action to send an Amazon Simple Notification Service (Amazon SNS) notification when the budgeted amount is reached. Invoke an AWS Lambda function to terminate all services.
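
For reference, a minimal boto3 sketch of the per-account fixed budget in option B, with an SNS alert subscriber as the notification hook; the amount, account ID, and topic ARN are illustrative:

```python
import boto3

budgets = boto3.client("budgets")

# Fixed $200 monthly cost budget for a developer account, alerting at 100%.
budgets.create_budget(
    AccountId="111122223333",  # placeholder developer account
    Budget={
        "BudgetName": "developer-monthly",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {
                    "SubscriptionType": "SNS",
                    "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts",
                }
            ],
        }
    ],
)
```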

A software company needs to create short-lived test environments to test pull requests as part of its development process. Each test environment consists of a single Amazon EC2 instance that is in an Auto Scaling group.

The test environments must be able to communicate with a central server to report test results. The central server is located in an on-premises data center. A solutions architect must implement a solution so that the company can create and delete test environments without any manual intervention. The company has created a transit gateway with a VPN attachment to the on-premises network.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Create an AWS CloudFormation template that contains a transit gateway attachment and related routing configurations. Create a CloudFormation stack set that includes this template. Use CloudFormation StackSets to deploy a new stack for each VPC in the account. Deploy a new VPC for each test environment.

B.

Create a single VPC for the test environments. Include a transit gateway attachment and related routing configurations. Use AWS CloudFormation to deploy all test environments into the VPC.

C.

Create a new OU in AWS Organizations for testing. Create an AWS CloudFormation template that contains a VPC, necessary networking resources, a transit gateway attachment, and related routing configurations. Create a CloudFormation stack set that includes this template. Use CloudFormation StackSets for deployments into each account under the testing OU. Create a new account for each test environment.

D.

Convert the test environment EC2 instances into Docker images. Use AWS CloudFormation to configure an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in a new VPC, create a transit gateway attachment, and create related routing configurations. Use Kubernetes to manage the deployment and lifecycle of the test environments.
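
As a sketch of the StackSets deployment in option C, assuming a service-managed stack set already exists; the stack set name, OU ID, and Region are placeholders:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# With service-managed permissions, targeting the testing OU makes StackSets
# deploy the VPC/transit-gateway stack into every account in the OU,
# including accounts created later, with no manual intervention.
cfn.create_stack_instances(
    StackSetName="test-environment-network",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-ab12-cdefgh34"]},
    Regions=["us-east-1"],
)
```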

A company needs to migrate a 2 TB MySQL database from an on-premises data center to an Amazon Aurora cluster. The database receives hundreds of updates every minute. The on-premises database server is not accessible through the internet.

The migration solution must ensure that no data is lost between the start of migration and cutover. The migration must begin as soon as possible and must minimize downtime.

Which solution will meet these requirements?

A.

Create an AWS Site-to-Site VPN connection between the on-premises data center and the VPC that hosts the Aurora cluster. Create a dump of the on-premises database by using mysqldump. Upload the dump to Amazon S3 by using multipart upload. Use an Amazon EC2 instance with appropriate permissions to import the dump to the Aurora cluster.

B.

Create an AWS Site-to-Site VPN connection between the on-premises data center and the VPC that hosts the Aurora cluster. Specify the on-premises database as the source endpoint in AWS DMS. Specify the Aurora cluster as the target endpoint. Configure a DMS task with ongoing replication.

C.

Set up an AWS Direct Connect connection between the on-premises data center and the VPC that hosts the Aurora cluster. Create a dump of the on-premises database by using mysqldump. Upload the dump to Amazon S3 by using multipart upload. Use an Amazon EC2 instance with appropriate permissions to import the dump to the Aurora cluster. Set up replication between the data center and the Aurora cluster.

D.

Set up an AWS Direct Connect connection between the on-premises data center and the VPC that hosts the Aurora cluster. Specify the on-premises database as the source endpoint in AWS DMS. Specify the Aurora cluster as the target endpoint. Configure a DMS task with ongoing replication.
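
For illustration, a hedged boto3 sketch of the DMS task with ongoing replication that options B and D describe; the endpoints and replication instance are assumed to exist, and all ARNs are placeholders:

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# "full-load-and-cdc" migrates everything, then keeps replicating changes
# until cutover, which is what minimizes downtime and prevents data loss.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```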

A company wants to use AWS IAM Identity Center (AWS Single Sign-On) to manage employee access to AWS services. The company uses AWS Organizations to manage its AWS accounts.

Each employee has their own IAM user. Each IAM user is a member of at least one IAM group. Each IAM group has an attached policy that allows members to assume

specific roles across the accounts. The roles contain appropriate policies for the expected activities of each group of users in each account. All relevant accounts exist inside a single OU.

The company has already created new users and groups in IAM Identity Center to match the permissions that exist in IAM.

How should the company use IAM Identity Center to implement the existing permissions?

A.

For each group, create policies in each account. Give the policies the same name in each account. Create a new permission set. Add the name of the new policies to the permission set. Assign user access to the AWS accounts in IAM Identity Center.

B.

For each group, create a new permission set. Attach the relevant existing IAM roles in each account to the permission set. Create a new customer managed policy that allows the group to assume the roles. Assign user access to the AWS accounts in IAM Identity Center.

C.

For each group, create a new permission set. Create policies in each account. Give each policy a unique name. Set the path of each policy to match the name of the permission set. Assign user access to the AWS accounts in IAM Identity Center.

D.

Add the OU to the accounts configuration in IAM Identity Center. For each group, create policies in each account. Create a new permission set. Add the new policies to the permission set as customer managed policies. Attach each new policy to the correct account in the account configuration in IAM Identity Center.

A company is planning a one-time migration of an on-premises MySQL database to Amazon Aurora MySQL in the us-east-1 Region. The company's current internet connection has limited bandwidth. The on-premises MySQL database is 60 TB in size. The company estimates that it will take a month to transfer the data to AWS over the current internet connection.

The company needs a migration solution that will migrate the database more quickly.

Which solution will migrate the database in the LEAST amount of time?

A.

Request a 1 Gbps AWS Direct Connect connection between the on-premises data center and AWS. Use AWS Database Migration Service (AWS DMS) to migrate the on-premises MySQL database to Aurora MySQL.

B.

Use AWS DataSync with the current internet connection to accelerate the data transfer between the on-premises data center and AWS. Use AWS Application Migration Service to migrate the on-premises MySQL database to Aurora MySQL.

C.

Order an AWS Snowball Edge device. Load the data into an Amazon S3 bucket by using the S3 interface. Use AWS Database Migration Service (AWS DMS) to migrate the data from Amazon S3 to Aurora MySQL.

D.

Order an AWS Snowball device. Load the data into an Amazon S3 bucket by using the S3 Adapter for Snowball. Use AWS Application Migration Service to migrate the data from Amazon S3 to Aurora MySQL.

A company has 20 accounts in an organization in AWS Organizations. The accounts are in two OUs: development and production. Multiple teams use the development accounts.

The company wants to control the cost that is associated with the development accounts. The company needs a solution that provides a notification when the forecasted monthly cost for all development accounts exceeds a threshold.

A solutions architect creates an Amazon SNS topic and subscribes an email address to the topic.

What should the solutions architect do next to meet the notification requirement with the LEAST configuration effort?

A.

Enable Amazon CloudWatch billing alerts in the organization's management account. Create a CloudWatch billing alarm by configuring the EstimatedCharges metric for each development account as a linked account. Configure the SNS topic for email alerts when the EstimatedCharges metric value exceeds the threshold.

B.

Create an AWS Cost and Usage Report in the organization's management account. Configure report delivery to an Amazon S3 bucket. Configure an AWS Glue job to extract the report data into Amazon Athena. Configure AWS Step Functions to analyze the consolidated cost of all the development accounts. Configure the SNS topic for email alerts when the cost exceeds the threshold.

C.

Use AWS Budgets to create a cost budget in the organization's management account. Configure each development account as a linked account. Configure an alert threshold. Configure the SNS topic for email alerts.

D.

Enable AWS Cost Explorer in the organization's management account. Configure each development account as a linked account. Configure an alert threshold. Configure the SNS topic for email alerts.

A company has an on-premises Microsoft SQL Server database that writes a nightly 200 GB export to a local drive. The company wants to move the backups to more robust cloud storage on Amazon S3. The company has set up a 10 Gbps AWS Direct Connect connection between the on-premises data center and AWS.

Which solution meets these requirements MOST cost-effectively?

A.

Create a new S3 bucket. Deploy an AWS Storage Gateway file gateway within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to the new SMB file share.

B.

Create an Amazon FSx for Windows File Server Single-AZ file system within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to an SMB file share on the Amazon FSx file system. Enable nightly backups.

C.

Create an Amazon FSx for Windows File Server Multi-AZ file system within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to an SMB file share on the Amazon FSx file system. Enable nightly backups.

D.

Create a new S3 bucket. Deploy an AWS Storage Gateway volume gateway within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to the new SMB file share on the volume gateway, and automate copies of this data to an S3 bucket.

An education company is running a web application used by college students around the world. The application runs in an Amazon Elastic Container Service (Amazon ECS) cluster in an Auto Scaling group behind an Application Load Balancer (ALB). A system administrator detected a weekly spike in the number of failed login attempts, which overwhelm the application's authentication service. All the failed login attempts originate from about 500 different IP addresses that change each week. A solutions architect must prevent the failed login attempts from overwhelming the authentication service.

Which solution meets these requirements with the MOST operational efficiency?

A.

Use AWS Firewall Manager to create a security group and security group policy to deny access from the IP addresses.

B.

Create an AWS WAF web ACL with a rate-based rule, and set the rule action to Block. Connect the web ACL to the ALB.

C.

Use AWS Firewall Manager to create a security group and security group policy to allow access only to specific CIDR ranges.

D.

Create an AWS WAF web ACL with an IP set match rule, and set the rule action to Block. Connect the web ACL to the ALB.
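
As a sketch of option B, the following boto3 code creates a rate-based web ACL and associates it with the ALB; the request limit and ARNs are assumptions chosen for illustration:

```python
import boto3

# For an ALB association, the wafv2 client must use the ALB's Region.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="login-rate-limit",
    Scope="REGIONAL",  # CLOUDFRONT scope would be used for a distribution
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-bursty-ips",
            "Priority": 0,
            # Blocks any single IP that exceeds 300 requests in 5 minutes;
            # the limit value is an illustrative assumption.
            "Statement": {
                "RateBasedStatement": {"Limit": 300, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "BlockBurstyIPs",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "LoginRateLimit",
    },
)

# Associate the web ACL with the ALB (ARN placeholder).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/example/abc123"
    ),
)
```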

A company runs an unauthenticated static website (www.example.com) that includes a registration form for users. The website uses Amazon S3 for hosting and uses Amazon CloudFront as the content delivery network with AWS WAF configured. When the registration form is submitted, the website calls an Amazon API Gateway API endpoint that invokes an AWS Lambda function to process the payload and forward the payload to an external API call.

During testing, a solutions architect encounters a cross-origin resource sharing (CORS) error. The solutions architect confirms that the CloudFront distribution origin has the Access-Control-Allow-Origin header set to www.example.com.

What should the solutions architect do to resolve the error?

A.

Change the CORS configuration on the S3 bucket. Add rules for CORS to the Allowed Origin element for www.example.com.

B.

Enable the CORS setting in AWS WAF. Create a web ACL rule in which the Access-Control-Allow-Origin header is set to www.example.com.

C.

Enable the CORS setting on the API Gateway API endpoint. Ensure that the API endpoint is configured to return all responses that have the Access-Control-Allow-Origin header set to www.example.com.

D.

Enable the CORS setting on the Lambda function. Ensure that the return code of the function has the Access-Control-Allow-Origin header set to www.example.com.
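
For context on the integration side of option C, a minimal Lambda proxy response that returns the CORS header; the processing logic is elided and the origin value mirrors the scenario:

```python
import json

def handler(event, context):
    """Lambda proxy integration response for API Gateway. The browser only
    accepts the cross-origin call when this header names the page's origin."""
    # ... process and forward the registration payload ...
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": "https://www.example.com",
            "Access-Control-Allow-Methods": "POST, OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type",
        },
        "body": json.dumps({"status": "ok"}),
    }
```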

A company recently deployed an application on AWS. The application uses Amazon DynamoDB. The company measured the application load and configured the RCUs and WCUs on the DynamoDB table to match the expected peak load. The peak load occurs once a week for a 4-hour period and is double the average load. The application load is close to the average load for the rest of the week. The access pattern includes many more writes to the table than reads of the table.

A solutions architect needs to implement a solution to minimize the cost of the table.

Which solution will meet these requirements?

A.

Use AWS Application Auto Scaling to increase capacity during the peak period. Purchase reserved RCUs and WCUs to match the average load.

B.

Configure on-demand capacity mode for the table.

C.

Configure DynamoDB Accelerator (DAX) in front of the table. Reduce the provisioned read capacity to match the new peak load on the table.

D.

Configure DynamoDB Accelerator (DAX) in front of the table. Configure on-demand capacity mode for the table.
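
As an illustration of option A, a hedged boto3 sketch that registers the table's write capacity with Application Auto Scaling and schedules capacity around the weekly peak; the table name, capacities, and cron expressions are assumptions:

```python
import boto3

aas = boto3.client("application-autoscaling")

COMMON = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/AppTable",  # placeholder table name
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
}

# Allow Application Auto Scaling to manage write capacity for the table.
aas.register_scalable_target(MinCapacity=500, MaxCapacity=2000, **COMMON)

# Raise the floor to the doubled peak just before the weekly 4-hour window
# (example: Mondays 08:00 UTC), then lower it again afterward.
aas.put_scheduled_action(
    ScheduledActionName="weekly-peak-up",
    Schedule="cron(0 8 ? * MON *)",
    ScalableTargetAction={"MinCapacity": 1000, "MaxCapacity": 2000},
    **COMMON,
)
aas.put_scheduled_action(
    ScheduledActionName="weekly-peak-down",
    Schedule="cron(0 12 ? * MON *)",
    ScalableTargetAction={"MinCapacity": 500, "MaxCapacity": 2000},
    **COMMON,
)
```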

A company wants to migrate an Amazon Aurora MySQL DB cluster from an existing AWS account to a new AWS account in the same AWS Region. Both accounts are members of the same organization in AWS Organizations.

The company must minimize database service interruption before the company performs DNS cutover to the new database.

Which migration strategy will meet this requirement?

A.

Take a snapshot of the existing Aurora database. Share the snapshot with the new AWS account. Create an Aurora DB cluster in the new account from the snapshot.

B.

Create an Aurora DB cluster in the new AWS account. Use AWS Database Migration Service (AWS DMS) to migrate data between the two Aurora DB clusters.

C.

Use AWS Backup to share an Aurora database backup from the existing AWS account to the new AWS account. Create an Aurora DB cluster in the new AWS account from the snapshot.

D.

Create an Aurora DB cluster in the new AWS account. Use AWS Application Migration Service to migrate data between the two Aurora DB clusters.

A company is using AWS to develop and manage its production web application. The application includes an Amazon API Gateway HTTP API that invokes an AWS Lambda function. The Lambda function processes and then stores data in a database.

The company wants to implement user authorization for the web application in an integrated way. The company already uses a third-party identity provider that issues OAuth tokens for the company's other applications.

Which solution will meet these requirements?

A.

Integrate the company's third-party identity provider with API Gateway. Configure an API Gateway Lambda authorizer to validate tokens from the identity provider. Require the Lambda authorizer on all API routes. Update the web application to get tokens from the identity provider and include the tokens in the Authorization header when calling the API Gateway HTTP API.

B.

Integrate the company's third-party identity provider with AWS Directory Service. Configure Directory Service as an API Gateway authorizer to validate tokens from the identity provider. Require the Directory Service authorizer on all API routes. Configure AWS IAM Identity Center as a SAML 2.0 identity provider. Configure the web application as a custom SAML 2.0 application.

C.

Integrate the company's third-party identity provider with AWS IAM Identity Center. Configure API Gateway to use IAM Identity Center for zero-configuration authentication and authorization. Update the web application to retrieve AWS STS tokens from IAM Identity Center and include the tokens in the Authorization header when calling the API Gateway HTTP API.

D.

Integrate the company's third-party identity provider with AWS IAM Identity Center. Configure IAM users with permissions to call the API Gateway HTTP API. Update the web application to extract request parameters from the IAM users and include the parameters in the Authorization header when calling the API Gateway HTTP API.
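
For reference, a minimal sketch of the Lambda authorizer in option A, using the HTTP API "simple" response format (payload format version 2.0); validate_token() is a hypothetical stand-in for real OAuth token verification against the third-party identity provider:

```python
def validate_token(token: str) -> dict | None:
    """Hypothetical helper: verify the token's signature, issuer, audience,
    and expiry against the identity provider's published keys. Returns the
    claims on success, None on failure."""
    ...

def handler(event, context):
    # HTTP API (payload v2.0) lowercases header names.
    auth_header = event.get("headers", {}).get("authorization", "")
    token = auth_header.removeprefix("Bearer ").strip()

    claims = validate_token(token)
    if claims is None:
        return {"isAuthorized": False}

    # The context dict is passed through to the backend Lambda integration.
    return {"isAuthorized": True, "context": {"sub": claims["sub"]}}
```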

A company is deploying a new API to AWS. The API uses Amazon API Gateway with a Regional API endpoint and an AWS Lambda function for hosting. The API retrieves data from an external vendor API, stores data in an Amazon DynamoDB global table, and retrieves data from the DynamoDB global table. The API key for the vendor's API is stored in AWS Secrets Manager and is encrypted with a customer managed key in AWS Key Management Service (AWS KMS). The company has deployed its own API into a single AWS Region.

A solutions architect needs to change the API components of the company's API to ensure that the components can run across multiple Regions in an active-active configuration.

Which combination of changes will meet this requirement with the LEAST operational overhead? (Choose three.)

A.

Deploy the API to multiple Regions. Configure Amazon Route 53 with custom domain names that route traffic to each Regional API endpoint. Implement a Route 53 multivalue answer routing policy.

B.

Create a new KMS multi-Region customer managed key. Create a new KMS customer managed replica key in each in-scope Region.

C.

Replicate the existing Secrets Manager secret to other Regions. For each in-scope Region's replicated secret, select the appropriate KMS key.

D.

Create a new AWS managed KMS key in each in-scope Region. Convert an existing key to a multi-Region key. Use the multi-Region key in other Regions.

E.

Create a new Secrets Manager secret in each in-scope Region. Copy the secret value from the existing Region to the new secret in each in-scope Region.

F.

Modify the deployment process for the Lambda function to repeat the deployment across in-scope Regions. Turn on the multi-Region option for the existing API. Select the Lambda function that is deployed in each Region as the backend for the multi-Region API.
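
As a sketch of the key and secret replication in options B and C, the following boto3 calls create a multi-Region KMS key, replicate it, and replicate the existing secret with the replica key; the secret name and Regions are illustrative:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")
secrets = boto3.client("secretsmanager", region_name="us-east-1")

# 1. Multi-Region primary key, then a replica key in the second Region.
#    Multi-Region keys share the same key ID across Regions.
key = kms.create_key(MultiRegion=True, Description="vendor-api-key encryption")
kms.replicate_key(KeyId=key["KeyMetadata"]["KeyId"], ReplicaRegion="eu-west-1")

# 2. Replicate the existing secret, encrypting the replica with the
#    replica key. The secret name is a placeholder.
secrets.replicate_secret_to_regions(
    SecretId="prod/vendor-api-key",
    AddReplicaRegions=[
        {"Region": "eu-west-1", "KmsKeyId": key["KeyMetadata"]["KeyId"]}
    ],
)
```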

A company is using an on-premises Active Directory service for user authentication. The company wants to use the same authentication service to sign in to the company's AWS accounts, which are using AWS Organizations. AWS Site-to-Site VPN connectivity already exists between the on-premises environment and all the company's AWS accounts.

The company's security policy requires conditional access to the accounts based on user groups and roles. User identities must be managed in a single location.

Which solution will meet these requirements?

A.

Configure AWS Single Sign-On (AWS SSO) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross- domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access controls (ABACs).

B.

Configure AWS Single Sign-On (AWS SSO) by using AWS SSO as an identity source. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using AWS SSO permission sets.

C.

In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use a SAML 2.0 identity provider. Provision IAM users that are mapped to the federated users. Grant access that corresponds to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM users.

D.

In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use an OpenID Connect (OIDC) identity provider. Provision IAM roles that grant access to the AWS account for the federated users that correspond to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM roles.

A solutions architect needs to improve an application that is hosted in the AWS Cloud. The application uses an Amazon Aurora MySQL DB instance that is experiencing overloaded connections. Most of the application's operations insert records into the database. The application currently stores credentials in a text-based configuration file.

The solutions architect needs to implement a solution so that the application can handle the current connection load. The solution must keep the credentials secure and must provide the ability to rotate the credentials automatically on a regular basis.

Which solution will meet these requirements?

A.

Deploy an Amazon RDS Proxy layer in front of the DB instance. Store the connection credentials as a secret in AWS Secrets Manager.

B.

Deploy an Amazon RDS Proxy layer in front of the DB instance. Store the connection credentials in AWS Systems Manager Parameter Store.

C.

Create an Aurora Replica. Store the connection credentials as a secret in AWS Secrets Manager.

D.

Create an Aurora Replica. Store the connection credentials in AWS Systems Manager Parameter Store.
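
For illustration of option A, a hedged boto3 sketch that creates an RDS Proxy that authenticates through the Secrets Manager secret; all ARNs and subnet IDs are placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# RDS Proxy pools and multiplexes connections in front of the Aurora writer,
# which absorbs the insert-heavy connection load, and it authenticates to the
# database by using the Secrets Manager secret.
rds.create_db_proxy(
    DBProxyName="aurora-app-proxy",
    EngineFamily="MYSQL",
    Auth=[
        {
            "AuthScheme": "SECRETS",
            "SecretArn": (
                "arn:aws:secretsmanager:us-east-1:111122223333:"
                "secret:app/db-creds"
            ),
            "IAMAuth": "DISABLED",
        }
    ],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-access",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)
```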

A solutions architect is designing an application to accept timesheet entries from employees on their mobile devices. Timesheets will be submitted weekly, with most of the submissions occurring on Friday. The data must be stored in a format that allows payroll administrators to run monthly reports. The infrastructure must be highly available and scale to match the rate of incoming data and reporting requests.

Which combination of steps meets these requirements while minimizing operational overhead? (Choose two.)

A.

Deploy the application to Amazon EC2 On-Demand Instances with load balancing across multiple Availability Zones. Use scheduled Amazon EC2 Auto Scaling to add capacity before the high volume of submissions on Fridays.

B.

Deploy the application in a container by using Amazon Elastic Container Service (Amazon ECS) with load balancing across multiple Availability Zones. Use scheduled Service Auto Scaling to add capacity before the high volume of submissions on Fridays.

C.

Deploy the application front end to an Amazon S3 bucket served by Amazon CloudFront. Deploy the application backend by using Amazon API Gateway with an AWS Lambda proxy integration.

D.

Store the timesheet submission data in Amazon Redshift. Use Amazon QuickSight to generate the reports, using Amazon Redshift as the data source.

E.

Store the timesheet submission data in Amazon S3. Use Amazon Athena and Amazon QuickSight to generate the reports using Amazon S3 as the data source.

A company runs a processing engine in the AWS Cloud. The engine processes environmental data from logistics centers to calculate a sustainability index. The company has millions of devices in logistics centers that are spread across Europe. The devices send information to the processing engine through a RESTful API.

The API experiences unpredictable bursts of traffic. The company must implement a solution to process all data that the devices send to the processing engine. Data loss is unacceptable.

Which solution will meet these requirements?

A.

Create an Application Load Balancer (ALB) for the RESTful API. Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a listener and a target group for the ALB. Add the SQS queue as the target. Use a container that runs in Amazon Elastic Container Service (Amazon ECS) with the Fargate launch type to process messages in the queue.

B.

Create an Amazon API Gateway HTTP API that implements the RESTful API. Create an Amazon Simple Queue Service (Amazon SQS) queue. Create an API Gateway service integration with the SQS queue. Create an AWS Lambda function to process messages in the SQS queue.

C.

Create an Amazon API Gateway REST API that implements the RESTful API. Create a fleet of Amazon EC2 instances in an Auto Scaling group. Create an API Gateway Auto Scaling group proxy integration. Use the EC2 instances to process incoming data.

D.

Create an Amazon CloudFront distribution for the RESTful API. Create a data stream in Amazon Kinesis Data Streams. Set the data stream as the origin for the distribution. Create an AWS Lambda function to consume and process data in the data stream.
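
As a sketch of the buffering pattern in option B, the following boto3 calls create an API Gateway HTTP API service integration that sends each request body straight to SQS, so bursts are absorbed by the queue with no Lambda on the ingest path; the API ID, queue URL, and role ARN are placeholders:

```python
import boto3

apigw = boto3.client("apigatewayv2", region_name="eu-west-1")

# First-class HTTP API service integration: API Gateway calls
# sqs:SendMessage directly by using the supplied credentials role.
integration = apigw.create_integration(
    ApiId="a1b2c3d4",
    IntegrationType="AWS_PROXY",
    IntegrationSubtype="SQS-SendMessage",
    PayloadFormatVersion="1.0",
    CredentialsArn="arn:aws:iam::111122223333:role/apigw-sqs-send",
    RequestParameters={
        "QueueUrl": "https://sqs.eu-west-1.amazonaws.com/111122223333/device-data",
        "MessageBody": "$request.body",
    },
)

# Route device POSTs to the SQS integration.
apigw.create_route(
    ApiId="a1b2c3d4",
    RouteKey="POST /ingest",
    Target=f"integrations/{integration['IntegrationId']}",
)
```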
