
Amazon Web Services AWS Certified Solutions Architect - Associate (SAA-C03) Free Practice Exam Questions (2026 Updated)

Prepare effectively for your Amazon Web Services SAA-C03 AWS Certified Solutions Architect - Associate (SAA-C03) certification with our extensive collection of free, high-quality practice questions. Each question is designed to mirror the actual exam format and objectives, complete with comprehensive answers and detailed explanations. Our materials are regularly updated for 2026, ensuring you have the most current resources to build confidence and succeed on your first attempt.


A solutions architect is designing a web application that will run on Amazon EC2 instances behind an Application Load Balancer (ALB). The company strictly requires that the application be resilient against malicious internet activity and attacks and be protected against new common vulnerabilities and exposures.

What should the solutions architect recommend?

A.

Leverage Amazon CloudFront with the ALB endpoint as the origin.

B.

Deploy an appropriate managed rule for AWS WAF and associate it with the ALB.

C.

Subscribe to AWS Shield Advanced and ensure common vulnerabilities and exposures are blocked.

D.

Configure network ACLs and security groups to allow only ports 80 and 443 to access the EC2 instances.
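
For illustration, a minimal boto3 sketch of option B's pattern: create a web ACL containing an AWS managed rule group and associate it with the ALB. All names and ARNs are hypothetical.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Hypothetical ALB ARN; replace with the real load balancer ARN.
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123"

acl = wafv2.create_web_acl(
    Name="web-app-acl",
    Scope="REGIONAL",  # ALBs require the REGIONAL scope
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "aws-common-rules",
        "Priority": 0,
        # AWS managed rule group covering common vulnerabilities and exposures
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "common-rules",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "web-app-acl",
    },
)

# Attach the web ACL to the ALB.
wafv2.associate_web_acl(WebACLArn=acl["Summary"]["ARN"], ResourceArn=ALB_ARN)
```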

A company is enhancing the security of its AWS environment, where the company stores a significant amount of sensitive customer data. The company needs a solution that automatically identifies and classifies sensitive data that is stored in multiple Amazon S3 buckets. The solution must automatically respond to data breaches and alert the company's security team through email immediately when noncompliant data is found.

Which solution will meet these requirements?

A.

Use Amazon GuardDuty. Configure an AWS Lambda function to route alerts to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team to the SNS topic.

B.

Use Amazon GuardDuty. Configure an AWS Lambda function to route alerts to an Amazon Simple Queue Service (Amazon SQS) queue. Configure a second Lambda function to periodically poll the SQS queue and to send emails to the security team by using Amazon Simple Email Service (Amazon SES).

C.

Use Amazon Macie. Integrate Amazon EventBridge with Macie, and configure EventBridge to send alerts to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team to the SNS topic.

D.

Use Amazon Macie. Integrate Amazon EventBridge with Macie, and configure EventBridge to route alerts to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an AWS Lambda function to periodically poll the SQS queue and to send alerts to the security team by using Amazon Simple Email Service (Amazon SES).
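
A minimal boto3 sketch of the eventing chain in options C and D, shown here with the SNS fan-out of option C: Macie findings flow through EventBridge to an SNS topic with an email subscription. The topic name and address are hypothetical.

```python
import json
import boto3

macie = boto3.client("macie2")
events = boto3.client("events")
sns = boto3.client("sns")

macie.enable_macie()  # no-op if Macie is already enabled

# Hypothetical topic name and email address.
topic_arn = sns.create_topic(Name="macie-findings")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="security-team@example.com")

# Route Macie findings to the SNS topic through EventBridge.
events.put_rule(
    Name="macie-finding-to-sns",
    EventPattern=json.dumps({"source": ["aws.macie"], "detail-type": ["Macie Finding"]}),
)
events.put_targets(
    Rule="macie-finding-to-sns",
    Targets=[{"Id": "sns-target", "Arn": topic_arn}],
)
```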

A data science team needs storage for nightly log processing. The size and number of the logs are unknown, and the logs persist for only 24 hours.

What is the MOST cost-effective solution?

A.

Amazon S3 Glacier Deep Archive

B.

Amazon S3 Standard

C.

Amazon S3 Intelligent-Tiering

D.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
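
Whichever storage class is chosen, the 24-hour lifetime itself can be enforced with a lifecycle expiration rule. A minimal boto3 sketch, assuming a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Expire nightly log objects after 1 day so storage is paid for only ~24 hours.
s3.put_bucket_lifecycle_configuration(
    Bucket="nightly-log-processing",  # hypothetical bucket name
    LifecycleConfiguration={"Rules": [{
        "ID": "expire-after-1-day",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to every object
        "Expiration": {"Days": 1},
    }]},
)
```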

An adventure company has launched a new feature on its mobile app. Users can use the feature to upload their hiking and rafting photos and videos anytime. The photos and videos are stored in Amazon S3 Standard storage in an S3 bucket and are served through Amazon CloudFront.

The company needs to optimize the cost of the storage. A solutions architect discovers that most of the uploaded photos and videos are accessed infrequently after 30 days. However, some of the uploaded photos and videos are accessed frequently after 30 days. The solutions architect needs to implement a solution that maintains millisecond retrieval availability of the photos and videos at the lowest possible cost.

Which solution will meet these requirements?

A.

Configure S3 Intelligent-Tiering on the S3 bucket.

B.

Configure an S3 Lifecycle policy to transition image objects and video objects from S3 Standard to S3 Glacier Deep Archive after 30 days.

C.

Replace Amazon S3 with an Amazon Elastic File System (Amazon EFS) file system that is mounted on Amazon EC2 instances.

D.

Add a Cache-Control: max-age header to the S3 image objects and S3 video objects. Set the header to 30 days.
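
For illustration, a minimal boto3 sketch of option A's approach: transition objects into S3 Intelligent-Tiering immediately so per-object access monitoring moves infrequently accessed media to cheaper tiers while keeping millisecond retrieval. The bucket name is hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Move every object into S3 Intelligent-Tiering as soon as possible; the
# service then tiers each object individually based on its access pattern.
s3.put_bucket_lifecycle_configuration(
    Bucket="hiking-media",  # hypothetical bucket name
    LifecycleConfiguration={"Rules": [{
        "ID": "to-intelligent-tiering",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
    }]},
)
```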

A company stores a large volume of critical data in Amazon RDS for PostgreSQL tables. The company is developing several new features for an upcoming product launch. Some of the new features require many table alterations.

The company needs a solution to test the altered tables for several days. After testing, the solution must make the new features available to customers in production.

Which solution will meet these requirements with the HIGHEST availability?

A.

Create a new instance of the database in RDS for PostgreSQL to test the new features. When the testing is finished, take a backup of the test database, and restore the test database to the production database.

B.

Create new database tables in the production database to test the new features. When the testing is finished, copy the data from the older tables to the new tables. Delete the older tables, and rename the new tables accordingly.

C.

Create an Amazon RDS read replica to deploy a new instance of the database. Make updates to the database tables in the replica instance. When the testing is finished, promote the replica instance to become the new production instance.

D.

Use an Amazon RDS blue/green deployment to deploy a new test instance of the database. Make database table updates in the test instance. When the testing is finished, promote the test instance to become the new production instance.
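
A sketch of option D with boto3, assuming the RDS for PostgreSQL engine version supports blue/green deployments. The source ARN and names are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Create a blue/green deployment from the production instance.
bg = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="product-launch-test",
    Source="arn:aws:rds:us-east-1:123456789012:db:prod-postgres",  # hypothetical
)

# ... test table alterations against the green environment for several days ...

# Promote the green environment to production with a brief switchover window.
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier=bg["BlueGreenDeployment"]["BlueGreenDeploymentIdentifier"],
    SwitchoverTimeout=300,  # seconds
)
```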

An advertising company stores terabytes of data in an Amazon S3 data lake. The company wants to build its own foundation model (FM) and has deployed a training cluster on AWS. The company loads file-based data from Amazon S3 to the training cluster to train the FM. The company wants to reduce data loading time to optimize the overall deployment cycle.

The company needs a storage solution that is natively integrated with Amazon S3. The solution must be scalable and provide high throughput.

Which storage solution will meet these requirements?

A.

Mount an Amazon Elastic File System (Amazon EFS) file system to the training cluster. Use AWS DataSync to migrate data from Amazon S3 to the EFS file system to train the FM.

B.

Use an Amazon FSx for Lustre file system and Amazon S3 with Data Repository Association (DRA). Preload the data from Amazon S3 to the Lustre file system to train the FM.

C.

Attach Amazon Elastic Block Store (Amazon EBS) volumes to the training cluster. Load the data from Amazon S3 to the EBS volumes to train the FM.

D.

Use AWS DataSync to migrate the data from Amazon S3 to the training cluster as files. Train the FM on the local file-based data.
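
A sketch of option B with boto3: a persistent FSx for Lustre file system linked to the S3 data lake through a data repository association (DRA). Sizing, subnet, and bucket are hypothetical; preloading file contents is typically done afterward with the Lustre hsm_restore tooling.

```python
import boto3

fsx = boto3.client("fsx")

# PERSISTENT_2 deployment type is required for data repository associations.
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=4800,  # GiB, hypothetical sizing
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 250,
    },
)["FileSystem"]

# Link a file system path to the S3 data lake so S3 objects appear as files.
fsx.create_data_repository_association(
    FileSystemId=fs["FileSystemId"],
    FileSystemPath="/training-data",
    DataRepositoryPath="s3://company-data-lake",  # hypothetical bucket
    BatchImportMetaDataOnCreate=True,
)
```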

A company hosts a video streaming web application in a VPC. The company uses a Network Load Balancer (NLB) to handle TCP traffic for real-time data processing. There have been unauthorized attempts to access the application.

The company wants to improve application security with minimal architectural change to prevent unauthorized attempts to access the application.

Which solution will meet these requirements?

A.

Implement a series of AWS WAF rules directly on the NLB to filter out unauthorized traffic.

B.

Recreate the NLB with a security group to allow only trusted IP addresses.

C.

Deploy a second NLB in parallel with the existing NLB configured with a strict IP address allow list.

D.

Use AWS Shield Advanced to provide enhanced DDoS protection and prevent unauthorized access attempts.
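
A sketch of option B, assuming recreation is acceptable: at the time of writing, security groups can be attached to a Network Load Balancer only at creation time, which is why the NLB must be recreated. IDs and the CIDR range are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

sg = ec2.create_security_group(
    GroupName="nlb-trusted-clients",
    Description="Allow only trusted IP ranges",
    VpcId="vpc-0123456789abcdef0",  # hypothetical
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "trusted clients"}],
    }],
)

# Recreate the NLB with the security group attached.
elbv2.create_load_balancer(
    Name="video-streaming-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0"],
    SecurityGroups=[sg["GroupId"]],
)
```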

A financial services company plans to launch a new application on AWS to handle sensitive financial transactions. The company will deploy the application on Amazon EC2 instances. The company will use Amazon RDS for MySQL as the database. The company's security policies mandate that data must be encrypted at rest and in transit.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit.

B.

Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure IPsec tunnels for encryption in transit.

C.

Implement third-party application-level data encryption before storing data in Amazon RDS for MySQL. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit.

D.

Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure a VPN connection to enable private connectivity to encrypt data in transit.
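
A minimal boto3 sketch of the encryption-at-rest half (a KMS-encrypted RDS instance); identifiers are hypothetical. Encryption in transit is then enforced at the connection layer.

```python
import boto3

rds = boto3.client("rds")

# Encryption at rest: the volume, snapshots, and replicas are encrypted with
# the AWS managed KMS key for RDS.
rds.create_db_instance(
    DBInstanceIdentifier="transactions-db",  # hypothetical
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,  # let RDS manage the master secret
    StorageEncrypted=True,
    KmsKeyId="alias/aws/rds",
)
# In transit, clients connect over TLS; enforcing it can be done with the
# require_secure_transport parameter in a custom DB parameter group.
```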

A company wants to migrate hundreds of gigabytes of unstructured data from an on-premises location to an Amazon S3 bucket. The company has a 100-Mbps internet connection on premises. The company needs to encrypt the data in transit to the S3 bucket. The company will store new data directly in Amazon S3.

Which solution will meet these requirements?

A.

Use AWS Database Migration Service (AWS DMS) to synchronize the on-premises data to a destination S3 bucket.

B.

Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket.

C.

Use an AWS Snowball Edge device to migrate the data to an S3 bucket. Use an AWS CloudHSM key to encrypt the data on the Snowball Edge device.

D.

Set up an AWS Direct Connect connection between the on-premises location and AWS. Use the s3 cp command to move the data directly to an S3 bucket.
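
A sketch of option B's task setup, assuming a DataSync agent has already been activated on premises. Hostnames and ARNs are hypothetical; DataSync encrypts data in transit with TLS.

```python
import boto3

datasync = boto3.client("datasync")

# Source: on-premises NFS share reachable through the activated agent.
src = datasync.create_location_nfs(
    ServerHostname="nas.example.internal",
    Subdirectory="/export/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0abc"]},
)
# Destination: the target S3 bucket, accessed through an IAM role.
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::migration-target-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3-access"},
)
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="onprem-to-s3",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```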

A company uses AWS to run its e-commerce platform, which is critical to its operations and experiences a high volume of traffic and transactions. The company has configured a multi-factor authentication (MFA) device to secure its AWS account root user credentials. The company wants to ensure that it will not lose access to the root user account if the MFA device is lost.

Which solution will meet these requirements?

A.

Set up a backup administrator account that the company can use to log in if the company loses the MFA device.

B.

Add multiple MFA devices for the root user account to handle the disaster scenario.

C.

Create a new administrator account when the company cannot access the root account.

D.

Attach the administrator policy to another IAM user when the company cannot access the root account.

A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.

Which solution will meet these requirements?

A.

Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.

B.

Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.

C.

Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.

D.

Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
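
Option B's mechanism sketched in two boto3 calls; the cluster name is hypothetical. The same encryptionConfig block can also be supplied to create_cluster for a new cluster.

```python
import boto3

kms = boto3.client("kms")
eks = boto3.client("eks")

key_arn = kms.create_key(Description="EKS secrets envelope encryption")["KeyMetadata"]["Arn"]

# Enable envelope encryption of Kubernetes secrets stored in etcd.
eks.associate_encryption_config(
    clusterName="workloads-cluster",  # hypothetical
    encryptionConfig=[{
        "resources": ["secrets"],
        "provider": {"keyArn": key_arn},
    }],
)
```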

A company is planning to migrate a legacy application to AWS. The application currently uses NFS to communicate with an on-premises storage solution to store application data. The application cannot be modified to use any communication protocol other than NFS for this purpose.

Which storage solution should a solutions architect recommend for use after the migration?

A.

AWS DataSync

B.

Amazon Elastic Block Store (Amazon EBS)

C.

Amazon Elastic File System (Amazon EFS)

D.

Amazon EMR File System (Amazon EMRFS)
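
If Amazon EFS is chosen, a minimal provisioning sketch; the unmodified application keeps using plain NFSv4 mounts. IDs are hypothetical.

```python
import boto3

efs = boto3.client("efs")

# One mount target per Availability Zone is typical; IDs are hypothetical.
fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# The application then mounts the file system over standard NFSv4, e.g.:
#   sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/app-data
```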

A solutions architect needs to save a particular automated database snapshot from an Amazon RDS for Microsoft SQL Server DB instance for longer than the maximum retention period that automated snapshots allow.

Which solution will meet these requirements in the MOST operationally efficient way?

A.

Create a manual copy of the snapshot.

B.

Export the contents of the snapshot to an Amazon S3 bucket.

C.

Change the retention period of the snapshot to 45 days.

D.

Create a native SQL Server backup. Save the backup to an Amazon S3 bucket.
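
Option A's mechanism is a single call: copying an automated snapshot produces a manual snapshot, which is not subject to the automated retention window. Identifiers are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Automated snapshots carry an "rds:" prefix and are deleted when the
# retention window lapses; a manual copy is kept until explicitly deleted.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:sqlserver-prod-2026-01-15-06-00",  # hypothetical
    TargetDBSnapshotIdentifier="sqlserver-prod-keep-2026-01-15",
)
```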

A company uses Amazon EC2 instances to host its internal systems. As part of a deployment operation, an administrator tries to use the AWS CLI to terminate an EC2 instance. However, the administrator receives a 403 (Access Denied) error message.

The administrator is using an IAM role that has the following IAM policy attached (policy document not shown):

What is the cause of the unsuccessful request?

A.

The EC2 instance has a resource-based policy with a Deny statement.

B.

The principal has not been specified in the policy statement.

C.

The "Action" field does not grant the actions that are required to terminate the EC2 instance.

D.

The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.

A company is planning to deploy its application on an Amazon Aurora PostgreSQL Serverless v2 cluster. The application will receive large amounts of traffic. The company wants to optimize the storage performance of the cluster as the load on the application increases.

Which solution will meet these requirements MOST cost-effectively?

A.

Configure the cluster to use the Aurora Standard storage configuration.

B.

Configure the cluster storage type as Provisioned IOPS.

C.

Configure the cluster storage type as General Purpose.

D.

Configure the cluster to use the Aurora I/O-Optimized storage configuration.
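
Switching an existing cluster to the I/O-Optimized configuration is a single modification call; a sketch with a hypothetical cluster identifier:

```python
import boto3

rds = boto3.client("rds")

# "aurora-iopt1" trades a higher instance/storage rate for zero per-I/O
# charges, which tends to win as I/O load grows.
rds.modify_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",  # hypothetical
    StorageType="aurora-iopt1",
    ApplyImmediately=True,
)
```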

A company has a batch processing application that runs every day. The process typically takes an average of 3 hours to complete. The application can handle interruptions and can resume the process after a restart. Currently, the company runs the application on Amazon EC2 On-Demand Instances. The company wants to optimize costs while maintaining the same performance level.

Which solution will meet these requirements MOST cost-effectively?

A.

Purchase a 1-year EC2 Instance Savings Plan for the appropriate instance family and size to meet the requirements of the application.

B.

Use EC2 On-Demand Capacity Reservations based on the appropriate instance family and size to meet the requirements of the application. Run the EC2 instances in an Auto Scaling group.

C.

Determine the appropriate instance family and size to meet the requirements of the application. Convert the application to run on AWS Batch with EC2 On-Demand Instances. Purchase a 1-year Compute Savings Plan.

D.

Determine the appropriate instance family and size to meet the requirements of the application. Convert the application to run on AWS Batch with EC2 Spot Instances.
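
A sketch of option D's setup: an AWS Batch managed compute environment backed by Spot capacity. Because the job is interruptible and restartable, Spot interruptions only delay the nightly run. ARNs, subnets, and sizing are hypothetical.

```python
import boto3

batch = boto3.client("batch")

batch.create_compute_environment(
    computeEnvironmentName="nightly-batch-spot",
    type="MANAGED",
    computeResources={
        "type": "SPOT",
        "allocationStrategy": "SPOT_CAPACITY_OPTIMIZED",
        "minvCpus": 0,           # scale to zero between runs
        "maxvCpus": 256,
        "instanceTypes": ["optimal"],
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)
```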

A company runs business applications on AWS. The company uses 50 AWS accounts, thousands of VPCs, and three AWS Regions across the United States and Europe. The company has an existing AWS Direct Connect connection that connects an on-premises data center to a single Region.

A solutions architect needs to establish network connectivity between the on-premises data center and the remaining two Regions. The solutions architect must also establish connectivity between the VPCs. On-premises users and applications must be able to connect to applications that run in the VPCs. The solutions architect creates a transit gateway in each Region and configures the transit gateways as inter-Region peers.

What should the solutions architect do next to meet these requirements?

A.

Create a private virtual interface (VIF) with a gateway type of virtual private gateway. Configure the private VIF to use a virtual private gateway that is associated with one of the VPCs.

B.

Create a private virtual interface (VIF) to a new Direct Connect gateway. Associate the new Direct Connect gateway with a virtual private gateway in each VPC.

C.

Create a transit virtual interface (VIF) with a gateway association to a new Direct Connect gateway. Associate each transit gateway with the new Direct Connect gateway.

D.

Create an AWS Site-to-Site VPN connection that uses a public virtual interface (VIF) for the Direct Connect connection. Attach the Site-to-Site VPN connection to the transit gateways.
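
A sketch of option C's wiring with boto3: a Direct Connect gateway, a transit VIF on the existing connection, and associations to each Region's transit gateway. All identifiers and ASNs are hypothetical.

```python
import boto3

dx = boto3.client("directconnect")

dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="global-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Transit VIF on the existing Direct Connect connection.
dx.create_transit_virtual_interface(
    connectionId="dxcon-0abc123",  # hypothetical
    newTransitVirtualInterface={
        "virtualInterfaceName": "onprem-transit-vif",
        "vlan": 100,
        "asn": 65000,  # on-premises BGP ASN
        "directConnectGatewayId": dxgw["directConnectGatewayId"],
    },
)

# Associate each Region's transit gateway with the Direct Connect gateway.
for tgw_id in ["tgw-0east1", "tgw-0west1", "tgw-0eu1"]:  # hypothetical IDs
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=dxgw["directConnectGatewayId"],
        gatewayId=tgw_id,
    )
```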

A company uses Amazon S3 to store customer data that contains personally identifiable information (PII) attributes. The company needs to make the customer information available to company resources through an AWS Glue Catalog. The company needs to have fine-grained access control for the data so that only specific IAM roles can access the PII data.

Which solution will meet these requirements?

A.

Create one IAM policy that grants access to PII. Create a second IAM policy that grants access to non-PII data. Assign the PII policy to the specified IAM roles.

B.

Create one IAM role that grants access to PII. Create a second IAM role that grants access to non-PII data. Assign the PII policy to the specified IAM roles.

C.

Use AWS Lake Formation to provide the specified IAM roles access to the PII data.

D.

Use AWS Glue to create one view for PII data. Create a second view for non-PII data. Provide the specified IAM roles access to the PII view.
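
A sketch of option C with boto3: Lake Formation column-level grants let only specified roles select PII columns. Database, table, column, and role names are hypothetical.

```python
import boto3

lf = boto3.client("lakeformation")

# Grant column-level SELECT: only this role can read the PII columns; other
# roles would receive a grant that excludes them.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/pii-readers"},
    Resource={"TableWithColumns": {
        "DatabaseName": "customers",
        "Name": "profiles",
        "ColumnNames": ["name", "email", "ssn"],  # PII columns included
    }},
    Permissions=["SELECT"],
)
```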

A company runs a custom application on Amazon EC2 On-Demand Instances. The application has frontend nodes that must run 24/7. The backend nodes only need to run for short periods depending on the workload.

Frontend nodes accept jobs and place them in queues. Backend nodes asynchronously process jobs from the queues, and jobs can be restarted. The company wants to scale infrastructure based on workload, using the most cost-effective option.

Which solution meets these requirements MOST cost-effectively?

A.

Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.

B.

Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.

C.

Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.

D.

Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
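
For illustration, a backend Auto Scaling group that runs entirely on Spot capacity, as in option B; the frontend would run on instances covered by Reserved Instance pricing. Names, subnets, and instance types are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Jobs are queued and restartable, so Spot interruptions are tolerable.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="backend-workers",
    MinSize=0,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0123,subnet-0456",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "backend-worker-template",
                "Version": "$Latest",
            },
            "Overrides": [{"InstanceType": "m6i.large"}, {"InstanceType": "m5.large"}],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,  # 100% Spot above base
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```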

A company is building a serverless application to process orders from an e-commerce site. The application needs to handle bursts of traffic during peak usage hours and to maintain high availability. The orders must be processed asynchronously in the order the application receives them.

Which solution will meet these requirements?

A.

Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use an AWS Lambda function to process the orders.

B.

Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to receive orders. Use an AWS Lambda function to process the orders.

C.

Use an Amazon Simple Queue Service (Amazon SQS) standard queue to receive orders. Use AWS Batch jobs to process the orders.

D.

Use an Amazon Simple Notification Service (Amazon SNS) topic to receive orders. Use AWS Batch jobs to process the orders.
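
A minimal sketch of option B: an SQS FIFO queue preserves arrival order, and a Lambda event source mapping scales polling with traffic bursts. Queue and function names are hypothetical.

```python
import boto3

sqs = boto3.client("sqs")
lam = boto3.client("lambda")

# FIFO queue preserves order; content-based deduplication avoids
# double-processing on retried sends.
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Lambda polls the queue and invokes the (pre-existing) function per batch.
lam.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-order",  # hypothetical function
    BatchSize=10,
)
```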

A company hosts its order processing system on AWS. The architecture consists of a frontend and a backend. The frontend includes an Application Load Balancer (ALB) and Amazon EC2 instances in an Auto Scaling group. The backend includes an EC2 instance and an Amazon RDS for MySQL database.

To prevent incomplete or lost orders, the company wants to ensure that order states are always preserved. The company wants to ensure that every order will eventually be processed, even after an outage or pause. Every order must be processed exactly once.

Which solution will meet these requirements?

A.

Create an Auto Scaling group and an ALB for the backend. Create a read replica for the RDS database in a second Availability Zone. Update the backend RDS endpoint.

B.

Create an Auto Scaling group and an ALB for the backend. Create an Amazon RDS proxy in front of the RDS database. Update the backend EC2 instance to use the Amazon RDS proxy endpoint.

C.

Create an Auto Scaling group for the backend. Configure the backend EC2 instances to consume messages from an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure a dead-letter queue (DLQ) for the SQS queue.

D.

Create an AWS Lambda function to replace the backend EC2 instance. Subscribe the function to an Amazon Simple Notification Service (Amazon SNS) topic. Configure the frontend to send orders to the SNS topic.
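
Option C adds a dead-letter queue so repeatedly failing orders are captured rather than lost. A sketch of the FIFO queue plus DLQ redrive configuration; names and the receive count are hypothetical. Note that a FIFO queue's DLQ must itself be FIFO.

```python
import json
import boto3

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(
    QueueName="orders-dlq.fifo", Attributes={"FifoQueue": "true"}
)["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        # After 5 failed receives, move the message to the DLQ for inspection.
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        ),
    },
)
```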

A company has an application that runs on a single Amazon EC2 instance. The application uses a MySQL database that runs on the same EC2 instance. The company needs a highly available and automatically scalable solution to handle increased traffic.

Which solution will meet these requirements?

A.

Deploy the application to EC2 instances that run in an Auto Scaling group behind an Application Load Balancer. Create an Amazon Redshift cluster that has multiple MySQL-compatible nodes.

B.

Deploy the application to EC2 instances that are configured as a target group behind an Application Load Balancer. Create an Amazon RDS for MySQL cluster that has multiple instances.

C.

Deploy the application to EC2 instances that run in an Auto Scaling group behind an Application Load Balancer. Create an Amazon Aurora Serverless MySQL cluster for the database layer.

D.

Deploy the application to EC2 instances that are configured as a target group behind an Application Load Balancer. Create an Amazon ElastiCache (Redis OSS) cluster that uses the MySQL connector.
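
A sketch of the database half of option C: an Aurora MySQL cluster with Serverless v2 scaling. Identifiers and capacity bounds are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Cluster-level Serverless v2 scaling configuration (in ACUs).
rds.create_db_cluster(
    DBClusterIdentifier="app-db",  # hypothetical
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)
# Instances in the cluster use the special serverless instance class.
rds.create_db_instance(
    DBInstanceIdentifier="app-db-writer",
    DBClusterIdentifier="app-db",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)
```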

An ecommerce company is redesigning a product catalog system to handle millions of products and provide fast access to product information. The system needs to store structured product data such as product name, price, description, and category. The system also needs to store unstructured data such as high-resolution product videos and user manuals. The architecture must be highly available and must be able to handle sudden spikes in traffic during large-scale sales events.

Which solution will meet these requirements?

A.

Use an Amazon RDS Multi-AZ deployment to store product information. Store product videos and user manuals in Amazon S3.

B.

Use Amazon DynamoDB to store product information. Store product videos and user manuals in Amazon S3.

C.

Store all product information, including product videos and user manuals, in Amazon DynamoDB.

D.

Deploy an Amazon DocumentDB (with MongoDB compatibility) cluster to store all product information, product videos, and user manuals.
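
A sketch of the structured-data half of option B: a DynamoDB table in on-demand mode absorbs sudden sale-event spikes without pre-provisioning, while items store only the S3 object keys for the large media. Names are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="product-catalog",  # hypothetical
    AttributeDefinitions=[
        {"AttributeName": "product_id", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "product_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity for spiky traffic
)
```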

A company is building a stock trading application in the AWS Cloud. The company requires a highly available solution that provides low-latency access to block storage across multiple Availability Zones.

Which solution will meet these requirements?

A.

Use an Amazon S3 bucket and an S3 File Gateway as shared storage for the application.

B.

Create an Amazon EC2 instance in each Availability Zone. Attach a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume to each EC2 instance. Create a Bash script to sync data between volumes.

C.

Use an Amazon FSx for NetApp ONTAP Multi-AZ file system to access data by using the iSCSI protocol.

D.

Create an Amazon EC2 instance in each Availability Zone. Attach a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume to each EC2 instance. Create a Python script to sync data between volumes.
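
A sketch of option C's file system creation; the application would then connect to iSCSI LUNs carved from the ONTAP storage virtual machine. IDs and sizing are hypothetical.

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ ONTAP file system; block storage is exposed to clients over iSCSI.
fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB, hypothetical
    SubnetIds=["subnet-az1", "subnet-az2"],  # hypothetical, one per AZ
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 256,  # MBps
        "PreferredSubnetId": "subnet-az1",
    },
)
```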

A company receives data transfers from a small number of external clients that connect to SFTP software running on an Amazon EC2 instance. The clients upload data with an SFTP client and authenticate with SSH keys. Every hour, an automated script transfers new uploads to an Amazon S3 bucket for processing.

The company wants to move the transfer process to an AWS managed service and to reduce the time required to start data processing. The company wants to retain the existing user management and SSH key generation process. The solution must not require clients to make significant changes to their existing processes.

Which solution will meet these requirements?

A.

Reconfigure the script that runs on the EC2 instance to run every 15 minutes. Create an S3 Event Notifications rule for all new object creation events. Set an Amazon Simple Notification Service (Amazon SNS) topic as the destination.

B.

Create an AWS Transfer Family SFTP server that uses the existing S3 bucket as a target. Use service-managed users to enable authentication.

C.

Require clients to add the AWS DataSync agent into their local environments. Create an IAM user for each client that has permission to upload data to the target S3 bucket.

D.

Create an AWS Transfer Family SFTP connector that has permission to access the target S3 bucket for each client. Store credentials in AWS Systems Manager. Create an IAM role to allow the SFTP connector to securely use the credentials.
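
A sketch of the Transfer Family pattern in option B, reusing each client's existing SSH public key through service-managed users. All names, ARNs, and keys are hypothetical.

```python
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",  # uploads land directly in S3, so processing can start at once
    IdentityProviderType="SERVICE_MANAGED",
)

# Recreate each existing user and import the SSH public key the client
# already uses, so clients keep their current keys and workflow.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="client-a",
    Role="arn:aws:iam::123456789012:role/transfer-s3-access",
    HomeDirectory="/upload-bucket/client-a",
)
transfer.import_ssh_public_key(
    ServerId=server["ServerId"],
    UserName="client-a",
    SshPublicKeyBody="ssh-rsa AAAA... client-a-key",  # the client's existing key
)
```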

A company wants to migrate a Microsoft SQL Server database server from an on-premises data center to AWS. The company needs access to the operating system of the SQL Server database.

Which solution will meet these requirements?

A.

Migrate the database to Amazon Aurora Serverless.

B.

Migrate the database to Amazon RDS for SQL Server.

C.

Migrate the database to Amazon EC2 instances that run SQL Server.

D.

Migrate the database to Amazon Redshift.

A company plans to rehost an application to Amazon EC2 instances that use Amazon Elastic Block Store (Amazon EBS) as the attached storage.

A solutions architect must design a solution to ensure that all newly created Amazon EBS volumes are encrypted by default. The solution must also prevent the creation of unencrypted EBS volumes.

Which solution will meet these requirements?

A.

Configure the EC2 account attributes to always encrypt new EBS volumes.

B.

Use AWS Config. Configure the encrypted-volumes identifier. Apply the default AWS Key Management Service (AWS KMS) key.

C.

Configure AWS Systems Manager to create encrypted copies of the EBS volumes. Reconfigure the EC2 instances to use the encrypted volumes.

D.

Create a customer managed key in AWS Key Management Service (AWS KMS). Configure AWS Migration Hub to use the key when the company migrates workloads.
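
Option A corresponds to the account-level "EBS encryption by default" attribute, which both encrypts new volumes and blocks unencrypted creation in the Region. A minimal boto3 sketch:

```python
import boto3

ec2 = boto3.client("ec2")

# Account-level, per-Region attribute: every new EBS volume is encrypted.
ec2.enable_ebs_encryption_by_default()

# Optionally pick the default KMS key (the AWS managed EBS key is shown;
# a customer managed key ARN works as well).
ec2.modify_ebs_default_kms_key_id(KmsKeyId="alias/aws/ebs")

print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])  # True
```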

A company is planning to migrate an on-premises online transaction processing (OLTP) database that uses MySQL to an AWS managed database management system. Several reporting and analytics applications use the on-premises database heavily on weekends and at the end of each month. The cloud-based solution must be able to handle read-heavy surges during weekends and at the end of each month.

Which solution will meet these requirements?

A.

Migrate the database to an Amazon Aurora MySQL cluster. Configure Aurora Auto Scaling to use replicas to handle surges.

B.

Migrate the database to an Amazon EC2 instance that runs MySQL. Use an EC2 instance type that has ephemeral storage. Attach Amazon EBS Provisioned IOPS SSD (io2) volumes to the instance.

C.

Migrate the database to an Amazon RDS for MySQL database. Configure the RDS for MySQL database for a Multi-AZ deployment, and set up auto scaling.

D.

Migrate the database to Amazon Redshift. Use Amazon Redshift as the database for both OLTP and analytics applications.
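
A sketch of option A's read-replica auto scaling through the Application Auto Scaling API; the cluster name and thresholds are hypothetical.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Scale Aurora read replicas on reader CPU so weekend and month-end
# reporting surges add readers automatically.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora",  # hypothetical cluster
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)
aas.put_scaling_policy(
    PolicyName="reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```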

A company hosts an application that allows authorized users to upload and download documents. The application uses Amazon EC2 instances and an Amazon Elastic File System (Amazon EFS) file system.

The company plans to deploy the application into a second AWS Region. The company will launch a new EFS file system and a new set of EC2 instances in the second Region. A solutions architect must develop a highly available and fault-tolerant solution to establish two-way synchronization across the Regions.

Which solution will meet these requirements?

A.

Create an Amazon EFS VPC endpoint for the original EFS file system in the second Region. Mount both the original and the new EFS file system to the new set of EC2 instances in the second Region. Configure an rsync cron job to run every 5 minutes.

B.

Set up EFS replication between the two EFS file systems. Set the new file system as the source. Set the original file system in the first Region as the destination. Turn off overwrite protection for the destination file system.

C.

Set up one AWS DataSync agent in each Region. Configure Amazon EFS VPC endpoints, EFS transfer locations, and EFS transfer tasks with opposite directions on the two DataSync agents.

D.

Mount the EFS file system in the second Region to the new set of EC2 instances in the second Region. Use AWS Transfer Family to establish SFTP access to the EFS file system in the original Region. Configure an rsync cron job to run every 5 minutes.
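
Option C's two-task pattern can be sketched with boto3; the snippet shows one direction using EFS locations (DataSync can also transfer between EFS file systems without an agent, which simplifies the sketch). A mirror task in the other Region covers the opposite direction. All ARNs are hypothetical.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

loc_east = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-east",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-east",
        "SecurityGroupArns": ["arn:aws:ec2:us-east-1:123456789012:security-group/sg-east"],
    },
)
loc_west = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-west-2:123456789012:file-system/fs-west",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-west-2:123456789012:subnet/subnet-west",
        "SecurityGroupArns": ["arn:aws:ec2:us-west-2:123456789012:security-group/sg-west"],
    },
)
# West-to-east task; an equivalent task in the other Region runs east-to-west.
datasync.create_task(
    SourceLocationArn=loc_west["LocationArn"],
    DestinationLocationArn=loc_east["LocationArn"],
    Name="west-to-east-sync",
)
```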

A company runs an application on Amazon EC2 instances that have instance store volumes attached. The application uses Amazon Elastic File System (Amazon EFS) to store files that are shared across a cluster of Linux servers. The shared files are at least 1 GB in size.

The company accesses the files often for the first 7 days after creation. The files must remain readily available after the first 7 days.

The company wants to optimize costs for the application.

Which solution will meet these requirements?

A.

Configure an Amazon S3 File Gateway (AWS Storage Gateway) to cache frequently accessed files locally. Store older files in Amazon S3.

B.

Move the files from Amazon EFS, and store the files locally on each EC2 instance.

C.

Configure a lifecycle policy to move the files to the EFS Infrequent Access (IA) storage class after 7 days.

D.

Deploy AWS DataSync to automatically move files older than 7 days to Amazon S3 Glacier Deep Archive.
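
Option C is a single API call; a sketch with a hypothetical file system ID:

```python
import boto3

efs = boto3.client("efs")

# Files untouched for 7 days move to EFS Infrequent Access, which is cheaper
# per GB but still immediately readable over the same mount.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # hypothetical
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_7_DAYS"},
        # Optional: move a file back to Standard when it is accessed again.
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```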
