
SOA-C02 exam dumps

Certifiedumps simplifies your SOA-C02 preparation with exam-focused content that strengthens your knowledge of AWS CLI, CloudFormation, and security configurations.



Amazon SOA-C02 Exam
AWS Certified SysOps Administrator - Associate
Questions & Answers (Demo Version - Limited Content)

Thank you for downloading the SOA-C02 exam PDF demo.
Get the full file: https://www.certifiedumps.com/amazon/soa-c02-dumps.html

Questions & Answers PDF, Version: 16.0
Topic 1, Mix Questions

Question: 1
A SysOps administrator creates an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that uses AWS Fargate. The cluster is deployed successfully. The SysOps administrator needs to manage the cluster by using the kubectl command line tool. Which of the following must be configured on the SysOps administrator's machine so that kubectl can communicate with the cluster API server?

A. The kubeconfig file
B. The kube-proxy Amazon EKS add-on
C. The Fargate profile
D. The eks-connector.yaml file

Answer: A

Explanation:
The kubeconfig file stores cluster authentication information, which is required to make requests to the Amazon EKS cluster API server. The kubeconfig file must be configured on the SysOps administrator's machine for kubectl to communicate with the cluster API server.
https://aws.amazon.com/blogs/developer/running-a-kubernetes-job-in-amazon-eks-on-aws-fargate-using-aws-stepfunctions/

Question: 2
A SysOps administrator needs to configure automatic rotation for Amazon RDS database credentials. The credentials must rotate every 30 days. The solution must integrate with Amazon RDS. Which solution will meet these requirements with the LEAST operational overhead?

A. Store the credentials in AWS Systems Manager Parameter Store as a secure string. Configure automatic rotation with a rotation interval of 30 days.
B. Store the credentials in AWS Secrets Manager. Configure automatic rotation with a rotation interval of 30 days.
C. Store the credentials in a file in an Amazon S3 bucket. Deploy an AWS Lambda function to automatically rotate the credentials every 30 days.

D. Store the credentials in AWS Secrets Manager. Deploy an AWS Lambda function to automatically rotate the credentials every 30 days.

Answer: B

Explanation:
Storing the credentials in AWS Secrets Manager and configuring automatic rotation with a 30-day rotation interval meets the requirements with the least operational overhead. Secrets Manager rotates the credentials at the specified interval, so no additional AWS Lambda function or manual rotation is needed. Secrets Manager is also integrated with Amazon RDS, so the credentials can be used with the RDS database directly.

Question: 3
A company has an application that runs only on Amazon EC2 Spot Instances. The instances run in an Amazon EC2 Auto Scaling group with scheduled scaling actions. However, the capacity does not always increase at the scheduled times, and instances terminate many times a day. A SysOps administrator must ensure that the instances launch on time and have fewer interruptions. Which action will meet these requirements?

A. Specify the capacity-optimized allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.

B. Specify the capacity-optimized allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.
C. Specify the lowest-price allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.
D. Specify the lowest-price allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.

Answer: A

Explanation:
Specifying the capacity-optimized allocation strategy and adding more instance types to the Auto Scaling group best meets the requirements: the strategy launches instances from the Spot capacity pools with the most available capacity, and a wider set of instance types gives it more pools to draw from. Increasing the size of the instances will not necessarily improve launch time or reduce interruptions, because Spot Instances can still be interrupted regardless of instance size.

Question: 4
A company stores its data in an Amazon S3 bucket. The company is required to classify the data and find any sensitive personal information in its S3 files. Which solution will meet these requirements?

A. Create an AWS Config rule to discover sensitive personal information in the S3 files and mark them as noncompliant.
B. Create an S3 event-driven artificial intelligence/machine learning (AI/ML) pipeline to classify sensitive personal information by using Amazon Rekognition.

C. Enable Amazon GuardDuty. Configure S3 protection to monitor all data inside Amazon S3.
D. Enable Amazon Macie. Create a discovery job that uses the managed data identifiers.

Answer: D

Explanation:
Amazon Macie is a security service designed to help organizations find, classify, and protect sensitive data stored in Amazon S3. Macie uses machine learning to automatically discover and classify sensitive data. Creating a discovery job with the managed data identifiers allows Macie to identify sensitive personal information in the S3 files and classify it accordingly. AWS Config and Amazon GuardDuty do not meet this requirement because they are not designed to classify and protect data.

Question: 5
A company has an application that customers use to search for records on a website. The application's data is stored in an Amazon Aurora DB cluster. The application's usage varies by season and by day of the week. The website's popularity is increasing, and the website is experiencing slower performance because of increased load on the DB cluster during periods of peak activity. The application logs show that the performance issues occur when users are searching for information. The same search is rarely performed multiple times. A SysOps administrator must improve the performance of the platform by using a solution that maximizes resource efficiency. Which solution will meet these requirements?

A. Deploy an Amazon ElastiCache for Redis cluster in front of the DB cluster. Modify the application to check the cache before the application issues new queries to the database. Add the results of any queries to the cache.
B. Deploy an Aurora Replica for the DB cluster. Modify the application to use the reader endpoint for search operations. Use Aurora Auto Scaling to scale the number of replicas based on load.
C. Use Provisioned IOPS on the storage volumes that support the DB cluster to improve performance sufficiently to support the peak load on the application.
D. Increase the instance size in the DB cluster to a size that is sufficient to support the peak load on the application. Use Aurora Auto Scaling to scale the instance size based on load.

Answer: A

Explanation:
Understand the problem: the application experiences slower performance during peak activity because of increased load on the Amazon Aurora DB cluster, primarily during search operations. The goal is to improve performance while maximizing resource efficiency, which implies a cost-effective and scalable option.

Evaluate the options:
Option A: Deploy an Amazon ElastiCache for Redis cluster. ElastiCache for Redis is a managed in-memory caching service that can significantly reduce database load by caching frequently accessed data. By checking the cache before querying the database, repeated searches are served from the cache, reducing the number of database reads. This is efficient and cost-effective because it reduces database load and improves response times.
Option B: Deploy an Aurora Replica and use Auto Scaling. Replicas help distribute read traffic, and Aurora Auto Scaling can adjust the number of replicas based on load.

However, this option is less resource-efficient than caching because every search still queries the database.
Option C: Use Provisioned IOPS. Provisioned IOPS improves the underlying storage performance with fast, consistent I/O, but it does not address the inefficiency of handling repeated searches directly.
Option D: Increase the instance size and use Auto Scaling. A larger instance provides more resources for peak load, but this option can be costly and is less efficient than caching for handling repeated searches.

Select the best solution: Option A leverages caching to reduce database load, which directly addresses repeated searches causing performance problems. Caching is generally more resource-efficient and cost-effective than scaling database instances or storage. Using ElastiCache for Redis aligns with best practices for improving application performance by offloading repetitive read queries from the database.

Reference: Amazon ElastiCache for Redis documentation; Amazon Aurora documentation; AWS Auto Scaling.

Question: 6
The security team is concerned because the number of AWS Identity and Access Management (IAM) policies being used in the environment is increasing. The team tasked a SysOps administrator to report on the current number of IAM policies in use and the total available IAM policies. Which AWS service should the administrator use to check how current IAM policy usage compares to current service limits?

A. AWS Trusted Advisor
B. Amazon Inspector
C. AWS Config
D. AWS Organizations

Answer: A

Explanation:
Option A: AWS Trusted Advisor provides real-time guidance to help you provision resources following AWS best practices. It includes a service limits check that alerts you when you approach the limits of your AWS service usage, including IAM policies.
Option B: Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It does not report on IAM policy usage.
Option C: AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. While useful for compliance, it does not compare usage against service limits.
Option D: AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. It does not provide insights into IAM policy limits.

AWS Trusted Advisor is the correct answer because its service limits check can report the current number of IAM policies in use and compare it to the service limits.

Reference: AWS Trusted Advisor documentation; IAM service limits.

Question: 7
A company has a stateless application that is hosted on a fleet of 10 Amazon EC2 On-Demand Instances in an Auto Scaling group. A minimum of 6 instances are needed to meet service requirements. Which action will maintain uptime for the application MOST cost-effectively?

A. Use a Spot Fleet with an On-Demand capacity of 6 instances.
B. Update the Auto Scaling group with a minimum of 6 On-Demand Instances and a maximum of 10 On-Demand Instances.
C. Update the Auto Scaling group with a minimum of 1 On-Demand Instance and a maximum of 6 On-Demand Instances.
D. Use a Spot Fleet with a target capacity of 6 instances.

Answer: A

Explanation:
Option A: A Spot Fleet can request a combination of On-Demand and Spot Instances. Setting an On-Demand capacity of 6 instances guarantees the required capacity while leveraging lower-cost Spot Instances to meet additional demand.
Option B: This ensures the minimum required capacity but does not optimize costs, because it uses only On-Demand Instances.
Option C: This does not meet the requirement of maintaining at least 6 instances at all times.
Option D: This relies entirely on Spot Instances, which may not always be available, risking insufficient capacity.

Using a Spot Fleet with a combination of On-Demand and Spot Instances is therefore the most cost-effective way to maintain the required minimum capacity.

Reference: Amazon EC2 Auto Scaling; Amazon EC2 Spot Instances; Spot Fleet documentation.

Question: 8
A SysOps administrator has launched a large general purpose Amazon EC2 instance to regularly process large data files. The instance has an attached 1 TB General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. The instance also is EBS-optimized. To save costs, the SysOps administrator stops the instance each evening and restarts the instance each morning. When data processing is active, Amazon CloudWatch metrics on the instance show a consistent 3,000 VolumeReadOps. The SysOps administrator must improve the I/O performance while ensuring data integrity. Which action will meet these requirements?

A. Change the instance type to a large, burstable, general purpose instance.
B. Change the instance type to an extra large general purpose instance.
C. Increase the EBS volume to a 2 TB General Purpose SSD (gp2) volume.
D. Move the data that resides on the EBS volume to the instance store.

Answer: C

Explanation:
Option A: Burstable instances provide a baseline level of CPU performance with the ability to burst higher when needed, but this does not address the I/O performance of the EBS volume.
Option B: A larger instance type might improve overall performance, but it also does not directly address the volume's I/O performance.
Option C: Increasing the size of a General Purpose SSD (gp2) volume increases its IOPS; gp2 baseline performance scales with volume size, so a larger volume delivers higher baseline IOPS.
Option D: Instance store volumes provide high I/O performance but are ephemeral: data is lost when the instance stops or terminates, so this does not ensure data integrity.

Increasing the EBS volume to 2 TB provides higher IOPS, improving I/O performance while maintaining data integrity.

Reference: Amazon EBS volume types; General Purpose SSD (gp2) volumes.

Question: 9
With the threat of ransomware viruses encrypting and holding company data hostage, which action should be taken to protect an Amazon S3 bucket?

A. Deny POST, PUT, and DELETE requests on the bucket.
B. Enable server-side encryption on the bucket.
C. Enable Amazon S3 versioning on the bucket.
D. Enable snapshots on the bucket.

Answer: C

Explanation:
Option A: Denying these actions would prevent any uploads or modifications to the bucket, making it unusable.
Option B: Server-side encryption protects data at rest but does not prevent ransomware from overwriting objects with encrypted copies.
Option C: S3 versioning keeps multiple versions of an object in the bucket. If a file is overwritten or encrypted by ransomware, previous versions of the file can still be accessed.
Option D: Amazon S3 does not have a snapshot feature; this option is not applicable.

Enabling S3 versioning ensures that previous, unencrypted versions of objects are preserved, providing a safeguard against ransomware by allowing recovery of earlier versions.

Reference: Amazon S3 versioning; best practices for protecting data with Amazon S3.

Question: 10
A SysOps administrator is evaluating Amazon Route 53 DNS options to address concerns about high availability for an on-premises website. The website consists of two servers: a primary active server and a secondary passive server. Route 53 should route traffic to the primary server if the associated health check returns 2xx or 3xx HTTP codes. All other traffic should be directed to the secondary passive server. The failover record type, set ID, and routing policy have been set appropriately for both primary and secondary servers. Which next step should be taken to configure Route 53?

A. Create an A record for each server. Associate the records with the Route 53 HTTP health check.
B. Create an A record for each server. Associate the records with the Route 53 TCP health check.
C. Create an alias record for each server with Evaluate Target Health set to Yes. Associate the records with the Route 53 HTTP health check.
D. Create an alias record for each server with Evaluate Target Health set to Yes. Associate the records with the Route 53 TCP health check.

Answer: C

Explanation:
To configure Route 53 for high availability with failover between a primary and a secondary server:
1. Create health checks. Create HTTP health checks for both the primary and secondary servers, configured to look for HTTP 2xx or 3xx status codes. (Reference: Creating and Updating Health Checks)
2. Create alias records. Create an alias record for each server, set Evaluate Target Health to Yes, and associate each record with that server's HTTP health check. (Reference: Creating Records by Using the Amazon Route 53 Console)
3. Set the routing policy. Ensure the routing policy for both records is set to Failover, assign appropriate set IDs, and configure the records as the primary and secondary failover records. (Reference: Route 53 Routing Policies)
4. Test the configuration. Verify that when the primary server's health check fails, traffic is routed to the secondary server. (Reference: Testing Failover)

Question: 11
A SysOps administrator noticed that a large number of Elastic IP addresses are being created on the company's AWS account, but they are not being associated with Amazon EC2 instances, and are incurring Elastic IP address charges in the monthly bill. How can the administrator identify who is creating the Elastic IP addresses?

A. Attach a cost-allocation tag to each requested Elastic IP address with the IAM user name of the developer who creates it.
B. Query AWS CloudTrail logs by using Amazon Athena to search for Elastic IP address events.
C. Create a CloudWatch alarm on the EIPCreated metric and send an Amazon SNS notification when the alarm triggers.
D. Use Amazon Inspector to get a report of all Elastic IP addresses created in the last 30 days.

Answer: B

Explanation:
To identify who is creating the Elastic IP addresses:
1. Enable CloudTrail logging. Ensure AWS CloudTrail is enabled to log all API activity in the account. (Reference: Setting Up AWS CloudTrail)
2. Create an Athena table for the CloudTrail logs. Set up an Athena table that points to the S3 bucket where the CloudTrail logs are stored. (Reference: Creating Tables in Athena)
3. Query the CloudTrail logs. Use Athena to run SQL queries that search for AllocateAddress events, which represent the creation of Elastic IP addresses. Example query:

SELECT userIdentity.userName, eventTime, eventSource, eventName, requestParameters
FROM cloudtrail_logs
WHERE eventName = 'AllocateAddress';

(Reference: Analyzing AWS CloudTrail Logs)
4. Review the results to identify which IAM user or role is creating the Elastic IP addresses. (Reference: AWS CloudTrail Log Analysis)

Question: 12
A company has an Amazon CloudFront distribution that uses an Amazon S3 bucket as its origin. During a review of the access logs, the company determines that some requests are going directly to the S3 bucket by using the website hosting endpoint. A SysOps administrator must secure the S3 bucket to allow requests only from CloudFront. What should the SysOps administrator do to meet this requirement?

A. Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Remove access to and from other principals in the S3 bucket policy. Update the S3 bucket policy to allow access only from the OAI.
B. Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
C. Create an origin access identity (OAI) in CloudFront. Associate the OAI with the distribution. Update the S3 bucket policy to allow access only from the OAI. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.
D. Update the S3 bucket policy to allow access only from the CloudFront distribution. Remove access to and from other principals in the S3 bucket policy. Disable website hosting. Create a new origin, and specify the S3 bucket as the new origin. Update the distribution behavior to use the new origin. Remove the existing origin.

Answer: A

Explanation:
To secure the S3 bucket so that only CloudFront can access it:
1. Create an OAI in CloudFront. In the CloudFront console, create an origin access identity (OAI) and associate it with the distribution. (Reference: Restricting Access to S3 Buckets)
2. Update the S3 bucket policy. Modify the bucket policy to allow access only from the OAI: add a statement granting the OAI permission to get objects from the bucket, and remove any other public access permissions. Example policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}

(Reference: Bucket Policy Examples)
3. Test the configuration. Ensure that the S3 bucket is not publicly accessible and that requests to the bucket through the CloudFront distribution succeed. (Reference: Testing CloudFront OAI)
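The OAI bucket policy shown above can also be generated and sanity-checked programmatically before it is applied. The sketch below builds the same JSON document in Python; the bucket name and OAI ID are illustrative placeholders, not values from any real account:

```python
import json

def oai_bucket_policy(bucket: str, oai_id: str) -> str:
    """Build an S3 bucket policy that grants read access only to a
    CloudFront origin access identity (bucket and OAI ID are placeholders)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Principal is the OAI's well-known IAM ARN form
                "Principal": {
                    "AWS": "arn:aws:iam::cloudfront:user/"
                           f"CloudFront Origin Access Identity {oai_id}"
                },
                "Action": "s3:GetObject",
                # Grant object reads only, for every key in the bucket
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(oai_bucket_policy("example-bucket", "E3EXAMPLE"))
```

Generating the policy as a document makes it easy to diff against the currently attached policy before overwriting it.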

Question: 13
A SysOps administrator must create an IAM policy for a developer who needs access to specific AWS services. Based on the requirements, the SysOps administrator creates the following policy (the policy document appears as an image in the original and is not reproduced here):

Which actions does this policy allow? (Select TWO.)

A. Create an AWS Storage Gateway.
B. Create an IAM role for an AWS Lambda function.
C. Delete an Amazon Simple Queue Service (Amazon SQS) queue.
D. Describe AWS load balancers.
E. Invoke an AWS Lambda function.

Answer: DE

Explanation:
The IAM policy grants the following permissions:
Describe AWS load balancers: the policy allows actions with the elasticloadbalancing: prefix, which includes DescribeLoadBalancers and other Describe* actions for Elastic Load Balancing. (Reference: Elastic Load Balancing API Actions)
Invoke an AWS Lambda function: the policy allows actions with the lambda: prefix, which includes InvokeFunction as well as actions to list and describe Lambda functions. (Reference: AWS Lambda API Actions)
The actions for creating an AWS Storage Gateway, creating an IAM role, and deleting an Amazon SQS queue are not allowed: the policy grants only describe/list permissions for storagegateway, elasticloadbalancing, and lambda, and list permissions for SQS.

Question: 14
A company is trying to connect two applications. One application runs in an on-premises data center that has a hostname of host1.onprem.private. The other application runs on an Amazon EC2 instance that has a hostname of host1.awscloud.private. An AWS Site-to-Site VPN connection is in place between the on-premises network and AWS. The application that runs in the data center tries to connect to the application that runs on the EC2 instance, but DNS resolution fails. A SysOps administrator must implement DNS resolution between on-premises and AWS resources. Which solution allows the on-premises application to resolve the EC2 instance hostname?

A. Set up an Amazon Route 53 inbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the inbound resolver endpoint.
B. Set up an Amazon Route 53 inbound resolver endpoint. Associate the resolver with the VPC of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the inbound resolver endpoint.

C. Set up an Amazon Route 53 outbound resolver endpoint with a forwarding rule for the onprem.private hosted zone. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward onprem.private DNS queries to the outbound resolver endpoint.
D. Set up an Amazon Route 53 outbound resolver endpoint. Associate the resolver with the AWS Region of the EC2 instance. Configure the on-premises DNS resolver to forward awscloud.private DNS queries to the outbound resolver endpoint.

Answer: A

Explanation:
The goal is to resolve the hostname of the EC2 instance from the on-premises network, using the existing AWS Site-to-Site VPN connection for DNS queries.
Option A: An inbound resolver endpoint associated with the VPC allows DNS queries from the on-premises network to be forwarded into Route 53 for resolution of AWS resources.
Option B: Setting up an inbound resolver endpoint without specifying the forwarding rule does not address the specific need to resolve onprem.private DNS queries.

Option C: Outbound resolver endpoints forward DNS queries from AWS to on-premises, not the other way around.
Option D: As with Option C, an outbound endpoint does not meet the requirement of resolving queries that originate on premises.

Setting up an inbound resolver endpoint with a forwarding rule for onprem.private and associating it with the VPC ensures that DNS queries from on-premises can resolve AWS resources effectively.

Reference: Amazon Route 53 Resolver; Integrating AWS and On-Premises Networks with Route 53.

Question: 15
While setting up an AWS managed VPN connection, a SysOps administrator creates a customer gateway resource in AWS. The customer gateway device resides in a data center with a NAT gateway in front of it. What address should be used to create the customer gateway resource?

A. The private IP address of the customer gateway device
B. The MAC address of the NAT device in front of the customer gateway device
C. The public IP address of the customer gateway device
D. The public IP address of the NAT device in front of the customer gateway device

Answer: D

Explanation:
The customer gateway resource must be created with an IP address that AWS can reach over the internet.

Option A: A private IP address is not reachable by AWS over the internet.
Option B: MAC addresses are not used to identify gateways in AWS.
Option C: The device's own public IP address would be correct if the device were directly connected to the internet, but here it sits behind a NAT.
Option D: The NAT device's public IP address is reachable by AWS and routes traffic through to the customer gateway device, so this is the address to register.

Reference: AWS Site-to-Site VPN documentation, customer gateway devices behind NAT.
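The address-selection rule above can be sketched offline. This is an illustration only, not an AWS API call; the helper name and sample addresses are hypothetical, and Python's `ipaddress` module stands in for the "reachable from the internet" test:

```python
import ipaddress
from typing import Optional

def customer_gateway_ip(device_ip: str, nat_public_ip: Optional[str] = None) -> str:
    """Pick the address to register for the customer gateway resource.

    AWS must be able to reach the address over the internet, so a
    private (RFC 1918) device address is unusable; when the device sits
    behind NAT, the NAT device's public IP is the one to register.
    """
    if not ipaddress.ip_address(device_ip).is_private:
        return device_ip  # device is directly internet-reachable
    if nat_public_ip and not ipaddress.ip_address(nat_public_ip).is_private:
        return nat_public_ip  # device behind NAT: register the NAT's public IP
    raise ValueError("no internet-reachable address available")

customer_gateway_ip("10.0.0.5", nat_public_ip="54.210.8.7")  # -> "54.210.8.7"
```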

Question: 16
A large company is using AWS Organizations to manage its multi-account AWS environment. According to company policy, all users should have read-level access to a particular Amazon S3 bucket in a central account. The S3 bucket data should not be available outside the organization. A SysOps administrator must set up the permissions and add a bucket policy to the S3 bucket. Which parameters should be specified to accomplish this in the MOST efficient manner?
A. Specify "*" as the principal and aws:PrincipalOrgID as a condition.
B. Specify all account numbers as the principal.
C. Specify aws:PrincipalOrgID as the principal.
D. Specify the organization's management account as the principal.

Answer: A

Explanation:
The policy must grant read access to every user in the organization while blocking access from outside it.

Option A: Using "*" as the principal grants access to all AWS principals, and the aws:PrincipalOrgID condition restricts that grant to principals within the specified organization.

Option B: Listing every account number as the principal is impractical for a large organization and requires constant updates as accounts are added or removed.
Option C: aws:PrincipalOrgID is a condition key; it must be used in a policy condition, not as a principal.
Option D: Naming the management account as the principal grants access only to that account, not to all users in the organization.

Using "*" as the principal together with the aws:PrincipalOrgID condition efficiently grants read access to the bucket for all users in the organization while preventing external access.

Reference: Amazon S3 bucket policies; AWS Organizations policy examples.

Question: 17
A SysOps administrator is attempting to download patches from the internet into an instance in a private subnet. An internet gateway exists for the VPC, and a NAT gateway has been deployed on the public subnet; however, the instance has no internet connectivity. The resources deployed into the private subnet must be inaccessible directly from the public internet.

What should be added to the private subnet's route table in order to address this issue, given the information provided?
A. 0.0.0.0/0 IGW
B. 0.0.0.0/0 NAT
C. 10.0.1.0/24 IGW
D. 10.0.1.0/24 NAT

Answer: B

Explanation:
The instance in the private subnet needs outbound internet access through the existing NAT gateway while remaining unreachable from the public internet.

Option A: 0.0.0.0/0 via the internet gateway would route traffic directly to the internet gateway, exposing the instance to the public internet.
Option B: 0.0.0.0/0 via the NAT gateway routes internet-bound traffic through the NAT gateway, allowing outbound connections while keeping the instance protected from inbound internet traffic.
Option C: 10.0.1.0/24 via the internet gateway provides no route for internet access and incorrectly points local traffic at the internet gateway.
Option D: 10.0.1.0/24 via the NAT gateway likewise points local traffic at the NAT gateway unnecessarily and provides no internet access.

Configuring the private subnet's route table to send 0.0.0.0/0 to the NAT gateway gives instances in the subnet secure, efficient internet access.

Reference: Amazon VPC NAT gateways; private subnet route tables.

Question: 18
A SysOps administrator applies the following policy to an AWS CloudFormation stack:

What is the result of this policy?

A. Users that assume an IAM role with a logical ID that begins with "Production" are prevented from running the update-stack command.
B. Users can update all resources in the stack except for resources that have a logical ID that begins with "Production".
C. Users can update all resources in the stack except for resources that have an attribute that begins with "Production".
D. Users in an IAM group with a logical ID that begins with "Production" are prevented from running the update-stack command.

Answer: B

Explanation:
The policy contains two statements: the first explicitly denies the Update:* action on resources whose LogicalResourceId begins with "Production", and the second allows Update:* on all resources. In policy evaluation logic, an explicit deny always takes precedence over an allow, so users can update every resource in the stack except those whose logical ID begins with "Production".

Reference: IAM JSON policy elements (Effect); policy evaluation logic.

Question: 19
A company's IT department noticed an increase in the spend of their developer AWS account. There are over 50 developers using the account, and the finance team wants to determine the service costs incurred by each developer. What should a SysOps administrator do to collect this information? (Select TWO.)

A. Activate the createdBy tag in the account.
B. Analyze the usage with Amazon CloudWatch dashboards.
C. Analyze the usage with Cost Explorer.
D. Configure AWS Trusted Advisor to track resource usage.
E. Create a billing alarm in AWS Budgets.

Answer: A, C

Explanation:
Activate the createdBy tag: tagging resources with the createdBy AWS-generated cost allocation tag identifies which user created each resource, so costs can be attributed per developer. (Reference: Tagging Your Resources)
Analyze the usage with Cost Explorer: Cost Explorer can then filter and group cost and usage data by the createdBy tag, giving a per-developer cost breakdown. (Reference: Analyzing Your Costs with Cost Explorer)

Together these two steps provide a detailed analysis of the costs incurred by each developer in the AWS account.

Question: 20
A company website contains a web tier and a database tier on AWS. The web tier consists of Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones. The database tier runs on an Amazon RDS for MySQL Multi-AZ DB instance. The database subnet network ACLs are restricted to only the web subnets that need access to the database. The web subnets use the default network ACL with the default rules.
The company's operations team has added a third subnet to the Auto Scaling group configuration. After an Auto Scaling event occurs, some users report that they intermittently receive an error message. The error message states that the server cannot connect to the database. The operations team has confirmed that the route tables are correct and that the required ports are open on all security groups.
Which combination of actions should a SysOps administrator take so that the web servers can communicate with the DB instance? (Select TWO.)
A. On the default ACL, create inbound Allow rules of type TCP with the ephemeral port range and the source as the database subnets.
B. On the default ACL, create outbound Allow rules of type MySQL/Aurora (3306). Specify the destinations as the database subnets.
C. On the network ACLs for the database subnets, create an inbound Allow rule of type MySQL/Aurora (3306). Specify the source as the third web subnet.
D. On the network ACLs for the database subnets, create an outbound Allow rule of type TCP with the ephemeral port range and the destination as the third web subnet.
E. On the network ACLs for the database subnets, create an outbound Allow rule of type MySQL/Aurora (3306). Specify the destination as the third web subnet.

Answer: C, D

Explanation:
Network ACLs are stateless, so the new web subnet needs explicit rules in both directions on the database subnets' ACLs:
Inbound: an Allow rule of type MySQL/Aurora (3306) with the third web subnet as the source permits the database connections. (Reference: Network ACLs)
Outbound: an Allow rule of type TCP for the ephemeral port range (1024-65535) with the third web subnet as the destination permits the return traffic. (Reference: Ephemeral Ports)

These changes let the new subnet communicate with the database, resolving the connectivity issues.
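Because network ACLs are stateless, the fix in answers C and D needs rules in both directions. The sketch below is an offline illustration only: the rule set, the 10.0.3.0/24 CIDR assumed for the third web subnet, and the helper name are all hypothetical.

```python
import ipaddress

# Hypothetical database-subnet network ACL after applying answers C and D.
# NACLs are stateless: the request (inbound MySQL/Aurora 3306) and the
# response (outbound ephemeral ports 1024-65535) each need an Allow rule.
db_subnet_acl = {
    "inbound":  [{"ports": (3306, 3306),  "cidr": "10.0.3.0/24"}],
    "outbound": [{"ports": (1024, 65535), "cidr": "10.0.3.0/24"}],
}

def allows(rules, port, peer_ip):
    """Return True if any Allow rule matches the port and peer address."""
    addr = ipaddress.ip_address(peer_ip)
    return any(
        rule["ports"][0] <= port <= rule["ports"][1]
        and addr in ipaddress.ip_network(rule["cidr"])
        for rule in rules
    )

allows(db_subnet_acl["inbound"], 3306, "10.0.3.15")    # True: MySQL request
allows(db_subnet_acl["outbound"], 40000, "10.0.3.15")  # True: response traffic
allows(db_subnet_acl["inbound"], 3306, "10.0.9.8")     # False: other subnets stay blocked
```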

Question: 21
A company is running an application on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances are launched by an Auto Scaling group and are automatically registered in a target group. A SysOps administrator must set up a notification to alert application owners when targets fail health checks. What should the SysOps administrator do to meet these requirements?
A. Create an Amazon CloudWatch alarm on the UnHealthyHostCount metric. Configure an action to send an Amazon Simple Notification Service (Amazon SNS) notification when the metric is greater than 0.
B. Configure an Amazon EC2 Auto Scaling custom lifecycle action to send an Amazon Simple Notification Service (Amazon SNS) notification when an instance is in the Pending:Wait state.
C. Update the Auto Scaling group. Configure an activity notification to send an Amazon Simple Notification Service (Amazon SNS) notification for the Unhealthy event type.
D. Update the ALB health check to send an Amazon Simple Notification Service (Amazon SNS) notification when an instance is unhealthy.

Answer: A

Explanation:
Create a CloudWatch alarm on the target group's UnHealthyHostCount metric, and configure the alarm action to publish an Amazon SNS notification when the metric is greater than 0.

This alerts application owners whenever a target fails health checks.

Reference: Creating CloudWatch alarms; using Amazon SNS for CloudWatch alarms.

Question: 22
A company wants to build a solution for its business-critical Amazon RDS for MySQL database. The database requires high availability across different geographic locations. A SysOps administrator must build a solution to handle a disaster recovery (DR) scenario with the lowest recovery time objective (RTO) and recovery point objective (RPO). Which solution meets these requirements?
A. Create automated snapshots of the database on a schedule. Copy the snapshots to the DR Region.
B. Create a cross-Region read replica for the database.
C. Create a Multi-AZ read replica for the database.
D. Schedule AWS Lambda functions to create snapshots of the source database and to copy the snapshots to a DR Region.

Answer: B

Explanation:
A cross-Region read replica continuously replicates the primary database to another AWS Region, so it provides the lowest RTO and RPO in a disaster: the replica already holds near-current data and can take over quickly. Snapshot-based approaches lose any data written since the last snapshot and take longer to restore.

Reference: Creating a read replica in a different AWS Region.

Question: 23

A SysOps administrator is using Amazon EC2 instances to host an application. The SysOps administrator needs to grant permissions for the application to access an Amazon DynamoDB table. Which solution will meet this requirement?
A. Create access keys to access the DynamoDB table. Assign the access keys to the EC2 instance profile.
B. Create an EC2 key pair to access the DynamoDB table. Assign the key pair to the EC2 instance profile.
C. Create an IAM user to access the DynamoDB table. Assign the IAM user to the EC2 instance profile.
D. Create an IAM role to access the DynamoDB table. Assign the IAM role to the EC2 instance profile.

Answer: D

Explanation:
Access to Amazon DynamoDB requires credentials that have permissions to access AWS resources. Assigning an IAM role to the EC2 instance profile gives the application temporary, automatically managed credentials, which is how AWS Identity and Access Management (IAM) and DynamoDB are intended to be used together to secure access.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/authentication-and-access-control.html

Question: 24
A company has a web application with a database tier that consists of an Amazon EC2 instance that runs MySQL. A SysOps administrator needs to minimize potential data loss and the time that is required to recover in the event of a database failure. What is the MOST operationally efficient solution that meets these requirements?
A. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric to invoke an AWS Lambda function that stops and starts the EC2 instance.

B. Create an Amazon RDS for MySQL Multi-AZ DB instance. Use a MySQL native backup that is stored in Amazon S3 to restore the data to the new database. Update the connection string in the web application.
C. Create an Amazon RDS for MySQL Single-AZ DB instance with a read replica. Use a MySQL native backup that is stored in Amazon S3 to restore the data to the new database. Update the connection string in the web application.
D. Use Amazon Data Lifecycle Manager (Amazon DLM) to take a snapshot of the Amazon Elastic Block Store (Amazon EBS) volume every hour. In the event of an EC2 instance failure, restore the EBS volume from a snapshot.

Answer: B

Explanation:
Option A: Stopping and starting the instance on a failed system status check addresses host failures but does nothing to minimize data loss or provide database redundancy.
Option B: An RDS Multi-AZ deployment replicates data to a standby instance in a different Availability Zone and fails over automatically, while the MySQL native backup stored in Amazon S3 migrates the existing data efficiently. This minimizes both data loss and recovery time.

Option C: A Single-AZ instance with a read replica provides read scalability but no automatic failover or high availability.
Option D: Hourly EBS snapshots help with backups but permit up to an hour of data loss and require a slow manual restore, so they do not minimize downtime effectively.

Using Amazon RDS for MySQL Multi-AZ provides a highly available, durable solution with automated backups, ensuring minimal data loss and quick recovery.

Reference: Amazon RDS Multi-AZ deployments; backing up and restoring an Amazon RDS DB instance.

Question: 25
A company migrated an I/O intensive application to an Amazon EC2 general purpose instance. The EC2 instance has a single General Purpose SSD Amazon Elastic Block Store (Amazon EBS) volume attached. Application users report that certain actions that require intensive reading and writing to the disk are taking much longer than normal or are failing completely.
After reviewing the performance metrics of the EBS volume, a SysOps administrator notices that the VolumeQueueLength metric is consistently high during the same times in which the users are reporting issues. The SysOps administrator needs to resolve this problem to restore full performance to the application.
Which action will meet these requirements?
A. Modify the instance type to be storage optimized.
B. Modify the volume properties by deselecting Auto-Enable Volume I/O.
C. Modify the volume properties to increase the IOPS.
D. Modify the instance to enable enhanced networking.

Answer: C

Explanation:
A consistently high VolumeQueueLength indicates that I/O requests are queuing because the volume cannot service them fast enough.

Option A: Storage optimized instances suit workloads that need high, sequential read/write access to large data sets on local instance storage; when the bottleneck is the EBS volume itself, increasing its IOPS is the more direct fix.
Option B: Auto-Enable Volume I/O automatically re-enables I/O after an event such as a snapshot restore; deselecting it does not address high I/O demand.
Option C: Increasing the IOPS (input/output operations per second) lets the volume handle more I/O operations concurrently, directly reducing the queue length and restoring performance for intensive read/write operations.
Option D: Enhanced networking improves network bandwidth, packets per second, and inter-instance latency, not EBS volume I/O performance.

Reference: Amazon EBS volume types; Amazon EBS volume performance; optimizing EBS performance.

Question: 26
A SysOps administrator is trying to set up an Amazon Route 53 domain name to route traffic to a website hosted on Amazon S3. The domain name of the website is www.anycompany.com and the S3 bucket name is anycompany-static. After the record set is set up in Route 53, the domain name www.anycompany.com does not seem to work, and the static website is not displayed in the browser. Which of the following is a cause of this?
A. The S3 bucket must be configured with Amazon CloudFront first.
B. The Route 53 record set must have an IAM role that allows access to the S3 bucket.
C. The Route 53 record set must be in the same region as the S3 bucket.
D. The S3 bucket name must match the record set name in Route 53.

Answer: D

Explanation:
When Route 53 routes a domain name to an S3 static website endpoint, Amazon S3 locates the bucket by the host name of the request, so the bucket must be named exactly after the record set (here, www.anycompany.com). Because the bucket is named anycompany-static, requests for www.anycompany.com cannot be served. Route 53 is a global service, so option C's "same region" condition does not apply; CloudFront is not required for S3 static website hosting; and Route 53 record sets do not use IAM roles.

Reference: Configuring a static website using a custom domain registered with Route 53.
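The naming constraint behind answer D can be expressed as a one-line check. The helper below is hypothetical and only illustrates why the mismatched bucket name breaks the site:

```python
def s3_website_alias_valid(record_name: str, bucket_name: str) -> bool:
    """An S3 static-website endpoint serves a request by looking up a
    bucket named after the requested host, so a Route 53 record for the
    site only works when the bucket name matches the record name."""
    return record_name.rstrip(".").lower() == bucket_name.lower()

s3_website_alias_valid("www.anycompany.com", "anycompany-static")   # False
s3_website_alias_valid("www.anycompany.com", "www.anycompany.com")  # True
```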

Question: 27
An Amazon EC2 instance needs to be reachable from the internet. The EC2 instance is in a subnet with the following route table:

Which entry must a SysOps administrator add to the route table to meet this requirement?
A. A route for 0.0.0.0/0 that points to a NAT gateway
B. A route for 0.0.0.0/0 that points to an egress-only internet gateway
C. A route for 0.0.0.0/0 that points to an internet gateway
D. A route for 0.0.0.0/0 that points to an elastic network interface

Answer: C

Explanation:
Option A: A NAT gateway lets instances in private subnets reach the internet, not the other way around.
Option B: An egress-only internet gateway handles outbound IPv6 traffic only and does not provide inbound internet access.
Option C: An internet gateway provides both inbound and outbound internet access, so a 0.0.0.0/0 route to it makes the EC2 instance reachable from the internet.
Option D: An elastic network interface does not provide internet routing.

Reference: Amazon VPC internet gateways.

Question: 28
A SysOps administrator has enabled AWS CloudTrail in an AWS account. If CloudTrail is disabled, it must be re-enabled immediately. What should the SysOps administrator do to meet these requirements WITHOUT writing custom code?
A. Add the AWS account to AWS Organizations. Enable CloudTrail in the management account.
B. Create an AWS Config rule that is invoked when CloudTrail configuration changes. Apply the AWS-ConfigureCloudTrailLogging automatic remediation action.
C. Create an AWS Config rule that is invoked when CloudTrail configuration changes. Configure the rule to invoke an AWS Lambda function to enable CloudTrail.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) hourly rule with a schedule pattern to run an AWS Systems Manager Automation document to enable CloudTrail.

Answer: B

Explanation:

Option A: Adding the account to AWS Organizations centralizes management but does not automatically re-enable CloudTrail when it is disabled.
Option B: An AWS Config rule that is invoked on CloudTrail configuration changes, combined with the built-in automatic remediation action, re-enables CloudTrail immediately without any custom code.
Option C: Invoking a Lambda function requires writing custom code, which is not preferred.
Option D: An hourly Systems Manager Automation run could re-enable CloudTrail, but it is more complex and slower to react than AWS Config's built-in remediation.

Reference: AWS Config rules; automatic remediation with AWS Config.

Question: 29
A company has a stateless application that runs on four Amazon EC2 instances. The application requires four instances at all times to support all traffic. A SysOps administrator must design a highly available, fault-tolerant architecture that continually supports all traffic if one Availability Zone becomes unavailable. Which configuration meets these requirements?
A. Deploy two Auto Scaling groups in two Availability Zones with a minimum capacity of two instances in each group.
B. Deploy an Auto Scaling group across two Availability Zones with a minimum capacity of four instances.
C. Deploy an Auto Scaling group across three Availability Zones with a minimum capacity of four instances.
D. Deploy an Auto Scaling group across three Availability Zones with a minimum capacity of six instances.

Answer: D

Explanation:
With six instances spread evenly across three Availability Zones (two per zone), the loss of any one zone still leaves four running instances, so all traffic continues to be supported while the Auto Scaling group replaces the lost capacity. With only four instances (options A, B, and C), losing a zone drops the fleet below the required four until replacements launch.

Question: 30
A company's backend infrastructure contains an Amazon EC2 instance in a private subnet. The private subnet has a route to the internet through a NAT gateway in a public subnet. The instance must allow connectivity to a secure web server on the internet to retrieve data at regular intervals. The client software times out with an error message that indicates that the client software could not establish the TCP connection.
What should a SysOps administrator do to resolve this error?
A. Add an inbound rule to the security group for the EC2 instance with the following parameters: Type - HTTP, Source - 0.0.0.0/0.
B. Add an inbound rule to the security group for the EC2 instance with the following parameters: Type - HTTPS, Source - 0.0.0.0/0.

C. Add an outbound rule to the security group for the EC2 instance with the following parameters: Type - HTTP, Destination - 0.0.0.0/0.
D. Add an outbound rule to the security group for the EC2 instance with the following parameters: Type - HTTPS, Destination - 0.0.0.0/0.

Answer: D

Explanation:
The instance initiates the connection to a secure (HTTPS) web server, so its security group needs an outbound rule of type HTTPS with destination 0.0.0.0/0. Security groups are stateful, so the response traffic is allowed automatically, and the inbound rules in options A and B are not required for an outbound connection. The NAT gateway in the public subnet must also remain correctly configured to carry the private subnet's internet traffic.

Reference: Security group rules; NAT gateways.

Question: 31
A software development company has multiple developers who work on the same product. Each developer must have their own development environment, and these development environments must be identical. Each development environment consists of Amazon EC2 instances and an Amazon RDS DB instance. The development environments should be created only when necessary, and they must be terminated each night to minimize costs.
What is the MOST operationally efficient solution that meets these requirements?

A. Provide developers with access to the same AWS CloudFormation template so that they can provision their development environment when necessary. Schedule a nightly cron job on each development instance to stop all running processes to reduce CPU utilization to nearly zero.
B. Provide developers with access to the same AWS CloudFormation template so that they can provision their development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function to delete the AWS CloudFormation stacks.
C. Provide developers with CLI commands so that they can provision their own development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function to terminate all EC2 instances and the DB instance.
D. Provide developers with CLI commands so that they can provision their own development environment when necessary. Schedule a nightly Amazon EventBridge (Amazon CloudWatch Events) rule to cause AWS CloudFormation to delete all of the development environment resources.

Answer: B

Explanation:
A shared AWS CloudFormation template guarantees that every developer provisions an identical environment on demand, and a nightly Amazon EventBridge (CloudWatch Events) rule that invokes an AWS Lambda function to delete the CloudFormation stacks terminates all environment resources each night. This combination is the most operationally efficient for both consistency and cost control.

Reference: AWS CloudFormation User Guide; Amazon EventBridge.
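The core of the nightly cleanup in answer B can be sketched offline. The stack-naming prefix and helper are assumptions; the real Lambda function would list stacks and delete each one through an AWS SDK, which is omitted here so only the selection logic is shown:

```python
DEV_STACK_PREFIX = "dev-env-"  # assumed naming convention for developer stacks

def stacks_to_delete(stack_names):
    """Select the CloudFormation stacks the nightly cleanup should delete."""
    return [name for name in stack_names if name.startswith(DEV_STACK_PREFIX)]

stacks_to_delete(["dev-env-alice", "dev-env-bob", "prod-api"])
# -> ["dev-env-alice", "dev-env-bob"]
```

An EventBridge schedule expression such as cron(0 0 * * ? *) would invoke the function nightly at 00:00 UTC.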

Question: 32
A company runs a stateless application that is hosted on an Amazon EC2 instance. Users are reporting performance issues. A SysOps administrator reviews the Amazon CloudWatch metrics for the application and notices that the instance's CPU utilization frequently reaches 90% during business hours.
What is the MOST operationally efficient solution that will improve the application's responsiveness?
A. Configure CloudWatch logging on the EC2 instance. Configure a CloudWatch alarm for CPU utilization to alert the SysOps administrator when CPU utilization goes above 90%.
B. Configure an AWS Client VPN connection to allow the application users to connect directly to the EC2 instance private IP address to reduce latency.
C. Create an Auto Scaling group, and assign it to an Application Load Balancer. Configure a target tracking scaling policy that is based on the average CPU utilization of the Auto Scaling group.
D. Create a CloudWatch alarm that activates when the EC2 instance's CPU utilization goes above 80%. Configure the alarm to invoke an AWS Lambda function that vertically scales the instance.

Answer: C

Explanation:
To improve application responsiveness and handle high CPU utilization:
Create Auto Scaling Group: Create an Auto Scaling group (ASG) for the EC2 instances running the application.
Reference: Auto Scaling Groups
Assign to Application Load Balancer: Use an Application Load Balancer (ALB) to distribute traffic across the instances in the ASG.
Reference: Application Load Balancers
Configure Target Tracking Scaling Policy: Set up a target tracking scaling policy based on average CPU utilization, for example, keeping CPU utilization around 50-60%.
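The target tracking policy just described can be sketched in CloudFormation. A minimal illustration only: the `WebServerGroup` Auto Scaling group is assumed to be defined elsewhere, and the 50% target is an example value.

```yaml
# Hypothetical sketch: target tracking policy that keeps average CPU
# of the group near 50%. Assumes "WebServerGroup" exists in the template.
CPUTargetTrackingPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref WebServerGroup
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ASGAverageCPUUtilization
      TargetValue: 50.0
```

With target tracking, the ASG adds or removes instances automatically to hold the metric at the target, so no separate scale-out and scale-in alarms need to be managed.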

Reference: Scaling Policies for Auto Scaling
This configuration ensures that the application scales out to handle increased load and improves performance during peak times.

Question: 33
A company is testing Amazon Elasticsearch Service (Amazon ES) as a solution for analyzing system logs from a fleet of Amazon EC2 instances. During the test phase, the domain operates on a single-node cluster. A SysOps administrator needs to transition the test domain into a highly available production-grade deployment.
Which Amazon ES configuration should the SysOps administrator use to meet this requirement?
A. Use a cluster of four data nodes across two AWS Regions. Deploy four dedicated master nodes in each Region.
B. Use a cluster of six data nodes across three Availability Zones. Use three dedicated master nodes.
C. Use a cluster of six data nodes across three Availability Zones. Use six dedicated master nodes.
D. Use a cluster of eight data nodes across two Availability Zones. Deploy four master nodes in a failover AWS Region.

Answer: B

Explanation:
To transition the Amazon Elasticsearch Service (Amazon ES) domain to a highly available, production-grade deployment:
Cluster Configuration: Use a cluster of six data nodes to handle data ingestion and querying. Distribute these nodes across three Availability Zones (AZs) for high availability and fault tolerance.
Reference: Amazon Elasticsearch Service Best Practices
Dedicated Master Nodes:

Use three dedicated master nodes to manage the cluster state and perform cluster management tasks. This separation of master and data nodes helps in maintaining cluster stability and performance.
Reference: Dedicated Master Nodes
This configuration ensures that the Elasticsearch cluster is highly available and can handle production workloads effectively.

Question: 34
A company recently acquired another corporation and all of that corporation's AWS accounts. A financial analyst needs the cost data from these accounts. A SysOps administrator uses Cost Explorer to generate cost and usage reports. The SysOps administrator notices that "No Tagkey" represents 20% of the monthly cost.
What should the SysOps administrator do to tag the "No Tagkey" resources?
A. Add the accounts to AWS Organizations. Use a service control policy (SCP) to tag all the untagged resources.
B. Use an AWS Config rule to find the untagged resources. Set the remediation action to terminate the resources.
C. Use Cost Explorer to find and tag all the untagged resources.
D. Use Tag Editor to find and tag all the untagged resources.

Answer: D

Explanation:
"You can add tags to resources when you create the resource. You can use the resource's service console or API to add, change, or remove those tags one resource at a time. To add tags to—or edit or delete tags of—multiple resources at once, use Tag Editor. With Tag Editor, you search for the resources that you want to tag, and then manage tags for the resources in your search results."
https://docs.aws.amazon.com/ARG/latest/userguide/tag-editor.html

Question: 35
A company is using Amazon Elastic File System (Amazon EFS) to share a file system among several Amazon EC2 instances. As usage increases, users report that file retrieval from the EFS file system is slower than normal.
Which action should a SysOps administrator take to improve the performance of the file system?
A. Configure the file system for Provisioned Throughput.
B. Enable encryption in transit on the file system.
C. Identify any unused files in the file system, and remove the unused files.
D. Resize the Amazon Elastic Block Store (Amazon EBS) volume of each of the EC2 instances.

Answer: A

Explanation:
Understand the Problem: Users report that file retrieval from the Amazon EFS file system is slower than normal.
Analyze the Requirements: Improve the performance of the EFS file system to handle increased usage and file retrieval speed.
Evaluate the Options:
Option A: Configure the file system for Provisioned Throughput. Provisioned Throughput mode allows you to specify the throughput of your file system independent of the amount of data stored. This option is suitable for applications with high throughput-to-storage ratio requirements and ensures consistent performance.
Option B: Enable encryption in transit on the file system. While this enhances security, it does not directly improve performance.

Option C: Identify any unused files in the file system, and remove the unused files. This may free up some resources but does not address the root cause of slow performance.
Option D: Resize the Amazon EBS volume of each of the EC2 instances. EFS performance issues are not directly related to the size of EBS volumes attached to EC2 instances.
Select the Best Solution:
Option A: Configuring the file system for Provisioned Throughput ensures consistent performance by allowing you to set the required throughput regardless of the amount of data stored.
Reference: Amazon EFS Performance; Provisioned Throughput Mode
Configuring Provisioned Throughput for Amazon EFS provides a consistent and higher performance level, which is suitable for high throughput requirements.

Question: 36
A SysOps administrator is helping a development team deploy an application to AWS. The AWS CloudFormation template includes an Amazon Linux EC2 instance, an Amazon Aurora DB cluster, and a hard-coded database password that must be rotated every 90 days.
What is the MOST secure way to manage the database password?
A. Use the AWS::SecretsManager::Secret resource with the GenerateSecretString property to automatically generate a password. Use the AWS::SecretsManager::RotationSchedule resource to define a rotation schedule for the password. Configure the application to retrieve the secret from AWS Secrets Manager to access the database.
B. Use the AWS::SecretsManager::Secret resource with the SecretString property. Accept a password as a CloudFormation parameter. Use the AllowedPattern property of the CloudFormation parameter to require a minimum length, uppercase and lowercase letters, and special characters. Configure the application to retrieve the secret from AWS Secrets Manager to access the database.
C. Use the AWS::SSM::Parameter resource. Accept input as a CloudFormation parameter to store the parameter as a secure string. Configure the application to retrieve the parameter from AWS Systems Manager Parameter Store to access the database.
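The Secrets Manager resources named in option A can be sketched as a CloudFormation fragment. A hedged illustration only: the `MyRotationLambda` rotation function and all property values are hypothetical examples, not part of the exam content.

```yaml
# Hypothetical sketch of option A: a generated secret plus a 90-day
# rotation schedule. Assumes "MyRotationLambda" is defined elsewhere.
DBSecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    GenerateSecretString:
      SecretStringTemplate: '{"username": "admin"}'
      GenerateStringKey: password
      PasswordLength: 32
      ExcludeCharacters: '"@/\'
DBSecretRotation:
  Type: AWS::SecretsManager::RotationSchedule
  Properties:
    SecretId: !Ref DBSecret
    RotationLambdaARN: !GetAtt MyRotationLambda.Arn
    RotationRules:
      AutomaticallyAfterDays: 90
```

The application would then fetch the current password from Secrets Manager at connection time instead of reading a hard-coded value from the template.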
