Every time you read about yet another unprotected S3 bucket that has leaked data, you probably wonder how it is possible to let this happen.
As your AWS footprint grows, it becomes more difficult to keep track of all your AWS resources. You might have multiple AWS accounts, making it even harder to keep everything secure. And when departments inside the organization then start creating their own AWS accounts and resources, it gets really difficult to keep track of what has been created.
AWS has multiple services to help solve this problem. One of them is AWS Config. AWS Config can track resource changes against a set of rules and report its findings. If a rule says that all EBS volumes need to be encrypted with a specific KMS key, but there’s an EBS volume encrypted with another key, that resource will be reported as “out of compliance”. A notification can be sent, and a cloud security team can keep track of out-of-compliance items and the overall security posture.
In this post, we’ll show you the steps to create a custom AWS Config rule to protect S3 buckets from public access.
Keeping S3 buckets secure
To keep S3 buckets secure, we can test whether the “block public access” settings have been enabled on a bucket. AWS Config custom rules can invoke a lambda to run the test. Once set up, whenever a user creates or changes an S3 bucket, AWS Config invokes the custom rule and its lambda, and a notification can be sent if the bucket is out of compliance. Below you can see the architecture diagram.
Custom lambda
For this setup we need to write a custom lambda. At IN4IT we do our infrastructure development in golang, so we wrote the lambda in golang. The full source code can be found at https://github.com/in4it/aws-config-compliance-checks/. In the next paragraphs you can read how the lambda was built.
First, to get our function started, we need a handler to handle our request:
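The sketch below shows a minimal version of such a handler, wired up with the aws-lambda-go package; the full implementation lives in the repository linked above, and the function names here are illustrative.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler is invoked by AWS Config with a ConfigEvent describing the
// rule evaluation request.
func handler(ctx context.Context, configEvent events.ConfigEvent) error {
	log.Printf("evaluating rule %s", configEvent.ConfigRuleName)
	// compliance evaluation and the PutEvaluations call go here (see below)
	return nil
}

func main() {
	lambda.Start(handler)
}
```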
This gives us access to the configEvent, which AWS declares in the aws-lambda-go events package (GitHub link):
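Paraphrased from the aws-lambda-go events package (see the linked source for the authoritative definition), the struct looks like this:

```go
// ConfigEvent is the payload AWS Config passes to the lambda
// (from github.com/aws/aws-lambda-go/events).
type ConfigEvent struct {
	AccountID        string `json:"accountId"`
	ConfigRuleArn    string `json:"configRuleArn"`
	ConfigRuleID     string `json:"configRuleId"`
	ConfigRuleName   string `json:"configRuleName"`
	EventLeftScope   bool   `json:"eventLeftScope"`   // true if the resource left the rule's scope
	ExecutionRoleArn string `json:"executionRoleArn"`
	InvokingEvent    string `json:"invokingEvent"`    // JSON string with the configuration change
	ResultToken      string `json:"resultToken"`      // token to send back in PutEvaluations
	RuleParameters   string `json:"ruleParameters"`
	Version          string `json:"version"`
}
```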
The configEvent contains an InvokingEvent property, which holds a JSON document describing the configuration change of that specific resource, an S3 bucket in our case (example config event).
Using the InvokingEvent JSON, we can check whether the resource is really of type “AWS::S3::Bucket”, and if so, extract the PublicAccessBlockConfiguration from that JSON. If all the settings in the PublicAccessBlockConfiguration are set to true, our S3 resource is in compliance with the rule, and we can return “COMPLIANT”. We do this in a separate evaluateCompliance function in our lambda.
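A simplified sketch of that check is shown below. The struct only models the fields this rule needs, and the exact field names and nesting are an assumption based on the example config event linked above; the real payload contains much more.

```go
// (in the same file as the handler; requires "encoding/json")

// invokingEvent models only the parts of the AWS Config invoking event
// that this rule needs.
type invokingEvent struct {
	ConfigurationItem struct {
		ResourceType               string `json:"resourceType"`
		ResourceID                 string `json:"resourceId"`
		SupplementaryConfiguration struct {
			PublicAccessBlockConfiguration struct {
				BlockPublicAcls       bool `json:"blockPublicAcls"`
				IgnorePublicAcls      bool `json:"ignorePublicAcls"`
				BlockPublicPolicy     bool `json:"blockPublicPolicy"`
				RestrictPublicBuckets bool `json:"restrictPublicBuckets"`
			} `json:"PublicAccessBlockConfiguration"`
		} `json:"supplementaryConfiguration"`
	} `json:"configurationItem"`
}

// evaluateCompliance returns COMPLIANT when the bucket blocks all public access.
func evaluateCompliance(rawInvokingEvent string) (string, error) {
	var event invokingEvent
	if err := json.Unmarshal([]byte(rawInvokingEvent), &event); err != nil {
		return "", err
	}
	item := event.ConfigurationItem
	if item.ResourceType != "AWS::S3::Bucket" {
		return "NOT_APPLICABLE", nil
	}
	cfg := item.SupplementaryConfiguration.PublicAccessBlockConfiguration
	if cfg.BlockPublicAcls && cfg.IgnorePublicAcls && cfg.BlockPublicPolicy && cfg.RestrictPublicBuckets {
		return "COMPLIANT", nil
	}
	return "NON_COMPLIANT", nil
}
```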
As a last step we need to make a PutEvaluations API call to the AWS Config service to send our evaluations (our findings). An evaluation looks like this:
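A sketch using the aws-sdk-go configservice types; bucketName and compliance are hypothetical variables holding the resource ID and the result of evaluateCompliance.

```go
// (uses "time" and the aws-sdk-go packages aws and configservice)
evaluation := &configservice.Evaluation{
	ComplianceResourceId:   aws.String(bucketName),        // the S3 bucket name
	ComplianceResourceType: aws.String("AWS::S3::Bucket"),
	ComplianceType:         aws.String(compliance),        // "COMPLIANT" or "NON_COMPLIANT"
	OrderingTimestamp:      aws.Time(time.Now()),          // typically the config item capture time
}
```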
It contains the ComplianceResourceId (the S3 bucket name), the resource type (AWS::S3::Bucket), the ComplianceType (COMPLIANT or NON_COMPLIANT), and a timestamp for ordering. This is sent to AWS Config with the PutEvaluations API call and will then appear in the AWS Console as a compliant or non-compliant S3 resource.
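Sending the evaluation could look like this, again as a sketch with error handling trimmed:

```go
// (uses the aws-sdk-go packages session and configservice)
svc := configservice.New(session.Must(session.NewSession()))
_, err := svc.PutEvaluations(&configservice.PutEvaluationsInput{
	Evaluations: []*configservice.Evaluation{evaluation},
	ResultToken: aws.String(configEvent.ResultToken), // token from the incoming ConfigEvent
})
if err != nil {
	return err
}
```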
From this point, we only have to configure CloudWatch Events to listen for compliance change events and send a notification to our SNS topic when one occurs. When using AWS Chatbot, we get nicely formatted messages in our Slack channel notifying us of non-compliant resources, so we can take action.
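As an illustration, a CloudWatch Events (EventBridge) rule pattern that matches non-compliant results from AWS Config and can target the SNS topic could look like this; verify the pattern against the events your rule actually emits.

```json
{
  "source": ["aws.config"],
  "detail-type": ["Config Rules Compliance Change"],
  "detail": {
    "newEvaluationResult": {
      "complianceType": ["NON_COMPLIANT"]
    }
  }
}
```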