Questions tagged with AWS Config
Hello,
I am trying to connect to a Linux-based EC2 instance using PuTTY, but I can't connect to the instance.
I also created a Linux-based application server using User Data and am trying to access it over the browser, but I am not able to reach it.
Please help me resolve this problem.
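For context, my User Data is essentially a script like the following (this assumes an Amazon Linux 2 AMI; the page content is just a placeholder):

```shell
#!/bin/bash
# Runs once, as root, at first boot. Assumes Amazon Linux 2;
# on Ubuntu the package manager and package names differ (apt / apache2).
yum update -y
yum install -y httpd
systemctl enable --now httpd
echo "Hello from my application server" > /var/www/html/index.html
```

For the browser to reach the page, the instance's security group must allow inbound TCP 80, and for PuTTY it must allow inbound TCP 22 from my IP.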
Hello.
I would like to estimate prices for AWS Security Hub, but I have some questions about it:
1. What does "Number of Security Checks per Account" mean, and how can I calculate that number?
2. What does "Number of Findings Ingested per Account" mean, and how can I calculate that number?
3. Regarding the AWS Config usage driven by Security Hub, how can I calculate the number of configuration items recorded and the number of Config rule evaluations?
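For reference, here is how I currently assume these two dimensions combine into a monthly bill. The per-unit rates below are placeholders I made up, not real AWS prices, and the 10,000-findings free tier is my reading of the pricing page:

```python
def estimate_security_hub_monthly_cost(
    security_checks,         # rule evaluations driven by enabled standards
    findings_ingested,       # finding ingestion events from all sources
    price_per_check,         # PLACEHOLDER rate -- see the AWS pricing page
    price_per_10k_findings,  # PLACEHOLDER rate -- see the AWS pricing page
    free_findings=10_000,    # assumed free ingestion events per account/region/month
):
    check_cost = security_checks * price_per_check
    billable = max(0, findings_ingested - free_findings)
    finding_cost = billable / 10_000 * price_per_10k_findings
    return check_cost + finding_cost

# e.g. 1,000 checks and 15,000 findings with made-up rates:
print(estimate_security_hub_monthly_cost(1_000, 15_000, 0.001, 0.03))
```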
Thank you.
I need to use Debian/Ubuntu as my Elastic Beanstalk AMI. How can I do that, and what are the prerequisites?
I've got a lambda that is giving me the warning from the subject line in my logs. Specifically, I'm getting this message over and over:
```
[WARNING] 2022-09-26T20:27:57.948Z 7994926c-f98a-4501-8aca-76c9d5b8aa34 Connection pool is full, discarding connection: canvas.instructure.com. Connection pool size: 10
[WARNING] 2022-09-26T20:27:57.958Z 7994926c-f98a-4501-8aca-76c9d5b8aa34 Connection pool is full, discarding connection: canvas.instructure.com. Connection pool size: 10
.....
```
I'm trying to collect grade data for students in a class. Say there are 24 students in a class. I'll get this warning 14 times, since the pool size is capped at 10.
It seems like it should be simple enough to increase the pool size, but I've tried consulting [this](https://github.com/boto/botocore/issues/619) issue to no avail; i.e., I've set:
```
import botocore.config

client_config = botocore.config.Config(
    max_pool_connections=50,
)
```
and passed that in for all my clients, but it hasn't fixed anything.
What can be done if setting this config for all my clients doesn't stop the warnings? Could the fact that my concurrent calls invoke a function residing in a separate Lambda layer be to blame?
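For what it's worth, the 24-vs-14 arithmetic matches my understanding of urllib3's pool behavior: every concurrent request gets a connection on demand, but only pool-size connections fit back into the pool afterwards, and each extra one is discarded with that warning. A toy stand-in (not urllib3 itself) that reproduces the count:

```python
import queue

def discarded_connections(pool_size, concurrent_requests):
    """Toy model of urllib3's connection pool: a connection is created
    for each in-flight request, but only `pool_size` of them can be
    returned to the pool; the rest trigger the
    'Connection pool is full, discarding connection' warning."""
    pool = queue.LifoQueue(maxsize=pool_size)
    warnings = 0
    for _ in range(concurrent_requests):  # all requests finish and return
        try:
            pool.put_nowait(object())     # connection goes back to the pool
        except queue.Full:
            warnings += 1                 # pool full -> connection discarded
    return warnings

print(discarded_connections(10, 24))  # 24 students, pool of 10 -> 14 warnings
```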
I have created a fully private cluster and it is working fine (meaning kubectl, eksctl, and aws commands are working), but there is a problem with the cluster. Whenever I create Amazon Linux 2 node instances, they successfully join the cluster, but when I try to create Ubuntu instances I get the following error message.
```
Instance failed to join the kubernetes cluster,(Service:null, Status Code: 0, Request ID:null)(RequestToken:c912435454-d3d1-2352-542321-4523543243, HandlerErrorCode:GeneralServiceException)
```
The issue is that our accounts are in a Control Tower environment, and in Control Tower there is no option to add Config rules other than the predefined ones; among those predefined rules there is none for security groups. How can we enable more Config rules at the organization level, e.g. a *security group verification rule*?
I have the option to enable this per account but not at the aggregator level; however, there are hundreds of accounts, and enabling it one by one for each account is not feasible.
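For example, outside the Control Tower console I would expect to be able to deploy a managed rule organization-wide with a call like the one below, run from the management (or Config delegated-administrator) account. The rule name here is just an illustration; `INCOMING_SSH_DISABLED` is the identifier behind the managed `restricted-ssh` security-group check:

```shell
# Sketch only: deploy an AWS-managed Config rule to every account in the
# organization, outside Control Tower's predefined controls.
# Must be run from the management or delegated-administrator account.
aws configservice put-organization-config-rule \
  --organization-config-rule-name org-restricted-ssh \
  --organization-managed-rule-metadata '{"RuleIdentifier": "INCOMING_SSH_DISABLED"}'
```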
1. Can you deploy to Service Catalog from a GitLab pipeline using Terraform?
2. Can you create a GitLab pipeline that invokes Service Catalog to instantiate something?
3. Can you provide a set of catalog items with a predefined AWS configuration?
4. Can you monitor for drift and remediate against the baseline configuration using Terraform, CloudFormation, or AWS Config?
Hello folks,
I am having a hard time understanding how AWS Guard rules that fail and pass are evaluated when used with Config. I wanted to replicate an existing rule that detects public S3 buckets: https://github.com/aws-cloudformation/cloudformation-guard/blob/901d40a6f01553d14adf9ab398c7eec55c2b5a36/guard/resources/rules-dir/s3_bucket_public_read_prohibited.guard
I realized that this rule applies to a CloudFormation template. I wanted to apply it to a Config recorded object, so I adapted the rule to:
```
rule isPublicAccessBlockConfigurationBlockSecure when isPublicAccessBlockConfigurationBlockPresent {
    supplementaryConfiguration.PublicAccessBlockConfiguration exists
    supplementaryConfiguration.PublicAccessBlockConfiguration.blockPublicAcls == true
    supplementaryConfiguration.PublicAccessBlockConfiguration.blockPublicPolicy == true
    supplementaryConfiguration.PublicAccessBlockConfiguration.ignorePublicAcls == true
    supplementaryConfiguration.PublicAccessBlockConfiguration.restrictPublicBuckets == true
}
```
When testing this locally (cfn-guard) I got a fail on an open bucket, with an explanation along the lines of:
```
Property traversed until [/supplementaryConfiguration] in data [PublicBucketAccess-test-fail.json] is not compliant with [PublicBucketAccess.guard/absentPublicAccessBlockConfigurationBlock] due to retrieval error.
```
I was under the assumption that if there is a retrieval error, Config marks the resource as non-compliant, but instead it either provides no results or marks it as compliant, and does not give any error. However, when I changed it to:
```
rule isBucketToBeSecured when resourceType == "AWS::S3::Bucket" {
    ...some checks...
}

rule isPublicAccessBlockConfigurationBlockPresent when isBucketToBeSecured {
    supplementaryConfiguration.PublicAccessBlockConfiguration exists
}

rule isPublicAccessBlockConfigurationBlockSecure when isPublicAccessBlockConfigurationBlockPresent {
    supplementaryConfiguration.PublicAccessBlockConfiguration.blockPublicAcls == true
    supplementaryConfiguration.PublicAccessBlockConfiguration.blockPublicPolicy == true
    supplementaryConfiguration.PublicAccessBlockConfiguration.ignorePublicAcls == true
    supplementaryConfiguration.PublicAccessBlockConfiguration.restrictPublicBuckets == true
}
```
It now works. Does anyone know why Config has such a strange evaluation mechanism, where a failure to retrieve a key gives no compliance results or marks the resource as good to go?
Also, is there a cleaner way to test for the existence of a key before trying to access its subkeys without causing a failure? When I used:
```
rule taggedBucketIsSecure2 when resourceType == "AWS::S3::Bucket" {
    let publicAccessBlockConfiguration = supplementaryConfiguration.PublicAccessBlockConfiguration
    when %publicAccessBlockConfiguration exists {
        supplementaryConfiguration.PublicAccessBlockConfiguration.blockPublicAcls == true
        supplementaryConfiguration.PublicAccessBlockConfiguration.blockPublicPolicy == true
        supplementaryConfiguration.PublicAccessBlockConfiguration.ignorePublicAcls == true
        supplementaryConfiguration.PublicAccessBlockConfiguration.restrictPublicBuckets == true
    }
}
```
I got:
```
Rule [PublicBucketAccess.guard/taggedBucketIsSecure2] is not applicable for template [PublicBucketAccess-test-fail.json]
```
I assume the problem is that since the `when` condition does not evaluate to true, it skips the evaluation, and instead of marking the resource as non-compliant it either fails or marks it as compliant.
Thanks in advance
I want to know how we can add our own custom security checks in Security Hub.
I created a new organization using AWS Control Tower (version 3.0). It seems that it has created two aggregators:
* An accounts aggregator in the audit account named `aws-controltower-GuardrailsComplianceAggregator`. This aggregator is defined to collect from specific accounts (all member accounts, excluding the management account) and from all regions. However, at least in my case, the authorizations granted by these accounts for aggregation seem messed up: each account was only set up to authorize aggregation from 5 regions, and the aggregator accordingly marks the aggregation from some accounts and regions as failed. FYI, I created my Control Tower landing zone in a single region, so I'm not sure why this setup happened.
* An organization aggregator in the management account named `aws-controltower-ConfigAggregatorForOrganizations`. This organization aggregator automatically collects from all accounts and regions in the organization, and it is working well.
Any idea why both aggregators were defined? I know that until a recent version of the landing zone there was no support for organization aggregators. But now that it has been added, why keep the account-specific aggregator in the audit account (which seems to be misconfigured anyway)?
On the flip side, given that the best practice is to use the audit account for, well, auditing, why is the organization aggregator defined in the management account and not in the audit account? Doesn't that mean that to make use of its aggregation I need to log in to the management account?
Thanks,
The domain is with GoDaddy and hosted in an AWS account. We have updated all the DNS records in the AWS account, but we are facing issues and not receiving any mail.
Hello, I am currently having a problem following this blog post: https://aws.amazon.com/blogs/mt/visualizing-aws-config-data-using-amazon-athena-and-amazon-quicksight/ along with the blog post it references: https://aws.amazon.com/blogs/mt/how-to-query-your-aws-resource-configuration-states-using-aws-config-and-amazon-athena/
So far I have created an AWS Config rule, created and configured an S3 bucket to receive the Config data, and created an Amazon Athena table for my Config data as well as the Lambda function. I also have two t2.micro instances running with configurations that make them non-compliant.
Every time I run the simple example query (https://aws.amazon.com/blogs/mt/how-to-query-your-aws-resource-configuration-states-using-aws-config-and-amazon-athena/) to list EC2 instances of type "t2.micro", no results show.
My database name in Athena is different from the one shown in the blogs, so I have replaced the database name "sampledb" with "default" wherever applicable.
Also, the blogs mention editing the region and dt partition keys based on the Region and date of the given configuration snapshot files in the Lambda function, but I do not see where I can do that. The function should automatically retrieve my region and date.
Let me know if you require more information.
Thanks.