Questions tagged with AWS Config
I have an organization that's updating its accounts to Control Tower Landing Zone 3.0. As we do so, we're finding that the upgraded accounts fail Security Hub AWS Foundational Security Best Practices rule Config.1 "AWS Config should be enabled". The failure appears to be caused by a change to Config where global resource recording only happens in the home Control Tower region. The Config.1 failures we see are in secondary regions, and we confirmed that the failing accounts don't have global resource recording active in the secondary regions.
My question is: is there a plan to update the Security Hub rule to reflect the Control Tower change? Control Tower has it right; we only need to record global resources in one region. It's also quite painful to undo the change in Landing Zone 3.0, as we have to move accounts out of CT-managed OUs or log in as the CT role to change Config.
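One way to confirm where global resources are being recorded: each region's Config recorder exposes `includeGlobalResourceTypes` in its recording group, visible via `aws configservice describe-configuration-recorders` or boto3. A minimal sketch of the check, using illustrative payloads rather than real recorder output:

```python
# Sketch: decide whether a Config recorder records global resources, given a
# recorder dict shaped like the output of DescribeConfigurationRecorders.
# Both sample payloads below are illustrative, not real account data.

def records_global_resources(recorder: dict) -> bool:
    """True if the recorder's recording group includes global resource types."""
    group = recorder.get("recordingGroup", {})
    return bool(group.get("includeGlobalResourceTypes", False))

# Hypothetical recorders: home region vs. a secondary region after the
# Control Tower Landing Zone 3.0 change.
home_region = {
    "name": "aws-controltower-BaselineConfigRecorder",
    "recordingGroup": {"allSupported": True, "includeGlobalResourceTypes": True},
}
secondary_region = {
    "name": "aws-controltower-BaselineConfigRecorder",
    "recordingGroup": {"allSupported": True, "includeGlobalResourceTypes": False},
}

print(records_global_resources(home_region))       # True
print(records_global_resources(secondary_region))  # False
```

Running this against the real API response in each region would confirm that only the home region has global recording on, matching what the Config.1 check is flagging.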
Hello, I'm trying to use an AWS Config rule with auto remediation. The rule should detect security groups with SSH open to the world and remove the offending ingress rule.
I'm using the "INCOMING_SSH_DISABLED" (restricted-ssh) managed rule and the AWS-DisablePublicAccessForSecurityGroup SSM document.
The remediation is configured with Terraform:
```
target_id      = "AWS-DisablePublicAccessForSecurityGroup"
target_type    = "SSM_DOCUMENT"
resource_type  = "AWS::EC2::SecurityGroup"
target_version = "1"

parameter {
  name         = "AutomationAssumeRole"
  static_value = aws_iam_role.ssh-remediation-role.arn
}

parameter {
  name           = "GroupId"
  resource_value = "RESOURCE_ID"
}
```
The role is:
```
data "aws_iam_policy_document" "ssm-automation-assume-role" {
  version = "2012-10-17"

  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      identifiers = ["ssm.amazonaws.com"]
      type        = "Service"
    }

    condition {
      test     = "StringEquals"
      variable = "aws:SourceAccount"
      values   = [local.account-id]
    }

    condition {
      test     = "ArnLike"
      variable = "aws:SourceArn"
      values   = ["arn:aws:ssm:*:${local.account-id}:automation-execution/*"]
    }
  }
}

resource "aws_iam_role" "ssh-remediation-role" {
  assume_role_policy = data.aws_iam_policy_document.ssm-automation-assume-role.json

  managed_policy_arns = [
    "arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole",
    "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
  ]
}
```
When I create such a security group, AWS Config detects it and runs the remediation. The Automation execution finishes with result "Success" (and the security group is properly updated, so the remediation itself works), but AWS Config shows the remediation as "Failed". When I look for details with `aws configservice describe-remediation-execution-status`, I get:
```
"State": "FAILED",
"StepDetails": [
{
"Name": "GetAutomationExecution",
"State": "FAILED",
"ErrorMessage": "AccessDeniedException while calling STS for execution: SsmExecutionId(value=d69b27e5-da83-43de-b563-9d9040c2cf03)"
}
],
```
I tried to Google this error but haven't found anything. How can I solve this issue?
Thank you for your help.
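For anyone triaging the same failure, the step-level error can be pulled straight out of the `describe-remediation-execution-status` response. A small sketch (using the truncated output from the question) that isolates the failing step:

```python
import json

# Sketch: extract the failing step from a response shaped like
# `aws configservice describe-remediation-execution-status`.
# The payload is the (truncated) output shown in the question.

response_json = """
{
  "State": "FAILED",
  "StepDetails": [
    {
      "Name": "GetAutomationExecution",
      "State": "FAILED",
      "ErrorMessage": "AccessDeniedException while calling STS for execution: SsmExecutionId(value=d69b27e5-da83-43de-b563-9d9040c2cf03)"
    }
  ]
}
"""

def failed_steps(execution: dict) -> list:
    """Return (step name, error message) pairs for every failed step."""
    return [
        (step["Name"], step.get("ErrorMessage", ""))
        for step in execution.get("StepDetails", [])
        if step.get("State") == "FAILED"
    ]

execution = json.loads(response_json)
for name, error in failed_steps(execution):
    print(f"{name}: {error}")
```

Here the failed step is GetAutomationExecution, i.e. Config's own follow-up call to read the Automation result, not the remediation steps themselves, which matches the observation that the security group was actually fixed.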
My API service was up and running on an EC2 instance but suddenly started throwing this error whenever a user tries to log in again: {"status":false,"message":"failure","result":{"code":0,"message":"Request failed with status code 451","data":{}},"responseCode":500}. The API allows new users to register but does not let existing users log back in.
I suspect this may be related to HTTP 451 (https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/451). If so, how can I verify it?
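One local check, assuming the 500 body quoted above wraps an error from an axios-style HTTP client inside the API: parse the response body and pull out the upstream status code. That would suggest the 451 comes from a request the API itself makes downstream, not from the API returning 451 directly:

```python
import json
import re

# Sketch: the API's 500 body embeds "Request failed with status code 451",
# a message typical of an axios-style client whose outbound request failed.
# Pull the upstream status code out of the wrapped error message.

body = ('{"status":false,"message":"failure","result":{"code":0,'
        '"message":"Request failed with status code 451","data":{}},'
        '"responseCode":500}')

payload = json.loads(body)
inner_message = payload["result"]["message"]

match = re.search(r"status code (\d+)", inner_message)
upstream_status = int(match.group(1)) if match else None

print(upstream_status)              # 451  <- status of the failed upstream call
print(payload["responseCode"])      # 500  <- what your API returned to the user
```

If that reading is right, the next step would be to log which outbound request the login path makes (an auth provider, mail service, etc.) and see which endpoint is returning 451.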
I am trying to use SSO on Windows and I am following the configuration instructions, as provided by AWS.
1. I successfully completed the **aws configure sso** step, input all of the required information like start URL, region, etc. The browser opens and authentication completes.
2. I execute the **aws s3 ls --profile <profile-name>** command and it lists all the buckets I can access.
3. When I execute **aws sso login --profile <assumed-role>**, I get the message *"Missing the following required SSO configuration values: sso_start_url, sso_region."*
The message at step 3 is asking me to complete what I did in step 1, even though everything appears to have worked properly. I've deleted the AWS config file, removed and reinstalled AWSCLIv2, but no joy.
Note: The Assumed Role has been added to the config file, and the setting validated by a co-worker.
Any ideas? TIA.
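Since the step 3 error lists `sso_start_url` and `sso_region` as missing, it may help to compare the assumed-role profile against one that carries all the SSO keys. A minimal sketch of such a profile in `~/.aws/config` (every value below is a placeholder, not a real start URL, account, or role):

```
[profile my-sso-profile]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111122223333
sso_role_name  = MyAssumedRole
region         = us-east-1
output         = json
```

`aws sso login --profile my-sso-profile` needs those `sso_*` keys present on the exact profile named in `--profile`; if they live only under the profile created in step 1, other profiles won't inherit them.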
I realized that when I have a .aws folder with a config file inside it, AWS responds about 100 times faster (to any upload, download, or query) than when the config file does not exist.
I do not want to have the .aws folder on my system, so I used the following code to set the region. However, it is still not as fast as when the config file existed.
```
Aws::Client::ClientConfiguration clientConfig;
clientConfig.region = "us-west-2";
Aws::Auth::AWSCredentials credentials("abcd", "abcd");
Aws::S3::S3Client s3_client(credentials, clientConfig);
```
**What else needs to be set in the code to do what config file was doing?**
Hi everyone!
I reviewed the AWS post about receiving custom email notifications when a resource is created in my AWS account using AWS Config: https://aws.amazon.com/es/premiumsupport/knowledge-center/config-email-resource-created/?nc1=h_ls
The problem is that I still can't find a way to implement this, not just for resources created in a single account but for all accounts in my AWS organization. Could someone help me figure out how to solve this, or suggest another way to receive notifications of created resources across all of my organization's accounts?
Enabling AWS Config in all regions is considered best practice, but do we need to enable it in regions other than the one we are actually using?
We've recently noticed that the AWS Control Tower control: "Detect whether MFA is enabled for AWS IAM users of the AWS Console" is reporting a false positive result (NON_COMPLIANT) for a user that was deleted over a week ago.
One thing we have noticed is that the false positive is being picked up in us-east-2, whereas IAM non-compliance is normally reported in us-east-1, so I don't know whether this is related to the incorrect results being displayed.
Has anyone experienced this issue before? How do we get it resolved, as the results are misleading to users?
Note: We have tried re-evaluating the AWS Config rule for the control and redeploying the Controls and Landing Zone in case it was a configuration issue, but it seems more related to a data issue being reported from IAM.
I've turned on S3 bucket versioning and, as root user, turned on MFADelete on my S3 buckets. In AWS Config, some S3 buckets show as Compliant for the rule s3-bucket-versioning-enabled, some show as Noncompliant.
When I run `aws s3api get-bucket-versioning` for both the Compliant and Noncompliant S3 buckets, both report versioning and MFA Delete as enabled:
```
{
    "Status": "Enabled",
    "MFADelete": "Enabled"
}
```
In Config, under Resources, for the S3 buckets that are Noncompliant, View Configuration Item (JSON) shows this:
```
"BucketVersioningConfiguration": {
    "status": "Enabled",
    "isMfaDeleteEnabled": null
},
```
For S3 buckets that are Compliant, the JSON shows this:
```
"BucketVersioningConfiguration": {
    "status": "Enabled",
    "isMfaDeleteEnabled": true
},
```
For the Noncompliant S3 buckets, I have tried suspending S3 bucket versioning and disabling MFA Delete, then re-enabling both. This did not change the Noncompliant status.
I've set up inventory management with SSM on several instances and am currently recording changes in AWS Config. I can go to the console and see the various changes on various days, but this needs to be automated. I need to figure out a way to get SNS notifications for configuration changes sent to me.
Ideally I'd only get notifications for major and minor version changes of applications, but I'd be happy with any notification to start.
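One possible route (an assumption on my part, not something from the post): an Amazon EventBridge rule that matches AWS Config configuration item changes for the SSM inventory resource type and targets an SNS topic. A sketch of such an event pattern:

```
{
  "source": ["aws.config"],
  "detail-type": ["Config Configuration Item Change"],
  "detail": {
    "messageType": ["ConfigurationItemChangeNotification"],
    "configurationItem": {
      "resourceType": ["AWS::SSM::ManagedInstanceInventory"]
    }
  }
}
```

Filtering down to only major/minor application version changes would likely need a small Lambda between the rule and SNS, since event patterns match field values, not version semantics.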
I'm using Windows 10 with all the latest fixes, and I have Wireshark installed too. How can I verify that my upload to my S3 bucket is using the 6 concurrent processes I configured? I'm not that familiar with Wireshark, and tasklist doesn't help me either. My upload script is written in Python using boto3.
Hi,
I am trying to use a Lambda function to pull CloudFront information from multiple accounts, but the aliases (CNAMEs) won't come back:
```
selectExpression = "select accountId,resourceId,awsRegion,arn,resourceCreationTime,configurationItemStatus,configuration.domainName,configuration.lastModifiedTime,configuration.distributionConfig.aliases.items,configuration.distributionConfig.origins.items.customOriginConfig.*,configuration.distributionConfig.origins.items.customOriginConfig.httpPort,configuration.distributionConfig.origins.items.customOriginConfig.httpsPort,configuration.distributionConfig.origins.items.customOriginConfig.originSslProtocols,configuration.distributionConfig.origins.items.domainName"
selectExpression = selectExpression + " where resourceType = 'AWS::CloudFront::Distribution'"
print(result['configuration']['distributionConfig']['aliases']['items'])
```
This gets the error below, but getting the origins works fine:
```
print(result['configuration']['distributionConfig']['origins']['items'])
```
Any suggestions?
The aliases property is also in the docs:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cloudfront-distribution-distributionconfig.html#cfn-cloudfront-distribution-distributionconfig-aliases
and it works with the CLI.
```
Error:
Response
{
"errorMessage": "'Aliases'",
"errorType": "KeyError",
"requestId": "345fga5-a4f4-405b-8c43-319f750e6f1a",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 62, in lambda_handler\n print(result['configuration']['distributionConfig']['Aliases']['items'])\n"
]
}
```
```
{
"aliases": {
"items": [
"www.foo.com"
]
},
"origins": {
"items": [
{
"domainName": "awseb-e-j-AWSEBLA-1XXXXXXXXXX.us-east-2.elb.amazonaws.com",
"customOriginConfig": {
"originSslProtocols": {
"quantity": 3,
"items": [
"TLSv1.2"
]
},
"httpPort": 80,
"httpsPort": 443
}
}
]
}
}
```
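Given the sample configuration item above, a quick local check (sample trimmed from the question) shows a key-case mismatch: the select expression uses lowercase `aliases`, while the stack trace accesses `Aliases` with a capital A, which is exactly what raises the KeyError:

```python
import json

# Sketch: the configuration item uses lowercase keys ('aliases'), but the
# stack trace in the question indexes 'Aliases' with a capital A.
# Sample trimmed from the question's configuration item output.

item = json.loads("""
{
  "aliases": {"items": ["www.foo.com"]},
  "origins": {"items": [{"domainName": "example.us-east-2.elb.amazonaws.com"}]}
}
""")

print(item["aliases"]["items"])     # ['www.foo.com'] -- lowercase works

try:
    print(item["Aliases"]["items"])  # capitalized key, as in the stack trace
except KeyError as err:
    print("KeyError:", err)          # KeyError: 'Aliases'
```

So the fix would be to use `result['configuration']['distributionConfig']['aliases']['items']` in the Lambda, matching the lowercase key the Config advanced query actually returns.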