
Questions tagged with AWS Identity and Access Management

Unsupported Action in Policy for S3 Glacier/Veeam

Hello, I'm new to AWS S3 Glacier and I ran across an issue. I am working with Veeam to add an S3 Glacier tier to my backup. I have the bucket created. I need to add the following to my bucket policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:GetObject",
        "s3:RestoreObject",
        "s3:ListBucket",
        "s3:AbortMultipartUpload",
        "s3:GetBucketVersioning",
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation",
        "s3:GetBucketObjectLockConfiguration",
        "ec2:DescribeInstances",
        "ec2:CreateKeyPair",
        "ec2:DescribeKeyPairs",
        "ec2:RunInstances",
        "ec2:DeleteKeyPair",
        "ec2:DescribeVpcAttribute",
        "ec2:CreateTags",
        "ec2:DescribeSubnets",
        "ec2:TerminateInstances",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeImages",
        "ec2:DescribeVpcs",
        "ec2:CreateVpc",
        "ec2:CreateSubnet",
        "ec2:DescribeAvailabilityZones",
        "ec2:CreateRoute",
        "ec2:CreateInternetGateway",
        "ec2:AttachInternetGateway",
        "ec2:ModifyVpcAttribute",
        "ec2:CreateSecurityGroup",
        "ec2:DeleteSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:DescribeRouteTables",
        "ec2:DescribeInstanceTypes"
      ],
      "Resource": "*"
    }
  ]
}
```

Once I put this in, the first error I get is "Missing Principal", so I added `"Principal": {},` under the Sid. I had no idea what to put in the brackets, so I changed it to "*" and that seemed to fix it. Not sure if that is the right thing to do? The next error I get is that all the ec2 actions and s3:ListAllMyBuckets give an error of "Unsupported Action in Policy". This is where I get lost and I'm not sure what else to do. Do I need to open my bucket to the public? Is this a permissions issue? Do I have to recreate the bucket and disable Object Lock? Please help.
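
For readers hitting the same errors: a bucket policy always requires a Principal, and it only accepts S3 actions that operate on the bucket or its objects. Account-level actions such as s3:ListAllMyBuckets, and all ec2:* actions, are rejected there as "Unsupported Action"; they belong in an identity policy attached to the IAM user or role that Veeam authenticates as. A minimal sketch of just the bucket-policy half, with a hypothetical veeam-backup user, account number, and bucket name as placeholders:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VeeamBucketAccess",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/veeam-backup" },
      "Action": [
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:GetObject",
        "s3:RestoreObject",
        "s3:ListBucket",
        "s3:AbortMultipartUpload",
        "s3:GetBucketVersioning",
        "s3:GetBucketLocation",
        "s3:GetBucketObjectLockConfiguration"
      ],
      "Resource": [
        "arn:aws:s3:::my-backup-bucket",
        "arn:aws:s3:::my-backup-bucket/*"
      ]
    }
  ]
}
```

The remaining actions from the original list (the ec2:* set plus s3:ListAllMyBuckets) would go into an IAM identity policy on that same user or role, where they are valid. The bucket does not need to be public, and Object Lock does not need to be disabled for this.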
2
answers
0
votes
5
views
amatuerAWSguy
asked 2 days ago

How can I restrict S3 bucket access to allow only VPC Flow logs from within an organization?

Hello, I have a landing zone created with Control Tower (Audit and Logging account and so on). In the logging account I have an S3 bucket in which I want to receive the VPC Flow Logs from all current and future accounts in that organization. So, I want to create a bucket policy that only allows receiving VPC Flow Logs as long as the source account is in the organization. The new accounts are created with Control Tower Account Factory by other teams in a self-service fashion, so I need to filter by organization, not by account IDs or specific ARNs. According to the VPC Flow Logs user guide, you have to add the following statement (and another similar one, but let's simplify things) to the S3 bucket policy of the destination bucket:

```
{
  "Sid": "AWSLogDeliveryWrite",
  "Effect": "Allow",
  "Principal": { "Service": "delivery.logs.amazonaws.com" },
  "Action": "s3:PutObject",
  "Resource": "my-s3-arn",
  "Condition": {
    "StringEquals": {
      "s3:x-amz-acl": "bucket-owner-full-control",
      "aws:SourceAccount": account_id
    },
    "ArnLike": {
      "aws:SourceArn": "arn:aws:logs:region:account_id:*"
    }
  }
}
```

As I need to filter by organization and not by account, I tried using the aws:PrincipalOrgID condition key instead of SourceAccount and SourceArn. However, I get an error saying that aws:PrincipalOrgID does not support service principals, and I cannot create the policy. I also tried the aws:PrincipalOrgPaths condition key. That lets me create the policy, but when I try to create the Flow Log it says "Access Denied for LogDestination: bucket_name. Please check LogDestination permissions." I have also tried keeping the principal as "*" and adding "aws:PrincipalServiceName": "delivery.logs.amazonaws.com" to the condition, but I get the same error when trying to create the Flow Logs. Does anyone have an idea how I can do that? Thanks in advance
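
One avenue worth checking, sketched below: the global condition key aws:SourceOrgID was introduced for exactly this pattern. Unlike aws:PrincipalOrgID, it is evaluated for requests that a service makes on behalf of an account, so it can be combined with a service principal. This is an untested sketch, not a verified policy; the bucket ARN and organization ID are placeholders:

```
{
  "Sid": "AWSLogDeliveryWriteOrg",
  "Effect": "Allow",
  "Principal": { "Service": "delivery.logs.amazonaws.com" },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::my-flow-log-bucket/AWSLogs/*",
  "Condition": {
    "StringEquals": {
      "s3:x-amz-acl": "bucket-owner-full-control",
      "aws:SourceOrgID": "o-xxxxxxxxxx"
    }
  }
}
```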
1
answers
0
votes
7
views
AWS-User-3850534
asked 5 days ago

Deny EFS actions to all but specific user

I'm trying to deny EFS actions to all users, except for one specific user (or users). When attaching a file system policy to my EFS using a Deny entry with NotPrincipal, I'm not able to access the EFS as I would have expected to. Example file system policy:

```
{
  "Sid": "Limit to deployer/CI",
  "Effect": "Deny",
  "NotPrincipal": {
    "AWS": [
      "arn:aws:sts::account_id:assumed-role/role_name/my_email@my_domain.com"
    ]
  },
  "Action": [
    "elasticfilesystem:DescribeMountTargets"
  ],
  "Resource": "arn:aws:elasticfilesystem:eu-west-2:account_id:file-system/efs_id"
}
```

My expectation would be that my role session would have access to the listed action, but no one else would. However, when testing this, even my user is denied access. https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/ suggests that both the `role` ARN and `assumed-role` ARN should be used in this scenario, but when I tested that, it did not work either. Following the logic used within the blog post, I can create the following:

```
{
  "Sid": "Limit to deployer",
  "Effect": "Deny",
  "Principal": { "AWS": "*" },
  "Action": [
    "elasticfilesystem:DescribeMountTargets"
  ],
  "Resource": "arn:aws:elasticfilesystem:eu-west-2:account_id:file-system/efs_id",
  "Condition": {
    "StringNotLike": {
      "aws:userId": [
        "role_principal_id:my_email@my_domain.com",
        "account_id"
      ]
    }
  }
}
```

This does appear to work as I intend, but I'd like to understand the reasoning behind the first example not working, because it is much more usable and easily understandable.
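
For context, the IAM documentation's NotPrincipal-with-Deny examples list three ARNs when the exception is a role: the assumed-role session, the underlying role, and the account root (the role and its sessions are evaluated as distinct principals, and omitting the account root causes the deny to hit the account itself, which cascades to everything in it). A sketch of the NotPrincipal element following that documented pattern:

```
"NotPrincipal": {
  "AWS": [
    "arn:aws:sts::account_id:assumed-role/role_name/my_email@my_domain.com",
    "arn:aws:iam::account_id:role/role_name",
    "arn:aws:iam::account_id:root"
  ]
}
```

Note that wildcards are not allowed in NotPrincipal, so the session-name part of the assumed-role ARN must match each session exactly; that is why the aws:userId condition approach tends to be more robust in practice.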
3
answers
0
votes
6
views
MattPalmer-HS
asked 6 days ago

InvalidClientTokenId sending message to SQS that works for SES

I'm having trouble sending a message to a new SQS queue and receive this error:

```
[Type] => Sender
[Code] => InvalidClientTokenId
[Message] => The AWS Access Key Id you provided does not exist in our records.
```

Any suggestions on why SQS is not recognizing the Key ID for SQS SendMessage, but does accept it for SES calls?

- The Key ID is the identical key used for successfully sending SES mail
- Same Elastic Beanstalk instance, application, AWS SDK
- PHP 7.4 64-bit
- Elastic Beanstalk instance on Amazon Linux 2/3.3.9
- AWS SDK 1.5.14 (tried 3.x, same results)

PHP code:

```
require_once('<path>/aws-sdk/sdk.class.php');
require_once('<path>/aws-sdk/services/sqs.class.php');

$options = array('key' => 'AKIAblahblahblah', 'secret' => 'blahblahblahblahblahblahblahblahblahblahblahblah');
$sqs = new AmazonSQS($options);
$sqs->set_region('sqs.us-east-2.amazonaws.com');

$sqs_queue = 'https://sqs.us-east-2.amazonaws.com/111112345678/my-app-sa';
$message = 'test';
$r = $sqs->send_message($sqs_queue, $message);
```

Elastic Beanstalk:

- IAM instance profile: aws-elasticbeanstalk-ec2-role
- Service role: arn:aws:iam::111112345678:role/aws-elasticbeanstalk-service-role

IAM User:

- Name=my-app-sa
- User ARN=arn:aws:iam::111112345678:user/my-app-sa
- Permissions: Policy=AmazonSQSFullAccess
- Created AccessKey: keyID=AKIAblahblahblah

SQS Queue: Name=my-sqs-queue

Access Policy:

```
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__owner_statement",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111112345678:root" },
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:us-east-2:111112345678:my-sqs-queue"
    },
    {
      "Sid": "__sender_statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111112345678:role/aws-elasticbeanstalk-ec2-role",
          "arn:aws:iam::111112345678:user/my-app-sa",
          "arn:aws:iam::111112345678:role/aws-elasticbeanstalk-service-role"
        ]
      },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-2:111112345678:my-sqs-queue"
    },
    {
      "Sid": "__receiver_statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111112345678:role/aws-elasticbeanstalk-ec2-role",
          "arn:aws:iam::111112345678:user/my-app-sa",
          "arn:aws:iam::111112345678:role/aws-elasticbeanstalk-service-role"
        ]
      },
      "Action": [
        "SQS:ChangeMessageVisibility",
        "SQS:DeleteMessage",
        "SQS:ReceiveMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-2:111112345678:my-sqs-queue"
    }
  ]
}
```
2
answers
0
votes
5
views
AWS-User-1947056
asked 8 days ago

Container Insights on Amazon EKS Fluent Bit AccessDeniedException

I'm trying to add Container Insights to my EKS cluster but running into a bit of an issue when deploying. According to my logs, I'm getting the following:

```
[error] [output:cloudwatch_logs:cloudwatch_logs.2] CreateLogGroup API responded with error='AccessDeniedException'
[error] [output:cloudwatch_logs:cloudwatch_logs.2] Failed to create log group
```

The strange part is that the role it seems to be assuming is the one attached to my EC2 worker nodes rather than the role for the service account I have created. I'm creating the service account, and can see it within AWS successfully, using the following command:

```
eksctl create iamserviceaccount --region ${env:AWS_DEFAULT_REGION} --name cloudwatch-agent --namespace amazon-cloudwatch --cluster ${env:CLUSTER_NAME} --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy --override-existing-serviceaccounts --approve
```

Despite the service account being created successfully, I continue to get my AccessDeniedException. One thing I found is that the logs work fine when I manually add the CloudWatchAgentServerPolicy to my worker nodes; however, this is not the implementation I would like. I would rather use the automated approach of adding the service account and not touch the worker nodes directly, if possible. The steps I followed can be found at the bottom of https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-prerequisites.html. Thanks so much!
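
When a pod falls back to the node role like this, it is usually because the pod is not actually running under the IRSA-enabled service account (or the cluster has no OIDC provider associated), so no web-identity token gets mounted. Two things worth verifying, sketched under assumed names: that the failing DaemonSet's pod spec references the right serviceAccountName (the error above comes from Fluent Bit, which typically runs under its own service account, often named fluent-bit, and that one needs the same IRSA treatment as cloudwatch-agent), and that the role eksctl created carries a trust policy shaped roughly like this (account ID, region, and OIDC provider ID are placeholders):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:amazon-cloudwatch:cloudwatch-agent"
        }
      }
    }
  ]
}
```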
0
answers
0
votes
3
views
AWS-User-8353451
asked 9 days ago

aws-elasticbeanstalk-ec2-role is not authorized to perform: secretsmanager:GetSecretValue although the default role is updated to include the policy

There is an EC2 instance attempting to get a secret from Secrets Manager, but it errors with the following:

```
Error getting database credentials from Secrets Manager AccessDeniedException: User: arn:aws:sts::{AccountNumber}:assumed-role/aws-elasticbeanstalk-ec2-role/i-{instanceID} is not authorized to perform: secretsmanager:GetSecretValue on resource: rds/staging/secretName because no identity-based policy allows the secretsmanager:GetSecretValue action
```

I have tried adding the following policy to the general aws-elasticbeanstalk-ec2-role to allow access, but it is still not able to get the secrets:

GetSecretsPolicy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": "arn:aws:secretsmanager:*:{AccountNumber}:secret:rds/production/secretName"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "secretsmanager:GetRandomPassword",
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": "arn:aws:secretsmanager:*:{AccountNumber}:secret:rds/staging/secretName"
    }
  ]
}
```

I continue to get the error and am wondering if there is something I can tweak to give it proper access to the secret values.
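
One detail that commonly produces exactly this symptom: Secrets Manager appends a hyphen and six random characters to every secret's ARN (for example ...secret:rds/staging/secretName-AbCdEf), so a Resource element that ends at the bare name never matches the real ARN. A sketch of the staging statement with the documented -?????? suffix pattern (a trailing * works as well):

```
{
  "Sid": "VisualEditor2",
  "Effect": "Allow",
  "Action": [
    "secretsmanager:GetResourcePolicy",
    "secretsmanager:GetSecretValue",
    "secretsmanager:DescribeSecret",
    "secretsmanager:ListSecretVersionIds"
  ],
  "Resource": "arn:aws:secretsmanager:*:{AccountNumber}:secret:rds/staging/secretName-??????"
}
```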
1
answers
0
votes
7
views
AWS-User-1866056
asked 11 days ago

AWS Cognito - How to force select account when signing in with Google

I am using Cognito User Pools and `federatedSignIn({provider: 'Google'})` to have the user log in using Google. The user may have multiple accounts with our application, and there is a high chance they may use a work vs. personal Google account to log in. Unfortunately, if the user has already logged into the application via Google, we're unable to force account selection so they can pick a different one without calling the Cognito LOGOUT endpoint (which would kill their existing Google cookie with Cognito, which is what we don't want to do here). A solution would have been to use Google's `prompt=select_account`, but there is no way to specify this in the `federatedSignIn()` call. There have been posts describing a "workaround" where you use the Google SDK directly to do the login with the parameter, then call `federatedSignIn()` with the Google Auth JWT, but that workaround does not work because it is specific to *identity pool* federation, which would not give you back the Cognito JWTs in the *user pool* federation scenario. This is the non-working hack that users generally refer to:

- (The actual code for the hack) https://docs.amplify.aws/lib/auth/advanced/q/platform/js/#google-sign-in-react
- https://github.com/aws-amplify/amplify-js/issues/4044
- https://forums.aws.amazon.com/thread.jspa?messageID=980276&#980276
- https://stackoverflow.com/questions/58154256/aws-cognito-how-to-force-select-account-when-signing-in-with-google

It will not give you back the Cognito User Pool tokens.
0
answers
0
votes
1
views
theogravity-switchboard
asked 11 days ago

CreateBotLocale is erroring with user has no permissions

Hi, I am using the Java SDK to create a Lex V2 bot. Here is the code to create the bot:

```
final DataPrivacy dataPrivacy = DataPrivacy.builder()
    .childDirected(isDataPrivacyRequired)
    .build();
final CreateBotRequest botRequest = CreateBotRequest.builder()
    .botName(botName)
    .roleArn(roleARN)
    .idleSessionTTLInSeconds(idleSessionTTLInSeconds)
    .dataPrivacy(dataPrivacy)
    .build();
final CreateBotResponse response = this.lexClient.createBot(botRequest);
```

The bot gets created. As a next step I create the bot locale like the following:

```
final CreateBotLocaleRequest botLocaleRequest = CreateBotLocaleRequest.builder()
    .botId(botId)
    .nluIntentConfidenceThreshold(0.4)
    .botVersion("DRAFT")
    .localeId("en_US")
    .build();
final CreateBotLocaleResponse botLocaleResponse = this.lexClient.createBotLocale(botLocaleRequest);
```

The above doesn't work and I get the following error:

```
software.amazon.awssdk.services.lexmodelsv2.model.LexModelsV2Exception: User: arn:aws:iam::xxxxxxxxxxx:user/ci-user is not authorized to perform: null (Service: LexModelsV2, Status Code: 403, Request ID: f9ebd3de-c0d4-4c3d-b1ad-8a2c38a22552, Extended Request ID: null)
```

The only difference between creating the bot and the bot locale is roleArn. I am not sure if that is creating this problem. How can I solve it? Any insights? By the way, I am using the following code to get the Lex client:

```
public LexModelsV2Client getLexClient() {
    Region region = Region.AP_SOUTHEAST_1;
    DefaultCredentialsProvider provider = DefaultCredentialsProvider.create();
    return LexModelsV2Client.builder().credentialsProvider(provider).region(region).build();
}
```

This IAM user has all the AWS permissions needed for access, and I have used the policy simulator to test the policy and it grants access. Not sure what is missing!
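
A sketch of an identity policy covering both calls, in case the gap is on the ci-user side. The Lex V2 model-building APIs are authorized under the lex: action prefix, and passing a roleArn to CreateBot additionally requires iam:PassRole on that role. This is an assumption to adapt, not a confirmed fix for the "not authorized to perform: null" message; the role ARN below is a placeholder:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lex:CreateBot",
        "lex:CreateBotLocale",
        "lex:DescribeBot",
        "lex:DescribeBotLocale"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::xxxxxxxxxxx:role/your-lex-bot-role",
      "Condition": {
        "StringEquals": { "iam:PassedToService": "lexv2.amazonaws.com" }
      }
    }
  ]
}
```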
2
answers
0
votes
5
views
AWS-User-6871093
asked 11 days ago

Trying to isolate IAM user to have AmazonEC2ReadOnlyAccess to only select instances using python boto3

Ok so the policy `arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess` looks like this:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "elasticloadbalancing:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:Describe*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "autoscaling:Describe*",
      "Resource": "*"
    }
  ]
}
```

This works to allow the IAM user to perform most EC2 read functions. The problem is that this is too permissive. What I need is to allow all the same functionality, but ONLY for certain instances. So what I attempted is to scope this down given a list of instance ids `instanceids` (using python boto3):

```
ResourceIds = [
    f"arn:aws:ec2:{REGION_NAME}:{AWS_ACCOUNTID}:instance/{iid}"
    for iid in instanceids
]

Ec2ReadOnlyPolicy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": ResourceIds
        },
        {
            "Effect": "Allow",
            "Action": "elasticloadbalancing:Describe*",
            "Resource": ResourceIds
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:ListMetrics",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:Describe*"
            ],
            "Resource": ResourceIds
        },
        {
            "Effect": "Allow",
            "Action": "autoscaling:Describe*",
            "Resource": ResourceIds
        }
    ]
}

response = iam_client.put_group_policy(
    PolicyDocument=json.dumps(Ec2ReadOnlyPolicy),
    PolicyName=EC2_RO_POLICY_NAME,
    GroupName=UserGroupName,
)
```

The problem is that this doesn't seem to allow the user to list the instances they have access to:

```
$ aws ec2 describe-instances

An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
```

What am I doing wrong?
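
For anyone debugging the same thing: the ec2:Describe* actions (and the ELB and Auto Scaling Describe actions) do not support resource-level permissions, so any Describe statement whose Resource is narrower than "*" simply never matches and the call is denied. Describe calls are all-or-nothing per account. The usual pattern is to leave the read-only statements on "*" and scope the mutating actions with tag conditions instead. A sketch, where the team tag key and value are placeholders:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DescribeRequiresStarResource",
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Sid": "MutateOnlyTaggedInstances",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/team": "my-team" }
      }
    }
  ]
}
```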
2
answers
0
votes
14
views
chrisjd20
asked 13 days ago

MSK Connect on t3.small fails due to not-retryable SaslAuthenticationException - reconnect.backoff.ms worker configuration will not help - can AWS remove the connection limit?

Hello, we are encountering the same issues as e.g. https://github.com/aws/aws-msk-iam-auth/issues/28 regarding the `SaslAuthenticationException` while using MSK Connect with a kafka.t3.small instance. Setting `reconnect.backoff.ms` to e.g. 10000 ms will not resolve the issue, since the exception that is being thrown (`SaslAuthenticationException`) is not retryable (see https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L808) and ultimately leads to the creation of a new client, not a reconnect. When would the reconnect take place? As I went through the implementation, what I see is:

1. `startConnect()` in `ConnectDistributed` calls the constructor of `Worker`
2. the constructor of `Worker` calls `ConnectUtils.lookupKafkaClusterId(config)`
3. that method calls `Admin.create(config.originals())`, which opens a new connection
4. if you follow the calls from there, you will see that you end up not retrying upon obtaining a `SaslAuthenticationException` (https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L808)

Even if the retry worked, several AdminClients are created, which all connect to the MSK cluster. Since this is not a reconnect, the `reconnect.backoff.ms` setting cannot work as remediation. There is no mechanism in the Kafka code that would globally restrict these connections to happen only every x seconds. Unless I am overlooking something, MSK Connect should only work by chance with t3.small instances. This forces us to either:

* not use IAM and go for SASL/SCRAM, or
* use a kafka.m5.large instance and go from about 32 USD/month to 151 USD/month per instance, meaning 90 USD vs 450 USD in our case

The limitation on the t3.small instance really limits what we want to achieve. The workaround presented [here](https://aws.amazon.com/premiumsupport/knowledge-center/msk-connector-connect-errors/) is not working and thus forces us to buy the larger instance. We have no need for a large instance and we don't want to incur additional costs for simply using IAM for MSK Connect.

**Can AWS remove the limit on the t3.small instance or present a different workaround? That would be great :)**

I cannot open a support case for this, since we don't have the required subscription, and I believe this could be of general interest. See parts of our logs using AWS MSK Connect:

```
[Worker-05ea3408948fa0a4c] [2022-01-01 22:41:53,059] INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils:49)
[Worker-05ea3408948fa0a4c] [2022-01-01 22:41:53,061] INFO AdminClientConfig values:
...
[Worker-05ea3408948fa0a4c] reconnect.backoff.max.ms = 10000
[Worker-05ea3408948fa0a4c] reconnect.backoff.ms = 10000
[Worker-05ea3408948fa0a4c] request.timeout.ms = 30000
[Worker-05ea3408948fa0a4c] retries = 2147483647
[Worker-05ea3408948fa0a4c] retry.backoff.ms = 10000
...
[Worker-05ea3408948fa0a4c] [2022-01-01 22:41:54,269] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:86)
[Worker-05ea3408948fa0a4c] org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
[Worker-05ea3408948fa0a4c] at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:70)
[Worker-05ea3408948fa0a4c] at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:51)
[Worker-05ea3408948fa0a4c] at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:140)
[Worker-05ea3408948fa0a4c] at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:127)
[Worker-05ea3408948fa0a4c] at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:118)
[Worker-05ea3408948fa0a4c] at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:80)
[Worker-05ea3408948fa0a4c] Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SaslAuthenticationException: [e4afe53f-73b5-4b94-9ac3-30d737071e56]: Too many connects
[Worker-05ea3408948fa0a4c] at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
[Worker-05ea3408948fa0a4c] at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
[Worker-05ea3408948fa0a4c] at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
[Worker-05ea3408948fa0a4c] at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
[Worker-05ea3408948fa0a4c] at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
[Worker-05ea3408948fa0a4c] ... 5 more
[Worker-05ea3408948fa0a4c] Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: [e4afe53f-73b5-4b94-9ac3-30d737071e56]: Too many connects
[Worker-05ea3408948fa0a4c] [2022-01-01 22:41:54,281] INFO Stopped http_0.0.0.08083@68631b1d{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:381)
[Worker-05ea3408948fa0a4c] [2022-01-01 22:41:54,283] INFO Stopped https_0.0.0.08443@611d0763{SSL, (ssl, http/1.1)}{0.0.0.0:8443} (org.eclipse.jetty.server.AbstractConnector:381)
[Worker-05ea3408948fa0a4c] MSK Connect encountered errors and failed.
```
0
answers
2
votes
12
views
mfbieber
asked 15 days ago

S3 bucket permissions to run CloudFormation from different accounts and create Lambda Functions

Not sure what I am missing, but I keep getting permission denied errors when I launch CloudFormation using the HTTPS URL. Here are the details. I have an S3 bucket "mys3bucket" in ACCOUNT A. In this bucket, I have a CloudFormation template stored at s3://mys3bucket/project1/mycft.yml. The bucket is in us-east-1. It uses S3 server-side encryption with Amazon S3 managed keys (SSE-S3, not KMS). For this bucket, I have disabled ACLs; the bucket and all objects are private, but I have added a bucket policy which is as below:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT_B_NUMBER:root" },
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mys3bucket",
        "arn:aws:s3:::mys3bucket/project1/*"
      ]
    }
  ]
}
```

Now, I log in to Account B --> CloudFormation --> Create new stack --> Template is ready --> Amazon S3 URL, and then I enter the object path to my template in this format: https://mys3bucket.s3.amazonaws.com/project1/mycft.yml

When I click next, I get the following message on the same page as a banner in red:

```
S3 error: Access Denied For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
```

Also, just for your information, I am able to list the bucket and objects from Account B if I use Cloud9 and run:

```
aws s3 ls s3://mys3bucket/project1/mycft.yml
aws s3 cp s3://mys3bucket/project1/mycft.yml .
```

What am I missing? (I think this should work even when the bucket is private but the bucket policy allows cross-account access.) Does this use case require my bucket to be hosted as a static website?
2
answers
0
votes
8
views
Alexa
asked 21 days ago

IAM policy editor warnings: Specify log-group resource ARN for the actions

When using the IAM visual policy editor, it does not seem to care much whether the selected (CloudWatch Logs) actions match the level of the specified ARN resources. Although the syntax is correct, it subsequently complains with warnings for certain policy statements:

> Specify log-group resource ARN for the GetLogGroupFields and 6 more actions.
> One or more actions may not support this resource.
> Specify log-stream resource ARN for the PutLogEvents and 1 more action.

Even if I follow the action listing specifications at https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazoncloudwatchlogs.html#amazoncloudwatchlogs-actions-as-permissions and re-group the actions into statements according to their resource scopes (i.e. log-group; log-stream), the warnings still appear, seemingly because the resource ARNs specified still don't tie in to their supposed levels?

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "logs:GetLogRecord",
        "logs:GetQueryResults",
        "logs:StopQuery",
        "logs:TestMetricFilter"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:FilterLogEvents",
        "logs:GetLogGroupFields",
        "logs:ListTagsLogGroup",
        "logs:StartQuery"
      ],
      "Resource": [
        "arn:aws:logs:REGION:ACCOUNT:log-group:App/all-logs/dev:*",
        "arn:aws:logs:REGION:ACCOUNT:log-group:App/error-logs/dev:*"
      ]
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": [
        "logs:GetLogEvents",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:REGION:ACCOUNT:log-group:App/all-logs/dev:log-stream:*",
        "arn:aws:logs:REGION:ACCOUNT:log-group:App/error-logs/dev:log-stream:*"
      ]
    }
  ]
}
```

I am particularly confused by the ARN formatting. When I look at a CloudWatch log group's properties, its ARN is shown as arn:aws:logs:REGION:ACCOUNT:log-group:App/all-logs/dev:* but shouldn't it be arn:aws:logs:REGION:ACCOUNT:log-group:App/all-logs/dev, since it's not its child resources we're targeting? Regardless, even if I try that alternative format, the same warning is present. What am I missing in the statements?
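
For comparison, the service authorization reference defines the log-group resource type without the trailing :* (the console displays the :* form, which in a policy also matches the group's child log streams). A sketch of a log-group-scoped statement using the bare-group ARNs, which is the shape the editor's warning asks for; whether it silences the warning for every action listed is not guaranteed, since the editor's heuristics are known to be noisy:

```
{
  "Sid": "LogGroupScoped",
  "Effect": "Allow",
  "Action": [
    "logs:CreateLogStream",
    "logs:FilterLogEvents",
    "logs:GetLogGroupFields",
    "logs:ListTagsLogGroup",
    "logs:StartQuery"
  ],
  "Resource": [
    "arn:aws:logs:REGION:ACCOUNT:log-group:App/all-logs/dev",
    "arn:aws:logs:REGION:ACCOUNT:log-group:App/error-logs/dev"
  ]
}
```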
2
answers
0
votes
9
views
icelava
asked a month ago

How to investigate Aurora Postgres IAM authentication errors from rdsauthproxy

I have been using IAM database authentication on an Aurora for PostgreSQL cluster for many months now and everything worked well. A few days ago I started getting login errors, and now it is impossible to log in at all. I am not sure about the timeline, as we only use these accounts for individual user connections. Only accounts not using IAM can log in now. I am not aware of any change, and I cannot pinpoint the root cause of the error. The error I am getting in Postgres clients is this:

```
Unable to connect to server: FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

If I look into the Postgres logs I get a little more detail:

```
* Trying <REDACTED_IP>:1108...
* Connected to rdsauthproxy (<REDACTED_IP>) port 1108 (#0)
> POST /authenticateRequest HTTP/1.1
Host: rdsauthproxy:1108
Accept: */*
Content-Length: 753
Content-Type: multipart/form-data; boundary=------------------------1f9a4da08078f511

* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
<
* Connection #0 to host rdsauthproxy left intact
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:LOG: pam_authenticate failed: Permission denied
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:FATAL: PAM authentication failed for user "<REDACTED_USERNAME>"
2021-12-05 14:42:43 UTC:10.4.2.137(32029):<REDACTED_USERNAME>@postgres:[7487]:DETAIL: Connection matched pg_hba.conf line 13: "hostssl all +rds_iam all pam"
2021-12-05 14:42:43 UTC:10.4.2.137(13615):<REDACTED_USERNAME>@postgres:[7488]:FATAL: pg_hba.conf rejects connection for host "<REDACTED_IP>", user "<REDACTED_USERNAME>", database "postgres", SSL off
```

So it seems to be "rdsauthproxy" that rejects the authentication. My understanding is that this proxy is part of the Aurora instance, and I have not found a way to get its logs, where hopefully I could find information on why the authentication is rejected. I checked the IAM configuration in case something changed, but it seems fine. The users have a policy like this:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:eu-west-3:<REDACTED_ACCOUNT_ID>:dbuser:*/<REDACTED_USERNAME>"
    }
  ]
}
```

The usernames match exactly between IAM and Postgres. In Postgres they all have the "rds_iam" role. Is there anything I could be missing? At least, is there a way to retrieve the logs of the Aurora rdsauthproxy instance that could maybe point me in the right direction?
1
answers
0
votes
5
views
Fran
asked a month ago

IAM policy to invoke AssumeRoleWithWebIdentity

I am trying to develop a Lambda function, implemented in Python, for user federation. This Lambda function invokes GetOpenIdTokenForDeveloperIdentity first to get a token from an identity pool, then invokes AssumeRoleWithWebIdentity. However, I get an error when the Lambda function attempts to invoke AssumeRoleWithWebIdentity:

```
"An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity
```

The trust relationship and policy attached to the role of the Lambda function are as follows:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com",
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:AssumeRoleWithWebIdentity"
      ]
    }
  ]
}
```

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "lambda:InvokeFunction",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "cognito-identity:GetOpenIdTokenForDeveloperIdentity",
        "sts:AssumeRoleWithWebIdentity"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
```

I am wondering if I have set enough permissions to invoke AssumeRoleWithWebIdentity. I would appreciate any suggestions. Just in case, this is a snippet of the Lambda function:

```
# 'provider_name' is a custom provider name set in an identity pool in AWS
cog_cli = boto3.client('cognito-identity')
cog_id_res = cog_cli.get_open_id_token_for_developer_identity(
    IdentityPoolId=os.environ['IDENTITY_POOL_ID'],
    Logins={
        provider_name: user_id
    }
)

sts_cli = boto3.client("sts")
sts_res = sts_cli.assume_role_with_web_identity(
    RoleArn=os.environ['TARGET_ROLE_ARN'],
    RoleSessionName=user_id,
    WebIdentityToken=cog_id_res['Token']
)
```
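
One point that often trips this flow up: sts:AssumeRoleWithWebIdentity is authorized by the trust policy of the *target* role (the one in TARGET_ROLE_ARN), based on the web identity token itself, not by the caller's identity policy or the Lambda execution role's trust relationship. A sketch of what the target role's trust policy would need to look like for a Cognito-issued token, with the identity pool ID as a placeholder:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "cognito-identity.amazonaws.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "YOUR_IDENTITY_POOL_ID"
        }
      }
    }
  ]
}
```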
1
answers
0
votes
1
views
mille-printemps
asked 2 years ago

Can I register an IDP with multiple certificates on AWS IAM

What I have is: an OpenID-compliant service (REST) that provides tokens. This service has multiple certificates (key pairs) for signing tokens, depending on some factors when requesting a token. The service implements the two OpenID endpoints (well-known and certs).

What I did: I registered the service as an IdP on the AWS IAM service successfully (hence my two OpenID endpoints are working, otherwise AWS wouldn't accept the IdP). I created roles in IAM that are to be assumed using the IdP service tokens. I got two tokens from the IdP service to be used for assuming a role (each signed with a different key).

Problem: AssumeRole is failing and I'm getting an invalid token exception for both tokens. I tried to set the "kid" claim in the tokens, each with the corresponding kid of the certificate, and it didn't work :(. Note: I'm assuming the role using the Java AWS API. When I remove one of the certificates (from the sample response below) the remaining certificate works fine. So the problem is with having two certs, but I need to have two certificates, and AWS should have a way of working with such a case; I just don't know how.

Sample of how my certs endpoint looks:

```
{
  "keys": [
    {
      "kid": "kid",
      "kty": "kty",
      "use": "use",
      "alg": "alg",
      "n": "nValue",
      "e": "eValue",
      "x5c": [ "cert1" ],
      "x5t": "x5t=",
      "x5t#S256": "x5t#S256"
    },
    {
      "kid": "kid1",
      "kty": "kty",
      "use": "use",
      "alg": "alg",
      "n": "nValue",
      "e": "eValue",
      "x5c": [ "cert2" ],
      "x5t": "x5t=",
      "x5t#S256": "x5t#S256"
    }
  ]
}
```

Edited by: hfakih on Feb 10, 2020 6:49 AM
1
answers
0
votes
1
views
hfakih
asked 2 years ago

STS temporary credentials: "Access Key Id you provided does not exist"

Hello everyone, I'm running ECS Fargate tasks and they need to PUT files to an S3 bucket. I decided to use STS temporary credentials instead of just hardcoding long-lived credentials in the Docker image. So, I start by requesting this URL in bash:

```
json=$(curl "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
```

It works; it returns this JSON output:

```
{
  "RoleArn": "The correct ARN of the Task Role. This role has the s3:PutObject permission.",
  "AccessKeyId": "ASIA4H7NO7.....",
  "SecretAccessKey": "Some string",
  "Token": "Some long string"
}
```

Now I use the _AccessKeyId_ and _SecretAccessKey_ I got to perform a V4 signature so I can PUT the file to S3: <https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html>

I get this response, do you have any idea why?

```
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>The AWS Access Key Id you provided does not exist in our records.</Message>
  <AWSAccessKeyId>ASIA4H7NO7...</AWSAccessKeyId>
  <RequestId>AE2074679...</RequestId>
  <HostId>Some long string</HostId>
</Error>
```

How come it says it does not exist? It's the access key AWS gave me. I'm not using SDKs, just scripting some bash, which is indeed working fine when I use long-lived credentials (e.g. the AKIA access key). On a side note... what's with the magic IP 169.254.170.2? Can't I use some hostname? Thanks in advance.
1
answers
0
votes
0
views
BrightSoul
asked 2 years ago

Troubleshooting SAML 2.0 Federation, Invalid SAML Response

Hello everyone, I'm trying to SSO into AWS through my IdP (Keycloak). I'm stuck with the error **Your Request Included an Invalid SAML Response. To Logout, Click Here** that is thrown from AWS Sign-In. This specific error is described in the AWS documentation and states that the response from the identity provider does not include an attribute with the **Name** set to **https://aws.amazon.com/SAML/Attributes/Role**. But as you can see in the authentication response below (at the very end), this is set. Any help is appreciated here :)

Thanks, Carsten

```
<?xml version="1.0"?>
<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" Destination="https://signin.aws.amazon.com/saml" ID="ID_c1b23a5d-0b90-4a2a-b88d-50b5c854bbe7" IssueInstant="2019-10-14T14:06:43.661Z" Version="2.0">
  <saml:Issuer>https://auth.acme.org/auth/realms/master</saml:Issuer>
  <dsig:Signature xmlns:dsig="http://www.w3.org/2000/09/xmldsig#">
    <dsig:SignedInfo>
      <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
      <dsig:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
      <dsig:Reference URI="#ID_c1b23a5d-0b90-4a2a-b88d-50b5c854bbe7">
        <dsig:Transforms>
          <dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
          <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
        </dsig:Transforms>
        <dsig:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
        <dsig:DigestValue>!REMOVED!</dsig:DigestValue>
      </dsig:Reference>
    </dsig:SignedInfo>
    <dsig:SignatureValue>!REMOVED!</dsig:SignatureValue>
    <dsig:KeyInfo>
      <dsig:KeyName>!REMOVED!</dsig:KeyName>
      <dsig:X509Data>
        <dsig:X509Certificate>!REMOVED!</dsig:X509Certificate>
      </dsig:X509Data>
      <dsig:KeyValue>
        <dsig:RSAKeyValue>
          <dsig:Modulus>!REMOVED!</dsig:Modulus>
          <dsig:Exponent>AQAB</dsig:Exponent>
        </dsig:RSAKeyValue>
      </dsig:KeyValue>
    </dsig:KeyInfo>
  </dsig:Signature>
  <samlp:Status>
    <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
  </samlp:Status>
  <saml:Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion" ID="ID_17ff48a3-e794-4b66-a237-c93820afccea" IssueInstant="2019-10-14T14:06:43.661Z" Version="2.0">
    <saml:Issuer>https://auth.acme.org/auth/realms/master</saml:Issuer>
    <dsig:Signature xmlns:dsig="http://www.w3.org/2000/09/xmldsig#">
      <dsig:SignedInfo>
        <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
        <dsig:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
        <dsig:Reference URI="#ID_17ff48a3-e794-4b66-a237-c93820afccea">
          <dsig:Transforms>
            <dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
            <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
          </dsig:Transforms>
          <dsig:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
          <dsig:DigestValue>!REMOVED!</dsig:DigestValue>
        </dsig:Reference>
      </dsig:SignedInfo>
      <dsig:SignatureValue>!REMOVED!</dsig:SignatureValue>
      <dsig:KeyInfo>
        <dsig:KeyName>!REMOVED!</dsig:KeyName>
        <dsig:X509Data>
          <dsig:X509Certificate>!REMOVED!</dsig:X509Certificate>
        </dsig:X509Data>
        <dsig:KeyValue>
          <dsig:RSAKeyValue>
            <dsig:Modulus>!REMOVED!</dsig:Modulus>
            <dsig:Exponent>AQAB</dsig:Exponent>
          </dsig:RSAKeyValue>
        </dsig:KeyValue>
      </dsig:KeyInfo>
    </dsig:Signature>
    <saml:Subject>
      <saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">G-367233d6-89a5-417b-9f1a-a4fa98f04a9c</saml:NameID>
      <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
        <saml:SubjectConfirmationData NotOnOrAfter="2019-10-14T14:07:41.661Z" Recipient="https://signin.aws.amazon.com/saml"/>
      </saml:SubjectConfirmation>
    </saml:Subject>
    <saml:Conditions NotBefore="2019-10-14T14:06:41.661Z" NotOnOrAfter="2019-10-14T14:07:41.661Z">
      <saml:AudienceRestriction>
        <saml:Audience>urn:amazon:webservices</saml:Audience>
      </saml:AudienceRestriction>
    </saml:Conditions>
    <saml:AuthnStatement AuthnInstant="2019-10-14T14:06:43.661Z" SessionIndex="9909c433-23c8-44c1-a0d2-dbd862289b37::d87788fb-e8d3-4f93-b1f0-c638546a7a8e" SessionNotOnOrAfter="2019-10-15T00:06:43.661Z">
      <saml:AuthnContext>
        <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml:AuthnContextClassRef>
      </saml:AuthnContext>
    </saml:AuthnStatement>
    <saml:AttributeStatement>
      <saml:Attribute FriendlyName="Session Name" Name="https://aws.amazon.com/SAML/Attributes/RoleSessionName" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">firstname.lastname</saml:AttributeValue>
      </saml:Attribute>
      <saml:Attribute FriendlyName="Session Duration" Name="https://aws.amazon.com/SAML/Attributes/SessionDuration" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">28800</saml:AttributeValue>
      </saml:Attribute>
      <saml:Attribute FriendlyName="Session Role" Name="https://aws.amazon.com/SAML/Attributes/Role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">admin</saml:AttributeValue>
        <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">arn:aws:iam::MY_REMOVED_ACCOUNTID:role/AssumeRoleSAML,arn:aws:iam::MY_REMOVED_ACCOUNTID:saml-provider/MYPROVIDER</saml:AttributeValue>
      </saml:Attribute>
    </saml:AttributeStatement>
  </saml:Assertion>
</samlp:Response>
```

Edited by: Bob The Builder on Oct 17, 2019 4:02 PM
1
answers
0
votes
1
views
Bob The Builder
asked 2 years ago

InvalidAction on IAM

I'm trying to perform a GetPolicy request via a V4 signature signing request and getting an "InvalidAction" response. In fact, I get the same InvalidAction response for any other action that I specify for the IAM service. If I use the exact same signing code and switch to the EC2 service, it works fine. What am I missing? This is the response I get:

```
Request URL = https://iam.amazonaws.com?Action=GetPolicy&PolicyArn=arn:aws:iam::111111111111:policy/my-policy&Version=2013-10-15
Response code: 400

<ErrorResponse xmlns="http://webservices.amazon.com/AWSFault/2005-15-09">
  <Error>
    <Type>Sender</Type>
    <Code>InvalidAction</Code>
    <Message>Could not find operation GetPolicy for version 2013-10-15</Message>
  </Error>
  <RequestId>5de4035f-e807-11e9-b65b-85bd552fc7ba</RequestId>
</ErrorResponse>
```

This is the code I'm using (I followed the instructions here: <https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html>):

```
import requests
import sys, os, base64, datetime, hashlib, hmac

method = 'GET'
service = 'iam'
host = 'iam.amazonaws.com'
endpoint = 'https://iam.amazonaws.com'
request_parameters = 'Action=GetPolicy&PolicyArn=arn:aws:iam::111111111111:policy/my-policy&Version=2013-10-15'

access_key = 'XXX'
secret_key = 'XXX'
region = 'us-east-1'

def sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def getSignatureKey(key, dateStamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning

t = datetime.datetime.utcnow()
amzdate = t.strftime('%Y%m%dT%H%M%SZ')
datestamp = t.strftime('%Y%m%d')

canonical_uri = '/'
canonical_querystring = request_parameters
canonical_headers = 'host:' + host + '\n' + 'x-amz-date:' + amzdate + '\n'
signed_headers = 'host;x-amz-date'
payload_hash = hashlib.sha256(('').encode('utf-8')).hexdigest()
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash

algorithm = 'AWS4-HMAC-SHA256'
credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'
string_to_sign = algorithm + '\n' + amzdate + '\n' + credential_scope + '\n' + hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()

signing_key = getSignatureKey(secret_key, datestamp, region, service)
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()

authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature
headers = {'x-amz-date': amzdate, 'Authorization': authorization_header}

request_url = endpoint + '?' + canonical_querystring
print('Request URL = ' + request_url)
r = requests.get(request_url, headers=headers)
print('Response code: %d\n' % r.status_code)
```

Any help would be greatly appreciated.
4
answers
0
votes
1
views
tomdaq
asked 2 years ago

How do you restrict AMI use with IAM using Deny and NotResource

I have a policy that allows only certain AMI and Security Group combinations to run. Although this works, it can be bypassed if there is another policy that allows RunInstances. It would be more secure if I could deny everything except that combination of AMIs and Security Groups, but when I try this the launch always fails. The sole difference is the use of Deny/NotResource instead of Allow/Resource. This is the only policy attached to the user, and there is no SCP.

This is the version using Allow, which works:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyAccess",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:GetConsole*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Fixed",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:*:*:subnet/*",
        "arn:aws:ec2:*:*:key-pair/*",
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:network-interface/*"
      ]
    },
    {
      "Sid": "Variable",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:*:*:security-group/sg-03cf946fca20ef2e2",
        "arn:aws:ec2:us-east-1::image/ami-04681a1dbd79675a5",
        "arn:aws:ec2:us-east-1::image/ami-0ff8a91507f77f867"
      ]
    }
  ]
}
```

This is the version using Deny, which fails to launch:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyAccess",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:GetConsole*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Fixed",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:*:*:subnet/*",
        "arn:aws:ec2:*:*:key-pair/*",
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:network-interface/*"
      ]
    },
    {
      "Sid": "Variable",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "NotResource": [
        "arn:aws:ec2:*:*:security-group/sg-03cf946fca20ef2e2",
        "arn:aws:ec2:us-east-1::image/ami-04681a1dbd79675a5",
        "arn:aws:ec2:us-east-1::image/ami-0ff8a91507f77f867"
      ]
    }
  ]
}
```
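
A plausible explanation with a sketch, for readers with the same problem: RunInstances authorizes every resource involved in the launch (image, security group, subnet, network interface, volume, instance, key pair). With NotResource, the Deny applies to every resource *not* in the list, so the subnet, instance, volume, and other ARNs in the launch request are all denied, and the launch fails. Exempting those resource types in the Deny statement as well should leave only the AMIs and the security group constrained:

```
{
  "Sid": "Variable",
  "Effect": "Deny",
  "Action": "ec2:RunInstances",
  "NotResource": [
    "arn:aws:ec2:*:*:security-group/sg-03cf946fca20ef2e2",
    "arn:aws:ec2:us-east-1::image/ami-04681a1dbd79675a5",
    "arn:aws:ec2:us-east-1::image/ami-0ff8a91507f77f867",
    "arn:aws:ec2:*:*:subnet/*",
    "arn:aws:ec2:*:*:key-pair/*",
    "arn:aws:ec2:*:*:instance/*",
    "arn:aws:ec2:*:*:volume/*",
    "arn:aws:ec2:*:*:network-interface/*"
  ]
}
```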
1
answers
0
votes
2
views
Sean_L
asked 3 years ago