
Questions tagged with IAM Policies



Cross-Account Connect Athena (account X) to Glue + S3 (account Y)

Hello, this question from 3 years ago, https://repost.aws/questions/QUSdk1j9-FT02t91W3AU0Qng/cross-account-access-from-athena-to-s-3, seems similar. I did everything it suggests apart from using Lake Formation; I wanted to try creating the permissions manually first.

**Account Y**: I have JSON data in an S3 bucket and used Glue to create the catalog in account Y. I configured this owner account per Step 1.a of https://docs.aws.amazon.com/athena/latest/ug/security-iam-cross-account-glue-catalog-access.html and configured the S3 bucket according to "Apply a cross-account bucket policy" from https://tomgregory.com/s3-bucket-access-from-the-same-and-another-aws-account/

**Account X**: I want to configure Athena to query S3 using the catalog created by Glue. I configured this borrower account per Step 1.b of the same Athena guide and configured the IAM policies according to "Apply a cross-account bucket policy" from the article above. Both the S3 and Glue policies are attached to the relevant users in this account.

**Problem**: In account X, Athena can access Glue and displays the database, tables, and catalog. However, when I run a query (the same query succeeds in account Y) I get the error:

```
Permission denied on S3 path: s3://asdf
This query ran against the "dbname" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: a3a3a3a...
```

Apparently I'm missing an S3 permission, but I can't find information about it. Any help is much appreciated. Thanks,
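For context, this error usually means the identity running the query in account X can reach the Glue catalog but not the underlying data: the bucket policy in account Y has to grant the account X principal `s3:ListBucket`/`s3:GetBucketLocation` on the bucket itself as well as `s3:GetObject` on the objects, and the account X identity policy needs the same actions. A minimal sketch with boto3; the bucket name, account ID, and role ARN are placeholders, not taken from the question:

```python
import json
import boto3

# Placeholder names for illustration only.
BUCKET = "example-data-bucket-in-account-y"
ACCOUNT_X_PRINCIPAL = "arn:aws:iam::111122223333:role/athena-query-role"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccountXToListBucket",
            "Effect": "Allow",
            "Principal": {"AWS": ACCOUNT_X_PRINCIPAL},
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "AllowAccountXToReadObjects",
            "Effect": "Allow",
            "Principal": {"AWS": ACCOUNT_X_PRINCIPAL},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

# Run with credentials in account Y (the bucket owner).
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

If the data is SSE-KMS encrypted, the account X principal additionally needs `kms:Decrypt` on the key in account Y.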
0
answers
0
votes
3
views
asked 3 hours ago

Insufficient privileges for accessing data in S3 when running a lambda function to create a Personalize dataset import job

I am trying to create a Lambda function to automate the creation of a dataset import job in Personalize. I followed this guide: https://docs.aws.amazon.com/personalize/latest/dg/granting-personalize-s3-access.html#attaching-s3-policy-to-role and kept getting the same error saying "Insufficient privileges for accessing data in S3". Here are the steps I took:

1. Add AmazonPersonalizeFullAccess to my IAM user
2. Create a personalizeLambda role with 4 policies:
   - AmazonS3FullAccess
   - CloudWatchLogsFullAccess
   - AmazonPersonalizeFullAccess
   - AWSLambdaBasicExecutionRole

This didn't work with the error above, so I added this policy, PersonalizeS3BucketAccessPolicyCustom:

```
{
  "Version": "2012-10-17"
  "Id": "PersonalizeS3BucketAccessPolicyCustom",
  "Statement": [
    {
      "Sid": "PersonalizeS3BucketAccessPolicy",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": [
        "arn:aws:lambda:<region>:<id>:function:create-personalize-model*",
        "arn:aws:lambda:<region>:<id>:function:create-personalize-dataset-import-job"
      ]
    }
  ]
}
```

3. Create a bucket policy in the S3 bucket that has the dataset files:

```
{
  "Version": "2012-10-17",
  "Id": "PersonalizeS3BucketAccessPolicy",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<id>:role/personalizeLambda",
        "Service": "personalize.amazonaws.com"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::jfna-personalize"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<id>:role/personalizeLambda",
        "Service": "personalize.amazonaws.com"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::jfna-personalize/*"
    }
  ]
}
```

I still get the same error no matter how many times I've followed the guide. I would really appreciate it if someone could help figure out what I'm missing or did wrong.
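One thing worth separating here: the "Insufficient privileges" check is made against the role passed to the import job in `roleArn`, which Amazon Personalize itself assumes, not against the Lambda's execution role. That role needs S3 read access and a trust policy whose principal is `personalize.amazonaws.com` (an `sts:AssumeRole` statement with S3 ARNs as the resource has no effect). A rough sketch of the call, with placeholder ARNs:

```python
import boto3

personalize = boto3.client("personalize")

# Placeholder ARNs for illustration; the role named here must have an S3 read
# policy and a trust policy whose Principal is the service
# "personalize.amazonaws.com", otherwise Personalize cannot assume it.
PERSONALIZE_SERVICE_ROLE = "arn:aws:iam::123456789012:role/PersonalizeS3AccessRole"

response = personalize.create_dataset_import_job(
    jobName="example-import-job",
    datasetArn="arn:aws:personalize:us-east-1:123456789012:dataset/example/INTERACTIONS",
    dataSource={"dataLocation": "s3://jfna-personalize/interactions.csv"},
    roleArn=PERSONALIZE_SERVICE_ROLE,  # assumed by Personalize, not by the Lambda
)
print(response["datasetImportJobArn"])
```

If the bucket is encrypted with SSE-KMS, that same role also needs access to the KMS key.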
2
answers
1
votes
17
views
asked a day ago

I need to attach an IAM role to my EC2 instance.

I get the error `PentestEnvironment-Deployment-Role/octopus is not authorized to perform: iam:PassRole on resource`. I have a CloudFormation template that creates the EC2 instance and IAM role for my environment, and I create all of this from a non-root account. The main part of that account's IAM policy is:

```
{
  "Sid": "IAM1",
  "Effect": "Allow",
  "Action": ["iam:PassRole"],
  "Resource": ["arn:aws:iam::*:role/Pentest-EC2-Role"],
  "Condition": {
    "StringEquals": {"iam:PassedToService": "ec2.amazonaws.com"},
    "StringLike": {
      "iam:AssociatedResourceARN": ["arn:aws:ec2:us-west-2:*:instance/*"]
    }
  }
},
{
  "Sid": "IAM2",
  "Effect": "Allow",
  "Action": [
    "iam:GetRole", "iam:CreateRole", "iam:DeleteRole",
    "iam:DetachRolePolicy", "iam:AttachRolePolicy",
    "iam:PutRolePolicy", "iam:GetRolePolicy"
  ],
  "Resource": ["arn:aws:iam::*:role/Pentest-EC2-Role"]
},
{
  "Sid": "IAM3",
  "Effect": "Allow",
  "Action": ["iam:ListRoles"],
  "Resource": ["*"]
},
{
  "Sid": "IAM4",
  "Effect": "Allow",
  "Action": [
    "iam:GetPolicy", "iam:CreatePolicy", "iam:ListPolicyVersions",
    "iam:CreatePolicyVersion", "iam:DeletePolicy", "iam:DeletePolicyVersion"
  ],
  "Resource": ["arn:aws:iam::*:policy/Pentest-AWS-resources-Access"]
},
{
  "Sid": "IAM5",
  "Effect": "Allow",
  "Action": [
    "iam:CreateInstanceProfile", "iam:DeleteInstanceProfile",
    "iam:RemoveRoleFromInstanceProfile", "iam:AddRoleToInstanceProfile"
  ],
  "Resource": "arn:aws:iam::*:instance-profile/Pentest-Instance-Profile"
},
{
  "Sid": "EC2InstanceProfile",
  "Effect": "Allow",
  "Action": [
    "ec2:DisassociateIamInstanceProfile",
    "ec2:AssociateIamInstanceProfile",
    "ec2:ReplaceIamInstanceProfileAssociation"
  ],
  "Resource": "arn:aws:ec2:*:*:instance/*"
}
]
}
```

Why do I get this error?
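The full error text normally includes the exact role ARN being passed; if the role created by the CloudFormation template is not literally named `Pentest-EC2-Role`, or the `iam:PassedToService`/`iam:AssociatedResourceARN` context doesn't match the request, the IAM1 statement never applies. One way to check is the IAM policy simulator; a sketch with placeholder account ID and ARNs:

```python
import boto3

iam = boto3.client("iam")

# Placeholder ARNs; substitute the deploying principal and the role the
# CloudFormation stack actually tries to pass to EC2.
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/PentestEnvironment-Deployment-Role",
    ActionNames=["iam:PassRole"],
    ResourceArns=["arn:aws:iam::123456789012:role/Pentest-EC2-Role"],
    ContextEntries=[
        {
            "ContextKeyName": "iam:PassedToService",
            "ContextKeyValues": ["ec2.amazonaws.com"],
            "ContextKeyType": "string",
        }
    ],
)
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalResourceName"], result["EvalDecision"])
```

If the simulated decision is `allowed` for the ARN above but the deployment still fails, comparing it against the role ARN quoted in the real error message usually shows the mismatch.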
1
answers
0
votes
26
views
asked 2 days ago

IAM policy for a user to access Enhanced Monitoring for RDS.

I am trying to create an IAM user that will have least privileges to be able to view enhanced monitoring for a particular RDS database. I have created a ROLE (Enhanced Monitoring) and attached a managed policy to it:'AmazonRDSEnhancedMonitoringRole'. This role is passed to RDS database using the passrole permission. The policy that I am attaching to this IAM user is as below: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "cloudwatch:PutMetricData", "rds:*", "cloudwatch:GetMetricData", "iam:ListRoles", "cloudwatch:GetMetricStatistics", "cloudwatch:DeleteAnomalyDetector", "cloudwatch:ListMetrics", "cloudwatch:DescribeAnomalyDetectors", "cloudwatch:ListMetricStreams", "cloudwatch:DescribeAlarmsForMetric", "cloudwatch:ListDashboards", "ec2:*", "cloudwatch:PutAnomalyDetector", "cloudwatch:GetMetricWidgetImage" ], "Resource": "*" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:PassRole", "cloudwatch:*" ], "Resource": [ "arn:aws:cloudwatch:*:accountnumber:insight-rule/*", "arn:aws:iam::accountnumber:role/Enhanced-Monitoring", "arn:aws:rds:us-east-1:accountnumber:db:dbidentifier" ] } ] } ``` As you can see, I have given almost every permission to this user, but still I am getting 'Not Authorized' error on the IAM user RDS dashboard for enhanced monitoring, although cloudwatch logs are displaying normally. I am following this guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) for enhanced monitoring of RDS. Refer to example 2 on this page.
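One detail that often causes "Not Authorized" on the Enhanced Monitoring pane specifically: the OS metrics are delivered to the `RDSOSMetrics` log group in CloudWatch Logs, and the console reads them from there, so the viewing user also needs CloudWatch Logs read permissions on that log group. A sketch of such an inline policy; the user name and account ID are placeholders, not taken from the question:

```python
import json
import boto3

iam = boto3.client("iam")

# Read-only access to the log group Enhanced Monitoring writes into.
logs_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadEnhancedMonitoringLogs",
            "Effect": "Allow",
            "Action": ["logs:DescribeLogStreams", "logs:GetLogEvents"],
            "Resource": "arn:aws:logs:*:111122223333:log-group:RDSOSMetrics:*",
        }
    ],
}

iam.put_user_policy(
    UserName="rds-monitoring-viewer",        # hypothetical user name
    PolicyName="ReadRDSOSMetrics",
    PolicyDocument=json.dumps(logs_policy),
)
```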
1
answers
0
votes
30
views
asked 14 days ago

No further functionality after "eventType": "INITIATED" message while implementing Amazon Connect high-volume outbound communications

I have just created a campaign in Connect with a contact flow, then the IAM policies, EventBridge, and Pinpoint pieces, including creation of segments & journeys, and in return I got the first event as "eventType": "INITIATED" with type "VOICE". But then it gets stuck and nothing happens; it should dial a number using the outbound queue as mentioned in the [Documentation](https://aws.amazon.com/blogs/contact-center/make-predictive-and-progressive-calls-using-amazon-connect-high-volume-outbound-communications/#:~:text=Under%20%E2%80%9COutbound%20call%20configuration%E2%80%9D%2C%20choose%20any%20phone%20number%20for,button%20at%20the%20top%2Dright.). The event I receive is:

```
{
    "version": "0",
    "id": "35af9eb2-5dda-fafc-48ce-78f223478a85",
    "detail-type": "Amazon Connect Contact Event",
    "source": "aws.connect",
    "account": "XXX92XXX3XXX",
    "time": "2022-05-31T08:21:52Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:connect:us-east-1:XXX92XXX3XXX:instance/8XXXXXX9-1XXa-4XXf-bXXf-3XXXXXXXXX4",
        "arn:aws:connect:us-east-1:XXX92XXX3XXX:instance/8XXXXXX9-1XXa-4XXf-bXXf-3XXXXXXXXX4/contact/7b552ed3-b276-42ea-9837-31f8622f4fde"
    ],
    "detail": {
        "initiationTimestamp": "2022-05-31T08:21:52.769Z",
        "contactId": "7b552ed3-b276-42ea-9837-31f8622f4fde",
        "channel": "VOICE",
        "instanceArn": "arn:aws:connect:us-east-1:XXX92XXX3XXX:instance/8XXXXXX9-1XXa-4XXf-bXXf-3XXXXXXXXX4",
        "initiationMethod": "API",
        "eventType": "INITIATED",
        "campaign": {
            "campaignId": "8b00b16f-b083-4a00-ae86-58332f524b2b"
        }
    }
}
```

In the end, after the time window ends, it closes the journey with the message "Message Not Sent". It should dial the numbers added through the segment and then return the events, but somehow it isn't working. Also, what format do we have to use for the phone numbers in the CSV segment file? For example, when we add an E.164 phone number in the CSV file and save it, Excel throws an alert: `"some features in your workbook might be lost if you save it as csv UTF-8 (comma delimited)"`. Maybe it's changing the format.
1
answers
0
votes
30
views
asked a month ago

Athena Error: Permission Denied on S3 Path.

I am trying to execute athena queries from a lambda function but I am getting this error: `Athena Query Failed to run with Error Message: Permission denied on S3 path: s3://bkt_logs/apis/2020/12/16/14` The bucket `bkt_logs` is the bucket which is used by AWS Glue Crawlers to crawl through all the sub-folders and populate Athena table on which I am querying on. Also, `bkt_logs` is an encrypted bucket. These are the policies that I have assigned to the Lambda. ``` [ { "Action": [ "s3:Get*", "s3:List*", "s3:PutObject", "s3:DeleteObject" ], "Resource": "arn:aws:s3:::athena-query-results/*", "Effect": "Allow", "Sid": "AllowS3AccessToSaveAndReadQueryResults" }, { "Action": [ "s3:*" ], "Resource": "arn:aws:s3:::bkt_logs/*", "Effect": "Allow", "Sid": "AllowS3AccessForGlueToReadLogs" }, { "Action": [ "athena:GetQueryExecution", "athena:StartQueryExecution", "athena:StopQueryExecution", "athena:GetWorkGroup", "athena:GetDatabase", "athena:BatchGetQueryExecution", "athena:GetQueryResults", "athena:GetQueryResultsStream", "athena:GetTableMetadata" ], "Resource": [ "*" ], "Effect": "Allow", "Sid": "AllowAthenaAccess" }, { "Action": [ "glue:GetTable", "glue:GetDatabase", "glue:GetPartitions" ], "Resource": [ "*" ], "Effect": "Allow", "Sid": "AllowGlueAccess" }, { "Action": [ "kms:CreateGrant", "kms:DescribeKey", "kms:Decrypt" ], "Resource": [ "*" ], "Effect": "Allow", "Sid": "AllowKMSAccess" } ] ``` What seems to be wrong here? What should I do to resolve this issue?
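Worth checking: both S3 statements above grant only object-level ARNs (`arn:aws:s3:::bucket/*`), but `s3:ListBucket` and `s3:GetBucketLocation` are evaluated against the bucket ARN itself, and Athena needs them on both the data bucket and the results bucket. A sketch of the missing statement attached to the Lambda's execution role; the role and policy names are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

bucket_level_statement = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketLevelActions",
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            # Bucket ARNs without the trailing /*; object-level grants do not
            # cover these actions.
            "Resource": [
                "arn:aws:s3:::bkt_logs",
                "arn:aws:s3:::athena-query-results",
            ],
        }
    ],
}

iam.put_role_policy(
    RoleName="my-athena-lambda-role",        # placeholder role name
    PolicyName="AthenaBucketLevelAccess",
    PolicyDocument=json.dumps(bucket_level_statement),
)
```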
1
answers
0
votes
161
views
asked 2 months ago

Unable to override taskRoleArn when running ECS task from Lambda

I have a Lambda function that is supposed to pass its own permissions to the code running in an ECS task. It looks like this: ``` ecs_parameters = { "cluster": ..., "launchType": "FARGATE", "networkConfiguration": ..., "overrides": { "taskRoleArn": boto3.client("sts").get_caller_identity().get("Arn"), ... }, "platformVersion": "LATEST", "taskDefinition": f"my-task-definition-{STAGE}", } response = ecs.run_task(**ecs_parameters) ``` When I run this in Lambda, i get this error: ``` "errorMessage": "An error occurred (ClientException) when calling the RunTask operation: ECS was unable to assume the role 'arn:aws:sts::787364832896:assumed-role/my-lambda-role...' that was provided for this task. Please verify that the role being passed has the proper trust relationship and permissions and that your IAM user has permissions to pass this role." ``` If I change the task definition in ECS to use `my-lambda-role` as the task role, it works. It's specifically when I try to override the task role from Lambda that it breaks. The Lambda role has the `AWSLambdaBasicExecutionRole` policy and also an inline policy that grants it `ecs:runTask` and `iam:PassRole`. It has a trust relationship that looks like: ``` "Effect": "Allow", "Principal": { "Service": [ "ecs.amazonaws.com", "lambda.amazonaws.com", "ecs-tasks.amazonaws.com" ] }, "Action": "sts:AssumeRole" ``` The task definition has a policy that grants it `sts:AssumeRole` and `iam:PassRole`, and a trust relationship that looks like: ``` "Effect": "Allow", "Principal": { "Service": "ecs-tasks.amazonaws.com", "AWS": "arn:aws:iam::account-ID:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS" }, "Action": "sts:AssumeRole" ``` How do I allow the Lambda function to pass the role to ECS, and ECS to assume the role it's been given? P.S. - I know a lot of these permissions are overkill, so let me know if there are any I can get rid of :) Thanks!
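Part of the problem is likely the value being passed: inside Lambda, `get_caller_identity()["Arn"]` returns an `arn:aws:sts::...:assumed-role/...` ARN, but `taskRoleArn` must be an `arn:aws:iam::...:role/...` ARN whose trust policy allows `ecs-tasks.amazonaws.com` (the `ecs.amazonaws.com` entry is not what the task-role check uses), and the Lambda role needs `iam:PassRole` on that exact role ARN. A sketch with placeholder names:

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder: the underlying IAM role ARN, not the STS assumed-role ARN the
# Lambda sees at runtime. Its trust policy must allow ecs-tasks.amazonaws.com,
# and the Lambda's execution role needs iam:PassRole on it.
TASK_ROLE_ARN = "arn:aws:iam::123456789012:role/my-lambda-role"

response = ecs.run_task(
    cluster="my-cluster",                         # placeholder
    launchType="FARGATE",
    taskDefinition="my-task-definition-dev",      # placeholder
    networkConfiguration={
        "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}
    },
    overrides={"taskRoleArn": TASK_ROLE_ARN},
)
```

On the trimming question: the role ECS assumes only needs the permissions the task itself uses plus a trust policy for `ecs-tasks.amazonaws.com`; it does not need `sts:AssumeRole` or `iam:PassRole` of its own.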
2
answers
1
votes
79
views
asked 2 months ago

Cannot access Secrets Manager from Lightsail

I have a Lightsail instance with a very small Python script for testing. The script looks like: ``` import boto3 import json region_name = "us-east-1" secret_name = "arn:aws:secretsmanager:us-east-1:XXXXXX:XXXX" client = boto3.client(service_name='secretsmanager',region_name=region_name) response = client.get_secret_value(SecretId=secret_name) secrets1 = json.loads(response['SecretString']) print(secrets1['Password']) ``` When I run the above code, I get the following error: ``` An error occurred (AccessDeniedException) when calling the GetSecretValue operation: User: arn:aws:sts::XXXXXXXX:assumed-role/AmazonLightsailInstanceRole/XXXXXXX is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:us-east-1:XXXXXXXX:secret:XXXXXX because no resource-based policy allows the secretsmanager:GetSecretValue action ``` I have tried: * creating a Lightsail role in IAM with "SecretsManagerReadWrite" policy attached. One problem with this approach is that I didn't see a Lightsail option when selecting an AWS Service, so I selected ec2. * running the code as root user * creating another IAM user with proper permissions (full access to Lightsail and SecretsManagerReadWrite) * scouring several forums looking for answers. I find some cases that are similar to mine, but haven't found a solution I can use fully (although I have used bits and pieces with no luck). None of the above worked (although I can't guarantee I put all the pieces together correctly). So my question is: How can I access a secret in my Secrets Manager service and use it in my Python code in Lightsail? This is all done within a single AWS account. I am very new to the AWS framework and am admittedly confused by the IAM roles and users and how I provision permission for a Lightsail instance to access Secrets Manager. Thanks for any help.
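Lightsail instances don't take custom EC2 instance profiles, so a role created in IAM never reaches the instance the way it would on EC2; a common workaround is a dedicated IAM user with a policy scoped to just this secret, whose access keys are configured on the instance as a named profile. A sketch; the user name, profile name, and secret ARN are placeholders:

```python
import json
import boto3

# Placeholder secret ARN.
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-app-secret-AbCdEf"

# 1) Create a tightly scoped policy for a dedicated IAM user (run from an admin
#    workstation, not the Lightsail instance).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": SECRET_ARN,
        }
    ],
}
boto3.client("iam").put_user_policy(
    UserName="lightsail-secrets-reader",      # hypothetical user
    PolicyName="ReadSingleSecret",
    PolicyDocument=json.dumps(policy),
)

# 2) On the instance, configure that user's keys as a profile
#    (e.g. `aws configure --profile lightsail-secrets`) and use it explicitly.
session = boto3.Session(profile_name="lightsail-secrets")
secret = session.client("secretsmanager", region_name="us-east-1").get_secret_value(
    SecretId=SECRET_ARN
)
print(json.loads(secret["SecretString"])["Password"])
```

An alternative that avoids long-lived keys is adding a resource policy on the secret that allows the `AmazonLightsailInstanceRole` principal shown in the error, if that fits your security model.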
1
answers
0
votes
58
views
asked 2 months ago

S3 Static Website Objects 403 Forbidden when Uploaded from Different Account

### Quick Summary: If objects are put into a bucket owned by "Account A" from a different account ("Account B"), you cannot access files via S3 static website (http) from "Account A" (bucket owner). This is true regardless of the bucket policy granting GetObject on all objects, and regardless of if bucket-owner-full-control ACL is enabled on the object. - If trying to download a file from Account A via S3 API (console/cli), it works fine. - If trying to download a file from Account A via S3 static website (http), S3 responds HTTP 403 Forbidden if the file was uploaded by Account B. Files uploaded by Account A download fine. - Disabling Object ACL's fixes the problem but is not feasible (explained below) ### OVERVIEW I have a unique setup where I need to publish files to an S3 bucket from an account that does not own the bucket. The upload actions work fine. My problem is that I cannot access files from the bucket-owner account over the S3 static website *if the files were published from another account* (403 Forbidden response). **The problem only exists if the files were pushed to S3 FROM a different account.** Because the issue is only for those files, the problem seems like it would be in the Object Ownership ACL configuration. I've confirmed I can access other files (that weren't uploaded by the other acct) in the bucket through the S3 static website endpoint, so I know my bucket policy and VPC endpoint config is correct. If I completely disable Object ACL's completely **it works fine**, however I cannot do that because of two issues: - Ansible does not support publishing files to buckets with ACL's disabled. (Disabling ACL is a relatively new S3 feature and Ansible doesn't support it) - The primary utility I'm using to publish files (Aptly) also doesn't support publishing to buckets with ACL's disabled. (Disabling ACL is a relatively new S3 feature and Aptly doesn't support it) Because of these above constraints, I must use Object ACL's enabled on the bucket. I've tried both settings "Object Writer" and "Bucket owner preferred", neither are working. All files are uploaded with the `bucket-owner-full-control` object ACL. SCREENSHOT: https://i.stack.imgur.com/G1FxK.png As mentioned, disabling ACL fixes everything, but since my client tools (Ansible and Aptly) cannot upload to S3 without an ACL set, ACL's must remain enabled. SCREENSHOT: https://i.stack.imgur.com/NcKOd.png ### ENVIRONMENT EXPLAINED: - Bucket `test-bucket-a` is in "Account A", it's not a "private" bucket but it does not allow public access. Access is granted via policies (snippet below). - Bucket objects (files) are pushed to `test-bucket-a` from an "Account B" role. - Access from "Account B" to put files into the bucket is granted via policies (not shown here). Files upload without issue. - Objects are given the `bucket-owner-full-control` ACL when uploading. - I have verified that the ACL's look correct and both "Account A" and "Account B" have object access. (screenshot at bottom of question) - I am trying to access the files from the bucket-owner account (Account A) over the S3 static website access (over http). I can access files that were not uploaded by "Account B" but files uploaded by "Account B" return 403 Forbidden I am using VPC Endpoint to access (files cannot be public facing), and this is added to the bucket policy. All the needed routes and endpoint config are in-place. I know my policy config is good because everything works perfectly for files uploaded within the same account or if I disable object ACL. 
``` { "Sid": "AllowGetThroughVPCEndpoint", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::test-bucket-a/*", "Condition": { "StringEquals": { "aws:sourceVpce": "vpce-0bfb94<scrubbed>" } } }, ``` **Here is an example of how this file is uploaded using Ansible:** Reminder: the role doing the uploading is NOT part of the bucket owner account. ``` - name: "publish gpg pubkey to s3 from Account B" aws_s3: bucket: "test-bucket-a" object: "/files/pubkey.gpg" src: "/home/file/pubkey.gpg" mode: "put" permission: "bucket-owner-full-control" ``` **Some key troubleshooting notes:** - From "Account A" when logged into the console, **I can download the file.** This is very strange and shows that API requests to GetObject are working. Does the S3 website config follow some different rule structure?? - From "Account A" when accessing the file from an HTTP endpoint (S3 website) it returns **HTTP 403 Forbidden** - I have tried deleting and re-uploading the file multiple times. - I have tried manually setting object ACL via the aws cli (ex: `aws s3api put-object-acl --acl bucket-owner-full-control ...`) - When viewing the "object" ACL, I have confirmed that both "Account A" and "Account B" have access. See below screenshot. Note that it confirms the object owner is an external account. SCREENSHOT: https://i.stack.imgur.com/TCYvv.png
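For what it's worth, the website endpoint serves objects anonymously, and with ACLs enabled a bucket policy only covers objects the bucket owner actually owns, which would explain why only Account B's uploads return 403 even with `bucket-owner-full-control`. One workaround that keeps ACLs enabled is having Account A copy the affected objects onto themselves so ownership transfers to the bucket owner; a sketch, using the bucket name from the question and a placeholder key:

```python
import boto3

# Run with Account A (bucket owner) credentials.
s3 = boto3.client("s3")

BUCKET = "test-bucket-a"
KEY = "files/pubkey.gpg"   # placeholder key

# Copying an object onto itself as the bucket owner rewrites it under the
# bucket owner's account, after which the bucket policy applies to it.
s3.copy_object(
    Bucket=BUCKET,
    Key=KEY,
    CopySource={"Bucket": BUCKET, "Key": KEY},
    MetadataDirective="REPLACE",  # required when source and destination match
)
```

This has to be repeated (or automated, e.g. with an event-driven Lambda) for each new upload from Account B while ACLs remain enabled.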
0
answers
0
votes
7
views
asked 2 months ago

IAM Policy To Create Domain in OpenSearch

I am trying to create a domain in OpenSearch. I used the IAM permissions below, but every time I get this error:

> Before you can proceed, you must enable a service-linked role to give Amazon OpenSearch Service permissions to create and manage resources on your behalf

I have also attached the service-linked role, but I am still facing the issue. I am using this IAM policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpDelete", "es:ESHttpGet", "es:ESHttpHead", "es:ESHttpPost",
        "es:ESHttpPut", "es:ESHttpPatch",
        "ec2:AuthorizeSecurityGroupIngress", "ec2:CreateNetworkInterface",
        "ec2:CreateSecurityGroup", "ec2:DeleteNetworkInterface",
        "ec2:DeleteSecurityGroup", "ec2:DescribeAvailabilityZones",
        "ec2:DescribeNetworkInterfaces", "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets", "ec2:DescribeVpcs",
        "ec2:ModifyNetworkInterfaceAttribute", "ec2:RevokeSecurityGroupIngress",
        "elasticloadbalancing:AddListenerCertificates",
        "elasticloadbalancing:RemoveListenerCertificates"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "es:AddTags", "es:AssociatePackage", "es:CreateDomain",
        "es:CreateOutboundConnection", "es:DeleteDomain", "es:DescribeDomain",
        "es:DescribeDomainAutoTunes", "es:DescribeDomainConfig",
        "es:DescribeDomains", "es:DissociatePackage", "es:ESCrossClusterGet",
        "es:GetCompatibleVersions", "es:GetUpgradeHistory", "es:GetUpgradeStatus",
        "es:ListPackagesForDomain", "es:ListTags", "es:RemoveTags",
        "es:StartServiceSoftwareUpdate", "es:UpdateDomainConfig",
        "es:UpdateNotificationStatus", "es:UpgradeDomain"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "es:AcceptInboundConnection", "es:CancelServiceSoftwareUpdate",
        "es:CreatePackage", "es:CreateServiceRole", "es:DeletePackage",
        "es:DescribeInboundConnections", "es:DescribeInstanceTypeLimits",
        "es:DescribeOutboundConnections", "es:DescribePackages",
        "es:DescribeReservedInstanceOfferings", "es:DescribeReservedInstances",
        "es:GetPackageVersionHistory", "es:ListDomainNames",
        "es:ListDomainsForPackage", "es:ListInstanceTypeDetails",
        "es:ListInstanceTypes", "es:ListNotifications", "es:ListVersions",
        "es:PurchaseReservedInstanceOffering", "es:RejectInboundConnection",
        "es:UpdatePackage"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowCreationOfServiceLinkedRoleForOpenSearch",
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "iam:PassRole"
      ],
      "Resource": [
        "arn:aws:iam::*:role/aws-service-role/opensearchservice.amazonaws.com/AWSServiceRoleForAmazonOpenSearchService*",
        "arn:aws:iam::*:role/aws-service-role/es.amazonaws.com/AWSServiceRoleForAmazonOpenSearchService*"
      ],
      "Condition": {
        "StringLike": {
          "iam:AWSServiceName": [
            "opensearchservice.amazonaws.com",
            "es.amazonaws.com"
          ]
        }
      }
    }
  ]
}
```
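The banner refers to the account-level service-linked role `AWSServiceRoleForAmazonOpenSearchService`, which must exist before a domain (particularly a VPC domain) can be created; it is created once per account rather than attached to a user. Since the policy above already allows `iam:CreateServiceLinkedRole`, a sketch of creating it explicitly:

```python
import boto3

iam = boto3.client("iam")

# One-time creation of the service-linked role the OpenSearch console asks for.
try:
    iam.create_service_linked_role(AWSServiceName="opensearchservice.amazonaws.com")
except iam.exceptions.InvalidInputException:
    # Raised when the role has already been created in this account.
    pass
```

After it exists, `aws iam get-role --role-name AWSServiceRoleForAmazonOpenSearchService` should succeed and the console check should pass.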
0
answers
1
votes
22
views
asked 2 months ago

Role chaining problem

Hi, I'm trying to achieve "role chaining" as in https://aws.plainenglish.io/aws-iam-role-chaining-df41b1101068. I have a user `admin-user-01` with this policy assigned:

```
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::<accountid>:role/admin_group_role"
  }
}
```

I have a role intended for `admin-user-01`, with `role_name = admin_group_role` and this trust policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<accountid>:user/admin-user-01"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

It also has this policy:

```
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::<accountid>:role/test-role"
  }
}
```

Then I have another role, meant to be assumed by the role above (`admin_group_role`), with `role_name = test-role` and this trust policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<accountid>:role/admin_group_role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

But when I log in as `admin-user-01`, switch to the role `admin_group_role`, and then try to switch to the role `test-role`, I get: `Invalid information in one or more fields. Check your information or contact your administrator.`

P.S. `<accountid>` is the same everywhere; all of the roles, users, and permissions are created in the same account (which, I suppose, might be the reason I see the error). What am I doing wrong?
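Since everything is in one account, it can help to verify the chain outside the console with STS, which is how role chaining is normally exercised (note that sessions obtained from chained role credentials are capped at one hour). A sketch using a placeholder account ID:

```python
import boto3

ACCOUNT_ID = "111122223333"  # placeholder

sts = boto3.client("sts")  # credentials of admin-user-01

# Hop 1: user -> admin_group_role
hop1 = sts.assume_role(
    RoleArn=f"arn:aws:iam::{ACCOUNT_ID}:role/admin_group_role",
    RoleSessionName="hop1",
)["Credentials"]

# Hop 2: admin_group_role -> test-role, using the hop-1 temporary credentials
sts_hop1 = boto3.client(
    "sts",
    aws_access_key_id=hop1["AccessKeyId"],
    aws_secret_access_key=hop1["SecretAccessKey"],
    aws_session_token=hop1["SessionToken"],
)
hop2 = sts_hop1.assume_role(
    RoleArn=f"arn:aws:iam::{ACCOUNT_ID}:role/test-role",
    RoleSessionName="hop2",
)["Credentials"]

print("chained session expires at", hop2["Expiration"])
```

If both hops succeed here, the policies and trust relationships are fine and the failure is specific to the console's switch-role flow rather than to IAM.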
2
answers
0
votes
25
views
asked 2 months ago

ec2tagger: Unable to describe ec2 tags for initial retrieval: AuthFailure: AWS was not able to validate the provided access credentials / cloudwatch log agent, vpc endpoints

I got the error "ec2tagger: Unable to describe ec2 tags for initial retrieval: AuthFailure: AWS was not able to validate the provided access credentials" from the CloudWatch agent on an EC2 instance that has:

1. CloudWatchAgentServerRole -- this default AWS managed role is attached to the instance, and it already allows "ec2:DescribeTags" in its policy. <---- NOTE this
2. A NACL that allows all outbound traffic and allows the whole VPC CIDR range inbound
3. The correct region in the CloudWatch agent config file
4. Successful connections from the instance for `telnet ec2.us-east-2.amazonaws.com 443`, `telnet monitoring.us-east-2.amazonaws.com 443`, and `telnet logs.us-east-2.amazonaws.com 443` (Connected <..> Escape character is '^]')

I also created three VPC interface endpoints: logs (com.amazonaws.us-east-2.logs), monitoring (com.amazonaws.us-east-2.monitoring), and ec2 (com.amazonaws.us-east-2.ec2). They have a security group that allows the whole VPC CIDR range inbound. The idea is to publish metrics to CloudWatch via the VPC endpoints. Despite all of the above, I can't get the CloudWatch agent to work; it keeps emitting the error above complaining that the credentials are not valid, even though the region in the config file is correct and traffic between the instance and CloudWatch is allowed.
1
answers
0
votes
270
views
asked 3 months ago

Restrict IoT publish topic policy

I'm using flutter/dart (mqtt_client / https://pub.dev/packages/mqtt_client) to send AWS IoT MQTT messages over websockets, and I'd like to restrict an IAM user so that it can publish only to its own specific topics. I've attempted to add some restrictive policies, but the application fails with little information on the client side, and in CloudWatch I don't see any specific errors. Here are some example topics:

`arn:aws:iot:us-east-2:1234567890:topic/action_request/ASDF1234`
`arn:aws:iot:us-east-2:1234567890:topic/action_request/ASDF5678`

I want to attach the proper JSON policy to the IAM user so that it only has access to ASDF1234. All of my publish topics are patterned like the above. For now I'm focusing on restricting the Publish endpoints and will work on others like Subscribe later. I've tried numerous policies like the one below, also with wildcards, with no success on the client side. They look right, but I'm not sure whether there are other publish topics used internally by MQTT that cause the failures, or whether it's just my syntax. Another thought is to add a condition that would allow only the above endpoint and no others: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "iot:Receive",
        "iot:ListNamedShadowsForThing",
        "iot:Subscribe",
        "iot:Connect",
        "iot:GetThingShadow",
        "iot:DeleteThingShadow",
        "iot:UpdateThingShadow"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-east-2:1234567890:topic/*/ASDF1234*"
    }
  ]
}
```
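For reference, a sketch of the per-device scoping pattern with the ARN type each IoT action expects: `iot:Connect` against `client/...`, `iot:Publish`/`iot:Receive` against `topic/...`, and `iot:Subscribe` against `topicfilter/...` (a Subscribe statement written against `topic/...` is silently ineffective, which can look like an unexplained client-side failure). The region, account ID, and device ID below come from the example topics; everything else is illustrative:

```python
import json

DEVICE_ID = "ASDF1234"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iot:Connect",
            # Can be narrowed further to the specific client ID the app uses.
            "Resource": "arn:aws:iot:us-east-2:1234567890:client/*",
        },
        {
            "Effect": "Allow",
            "Action": ["iot:Publish", "iot:Receive"],
            "Resource": f"arn:aws:iot:us-east-2:1234567890:topic/action_request/{DEVICE_ID}",
        },
        {
            "Effect": "Allow",
            "Action": "iot:Subscribe",
            "Resource": f"arn:aws:iot:us-east-2:1234567890:topicfilter/action_request/{DEVICE_ID}",
        },
    ],
}
print(json.dumps(policy, indent=2))
```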
1
answers
0
votes
16
views
asked 4 months ago

Enforce Tags SCP for DynamoDB is not working

Hi, I followed this official guide from AWS in order to implement a tagging strategy for resources in my AWS Organization: https://aws.amazon.com/de/blogs/mt/implement-aws-resource-tagging-strategy-using-aws-tag-policies-and-service-control-policies-scps/ The example is for EC2 instances; I followed all the steps and it worked. However, when I tried to replicate the steps for S3, RDS, and DynamoDB it did not work. The following is the SCP I want to use to enforce the tag *test* on every created DynamoDB table, exactly as the guide does for EC2:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Deny",
      "Action": [
        "dynamodb:CreateTable"
      ],
      "Resource": [
        "arn:aws:dynamodb:*:*:table/*"
      ],
      "Condition": {
        "Null": {
          "aws:RequestTag/test": "true"
        }
      }
    }
  ]
}
```

However, when I try to create a DynamoDB table with the tag *test*, I get the following error message. I am passing the tag, but I still get a deny:

```
User: arn:aws:sts::<account>:assumed-role/<role>/<email> is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:eu-central-1:<table>:<table> with an explicit deny.
```

I tried creating this SCP for the services RDS, S3, and DynamoDB; only EC2 seems to work. Do you have an idea what the error could be, or is anyone using this tagging strategy in their AWS Organization/AWS Control Tower? I would be interested to hear what your experience is, as this seems really complicated to implement and does not work so far. Looking forward to hearing from you :)
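One possibility worth ruling out: some consoles create the resource first and apply tags in a separate TagResource call, in which case `aws:RequestTag/test` is absent from the CreateTable request itself and the SCP denies it even though a tag was typed in. Creating the table with tags in the same API call avoids that; a sketch with placeholder table and key names:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Tags are part of the CreateTable request itself, so the SCP's
# aws:RequestTag/test condition key is populated at create time.
dynamodb.create_table(
    TableName="example-table",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    Tags=[{"Key": "test", "Value": "some-value"}],
)
```

If the CLI/SDK call above succeeds while the console still fails, the SCP itself is fine and the difference is how the tag reaches the CreateTable request.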
0
answers
0
votes
13
views
asked 4 months ago

How to dynamically update the policy of a user (Cognito identity) from the backend/Lambda?

I am building an IoT solution using the IoT Core. The end-user will be using Mobile App and will be authenticated and authorized using Cognito. I want to authorize users to allow iot:Publish and iot:Subscribe action only on the devices that the user owns. The IAM Role attached to the Cognito Identity pool has only iot:Connect permission when the user is created. The User won't have any additional permission at this point. ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iot:Connect" ], "Resource": "arn:aws:iot:us-east-1:1234567890:client/${cognito-identity.amazonaws.com:sub}" } ] } ``` Now, when the user finishes the device provisioning, I want to attach the inline Policy to Cognito identity of that user to authorize him to publish and subscribe to the shadow of that device. Let's assume the ThingName is Thing1 so the policy should be as below: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iot:Connect" ], "Resource": "arn:aws:iot:us-east-1:1234567890:client/${cognito-identity.amazonaws.com:sub}" }, { "Effect": "Allow", "Action": [ "iot:Publish", "iot:Subscribe" ], "Resource": "arn:aws:iot:region:account-id:topic/$aws/things/Thing1/shadow/*" } ] } ``` The user may keep adding new devices and I want to scale this policy to include resource ARNs of those devices. This is an example of IoT Core, but my question is very generic to IAM policies. (e.g. the same can be applied to dynamically allow access to the S3 bucket folders) So, here is my question: 1. What is the best approach for dynamically adding or removing the inline policy granted to the Cognito identity? 2. Can I use the STS service for updating/attaching the policy on my backend/Lambda when new Things are added or removed? Note: 1. I can use the Customer Managed Policy, but it is not the right approach for granting policies to federated users as per my knowledge. 2. I know I can use the intelligent naming of the device as mentioned in this approach. But, I have a very basic requirement. https://aws.amazon.com/blogs/iot/scaling-authorization-policies-with-aws-iot-core/
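For the IoT part specifically, an alternative to rewriting the identity pool role's inline policy is to attach and detach AWS IoT policies directly to the Cognito identity ID from the backend as devices are provisioned, which scales per user without touching IAM (and note that STS itself cannot modify policies; it only issues temporary credentials, optionally narrowed by a session policy). A sketch with placeholder names:

```python
import boto3

iot = boto3.client("iot")

# After the user provisions Thing1, the backend attaches an IoT policy scoped
# to that thing's shadow topics to the caller's Cognito identity ID.
# Policy name and identity ID are placeholders.
identity_id = "us-east-1:00000000-0000-0000-0000-000000000000"

iot.attach_policy(
    policyName="Thing1ShadowAccess",
    target=identity_id,
)

# When the device is removed, the backend detaches it again:
# iot.detach_policy(policyName="Thing1ShadowAccess", target=identity_id)
```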
0
answers
0
votes
10
views
asked 4 months ago

Accessing S3 across accounts: it works when logged in to the origin account but not when assuming a role from another account

When I log in directly to the origin account, I have access to the target account's S3 bucket:

```
[cloudshell-user@ip-10-0-91-7 ~]$ aws sts get-caller-identity
{
    "UserId": "AIDAxxxxxxxxJBLJ34",
    "Account": "178xxxxxx057",
    "Arn": "arn:aws:iam::178xxxxxx057:user/adminCustomer"
}
[cloudshell-user@ip-10-0-91-7 ~]$ aws s3 ls s3://target-account-bucket
2022-03-10 01:28:05        432 foobar.txx
```

However, after assuming a role in that same account, I can't access the target account's bucket:

```
[cloudshell-user@ip-10-1-12-136 ~]$ aws sts get-caller-identity
{
    "UserId": "AROAxxxxxxF5HI7BI:test",
    "Account": "178xxxxxx057",
    "Arn": "arn:aws:sts::178xxxxxx4057:assumed-role/ReadAnalysis/test"
}
[cloudshell-user@ip-10-1-12-136 ~]$ aws s3 ls s3://targer-account-bucket
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
```

I do still have access to buckets in the origin account:

```
[cloudshell-user@ip-10-1-12-136 ~]$ aws s3 ls s3://origin-account
2022-03-09 21:19:36        432 cli_script.txt
```

The policy on the target-account bucket is as follows:

```
{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::178xxxxxx057:root"
    },
    "Action": ["s3:*"],
    "Resource": [
        "arn:aws:s3:::targer-account-bucket/*",
        "arn:aws:s3:::targer-account-bucket"
    ]
},
```

There are no explicit Deny policies that may apply. Thank you for any advice you can provide.
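This is the expected behaviour for delegation via an account-root principal: the bucket policy grants the whole account 178xxxxxx057, but each identity in that account still needs its own identity policy allowing the S3 actions, which `adminCustomer` presumably has and the `ReadAnalysis` role does not. A sketch of such an inline policy for the role (names mirror the question but are illustrative):

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::targer-account-bucket",
                "arn:aws:s3:::targer-account-bucket/*",
            ],
        }
    ],
}

iam.put_role_policy(
    RoleName="ReadAnalysis",
    PolicyName="ReadTargetAccountBucket",
    PolicyDocument=json.dumps(policy),
)
```

Alternatively, the bucket policy could name the role ARN directly as the principal instead of the account root, which removes the need for the identity-side grant.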
1
answers
0
votes
16
views
asked 4 months ago

Error connecting to Aurora PostgreSQL DB in .NET Core Lambda function.

I'm attempting to create a Lambda where I can make calls to various stored procedures and functions in my Aurora PostgreSQL dB instance. I'm following the guide on this page: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.Connecting.NET.html Eventually I want to connect this with Dapper, but for now I'm just trying to get the code from the above example to work. I am using the npgsql package and can successfully retrieve the RDSAuthToken via the RDSAuthTokenGenerator.GenerateAuthToken() function using the appropriate region endpoint, cluster endpoint, port number, and db user. The problem comes when I use the AuthToken I retrieved earlier to create a connection to the server: using NpgsqlConnection connection = new NpgsqlConnection($"Server=Cluster Endpoint;User Id=dB User;Password=AuthToken;Database=dB Instance name"); I am now getting this error: "28000: pg_hba.conf rejects connection for host \"172.31.30.255\", user \"dB User\", database \"dB Instance Name\", SSL off I'm not sure what I need to do to get this to work. As far as I can tell, I've done everything exactly as I was supposed to according to the guide in the documentation. I also created a user role with the specific permission for rds-db:connect for my specific dB user and dB instance id. My only guess is that I have failed to connect that authorization in some way to the actual dB user. I assigned that permission to a role with the same name, and then I created a dB user with that name in the dB and then granted it the rds_iam role, but it's not clear to me that the IAM user and the dB user would be connected yet. And I haven't been able to find examples online for how to connect them. It would be great to get a little help with this one. Thanks! Edit: I realized that my issue might be with the SSL Certificate path that is required at the end of the connection string in the example I linked above. I will keep looking into this, but I'm wondering if this will work to use in a Lambda if I have to reference a path to a certificate that I install on my computer. Although, I might not be understanding how this works.
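The `SSL off` at the end of the pg_hba.conf error is the usual clue: IAM database authentication only works over SSL, so the connection must have TLS enabled (in Npgsql, an `SSL Mode=Require` style setting in the connection string). The same flow is sketched below in Python purely for illustration, with placeholder endpoint, port, and user, assuming `psycopg2-binary` is installed:

```python
import boto3
import psycopg2  # assumption: psycopg2-binary is available

# Placeholders for the cluster endpoint, port, and the DB user granted rds_iam.
HOST = "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"
PORT = 5432
USER = "iam_db_user"

# The generated token replaces the password; it is only accepted over SSL.
token = boto3.client("rds", region_name="us-east-1").generate_db_auth_token(
    DBHostname=HOST, Port=PORT, DBUsername=USER
)

conn = psycopg2.connect(
    host=HOST, port=PORT, user=USER, password=token,
    dbname="postgres", sslmode="require",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
```

On the certificate question: the RDS root CA bundle can be packaged with the Lambda deployment (it does not have to live on your workstation) if you want full certificate verification rather than just requiring SSL.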
1
answers
0
votes
36
views
asked 4 months ago