
Questions tagged with IAM Policies



Unable to override taskRoleArn when running ECS task from Lambda

I have a Lambda function that is supposed to pass its own permissions to the code running in an ECS task. It looks like this:

```
ecs_parameters = {
    "cluster": ...,
    "launchType": "FARGATE",
    "networkConfiguration": ...,
    "overrides": {
        "taskRoleArn": boto3.client("sts").get_caller_identity().get("Arn"),
        ...
    },
    "platformVersion": "LATEST",
    "taskDefinition": f"my-task-definition-{STAGE}",
}

response = ecs.run_task(**ecs_parameters)
```

When I run this in Lambda, I get this error:

```
"errorMessage": "An error occurred (ClientException) when calling the RunTask operation: ECS was unable to assume the role 'arn:aws:sts::787364832896:assumed-role/my-lambda-role...' that was provided for this task. Please verify that the role being passed has the proper trust relationship and permissions and that your IAM user has permissions to pass this role."
```

If I change the task definition in ECS to use `my-lambda-role` as the task role, it works. It breaks specifically when I try to override the task role from Lambda. The Lambda role has the `AWSLambdaBasicExecutionRole` policy and also an inline policy that grants it `ecs:runTask` and `iam:PassRole`. It has a trust relationship that looks like:

```
"Effect": "Allow",
"Principal": {
    "Service": [
        "ecs.amazonaws.com",
        "lambda.amazonaws.com",
        "ecs-tasks.amazonaws.com"
    ]
},
"Action": "sts:AssumeRole"
```

The task definition has a policy that grants it `sts:AssumeRole` and `iam:PassRole`, and a trust relationship that looks like:

```
"Effect": "Allow",
"Principal": {
    "Service": "ecs-tasks.amazonaws.com",
    "AWS": "arn:aws:iam::account-ID:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
},
"Action": "sts:AssumeRole"
```

How do I allow the Lambda function to pass the role to ECS, and ECS to assume the role it's been given? P.S. I know a lot of these permissions are overkill, so let me know if there are any I can get rid of :) Thanks!
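A side note on the error text: the override is being built from `get_caller_identity()`, which returns the temporary credentials' assumed-role ARN (`arn:aws:sts::…:assumed-role/…`), while `taskRoleArn` and the `iam:PassRole` grant both refer to the underlying IAM role ARN (`arn:aws:iam::…:role/…`). A minimal sketch of an inline policy for the Lambda role that scopes both permissions; the account ID, task definition ARN pattern, and role name here are placeholders, not values confirmed by the question:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ecs:RunTask",
            "Resource": "arn:aws:ecs:*:123456789012:task-definition/my-task-definition-*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/my-lambda-role",
            "Condition": {
                "StringEquals": { "iam:PassedToService": "ecs-tasks.amazonaws.com" }
            }
        }
    ]
}
```

The `iam:PassedToService` condition limits the pass to ECS tasks, which also speaks to the "overkill" concern at the end of the question.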
2
answers
1
votes
8
views
AWS-User-4882383
asked 5 days ago

Cannot access Secrets Manager from Lightsail

I have a Lightsail instance with a very small Python script for testing. The script looks like:

```
import boto3
import json

region_name = "us-east-1"
secret_name = "arn:aws:secretsmanager:us-east-1:XXXXXX:XXXX"

client = boto3.client(service_name='secretsmanager', region_name=region_name)
response = client.get_secret_value(SecretId=secret_name)
secrets1 = json.loads(response['SecretString'])
print(secrets1['Password'])
```

When I run the above code, I get the following error:

```
An error occurred (AccessDeniedException) when calling the GetSecretValue operation: User: arn:aws:sts::XXXXXXXX:assumed-role/AmazonLightsailInstanceRole/XXXXXXX is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:us-east-1:XXXXXXXX:secret:XXXXXX because no resource-based policy allows the secretsmanager:GetSecretValue action
```

I have tried:

* creating a Lightsail role in IAM with the "SecretsManagerReadWrite" policy attached. One problem with this approach is that I didn't see a Lightsail option when selecting an AWS service, so I selected EC2.
* running the code as the root user
* creating another IAM user with proper permissions (full access to Lightsail and SecretsManagerReadWrite)
* scouring several forums looking for answers. I found some cases that are similar to mine, but haven't found a solution I can use fully (although I have used bits and pieces with no luck).

None of the above worked (although I can't guarantee I put all the pieces together correctly). So my question is: how can I access a secret in Secrets Manager and use it in my Python code in Lightsail? This is all done within a single AWS account. I am very new to the AWS framework and am admittedly confused by IAM roles and users and how I provision permission for a Lightsail instance to access Secrets Manager. Thanks for any help.
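Within a single account, the identity that actually makes the call only needs an identity-based policy allowing `secretsmanager:GetSecretValue` on that secret; no resource policy on the secret is required. A minimal sketch, with a placeholder account ID and secret name rather than the scrubbed values from the question:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret"
            ],
            "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-secret-*"
        }
    ]
}
```

This would be attached to whichever IAM user or role the boto3 client ends up resolving to on the instance (for example, an IAM user whose access keys are configured there), since the error shows the call currently running as the default `AmazonLightsailInstanceRole`.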
1
answers
0
votes
3
views
AWS-User-3252598
asked 12 days ago

S3 Static Website Objects 403 Forbidden when Uploaded from Different Account

### Quick Summary:

If objects are put into a bucket owned by "Account A" from a different account ("Account B"), you cannot access those files via the S3 static website (HTTP) endpoint from "Account A" (the bucket owner). This is true regardless of the bucket policy granting GetObject on all objects, and regardless of whether the bucket-owner-full-control ACL is set on the object.

- Downloading a file from Account A via the S3 API (console/CLI) works fine.
- Downloading a file from Account A via the S3 static website (HTTP) returns HTTP 403 Forbidden if the file was uploaded by Account B. Files uploaded by Account A download fine.
- Disabling object ACLs fixes the problem but is not feasible (explained below).

### OVERVIEW

I have a unique setup where I need to publish files to an S3 bucket from an account that does not own the bucket. The upload actions work fine. My problem is that I cannot access files from the bucket-owner account over the S3 static website *if the files were published from another account* (403 Forbidden response).

**The problem only exists if the files were pushed to S3 FROM a different account.** Because the issue is only for those files, the problem seems like it would be in the Object Ownership ACL configuration. I've confirmed I can access other files (that weren't uploaded by the other account) in the bucket through the S3 static website endpoint, so I know my bucket policy and VPC endpoint config are correct.

If I disable object ACLs completely, **it works fine**; however, I cannot do that because of two issues:

- Ansible does not support publishing files to buckets with ACLs disabled. (Disabling ACLs is a relatively new S3 feature and Ansible doesn't support it.)
- The primary utility I'm using to publish files (Aptly) also doesn't support publishing to buckets with ACLs disabled, for the same reason.

Because of these constraints, I must keep object ACLs enabled on the bucket. I've tried both settings, "Object writer" and "Bucket owner preferred"; neither is working. All files are uploaded with the `bucket-owner-full-control` object ACL.

SCREENSHOT: https://i.stack.imgur.com/G1FxK.png

As mentioned, disabling ACLs fixes everything, but since my client tools (Ansible and Aptly) cannot upload to S3 without an ACL set, ACLs must remain enabled.

SCREENSHOT: https://i.stack.imgur.com/NcKOd.png

### ENVIRONMENT EXPLAINED:

- Bucket `test-bucket-a` is in "Account A"; it's not a "private" bucket, but it does not allow public access. Access is granted via policies (snippet below).
- Bucket objects (files) are pushed to `test-bucket-a` from an "Account B" role.
- Access from "Account B" to put files into the bucket is granted via policies (not shown here). Files upload without issue.
- Objects are given the `bucket-owner-full-control` ACL when uploading.
- I have verified that the ACLs look correct and both "Account A" and "Account B" have object access. (Screenshot at the bottom of the question.)
- I am trying to access the files from the bucket-owner account (Account A) over the S3 static website endpoint (over HTTP). I can access files that were not uploaded by "Account B", but files uploaded by "Account B" return 403 Forbidden.

I am using a VPC endpoint for access (files cannot be public facing), and this is added to the bucket policy. All the needed routes and endpoint config are in place. I know my policy config is good because everything works perfectly for files uploaded within the same account or if I disable object ACLs.

```
{
    "Sid": "AllowGetThroughVPCEndpoint",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::test-bucket-a/*",
    "Condition": {
        "StringEquals": {
            "aws:sourceVpce": "vpce-0bfb94<scrubbed>"
        }
    }
},
```

**Here is an example of how this file is uploaded using Ansible** (reminder: the role doing the uploading is NOT part of the bucket-owner account):

```
- name: "publish gpg pubkey to s3 from Account B"
  aws_s3:
    bucket: "test-bucket-a"
    object: "/files/pubkey.gpg"
    src: "/home/file/pubkey.gpg"
    mode: "put"
    permission: "bucket-owner-full-control"
```

**Some key troubleshooting notes:**

- From "Account A", when logged into the console, **I can download the file.** This is very strange and shows that API requests to GetObject are working. Does the S3 website config follow some different rule structure?
- From "Account A", when accessing the file from the HTTP endpoint (S3 website), it returns **HTTP 403 Forbidden**.
- I have tried deleting and re-uploading the file multiple times.
- I have tried manually setting the object ACL via the AWS CLI (e.g. `aws s3api put-object-acl --acl bucket-owner-full-control ...`).
- When viewing the object ACL, I have confirmed that both "Account A" and "Account B" have access. See the screenshot below. Note that it confirms the object owner is an external account.

SCREENSHOT: https://i.stack.imgur.com/TCYvv.png
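The requests the website endpoint makes are anonymous and are authorized only by the bucket policy, and a bucket policy cannot grant access to objects owned by a different account, which matches the symptom that only Account B's objects return 403. With the ownership setting on "Bucket owner preferred", one commonly paired guard is a deny statement that refuses uploads lacking the `bucket-owner-full-control` ACL, so every new object flips to Account A ownership (objects uploaded before the setting change keep their old owner and would need to be re-copied). A sketch of that statement using the bucket name from the question:

```
{
    "Sid": "RequireBucketOwnerFullControlOnUpload",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::test-bucket-a/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-acl": "bucket-owner-full-control"
        }
    }
}
```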
0
answers
0
votes
2
views
total_snooze
asked 18 days ago

IAM Policy To Create Domain in OpenSearch

I am trying to create a domain in OpenSearch. I used the IAM permissions below, but every time it gives me this error: "Before you can proceed, you must enable a service-linked role to give Amazon OpenSearch Service permissions to create and manage resources on your behalf." I have also attached the service-linked role, but I am still facing the issue. I am using this IAM policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "es:ESHttpDelete",
                "es:ESHttpGet",
                "es:ESHttpHead",
                "es:ESHttpPost",
                "es:ESHttpPut",
                "es:ESHttpPatch",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CreateNetworkInterface",
                "ec2:CreateSecurityGroup",
                "ec2:DeleteNetworkInterface",
                "ec2:DeleteSecurityGroup",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcs",
                "ec2:ModifyNetworkInterfaceAttribute",
                "ec2:RevokeSecurityGroupIngress",
                "elasticloadbalancing:AddListenerCertificates",
                "elasticloadbalancing:RemoveListenerCertificates"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "es:AddTags",
                "es:AssociatePackage",
                "es:CreateDomain",
                "es:CreateOutboundConnection",
                "es:DeleteDomain",
                "es:DescribeDomain",
                "es:DescribeDomainAutoTunes",
                "es:DescribeDomainConfig",
                "es:DescribeDomains",
                "es:DissociatePackage",
                "es:ESCrossClusterGet",
                "es:GetCompatibleVersions",
                "es:GetUpgradeHistory",
                "es:GetUpgradeStatus",
                "es:ListPackagesForDomain",
                "es:ListTags",
                "es:RemoveTags",
                "es:StartServiceSoftwareUpdate",
                "es:UpdateDomainConfig",
                "es:UpdateNotificationStatus",
                "es:UpgradeDomain"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "es:AcceptInboundConnection",
                "es:CancelServiceSoftwareUpdate",
                "es:CreatePackage",
                "es:CreateServiceRole",
                "es:DeletePackage",
                "es:DescribeInboundConnections",
                "es:DescribeInstanceTypeLimits",
                "es:DescribeOutboundConnections",
                "es:DescribePackages",
                "es:DescribeReservedInstanceOfferings",
                "es:DescribeReservedInstances",
                "es:GetPackageVersionHistory",
                "es:ListDomainNames",
                "es:ListDomainsForPackage",
                "es:ListInstanceTypeDetails",
                "es:ListInstanceTypes",
                "es:ListNotifications",
                "es:ListVersions",
                "es:PurchaseReservedInstanceOffering",
                "es:RejectInboundConnection",
                "es:UpdatePackage"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowCreationOfServiceLinkedRoleForOpenSearch",
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole",
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/aws-service-role/opensearchservice.amazonaws.com/AWSServiceRoleForAmazonOpenSearchService*",
                "arn:aws:iam::*:role/aws-service-role/es.amazonaws.com/AWSServiceRoleForAmazonOpenSearchService*"
            ],
            "Condition": {
                "StringLike": {
                    "iam:AWSServiceName": [
                        "opensearchservice.amazonaws.com",
                        "es.amazonaws.com"
                    ]
                }
            }
        }
    ]
}
```
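The error is the console's generic prompt for the service-linked role. One assumption worth checking (mine, not established by the question): the console also has to read the role to see whether `AWSServiceRoleForAmazonOpenSearchService` already exists, which uses `iam:GetRole` and is not in the policy above. A sketch of an extra statement that would cover that lookup:

```
{
    "Sid": "AllowDescribingTheOpenSearchServiceLinkedRole",
    "Effect": "Allow",
    "Action": "iam:GetRole",
    "Resource": "arn:aws:iam::*:role/aws-service-role/opensearchservice.amazonaws.com/AWSServiceRoleForAmazonOpenSearchService*"
}
```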
0
answers
0
votes
2
views
AWS-User-2955114
asked 21 days ago

Role chaining problem

Hi, I'm trying to achieve "role chaining" as in https://aws.plainenglish.io/aws-iam-role-chaining-df41b1101068

I have a user `admin-user-01` with this policy assigned:

```
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::<accountid>:role/admin_group_role"
    }
}
```

I have a role, which is meant for `admin-user-01`, with `role_name = admin_group_role` and this trust policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<accountid>:user/admin-user-01"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

It also has a policy:

```
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::<accountid>:role/test-role"
    }
}
```

Then I have another role, which is meant to be assumed by the role above (`admin_group_role`), with `role_name = test-role` and this trust policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<accountid>:role/admin_group_role"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

But when I log in as `admin-user-01`, switch to the role `admin_group_role`, and then try to switch to the role `test-role`, I get: `Invalid information in one or more fields. Check your information or contact your administrator.`

P.S. `<accountid>` is the same everywhere; all of the roles, users, and permissions are created in the same account (which, I suppose, might be the reason why I face the error). What am I doing wrong?
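To isolate whether the failure lies in the trust relationships or in how the console performs the second switch, one debugging step is to let `test-role` temporarily trust both the intermediate role and the user directly. This sketch reuses the question's placeholders and is only a diagnostic aid, not the intended end state:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<accountid>:role/admin_group_role",
                    "arn:aws:iam::<accountid>:user/admin-user-01"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

If the switch succeeds with the user added as a trusted principal, the chain itself is fine and the problem is specific to switching from an already-assumed role in the console.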
2
answers
0
votes
3
views
Joann Babak
asked 24 days ago

ec2tagger: Unable to describe ec2 tags for initial retrieval: AuthFailure: AWS was not able to validate the provided access credentials / cloudwatch log agent, vpc endpoints

I got the error "ec2tagger: Unable to describe ec2 tags for initial retrieval: AuthFailure: AWS was not able to validate the provided access credentials" from the CloudWatch log agent on an EC2 instance that has:

1. CloudWatchAgentServerRole -- this is the default AWS managed role attached to the instance, and this default role already allows "ec2:DescribeTags" in its policy. <---- NOTE this
2. A NACL that allows all outbound traffic and allows the whole VPC CIDR range inbound.
3. The correct region in the CloudWatch log agent config file.
4. `telnet ec2.us-east-2.amazonaws.com 443`, `telnet monitoring.us-east-2.amazonaws.com 443`, and `telnet logs.us-east-2.amazonaws.com 443` from the EC2 instance all return a successful connection (Connected <..> Escape character is '^]').

I also created three VPC interface endpoints: logs (com.amazonaws.us-east-2.logs), monitoring (com.amazonaws.us-east-2.monitoring), and ec2 (com.amazonaws.us-east-2.ec2). They have a security group that allows the whole VPC CIDR range inbound. The idea is to expose metrics to CloudWatch via VPC endpoints. Despite all of the above setup, I can't make the CloudWatch agent work; it keeps echoing the above error complaining that the credentials are not valid, even though the REGION in the config file is correct and traffic between the instance and CloudWatch is allowed.
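Each interface endpoint can also carry its own endpoint policy, which sits in the request path alongside the instance role. The default endpoint policy allows everything, so this is something to verify rather than a known cause of the AuthFailure message (which often points at credential-level problems such as clock skew rather than authorization). A minimal sketch of an ec2 endpoint policy that keeps the agent's describe call open:

```
{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "ec2:Describe*",
            "Resource": "*"
        }
    ]
}
```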
1
answers
0
votes
3
views
AWS-User-4033111
asked 25 days ago

Restrict IOT publish topic policy

I'm using Flutter/Dart (mqtt_client, https://pub.dev/packages/mqtt_client) to send AWS IoT MQTT messages over WebSockets, and I'd like to restrict an IAM user so that they can publish messages only to their own specific topics. I've attempted to add some restricted policies, but the application fails with little information on the client side. Also, in CloudWatch, I don't see any specific errors.

Here are some example topics:

`arn:aws:iot:us-east-2:1234567890:topic/action_request/ASDF1234`
`arn:aws:iot:us-east-2:1234567890:topic/action_request/ASDF5678`

So, I want to attach the proper JSON policy to the IAM user so that they only have access to ASDF1234. All of my publish topics are patterned like the above. For now, I'm focusing on restricting the Publish endpoints and will work on others like Subscribe afterwards.

I've tried numerous different policies like the one below, also with some wildcards added, with no success on the client side. They look right, but I'm not sure whether there are other publish topics used internally within MQTT that are indirectly causing the failures, or whether it's just my syntax. Another thought is to add a condition that would allow only the above endpoint and no others: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "iot:Receive",
                "iot:ListNamedShadowsForThing",
                "iot:Subscribe",
                "iot:Connect",
                "iot:GetThingShadow",
                "iot:DeleteThingShadow",
                "iot:UpdateThingShadow"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "iot:Publish",
            "Resource": "arn:aws:iot:us-east-2:1234567890:topic/*/ASDF1234*"
        }
    ]
}
```
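For comparison, here is a sketch that pins `iot:Publish` to the single literal topic from the example instead of a wildcard pattern. Everything in it comes from the question except the assumption that the client publishes nowhere else (for example, not to shadow topics); if the MQTT library publishes to other topics under the hood, those ARNs would have to be added as well:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iot:Connect",
            "Resource": "arn:aws:iot:us-east-2:1234567890:client/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iot:Receive",
                "iot:Subscribe"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iot:Publish",
            "Resource": "arn:aws:iot:us-east-2:1234567890:topic/action_request/ASDF1234"
        }
    ]
}
```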
1
answers
0
votes
4
views
nickernet
asked 2 months ago

Enforce Tags SCP for DynamoDB is not working

Hi, I followed this official guide from AWS in order to implement a tagging strategy for resources in my AWS Organization: https://aws.amazon.com/de/blogs/mt/implement-aws-resource-tagging-strategy-using-aws-tag-policies-and-service-control-policies-scps/

The example is for EC2 instances. I followed all the steps and it worked; however, when I wanted to replicate the steps for S3, RDS, and DynamoDB, it did not work. The following is the SCP I want to use in order to enforce the tag *test* on every created DynamoDB table. This is exactly how it is done in the guide for EC2.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Deny",
            "Action": [
                "dynamodb:CreateTable"
            ],
            "Resource": [
                "arn:aws:dynamodb:*:*:table/*"
            ],
            "Condition": {
                "Null": {
                    "aws:RequestTag/test": "true"
                }
            }
        }
    ]
}
```

However, when I try to create a DynamoDB table with the tag *test*, I get the following error message. I am passing the tag *test*, yet I still get a deny.

```
User: arn:aws:sts::<account>:assumed-role/<role>/<email> is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:eu-central-1:<table>:<table> with an explicit deny.
```

I tried creating this SCP for the services RDS, S3, and DynamoDB; only EC2 seems to work. Do you have any idea what the error could be, or is anyone using this tagging strategy in their AWS Organization/AWS Control Tower? I would be interested to hear what your experience is, as this seems really complicated to implement and does not work so far. Looking forward to hearing from you :)
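For the `aws:RequestTag/test` key to be non-null, the tag has to ride along in the CreateTable call itself rather than in a separate TagResource call afterwards, which is how some clients apply tags. A sketch of a CreateTable request body that carries the tag in the same call; the table name, key schema, and tag value are placeholders, not values from the question:

```
{
    "TableName": "example-table",
    "AttributeDefinitions": [
        { "AttributeName": "pk", "AttributeType": "S" }
    ],
    "KeySchema": [
        { "AttributeName": "pk", "KeyType": "HASH" }
    ],
    "BillingMode": "PAY_PER_REQUEST",
    "Tags": [
        { "Key": "test", "Value": "some-value" }
    ]
}
```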
0
answers
0
votes
8
views
Lukonjun
asked 2 months ago

How to dynamically update the policy of a user (Cognito identity) from the backend/Lambda?

I am building an IoT solution using IoT Core. The end user will be using a mobile app and will be authenticated and authorized using Cognito. I want to authorize users to perform the iot:Publish and iot:Subscribe actions only on the devices that the user owns.

The IAM role attached to the Cognito identity pool has only the iot:Connect permission when the user is created. The user won't have any additional permissions at this point.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iot:Connect"
            ],
            "Resource": "arn:aws:iot:us-east-1:1234567890:client/${cognito-identity.amazonaws.com:sub}"
        }
    ]
}
```

Now, when the user finishes the device provisioning, I want to attach an inline policy to that user's Cognito identity to authorize them to publish and subscribe to the shadow of that device. Let's assume the ThingName is Thing1, so the policy should be as below:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iot:Connect"
            ],
            "Resource": "arn:aws:iot:us-east-1:1234567890:client/${cognito-identity.amazonaws.com:sub}"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iot:Publish",
                "iot:Subscribe"
            ],
            "Resource": "arn:aws:iot:region:account-id:topic/$aws/things/Thing1/shadow/*"
        }
    ]
}
```

The user may keep adding new devices, and I want to scale this policy to include the resource ARNs of those devices. This is an IoT Core example, but my question is generic to IAM policies (e.g. the same could apply to dynamically allowing access to S3 bucket folders). So, here are my questions:

1. What is the best approach for dynamically adding or removing the inline policy granted to the Cognito identity?
2. Can I use the STS service for updating/attaching the policy from my backend/Lambda when new things are added or removed?

Notes:

1. I could use a customer managed policy, but as far as I know that is not the right approach for granting policies to federated users.
2. I know I can use intelligent naming of the devices as mentioned in this approach, but I have a very basic requirement: https://aws.amazon.com/blogs/iot/scaling-authorization-policies-with-aws-iot-core/
0
answers
0
votes
2
views
Narendra
asked 2 months ago

Accessing S3 across accounts: I can do it if logged in to the origin account, but not if assuming a role from another account

When I log in directly to the origin account, I have access to the target account's S3 bucket:

```
[cloudshell-user@ip-10-0-91-7 ~]$ aws sts get-caller-identity
{
    "UserId": "AIDAxxxxxxxxJBLJ34",
    "Account": "178xxxxxx057",
    "Arn": "arn:aws:iam::178xxxxxx057:user/adminCustomer"
}
[cloudshell-user@ip-10-0-91-7 ~]$ aws s3 ls s3://target-account-bucket
2022-03-10 01:28:05        432 foobar.txx
```

However, if I do it after assuming a role in that account, I can't access the target account:

```
[cloudshell-user@ip-10-1-12-136 ~]$ aws sts get-caller-identity
{
    "UserId": "AROAxxxxxxF5HI7BI:test",
    "Account": "178xxxxxx057",
    "Arn": "arn:aws:sts::178xxxxxx4057:assumed-role/ReadAnalysis/test"
}
[cloudshell-user@ip-10-1-12-136 ~]$ aws s3 ls s3://targer-account-bucket
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
```

However, I do have access to buckets in the origin account:

```
[cloudshell-user@ip-10-1-12-136 ~]$ aws s3 ls s3://origin-account
2022-03-09 21:19:36        432 cli_script.txt
```

The policy on the target-account-bucket is as follows:

```
{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::178xxxxxx057:root"
    },
    "Action": [
        "s3:*"
    ],
    "Resource": [
        "arn:aws:s3:::targer-account-bucket/*",
        "arn:aws:s3:::targer-account-bucket"
    ]
},
```

There are no explicit Deny policies that may apply. Thank you for any advice you can provide.
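Both sides have to allow cross-account access: the bucket policy in the target account (shown above, delegating to the origin account's root) and the identity policy of whichever principal in the origin account makes the call. A user like adminCustomer typically carries broad permissions, but the assumed role needs its own S3 grant. A minimal sketch of an identity policy for the ReadAnalysis role; the bucket name spelling is copied from the question's policy snippet:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::targer-account-bucket",
                "arn:aws:s3:::targer-account-bucket/*"
            ]
        }
    ]
}
```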
1
answers
0
votes
2
views
AWS-User-5995037
asked 2 months ago

Error connecting to Aurora PostgreSQL dB in .NET Core Lambda function.

I'm attempting to create a Lambda where I can make calls to various stored procedures and functions in my Aurora PostgreSQL DB instance. I'm following the guide on this page: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.Connecting.NET.html

Eventually I want to connect this with Dapper, but for now I'm just trying to get the code from the above example to work. I am using the Npgsql package and can successfully retrieve the RDS auth token via the RDSAuthTokenGenerator.GenerateAuthToken() function, using the appropriate region endpoint, cluster endpoint, port number, and DB user. The problem comes when I use the auth token retrieved earlier to create a connection to the server:

```
using NpgsqlConnection connection = new NpgsqlConnection($"Server=Cluster Endpoint;User Id=dB User;Password=AuthToken;Database=dB Instance name");
```

I am now getting this error:

```
28000: pg_hba.conf rejects connection for host "172.31.30.255", user "dB User", database "dB Instance Name", SSL off
```

I'm not sure what I need to do to get this to work. As far as I can tell, I've done everything exactly as I was supposed to according to the guide in the documentation. I also created a role with the specific permission rds-db:connect for my specific DB user and DB instance ID. My only guess is that I have failed to connect that authorization in some way to the actual DB user. I assigned that permission to a role with the same name, then created a DB user with that name in the database and granted it the rds_iam role, but it's not clear to me that the IAM user and the DB user are connected yet, and I haven't been able to find examples online of how to connect them. It would be great to get a little help with this one. Thanks!

Edit: I realized that my issue might be with the SSL certificate path that is required at the end of the connection string in the example I linked above. I will keep looking into this, but I'm wondering whether this will work in a Lambda if I have to reference a path to a certificate that I install on my computer. Although, I might not be understanding how this works.
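Two details in that setup are worth separating. IAM database authentication requires the connection to use SSL, so the "SSL off" at the end of the pg_hba.conf error is significant on its own. On the IAM side, the `rds-db:connect` grant references the cluster's resource ID and the database user name rather than the instance identifier. A minimal sketch, where the region, account ID, cluster resource ID, and `db_user` are placeholders, not values from the question:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": "arn:aws:rds-db:us-east-1:111122223333:dbuser:cluster-ABCDEFGHIJKL0123456789/db_user"
        }
    ]
}
```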
1
answers
0
votes
3
views
Josh
asked 2 months ago

MalformedPolicyDocument error on PutUserPolicy while running ansible script to generate IAM user along with policy

I am trying to run an ansible script to generate an IAM user along with an attached policy allowing access to an S3 bucket. I am able to create a policy on the console using the same policy document, and have confirmed that the document is valid json. However I still see the error below. The document itself looks like this.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SAImgBucket",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<s3-bucket-name>"
            ]
        }
    ]
}
```

The ansible task is as below.

```
- name: create sa IAM user permissions
  community.aws.iam_policy:
    iam_type: user
    iam_name: "{{ sa_app_username }}"
    policy_name: "{{ sa_app_username }}-policy"
    state: present
    policy_json: " {{ lookup( 'template', 'template/sa_iam_policy.json.j2') | to_json }} "
```

Any suggestions on how to further debug or address this are greatly appreciated.

```
botocore.errorfactory.MalformedPolicyDocumentException: An error occurred (MalformedPolicyDocument) when calling the PutUserPolicy operation: Syntax errors in policy.
[DEPRECATION WARNING]: The skip_duplicates behaviour has caused confusion and will be disabled by default in Ansible 2.14. This feature will be removed from community.aws in a release after 2022-06-01. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
fatal: [localhost]: FAILED! => changed=false
  boto3_version: 1.18.18
  botocore_version: 1.21.18
  error:
    code: MalformedPolicyDocument
    message: Syntax errors in policy.
    type: Sender
  invocation:
    module_args:
      aws_access_key: null
      aws_ca_bundle: null
      aws_config: null
      aws_secret_key: null
      debug_botocore_endpoint_logs: false
      ec2_url: null
      iam_name: sa-103
      iam_type: user
      policy_document: null
      policy_json: '"{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"SAImgBucket\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:*\"\n ],\n \"Resource\": [\n \"arn:aws:s3:::<bucket-name>\"\n ]\n }\n ]\n}\n"'
      policy_name: sa-103-policy
      profile: null
      region: null
      security_token: null
      skip_duplicates: null
      state: present
      validate_certs: true
  msg: 'An error occurred (MalformedPolicyDocument) when calling the PutUserPolicy operation: Syntax errors in policy.'
  response_metadata:
    http_headers:
      connection: close
      content-length: '279'
      content-type: text/xml
      date: Tue, 15 Feb 2022 17:45:05 GMT
      x-amzn-requestid: fce8efa1-7a86-468f-9481-264db52db33d
    http_status_code: 400
    request_id: fce8efa1-7a86-468f-9481-264db52db33d
    retry_attempts: 0
```
1
answers
0
votes
2
views
AWS-User-9282743
asked 3 months ago

IAM permissions required for rds:RestoreDBClusterToPointInTime

Hi there, I am trying to figure out the required permissions for a role to call rds:RestoreDBClusterToPointInTime. https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html gives me some clues, but I am not sure what I came up with is safe. I am trying to clone an Aurora MySQL 2 cluster. Via the RDS API, I use rds:RestoreDBClusterToPointInTime and then rds:CreateDBInstance. By trial and error, I got it working with the policy excerpt below:

```
{
  Effect = "Allow"
  Action = [
    "rds:AddTagsToResource",
    "rds:CreateDBInstance",
    "rds:DeleteDBInstance",
    "rds:DeleteDBCluster",
    "rds:DescribeDBClusters",
    "rds:DescribeDBInstances",
    "rds:RestoreDBClusterToPointInTime"
  ]
  Resource = [
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:cluster:${var.destination_cluster_identifier}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:cluster:${var.source_cluster_identifier}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:cluster-pg:${aws_rds_cluster_parameter_group.this.name}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:subgrp:${aws_db_subnet_group.this.name}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:secgrp:${aws_security_group.rds.name}",
    "arn:aws:rds:${data.aws_region.this.name}:${data.aws_caller_identity.this.account_id}:db:${local.rds_instance_name}"
  ]
}
```

Where I am uncertain is how to make rds:RestoreDBClusterToPointInTime one-way, that is, how to limit which cluster is the source and which is the destination. It looks like both source and destination clusters must be in the Resource block, so we can't limit which cluster is the source and which is the destination. Is there a way to do so?
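This doesn't make the restore call itself one-way, but the create and delete actions can at least be split into their own statement scoped only to the destination resources, so the source cluster is never a valid target for them. A JSON sketch of that split; the placeholders stand in for the Terraform interpolations in the excerpt above, and the exact resource list that `CreateDBInstance` needs may still include the parameter group and subnet group, as in the working policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestoreNeedsBothClusters",
            "Effect": "Allow",
            "Action": [
                "rds:RestoreDBClusterToPointInTime",
                "rds:DescribeDBClusters",
                "rds:DescribeDBInstances",
                "rds:AddTagsToResource"
            ],
            "Resource": [
                "arn:aws:rds:<region>:<account-id>:cluster:<source-cluster>",
                "arn:aws:rds:<region>:<account-id>:cluster:<destination-cluster>",
                "arn:aws:rds:<region>:<account-id>:cluster-pg:<cluster-parameter-group>",
                "arn:aws:rds:<region>:<account-id>:subgrp:<db-subnet-group>"
            ]
        },
        {
            "Sid": "CreateAndDeleteOnlyTheClone",
            "Effect": "Allow",
            "Action": [
                "rds:CreateDBInstance",
                "rds:DeleteDBInstance",
                "rds:DeleteDBCluster"
            ],
            "Resource": [
                "arn:aws:rds:<region>:<account-id>:cluster:<destination-cluster>",
                "arn:aws:rds:<region>:<account-id>:db:<destination-instance>"
            ]
        }
    ]
}
```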
1
answers
0
votes
6
views
ohmer
asked 3 months ago
1
answers
0
votes
4
views
BobTheMighty
asked 3 months ago