Questions tagged with IAM Policies


Allowing permission to Generate a policy based on CloudTrail events where the selected Trail logs events in an S3 bucket in another account

I have an AWS account (Account A) with CloudTrail enabled and logging management events to an S3 'logs' bucket in another, dedicated logs account (Account B, which I also own). The logging part works fine, but I'm now trying (and failing) to use the 'Generate policy based on CloudTrail events' tool in the IAM console (under the Users > Permissions tab) in Account A. This is supposed to read the CloudTrail logs for a given user/region/number of days, identify all of the actions the user performed, then generate a sample IAM policy allowing only those actions, which is great for setting up least-privilege policies, etc.

When I first ran the generator, it created a new service role to assume in the same account (Account A): AccessAnalyzerMonitorServiceRole_ABCDEFGHI

When I selected the CloudTrail trail to analyse, it (correctly) identified that the trail logs are stored in an S3 bucket in another account, and displayed this warning message:

> Important: Verify cross-account access is configured for the selected trail. The selected trail logs events in an S3 bucket in another account. The role you choose or create must have read access to the bucket in that account to generate a policy. Learn more.

Attempting to run the generator at this stage fails after a short time, and hovering over the 'Failed' status in the console shows the message:

> Incorrect permissions assigned to access CloudTrail S3 bucket. Please fix before trying again.

Makes sense, but actually giving read access on the S3 bucket to the automatically generated AccessAnalyzerMonitorServiceRole_ABCDEFGHI is where I'm now stuck! I'm relatively new to AWS, so I might have done something dumb or be missing something obvious, but I'm trying to grant the automatically generated role in Account A access to the S3 bucket by adding to the bucket policy attached to the S3 logs bucket in our Account B. I've added the extract below to the existing bucket policy (which is just the standard policy for a CloudTrail logs bucket, extended to allow CloudTrail in Account A to write logs to it as well), but my attempts to run the policy generator still fail with the same error message.

```
{
  "Sid": "IAMPolicyGeneratorRead",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::1234567890:role/service-role/AccessAnalyzerMonitorServiceRole_ABCDEFGHI"
  },
  "Action": [
    "s3:GetObject",
    "s3:GetObjectVersion",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::aws-cloudtrail-logs-ABCDEFGHI",
    "arn:aws:s3:::aws-cloudtrail-logs-ABCDEFGHI/*"
  ]
}
```

Any suggestions how I can get this working?
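For comparison, a hedged note: cross-account S3 access has to be allowed on both sides. The bucket policy in Account B (as above) is one half; the other half is an identity-based policy attached to the role itself in Account A, which the auto-generated role may only have for same-account buckets. A minimal sketch of an identity policy that could be attached to AccessAnalyzerMonitorServiceRole_ABCDEFGHI, reusing the bucket name from the question:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CrossAccountTrailBucketRead",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::aws-cloudtrail-logs-ABCDEFGHI",
        "arn:aws:s3:::aws-cloudtrail-logs-ABCDEFGHI/*"
      ]
    }
  ]
}
```

If the trail's log files are encrypted with SSE-KMS, the role would additionally need `kms:Decrypt` on the key, and the key policy in Account B would have to allow it.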
0 answers | 0 votes | 7 views | asked 21 hours ago

Data Mesh on AWS Lake Formation

Hi, I'm building a data mesh in AWS Lake Formation. The idea is to have 4 accounts:

- account 0: main account
- account 1: central data governance
- account 2: data producer
- account 3: data consumer

I have been looking for information about how to implement the mesh in AWS, and I'm following some tutorials that are very similar to what I'm doing:

- https://catalog.us-east-1.prod.workshops.aws/workshops/78572df7-d2ee-4f78-b698-7cafdb55135d/en-US/lakeformation-basics/cross-account-data-mesh
- https://aws.amazon.com/blogs/big-data/design-a-data-mesh-architecture-using-aws-lake-formation-and-aws-glue/
- https://aws.amazon.com/blogs/big-data/build-a-data-sharing-workflow-with-aws-lake-formation-for-your-data-mesh/

However, after having created the bucket and uploaded some CSV data to it (in the producer account), I don't know whether I have to register it first in the Glue catalog in the producer account, or whether I just do it in Lake Formation as described here: https://catalog.us-east-1.prod.workshops.aws/workshops/78572df7-d2ee-4f78-b698-7cafdb55135d/en-US/lakeformation-basics/databases (does this depend on whether one uses Glue permissions or Lake Formation permissions in the Lake Formation configuration?)

In fact, I created the database and the table in Glue first, and when I then go to the Databases and Tables sections in Lake Formation, the database and table created from Glue appear there without my doing anything. They appear even if I disable the options "Use only IAM access control for new databases" and "Use only IAM access control for new tables in new databases".

Do you know if Glue and Lake Formation share the data catalog, and whether I'm doing this correctly?

Thanks, John
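On the catalog question: AWS Glue and Lake Formation do share the same Data Catalog, which is why databases and tables created in Glue show up in Lake Formation automatically. Separately from creating the database and table, the S3 location itself has to be registered with Lake Formation before Lake Formation permissions (rather than plain IAM/Glue permissions) govern access to it. A hedged CLI sketch of that registration step, with a placeholder bucket name:

```
# Register the producer bucket's location with Lake Formation so it is
# governed by Lake Formation permissions rather than IAM-only access.
# "my-producer-bucket" is a placeholder; run this in the producer account.
aws lakeformation register-resource \
    --resource-arn arn:aws:s3:::my-producer-bucket \
    --use-service-linked-role
```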
0 answers | 0 votes | 14 views | asked 2 days ago

Issue with pushing an EC2 instance's Docker container logs into CloudWatch

I have a working EC2 instance in the free tier, with a responding **java-based** gRPC server in a Docker container inside the instance. I'd like to send the logs of the container to CloudWatch. I created the suggested policy and the EC2 role, and the role is attached to the instance.

The container is started from the bash of the Linux instance with this command:

`docker run -d -p 9092:9092 -t <<my-container-name>> --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group="gRPC-POC" --log-opt awslogs-stream="gRPC-POC-log" --log-opt awslogs-create-group=true --log-opt awslogs-create-stream=true`

I tried to run the container with different users and with different options of the log driver, omitting parts and almost everything. The policy I created to use CloudWatch looks like this:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogGroup",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:us-east-1:<<my-account-number>>:log-group:*:*"
    }
  ]
}
```

So far, there is no sign of the gathered logs in CloudWatch, whether or not I create the log group and/or log stream in advance. Maybe I'm missing a step or a vital piece of information somewhere? Do you have any suggestions, please?

**EDIT:** The command `aws sts get-caller-identity` gives this result:

![Enter image description here](/media/postImages/original/IM2OUiCy6OTyi-RAGhLS-C1g)

The command was used from the bash of the running instance. (Is this what you meant, @Roberto? Anyway, thanks.) It looks like the instance has the proper right, 'GrpcPocAccessLogs'.
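One detail worth a close look: `docker run` treats everything after the image name as arguments to the container's entrypoint, so in the command above the `--log-driver` and `--log-opt` flags are most likely being handed to the gRPC server rather than to Docker. A sketch of the same command with the logging options moved before the image name (options and placeholder kept exactly as in the question):

```
# Logging options must come BEFORE the image name; docker run passes
# anything after the image to the container itself as arguments.
# <<my-container-name>> is the placeholder from the question.
docker run -d -p 9092:9092 -t \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group="gRPC-POC" \
  --log-opt awslogs-stream="gRPC-POC-log" \
  --log-opt awslogs-create-group=true \
  --log-opt awslogs-create-stream=true \
  <<my-container-name>>
```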
2 answers | 0 votes | 37 views | asked 3 days ago

Restriction on CloudFormation StackSet with IAM condition cloudformation:TemplateUrl

I'm trying to restrict the S3 bucket used for **StackSet** templates with the IAM condition **cloudformation:TemplateUrl**, but it does not work as expected: the IAM policy applied always denies CreateStackSet. See below for the tested policy. The [doc page](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#using-iam-template-conditions) explains that you can use the condition as usual, but there is a Note that is not clear to me:

![Enter image description here](/media/postImages/original/IMUjPviuTuSAaoxl5HvXktBQ)

For allowed CreateStackSet calls, the CloudTrail event includes the TemplateUrl in the context, so I don't understand why the condition does not work with StackSets. Thanks for your help!

```
{
  "eventVersion": "1.08",
  [...]
  "eventTime": "2022-08-09T15:42:50Z",
  "eventSource": "cloudformation.amazonaws.com",
  "eventName": "CreateStackSet",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "AWS Internal",
  "userAgent": "AWS Internal",
  "requestParameters": {
    "stackSetName": "test-deny1",
    "templateURL": "https://s3.amazonaws.com/trusted-bucket/EnableAWSCloudtrail.yml",
    "description": "Enable AWS CloudTrail. This template creates a CloudTrail trail, an Amazon S3 bucket where logs are published, and an Amazon SNS topic where notifications are sent.",
    "clientRequestToken": "1bd60a6d-f9dc-76a9-020a-f5a45f1bdf1e",
    "capabilities": [ "CAPABILITY_IAM" ]
  },
  "responseElements": {
    "stackSetId": "test-deny1:97054f39-3925-47eb-92fd-09779f32bcf6"
  },
  [...]
}
```

For reference, my IAM policy:

```
{
  "Sid": "TemplateFromTrustedBucket",
  "Effect": "Allow",
  "Action": [
    "cloudformation:CreateStackSet",
    "cloudformation:UpdateStackSet"
  ],
  "Resource": "*",
  "Condition": {
    "StringLike": {
      "cloudformation:TemplateURL": "https://s3.amazonaws.com/trusted-bucket/*"
    }
  }
}
```
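A diagnostic idea rather than a fix: an IAM condition key that is absent from the request context can never satisfy a `StringLike` in an Allow statement, so if the authorization request for CreateStackSet is evaluated without `cloudformation:TemplateUrl` populated (whatever CloudTrail later records), this policy would always end in a deny. A temporary way to test that hypothesis is a statement that allows the call only when the key is missing, using the `Null` condition operator. If this statement lets the call through, the key wasn't in the context at authorization time. This is a test probe only, not a production policy:

```
{
  "Sid": "TestTemplateUrlKeyPresence",
  "Effect": "Allow",
  "Action": [
    "cloudformation:CreateStackSet",
    "cloudformation:UpdateStackSet"
  ],
  "Resource": "*",
  "Condition": {
    "Null": {
      "cloudformation:TemplateUrl": "true"
    }
  }
}
```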
0 answers | 0 votes | 36 views | asked 6 days ago

not authorized to perform: sagemaker:CreateModel on resource

I have been given AmazonSageMakerFullAccess by my company's AWS admin. No one at our company can figure out why I can't get this code to run to launch the model.

**Code producing the error:**

```
lang_id = sagemaker.Model(
    image_uri=container, model_data=model_location, role=role, sagemaker_session=sess
)
lang_id.deploy(initial_instance_count=1, instance_type="ml.t2.medium")
```

**Error message:**

```
---------------------------------------------------------------------------
ClientError                               Traceback (most recent call last)
<ipython-input-5-4c80ec284a4b> in <module>
      2     image_uri=container, model_data=model_location, role=role, sagemaker_session=sess
      3 )
----> 4 lang_id.deploy(initial_instance_count=1, instance_type="ml.t2.medium")
      5
      6 from sagemaker.deserializers import JSONDeserializer

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/sagemaker/model.py in deploy(self, initial_instance_count, instance_type, serializer, deserializer, accelerator_type, endpoint_name, tags, kms_key, wait, data_capture_config, async_inference_config, serverless_inference_config, **kwargs)
   1132
   1133         self._create_sagemaker_model(
-> 1134             instance_type, accelerator_type, tags, serverless_inference_config
   1135         )
   1136

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/sagemaker/model.py in _create_sagemaker_model(self, instance_type, accelerator_type, tags, serverless_inference_config)
    671             tags=tags,
    672         )
--> 673         self.sagemaker_session.create_model(**create_model_args)
    674
    675     def _ensure_base_name_if_needed(self, image_uri, script_uri, model_uri):

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/sagemaker/session.py in create_model(self, name, role, container_defs, vpc_config, enable_network_isolation, primary_container, tags)
   2715                 raise
   2716
-> 2717         self._intercept_create_request(create_model_request, submit, self.create_model.__name__)
   2718         return name
   2719

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/sagemaker/session.py in _intercept_create_request(self, request, create, func_name)
   4294             func_name (str): the name of the function needed intercepting
   4295         """
-> 4296         return create(request)
   4297
   4298

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/sagemaker/session.py in submit(request)
   2703             LOGGER.debug("CreateModel request: %s", json.dumps(request, indent=4))
   2704             try:
-> 2705                 self.sagemaker_client.create_model(**request)
   2706             except ClientError as e:
   2707                 error_code = e.response["Error"]["Code"]

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
    506             )
    507             # The "self" in this scope is referring to the BaseClient.
--> 508             return self._make_api_call(operation_name, kwargs)
    509
    510         _api_call.__name__ = str(py_operation_name)

~/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
    909             error_code = parsed_response.get("Error", {}).get("Code")
    910             error_class = self.exceptions.from_code(error_code)
--> 911             raise error_class(parsed_response, operation_name)
    912         else:
    913             return parsed_response

ClientError: An error occurred (AccessDeniedException) when calling the CreateModel operation: User: arn:aws:sts::XXXXXXXXXX:assumed-role/sagemakeraccesstoservices/SageMaker is not authorized to perform: sagemaker:CreateModel on resource: arn:aws:sagemaker:us-east-2:XXXXXXXXXX:model/blazingtext-2022-08-09-13-58-21-739 because no identity-based policy allows the sagemaker:CreateModel action
```
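One detail that stands out in the error (a hedged observation, not a confirmed diagnosis): the denied principal is the assumed role `sagemakeraccesstoservices`, i.e. the execution role the code runs as, not the asker's own IAM user, so a managed policy granted to the user would not apply to this call. A minimal sketch of a statement that could be attached to that role; `CreateEndpointConfig`/`CreateEndpoint` are included on the assumption that `deploy()` will need them next, and `Resource` can be narrowed to specific model/endpoint ARNs in the account:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowModelDeployment",
      "Effect": "Allow",
      "Action": [
        "sagemaker:CreateModel",
        "sagemaker:CreateEndpointConfig",
        "sagemaker:CreateEndpoint"
      ],
      "Resource": "*"
    }
  ]
}
```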
1 answer | 0 votes | 38 views | asked 7 days ago

Renaming object in S3 console fails if ListAllMyBuckets permission is not provided

Hi, I have had a problem with a user not being able to rename an S3 object through the AWS console, despite having all the permissions over the bucket and the bucket objects. The associated IAM policy for the user is this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::s3-bucket-name",
        "arn:aws:s3:::s3-bucket-name/*"
      ]
    },
    {
      "Sid": "VisualEditor3",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::s3-bucket-name"
    }
  ]
}
```

When the user tries to rename a file in the S3 bucket, the console complains about the *s3:PutObject* permission, which is granted, and the user sees an "Access denied" error in the AWS console.

![Access denied when renaming S3 object](https://repost.aws/media/postImages/original/IMX4V3P7N4TxiGZDcqeKXZPg)

The weirdest thing of all is that the problem is solved by adding the *ListAllMyBuckets* permission: once it is added to the user's IAM policy, the user can rename objects without a problem. This behavior is also documented on StackOverflow, in [this](https://stackoverflow.com/questions/33926553/aws-rename-permissions/63348973#63348973) and [this](https://stackoverflow.com/questions/42984344/renaming-object-from-in-aws-s3-console-with-iam-user/42996548#42996548) answer. In addition, a StackOverflow user comments that this operation only fails through the AWS console, and that it works using the CLI.

To me, fixing it by adding the *ListAllMyBuckets* permission doesn't make any sense, and it allows the user to see the names of every other bucket in the account.
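For context, a hedged note: the console implements "rename" as a CopyObject followed by a DeleteObject, and the console UI also makes account-level navigation calls of its own, which appears to be why *s3:ListAllMyBuckets* is needed in the console but not when doing the equivalent copy-and-delete via the CLI. *s3:ListAllMyBuckets* is an account-level action that only accepts `"Resource": "*"`, so it cannot be scoped to a single bucket; if that trade-off is acceptable, the minimal addition is a statement like:

```json
{
  "Sid": "ConsoleBucketNavigation",
  "Effect": "Allow",
  "Action": "s3:ListAllMyBuckets",
  "Resource": "*"
}
```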
0 answers | 0 votes | 28 views | asked 8 days ago