
Questions tagged with AWS Identity and Access Management



What is the suggested method to track user's actions after assuming a cross-account role

I need to guarantee that a user's actions can always be traced back to their account, regardless of which role they have assumed in another account. What methods are required to guarantee this for the following cases:

* Assuming a cross-account role in the console
* Assuming a cross-account role via the CLI

I have run tests and can see that when a user assumes a role via the CLI, temporary credentials are generated. These credentials appear in CloudTrail logs under `responseElements.credentials` for the `assumeRole` event. All subsequent events generated by actions taken in the session include the same `accessKeyId`, so I can track all of the actions in this case.

Using the web console, the same `assumeRole` event is generated, also including an `accessKeyId`. Unfortunately, subsequent actions taken by the user don't include the same `accessKeyId`; at some point a different access key is generated and the session uses this new key. I can't find any way to link the two, so I'm not sure how to attribute actions taken by the role to the user that assumed it.

I can see that when assuming a role in the console, the user can't change the session name (`sts:sessionName`) and it is always set to their username. Is this the suggested method for tracking actions? Whilst this seems appropriate for roles within the same account, usernames are not globally unique, so I am concerned about using this for cross-account attribution. It also seems that placing restrictions on the value of `sts:sourceIdentity` is not supported when assuming roles in the web console.
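For context, the role session name can be pinned in the role's trust policy with the `sts:RoleSessionName` condition key. A minimal sketch of such a trust policy, assuming a cross-account IAM user principal (the account ID is a placeholder):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:RoleSessionName": "${aws:username}" }
      }
    }
  ]
}
```

CloudTrail also records the issuing role in `userIdentity.sessionContext.sessionIssuer` and the session name as part of the assumed-role ARN in `userIdentity.arn`, which may be another way to join console events back to the original `assumeRole` call.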
0 answers · 1 vote · 35 views · asked 5 days ago

_temp AWS Lake Formation blueprint pipeline tables appear to an IAM user in the Athena editor although I didn't give this user permission on them

The _temp Lake Formation blueprint pipeline tables appear to an IAM user in the Athena editor, although I didn't give this user permission on them. Below is the policy granted to this IAM user; in the Lake Formation permissions I also didn't give this user any permissions on the _temp tables:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1652364721496",
      "Action": [
        "athena:BatchGetNamedQuery",
        "athena:BatchGetQueryExecution",
        "athena:GetDataCatalog",
        "athena:GetDatabase",
        "athena:GetNamedQuery",
        "athena:GetPreparedStatement",
        "athena:GetQueryExecution",
        "athena:GetQueryResults",
        "athena:GetQueryResultsStream",
        "athena:GetTableMetadata",
        "athena:GetWorkGroup",
        "athena:ListDataCatalogs",
        "athena:ListDatabases",
        "athena:ListEngineVersions",
        "athena:ListNamedQueries",
        "athena:ListPreparedStatements",
        "athena:ListQueryExecutions",
        "athena:ListTableMetadata",
        "athena:ListTagsForResource",
        "athena:ListWorkGroups",
        "athena:StartQueryExecution",
        "athena:StopQueryExecution"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "glue:GetDatabase",
        "glue:GetDatabases",
        "glue:BatchDeleteTable",
        "glue:GetTable",
        "glue:GetTables",
        "glue:GetPartition",
        "glue:GetPartitions",
        "glue:BatchGetPartition"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "Stmt1652365282568",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::queryresults-all",
        "arn:aws:s3:::queryresults-all/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "lakeformation:GetDataAccess"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```
1 answer · 0 votes · 8 views · asked 9 days ago

Unable to override taskRoleArn when running ECS task from Lambda

I have a Lambda function that is supposed to pass its own permissions to the code running in an ECS task. It looks like this:

```
ecs_parameters = {
    "cluster": ...,
    "launchType": "FARGATE",
    "networkConfiguration": ...,
    "overrides": {
        "taskRoleArn": boto3.client("sts").get_caller_identity().get("Arn"),
        ...
    },
    "platformVersion": "LATEST",
    "taskDefinition": f"my-task-definition-{STAGE}",
}

response = ecs.run_task(**ecs_parameters)
```

When I run this in Lambda, I get this error:

```
"errorMessage": "An error occurred (ClientException) when calling the RunTask operation: ECS was unable to assume the role 'arn:aws:sts::787364832896:assumed-role/my-lambda-role...' that was provided for this task. Please verify that the role being passed has the proper trust relationship and permissions and that your IAM user has permissions to pass this role."
```

If I change the task definition in ECS to use `my-lambda-role` as the task role, it works. It's specifically when I try to override the task role from Lambda that it breaks.

The Lambda role has the `AWSLambdaBasicExecutionRole` policy and also an inline policy that grants it `ecs:runTask` and `iam:PassRole`. It has a trust relationship that looks like:

```
"Effect": "Allow",
"Principal": {
    "Service": [
        "ecs.amazonaws.com",
        "lambda.amazonaws.com",
        "ecs-tasks.amazonaws.com"
    ]
},
"Action": "sts:AssumeRole"
```

The task definition has a policy that grants it `sts:AssumeRole` and `iam:PassRole`, and a trust relationship that looks like:

```
"Effect": "Allow",
"Principal": {
    "Service": "ecs-tasks.amazonaws.com",
    "AWS": "arn:aws:iam::account-ID:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
},
"Action": "sts:AssumeRole"
```

How do I allow the Lambda function to pass the role to ECS, and ECS to assume the role it's been given?

P.S. - I know a lot of these permissions are overkill, so let me know if there are any I can get rid of :) Thanks!
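One detail worth noting from the error message: the ARN being passed is an STS assumed-role ARN (`arn:aws:sts::...:assumed-role/...`), while the `taskRoleArn` override expects an IAM role ARN (`arn:aws:iam::...:role/...`) that `ecs-tasks.amazonaws.com` is trusted to assume. A minimal sketch of the pattern, assuming a hypothetical dedicated task role and placeholder cluster/subnet names:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical IAM role created for the task; its trust policy must allow the
# ecs-tasks.amazonaws.com service principal to assume it.
task_role_arn = "arn:aws:iam::123456789012:role/my-ecs-task-role"

response = ecs.run_task(
    cluster="my-cluster",                      # placeholder
    launchType="FARGATE",
    taskDefinition="my-task-definition",       # placeholder
    networkConfiguration={
        "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}  # placeholder
    },
    overrides={"taskRoleArn": task_role_arn},  # IAM role ARN, not an STS assumed-role ARN
)
```

The Lambda role would additionally need an `iam:PassRole` statement whose `Resource` is that task role's ARN.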
2 answers · 1 vote · 16 views · asked 10 days ago

Error when creating resources in a CloudFormation stack with a role specified

I am exploring how to delegate CloudFormation permissions to other users by specifying a role when creating a stack. I notice that some resources like VPC, IGW and EIP can be created, but an error is still reported, and the created resources cannot be deleted by the stack during rollback or stack deletion either. For example, the following simple template creates a VPC:

```
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.3.9.0/24
```

I created a role to specify during stack creation, with a policy allowing the many actions I collected by querying CloudTrail with Athena. The following are already included: `"ec2:CreateVpc","ec2:DeleteVpc","ec2:ModifyVpcAttribute"`.

However, the following error occurs during creation:

> Resource handler returned message: "You are not authorized to perform this operation. (Service: Ec2, Status Code: 403, Request ID: bf28db5b-461e-48ff-9430-91cc05be77ef)" (RequestToken: bc6c6c87-a616-2e94-65eb-d4e5488a499a, HandlerErrorCode: AccessDenied)

Looks like some callback mechanism is used? The VPC was actually created. The deletion also failed:

> Resource handler returned message: "You are not authorized to perform this operation. (Service: Ec2, Status Code: 403, Request ID: f1e43bf1-eb08-462a-9788-f183db2683ab)" (RequestToken: 80cc5412-ba28-772b-396e-37b12dbf8066, HandlerErrorCode: AccessDenied)

Any hint about this issue? Thanks.
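One thing the error pattern is consistent with: the resource handlers read a resource back after creating or deleting it, so a stack role usually needs the corresponding `Describe` permissions in addition to the create/delete actions. A sketch of a stack-role policy for this template, offered as an assumption rather than a confirmed minimal set:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVpc",
        "ec2:DeleteVpc",
        "ec2:ModifyVpcAttribute",
        "ec2:DescribeVpcs",
        "ec2:CreateTags",
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
```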
1 answer · 0 votes · 5 views · asked 11 days ago

I'd like to access S3 using Cognito authentication credentials.

I'd like to access S3 using Cognito authentication credentials. S3 is accessed via the AWS SDK and Cognito via Amplify, in an Angular/TypeScript app. I would like to replace the secret key with the Cognito authentication information when creating the S3 client. I want to access S3 with the user I received from `Auth.signIn`, but the credentials are missing. I need your help.

```
public signIn(user: IUser): Promise<any> {
  return Auth.signIn(user.email, user.password).then((user) => {
    AWS.config.region = 'ap-northeast-2';
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'ap-northeast-2:aaaaaaaa-bbbb-dddd-eeee-ffffffff',
    });

    const userSession = Auth.userSession(user);
    const idToken = userSession['__zone_symbol__value']['idToken']['jwtToken'];

    AWS.config.region = 'ap-northeast-2';
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'ap-northeast-2:aaaaaaaa-bbbb-dddd-eeee-ffffffff',
      RoleArn: 'arn:aws:iam::111111111111:role/Cognito_role',
      Logins: {
        CognitoIdentityPool: 'ap-northeast-2:aaaaaaaa-bbbb-dddd-eeee-ffffffff',
        idToken: idToken,
      },
    });

    const s3 = new AWS.S3({
      apiVersion: '2012-10-17',
      region: 'ap-northeast-2',
      params: {
        Bucket: 'Bucketname',
      },
    });

    s3.config.credentials.sessionToken = user.signInUserSession['accessToken']['jwtToken'];

    s3.listObjects(function (err, data) {
      if (err) {
        return alert('There was an error: ' + err.message);
      } else {
        console.log('***********s3List***********', data);
      }
    });
  });
}
```

bucket policy:

```
{
  "Version": "2012-10-17",
  "Id": "Policy",
  "Statement": [
    {
      "Sid": "AllowIPmix",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": "arn:aws:s3:::s3name/*"
    }
  ]
}
```

cognito Role Policies - AmazonS3FullAccess:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
```
0 answers · 0 votes · 5 views · asked 16 days ago

IAM Docs Feedback: Wrong condition operator modifier?

I am trying to provide feedback on this [IAM docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_monitor.html) page. When I click the feedback link, it takes me [here](https://docs-feedback.aws.amazon.com/feedback.jsp?hidden_service_name=IAM&topic_url=https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_monitor.html), which fails on submit with this error:

```
HTTP Status 400 – Bad Request

Type Exception Report

Message Request header is too large

Description The server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing).

Exception

java.lang.IllegalArgumentException: Request header is too large
    org.apache.coyote.http11.Http11InputBuffer.parseHeaders(Http11InputBuffer.java:629)
    org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:535)
    org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
    org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:847)
    org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1680)
    org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
    org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
    org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    java.lang.Thread.run(Thread.java:750)

Note The full stack trace of the root cause is available in the server logs.

Apache Tomcat/8.5.75
```

Consequently, I'll provide my feedback here. Reading over the docs [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_multi-value-conditions.html), it would appear you are using the wrong condition operator modifier on [this page](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_monitor.html).

Wouldn't this require that `SourceIdentity` be set to both `Saanvi` and `Diego`, not either one?

```
"StringLike": {
  "sts:SourceIdentity": [
    "Saanvi",
    "Diego"
  ]
}
```

Shouldn't it be:

```
"ForAnyValue:StringEquals": {
  "sts:SourceIdentity": [
    "Saanvi",
    "Diego"
  ]
}
```

You also appear to be using `StringLike` instead of `StringEquals` seemingly arbitrarily throughout:

```
"Condition": {
  "StringLike": {
    "sts:SourceIdentity": "${aws:username}"
  }
}
```

Although there are no wildcards in this, if you want an exact match wouldn't it be clearer to use `StringEquals`?
1 answer · 0 votes · 4 views · asked 18 days ago

Should I use Cognito Identity Pool OIDC JWT Connect Tokens in the AWS API Gateway?

I noticed this question from 4 years ago: https://repost.aws/questions/QUjjIB-M4VT4WfOnqwik0l0w/verify-open-id-connect-token-generated-by-cognito-identity-pool

So I was curious and looked at the JWT token being returned from the Cognito Identity Pool. Its `aud` field was my identity pool id and its `iss` field was "https://cognito-identity.amazonaws.com", and it turns out that you can see the OIDC config at "https://cognito-identity.amazonaws.com/.well-known/openid-configuration" and grab the public keys at "https://cognito-identity.amazonaws.com/.well-known/jwks_uri".

Since I have access to the keys, that means I can freely validate OIDC tokens produced by the Cognito Identity Pool. Moreover, I should also be able to pass them into an API Gateway with a JWT authorizer. This would allow me to effectively gate my API Gateway behind a Cognito Identity Pool without any extra Lambda authorizers or needing IAM authentication.

Use case: I want to create a serverless Lambda app that's blocked behind some SAML authentication using Okta. Okta does not allow you to use their JWT authorizer without purchasing extra add-ons for some reason. I could use IAM authentication on the gateway instead, but I'm afraid of losing information such as the user's id, group, name, email, etc. Using the JWT directly preserves this information and passes it to the Lambda.

Is this a valid approach? Is there something I'm missing? Or is there a better way? Does the IAM method preserve user attributes...?
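For reference, the configuration being described would look roughly like the HTTP API JWT authorizer below. This is only a sketch: the API reference and the identity pool id are placeholders, and whether API Gateway accepts this particular issuer in practice is exactly the open question being asked.

```
MyJwtAuthorizer:
  Type: AWS::ApiGatewayV2::Authorizer
  Properties:
    ApiId: !Ref MyHttpApi                 # placeholder HTTP API resource
    AuthorizerType: JWT
    IdentitySource:
      - "$request.header.Authorization"
    Name: cognito-identity-pool-jwt
    JwtConfiguration:
      Issuer: "https://cognito-identity.amazonaws.com"
      Audience:
        - "us-east-1:11111111-2222-3333-4444-555555555555"  # placeholder identity pool id
```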
0 answers · 0 votes · 2 views · asked 25 days ago

Role chaining problem

Hi, I'm trying to achieve "role chaining" as in https://aws.plainenglish.io/aws-iam-role-chaining-df41b1101068.

I have a user `admin-user-01` with this policy assigned:

```
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::<accountid>:role/admin_group_role"
  }
}
```

I have a role, which is meant for `admin-user-01`, with `role_name = admin_group_role` and trust policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<accountid>:user/admin-user-01"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

And it also has a policy:

```
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::<accountid>:role/test-role"
  }
}
```

Then I have another role, which is meant to be assumed by the role above (`admin_group_role`), with `role_name = test-role` and trust policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<accountid>:role/admin_group_role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

But when I log in as `admin-user-01`, switch to the role `admin_group_role`, and then try to switch to the role `test-role`, I get: `Invalid information in one or more fields. Check your information or contact your administrator.`

P.S. `<accountid>` is the same everywhere; all of the roles, users and permissions are created in the same account (which, I suppose, might be the reason why I face the error). What am I doing wrong?
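For what it's worth, the same chain can be exercised outside the console with the CLI, which sometimes surfaces a clearer error. A sketch, with the credential values to be filled in from the first call's output:

```
# Hop 1: as admin-user-01, assume admin_group_role
aws sts assume-role \
  --role-arn arn:aws:iam::<accountid>:role/admin_group_role \
  --role-session-name hop1

# Export the Credentials returned by hop 1, then attempt hop 2
export AWS_ACCESS_KEY_ID=...       # AccessKeyId from hop 1
export AWS_SECRET_ACCESS_KEY=...   # SecretAccessKey from hop 1
export AWS_SESSION_TOKEN=...       # SessionToken from hop 1

aws sts assume-role \
  --role-arn arn:aws:iam::<accountid>:role/test-role \
  --role-session-name hop2
```

Note that role-chained sessions are capped at a one-hour duration regardless of the role's configured maximum session duration.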
2 answers · 0 votes · 6 views · asked a month ago

Can't see EBS Snapshot tags from other accounts

Hi, I have private snapshots in one account (source) that I have shared with another account (target). I am able to see the snapshots themselves from the target account, but the tags are not available, either in the console or via the CLI. This makes it impossible to filter for a desired snapshot from the target account.

For background, the user in the target account has the following policy in effect:

```
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*"
```

Here's an example of what I'm seeing. From the source account:

```
$ aws --region us-east-2 ec2 describe-snapshots --snapshot-ids snap-XXXXX
{
    "Snapshots": [
        {
            "Description": "snapshot for testing",
            "VolumeSize": 50,
            "Tags": [
                {
                    "Value": "test-snapshot",
                    "Key": "Name"
                }
            ],
            "Encrypted": true,
            "VolumeId": "vol-XXXXX",
            "State": "completed",
            "KmsKeyId": "arn:aws:kms:us-east-2:XXXXX:key/mrk-XXXXX",
            "StartTime": "2022-04-19T18:29:36.069Z",
            "Progress": "100%",
            "OwnerId": "XXXXX",
            "SnapshotId": "snap-XXXXX"
        }
    ]
}
```

But from the target account:

```
$ aws --region us-east-2 ec2 describe-snapshots --owner-ids 012345678900 --snapshot-ids snap-11111111111111111
{
    "Snapshots": [
        {
            "Description": "snapshot for testing",
            "VolumeSize": 50,
            "Encrypted": true,
            "VolumeId": "vol-22222222222222222",
            "State": "completed",
            "KmsKeyId": "arn:aws:kms:us-east-2:012345678900:key/mrk-00000000000000000000000000000000",
            "StartTime": "2022-04-19T18:29:36.069Z",
            "Progress": "100%",
            "OwnerId": "012345678900",
            "SnapshotId": "snap-11111111111111111"
        }
    ]
}
```

Any ideas on what's going on here? Cheers!
1 answer · 0 votes · 4 views · asked a month ago

EC2 Instance Status Check fails when created by CloudFormation template

I have created a CloudFormation stack using the below template in the **us-east-1** and **ap-south-1** regions:

```
AWSTemplateFormatVersion: "2010-09-09"
Description: Template for node-aws-ec2-github-actions tutorial
Resources:
  InstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Sample Security Group
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
  EC2Instance:
    Type: "AWS::EC2::Instance"
    Properties:
      ImageId: "ami-0d2986f2e8c0f7d01" #Another comment -- This is a Linux AMI
      InstanceType: t2.micro
      KeyName: node-ec2-github-actions-key
      SecurityGroups:
        - Ref: InstanceSecurityGroup
      BlockDeviceMappings:
        - DeviceName: /dev/sda1
          Ebs:
            VolumeSize: 8
            DeleteOnTermination: true
      Tags:
        - Key: Name
          Value: Node-Ec2-Github-Actions
  EIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !Ref EC2Instance
Outputs:
  InstanceId:
    Description: InstanceId of the newly created EC2 instance
    Value:
      Ref: EC2Instance
  PublicIP:
    Description: Elastic IP
    Value:
      Ref: EIP
```

The stack executes successfully and all the resources are created. But unfortunately, once the EC2 status checks are initialized, the instance status check fails and I am not able to reach the instance using SSH. I have tried creating an instance manually as the same IAM user, and that works perfectly.

These are the policies I have attached to the IAM user.

Managed policies:

* AmazonEC2FullAccess
* AWSCloudFormationFullAccess

Inline policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:GetRole",
                "iam:GetInstanceProfile",
                "iam:DeleteRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteRole",
                "iam:UpdateRole",
                "iam:PutRolePolicy",
                "iam:AddRoleToInstanceProfile"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListAllMyBuckets",
                "s3:CreateBucket",
                "s3:DeleteObject",
                "s3:DeleteBucket"
            ],
            "Resource": "*"
        }
    ]
}
```

Thanks in advance for helping out. Have a good day.
1 answer · 0 votes · 5 views · asked a month ago

Access S3 files from Unity for mobile development

I'm trying to configure the AWS S3 service to download the files included in a bucket using Unity for mobile. I downloaded the SDK package and got it installed. From the AWS console I:

* set up an IAM policy and roles for unauthenticated users
* created a Cognito identity pool and got its id
* set up the S3 bucket and its policy using the generator, including the **arn:aws:iam::{id}:role/{cognito unauth role}** and the resource **arn:aws:s3:::{bucket name}/***.

In code I set credentials and region and create CognitoAWSCredentials (C# used):

```C#
_credentials = new CognitoAWSCredentials(IdentityPoolId, _CognitoIdentityRegion);
```

then I create the client:

```C#
_s3Client = new AmazonS3Client(_credentials, RegionEndpoint.EUCentral1); // the region is the same as in _CognitoIdentityRegion
```

I then try to use the client to get my files (in bucket-name subfolders):

```
private void GetAWSObject(string S3BucketName, string folder, string sampleFileName, IAmazonS3 s3Client)
{
    string message = string.Format("fetching {0} from bucket {1}", sampleFileName, S3BucketName);
    Debug.LogWarning(message);

    s3Client.GetObjectAsync(S3BucketName, folder + "/" + sampleFileName, (responseObj) =>
    {
        var response = responseObj.Response;
        if (response.ResponseStream != null)
        {
            string path = Application.persistentDataPath + "/" + folder + "/" + sampleFileName;
            Debug.LogWarning("\nDownload path AWS: " + path);
            using (var fs = System.IO.File.Create(path))
            {
                byte[] buffer = new byte[81920];
                int count;
                while ((count = response.ResponseStream.Read(buffer, 0, buffer.Length)) != 0)
                    fs.Write(buffer, 0, count);
                fs.Flush();
            }
        }
        else
        {
            Debug.LogWarning("-----> response.ResponseStream is null");
        }
    });
}
```

At this point I cannot debug into the async method, I don't get any kind of error, I don't get any file downloaded, and I cannot even check whether the connection to AWS S3 worked in some part of the script. What am I doing wrong? Thanks a lot for the help!
0 answers · 0 votes · 3 views · asked a month ago

AWS Backup for AWS Organizations IAM Configuration Issue

I am having issues setting up the required IAM access for cross-account backups. As I understand the requirements, there are four places to configure IAM access:

1. Source account (management account) backup vault
2. Source account (management account) resource assignment
3. Target account backup vault
4. Target account IAM access role

From the AWS Backup Developer Guide (p. 162) I understand that the IAM roles in the source and target accounts, the backup vaults, and the backup vault permissions need to match. I have the following configured:

* Source account backup vault access – "Allow Access to Backup Vault from Organisation"
* Source account resource assignment – role with the default policy called "AWSBackupOrganizationAdminAccess"
* Target account backup vault access – "Allow Access to Backup Vault from Organisation"
* Target account IAM access role – role with the default policy called "AWSBackupOrganizationAdminAccess"

I have followed the setup guide to enable cross-account backups for my AWS Organization. When I run a backup job for an EC2 server in the target account I get the following error:

> Your backup job failed as AWS Backup does not have permission to describe resource <aws ec2 arn>

I assume that somewhere I do not have the IAM access configured correctly. As there are four places where I can configure IAM access, how do I track down where the issue is?
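For reference, the organization-wide vault access policy that the "allow from organization" option corresponds to typically looks something like the sketch below (the organization ID is a placeholder):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "backup:CopyIntoBackupVault",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:PrincipalOrgID": "o-exampleorgid" }
      }
    }
  ]
}
```

The "does not have permission to describe resource" message, on the other hand, usually points at the IAM role selected in the backup plan's resource assignment rather than at the vault policies, so that role is worth checking for the EC2 read permissions carried by the AWS-managed `AWSBackupServiceRolePolicyForBackup` policy.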
1 answer · 0 votes · 7 views · asked a month ago

Required Capabilities in a CloudFormation Template

I am getting an exception when deploying a CloudFormation template: `Requires capabilities : [CAPABILITY_IAM]`. I have done some research and found out that when using IAM resources in the template, we have to explicitly tell AWS that we are aware of the IAM resources in the template. I have done that. Below are my commands:

```
$ ./update.sh ScalableAppCore AppServers.yml AppParameterCore.json --capabilities CAPABILITY_IAM
$ ./update.sh ScalableAppCore AppServers.yml AppParameterCore.json --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
$ ./create.sh ScalableAppCore AppServers.yml AppParameterCore.json --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND
```

I tried all three commands, but the output still shows:

> An error occurred (InsufficientCapabilitiesException) when calling the UpdateStack operation: Requires capabilities : [CAPABILITY_IAM]

Here is the actual code. This is the role I have created for S3:

```
IamS3Role:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
      - "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - ec2.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    Path: /
```

Instance profile attachment:

```
ProfileWithRolesForApp:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: "/"
    Roles:
      - !Ref IamS3Role
```

Please let me know where I am wrong. Thanks in advance.
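Since `update.sh` and `create.sh` are local wrapper scripts whose contents aren't shown here, one thing to verify is that `--capabilities` is actually forwarded to the underlying `aws cloudformation` call; the flag only takes effect when it reaches the API call itself. A sketch of what that inner call would need to look like, using the stack and file names from the commands above:

```
aws cloudformation update-stack \
  --stack-name ScalableAppCore \
  --template-body file://AppServers.yml \
  --parameters file://AppParameterCore.json \
  --capabilities CAPABILITY_IAM
```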
2 answers · 0 votes · 23 views · asked 2 months ago

Access error when going to the S3 console - 403 Forbidden error for all the S3 buckets

Hi, today - without having performed any specific operation - I got the following error when accessing the S3 console at https://s3.console.aws.amazon.com/:

> Thanks for signing up with Amazon Web Services. Your services may take up to 24 hours to fully activate. If you're unable to access AWS services after that time, here are a few things you can do to expedite the process:
> Make sure you provided all necessary information during signup. Complete your AWS registration. Check your email to see if you have received any requests for additional information. If you have, please respond to those emails with the information requested.
> Verify your credit card information is correct. Also, check your credit card activity to see if there's a $1 authorization (this is not a charge). You may need to contact your card issuer to approve the authorization. If the problem persists, please contact Support.

Furthermore, when trying to access any S3 buckets which belong to the same organisation and were public (static websites), we got:

> 403 Forbidden
> Code: AllAccessDisabled
> Message: All access to this object has been disabled
> RequestId: 4AWKPXHEKK4R23B4
> HostId: yP4BnTua4EXv2MjpPpSZip2gIrifx2xZ7ckCkMNGKjFjujJzuMMQUlgKxQi9GXMPEGdjnPrR6G0=

At the moment I cannot see the S3 console, and all the public websites inside those S3 buckets return a 403 Forbidden error. Do you have any advice on what could have happened? Thanks.
1 answer · 0 votes · 4 views · asked 2 months ago

How does an EC2 instance assume an IAM Role?

I am working through the Security Learning Plan and was recently watching the video about AWS Secrets Manager. The real question that concerns me is why using AWS Secrets Manager is better than, say, storing encrypted credentials in a config file. I know that there are a couple of aspects where Secrets Manager is obviously a lot better than a config file (credential rotation, a central point of maintenance for credentials), but these don't answer my question.

To clarify, let me give you an example. Let's say I have an EC2 instance running Tomcat, and Tomcat needs credentials to connect to a DB. I store the creds in Secrets Manager and use the Java API to let Tomcat retrieve them. I guess (I didn't try it out yet) I need to grant the required permissions to the EC2 instance by assigning it an IAM role with appropriate permissions (maybe this assumption is incorrect; if so, how would I do it instead?). And now come my questions:

1. How does Secrets Manager know the credential request is legitimate?
2. How will the HTTP query Tomcat sends authenticate against Secrets Manager?
3. Will Secrets Manager see that a Java process is querying, or will it only see a request coming from instance i-something?
4. If the answer to 3 is "Secrets Manager sees only the instance querying", then how can I prevent another process on the same box from querying the DB creds?

If I chose the solution with the config file, one of the major security drawbacks is that any intruder on the box can read the files, reverse the code that encrypts the creds, and decrypt them. I would like to know if Secrets Manager provides a good solution for this fundamental problem. I didn't find any posts discussing this in the required detail (e.g. [https://repost.aws/questions/QUAsOpdhR-QAKVZEL0nRGTkw/aws-secrets-manager-with-boto-3-in-python](https://repost.aws/questions/QUAsOpdhR-QAKVZEL0nRGTkw/aws-secrets-manager-with-boto-3-in-python)). I hope I explained the problem well enough. Thanks for every answer.
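On the mechanics behind questions 1-3: an instance profile attached to the EC2 instance exposes short-lived role credentials through the instance metadata service, the SDKs pick them up automatically and sign requests with them, so Secrets Manager sees a signed request from the instance's role rather than from a particular process. A quick way to see those credentials on the box (IMDSv2; the role name in the path is whatever role is attached to the instance profile):

```
# Get a session token for IMDSv2
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# List the role attached to the instance profile, then fetch its temporary credentials
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
```

This also speaks to question 4: any process on the instance that can reach the metadata endpoint can obtain the same credentials.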
2 answers · 0 votes · 9 views · asked 2 months ago