Questions tagged with AWS Transfer Family


Logical Directories not working with multiple users

Good day. I've implemented the custom IdP using the provided template (aws-transfer-custom-idp-secrets-manager-apig.template.yml). I created a user in Secrets Manager and attached a role containing the policy below, in which I explicitly specify the user's username as the directory, indicated as "user1" for demonstration purposes. I am then able to successfully authenticate via SSH key or username/password. I then created a new role/policy for a new user and specified the new user's directory as "user2" in the policy. The problem is that the new user authenticates fine, but upon login it generates an "access denied" error and does not seem to place the user in the logical directory specified in Secrets Manager. This error persists with each new user I've attempted to create using the same details as the initial user1. Please assist; I've attached the user record as stored in Secrets Manager as well as the policy below for your perusal.

Secrets Manager user, stored as plaintext under "SFTP/user2":

```
{
  "Password": "password",
  "Role": "arn:aws:iam::111111111111:role/rolename",
  "PublicKey": "ssh-rsa AAAA",
  "HomeDirectoryType": "LOGICAL",
  "HomeDirectoryDetails": "[{\"Entry\": \"/\", \"Target\": \"/bucketname/user2\"}]"
}
```

Policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucketname"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/user2/in/*",
        "arn:aws:s3:::bucketname/user2/out/*"
      ]
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Deny",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::bucketname/user2/"
    }
  ]
}
```

Note, this policy works for our use case in that it allows a user to GET/PUT to the in/out folders while denying PUT at their logical root. The S3 structure is as follows: bucketname/user2/folders, and again it works with the first user created as user1. Thanks
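For reference, a minimal sketch of how the HomeDirectoryDetails value in the secret can be built and sanity-checked. Transfer Family expects this field to be a string containing a JSON array, and a malformed or double-escaped string here is a common cause of login failures. The function name and the secret values are hypothetical placeholders, not the actual deployment:

```python
import json

def build_home_directory_details(bucket, username):
    """Build the HomeDirectoryDetails value for a Transfer Family user
    record. The field must be a *string* containing a JSON array of
    Entry/Target mappings, not a raw JSON array."""
    mappings = [{"Entry": "/", "Target": f"/{bucket}/{username}"}]
    return json.dumps(mappings)

# Hypothetical secret body mirroring the record above (placeholder values):
secret = {
    "Password": "password",
    "Role": "arn:aws:iam::111111111111:role/rolename",
    "HomeDirectoryType": "LOGICAL",
    "HomeDirectoryDetails": build_home_directory_details("bucketname", "user2"),
}

# Round-trip check: the stored string must parse back to the mapping list.
parsed = json.loads(secret["HomeDirectoryDetails"])
assert parsed == [{"Entry": "/", "Target": "/bucketname/user2"}]
```

Running this round-trip against each new user's secret is a quick way to confirm the escaping survived the copy into Secrets Manager.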
2 answers · 0 votes · 30 views
zayneR, asked 2 years ago

FTPS - support for scope down policy?

Hello, I am looking for guidance on setting up a scope-down policy for FTPS users on the Transfer Family service. Within the Lambda function that does the user authentication, I am attempting to add the policy JSON to the response body as described in the documentation:

```
response = {
    Role: 'arn:aws:iam::xxxxxxx:role/assumedRoleForTransferService',
    Policy: myPolicyJSON,
    HomeDirectory: ''
};
```

The scope-down policy looks similar to what SFTP scope-down users would use, except I am not using the transfer variables (e.g. ${transfer:HomeDirectory}); I suspect they don't work because with FTPS there are no "managed" users to map the variables to. Instead my Lambda dynamically replaces variables in the policy depending on logic within the Lambda. Adding the scope-down policy to the Lambda response creates an error when connecting to the server. Removing the scope-down policy from the Lambda allows me to connect and upload, but then I am not restricted within the bucket. My user scope-down policy JSON looks like this prior to replacing the dynamic variables with the appropriate user paths:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "DYNAMIC_USER_VARIABLE/*",
            "DYNAMIC_USER_VARIABLE"
          ]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:GetObjectVersion",
        "s3:GetObjectACL",
        "s3:PutObjectACL"
      ],
      "Resource": "arn:aws:s3:::mybucket/DYNAMIC_USER_VARIABLE/*"
    }
  ]
}
```

Are scope-down policies a part of the FTPS service? If so, is there any glaring issue in my policy JSON above? Thanks in advance!
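The substitution step described above can be sketched in Python. One detail worth checking: in the identity-provider response, the Policy field must be a single JSON string, not a nested JSON object, so the template is serialized with json.dumps before the per-user token is replaced. The function name, username, and abbreviated action list are hypothetical; DYNAMIC_USER_VARIABLE is the questioner's own placeholder token:

```python
import json

# Abbreviated scope-down policy template using the questioner's
# DYNAMIC_USER_VARIABLE placeholder token.
POLICY_TEMPLATE = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfUserFolder",
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::mybucket",
            "Condition": {
                "StringLike": {
                    "s3:prefix": ["DYNAMIC_USER_VARIABLE/*", "DYNAMIC_USER_VARIABLE"]
                }
            },
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::mybucket/DYNAMIC_USER_VARIABLE/*",
        },
    ],
}

def build_response(username, role_arn):
    """Build the identity-provider response: serialize the policy to a
    JSON *string* first, then substitute the per-user token."""
    policy_json = json.dumps(POLICY_TEMPLATE).replace(
        "DYNAMIC_USER_VARIABLE", username
    )
    return {
        "Role": role_arn,
        "Policy": policy_json,
        "HomeDirectory": f"/mybucket/{username}",
    }

resp = build_response(
    "alice", "arn:aws:iam::111111111111:role/assumedRoleForTransferService"
)
assert "arn:aws:s3:::mybucket/alice/*" in resp["Policy"]
```

Substituting into the serialized string (rather than the dict) keeps the replacement from disturbing the JSON structure, since the token only ever appears inside string values.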
1 answer · 0 votes · 28 views
awsAMDR, asked 3 years ago

Unable to specify bucket with custom identity provider

I've customized my identity provider using the template and instructions available here: https://docs.aws.amazon.com/transfer/latest/userguide/authenticating-users.html

I'm able to get a correct response from my API and successfully log in while testing in AWS Transfer and with FileZilla. However, it's not actually allowing a user to view existing files or upload new files. Here is the response from the identity provider API:

```
{
  "Policy": "<policy granting full access to bucket>",
  "Role": "<role with full access to S3>",
  "HomeDirectory": "/<my bucket>/test"
}
```

I'm assuming this is acceptable based on the information on these pages:
https://aws.amazon.com/blogs/storage/simplify-your-aws-sftp-structure-with-chroot-and-logical-directories/
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-transfer-user.html

However, FileZilla gives me the following log:

```
Status: Connecting to sftp.mydomain.com...
Status: Using username "test".
Status: Connected to 123456.server.transfer.us-east-1.amazonaws.com
Status: Retrieving directory listing...
Status: Listing directory /<my bucket>/test
Error:  Unknown eventType 37
Error:  Failed to retrieve directory listing
```

So I tried using logical directories instead, using the information in the previous links. This is an example response from the API:

```
{
  "Policy": "<policy granting full access to bucket>",
  "Role": "<role with full access to S3>",
  "HomeDirectoryType": "LOGICAL",
  "HomeDirectoryDetails": [
    { "Entry": "/", "Target": "/<my bucket>/test" }
  ]
}
```

I updated my UserConfigResponseModel in API Gateway to this:

```
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "UserUserConfig",
  "type": "object",
  "properties": {
    "Role": {"type": "string"},
    "Policy": {"type": "string"},
    "HomeDirectory": {"type": "string"},
    "HomeDirectoryType": {"type": "string"},
    "HomeDirectoryDetails": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "Entry": {"type": "string"},
          "Target": {"type": "string"}
        }
      }
    },
    "PublicKeys": {
      "type": "array",
      "items": {"type": "string"}
    }
  }
}
```

When I test this in AWS Transfer, I get the following response:

```
Unable to call identity provider: Unable to unmarshall response (We expected a VALUE token but got: START_ARRAY). Response Code: 200, Response Text: OK
```

All of this is very frustrating because the responses I am getting do not match what I would expect to see after reading the documentation. My question is this: how do I specify a bucket when using a custom identity provider in AWS Transfer?

Edited by: paul_hatcher on May 19, 2020 9:26 AM
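The "expected a VALUE token but got: START_ARRAY" error above suggests the caller expected HomeDirectoryDetails as a single string value rather than a JSON array, which matches how the Transfer Family custom-IdP response documents the field. A minimal sketch of a response built that way, with a hypothetical function name and placeholder role/bucket values:

```python
import json

def logical_home_response(role_arn, bucket, prefix):
    """Sketch of a custom-IdP response using logical directories.
    HomeDirectoryDetails is serialized to a JSON *string*, so the
    response contains a VALUE token rather than a START_ARRAY."""
    details = [{"Entry": "/", "Target": f"/{bucket}/{prefix}"}]
    return {
        "Role": role_arn,
        "HomeDirectoryType": "LOGICAL",
        "HomeDirectoryDetails": json.dumps(details),  # string, not array
    }

resp = logical_home_response(
    "arn:aws:iam::123456789012:role/transfer-role", "my-bucket", "test"
)
assert isinstance(resp["HomeDirectoryDetails"], str)
assert json.loads(resp["HomeDirectoryDetails"])[0]["Target"] == "/my-bucket/test"
```

Under this reading, the API Gateway response model's HomeDirectoryDetails property would be declared as `"type": "string"` rather than an array schema.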
1 answer · 0 votes · 205 views
asked 3 years ago