
IAM Policy - AWS Transfer Family


Hello,

This question may be a bit long-winded, since I'll describe the relevant background up front to hopefully avoid back-and-forth and arrive at a resolution. I appreciate your patience.

I have a Lambda function that is authenticating users via Okta for SFTP file transfers, and the Lambda function is called through an API Gateway. My company has many different clients, so we chose this route for authentication rather than creating user accounts for them in AWS. Everything has been working fine during my testing process except for one key piece of functionality.

Since we have many customers, we don't want them to be able to interact or even see another customer's folder within the dedicated S3 bucket. The directory structure has the main S3 bucket at the top level and within that bucket resides each customer's folder. From there, they can create subfolders, upload files, etc. I have created the IAM policy - which is an inline policy as part of an assumed role - as described in this document: https://docs.aws.amazon.com/transfer/latest/userguide/users-policies.html. My IAM policy looks exactly like the one shown in the "Creating a session policy for an Amazon S3 bucket" section of the documentation.
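For context, that session policy has roughly the following shape (reproduced from memory and abbreviated, so the linked doc remains the authoritative version; access is scoped entirely through the transfer variables):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::${transfer:HomeBucket}"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}
```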

The "transfer" variables are defined in the Lambda function. Unfortunately, those "transfer" variables do not seem to be getting passed to the IAM policy. When I look at the Transfer Family endpoint log, it is showing access denied after successfully connecting (confidential information is redacted):

<user>.39e979320fffb078 CONNECTED SourceIP=<source_ip> User=<user> HomeDir=/<s3_bucket>/<customer_folder>/ Client="SSH-2.0-Cyberduck/8.3.3.37544 (Mac OS X/12.4) (x86_64)" Role=arn:aws:iam::<account_id>:role/TransferS3AccessRole Kex=diffie-hellman-group-exchange-sha256 Ciphers=aes128-ctr,aes128-ctr

<user>.39e979320fffb078 ERROR Message="Access denied"

However, if I change the "transfer" variables in the Lambda function to include the actual bucket name and update the IAM policy accordingly, everything works as expected; well, almost everything. With this change, I am not able to restrict access and, thus, any customer could interact with any other customer's folders and files. Having the ability to restrict access by using the "transfer" variables is an integral piece of functionality. I've searched around the internet - including this forum - and cannot seem to find the answer to this problem.

Likely, I have overlooked something and hopefully it is an easy fix. Looking forward to getting this resolved. Thank you very much in advance!

5 Answers

Hello,

Thank you for sharing all the details. To your question about the transfer variables not being substituted, I have a few follow-up questions:

  • Do your transfer variables reside within the policy attached to the IAM Role associated with the Transfer User?

    • If so, this is not supported. The policy attached to the IAM Role must define its resources explicitly. What you want to do instead is create a stand-alone policy containing the variables and associate it directly with the user rather than with the IAM Role.
    • There are two policies in the documentation you linked [1]. The access policy defines access to the back-end storage (S3 in your case). It is required; without it, your users would not be able to access S3 resources at all. Here, you define permissions on the bucket resources based on your requirements and attach the policy to the IAM Role. You cannot use transfer variables in this policy.
      The second is the session policy, which is associated directly with the Transfer user and evaluated in real time when the user logs in. This is where you can use transfer variables such as ${transfer:HomeBucket} and ${transfer:HomeDirectory} to restrict access further on top of the access policy.
  • Does your setup involve double substitution? That is, you have defined your HomeDirectory value as /bucket/${transfer:UserName}, and then within the Policy field you grant permissions on the resource arn:aws:s3:::${transfer:HomeDirectory}.

    • In this scenario, policy evaluation substitutes the Resource section to arn:aws:s3:::bucket/${transfer:UserName}, but it does not then go on to substitute the remaining variable with the username used during the session (e.g. arn:aws:s3:::bucket/abc). Because the policy is now invalid, S3 expectedly throws errors when the user tries to access objects.
    • The resolution is to avoid the double-substitution scenario by using a combination of variables instead, for example arn:aws:s3:::${transfer:HomeBucket}/${transfer:UserName}, which evaluates to arn:aws:s3:::bucket/abc.
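To illustrate why the double substitution fails, here is a small sketch that mimics a single-pass variable substitution (this is an illustrative approximation, not Transfer Family's actual implementation; the session values are placeholders):

```python
import re

# illustrative session values (placeholders)
session_vars = {
    'transfer:HomeBucket': 'bucket',
    'transfer:UserName': 'abc',
    # HomeDirectory itself contains a variable -> the double-substitution trap
    'transfer:HomeDirectory': 'bucket/${transfer:UserName}',
}

def substitute_once(resource, variables):
    # one left-to-right pass; substituted text is never re-scanned
    return re.sub(r'\$\{([^}]+)\}',
                  lambda m: variables.get(m.group(1), m.group(0)),
                  resource)

# double substitution: a leftover variable makes the policy invalid
print(substitute_once('arn:aws:s3:::${transfer:HomeDirectory}', session_vars))
# arn:aws:s3:::bucket/${transfer:UserName}

# combining variables resolves fully in a single pass
print(substitute_once('arn:aws:s3:::${transfer:HomeBucket}/${transfer:UserName}', session_vars))
# arn:aws:s3:::bucket/abc
```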

And, regarding your goal of restricting users to their HomeDirectory, I would suggest Logical Directories as the way forward. Logical Directories provide chroot functionality, which lets you map each user to a particular directory and keeps them isolated. Utilizing Logical Directories also eliminates the need for session policies. [2]

References:

[1] https://docs.aws.amazon.com/transfer/latest/userguide/users-policies.html

[2] https://docs.aws.amazon.com/transfer/latest/userguide/logical-dir-mappings.html

Let me know if you have questions.

-- Sagar

answered 14 days ago

Hi Sagar. Thank you very much for responding! Yes, I currently have the transfer variables residing within the policy attached to the IAM Role. I also tried this with Logical Directories and ran into the same issue; my apologies, I should have mentioned that in the original post.

Based on your response, I am not exactly sure how this can be scalable. Essentially, I need the directories - or folders within the S3 bucket - to be dynamic for each user that is associated with a particular company based on the domain in their email address (and the folder name will match their domain). As an example, a user with domain 'xyz' in their email address will have access only to the 'xyz' folder within the S3 bucket.

We are leaving it up to the clients to set up their own users for SFTP transfers, so we don't know who those users will be ahead of time, and they can also add/subtract users as time goes on. That is why I was hoping to rely on the IAM Role with the attached inline Policy to decide which users should have access to which folders based on the Transfer Family variables within the Policy.

So I guess the question now is, how can I make this scalable for our situation? It looks like one method is to utilize ${transfer:UserName}, but I really don't care about their username since I don't want each user to have their own folder; instead, I would just like all of the users within that company to have access to their own company folder and no other company's folder.

I hope this clarifies the problem that I am facing, and please let me know if there are any additional details that I can provide. Hopefully there is a sensible solution to this. Thanks again for all your help!

answered 13 days ago

Hello,

Thank you for sharing additional details.

I would like to mention that ${transfer:*} variables are not supported within policies attached to IAM Roles. If you have them there, I suggest removing them, as they will not be substituted. Because they are not substituted, S3 treats them literally, which leads to the Access Denied errors. These variables are only applicable in session policies associated directly with the Transfer user, or as part of the HomeDirectory configuration (only ${transfer:UserName} in that case).

As for your use case, you want a dynamic mapping based on the username, specifically from its domain portion to a similarly named prefix in S3. Here, I would suggest the Logical Directories approach, which avoids the session-policy troubles altogether. Please refer to the following workflow:

  • The client connects to the server, and the Lambda function eventually receives the authentication request.
  • Lambda performs the necessary authentication checks with Okta and then builds the response to send back to the server (assuming authentication succeeded).
  • Within this response, you build the logical directory based on the domain portion of the username; Lambda has access to the username because it received the authentication request. (Reference snippet below.)
  • The server receives the user configuration and maps the user into the logical directory.

Sample reference code snippet:

import json

# parse the domain from the username
username = 'abc@xyz.com'
domain = username.split('@')[-1]

# build the logical directory mapping
# Entry  - what the client sees when they log in
# Target - the actual path in S3
entry = '/' + domain
target = '/bucket/' + domain
home_directory_details = json.dumps([{'Entry': entry, 'Target': target}])

print(home_directory_details)
# [{"Entry": "/xyz.com", "Target": "/bucket/xyz.com"}]

# Include home_directory_details as HomeDirectoryDetails within the user
# configuration returned to the server

Regarding permissions for the above use case: the IAM Role associated with the user needs permissions with explicitly defined resources, i.e. actual bucket names and paths (only the access policy section from the document [1]). With Logical Directories in place, even if you define permissions for the entire bucket, the user will only have visibility into the logical directory defined.
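Putting the workflow together, here is a minimal sketch of the user configuration the Lambda would return to the server (the role ARN and bucket name are placeholders, and the Okta check is elided; when HomeDirectoryDetails is returned, HomeDirectoryType must be LOGICAL):

```python
import json

def build_user_config(username, role_arn, bucket='bucket'):
    """Sketch of the user configuration a custom identity provider Lambda
    returns to Transfer Family after authentication succeeds.
    role_arn and bucket are placeholders for illustration."""
    domain = username.split('@')[-1]
    return {
        'Role': role_arn,
        # LOGICAL is required when HomeDirectoryDetails is used
        'HomeDirectoryType': 'LOGICAL',
        'HomeDirectoryDetails': json.dumps([
            {'Entry': '/' + domain, 'Target': '/' + bucket + '/' + domain}
        ]),
    }

config = build_user_config('abc@xyz.com',
                           'arn:aws:iam::111122223333:role/TransferS3AccessRole')
print(config['HomeDirectoryDetails'])
# [{"Entry": "/xyz.com", "Target": "/bucket/xyz.com"}]
```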

References:

[1] https://docs.aws.amazon.com/transfer/latest/userguide/users-policies.html

[2] https://docs.aws.amazon.com/transfer/latest/userguide/logical-dir-mappings.html

[3] https://aws.amazon.com/blogs/storage/simplify-your-aws-sftp-structure-with-chroot-and-logical-directories/

Let me know if you have further questions.

-- Sagar

answered 13 days ago

Thanks again, Sagar! I will give your suggestion a try and get back to you. Please allow me a few days to respond, as I am currently out of the office. Your responses are greatly appreciated, and I will follow up in the near future with the outcome!

answered 11 days ago

Sagar, I haven't forgotten about this. I got pulled onto another project, so this one is on hold for the moment (hopefully not for too much longer). I will definitely circle back as soon as I can. Thank you!

answered 6 days ago
