Questions tagged with AWS Transfer Family
Hi, I find that this is the second month I've been charged over $200 for AWS Transfer Family SFTP:S3. My question is: what specifically contributes to that cost, and is there a way to use storage without SFTP?
My account currently uses just a few things on AWS. First, a web app on Amplify that generates images and saves them to S3. Would that contribute to the cost? Is there a better way to handle it?
The other thing that uses S3 is uploading a couple of large (~2-4 GB) files via the console, which a third-party service then downloads, currently via a presigned URL. At one point I looked into creating a script to upload those files automatically, but I don't know whether that would have contributed.
I don't have (or don't believe I have) any kind of SFTP server set up. I just want to understand what falls under that higher-priced SFTP service. Any guidance would be appreciated. Thank you!
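For context on the cost question: as far as I can tell, Transfer Family bills an hourly fee for each server endpoint that exists (on the order of $0.30/hour, i.e. $200+ per month) whether or not any data moves, plus a per-GB transfer charge, so a forgotten server alone matches this bill. If no SFTP endpoint is actually needed, deleting the server and sticking with presigned URLs avoids that fixed cost. A minimal sketch of the presigned-URL flow, assuming boto3 and hypothetical bucket/key names:

```python
def object_key(prefix, filename):
    # Hypothetical key layout for the uploaded files
    return f"{prefix.rstrip('/')}/{filename}"

def presigned_get(bucket, key, expires=3600):
    import boto3  # deferred so object_key stays dependency-free
    s3 = boto3.client("s3")
    # The third party downloads straight from S3 -- only standard S3
    # request and data-transfer charges apply, no Transfer Family fee
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )
```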
Hi,
I am trying to set up an AWS Transfer Family SFTP server.
Here is my requirement:
1. Users must be authenticated via a third-party identity provider, which is Azure authentication in our case.
2. Once logged in, users should see two folders in their home directory, i.e. {transfer:user}/folder1 and {transfer:user}/folder2.
3. Users should be restricted to putting files in either folder1 or folder2, not in their home directory itself.
4. Users should be able to download files only if a specific tag is set on the object in S3.
So far, I have been able to achieve steps 1 and 2:
Step 1 -- custom authentication using Lambda.
Step 2 -- once the user authenticates successfully, the Lambda creates folder1 and folder2 in their home directory.
But when users log in, they cannot see folder1 and folder2 in their home directory, even though I can see the folders were created successfully in the S3 bucket.
Here is the IAM role attached to the Transfer server; I can't figure out what's wrong with it. Any help would be appreciated.
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadWriteS3",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::s3-bucket"
      ]
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::s3-bucket/*"
      ]
    },
    {
      "Sid": "DownloadAllowed",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/allowdownload": "yes"
        }
      },
      "Resource": [
        "arn:aws:s3:::s3-bucket/*"
      ]
    },
    {
      "Sid": "DownloadNotAllowed",
      "Effect": "Deny",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/allowdownload": "no"
        }
      },
      "Resource": [
        "arn:aws:s3:::s3-bucket/*"
      ]
    },
    {
      "Sid": "DenyMkdir",
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::s3-bucket/*/*/"
    }
  ]
}
```
Within the Lambda where user authentication happens, I am returning the user's home directory:
```
HomeDirectoryDetails = [{"Entry":"/","Target":"/s3-bucket/${transfer:UserName}"}]
```
I also tried the below, but no luck:
```
HomeDirectoryDetails = [{"Entry":"/folder1","Target":"/s3-bucket/${transfer:UserName}/folder1"},{"Entry":"/folder2","Target":"/s3-bucket/${transfer:UserName}/folder2"}]
```
The user gets a permission denied error when trying to run `ls` in their home directory:
```
sftp> ls
Couldn't read directory: Permission denied
```
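One thing that commonly causes exactly this symptom: in a custom identity provider's response, `HomeDirectoryType` must be set to `LOGICAL`, and `HomeDirectoryDetails` must be a JSON-encoded *string*, not a native list. A minimal sketch of the Lambda response under that assumption (the role ARN is a placeholder, and the Azure authentication step is omitted):

```python
import json

def lambda_handler(event, context):
    # ... authenticate the user against Azure AD here (omitted) ...
    user = event["username"]
    return {
        "Role": "arn:aws:iam::111122223333:role/transfer-user-role",  # placeholder
        "HomeDirectoryType": "LOGICAL",
        # Must be a JSON string -- returning a plain list here is a
        # frequent cause of "Permission denied" on ls
        "HomeDirectoryDetails": json.dumps([
            {"Entry": "/folder1", "Target": f"/s3-bucket/{user}/folder1"},
            {"Entry": "/folder2", "Target": f"/s3-bucket/{user}/folder2"},
        ]),
    }
```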
When creating a Transfer Family server in us-east-1, the Elastic IP address can't be assigned to the subnet when access is set to "Internet Facing"; the option is completely greyed out. However, when I created the VPC, the subnets and their components were created together automatically, the AZs were set, and the VPC is connected to an internet gateway. The Elastic IP address has the type "Public IP".
What's weirder is that I've used the same method to create a server in both us-east-2 and us-west-1 successfully. What else should I be checking?
I'm trying to build an SFTP server for an EFS that uses a Lambda function to check the username and password provided against a secret in AWS Secrets Manager.
I followed [this article](https://aws.amazon.com/blogs/storage/enable-password-authentication-for-aws-transfer-for-sftp-using-aws-secrets-manager/) but changed it a bit: I'm not using an API Gateway; I use the Lambda function directly as the identity provider, which fetches the following data from Secrets Manager:
```
"Role": "arn:aws:iam::xxxxxxxxxxx:role/my-transfer-role",
"PosixProfile": {
  "Uid": 1001,
  "Gid": 1001,
  "SecondaryGids": []
},
"HomeDirectory": "/"
```
So far I can only connect to the SFTP server, but I can't read or write what's on the EFS: `Message="Unable to list directory: permission denied for /"`
I created a role and attached a policy to Transfer with permissions on my EFS as explained in [this guide](https://docs.aws.amazon.com/transfer/latest/userguide/requirements-roles.html).
Is there something I'm missing in this configuration, please? Thanks
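One thing worth checking: for an EFS-backed server, the `HomeDirectory` the identity provider returns should start with the filesystem ID (`/fs-xxxxxxxx/...`), not just `/`, and the `PosixProfile` Uid/Gid must actually have read/execute permission on that directory in EFS (plus the role needs the `elasticfilesystem:Client*` permissions). A sketch of the Lambda response under those assumptions (the filesystem ID, path, and role ARN are placeholders):

```python
def lambda_handler(event, context):
    # ... check username/password against Secrets Manager here (omitted) ...
    return {
        "Role": "arn:aws:iam::111122223333:role/my-transfer-role",  # placeholder
        "PosixProfile": {"Uid": 1001, "Gid": 1001, "SecondaryGids": []},
        # For EFS the path must begin with the filesystem ID
        "HomeDirectory": "/fs-0123456789abcdef0/home/sftpuser",  # placeholder
    }
```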
Hi All,
Can anyone help with how to set up a custom identity provider for Transfer Family using Lambda or API Gateway? We have PingFederate identity management and Azure identity management. I have no idea how these can work with the Transfer Family server.
Please share details if anyone has already implemented a similar or identical use case.
Thank You
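For what it's worth, the Lambda option boils down to a single function that Transfer Family invokes with the login attempt; your code validates the credentials against PingFederate or Azure however you like and returns the user's role and home directory, while an empty response rejects the login. A rough skeleton, with a stubbed-out `check_idp` standing in for the real PingFederate/Azure call and placeholder ARN and bucket names:

```python
def check_idp(username, password):
    # Stand-in for a real call to PingFederate / Azure AD
    return bool(username) and password == "correct-horse"

def lambda_handler(event, context):
    # Fields Transfer Family passes to a custom identity provider
    username = event["username"]
    password = event.get("password", "")  # empty for SSH-key auth
    if not check_idp(username, password):
        return {}  # empty response = authentication rejected
    return {
        "Role": "arn:aws:iam::111122223333:role/transfer-user-role",  # placeholder
        "HomeDirectory": f"/my-bucket/{username}",  # placeholder bucket
    }
```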
Hi All,
We have set up an AWS Transfer Family server with AWS Directory Service (connected to Microsoft AD) authentication. Per our use case, once a user logs in to SFTP, they should be able to see two directories within their own folder:
{username}/folder1
{username}/folder2
I have set up the below access policy and IAM policy:
create-access CLI:
```
aws transfer create-access \
--home-directory-type LOGICAL \
--home-directory-mappings '[{"Entry":"/folder1","Target":"/bucket_name/${transfer:UserName}/folder1" },{ "Entry": "/folder2", "Target":"/bucket_name/${transfer:UserName}/folder2"}]' \
--role arn:aws:iam::account_id:role/iam_role \
--server-id s-1234567876454ert \
--external-id S-1-2-34-56789123-12345678-1234567898-1234
```
The access was created successfully.
The IAM role below is attached to the Transfer Family server and scoped to the S3 bucket.
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadWriteS3",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::bucket_name"
      ]
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObjectVersion",
        "s3:GetObjectACL",
        "s3:PutObjectACL"
      ],
      "Resource": [
        "arn:aws:s3:::bucket_name/${transfer:UserName}/*"
      ]
    }
  ]
}
```
When users log in to SFTP, they do not see folder1 & folder2 in their own directory. Can anyone help spot what's missing in the IAM policy?
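One documented gotcha that matches this symptom: with logical directory mappings, a mapped target that doesn't exist in S3 isn't shown in listings, and since S3 has no real folders, the `${transfer:UserName}/folder1/` prefixes only exist once something is stored under them. Creating a zero-byte placeholder object per folder may make them appear; a sketch, assuming boto3 and the folder names from the question:

```python
def placeholder_keys(username, folders=("folder1", "folder2")):
    # Zero-byte "directory" objects so the logical mapping targets exist
    return [f"{username}/{f}/" for f in folders]

def create_placeholders(bucket, username):
    import boto3  # deferred so placeholder_keys stays dependency-free
    s3 = boto3.client("s3")
    for key in placeholder_keys(username):
        s3.put_object(Bucket=bucket, Key=key)
```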
Thank You
Hi,
I am trying to create AWS Transfer Family access using the CLI. I am trying to add two folder mappings but am getting the error below.
```
aws transfer create-access --home-directory-type LOGICAL --home-directory-mappings [{"Entry":"/","Target":"/bucket_name/${transfer:Username}/folder1" },{ "Entry": "/", "Target":"/bucket_name/${transfer:Username}/folder2"}] --role arn:aws:iam::account_id:role/iam-role --server-id s-123456789ert43 --external-id S-1-2-34-123456789-1234567-123456789-1234
```
Error:
```
Error parsing parameter '--home-directory-mappings': Invalid JSON:
[{Entry:/,Target:/bucket_name//folder1
```
Any idea what's wrong with the CLI command? Thanks in advance.
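Two things stand out in the command. Unquoted, the shell word-splits the JSON and expands `${transfer:Username}` as an (empty) shell variable -- note the doubled slash in the error output. Also, both mappings use the same `Entry` value `/`, and the variable's documented casing is `${transfer:UserName}`. Single-quoting the argument and giving each mapping a distinct entry should parse; a sketch reusing the IDs from the question:

```shell
aws transfer create-access \
    --home-directory-type LOGICAL \
    --home-directory-mappings '[{"Entry":"/folder1","Target":"/bucket_name/${transfer:UserName}/folder1"},{"Entry":"/folder2","Target":"/bucket_name/${transfer:UserName}/folder2"}]' \
    --role arn:aws:iam::account_id:role/iam-role \
    --server-id s-123456789ert43 \
    --external-id S-1-2-34-123456789-1234567-123456789-1234
```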
Hi All,
We are trying to set up a simple directory structure in an S3 bucket for each user when they log in to the AWS Transfer Family SFTP server.
1. ${transfer:UserName}/folder1
2. ${transfer:UserName}/folder2
We have Active Directory group A added to the Transfer Family server, so only group A users are able to access it.
As soon as a user logs in to SFTP, they should see both child directories under their home directory and be able to transfer files to the respective directory.
Please advise how to achieve this?
I have set up an **FTPS** server using **AWS Transfer Family** but cannot connect.
My Identity Provider is Custom Lambda and **Endpoint Type is VPC/Internet Facing**. During configuration, I selected Public Subnet and Elastic IP.
I probably misconfigured the network components:
- VPC
- Subnets (1 public and 1 private)
- Elastic IP
I am using the WinSCP client, configured this way:
- File Protocol: FTP
- Encryption: TLS/SSL Explicit encryption
- Port number: 21
- Username / Password
```
Connection failed.
Login with USER first
```
Thanks
L
I've set up AWS Transfer Family servers in two different regions to test the sending functionality. However, even though the VPC is created, sending messages fails with either UNABLE_TO_CONNECT_TO_REMOTE_HOST_OR_IP or "File path not found". I'm using S3 for the document to send.
I've checked the IP address with a different program (Mendelson AS2) and it's able to connect fine; it was even able to send a test document. Despite that, sending through a Lambda function fails.
A few things I tried:
* Checking permissions: I'm able to connect and describe the server, the connectors, etc. with no problem, so it's not that.
* Connector with the wrong URL: I used the same URL as in Mendelson with the port attached at the end (http:/s-xxx:5080, in the format specified in [1] with the region). I also tried the URL without the port specified, and that didn't work either.
* Region issue: I thought a region mismatch could be the problem, since the Lambda was in us-west-1 while the AS2 server I was sending to is in us-east-2, so I created a different connector and had it send to itself in the same region. Still the same error about being unable to connect.
* Checked the CloudWatch logs: they actually report that everything sent successfully with a 200 code.
Weird things noticed:
* After the Lambda is triggered, it creates the expected failed and processing folders, but after the first few times it no longer saves the results. I sometimes get a .cms file and a .json file, but not every time, even though the CloudWatch logs are created correctly every time.
* The failed and processed folders somehow got created one folder above the folder the file was uploaded to (e.g., the structure is bucket/folder1/folder2/folder3 and the uploaded file was in folder3, but the failed and processing folders were created in folder2 instead of the expected folder3). This happened just once, though.
Additional question:
I can post this as a separate question if needed, but since it's related to my issue I figured I'd put it here as well.
* What's the transfer ID for? Is it supposed to be the execution ID? There doesn't seem to be an option to view the results of a transfer in the documentation [2].
References:
[1] https://docs.aws.amazon.com/transfer/latest/userguide/as2-end-to-end-example.html#as2-create-connector-example
[2] https://docs.aws.amazon.com/transfer/latest/userguide/API_StartFileTransfer.html
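On the additional question: `StartFileTransfer` returns a `TransferId`, which is not a workflow execution ID; as far as I can tell its main use is correlating a send with the connector's CloudWatch log entries, since there is no describe call to poll a transfer's result. A sketch, assuming boto3 and placeholder connector/bucket names:

```python
def send_file_paths(bucket, keys):
    # StartFileTransfer expects paths in the form /bucket-name/key
    return [f"/{bucket}/{k}" for k in keys]

def start_transfer(connector_id, bucket, keys):
    import boto3  # deferred so send_file_paths stays dependency-free
    client = boto3.client("transfer")
    resp = client.start_file_transfer(
        ConnectorId=connector_id,
        SendFilePaths=send_file_paths(bucket, keys),
    )
    # Use the TransferId to find this send in the CloudWatch logs
    return resp["TransferId"]
```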
After creating a VPC, two issues come up when trying to create the server. First, when choosing "Internet Facing" for access, the message "No Address Allocation Ids are currently available." appears; second, "At least one subnet must be specified" pops up when the VPC is selected. However, the options to select the availability zones are all greyed out and I can't choose a subnet.
I checked the VPC: there are 2 subnets already attached and the inbound/outbound rules are set to all traffic. What else am I missing?
Hi, I would like to host an FTP server using Transfer Family.
I was able to create a server and run a test. Now I am trying to test it from FileZilla.
Where do I find the host name?
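In case it helps: for a publicly accessible server, the console shows the hostname as "Endpoint" on the server's detail page, and it follows a predictable pattern. Note, though, that the plain FTP protocol is only offered on VPC-hosted endpoints, so for FTP the host is the VPC endpoint's DNS name (or an attached address) rather than the public pattern sketched below (the server ID and region here are placeholders):

```python
def transfer_endpoint(server_id, region):
    # Pattern used by publicly accessible Transfer Family endpoints
    return f"{server_id}.server.transfer.{region}.amazonaws.com"

# e.g. transfer_endpoint("s-1234567890abcdef0", "us-east-1")
```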