Questions tagged with AWS Transfer Family
Hi,
Does the SFTP gateway support SSE-C (server-side encryption with customer-provided keys)?
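For clarity, by SSE-C I mean the kind of request made directly against S3, as in this boto3 sketch (bucket and key names are placeholders); my question is whether the same works for objects arriving through the SFTP gateway:
```python
import os
import boto3

s3 = boto3.client("s3")
customer_key = os.urandom(32)  # 256-bit customer-provided key

# Direct-to-S3 upload using SSE-C; boto3 base64-encodes the key and
# computes its MD5 automatically.
s3.put_object(
    Bucket="my-bucket",    # placeholder
    Key="example.txt",     # placeholder
    Body=b"hello",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```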
Thanks,
Srikar.
I'm writing a Python script to back up my computer to my S3 bucket. However, I just saw the following restrictions on the number of steps, etc., in a workflow for the Transfer Family:
How do I back up my data to the S3 bucket on my account using Python? I'd also like to learn how to code in Python to access AWS services, so that's the main reason I'd like to do this. I've already written some code to create a Transfer server, start it, stop it, and delete it (see the sketch after the quoted limits below).
Limitations
Additionally, the following functional limits apply to workflows for Transfer Family:
The number of workflows per account is limited to 10.
The maximum timeout for custom steps is 30 minutes.
The maximum number of steps in a workflow is 8.
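For reference, my server-management code so far looks roughly like this (a simplified sketch; the region, bucket, and file names are placeholders), with the last lines showing the kind of plain boto3 upload I'm hoping to use for the backup itself:
```python
import boto3

# Manage the Transfer Family server (simplified from my existing script).
transfer = boto3.client("transfer", region_name="us-east-1")

server_id = transfer.create_server(Protocols=["SFTP"])["ServerId"]
transfer.start_server(ServerId=server_id)
# ... use the server ...
transfer.stop_server(ServerId=server_id)
transfer.delete_server(ServerId=server_id)

# For the backup itself, a direct upload to S3 would look like this:
s3 = boto3.client("s3")
s3.upload_file("/path/to/local/file", "my-backup-bucket", "backups/file")
```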
Hi all:
We are using the AWS Transfer Family service as our SFTP service.
Currently, we generate an SSH key pair for each vendor and add the public key to the vendor's account while creating it.
We then transfer the private key to the vendor, and with this they are able to log onto the account.
Let me know if this is the right approach or not.
One of the vendors says that transferring a private key is not safe and is asking us for the public key instead.
If I provide him the public key and have that public key attached to the account within AWS Transfer Family, he gets an authentication error.
Should we send them the public key or the private key? Is it safe to send them the private key?
Also, if he has generated his own key pair, is it okay if I attach his public key to the account?
Can someone who is an expert in this area clear up my confusion?
I appreciate any help with this.
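For reference, this is roughly how I attach a public key to a user's account today (boto3 sketch; the server ID, user name, and key body are placeholders):
```python
import boto3

transfer = boto3.client("transfer")

# Attach a public key (ours or the vendor's) to the Transfer Family user.
transfer.import_ssh_public_key(
    ServerId="s-1234567890abcdef0",      # placeholder server ID
    UserName="vendor-user",              # placeholder user name
    SshPublicKeyBody="ssh-rsa AAAA...",  # the PUBLIC half of the key pair
)
```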
Regards!
Venkata
As the subject asks. I've successfully used ed25519 keys over in EC2, but whenever I try to enter an ed25519 SSH key for a user at "SFTP, FTPS, & FTP Servers > server-id > username > Add key", I keep getting the error message "Enter a valid SSH public key". Just wondering if I'm doing something wrong or if it's not supported (yet?).
Hi,
We have FTP(S) and SFTP set up on AWS Transfer Family, in our own VPC, with Cognito as the custom identity provider (API Gateway + Lambda). We have configured it to accept usernames and passwords and are successfully using it in production.
We want to enable SSH key authentication for SFTP, where clients access and send data with their private key. We have the client's public key, but we are unclear on what the connection and data transfer flow is for our scenario.
Right now, we are totally in the dark on how to do this. How do we allow clients access via a private key, without a username/password configured, using our custom identity provider for AWS Transfer Family?
When the client tries to connect to the server, I'm guessing it will go through the custom identity provider (API Gateway + Lambda), but we're unsure how to allow the client to proceed to AWS Transfer Family, and where we should be storing and sending the public key.
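For context, our Lambda currently returns roughly this shape for a successful password login (a simplified Python sketch; the role ARN, bucket path, and the authenticate stub are placeholders for our real Cognito logic), and we don't know what the equivalent response should contain for key-based logins:
```python
def authenticate(username, password):
    # Stand-in for our existing Cognito username/password check.
    return True  # placeholder

def lambda_handler(event, context):
    # Simplified: in our API Gateway setup the username and password
    # actually arrive via the gateway request; shown as event fields here.
    if not authenticate(event["username"], event.get("password", "")):
        return {}  # an empty response rejects the login

    return {
        "Role": "arn:aws:iam::111111111111:role/transfer-access-role",  # placeholder ARN
        "HomeDirectory": "/our-bucket/" + event["username"],            # placeholder path
    }
```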
Any help or a pointer in the right direction would be appreciated. Thank you!
I am getting "access denied" for a user when WinSCP tries to list the directory structure: "Error listing directory '/.'".
I have the following policy for the user:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::BUCKET234"
      ]
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::BUCKET234/*"
    }
  ]
}
```
This is the trust relationship:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "transfer.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
There is no scope-down policy. What am I missing?
I'm following the CloudFormation template provided at the URL below to create an AWS SFTP service with API Gateway as the custom identity provider and Secrets Manager to store the user credentials. The API Gateway integrates the SFTP Transfer server with a Lambda function that processes the gateway request and queries Secrets Manager.
Is password authentication with API Gateway as the custom identity provider and EFS as the backend specifically supported in AWS? If so, can someone give me a hint on how to configure the entry in Secrets Manager to set the UID, GID, and secondary GIDs? I'm specifically looking for help on this; see the guessed secret format after the link below.
Most of the documentation covers Transfer Family only with S3 as the backend storage, including the examples on scope-down policies, etc. Any help on this requirement is highly appreciated.
https://aws.amazon.com/blogs/storage/enable-password-authentication-for-aws-transfer-for-sftp-using-aws-secrets-manager/
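For illustration, this is the shape I'm guessing the secret would need for EFS, extrapolated from the S3 examples in that post (the PosixProfile block and all values here are my assumptions, not confirmed):
```json
{
  "Password": "password",
  "Role": "arn:aws:iam::111111111111:role/transfer-efs-role",
  "HomeDirectory": "/fs-0123456789abcdef0/home/user1",
  "PosixProfile": "{\"Uid\": 1000, \"Gid\": 1000, \"SecondaryGids\": [1001, 1002]}"
}
```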
Good day
I've implemented the custom IdP using the provided template (aws-transfer-custom-idp-secrets-manager-apig.template.yml).

I've created a user in Secrets Manager and attached a role containing the policy below, in which I explicitly specify the user's username as the directory, indicated as "user1" for demonstration purposes. I am then able to successfully authenticate via SSH key or username/password methods.

I then created a new role/policy for a new user and specified the new user's directory as "user2" in the policy. The problem is that the new user authenticates fine; however, upon login it generates an "access denied" error and does not seem to place the user in the logical directory specified in Secrets Manager. This error persists with each new user I've attempted to create using the same details as the initial user1.

Please assist. I've attached the user format as inserted into Secrets Manager, as well as the policy, below for your perusal. Thanks
Secrets Manager user (Plaintext) stored as "SFTP/user2":
```json
{
  "Password": "password",
  "Role": "arn:aws:iam::111111111111:role/rolename",
  "PublicKey": "ssh-rsa AAAA",
  "HomeDirectoryType": "LOGICAL",
  "HomeDirectoryDetails": "[{\"Entry\": \"/\", \"Target\": \"/bucketname/user2\"}]"
}
```
POLICY:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::bucketname"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/user2/in/*",
        "arn:aws:s3:::bucketname/user2/out/*"
      ]
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Deny",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::bucketname/user2/"
    }
  ]
}
```
Note, this policy works for our use case in that it allows a user to GET/PUT in the in/out folders while denying PUT at their logical root. The S3 structure is as follows: bucketname/user2/folders. Again, it works for the first user created, user1.
Thanks
If using a custom identity provider, can the Lambda return a value in the user authentication response that indicates the user should operate in 'restricted' mode? The built-in provider has a checkbox, but the custom identity provider documentation doesn't mention any return value that communicates that the user was stored as 'restricted' and therefore should only be allowed to access the home folder.
I have yet to create a working scope-down policy that implements 'restricted' mode. All the examples continue to fail with 'Access Denied'. Setting the policy to allow read/write to the S3 bucket directly works, but obviously lets the user navigate throughout the S3 bucket.
Allowing the custom identity provider to specify 'Restricted' would eliminate the scope-down policy complexity.
Hi all,
I have enabled S3 versioning on the bucket connected to AWS Transfer Family, since I wanted to use the replication feature for a certain folder in the SFTP bucket.
Unfortunately, since I enabled it, I cannot download any file from the bucket over an SFTP connection. Uploads work fine, but downloads fail with "access denied".
I have a custom identity provider which returns the policy below when the user authenticates:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::${transfer:HomeBucket}"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"user-folder/*",
"user-folder"
]
}
}
},
{
"Sid": "AWSTransferRequirements",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Resource": "*"
},
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject*",
"s3:DeleteObjectVersion",
"s3:DeleteObject",
"s3:GetObjectVersion",
"s3:GetObjectAcl",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
}
]
}
```
I read that there is a data transfer cost between two AZs in different accounts.
I wonder if there is a data transfer cost between two AZs in the same VPC and the same account. Let's say one EC2 instance in AZ A is sending data to an EC2 instance in AZ B. What is the charge like?
Hello,
I am looking for guidance on setting up a scope-down policy for FTPS users on the Transfer Family service.
Within the Lambda function that does the user authentication, I am attempting to add the policy JSON to the response body as described in the documentation.
```
.....
response = {
    Role: 'arn:aws:iam::xxxxxxx:role/assumedRoleForTransferService',
    Policy: myPolicyJSON,
    HomeDirectory: ''
};
.....
```
The scope-down policy looks similar to what SFTP scope-down users would use, except I am not using the transfer variables (e.g. ${transfer:HomeDirectory}), as I suspect they don't work because with FTPS there are no "managed" users to map the variables to. Instead, my Lambda dynamically replaces variables in the policy depending on logic within the Lambda (see the sketch after the policy below).
Adding the scope-down policy to the Lambda response creates an error when connecting to the server. Removing the scope-down policy from the Lambda allows me to connect and upload, but then I am not restricted within the bucket.
My user scope-down policy JSON looks like this prior to replacing the dynamic variables with the appropriate user paths:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "DYNAMIC_USER_VARIABLE/*",
            "DYNAMIC_USER_VARIABLE"
          ]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:GetObjectVersion",
        "s3:GetObjectACL",
        "s3:PutObjectACL"
      ],
      "Resource": "arn:aws:s3:::mybucket/DYNAMIC_USER_VARIABLE/*"
    }
  ]
}
```
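For completeness, the substitution in my Lambda works roughly like this (a simplified Python sketch; the real handler has more logic, and the template is abbreviated to one statement):
```python
import json

# Abbreviated template using the DYNAMIC_USER_VARIABLE placeholder from above.
POLICY_TEMPLATE = """{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::mybucket/DYNAMIC_USER_VARIABLE/*"
    }
  ]
}"""

def build_response(username):
    # Substitute the per-user path, then return the response body;
    # the Policy field is the policy JSON as a string.
    policy = POLICY_TEMPLATE.replace("DYNAMIC_USER_VARIABLE", username)
    json.loads(policy)  # sanity-check that the result is valid JSON
    return {
        "Role": "arn:aws:iam::xxxxxxx:role/assumedRoleForTransferService",
        "Policy": policy,
        "HomeDirectory": "",
    }
```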
Are scope-down policies supported for the FTPS service? If so, is there any glaring issue in my policy JSON above?
Thanks in advance!