Questions tagged with AWS Transfer for SFTP
How do I SFTP to my storage space on AWS? That is the main reason I signed up for AWS.
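Assuming an AWS Transfer Family SFTP server is what's meant here, connecting is a standard SFTP client pointed at the server's endpoint (the server ID, region, user, and key below are placeholders):
```
# Endpoint format: <server-id>.server.transfer.<region>.amazonaws.com
sftp -i my-private-key myuser@s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com
```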
Hello Team, I am working on an AWS Transfer Family (SFTP) solution and need confirmation on whether this service can support both password and SSH-key-based authentication at the same time (i.e., in one login attempt where the user passes both, using any SFTP client such as FileZilla or WinSCP). I am using a Lambda-based identity provider, and I found that when I pass both a password and an SSH key in FileZilla, the password is never passed to the Lambda, so the code logic has to assume key-based authentication. Can someone please advise?
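For reference, this is the shape of the event my Lambda receives (values anonymized). When FileZilla attempts public-key authentication, the `password` field is empty, which is why the function never sees both factors in a single call:
```
{
  "username": "testuser",
  "password": "",
  "protocol": "SFTP",
  "serverId": "s-1234567890abcdef0",
  "sourceIp": "192.0.2.10"
}
```
and this is the response shape my function returns on the key-based path, so the service verifies the client's key itself (role ARN and key are placeholders):
```
{
  "Role": "arn:aws:iam::111122223333:role/sftp-access-role",
  "HomeDirectory": "/my-bucket/testuser",
  "PublicKeys": ["ssh-rsa AAAAB3NzaC1yc2E... testuser@example"]
}
```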
Hi,
We noticed that logical directory mappings have a limit of 50 per user, and we would like to have 1500. Is there a way to request an increase to this limit?
Thanks
First off, I am very new to AWS CloudFormation; I have been working on templates for a couple of months.
I am trying to create a CloudFormation template that creates an SFTP Transfer Family server and adds a custom hostname.
I was able to create the Route 53 hostname and it all works fine, except that the AWS Transfer Family dashboard does not show the hostname for the server.
I suspect it has to do with tags, as I found this [doc](https://docs.aws.amazon.com/transfer/latest/userguide/requirements-dns.html#requirements-use-r53).
I am using a parameter to get the HostedZoneId and use it via `HostedZoneId: !Ref HostedZoneIdParam` in the SFTPServerDNSRecord resource.
Is there a way to use that same parameter in a key/value pair, as in `Key: aws:transfer:route53HostedZoneId` / `Value: /hostedzone/!Ref HostedZoneIdParam`? See the sketch below.
Any assistance or guidance would be appreciated.
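For reference, here is the shape I'm experimenting with: a sketch that leans on CloudFormation's `!Sub` to interpolate the parameter into the tag value (the tag keys are the ones from the doc linked above; the hostname is a placeholder):
```
SFTPServer:
  Type: AWS::Transfer::Server
  Properties:
    EndpointType: PUBLIC
    Tags:
      # !Ref cannot be embedded inside a string, but !Sub can
      # interpolate the parameter into the /hostedzone/ prefix
      - Key: aws:transfer:route53HostedZoneId
        Value: !Sub "/hostedzone/${HostedZoneIdParam}"
      - Key: aws:transfer:customHostname
        Value: sftp.example.com
```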
Does AWS Transfer Family support the AIX platform?
How can I transfer data from AWS S3 to a Hetzner Storage Box?
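One simple approach is to stage the objects on a host that can reach both sides, then push them to the Storage Box, which supports SFTP and rsync over SSH. A sketch, with placeholder bucket, username, and hostname:
```
# Stage the bucket locally, then push; you may need the SSH port
# Hetzner documents for Storage Boxes (e.g. -e 'ssh -p23')
aws s3 sync s3://my-bucket /tmp/s3-export
rsync -av /tmp/s3-export/ u123456@u123456.your-storagebox.de:s3-export/
```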
Hi, I want to securely stream data from a rack I have at an ISP to AWS Lambda, and I am wondering what the best solution might be. I thought of some sort of VPN, and perhaps Kinesis to Lambda, but I am not sure how I would initiate that from the on-premises rack; that was a shot in the dark. I would appreciate any input. Thank you.
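To make the Kinesis idea concrete: initiated from the rack, a first test could be as simple as the sketch below (the stream name and payload are placeholders; over a VPN the call could target a Kinesis VPC interface endpoint instead of the public API, and a Lambda event source mapping on the stream would do the consuming):
```
# One-shot test from the rack host; a real producer would batch records
aws kinesis put-record \
  --stream-name rack-telemetry \
  --partition-key rack-01 \
  --data '{"sensor_id": "rack-01", "temp_c": 21.5}' \
  --cli-binary-format raw-in-base64-out
```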
Hi team,
I have an SFTP user that uses an sftp_role to put S3 objects into an encrypted S3 bucket (SSE-KMS with my own KMS key).
I modified the key policy to add another statement :
- sftp_role as principal
- actions =
```
[
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey",
"kms:DescribeKey"
]
```
- resource = `[myencryptedBucketArn, myencryptedBucketArn/*]`
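Put together, the statement I added to the key policy looked roughly like this (account ID and ARNs are anonymized placeholders):
```
{
  "Sid": "AllowSftpRoleUseOfKey",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/sftp_role"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey",
    "kms:DescribeKey"
  ],
  "Resource": [
    "arn:aws:s3:::myencryptedBucket",
    "arn:aws:s3:::myencryptedBucket/*"
  ]
}
```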
The SFTP user got an access denied error when copying files into the bucket.
When I moved this policy directly onto the sftp_role, it worked and the user was able to put files:
- new policy under sftp_role:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey",
"kms:DescribeKey"
],
"Resource": "arn:aws:kms:region:7sj14575037811:key/dafsf-ceasfasf4asf-asfaf-asfasfas123",
"Effect": "Allow"
}
]
}
```
I'm just wondering why it doesn't work when I put the role as principal in the key policy, but it works when I attach the policy to the sftp_role itself to grant it permission on the key.
Kind Regards
Hi,
I run an EC2 instance in the Ireland region with the GoAnywhere application, which receives SFTP traffic on port 922. On premises I normally receive a constant 5-10 Mbps inbound, but on the AWS instance I receive 1 Mbps with constant drops to 0 Mbps. The instance is a c5.xlarge. Has anyone faced a similar anomaly? The same anomaly happens when I transfer to the attached S3.
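In case it helps anyone reproduce this: one thing worth checking on the instance is whether it is hitting its network allowances. On Nitro instances (c5.xlarge is one) the ENA driver exposes allowance-exceeded counters; a sketch, assuming the interface is eth0:
```
# Non-zero, growing counters indicate the instance-level network
# allowance is being exceeded (bandwidth or packets per second)
ethtool -S eth0 | grep -E "allowance_exceeded"
```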
Hi,
I'm currently facing a problem trying to create a private SFTP server (deployed in a VPC) using AWS Transfer Family.
So here are the steps I followed:
- I started an EC2 instance in one of the three subnets associated with the SFTP server (created in a later step)
- Those subnets are private
- I connected to the EC2 instance using Session Manager
- I created an SSH key named sftp_key to connect to the SFTP server
- I created an IAM role for the Transfer service:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "transfer.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceAccount": "<AccountId>"
},
"ArnLike": {
"aws:SourceArn": "arn:aws:transfer:eu-west-1:<AccountId>:server/*"
}
}
}
]
}
```
- Attached an inline policy to this role:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::<BucketName>"
]
},
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObjectVersion",
"s3:DeleteObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::<BucketName>/*"
}
]
}
```
- Created a role for log management. This role has the following inline policy:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CreateLogsForTransfer",
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:DescribeLogStreams",
"logs:CreateLogGroup",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*"
}
]
}
```
- Created an SFTP Server using the CLI like this:
```
aws transfer create-server --identity-provider-type SERVICE_MANAGED --protocols SFTP --domain S3 --endpoint-type VPC --endpoint-details SubnetIds=$SUBNET_IDS,VpcId=$VPC_ID,SecurityGroupIds=$SG_ID --logging-role $LOGGINGROLEARN --security-policy-name $SECURITY_POLICY
```
SUBNET_IDS: list of the three private subnet IDs
VPC_ID: the ID of the VPC in question
SG_ID: ID of a security group that allows all access on port 22 (TCP) from the same subnets (SUBNET_IDS)
LOGGINGROLEARN: ARN of the logging role
SECURITY_POLICY=TransferSecurityPolicy-2020-06
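(For the connection test at the end: this is roughly how I looked up the <ServerPrivateIp> used below, by walking from the server to its VPC endpoint's network interfaces. A sketch:)
```
# Find the VPC endpoint that backs the Transfer server...
VPCE_ID=$(aws transfer describe-server --server-id $SERVERID \
  --query 'Server.EndpointDetails.VpcEndpointId' --output text)

# ...then its ENIs, and finally their private IPs
ENI_IDS=$(aws ec2 describe-vpc-endpoints --vpc-endpoint-ids $VPCE_ID \
  --query 'VpcEndpoints[0].NetworkInterfaceIds' --output text)
aws ec2 describe-network-interfaces --network-interface-ids $ENI_IDS \
  --query 'NetworkInterfaces[*].PrivateIpAddress' --output text
```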
- Created a user with the CLI:
```
aws transfer create-user --home-directory $DIRECTORY --policy file://sftp-scope-down-policy.json --role $ROLEARN --server-id $SERVERID --user-name $1 --ssh-public-key-body "$SSHKEYBODY"
```
DIRECTORY=/<BucketName>/<userName>
ROLEARN: ARN of the role created earlier
SSHKEYBODY: public key of the SSH key created on the EC2 instance
sftp-scope-down-policy.json content:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::${transfer:HomeBucket}"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"${transfer:UserName}/*",
"${transfer:UserName}"
]
}
}
},
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
}
]
}
```
- A VPC endpoint exists in the three subnets for each of the following services (see the sketch after this list):
- com.amazonaws.eu-west-1.ec2
- com.amazonaws.eu-west-1.ssm
- com.amazonaws.eu-west-1.ssmmessages
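(For completeness, roughly how those endpoints were created; a sketch with placeholder subnet IDs, one call per service. Note that `--subnet-ids` takes a space-separated list, unlike the comma-separated shorthand used by create-server above.)
```
aws ec2 create-vpc-endpoint \
  --vpc-id $VPC_ID \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.eu-west-1.ssm \
  --subnet-ids subnet-aaa subnet-bbb subnet-ccc \
  --security-group-ids $SG_ID
```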
***So here is the problem:***
I tried to connect to the SFTP server from the EC2 instance launched in the first step, using this command:
```
sftp -vvv -i sftp_key <userName>@<ServerPrivateIp>
```
The SSH logs show that authentication succeeded, but immediately afterwards the connection was closed:
```
debug1: Authentication succeeded (publickey).
Authenticated to <ServerPrivateIp> ([<ServerPrivateIp>]:22).
```
No logs are created in CloudWatch Logs, and I can see nothing special in the CloudTrail logs.
Can someone explain what I missed?
Hi all,
I've set up an SFTP server with AWS Transfer Family, using an S3 bucket named "sftp-server" as storage. I created "subfolder01", "subfolder02", "subfolder03", etc. in the bucket.
I defined an SFTP user and set "sftp-server" as his restricted home folder. I want to give him read/write permissions to "subfolder01" and "subfolder02" only, with no access to all the other subfolders. But when the user connects, he sees an empty listing of his home folder, and he can only access the two subfolders if he manually types the "subfolder01/" or "subfolder02/" path in FileZilla. I would like him to see the list of all the subfolders when he connects, or better, to see only the two subfolders that he has access to.
This is the policy assigned to the user's role:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::sftp-server"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:PutObjectAcl",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::sftp-server/subfolder01/*",
        "arn:aws:s3:::sftp-server/subfolder02/*"
      ]
    }
  ]
}
```
and this is the trust policy (Trusted Entities) of his role:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "transfer.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
Can you please help me?
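In case it helps anyone suggest a fix, my guess is that the listing statement would need an `s3:prefix` condition, something like the sketch below (allowing the empty prefix is what permits listing the home folder root at all), but I haven't confirmed that this produces the filtered listing I'm after:
```
{
  "Sid": "ListOnlyPermittedSubfolders",
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::sftp-server",
  "Condition": {
    "StringLike": {
      "s3:prefix": [
        "",
        "subfolder01/*",
        "subfolder02/*"
      ]
    }
  }
}
```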