Questions tagged with AWS Transfer Family
Hello, I created this instance https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#InstanceDetails:instanceId=i-0ca9edd25728c4f62
The goal is to create an AD user from that instance, so both the AD and the EC2 instance are in the same VPC.
Question 1: I couldn't connect to the EC2 instance with RDP. I configured both the subnet's network ACL and the EC2 instance to accept RDP connections, with no effect.
Question 2: Eventually, I'd like to use this https://us-east-1.console.aws.amazon.com/transfer/home?region=us-east-1#/servers/s-d0e008162fc04aa1a to receive FTP file drops from the AD's users.
Is the network correctly configured?
Thank you!
Hello, I was trying to build an FTPS server using Transfer Family, but I wasn't able to build one successfully. Could someone explain in detail how to build one?
I tried browsing online for guidance, but all I could find was for building an SFTP server. I need help building a custom identity provider using a REST API and a Lambda function. I couldn't find the code for the Lambda function.
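For reference, a minimal sketch of what such an identity-provider Lambda can look like. Everything here is a placeholder (the hard-coded user table, role ARN, and home directory are made up); a real provider would look users up in Secrets Manager, DynamoDB, or your own store, and the exact event shape differs if you front the Lambda with API Gateway instead of invoking it directly:

```python
# Hypothetical hard-coded user store for illustration only; replace with
# a lookup against Secrets Manager, DynamoDB, or your own user database.
USERS = {
    "partner1": {
        "password": "change-me",  # placeholder credential
        "role_arn": "arn:aws:iam::123456789012:role/transfer-user-role",  # assumed ARN
        "home_dir": "/my-bucket/partner1",
    }
}

def lambda_handler(event, context):
    user = USERS.get(event.get("username", ""))
    # Returning an empty dict tells Transfer Family that authentication failed.
    if user is None:
        return {}

    if "password" in event:
        # Password flow: compare the submitted password against the store.
        if event["password"] != user["password"]:
            return {}
        response = {}
    else:
        # SSH-key flow: return the user's public keys and let the service
        # verify the key signature itself.
        response = {"PublicKeys": [user.get("ssh_public_key", "")]}

    response["Role"] = user["role_arn"]
    response["HomeDirectory"] = user["home_dir"]
    return response
```

In production you would obviously not compare plaintext passwords; this only shows the contract: empty dict on failure, and `Role` plus `HomeDirectory` (or `PublicKeys` for key auth) on success.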
Hi all,
I've recently started trying out AWS Transfer Family with AS2. According to the documentation, when sending AS2 messages or asynchronous MDNs to a trading partner's HTTPS endpoint, I must use a valid SSL certificate signed by a certificate authority (CA) that's trusted by AWS Transfer Family. Self-signed certificates are not supported. The list of trusted CAs can be found at https://www.amazontrust.com/repository/.
I am not sure which certificate to get and how to obtain it. Can someone guide me through the process of choosing the right SSL certificate and obtaining it from a trusted CA for AWS Transfer Family with AS2 HTTPS endpoints?
Thank you in advance!
From [customize-file-delivery-notifications-using-aws-transfer-family-managed-workflows](https://aws.amazon.com/blogs/storage/customize-file-delivery-notifications-using-aws-transfer-family-managed-workflows/) blog, it reads AWS Transfer Family is a secure transfer service that enables you to transfer files into and out of AWS storage services.

Does this mean Transfer Family supports transferring files from S3 to external servers outside of AWS?
Providing my use case for better understanding: I need to transfer large files, around 70-80 GB, to an external server using Akamai NetStorage.
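If the external server speaks SFTP, Transfer Family's SFTP connectors can push files out of S3 with the `StartFileTransfer` API. A minimal sketch (the connector ID, bucket, and key are placeholders, and the connector itself must already be configured with the remote host's URL and credentials):

```python
def build_transfer_request(connector_id, bucket, keys):
    """Build StartFileTransfer parameters; send paths are /<bucket>/<key>."""
    return {
        "ConnectorId": connector_id,
        "SendFilePaths": [f"/{bucket}/{key}" for key in keys],
    }

def send_files(connector_id, bucket, keys):
    # boto3 import is deferred so build_transfer_request stays testable offline.
    import boto3
    client = boto3.client("transfer")
    # Queues an outbound transfer through the pre-configured SFTP connector.
    return client.start_file_transfer(**build_transfer_request(connector_id, bucket, keys))
```

Whether this suits 70-80 GB objects and NetStorage specifically depends on the remote endpoint; it's worth checking the connector quotas in the docs before committing to this approach.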
Hi all,
I'm using the AWS Transfer Family service to transfer files using the AS2 protocol, and I'm having trouble whitelisting an IP or URL for the connector used by the service. Specifically, the connector does not have a static IP address, so I'm not sure what IP or URL I should whitelist on my partner's AS2 server.
I found a list of all the IP ranges used by AWS services at https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html, but I'm not sure which IP ranges I should put on the whitelist for the Transfer Family AS2 service connector. Is there a specific IP range or URL that I should whitelist for this purpose? Or is there a different approach I should take to configure network security rules for the connector?
Any help or guidance would be greatly appreciated!
Thanks in advance for your help!
Hello,
I just want to enable FIPS using a CloudFormation template.
Can anyone please advise me?

Thanks,
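For what it's worth, FIPS on Transfer Family is selected through the server's security policy rather than a separate protocol. A sketch of the relevant CloudFormation fragment (the policy name shown is one published FIPS policy; the current names are listed in the Transfer Family security policies documentation, so verify before using):

```yaml
Resources:
  TransferServer:
    Type: AWS::Transfer::Server
    Properties:
      Protocols:
        - SFTP
      EndpointType: PUBLIC
      # FIPS-compliant algorithms come from the security policy; this is
      # one published FIPS policy name - check the docs for current ones.
      SecurityPolicyName: TransferSecurityPolicy-FIPS-2020-06
```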
Hi,
Under the VPC-hosted category, I want to choose internet-facing using a CloudFormation template.
I have a YAML file for the internal endpoint, but I can't figure out the code for internet-facing.
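The usual way a VPC-hosted endpoint becomes internet-facing is by attaching Elastic IP allocation IDs in `EndpointDetails`. A sketch (all resource IDs are placeholders; the subnets must be public and the EIPs already allocated):

```yaml
Resources:
  TransferServer:
    Type: AWS::Transfer::Server
    Properties:
      Protocols:
        - SFTP
      EndpointType: VPC
      EndpointDetails:
        VpcId: vpc-0123456789abcdef0            # placeholder
        SubnetIds:
          - subnet-0123456789abcdef0            # public subnet, placeholder
        # Attaching EIP allocation IDs is what makes a VPC-hosted
        # endpoint internet-facing instead of internal.
        AddressAllocationIds:
          - eipalloc-0123456789abcdef0          # placeholder EIP allocation
        SecurityGroupIds:
          - sg-0123456789abcdef0                # placeholder
```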
I want to transfer my domain to AWS, but the extension of the domain, @group, is not in the list. Any solutions?
Hi,
I have AWS Transfer Family configured with a private S3 bucket as backend storage, with default encryption enabled.
I'm able to download files from S3, but uploading files throws an access denied error.
We have precisely the same problem for buckets with default encryption set to SSE-S3, SSE-KMS (the aws/s3 alias), or a CMK.
The role policy associated with a Transfer Family user grants full access to s3 and the CMK key as well.
To verify the policy associated with the role, I assumed a role with the same policy and executed the 'aws s3 cp' command, and this completes successfully as long as I provide the '--sse' server-side encryption argument.
Could it be that AWS Transfer Family does not pass along the correct server-side encryption information while uploading files?
Regards,
Chris
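For anyone hitting the same error: with SSE-KMS encryption, uploads typically fail unless the Transfer Family user's role is also allowed to use the KMS key itself, not just the bucket. A sketch of the kind of statement to add to the role policy (the key ARN is a placeholder for your CMK's ARN):

```json
{
  "Sid": "AllowUseOfBucketKmsKey",
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey",
    "kms:DescribeKey"
  ],
  "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
}
```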
Hello, I have an existing GPG key exported as public and private keys. The public key is shared with many customers and is in use on on-prem servers. I want to store these keys in AWS Secrets Manager to be used by AWS Transfer Family. Please assist with how to store/retrieve the keys.
Thanks,
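A sketch of storing the private key for the managed-workflows PGP decryption step, which reads it from Secrets Manager. The secret naming convention (`aws/transfer/<server-id>/<user>@pgp-default`) and the `PGPPrivateKey`/`PGPPassphrase` key names below are taken from the PGP decryption documentation as I recall it; double-check them against the current docs before relying on this:

```python
import json

def build_pgp_secret(server_id, private_key_armored, passphrase=None, username=None):
    """Build the name/value pair for a Transfer Family PGP decryption secret.

    With no username this targets the server-wide default secret; the
    naming convention and field names are assumptions from the docs.
    """
    who = username if username else ""
    name = f"aws/transfer/{server_id}/{who}@pgp-default"
    value = {"PGPPrivateKey": private_key_armored}
    if passphrase:
        value["PGPPassphrase"] = passphrase
    return name, json.dumps(value)

def store_pgp_secret(server_id, private_key_armored, passphrase=None, username=None):
    # boto3 import is deferred so build_pgp_secret stays testable offline.
    import boto3
    name, secret_string = build_pgp_secret(
        server_id, private_key_armored, passphrase, username
    )
    return boto3.client("secretsmanager").create_secret(
        Name=name, SecretString=secret_string
    )
```

The public key stays with your customers as before; only the private key (and optional passphrase) needs to live in the secret for decryption.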
I'm attempting to set up permissions for a user account on AWS Transfer Service with SFTP protocol. I have a use case where a user should be able to add a file to a directory but not list the files in it.
When I tweak the IAM role to deny 's3:ListBucket' for a specific subdirectory, the put operation fails as well. Theoretically, S3 does allow putting an object without the ability to list the prefixes. AWS Transfer Family, however, seems to implicitly use the list bucket operation before a put. Has anyone managed to deny listing while still being able to upload?
IAM policy :
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListDirectories",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::<my-bucket>"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "data/partner_2/*"
          ]
        }
      }
    },
    {
      "Sid": "DenyMkdir",
      "Effect": "Deny",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<my-bucket>/*/"
    },
    {
      "Sid": "DenyListFilesInSubDirectory",
      "Effect": "Deny",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::<my-bucket>",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "data/partner_2/data/incoming/*"
          ]
        }
      }
    },
    {
      "Sid": "AllowReadWriteInSubDirectory",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:PutObjectTagging",
        "s3:PutObjectVersionAcl",
        "s3:PutObjectVersionTagging"
      ],
      "Resource": "arn:aws:s3:::<my-bucket>/data/partner_2/data/incoming/*"
    },
    {
      "Sid": "AllowOnlyReadInADifferentDirectory",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::<my-bucket>/data/partner_2/data/outgoing/*"
    }
  ]
}
```
The output from SFTP client:
```
sftp> cd data/incoming
sftp> ls
Couldn't read directory: Permission denied
sftp> put /Users/foo/Downloads/test.log
Uploading /Users/foo/Downloads/test.log to /data/incoming/test.log
remote open("/data/incoming/test.log"): Permission denied
sftp> get test-one.txt
Fetching /data/incoming/test-one.txt to test-one.txt
sftp> exit
```
I have a requirement to SFTP ".csv" files from a corporate on-premises Linux box to an S3 bucket.
The Current Setup is as follows:
1. The on-premise linux box is NOT connected to internet.
2. Corporate Network is connected with AWS with Direct Connect.
3. There are several VPCs for different purposes. Only One VPC has IGW and Public Subnet (to accept requests coming from Public Internet), all other VPCs do not have IGW and Public Subnets.
4. Corporate Network and several AWS VPCs (those having no IGW) are connected with each other through Transit Gateway.
Can someone please advise whether I should use AWS Transfer or S3 VPC Interface Endpoints to transfer files to S3 bucket from on-premise (corporate network)? and why?
In which scenarion should I use AWS Transfer Family for S3 and which scenario should I use VPC Interface End Points for S3?
I appreciate your valuable advise in advance.