Connect to an internal SFTP server from outside a private VPC (on-prem)
Hi team, I have a private VPC with only private subnets, and I created an SFTP server with the following settings:

- Protocols = SFTP
- Identity provider = Service managed
- VPC = my private VPC
- Access = Internal
- Domain = Amazon S3

The objective is to allow another team from the same corporation to load files into my S3 bucket. When I finish creating the SFTP server, it doesn't give me an endpoint (Endpoint = '-' and Custom hostname = '-'). I just want to know how the other team can interact with the SFTP server to put files into my bucket, since the server is not publicly accessible and I don't have an endpoint URL to give them. How can they connect to my server to put files? Can they use clients like FileZilla, PuTTY, or WinSCP to transfer files? Thank you!
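A minimal boto3 sketch, assuming the server was created with EndpointType = VPC and internal access, of how the private IPs behind such an endpoint could be looked up; the server ID is a placeholder. Those IPs (or a private DNS name or Route 53 alias pointing at them) are what SFTP clients such as FileZilla or WinSCP would target over the corporate network (Direct Connect, VPN, or VPC peering):

```python
# Sketch only: assumes boto3 credentials and a Transfer Family server with EndpointType=VPC.
import boto3

transfer = boto3.client("transfer")
ec2 = boto3.client("ec2")

server_id = "s-1234567890abcdef0"  # placeholder server ID

# An internal-access server has no public hostname, but it is backed by a VPC endpoint.
endpoint_details = transfer.describe_server(ServerId=server_id)["Server"]["EndpointDetails"]
vpce_id = endpoint_details["VpcEndpointId"]

# The VPC endpoint is backed by ENIs; their private IPs are what clients connect to.
enis = ec2.describe_vpc_endpoints(VpcEndpointIds=[vpce_id])["VpcEndpoints"][0]["NetworkInterfaceIds"]
for nic in ec2.describe_network_interfaces(NetworkInterfaceIds=enis)["NetworkInterfaces"]:
    print(nic["PrivateIpAddress"])
```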
AWS Transfer Family now supports multiple host keys and key types per server
AWS Transfer Family now supports up to ten host keys per SFTP server. In addition, ED25519 and ECDSA key types are now supported for server host keys. Previously, AWS Transfer Family only supported one host key per server, and only the RSA key type. These enhancements allow you to move your existing SFTP servers with multiple host keys and host key types to AWS Transfer Family. You will also be able to add and tag host keys before rotating them, giving you more control over your managed file transfer environments. Multiple host keys and host key types are supported in [all Regions where AWS Transfer Family is available](https://aws-preview.aka.amazon.com/about-aws/global-infrastructure/regional-product-services/). You can configure server host keys using the AWS Management Console, AWS Transfer Family API, or AWS Command Line Interface (CLI). To learn more about how to add multiple host keys to an SFTP server, visit our [documentation](https://docs.aws.amazon.com/transfer/latest/userguide/edit-server-config.html).
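A short boto3 sketch of adding an extra host key to an existing server; the server ID, key file, and tag value are placeholders, and it assumes an ED25519 key generated out of band (for example with ssh-keygen -t ed25519):

```python
# Sketch of importing an additional host key with boto3; identifiers are placeholders.
import boto3

transfer = boto3.client("transfer")

with open("transfer-host-key-ed25519", "r") as f:  # hypothetical key file
    key_body = f.read()

response = transfer.import_host_key(
    ServerId="s-1234567890abcdef0",   # placeholder server ID
    HostKeyBody=key_body,             # private host key in OpenSSH format
    Description="ED25519 key migrated from the old SFTP server",
    Tags=[{"Key": "rotation-stage", "Value": "staged"}],
)
print(response["HostKeyId"])
```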
Connection stalled while putting multiple files (more than 4 at a time) to an S3 bucket, but the same works fine with EC2
Hi, we have a mirroring setup (files are compared on both sides and any missing files are transferred with get/put) that uses a Perl module. With a traditional Unix/Linux server it works well, but when we tried the same file transfers against S3 it didn't work at all. After some searching we learned we had to make the change below to get it working:

my $sftp = Net::SFTP::Foreign->new('email@example.com', queue_size => 1);

We added the queue_size parameter to our code, but it only works for a very small number of files, no more than 4. If we try to put more files the connection starts stalling; the exact error is "Connection to remote server stalled". When we use the EC2 username and URL for the transfer it works fine with any number of files, just like with our Linux/Unix data-center servers. I want to know why the S3-backed transfer is not working properly, and what the difference is between the S3 target and the EC2 one. We did a network packet capture but found no issue and no packet loss. Please help.
Unsupported or invalid SSH public key format
I have deployed a Transfer Family SFTP server (using Amazon EFS). I am having trouble configuring the user; I keep getting the error: Failed to create user (Unsupported or invalid SSH public key format). I have tried using the format described in the AWS documentation but still get the error. Has anyone had this issue, and how did you solve it?
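For illustration, a boto3 sketch of registering a key for a service-managed user; the server ID, user name, and key material are placeholders. Transfer Family expects the public key as a single line in OpenSSH format ("ssh-rsa AAAA... comment"), not the "-----BEGIN ...-----" PEM/RFC4716 blocks, which is a common cause of this error:

```python
# Sketch only: server ID, user name, and key are placeholders.
import boto3

transfer = boto3.client("transfer")

# Single-line OpenSSH public key, e.g. the contents of id_rsa.pub from ssh-keygen.
public_key = "ssh-rsa AAAAB3NzaC1yc2EAAA... user@example"  # placeholder

transfer.import_ssh_public_key(
    ServerId="s-1234567890abcdef0",       # placeholder server ID
    UserName="efs-user",                  # placeholder user
    SshPublicKeyBody=public_key.strip(),  # stray whitespace or line breaks can also trigger the format error
)
```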
AWS - FTP Solution
I am new to AWS and looking for guidance to design an FTP solution.

Infrastructure: A zipped (encrypted) file plus a checksum file will be available on an FTP server in Data Center 1 once daily, at around 1 am. Data Center 1 cannot be reached over the public internet, but it has connectivity to Data Center 2 via MPLS. Data Center 2 has a Direct Connect link set up with AWS Ireland.

Requirement: Get the zipped file from the on-premises server in DC1 and perform the following: decrypt, verify the checksum, and decompress. Store the flat files (from the zip file) in S3 in the AWS London region. These files will be required for 12 months and then deleted; they won't be accessed frequently and are kept for audit purposes only. The SFTP operation only needs to run once daily.

Prerequisites: Firewall ports will be opened. No agent can be installed on any of the on-premises servers. A backup/DR solution is required as well.

What is the best way to achieve this? I thought of using a Lambda function, but how will the network side of things work? Can a Lambda function reach the FTP server in DC1, which is sitting behind a firewall? Can all the above operations (checksum, decrypt, and decompression) be performed with Lambda functions? We could create a separate Lambda function for each operation, or use an EC2 instance with Node.js installed.
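A minimal sketch of the daily job as a single Lambda-style handler, assuming the function runs in a VPC (or on EC2) that can reach DC1 through the Direct Connect/MPLS path; the hostname, paths, bucket, credentials, and checksum format are all assumptions, and decryption is omitted since the encryption scheme isn't specified:

```python
# Hypothetical sketch: fetch over FTP, verify a SHA-256 checksum, unzip, upload to S3.
import ftplib
import hashlib
import io
import zipfile

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # 1. Fetch the zip and its checksum file from the on-prem FTP server.
    ftp = ftplib.FTP("ftp.dc1.example.internal")    # placeholder host
    ftp.login("svc_user", "from-secrets-manager")   # credentials would come from Secrets Manager
    zip_buf, sum_buf = io.BytesIO(), io.BytesIO()
    ftp.retrbinary("RETR /outbound/daily.zip", zip_buf.write)
    ftp.retrbinary("RETR /outbound/daily.zip.sha256", sum_buf.write)
    ftp.quit()

    # 2. Verify the checksum (assumes a plain hex SHA-256 in the checksum file).
    expected = sum_buf.getvalue().decode().split()[0]
    actual = hashlib.sha256(zip_buf.getvalue()).hexdigest()
    if actual != expected:
        raise ValueError("checksum mismatch, aborting upload")

    # 3. Decrypt here if needed (omitted: depends on the encryption scheme), then
    #    unzip and upload each flat file to S3 in eu-west-2 (London).
    with zipfile.ZipFile(zip_buf) as zf:
        for name in zf.namelist():
            s3.put_object(
                Bucket="audit-flat-files-eu-west-2",  # placeholder bucket
                Key=f"daily/{name}",
                Body=zf.read(name),
                StorageClass="STANDARD_IA",           # infrequently accessed audit data
            )
```

The 12-month retention would normally be handled by an S3 lifecycle rule on the bucket rather than in code.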
Does Transfer Family support password and SSH key authentication together in a single login?
Hello team, I am working on an AWS Transfer Family solution (SFTP) and need confirmation on whether this service can support both password and SSH key based authentication at the same time (i.e. in one login attempt where the user passes both, using any SFTP client such as FileZilla or WinSCP). I used a Lambda-based identity provider and found that when I pass both a password and an SSH key in FileZilla, the password is never passed to the Lambda, so the code logic has to assume it is SSH key based authentication. Can someone please provide any advice?
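A sketch of a Lambda identity provider handler, written around the behavior described above: a given login attempt arrives with either a non-empty password (password auth) or an empty one (key auth), never both. The role ARN, home directory, and lookup helpers are placeholders:

```python
# Hypothetical custom identity provider handler for Transfer Family.
import os

def handler(event, context):
    username = event.get("username", "")
    password = event.get("password", "")

    if password:
        # Password attempt: validate against your own store (placeholder check here).
        if not _password_is_valid(username, password):
            return {}  # empty response rejects the login
        return _allow(username)

    # No password in the event: the client chose SSH key authentication. Return the
    # user's stored public keys and Transfer Family performs the key match itself.
    response = _allow(username)
    response["PublicKeys"] = [_lookup_public_key(username)]  # e.g. "ssh-rsa AAAA..."
    return response

def _allow(username):
    return {
        "Role": "arn:aws:iam::123456789012:role/transfer-access",  # placeholder role
        "HomeDirectory": f"/my-bucket/{username}",                  # placeholder path
    }

def _password_is_valid(username, password):
    # Placeholder: look the user up in Secrets Manager, a database, etc.
    return password == os.environ.get("DEMO_PASSWORD")

def _lookup_public_key(username):
    # Placeholder: fetch the user's registered public key from your store.
    return os.environ.get("DEMO_PUBLIC_KEY", "")
```

This matches the observation in the question: when the client authenticates with a key, the password field never reaches the Lambda.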
CloudFormation SFTP transfer service with custom hostname
First off, I am very new to AWS CloudFormation; I've been working on templates for a couple of months, trying to create a CloudFormation template that creates an SFTP transfer service and adds a custom hostname. I was able to create the Route 53 hostname and it all works fine, with the exception that the AWS Transfer Family dashboard does not show the hostname for the server. I suspect it has to do with tags, as I found this [doc](https://docs.aws.amazon.com/transfer/latest/userguide/requirements-dns.html#requirements-use-r53). I am using a parameter to get the HostedZoneId and use it via HostedZoneId: !Ref HostedZoneIdParam in the SFTPServerDNSRecord resource. Is there a way to use that same parameter in a key/value pair, as in Key: aws:transfer:route53HostedZoneId, Value: /hostedzone/!Ref HostedZoneIdParam? Any assistance or guidance would be appreciated.
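For reference, a boto3 sketch of applying the tags the console reads when it displays a custom hostname; the server ARN, hostname, and hosted zone ID are placeholders, and the tag keys simply follow the linked doc and the question above rather than anything verified here. In a CloudFormation template, the equivalent value can be built with !Sub "/hostedzone/${HostedZoneIdParam}" in the server resource's Tags instead of embedding !Ref inside the string.

```python
# Sketch only: ARN, hostname, and zone ID are placeholders; tag keys taken from the question/doc.
import boto3

transfer = boto3.client("transfer")

server_arn = "arn:aws:transfer:eu-west-1:123456789012:server/s-1234567890abcdef0"  # placeholder
hosted_zone_id = "Z0123456789ABCDEFGHIJ"                                           # placeholder

transfer.tag_resource(
    Arn=server_arn,
    Tags=[
        {"Key": "aws:transfer:customHostname", "Value": "sftp.example.com"},
        {"Key": "aws:transfer:route53HostedZoneId", "Value": f"/hostedzone/{hosted_zone_id}"},
    ],
)
```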