Questions tagged with AWS Transfer for SFTP

Browse through the questions and answers listed below or filter and sort to narrow down your results.

Hi, I am trying to set up an AWS Transfer Family SFTP server. Here are my requirements:

1. Users must be authenticated via a third-party identity provider, which in our case is Azure authentication.
2. Once a user logs in, they should see two folders in their home directory, i.e. {transfer:user}/folder1 and {transfer:user}/folder2.
3. Users should be restricted to putting files in either folder1 or folder2, not in their home directory.
4. Users should be able to download files only if a specific tag is set on the object/files in S3.

So far I have been able to achieve steps 1 and 2: step 1 via custom authentication using a Lambda function, and step 2 by having the Lambda create folder1 and folder2 in the user's home directory once authentication succeeds. But when users log in, they cannot see folder1 and folder2 in their home directory, even though I can see the folders were created successfully in the S3 bucket. Here is the IAM role attached to the Transfer server; I am not able to figure out what's wrong with it. Any help would be appreciated.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadWriteS3",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": ["arn:aws:s3:::s3-bucket"]
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::s3-bucket/*"]
    },
    {
      "Sid": "DownloadAllowed",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Condition": {
        "StringEquals": {"s3:ExistingObjectTag/allowdownload": "yes"}
      },
      "Resource": ["arn:aws:s3:::s3-bucket/*"]
    },
    {
      "Sid": "DownloadNotAllowed",
      "Effect": "Deny",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Condition": {
        "StringEquals": {"s3:ExistingObjectTag/allowdownload": "no"}
      },
      "Resource": ["arn:aws:s3:::s3-bucket/*"]
    },
    {
      "Sid": "DenyMkdir",
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::s3-bucket/*/*/"
    }
  ]
}
```

Within the Lambda where user authentication happens, I am returning the user's home directory:

```
HomeDirectoryDetails = [{"Entry":"/","Target":"/s3-bucket/${transfer:UserName}"}]
```

I also tried the following, with no luck:

```
HomeDirectoryDetails = [{"Entry":"/folder1","Target":"/s3-bucket/${transfer:UserName}/folder1"},{"Entry":"/folder2","Target":"/s3-bucket/${transfer:UserName}/folder2"}]
```

The user gets a permission denied error when trying to run "ls" in their home directory:

```
sftp> ls
Couldn't read directory: Permission denied
```
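One thing worth double-checking in the authentication Lambda's output: when logical directory mappings are used, the response needs HomeDirectoryType set to LOGICAL, and HomeDirectoryDetails must be returned as a JSON-formatted string rather than a native list. A minimal sketch of such a response follows; the account ID and role name are placeholders, not values from the question.

```
{
  "Role": "arn:aws:iam::111122223333:role/transfer-user-role",
  "HomeDirectoryType": "LOGICAL",
  "HomeDirectoryDetails": "[{\"Entry\": \"/folder1\", \"Target\": \"/s3-bucket/${transfer:UserName}/folder1\"}, {\"Entry\": \"/folder2\", \"Target\": \"/s3-bucket/${transfer:UserName}/folder2\"}]"
}
```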
1
answers
0
votes
74
views
asked 4 months ago
I can't log in to SFTP using my private key on my instance (ami-0fec1fb452e2ab3b0, with ubuntu as the username). Do I need an SFTP server or something?
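If this is a standard Ubuntu EC2 instance, the OpenSSH server that ships with the AMI already provides the SFTP subsystem, so no separate SFTP server should be needed; connecting with the same key pair used for SSH is usually enough. A sketch, with a placeholder key path and host name:

```
# Placeholder key path and instance DNS name
sftp -i ~/.ssh/my-key.pem ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com
```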
2
answers
0
votes
71
views
Loot
asked 5 months ago
Hi All, We have set up an AWS Transfer Family server with AWS Directory Service (connected to Microsoft AD) authentication. Per our use case, once a user logs in to SFTP, they should see two directories within their own folder:

{username}/folder1
{username}/folder2

I have set up the access policy and IAM policy (attached to S3) below.

create-access CLI:

```
aws transfer create-access \
    --home-directory-type LOGICAL \
    --home-directory-mappings '[{"Entry": "/folder1", "Target": "/bucket_name/${transfer:UserName}/folder1"}, {"Entry": "/folder2", "Target": "/bucket_name/${transfer:UserName}/folder2"}]' \
    --role arn:aws:iam::account_id:role/iam_role \
    --server-id s-1234567876454ert \
    --external-id S-1-2-34-56789123-12345678-1234567898-1234
```

The access was created successfully. The IAM role below is attached to the S3 bucket and the Transfer server.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadWriteS3",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": ["arn:aws:s3:::bucket_name"]
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObjectVersion",
        "s3:GetObjectACL",
        "s3:PutObjectACL"
      ],
      "Resource": ["arn:aws:s3:::bucket_name/${transfer:UserName}/*"]
    }
  ]
}
```

When users log in to SFTP, they do not see folder1 and folder2 in their own directory. Can anyone help with what might be missing in the IAM policy? Thank you
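One hedged observation, since the rest of the setup isn't shown: logical directory mappings only show up in a listing when their S3 targets actually exist, so it can help to pre-create the per-user prefixes as zero-byte objects. A sketch with a placeholder user name:

```
# Placeholder user name; creates empty "folder" objects so the logical mappings resolve
aws s3api put-object --bucket bucket_name --key "testuser/folder1/"
aws s3api put-object --bucket bucket_name --key "testuser/folder2/"
```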
3
answers
0
votes
132
views
asked 5 months ago
Hi, I am trying to create an AWS Transfer Family access using the CLI. I am trying to add two folder permissions but am getting the error below.

```
aws transfer create-access --home-directory-type LOGICAL --home-directory-mappings [{"Entry":"/","Target":"/bucket_name/${transfer:Username}/folder1" },{ "Entry": "/", "Target":"/bucket_name/${transfer:Username}/folder2"}] --role arn:aws:iam::account_id:role/iam-role --server-id s-123456789ert43 --external-id S-1-2-34-123456789-1234567-123456789-1234
```

Error:

```
Error parsing parameter '--home-directory-mappings': Invalid JSON: [{Entry:/,Target:/bucket_name//folder1
```

Any idea what is wrong with the CLI command? Thanks in advance.
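For reference, this kind of parse error usually comes from the shell consuming the quotes (note how ${transfer:Username} has already been expanded away in the error output); wrapping the whole mapping list in single quotes preserves both. A sketch reusing the IDs from the question, with distinct Entry values for the two mappings and the documented ${transfer:UserName} capitalization:

```
aws transfer create-access \
    --home-directory-type LOGICAL \
    --home-directory-mappings '[{"Entry": "/folder1", "Target": "/bucket_name/${transfer:UserName}/folder1"}, {"Entry": "/folder2", "Target": "/bucket_name/${transfer:UserName}/folder2"}]' \
    --role arn:aws:iam::account_id:role/iam-role \
    --server-id s-123456789ert43 \
    --external-id S-1-2-34-123456789-1234567-123456789-1234
```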
2
answers
0
votes
215
views
asked 5 months ago
Hi All, We are trying to set up a simple directory structure in an S3 bucket for each user when they log in to the AWS Transfer Family SFTP server:

1. ${transfer:UserName}/folder1
2. ${transfer:UserName}/folder2

We have added Active Directory group A access to the Transfer server, so only group A users are able to access it. As soon as a user logs in to SFTP, they should see both child directories under their home directory and be able to transfer files to the respective directory. Please advise how to achieve this (a sketch of one possible approach follows).
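For context, one way to wire this up is a single create-access entry for the AD group that maps two logical directories onto the per-user prefixes. This is a sketch only; the server ID, role ARN, and group SID are placeholders, and as noted above the ${transfer:UserName}/folder1 and /folder2 prefixes generally need to exist in the bucket before they appear in a listing.

```
# Placeholder server ID, role ARN, and Active Directory group SID (external ID)
aws transfer create-access \
    --server-id s-1234567890abcdef0 \
    --role arn:aws:iam::111122223333:role/transfer-user-role \
    --home-directory-type LOGICAL \
    --home-directory-mappings '[{"Entry": "/folder1", "Target": "/bucket_name/${transfer:UserName}/folder1"}, {"Entry": "/folder2", "Target": "/bucket_name/${transfer:UserName}/folder2"}]' \
    --external-id S-1-2-34-56789123-12345678-1234567898-1234
```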
1
answers
0
votes
171
views
asked 5 months ago
Hi all, I need to create my SFTP service using AWS Transfer Family with Lambda as the identity provider and S3 as storage. I created my Lambda function and authentication works, but I can't list the files.

My Node.js Lambda is:

```
exports.handler = async (event) => {
    return {
        "Role": "arn:aws:iam::356173882118:role/sftp-access-s3"
    };
};
```

The identity provider testing response is:

```
{
    "Response": "{\"HomeDirectoryType\":\"PATH\",\"Role\":\"arn:aws:iam::356173882118:role/sftp-access-s3\",\"UserName\":\"dasdasd\",\"IdentityProviderType\":\"AWS_LAMBDA\"}",
    "StatusCode": 200,
    "Message": ""
}
```

My role sftp-access-s3 has this policy and trust relationship:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadWriteS3",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": ["arn:aws:s3:::tecnoin-ftp-bucket"]
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObjectVersion",
        "s3:GetObjectACL",
        "s3:PutObjectACL"
      ],
      "Resource": ["arn:aws:s3:::tecnoin-ftp-bucket/*"]
    }
  ]
}
```

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {"Service": "transfer.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
```

I can connect successfully with my FTP client, but then I can't see the files. I receive this error:

```
Permission denied.
Error code: 3
Error message from server (US-ASCII): Access denied
```

In CloudWatch:

```
luca.1e5bad7f45e09f0b CONNECTED SourceIP=165.225.202.99 User=luca HomeDir=/ Client=SSH-2.0-WinSCP_release_5.17.10 Role=arn:aws:iam::356173882118:role/sftp-access-s3 UserPolicy="{\"Version\": \"2012-10-17\",\"Statement\": [ {\"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\"],\"Resource\": [ \"arn:aws:s3:::tecnoin-ftp-bucket\"],\"Effect\": \"Allow\",\"Sid\": \"ReadWriteS3\" }, {\"Action\": [ \"s3:PutObject\", \"s3:GetObject\", \"s3:DeleteObject\", \"s3:DeleteObjectVersion\", \"s3:GetObjectVersion\", \"s3:GetObjectACL\", \"s3:PutObjectACL\"],\"Resource\": [ \"arn:aws:s3:::tecnoin-ftp-bucket/*\"],\"Effect\": \"Allow\",\"Sid\": \"\" }]}" Kex=ecdh-sha2-nistp256 Ciphers=aes256-ctr,aes256-ctr
luca.1e5bad7f45e09f0b ERROR Message="Access denied"
```

Could you please help me solve this issue? Thanks
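One observation, offered as an assumption since only part of the setup is shown: the log line HomeDir=/ suggests no HomeDirectory is being returned, so the session lands at the service root rather than inside the bucket, which commonly produces this kind of access denied on listing. A sketch of the Lambda also returning a home directory path (bucket name taken from the question; adjust as needed):

```
exports.handler = async (event) => {
    return {
        Role: "arn:aws:iam::356173882118:role/sftp-access-s3",
        // Assumption: drop the session into the bucket root; a per-user prefix could be used instead
        HomeDirectory: "/tecnoin-ftp-bucket"
    };
};
```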
1
answers
0
votes
68
views
luk3tt0
asked 6 months ago
Hey, hope you are doing well! I have created an SFTP server for transferring files from AWS to the Celigo integration middleware. But when I use the credentials (hostname, username, and password), I am not able to set up the connection in the Celigo.IO integration middleware. It throws an error saying the host does not exist. Can you please help me get the correct credentials to set up and transfer files from AWS to another system? Thanks, Madhuri
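For what it's worth, a public Transfer Family endpoint's host name follows the pattern <server-id>.server.transfer.<region>.amazonaws.com and is shown on the server's detail page in the console. A quick way to sanity-check the credentials outside Celigo, using a hypothetical server ID, region, and user name:

```
# Hypothetical server ID, region, and user name; prompts for the password when the server uses password auth
sftp myuser@s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com
```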
1
answers
0
votes
62
views
asked 6 months ago
[Transfer Family](https://aws.amazon.com/aws-transfer-family/) supports S3 and EFS storage targets, but a project's requirement specifies up/downloaded files be available to Windows EC2 instances via SMB (specifically a [FSx](https://aws.amazon.com/fsx/) share). I would let the EC2 instances mount the EFS volume used by Transfer Family, but [Microsoft](https://learn.microsoft.com/en-us/windows-server/storage/nfs/nfs-overview) says Windows NFS clients can only use NFSv2 or NFSv3. Since Transfer Family doesn't [natively](https://aws.amazon.com/aws-transfer-family/features/#Data_stored_natively_in_AWS_Storage_services) support FSx, is [DataSync](https://aws.amazon.com/datasync/) “between AWS storage services” the best way to support this workflow? As an added twist, we'll probably need to support both uploads (items that arrived via SFTP, consumed by the EC2 instances) and downloads (EC2 instance outputs a file, which will then be fetched by a remote user via SFTP) so we'll need to have [bidirectional](https://docs.aws.amazon.com/datasync/latest/userguide/other-use-cases.html#opposite-direction-tasks) DataSync between the EFS and FSx volumes.
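If DataSync does end up being the glue, the bidirectional requirement is typically modeled as two opposite-direction tasks over the same pair of locations. A rough sketch, with placeholder location ARNs for the EFS and FSx locations:

```
# Placeholder location ARNs; one task per transfer direction
aws datasync create-task \
    --name efs-to-fsx \
    --source-location-arn arn:aws:datasync:eu-west-1:111122223333:location/loc-efs1234567890abcd \
    --destination-location-arn arn:aws:datasync:eu-west-1:111122223333:location/loc-fsx1234567890abcd

aws datasync create-task \
    --name fsx-to-efs \
    --source-location-arn arn:aws:datasync:eu-west-1:111122223333:location/loc-fsx1234567890abcd \
    --destination-location-arn arn:aws:datasync:eu-west-1:111122223333:location/loc-efs1234567890abcd
```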
1
answers
0
votes
226
views
bobsut
asked 6 months ago
Hi team, I have a private VPC with all private subnets, and I created an SFTP server with:

- Protocols = SFTP
- Identity provider = Service managed
- VPC = my private VPC
- Access = Internal
- Domain = Amazon S3

The objective is to allow another team from the same corporation to load files into my S3 bucket. When I finish creating the SFTP server, it doesn't give me an endpoint (Endpoint = '-' and Custom hostname = '-'). I just want to know how the other team can interact with the SFTP server to put files in my bucket, given that my SFTP server is not publicly accessible and I don't have an endpoint URL to give them. How can they connect to my server to put files? Can they use clients like FileZilla, PuTTY, or WinSCP to transfer files? Thank you!
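As background, an internal (VPC-only) endpoint has no public host name; clients on the corporate network connect to the private IP addresses of the endpoint's network interfaces (or a private DNS record pointing at them) over port 22, and standard clients such as FileZilla or WinSCP work as long as there is network reachability (Direct Connect, VPN, or peering) and the security group allows it. A sketch for finding those private IPs, using placeholder IDs:

```
# Placeholder IDs - look up the VPC endpoint behind the internal server, then its private IPs
aws transfer describe-server --server-id s-1234567890abcdef0 \
    --query 'Server.EndpointDetails'

aws ec2 describe-vpc-endpoints --vpc-endpoint-ids vpce-0abc1234def567890 \
    --query 'VpcEndpoints[0].NetworkInterfaceIds'

aws ec2 describe-network-interfaces --network-interface-ids eni-0abc1234def567890 \
    --query 'NetworkInterfaces[0].PrivateIpAddress'
```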
1
answers
0
votes
375
views
Jess
asked 6 months ago
Hi, we have a mirroring setup (get and put files between two servers by comparing each side and transferring whatever is missing) using a Perl module. With a traditional Unix/Linux server it works well, but when we tried to use S3 (via AWS Transfer Family) for the same file transfers it didn't work at all. After a lot of searching we learned we had to make the change below to make it work:

```
my $sftp = Net::SFTP::Foreign->new('user@s-12345.server.transfer.us-east-2.amazonaws.com', queue_size => 1);
```

We added the queue_size parameter to our code, but it only works for a very small number of files, no more than 4. If we try to put more files, the connection starts stalling; the exact error is "Connection to remote server stalled". When we use the EC2 username and URL for file transfer, it works fine with any number of files, just as it does with our Linux/Unix data-center-based servers. I want to know why the S3-backed setup is not working properly and what the difference is compared with the EC2-based one. We did a network packet capture but found no issue and no packet loss. Please help.
1
answers
0
votes
61
views
ikram
asked 7 months ago
I have deployed a Transfer Family SFTP server (using Amazon EFS). I am having trouble configuring the user; I keep getting the error "Failed to create user (Unsupported or invalid SSH public key format)". I have tried using the key format described by AWS but still get the error. Has anyone had this issue, and how did you solve it?
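In case it helps, the user's public key needs to be in the single-line OpenSSH format (for example a line starting with "ssh-rsa AAAA..."); keys exported in PuTTY/SSH2 format are a common cause of this error. A sketch with placeholder file names:

```
# Generate a key pair; the .pub file is in the OpenSSH format the Transfer Family console expects
ssh-keygen -t rsa -b 4096 -f transfer-user-key

# If you only have the private key, re-derive the OpenSSH-format public key from it
ssh-keygen -y -f transfer-user-key > transfer-user-key.pub

# If the public key was exported in SSH2/RFC4716 format (---- BEGIN SSH2 PUBLIC KEY ----), convert it
ssh-keygen -i -f ssh2-format-key.pub > transfer-user-key.pub
```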
1
answers
0
votes
924
views
Dreski
asked 7 months ago
I am new to AWS and looking for guidance on designing an FTP solution.

Infrastructure: A zipped (encrypted) file plus a checksum file will be available on an FTP server in Data Center 1 (once daily, at around 1 am). Data Center 1 cannot be reached via the public internet, but it has connectivity to Data Center 2 via MPLS. Data Center 2 has a Direct Connect link set up with AWS Ireland.

Requirement: Get the zipped file from the on-premises server in DC1 and perform the following: decrypt it, verify the checksum, and decompress it. Store the flat files (from the zip file) in S3 in the AWS London region. These files will be required for 12 months and then deleted. The flat files won't be accessed frequently and are kept for audit purposes. The SFTP operation only needs to run once daily.

Prerequisites: Firewall ports will be opened. No agent can be installed on any of the on-premises servers. A backup/DR solution is required as well.

What is the best way to achieve this? I thought of using a Lambda function, but how would the networking side of things work? Would a Lambda function be able to reach the FTP server in DC1, which is sitting behind a firewall? Can all of the above operations (checksum verification, decryption, and decompression) be performed using Lambda functions? We could create a separate Lambda function for each operation, or use an EC2 instance with Node.js installed.
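On the retention requirement specifically, the 12-month expiry does not need custom code; an S3 lifecycle rule can delete the objects automatically. A sketch with a hypothetical bucket name:

```
# Hypothetical bucket name; expires objects 365 days after creation
aws s3api put-bucket-lifecycle-configuration \
    --bucket audit-flat-files-bucket \
    --lifecycle-configuration '{
      "Rules": [
        {
          "ID": "expire-after-12-months",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Expiration": {"Days": 365}
        }
      ]
    }'
```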
1
answers
0
votes
115
views
ROberoi
asked 7 months ago