
Questions tagged with AWS Transfer Family

How do I transfer my AWS account to another person or business?

I am selling my site and need to transfer the AWS account to the buyer's business (the buyers do not use AWS for their other sites, but they want my site to continue with AWS). I cannot figure out how to do it. Do I need to pay for support, and at what level? This is Amazon's advice on transferring ownership of a site: https://aws.amazon.com/premiumsupport/knowledge-center/transfer-aws-account/ "To assign ownership of an AWS account and its resources to another party or business, contact AWS Support for help: Sign in to the AWS Management Console as the root user. Open the AWS Support Center. Choose Create case. Enter the details of your case: Choose Account and billing support. For Type, choose Account. For Category, choose Ownership Transfer. For all other fields, enter the details for your case. For Preferred contact language, choose your preferred language. For Contact methods, choose your preferred contact method. Choose Submit. AWS Support will contact you with next steps and help you transfer your account ownership." I have done all this but have not yet been contacted (24 hours). The text seems to suggest that advice on transferring ownership is a necessary aspect of transferring an AWS root account to a company, and that such advice is provided free by Amazon, since nothing is said about pricing. If, on the other hand, AWS clients must pay for a support package to transfer ownership, which package? The $29 Developer package, the $100 Business package, or some other package? How quickly does AWS respond? How quick is the transfer process? I am finding this very frustrating.
1 answer · 0 votes · 13 views · asked 7 days ago

FTP Transfer Family, FTPS, TLS resume failed

We have: an AWS Transfer Family server with the FTPS protocol, a custom hostname with a valid ACM certificate attached to the server, and a Lambda as the identity provider. The client is using explicit AUTH TLS, our custom hostname, and port 21. The problem: the client can connect and authentication succeeds (see the auth test result below), but during communication with the FTP server a TLS_RESUME_FAILURE occurs. The error in the customer's client is "522 Data connection must use cached TLS session", and the error in the transfer server's CloudWatch log group is just "TLS_RESUME_FAILURE". I have no idea why this happens. Any ideas? Here is the auth test result:
```
{ "Response": "{\"HomeDirectoryDetails\":\"[{\\\"Entry\\\":\\\"/\\\",\\\"Target\\\":\\\"/xxx/new\\\"}]\",\"HomeDirectoryType\":\"LOGICAL\",\"Role\":\"arn:aws:iam::123456789:role/ftp-s3-access-role\",\"Policy\":\"{\"Version\": \"2012-10-17\", \"Statement\": [{\"Sid\": \"AllowListAccessToBucket\", \"Action\": [\"s3:ListBucket\"], \"Effect\": \"Allow\", \"Resource\": [\"arn:aws:s3:::xxx-prod\"]}, {\"Sid\": \"TransferDataBucketAccess\", \"Effect\": \"Allow\", \"Action\": [\"s3:PutObject\", \"s3:GetObject\", \"s3:GetObjectVersion\", \"s3:GetObjectACL\", \"s3:PutObjectACL\"], \"Resource\": [\"arn:aws:s3:::xxx-prod/xxx/new\", \"arn:aws:s3:::xxx-prod/xxx/new/*\"]}]}\",\"UserName\":\"test\",\"IdentityProviderType\":\"AWS_LAMBDA\"}", "StatusCode": 200, "Message": "" }
```
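The "522 Data connection must use cached TLS session" message points at TLS session resumption between the control and data channels. A hedged boto3 sketch for inspecting (and, as an experiment, relaxing) the server's TlsSessionResumptionMode follows; the server ID and region are placeholders, and changing the mode is a hypothesis to test against your security requirements, not a fix confirmed in this thread.

```python
# Hedged sketch: inspect and optionally relax the TLS session resumption
# setting on a Transfer Family FTPS server. Server ID and region are
# placeholders, not values from this question.
import boto3

transfer = boto3.client("transfer", region_name="eu-central-1")
server_id = "s-1234567890abcdef0"

details = transfer.describe_server(ServerId=server_id)["Server"].get("ProtocolDetails", {})
print("TlsSessionResumptionMode:", details.get("TlsSessionResumptionMode"))

# If the client library cannot resume the control-channel TLS session on the
# data channel, ENFORCED will reject the transfer; ENABLED is a less strict
# mode worth testing (assumption, verify it fits your compliance needs).
transfer.update_server(
    ServerId=server_id,
    ProtocolDetails={"TlsSessionResumptionMode": "ENABLED"},
)
```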
1 answer · 0 votes · 7 views · asked 8 days ago

AWS SFTP Error "Too many open files in this session, maximum 100"

Since this monday we are experiencing a problem when we try to upload a large amount of files. (49 files to be exact). After around 20 files the upload fails. ``` s-262d99d7572942eca.server.transfer.eu-central-1.amazonaws.com /aws/transfer/s-262d99d7572942eca asdf.f7955cc50d0d1bc4 2022-05-09T23:02:59.165+02:00 asdf.f7955cc50d0d1bc4 CONNECTED SourceIP=77.0.176.252 User=asdf HomeDir=/beresa-test-komola/asdf Client=SSH-2.0-ssh2js0.4.10 Role=arn:aws:iam::747969442112:role/beresa-test UserPolicy="{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"AllowListingOfUserFolder\",\n \"Action\": [\n \"s3:ListBucket\"\n ],\n \"Effect\": \"Allow\",\n \"Resource\": [\n \"arn:aws:s3:::beresa-test-komola\"\n ],\n \"Condition\": {\n \"StringLike\": {\n \"s3:prefix\": [\n \"asdf/*\",\n \"asdf\"\n ]\n }\n }\n },\n {\n \"Sid\": \"HomeDirObjectAccess\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:PutObject\",\n \"s3:GetObject\",\n \"s3:DeleteObject\",\n \"s3:GetObjectVersion\"\n ],\n \"Resource\": \"arn:aws:s3:::beresa-test-komola/asdf*\"\n }\n ]\n}" Kex=ecdh-sha2-nistp256 Ciphers=aes128-ctr,aes128-ctr 2022-05-09T23:02:59.583+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/10_x_a_10.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:03.394+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/10_x_a_10.jpg BytesIn=4226625 2022-05-09T23:03:04.005+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/11_x_a_1.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:07.215+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/11_x_a_1.jpg BytesIn=4226625 2022-05-09T23:03:07.757+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/12_x_a_37.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:10.902+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/12_x_a_37.jpg BytesIn=4226625 2022-05-09T23:03:11.433+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/13_x_a_13.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:14.579+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/13_x_a_13.jpg BytesIn=4226625 2022-05-09T23:03:14.942+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/14_x_a_43.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:18.016+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/14_x_a_43.jpg BytesIn=4226625 2022-05-09T23:03:18.403+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/15_x_a_34.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:21.463+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/15_x_a_34.jpg BytesIn=4226625 2022-05-09T23:03:21.906+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/16_x_a_44.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:25.025+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/16_x_a_44.jpg BytesIn=4199266 2022-05-09T23:03:25.431+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/17_x_a_2.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:28.497+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/17_x_a_2.jpg BytesIn=4199266 2022-05-09T23:03:28.857+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/18_x_a_5.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:31.947+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/18_x_a_5.jpg BytesIn=4199266 2022-05-09T23:03:32.374+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/19_x_a_8.jpg 
Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:35.504+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/19_x_a_8.jpg BytesIn=4199266 2022-05-09T23:03:35.986+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/1_x_a_16.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:39.104+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/1_x_a_16.jpg BytesIn=4226625 2022-05-09T23:03:39.691+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/20_x_a_11.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:42.816+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/20_x_a_11.jpg BytesIn=4199266 2022-05-09T23:03:43.224+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/21_x_a_14.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:46.274+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/21_x_a_14.jpg BytesIn=4199266 2022-05-09T23:03:46.649+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/22_x_a_17.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:49.757+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/22_x_a_17.jpg BytesIn=4199266 2022-05-09T23:03:50.141+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/23_x_a_20.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:53.307+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/23_x_a_20.jpg BytesIn=4199266 2022-05-09T23:03:53.849+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/24_x_a_23.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:56.933+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/24_x_a_23.jpg BytesIn=4199266 2022-05-09T23:03:57.358+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/25_x_a_26.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:00.585+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/25_x_a_26.jpg BytesIn=4199266 2022-05-09T23:04:00.942+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/26_x_a_29.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:04.174+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/26_x_a_29.jpg BytesIn=4199266 2022-05-09T23:04:04.603+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/27_x_a_32.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:07.771+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/27_x_a_32.jpg BytesIn=4199266 2022-05-09T23:04:08.179+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/28_x_a_35.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:11.279+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/28_x_a_35.jpg BytesIn=4199266 2022-05-09T23:04:11.716+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/29_x_a_38.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:14.853+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/29_x_a_38.jpg BytesIn=4199266 2022-05-09T23:04:15.316+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/2_x_a_7.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:18.435+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/2_x_a_7.jpg BytesIn=4226625 2022-05-09T23:04:18.906+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/30_x_a_41.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:22.140+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/30_x_a_41.jpg BytesIn=4199266 2022-05-09T23:04:22.565+02:00 asdf.f7955cc50d0d1bc4 OPEN 
Path=/beresa-test-komola/asdf/x/bla/31_x_a_18.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:25.752+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/31_x_a_18.jpg BytesIn=4159129 2022-05-09T23:04:26.141+02:00 asdf.f7955cc50d0d1bc4 ERROR Message="Too many open files in this session, maximum 100" Operation=OPEN Path=/beresa-test-komola/asdf/x/bla/32_x_a_3.jpg Mode=CREATE|TRUNCATE|WRITE ``` As you can see in the logs we are closing each path after opening it - we are uploading one file after the other. What could cause this as we are not even trying to write 100 files during the scp session?
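Since the error says the session caps out at 100 open files, one quick sanity check is to count OPEN and CLOSE events in the exported CloudWatch log and see whether they actually balance. The snippet below is just that check, assuming the log lines have been saved locally to a file named transfer.log (a hypothetical file name).

```python
# Count OPEN vs CLOSE events in an exported Transfer Family session log to
# see how many handles the server thinks are still open. "transfer.log" is a
# placeholder for wherever the CloudWatch log lines were saved.
from collections import Counter

counts = Counter()
with open("transfer.log") as log:
    for line in log:
        for event in ("OPEN", "CLOSE"):
            if f" {event} " in line:
                counts[event] += 1

print(counts)
print("apparently still open:", counts["OPEN"] - counts["CLOSE"])
```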
1 answer · 0 votes · 6 views · asked 9 days ago

AWS Transfer Family - Private SFTP server connection closed

Hi, I'm curently facing a problem trying to create a private SFTP Server (deployed in a VPC) using AWS Transfer Family. So here are the steps I followed: - I started an EC2 in one of three subnets associated with the SFTP server (created in another step) - Those subnets are private - I connected to the EC2 instance using session manager - I created an ssh key named sftp_key to connect to the SFTP server - I Created an IAM role for the transfer service: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "transfer.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "aws:SourceAccount": "<AccountId>" }, "ArnLike": { "aws:SourceArn": "arn:aws:transfer:eu-west-1:<AccountId>:server/*" } } } ] } ``` - Attached an inline policy to this role: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::<BucketName>" ] }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::<BucketName>/*" } ] } ``` - Created a Role for logging management. This role has the following inline policy: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "CreateLogsForTransfer", "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:DescribeLogStreams", "logs:CreateLogGroup", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*" } ] } ``` - Created an SFTP Server using the CLI like this: ``` aws transfer create-server --identity-provider-type SERVICE_MANAGED --protocols SFTP --domain S3 --endpoint-type VPC --endpoint-details SubnetIds=$SUBNET_IDS,VpcId=$VPC_ID,SecurityGroupIds=$SG_ID --logging-role $LOGGINGROLEARN --security-policy-name $SECURITY_POLICY ``` SUBNET_IDS: list of 3 privates subnets ids VPC_ID: the concerned VPC ID SG_ID: ID of a security group. This group allows all access on port 22 (TCP) from the same subnets (SUBNET_IDS) LOGGINGROLEARN: Arn of the logging role SECURITY_POLICY=TransferSecurityPolicy-2020-06 - Created a user with the CLI: ``` aws transfer create-user --home-directory $DIRECTORY --policy file://sftp-scope-down-policy.json --role $ROLEARN --server-id $SERVERID --user-name $1 --ssh-public-key-body "$SSHKEYBODY" ``` DIRECTORY=/<BucketName>/<userName> ROLEARN: Role created before SSHKEYBODY: public key of the ssh key created on the EC2 sftp-scope-down-policy.json content: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::${transfer:HomeBucket}" ], "Condition": { "StringLike": { "s3:prefix": [ "${transfer:UserName}/*", "${transfer:UserName}" ] } } }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:DeleteObjectVersion", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*" } ] } ``` - A VPC endpoint exists for the three subnets for the following services: - com.amazonaws.eu-west-1.ec2 - com.amazonaws.eu-west-1.ssm - com.amazonaws.eu-west-1.ssmmessages ***So here is the problem:*** I tried to connect to the SFTP server from the EC2 launched in the first step using this command: ``` sftp -vvv -i sftp_key <userName>@<ServerPrivateIp> ``` the ssh logs shows that the connection suceeded but after that the connection closed directly. 
```
debug1: Authentication succeeded (publickey). Authenticated to <ServerPrivateIp> ([<ServerPrivateIp>]:22).
```
No logs are created in CloudWatch Logs and I can see nothing unusual in the CloudTrail logs. Can someone explain what I missed?
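When authentication succeeds but the session is dropped immediately and nothing reaches CloudWatch, it is often worth re-checking what the server and user are actually configured with (logging role, security groups, the user's role and home directory) before digging into networking. A hedged boto3 sketch is below; the server ID and user name are placeholders.

```python
# Hedged sketch: print the server and user configuration that Transfer
# Family has on record. Server ID and user name are placeholders.
import boto3

transfer = boto3.client("transfer", region_name="eu-west-1")
server_id = "s-1234567890abcdef0"

server = transfer.describe_server(ServerId=server_id)["Server"]
print("LoggingRole:", server.get("LoggingRole"))
print("EndpointType:", server["EndpointType"])
print("SecurityGroups:", server.get("EndpointDetails", {}).get("SecurityGroupIds"))

user = transfer.describe_user(ServerId=server_id, UserName="myuser")["User"]
print("Role:", user["Role"])
print("HomeDirectory:", user.get("HomeDirectory"))
```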
1 answer · 0 votes · 5 views · asked 2 months ago

Limit SFTP access to specific subfolders only

Hi all, I've set up an SFTP server with AWS Transfer Family, with the "sftp-server" S3 bucket as storage. I created "subfolder01", "subfolder02", "subfolder03", etc. in the bucket. I defined an SFTP user and set "sftp-server" as his restricted home folder, and I want to give him read/write permissions to "subfolder01" and "subfolder02" only, with no access to any of the other subfolders. But when the user connects, he sees an empty listing of his home folder, and he can only reach the two subfolders if he manually types the "subfolder01/" or "subfolder02/" path in FileZilla. I would like him to see the list of all the subfolders when he connects, or better, to see only the two subfolders that he has access to. This is the policy assigned to the user's role:
```
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::sftp-server" }, { "Sid": "VisualEditor2", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObjectAcl", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:PutObjectAcl", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::sftp-server/subfolder01/*", "arn:aws:s3:::sftp-server/subfolder02/*" ] } ] }
```
and this is the trust policy (trusted entities) of the role:
```
{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "transfer.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
```
Can you please help me?
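A commonly used pattern for letting an SFTP user list only specific prefixes is an s3:ListBucket statement carrying an s3:prefix condition; a hedged sketch of attaching one as an inline policy is below (the role and policy names are placeholders). Whether this alone fixes the empty home-folder listing depends on the rest of the setup, and hiding the other folders entirely would typically need logical home directories rather than IAM alone, so treat it as something to try rather than a confirmed answer.

```python
# Sketch: attach an inline policy that scopes s3:ListBucket to the two
# allowed prefixes. RoleName and PolicyName are placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOnlyAllowedPrefixes",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::sftp-server",
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "subfolder01", "subfolder01/*",
                        "subfolder02", "subfolder02/*",
                    ]
                }
            },
        }
    ],
}

boto3.client("iam").put_role_policy(
    RoleName="sftp-user-role",                # placeholder
    PolicyName="ListAllowedSubfoldersOnly",   # placeholder
    PolicyDocument=json.dumps(policy),
)
```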
1 answer · 1 vote · 9 views · asked 2 months ago

Logical Directories not working with multiple users

Good day, I've implemented the custom IDP using the provided template (aws-transfer-custom-idp-secrets-manager-apig.template.yml). I've created a user in Secrets Manager and attached a role containing the policy below, in which I explicitly specify the user's username as the directory, indicated as "user1" for demonstration purposes. I am then able to successfully authenticate via SSH key or username/password methods. I then created a new role/policy for a new user and specified the new user directory as "user2" in the policy. The problem is that the new user authenticates fine, but upon login it generates an "access denied" error and does not seem to place the user in the logical directory specified in Secrets Manager. This error persists with each new user I've attempted to create using the same details as the initial user1. Please assist; I've attached the user format as inserted into Secrets Manager as well as the policy below for your perusal. Thanks. Secrets Manager user plaintext stored as "SFTP/user2":
```
{ "Password": "password", "Role": "arn:aws:iam::111111111111:role/rolename", "PublicKey": "ssh-rsa AAAA", "HomeDirectoryType": "LOGICAL", "HomeDirectoryDetails": "[{\"Entry\": \"/\", \"Target\": \"/bucketname/user2\"}]" }
```
Policy:
```
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": "arn:aws:s3:::bucketname" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::bucketname/user2/in/*", "arn:aws:s3:::bucketname/user2/out/*" ] }, { "Sid": "VisualEditor2", "Effect": "Deny", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::bucketname/user2/" } ] }
```
Note, this policy works for our use case in that it allows a user to GET/PUT to the in/out folders while denying PUT at their logical root. The S3 structure is bucketname/user2/folders, and again it works with the first user created as user1. Thanks
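One thing worth re-checking for user2 is that the stored HomeDirectoryDetails value is a serialized JSON string exactly like user1's; stray escaping is easy to introduce when editing the secret by hand. Below is a hedged sketch of writing the secret with boto3 so the nesting is serialized programmatically. The secret name, role ARN, and bucket are the placeholders from the question, and the assumption that the template forwards these fields to Transfer Family unchanged is mine, not something stated in the thread.

```python
# Hedged sketch: store the user2 secret with HomeDirectoryDetails serialized
# by code rather than typed by hand. Names and ARNs are placeholders from
# the question.
import json
import boto3

secret_value = {
    "Password": "password",
    "PublicKey": "ssh-rsa AAAA",
    "Role": "arn:aws:iam::111111111111:role/rolename",
    "HomeDirectoryType": "LOGICAL",
    "HomeDirectoryDetails": json.dumps(
        [{"Entry": "/", "Target": "/bucketname/user2"}]
    ),
}

boto3.client("secretsmanager").create_secret(
    Name="SFTP/user2",
    SecretString=json.dumps(secret_value),
)
```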
2 answers · 0 votes · 0 views · asked a year ago

FTPS - support for scope down policy?

Hello, I am looking for guidance on setting up a scope-down policy for FTPS users on the Transfer Family service. Within the Lambda function that does the user authentication, I am attempting to add the policy JSON to the response body as described in the documentation:
```
..... response = { Role: 'arn:aws:iam::xxxxxxx:role/assumedRoleForTransferService', Policy: myPolicyJSON, HomeDirectory: '' }; .......
```
The scope-down policy looks similar to what SFTP scope-down users would use, except I am not using the transfer variables (e.g. ${transfer:HomeDirectory}), as I suspect they don't work because with FTPS there are no "managed" users to map the variables to. Instead my Lambda dynamically replaces variables in the policy depending on logic within the Lambda. Adding the scope-down policy to the Lambda response creates an error when connecting to the server. Removing the scope-down policy from the Lambda allows me to connect and upload, but then I am not restricted within the bucket. My user scope-down policy JSON looks like this prior to replacing the dynamic variables with the appropriate user paths:
```
{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": "arn:aws:s3:::mybucket", "Condition": { "StringLike": { "s3:prefix": [ "DYNAMIC_USER_VARIABLE/*", "DYNAMIC_USER_VARIABLE" ] } } }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:GetObjectVersion", "s3:GetObjectACL", "s3:PutObjectACL" ], "Resource": "arn:aws:s3:::mybucket/DYNAMIC_USER_VARIABLE/*" } ] }
```
Are scope-down policies supported for the FTPS service? If so, is there any glaring issue in my policy JSON above? Thanks in advance!
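Scope-down (session) policies are, to my knowledge, not SFTP-specific; the detail that most often breaks the connection is returning Policy as a nested JSON object instead of a serialized string. Below is a hedged Python sketch of the same idea as the Node.js snippet above: substitute the dynamic prefix into the policy, then serialize it before returning. The bucket, role ARN, and the prefix logic are placeholders or assumptions, not values from this post.

```python
# Hedged sketch: build the per-user session policy, then return it as a
# serialized JSON string in the identity-provider response.
import json

def build_response(user_prefix: str) -> dict:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfUserFolder",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": "arn:aws:s3:::mybucket",
                "Condition": {
                    "StringLike": {"s3:prefix": [f"{user_prefix}/*", user_prefix]}
                },
            },
            {
                "Sid": "HomeDirObjectAccess",
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::mybucket/{user_prefix}/*",
            },
        ],
    }
    return {
        "Role": "arn:aws:iam::111111111111:role/assumedRoleForTransferService",
        "Policy": json.dumps(policy),   # a string, not a nested object
        "HomeDirectory": f"/mybucket/{user_prefix}",
    }
```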
1 answer · 0 votes · 2 views · asked 2 years ago

Unable to specify bucket with custom identity provider

I've customized my identity provider using the template and instructions available here: https://docs.aws.amazon.com/transfer/latest/userguide/authenticating-users.html I'm able to get a correct response from my API and successfully log while testing in AWS Transfer and with FileZilla. However, it's not actually allowing a user to view existing files or upload new files. Here is the response from the identity provider API: ``` { "Policy": "<policy granting full access to bucket>", "Role": "<role with full access to S3>", "HomeDirectory": "/<my bucket>/test" } ``` I'm assuming this is acceptable based off the information on these pages: https://aws.amazon.com/blogs/storage/simplify-your-aws-sftp-structure-with-chroot-and-logical-directories/ https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-transfer-user.html However, FileZilla gives me the following log: ``` Status: Connecting to sftp.mydomain.com... Status: Using username "test". Status: Connected to 123456.server.transfer.us-east-1.amazonaws.com Status: Retrieving directory listing... Status: Listing directory /<my bucket>/test Error: Unknown eventType 37 Error: Failed to retrieve directory listing ``` So I tried using logical directories instead using the information in the previous links. This is an example response from the API: ``` { "Policy": "<policy granting full access to bucket>", "Role": "<role with full access to S3>", "HomeDirectoryType": "LOGICAL", "HomeDirectoryDetails": [ { "Entry": "/", "Target": "/<my bucket>/test" } ] } ``` I updated my UserConfigResponseModel in the API Gateway to this: ``` { "$schema":"http://json-schema.org/draft-04/schema#", "title":"UserUserConfig", "type":"object", "properties": { "Role":{"type":"string"}, "Policy":{"type":"string"}, "HomeDirectory":{"type":"string"}, "HomeDirectoryType":{"type":"string"}, "HomeDirectoryDetails": { "type":"array", "items": { "type":"object", "properties": { "Entry":{"type":"string"}, "Target":{"type":"string"} } } }, "PublicKeys": { "type":"array", "items":{"type":"string"} } } } ``` When I test this in AWS Transfer, I get the following response: ``` Unable to call identity provider: Unable to unmarshall response (We expected a VALUE token but got: START_ARRAY). Response Code: 200, Response Text: OK ``` All of this is very frustrating because the responses I am getting do not match what I would expect to see after reading the documentation. My question is this: how do I specify a bucket when using a custom identity provider in AWS Transfer. Edited by: paul_hatcher on May 19, 2020 9:26 AM
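The unmarshalling error ("We expected a VALUE token but got: START_ARRAY") is consistent with Transfer expecting HomeDirectoryDetails as a string containing serialized JSON rather than a real array, which also keeps the API Gateway response model simple. A hedged sketch of a Lambda-side response built that way follows; the role ARN is a placeholder and the bucket/test path is carried over from the question.

```python
# Hedged sketch: return HomeDirectoryDetails as a serialized JSON string so
# the response model can declare it as a plain string field. The role ARN is
# a placeholder.
import json

def build_user_config() -> dict:
    return {
        "Role": "arn:aws:iam::111111111111:role/transfer-s3-access",
        "HomeDirectoryType": "LOGICAL",
        "HomeDirectoryDetails": json.dumps(
            [{"Entry": "/", "Target": "/<my bucket>/test"}]
        ),
    }

print(build_user_config())
```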
1 answer · 0 votes · 0 views · asked 2 years ago

Cannot login to a newly created SFTP server and cannot see server logs

I have created an SFTP server, gave it a logging role, and created a user. As a result I can neither log in to the server with my private key nor see any log messages. These are the exact steps:
1. Created the **xxxxxxxxxx-dev-import** S3 bucket and created a **test-user** folder in it.
2. Created a **DevImportSFTPReadWriteAccess** RW access policy to access the target bucket.
3. Created a **DevImportSFTPRole** role and attached the aforementioned **DevImportSFTPReadWriteAccess** policy to it.
4. Created a role called **AWSTransferLoggingRole** and attached the AWS-managed **AWSTransferLoggingAccess** policy to it. Checked the trust relationship: transfer.amazonaws.com is trusted.
5. Created a public SFTP server with a service-managed identity provider and assigned the aforementioned **AWSTransferLoggingRole** as the logging role. Waited until the server started. **NOTE:** after the server started, no logs were visible in CloudWatch.
6. After the server started, created a **test-user** user with the public key, assigned **xxxxxxxxxx-dev-import** as the bucket and **test-user** as the home folder.
This is the result I'm ending up with:
```
mymacbook:.ssh UXXXXXX$ telnet s-xxxxxxxxxxxxxxxx.server.transfer.eu-central-1.amazonaws.com 22
Trying XXX.XXX.XXX.XXX...
Connected to s-xxxxxxxxxxxxxxxx.server.transfer.eu-central-1.amazonaws.com.
Escape character is '^]'.
SSH-2.0-AWS_SFTP_1.0
^C
Connection closed by foreign host.
mymacbook:.ssh UXXXXXX$ ssh -i ~/.ssh/id_rsa_test_user test-user@s-xxxxxxxxxxxxxxxx.server.transfer.eu-central-1.amazonaws.com
The authenticity of host 's-xxxxxxxxxxxxxxxx.server.transfer.eu-central-1.amazonaws.com (XXX.XXX.XXX.XXX)' can't be established.
RSA key fingerprint is SHA256:u0HCsILNN4vTm367Wgyeh2ToHLbuZayQzbzt9GbF+v8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 's-xxxxxxxxxxxxxxxx.server.transfer.eu-central-1.amazonaws.com,XXX.XXX.XXX.XXX' (RSA) to the list of known hosts.
Enter passphrase for key '/Users/UXXXXXX/.ssh/id_rsa_test_user':
Connection to s-xxxxxxxxxxxxxxxx.server.transfer.eu-central-1.amazonaws.com closed by remote host.
Connection to s-xxxxxxxxxxxxxxxx.server.transfer.eu-central-1.amazonaws.com closed.
mymacbook:.ssh UXXXXXX$
```
And again, no logs in CloudWatch.
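An immediate disconnect right after the passphrase prompt, with nothing in CloudWatch, is worth cross-checking against what the service actually has registered for the user. A hedged boto3 sketch that prints the stored public keys, role, and home directory is below; the server ID placeholder is the one used in the question.

```python
# Hedged sketch: print what Transfer Family has on record for test-user so a
# key mismatch or wrong home directory can be ruled out.
import boto3

transfer = boto3.client("transfer", region_name="eu-central-1")
user = transfer.describe_user(
    ServerId="s-xxxxxxxxxxxxxxxx", UserName="test-user"
)["User"]

print("Role:", user["Role"])
print("HomeDirectory:", user.get("HomeDirectory"))
for key in user.get("SshPublicKeys", []):
    print(key["SshPublicKeyId"], key["SshPublicKeyBody"][:40] + "...")
```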
1 answer · 0 votes · 13 views · asked 3 years ago

Custom Identity Provider - SSH Key and/or Password Auth

Hello, I'm interested in using AWS Transfer for SFTP to replace a number of aging SFTP servers that have hundreds of users and rely on local Linux account authentication and chrooting for security. I have spent a lot of time looking over this forum and the AWS documentation for the SFTP offering. I have a number of concerns I'm hoping can be addressed by the community: 1. Is there a custom identity provider I can plug in to today that allows a mixture of password authentication, SSH key authentication **and** allows end users to perform self-service password resets? We have hundreds of users (password auth) as well as service/automated accounts (SSH key auth). Secrets Manager will allow both auth methods, but there doesn't seem to be a way for end users to have direct control over their passwords or perform self-service resets. Additionally, administrators with access to Secrets Manager would have access to the plaintext version of passwords, which is not a security best practice. https://aws.amazon.com/blogs/storage/enable-password-authentication-for-aws-transfer-for-sftp-using-aws-secrets-manager/ Identity is one of the most important pieces of the solution, and it happens to be more complex with AWS SFTP than any other solution on the market today when you factor in real-world use cases of mixed authentication, security requirements, and being forced to use API Gateway, Lambda functions, etc. 2. Is there any solution that will allow for whitelisting IP access to the server which doesn't add significantly to the complexity/cost of the solution? If not, then how are we supposed to address the risks of having an internet-accessible server (brute-force attempts)? Based on the documentation, to enable whitelisting I would need a VPC, an NLB with an Elastic IP, and a firewall in front of all that. There is no formal documentation on how to set up all the pieces above and have it work successfully, and I'm not sure anyone has done it yet who can demonstrate it will actually work. It would be great to have these addressed with a solution today, or see if AWS is working on functionality.
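On the first point, the Secrets Manager pattern from the linked blog can be made to handle both password and SSH-key users in a single Lambda: when a password is supplied, compare it against the secret; when it is not, return the stored public key and let the service verify the SSH key. The sketch below only illustrates that branching; the event field names and the secret layout ("SFTP/<username>" with Password/PublicKey/Role keys) are assumptions borrowed from that pattern, not from this post, and it does not address self-service resets or plaintext-at-rest concerns.

```python
# Illustrative only: one Lambda authorizer supporting password users and
# SSH-key users from the same Secrets Manager secret. Field names and the
# secret layout are assumptions.
import json
import boto3

secrets = boto3.client("secretsmanager")

def lambda_handler(event, context):
    username = event["username"]
    password = event.get("password", "")

    secret = json.loads(
        secrets.get_secret_value(SecretId=f"SFTP/{username}")["SecretString"]
    )

    response = {
        "Role": secret["Role"],
        "HomeDirectory": secret.get("HomeDirectory", f"/my-bucket/{username}"),
    }

    if password:
        # Password login: deny unless it matches the stored value.
        if password != secret.get("Password"):
            return {}
    else:
        # Key login: hand the stored public key back for the service to check.
        response["PublicKeys"] = [secret["PublicKey"]]

    return response
```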
1 answer · 0 votes · 0 views · asked 3 years ago

Custom Identity Provider - works until Policy is defined?

Hi, I've got a server setup with a custom identity provider running a lambda function. With only a Role defined in the response, my user can log in (but of course has more access than is desired). When I add the Policy inline to the lambda response, the login fails. Testing with test-identity-provider yields 200 success when no Policy is defined. However, when a Policy is defined (it seems any policy, with or without variables) testing with test-identity-provider I get the following: "Message": "Unable to call identity provider: Unable to unmarshall response (We expected a VALUE token but got: START_OBJECT). Response Code: 200, Response Text: OK", "StatusCode": 500, The policy I'm using is not special, just an example found online: ``` const policy = { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::${transfer:HomeBucket}" ], "Condition": { "StringLike": { "s3:prefix": [ "in/${transfer:UserName}/*", "in/${transfer:UserName}" ] } } }, { "Sid": "AWSTransferRequirements", "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": "*" }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::${transfer:HomeDirectory}/*" } ] }; ``` and later: ``` response = { Role: 'my_role_arn', Policy: policy, HomeDirectory: '/my-bucket/in/myuser', }; ``` Anybody got any hints about what I'm doing wrong? Thanks. Edited by: TTF2019 on Apr 13, 2019 5:10 AM
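The unmarshalling message (a VALUE token was expected but START_OBJECT was found) matches the Policy being sent as a nested object; in the Node.js snippet above, passing JSON.stringify(policy) as the Policy value is the obvious thing to try (an inference from the error, not an answer given in this thread). The same idea as a minimal Python sketch, with placeholder values carried over from the question:

```python
# Minimal sketch: the session policy goes into the response as a serialized
# string, not as a nested object. The statements are elided here.
import json

policy = {"Version": "2012-10-17", "Statement": []}  # real statements go here

response = {
    "Role": "my_role_arn",
    "Policy": json.dumps(policy),   # serialized JSON string
    "HomeDirectory": "/my-bucket/in/myuser",
}
print(response)
```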
4 answers · 0 votes · 3 views · asked 3 years ago