Questions tagged with AWS Transfer Family


Hi, we notice that logical directory mappings have a limit of 50 per user; we would like to have 1,500. Is there a way to request an increase in this limit? Thanks
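For context, each logical directory mapping is one Entry/Target pair in the user's HomeDirectoryMappings, so the per-user limit counts entries like the two in this minimal sketch (bucket and path names are placeholders):
```
{
  "HomeDirectoryType": "LOGICAL",
  "HomeDirectoryMappings": [
    { "Entry": "/inbound",  "Target": "/example-bucket/customer-a/inbound" },
    { "Entry": "/outbound", "Target": "/example-bucket/customer-a/outbound" }
  ]
}
```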
1
answers
0
votes
81
views
asked 9 months ago
Does AWS Transfer Family support the AIX platform?
1
answers
0
votes
98
views
AWS
SUPPORT ENGINEER
Harsh_P
asked 9 months ago
Hello, This question may seem a bit long-winded since I will be describing the relevant background information to hopefully avoid back and forth, and ultimately arrive at a resolution. I appreciate your patience. I have a Lambda function that is authenticating users via Okta for SFTP file transfers, and the Lambda function is called through an API Gateway. My company has many different clients, so we chose this route for authentication rather than creating user accounts for them in AWS. Everything has been working fine during my testing process except for one key piece of functionality. Since we have many customers, we don't want them to be able to interact or even see another customer's folder within the dedicated S3 bucket. The directory structure has the main S3 bucket at the top level and within that bucket resides each customer's folder. From there, they can create subfolders, upload files, etc. I have created the IAM policy - which is an inline policy as part of an assumed role - as described in this document: https://docs.aws.amazon.com/transfer/latest/userguide/users-policies.html. My IAM policy looks exactly like the one shown in the "Creating a session policy for an Amazon S3 bucket" section of the documentation. The "transfer" variables are defined in the Lambda function. Unfortunately, those "transfer" variables do not seem to be getting passed to the IAM policy. When I look at the Transfer Family endpoint log, it is showing access denied after successfully connecting (confidential information is redacted): <user>.39e979320fffb078 CONNECTED SourceIP=<source_ip> User=<user> HomeDir=/<s3_bucket>/<customer_folder>/ Client="SSH-2.0-Cyberduck/8.3.3.37544 (Mac OS X/12.4) (x86_64)" Role=arn:aws:iam::<account_id>:role/TransferS3AccessRole Kex=diffie-hellman-group-exchange-sha256 Ciphers=aes128-ctr,aes128-ctr <user>.39e979320fffb078 ERROR Message="Access denied" However, if I change the "transfer" variables in the Lambda function to include the actual bucket name and update the IAM policy accordingly, everything works as expected; well, almost everything. With this change, I am not able to restrict access and, thus, any customer could interact with any other customer's folders and files. Having the ability to restrict access by using the "transfer" variables is an integral piece of functionality. I've searched around the internet - including this forum - and cannot seem to find the answer to this problem. Likely, I have overlooked something and hopefully it is an easy fix. Looking forward to getting this resolved. Thank you very much in advance!
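For reference, the session policy pattern from the "Creating a session policy for an Amazon S3 bucket" section of that documentation looks roughly like the sketch below; the ${transfer:*} variables are meant to be substituted by the service at session time from the user's home directory, which is why they appear literally in the policy text (this is a sketch of the documented pattern, not the exact policy in use here):
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::${transfer:HomeBucket}"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}
```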
7
answers
0
votes
228
views
asked 9 months ago
Hello, I'm having trouble creating a policy for some SFTP users. What I'm trying to do is grant permission to see only specific files inside the bucket, but I can't find the right way to do it.
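One common way to express this is to scope s3:ListBucket with an s3:prefix condition and allow object reads only on the specific keys; a minimal sketch, assuming a bucket named example-bucket and files under a reports/ prefix (both placeholders, not taken from the question):
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOnlyTheAllowedPrefix",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-bucket",
      "Condition": {
        "StringLike": { "s3:prefix": ["reports/", "reports/*"] }
      }
    },
    {
      "Sid": "ReadOnlyTheAllowedObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::example-bucket/reports/*"
    }
  ]
}
```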
1
answers
0
votes
127
views
asked 10 months ago
I am selling my site and need to transfer the AWS account to the buyer's business (the buyers do not use AWS for their other sites - but they want my site to continue with AWS). I cannot figure out how to do it. Do I need to pay for support, and at what level? This is Amazon's advice on transferring ownership of a site: https://aws.amazon.com/premiumsupport/knowledge-center/transfer-aws-account/ "To assign ownership of an AWS account and its resources to another party or business, contact AWS Support for help: Sign in to the AWS Management Console as the root user. Open the AWS Support Center. Choose Create case. Enter the details of your case: Choose Account and billing support. For Type, choose Account. For Category, choose Ownership Transfer. For all other fields, enter the details for your case. For Preferred contact language, choose your preferred language. For Contact methods, choose your preferred contact method. Choose Submit. AWS Support will contact you with next steps and help you transfer your account ownership." I have done all this but have not yet been contacted (24 hours). The text seems to suggest that advice on transferring ownership is a necessary aspect of transferring an AWS root account to a company, and that such advice is provided free by Amazon, since nothing is said about pricing. If, on the other hand, AWS clients must pay for a support package to transfer ownership, which package? The $29 Developer package, the $100 Business package, or some other package? How quickly does AWS respond? How quick is the transfer process? I am finding this very frustrating.
1
answers
0
votes
1150
views
asked a year ago
We have: - an AWS Transfer Family server with the FTPS protocol - a custom hostname and a valid ACM certificate attached to the FTP server - a Lambda for the identity provider The client is using: - EXPLICIT AUTH TLS - our custom hostname - port 21 The problem is: the client can connect and the authentication is successful (see below for the auth test result), but during the communication with the FTP server a TLS_RESUME_FAILURE occurs. The error in the customer's client is "522 Data connection must use cached TLS session", and the error in the CloudWatch log group of the transfer server is just "TLS_RESUME_FAILURE". I have no clue why this is happening. Any ideas? Here is the auth test result: ``` { "Response": "{\"HomeDirectoryDetails\":\"[{\\\"Entry\\\":\\\"/\\\",\\\"Target\\\":\\\"/xxx/new\\\"}]\",\"HomeDirectoryType\":\"LOGICAL\",\"Role\":\"arn:aws:iam::123456789:role/ftp-s3-access-role\",\"Policy\":\"{\"Version\": \"2012-10-17\", \"Statement\": [{\"Sid\": \"AllowListAccessToBucket\", \"Action\": [\"s3:ListBucket\"], \"Effect\": \"Allow\", \"Resource\": [\"arn:aws:s3:::xxx-prod\"]}, {\"Sid\": \"TransferDataBucketAccess\", \"Effect\": \"Allow\", \"Action\": [\"s3:PutObject\", \"s3:GetObject\", \"s3:GetObjectVersion\", \"s3:GetObjectACL\", \"s3:PutObjectACL\"], \"Resource\": [\"arn:aws:s3:::xxx-prod/xxx/new\", \"arn:aws:s3:::xxx-prod/xxx/new/*\"]}]}\",\"UserName\":\"test\",\"IdentityProviderType\":\"AWS_LAMBDA\"}", "StatusCode": 200, "Message": "" } ```
1
answers
0
votes
487
views
asked a year ago
Since this monday we are experiencing a problem when we try to upload a large amount of files. (49 files to be exact). After around 20 files the upload fails. ``` s-262d99d7572942eca.server.transfer.eu-central-1.amazonaws.com /aws/transfer/s-262d99d7572942eca asdf.f7955cc50d0d1bc4 2022-05-09T23:02:59.165+02:00 asdf.f7955cc50d0d1bc4 CONNECTED SourceIP=77.0.176.252 User=asdf HomeDir=/beresa-test-komola/asdf Client=SSH-2.0-ssh2js0.4.10 Role=arn:aws:iam::747969442112:role/beresa-test UserPolicy="{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"AllowListingOfUserFolder\",\n \"Action\": [\n \"s3:ListBucket\"\n ],\n \"Effect\": \"Allow\",\n \"Resource\": [\n \"arn:aws:s3:::beresa-test-komola\"\n ],\n \"Condition\": {\n \"StringLike\": {\n \"s3:prefix\": [\n \"asdf/*\",\n \"asdf\"\n ]\n }\n }\n },\n {\n \"Sid\": \"HomeDirObjectAccess\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:PutObject\",\n \"s3:GetObject\",\n \"s3:DeleteObject\",\n \"s3:GetObjectVersion\"\n ],\n \"Resource\": \"arn:aws:s3:::beresa-test-komola/asdf*\"\n }\n ]\n}" Kex=ecdh-sha2-nistp256 Ciphers=aes128-ctr,aes128-ctr 2022-05-09T23:02:59.583+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/10_x_a_10.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:03.394+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/10_x_a_10.jpg BytesIn=4226625 2022-05-09T23:03:04.005+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/11_x_a_1.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:07.215+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/11_x_a_1.jpg BytesIn=4226625 2022-05-09T23:03:07.757+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/12_x_a_37.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:10.902+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/12_x_a_37.jpg BytesIn=4226625 2022-05-09T23:03:11.433+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/13_x_a_13.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:14.579+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/13_x_a_13.jpg BytesIn=4226625 2022-05-09T23:03:14.942+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/14_x_a_43.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:18.016+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/14_x_a_43.jpg BytesIn=4226625 2022-05-09T23:03:18.403+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/15_x_a_34.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:21.463+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/15_x_a_34.jpg BytesIn=4226625 2022-05-09T23:03:21.906+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/16_x_a_44.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:25.025+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/16_x_a_44.jpg BytesIn=4199266 2022-05-09T23:03:25.431+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/17_x_a_2.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:28.497+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/17_x_a_2.jpg BytesIn=4199266 2022-05-09T23:03:28.857+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/18_x_a_5.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:31.947+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/18_x_a_5.jpg BytesIn=4199266 2022-05-09T23:03:32.374+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/19_x_a_8.jpg 
Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:35.504+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/19_x_a_8.jpg BytesIn=4199266 2022-05-09T23:03:35.986+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/1_x_a_16.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:39.104+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/1_x_a_16.jpg BytesIn=4226625 2022-05-09T23:03:39.691+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/20_x_a_11.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:42.816+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/20_x_a_11.jpg BytesIn=4199266 2022-05-09T23:03:43.224+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/21_x_a_14.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:46.274+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/21_x_a_14.jpg BytesIn=4199266 2022-05-09T23:03:46.649+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/22_x_a_17.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:49.757+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/22_x_a_17.jpg BytesIn=4199266 2022-05-09T23:03:50.141+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/23_x_a_20.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:53.307+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/23_x_a_20.jpg BytesIn=4199266 2022-05-09T23:03:53.849+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/24_x_a_23.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:56.933+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/24_x_a_23.jpg BytesIn=4199266 2022-05-09T23:03:57.358+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/25_x_a_26.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:00.585+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/25_x_a_26.jpg BytesIn=4199266 2022-05-09T23:04:00.942+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/26_x_a_29.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:04.174+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/26_x_a_29.jpg BytesIn=4199266 2022-05-09T23:04:04.603+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/27_x_a_32.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:07.771+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/27_x_a_32.jpg BytesIn=4199266 2022-05-09T23:04:08.179+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/28_x_a_35.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:11.279+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/28_x_a_35.jpg BytesIn=4199266 2022-05-09T23:04:11.716+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/29_x_a_38.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:14.853+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/29_x_a_38.jpg BytesIn=4199266 2022-05-09T23:04:15.316+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/2_x_a_7.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:18.435+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/2_x_a_7.jpg BytesIn=4226625 2022-05-09T23:04:18.906+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/30_x_a_41.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:22.140+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/30_x_a_41.jpg BytesIn=4199266 2022-05-09T23:04:22.565+02:00 asdf.f7955cc50d0d1bc4 OPEN 
Path=/beresa-test-komola/asdf/x/bla/31_x_a_18.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:25.752+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/31_x_a_18.jpg BytesIn=4159129 2022-05-09T23:04:26.141+02:00 asdf.f7955cc50d0d1bc4 ERROR Message="Too many open files in this session, maximum 100" Operation=OPEN Path=/beresa-test-komola/asdf/x/bla/32_x_a_3.jpg Mode=CREATE|TRUNCATE|WRITE ``` As you can see in the logs we are closing each path after opening it - we are uploading one file after the other. What could cause this as we are not even trying to write 100 files during the scp session?
Accepted Answer · AWS Transfer Family
1
answers
0
votes
368
views
asked a year ago
Hi, I'm curently facing a problem trying to create a private SFTP Server (deployed in a VPC) using AWS Transfer Family. So here are the steps I followed: - I started an EC2 in one of three subnets associated with the SFTP server (created in another step) - Those subnets are private - I connected to the EC2 instance using session manager - I created an ssh key named sftp_key to connect to the SFTP server - I Created an IAM role for the transfer service: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "transfer.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "aws:SourceAccount": "<AccountId>" }, "ArnLike": { "aws:SourceArn": "arn:aws:transfer:eu-west-1:<AccountId>:server/*" } } } ] } ``` - Attached an inline policy to this role: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::<BucketName>" ] }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::<BucketName>/*" } ] } ``` - Created a Role for logging management. This role has the following inline policy: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "CreateLogsForTransfer", "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:DescribeLogStreams", "logs:CreateLogGroup", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*" } ] } ``` - Created an SFTP Server using the CLI like this: ``` aws transfer create-server --identity-provider-type SERVICE_MANAGED --protocols SFTP --domain S3 --endpoint-type VPC --endpoint-details SubnetIds=$SUBNET_IDS,VpcId=$VPC_ID,SecurityGroupIds=$SG_ID --logging-role $LOGGINGROLEARN --security-policy-name $SECURITY_POLICY ``` SUBNET_IDS: list of 3 privates subnets ids VPC_ID: the concerned VPC ID SG_ID: ID of a security group. This group allows all access on port 22 (TCP) from the same subnets (SUBNET_IDS) LOGGINGROLEARN: Arn of the logging role SECURITY_POLICY=TransferSecurityPolicy-2020-06 - Created a user with the CLI: ``` aws transfer create-user --home-directory $DIRECTORY --policy file://sftp-scope-down-policy.json --role $ROLEARN --server-id $SERVERID --user-name $1 --ssh-public-key-body "$SSHKEYBODY" ``` DIRECTORY=/<BucketName>/<userName> ROLEARN: Role created before SSHKEYBODY: public key of the ssh key created on the EC2 sftp-scope-down-policy.json content: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::${transfer:HomeBucket}" ], "Condition": { "StringLike": { "s3:prefix": [ "${transfer:UserName}/*", "${transfer:UserName}" ] } } }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:DeleteObjectVersion", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*" } ] } ``` - A VPC endpoint exists for the three subnets for the following services: - com.amazonaws.eu-west-1.ec2 - com.amazonaws.eu-west-1.ssm - com.amazonaws.eu-west-1.ssmmessages ***So here is the problem:*** I tried to connect to the SFTP server from the EC2 launched in the first step using this command: ``` sftp -vvv -i sftp_key <userName>@<ServerPrivateIp> ``` the ssh logs shows that the connection suceeded but after that the connection closed directly. 
``` debug1: Authentication succeeded (publickey). Authenticated to <ServerPrivateIp> ([<ServerPrivateIp>]:22). ``` No logs are created in CloudWatch Logs, and I can see nothing special in the CloudTrail logs. Can someone explain what I missed?
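For completeness, the trust policy on the logging role (not shown above) would normally allow transfer.amazonaws.com to assume it, in the same style as the trust policy already shown for the S3 access role; a minimal sketch:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "transfer.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```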
1
answers
0
votes
501
views
wmegel
asked a year ago
Hello, I am migrating an SFTP server from a public endpoint to a VPC endpoint, and I would like to preserve the same host key in production so customers do not have to re-accept it. But I am struggling with the concepts. When looking at the existing SFTP server in the Console > Additional details > Server host key, I see SHA256:Cv5TEDW8P3L+uqpAKtpzSWIfGcHwdrnaDyJd0wOGNx5= (example). I believe this is a PUBLIC host key fingerprint, the same one that we accept as a client into known_hosts when connecting to the SFTP server. When I go to Edit > Server host key, it asks for an RSA PRIVATE key (I tried to set the previous host key example). Where can I get this RSA private key from our currently running AWS SFTP server? (I tried to ssh in, but it does not allow it.) After creating the new AWS SFTP server, would I be able to set up the host key with this command? aws transfer update-server --server-id "your-server-id" --host-key file://my-host-key where my-host-key is the RSA private key? Thanks.
Accepted Answer · AWS Transfer Family
1
answers
0
votes
532
views
asked a year ago
Hi all, I've set up an SFTP server with AWS Transfer Family, with the "sftp-server" S3 bucket as storage. I created "subfolder01", "subfolder02", "subfolder03", etc. in the bucket. I defined an SFTP user and set "sftp-server" as his restricted home folder. I want to give him read/write permissions to "subfolder01" and "subfolder02" only, and no access to all the other subfolders. But when the user connects, he sees an empty listing of his home folder, and he can only access the two subfolders if he manually types the "subfolder01/" or "subfolder02/" path in FileZilla. I would like him to see the list of all the subfolders when he connects, or better, to see only the two subfolders that he has access to. This is the policy assigned to the role of the user: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::sftp-server" }, { "Sid": "VisualEditor2", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObjectAcl", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:PutObjectAcl", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::sftp-server/subfolder01/*", "arn:aws:s3:::sftp-server/subfolder02/*" ] } ] } and this is the trust policy (trusted entities) of his role: { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "transfer.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } Can you please help me?
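For reference, the prefix-scoped s3:ListBucket pattern used in other questions on this page, adapted as a sketch for the two subfolders (whether this alone changes the listing behaviour in FileZilla is not confirmed here):
```
{
  "Sid": "ListOnlyTheTwoSubfolders",
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::sftp-server",
  "Condition": {
    "StringLike": {
      "s3:prefix": ["subfolder01", "subfolder01/*", "subfolder02", "subfolder02/*"]
    }
  }
}
```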
1
answers
1
votes
922
views
Mauro L
asked a year ago
I am using the AWS Transfer service with S3 buckets for storage and a chrooted dynamic path to create the user directory as the user is authenticated - "/mybucket/${Transfer:UserName}". This is working as expected except that when the ${Transfer:UserName} variable is used to create the user directory it matches the case that the user name was entered on the client. Authentication is via the AWS Directory service. As this is a windows system there is no distinction between lowercase and uppercase usernames. E.g. if the client specifies the username in lowercase the user directory is created in lowercase. If the client specifies the username in uppercase then the user directory is created in uppercase. This can result in multiple user directories. Is there a way to force the case of the user directory to lowercase when it is created using the ${Transfer:UserName} variable?
1
answers
0
votes
39
views
asked a year ago
Hi, I would like to expose the same SFTP (Transfer Family) endpoint (deployed with the public service endpoint, not in a VPC) under different domains to share its capabilities. It seems the service manages its own record in a hosted zone (based on the given custom hostname), meaning it cannot be associated with several domains. If I want to share an SFTP server, do I have to deploy it into a VPC, behind an NLB, and target the NLB from the different hosted zones? Am I right? Thanks in advance.
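If the NLB approach is taken, pointing several hostnames at the same NLB is a plain DNS exercise; a minimal sketch of a Route 53 change batch (all names below are placeholders) that could be applied with aws route53 change-resource-record-sets:
```
{
  "Comment": "Point two customer-facing names at the same NLB",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "sftp.customer-a.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "my-nlb-1234567890.elb.eu-west-1.amazonaws.com" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "sftp.customer-b.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "my-nlb-1234567890.elb.eu-west-1.amazonaws.com" }]
      }
    }
  ]
}
```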
1
answers
0
votes
493
views
Fred
asked a year ago