Questions in Migration & Transfer

Using DMS and SCT for extracting/migrating data from Cassandra to S3

I have a customer who is scoping an architecture using DMS and SCT. I have a few questions I was hoping to get answered:

1. Does AWS DMS support data validation with Cassandra as a source? I don't see it here: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.DataValidation, but I do see Cassandra listed as a valid source here: https://aws.amazon.com/about-aws/whats-new/2018/09/aws-dms-aws-sct-now-support-the-migration-of-apache-cassandra-databases/
2. Does AWS DMS support ongoing replication with Cassandra as a source? Reading the docs, it looks like if I wanted to extract data from Cassandra and write it to S3 (using DMS), then post-process that data into a different format (like JSON) and write it to a different S3 bucket, I could do so by attaching a Lambda to the S3 event from the DMS extract and drop. Can you confirm my understanding?
3. How is incremental data loaded on an ongoing basis after the initial load from Cassandra (with DMS)? The docs suggest it is stored in S3 in CSV form. Does it write one CSV per source table and keep appending to or updating the existing CSV? Does it create one CSV per row, per batch, etc.? I'm wondering how the event in step 3 would be triggered if I wanted to continuously post-process updates as they come in, in real time, and convert the source data from Cassandra into JSON stored on S3.
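A minimal sketch of the post-processing idea in question 2, assuming a Lambda function subscribed to `s3:ObjectCreated` events on the bucket DMS writes to; the output bucket name and the assumption that the CSV files carry a header row are placeholders, not details confirmed by the question:

```
import csv
import io
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

# Hypothetical destination bucket for the converted JSON output.
OUTPUT_BUCKET = "my-processed-bucket"


def handler(event, context):
    """Triggered by S3 ObjectCreated events on the bucket DMS writes CSV to.
    Reads each new CSV object and writes a JSON Lines copy to another bucket."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Assumes the first row is a header; adjust if the DMS output has none.
        rows = list(csv.DictReader(io.StringIO(body)))
        json_lines = "\n".join(json.dumps(row) for row in rows)

        s3.put_object(
            Bucket=OUTPUT_BUCKET,
            Key=key.rsplit(".", 1)[0] + ".json",
            Body=json_lines.encode("utf-8"),
        )
```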
0 answers · 0 votes · 3 views · asked 8 days ago

How do I transfer my AWS account to another person or business?

I am selling my site and need to transfer the AWS account to the buyer's business (the buyers do not use AWS for their other sites, but they want my site to continue on AWS). I cannot figure out how to do it. Do I need to pay for support, and if so, at what level?

This is Amazon's advice on transferring ownership of a site: https://aws.amazon.com/premiumsupport/knowledge-center/transfer-aws-account/

"To assign ownership of an AWS account and its resources to another party or business, contact AWS Support for help: Sign in to the AWS Management Console as the root user. Open the AWS Support Center. Choose Create case. Enter the details of your case: Choose Account and billing support. For Type, choose Account. For Category, choose Ownership Transfer. For all other fields, enter the details for your case. For Preferred contact language, choose your preferred language. For Contact methods, choose your preferred contact method. Choose Submit. AWS Support will contact you with next steps and help you transfer your account ownership."

I have done all this but have not yet been contacted (24 hours). The text seems to suggest that advice on transferring ownership is a necessary aspect of transferring an AWS root account to a company, and that such advice is provided free by Amazon, since nothing is said about pricing. If, on the other hand, AWS clients must pay for a support package to transfer ownership, which package? The $29 Developer package, the $100 Business package, or some other package? How quickly does AWS respond? How quick is the transfer process? I am finding this very frustrating.
1 answer · 0 votes · 17 views · asked 10 days ago

FTP Transfer Family, FTPS, TLS resume failed

We have:
- an AWS Transfer Family server with the FTPS protocol
- a custom hostname and a valid ACM certificate attached to the FTP server
- a Lambda as the identity provider

The client is using:
- explicit AUTH TLS
- our custom hostname
- port 21

The problem: the client can connect and authentication succeeds (see the auth test result below), but during communication with the FTP server a TLS_RESUME_FAILURE occurs. The error in the customer's client is "522 Data connection must use cached TLS session", and the error in the CloudWatch log group of the Transfer server is just "TLS_RESUME_FAILURE". I have no clue why this happens. Any ideas?

Here is the auth test result:
```
{
  "Response": "{\"HomeDirectoryDetails\":\"[{\\\"Entry\\\":\\\"/\\\",\\\"Target\\\":\\\"/xxx/new\\\"}]\",\"HomeDirectoryType\":\"LOGICAL\",\"Role\":\"arn:aws:iam::123456789:role/ftp-s3-access-role\",\"Policy\":\"{\"Version\": \"2012-10-17\", \"Statement\": [{\"Sid\": \"AllowListAccessToBucket\", \"Action\": [\"s3:ListBucket\"], \"Effect\": \"Allow\", \"Resource\": [\"arn:aws:s3:::xxx-prod\"]}, {\"Sid\": \"TransferDataBucketAccess\", \"Effect\": \"Allow\", \"Action\": [\"s3:PutObject\", \"s3:GetObject\", \"s3:GetObjectVersion\", \"s3:GetObjectACL\", \"s3:PutObjectACL\"], \"Resource\": [\"arn:aws:s3:::xxx-prod/xxx/new\", \"arn:aws:s3:::xxx-prod/xxx/new/*\"]}]}\",\"UserName\":\"test\",\"IdentityProviderType\":\"AWS_LAMBDA\"}",
  "StatusCode": 200,
  "Message": ""
}
```
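The "522 Data connection must use cached TLS session" message generally means the data connection is not reusing the control connection's TLS session. The client software isn't named in the question; purely as an illustration, if it were a Python ftplib client, a minimal sketch that forces session reuse on the data channel (an assumption, not the asker's setup) would look like this:

```
import ftplib
import ssl


class SessionReuseFTPS(ftplib.FTP_TLS):
    """FTP_TLS subclass that reuses the control-channel TLS session on data
    connections, which servers returning '522 Data connection must use cached
    TLS session' require."""

    def ntransfercmd(self, cmd, rest=None):
        conn, size = ftplib.FTP.ntransfercmd(self, cmd, rest)
        if self._prot_p:
            # Wrap the data socket with the TLS session of the control socket.
            conn = self.context.wrap_socket(
                conn,
                server_hostname=self.host,
                session=self.sock.session,
            )
        return conn, size


# Hypothetical usage; replace host and credentials with real values.
ftps = SessionReuseFTPS()
ftps.connect("ftp.example.com", 21)
ftps.auth()       # explicit AUTH TLS
ftps.login("test", "password")
ftps.prot_p()     # protect the data channel
print(ftps.nlst())
ftps.quit()
```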
1 answer · 0 votes · 9 views · asked 11 days ago

Help with copying an S3 bucket to another location: missing objects

Hello All, today I was trying to copy a directory from one location to another, and was using the following command to execute my copy:

aws s3 cp s3://bucketname/directory/ s3://bucketname/directory/subdirectory --recursive

The copy took overnight to complete because it was 16.4 TB in size, but when I got into work the next day it had completed. However, when I compare the two locations I get the following:

bucketname/directory/: 103,690 objects, 16.4 TB
bucketname/directory/subdirectory/: 103,650 objects, 16.4 TB

So there is a 40-object difference between the source location and the destination location. I tried using the following command to copy over the files that were missing:

aws s3 sync s3://bucketname/directory/ s3://bucket/directory/subdirectory/

which returned no results. It sat for maybe two minutes or so and then just returned to the next line. I am at my wits' end trying to copy the missing objects, and my boss thinks that I lost the data, so I need to figure out a way to get the difference between the source and destination copied over. If anyone could help me with this, I would REALLY appreciate it. I am a newbie with AWS, so I may not understand everything that I am told, but I will try anything to get this resolved. I am running all the commands from an EC2 instance that I SSH into, using the AWS CLI. Thanks to anyone who might be able to help me. Take care, -Tired & Frustrated :)
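One way to find and copy the missing objects is to list both prefixes and diff the key sets. A minimal sketch, assuming boto3 and the bucket/prefix names from the question (which are placeholders); because the destination sits underneath the source, keys already under subdirectory/ are excluded from the source listing:

```
import boto3

s3 = boto3.client("s3")

BUCKET = "bucketname"                    # placeholder bucket name from the question
SRC_PREFIX = "directory/"                # source prefix
DST_PREFIX = "directory/subdirectory/"   # destination prefix


def list_keys(bucket, prefix):
    """Return the set of object keys under a prefix, relative to that prefix."""
    keys = set()
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if obj["Key"] == prefix:   # skip a bare "folder" marker, if present
                continue
            keys.add(obj["Key"][len(prefix):])
    return keys


# Source keys, excluding everything that already lives under the destination.
src_keys = {k for k in list_keys(BUCKET, SRC_PREFIX) if not k.startswith("subdirectory/")}
dst_keys = list_keys(BUCKET, DST_PREFIX)

missing = src_keys - dst_keys
print(f"{len(missing)} objects missing from the destination")

for rel_key in sorted(missing):
    # Managed copy; handles large objects with multipart copy under the hood.
    s3.copy(
        {"Bucket": BUCKET, "Key": SRC_PREFIX + rel_key},
        BUCKET,
        DST_PREFIX + rel_key,
    )
```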
1 answer · 0 votes · 4 views · asked 11 days ago

AWS SFTP Error "Too many open files in this session, maximum 100"

Since this monday we are experiencing a problem when we try to upload a large amount of files. (49 files to be exact). After around 20 files the upload fails. ``` s-262d99d7572942eca.server.transfer.eu-central-1.amazonaws.com /aws/transfer/s-262d99d7572942eca asdf.f7955cc50d0d1bc4 2022-05-09T23:02:59.165+02:00 asdf.f7955cc50d0d1bc4 CONNECTED SourceIP=77.0.176.252 User=asdf HomeDir=/beresa-test-komola/asdf Client=SSH-2.0-ssh2js0.4.10 Role=arn:aws:iam::747969442112:role/beresa-test UserPolicy="{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"AllowListingOfUserFolder\",\n \"Action\": [\n \"s3:ListBucket\"\n ],\n \"Effect\": \"Allow\",\n \"Resource\": [\n \"arn:aws:s3:::beresa-test-komola\"\n ],\n \"Condition\": {\n \"StringLike\": {\n \"s3:prefix\": [\n \"asdf/*\",\n \"asdf\"\n ]\n }\n }\n },\n {\n \"Sid\": \"HomeDirObjectAccess\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:PutObject\",\n \"s3:GetObject\",\n \"s3:DeleteObject\",\n \"s3:GetObjectVersion\"\n ],\n \"Resource\": \"arn:aws:s3:::beresa-test-komola/asdf*\"\n }\n ]\n}" Kex=ecdh-sha2-nistp256 Ciphers=aes128-ctr,aes128-ctr 2022-05-09T23:02:59.583+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/10_x_a_10.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:03.394+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/10_x_a_10.jpg BytesIn=4226625 2022-05-09T23:03:04.005+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/11_x_a_1.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:07.215+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/11_x_a_1.jpg BytesIn=4226625 2022-05-09T23:03:07.757+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/12_x_a_37.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:10.902+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/12_x_a_37.jpg BytesIn=4226625 2022-05-09T23:03:11.433+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/13_x_a_13.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:14.579+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/13_x_a_13.jpg BytesIn=4226625 2022-05-09T23:03:14.942+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/14_x_a_43.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:18.016+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/14_x_a_43.jpg BytesIn=4226625 2022-05-09T23:03:18.403+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/15_x_a_34.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:21.463+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/15_x_a_34.jpg BytesIn=4226625 2022-05-09T23:03:21.906+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/16_x_a_44.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:25.025+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/16_x_a_44.jpg BytesIn=4199266 2022-05-09T23:03:25.431+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/17_x_a_2.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:28.497+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/17_x_a_2.jpg BytesIn=4199266 2022-05-09T23:03:28.857+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/18_x_a_5.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:31.947+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/18_x_a_5.jpg BytesIn=4199266 2022-05-09T23:03:32.374+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/19_x_a_8.jpg 
Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:35.504+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/19_x_a_8.jpg BytesIn=4199266 2022-05-09T23:03:35.986+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/1_x_a_16.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:39.104+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/1_x_a_16.jpg BytesIn=4226625 2022-05-09T23:03:39.691+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/20_x_a_11.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:42.816+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/20_x_a_11.jpg BytesIn=4199266 2022-05-09T23:03:43.224+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/21_x_a_14.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:46.274+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/21_x_a_14.jpg BytesIn=4199266 2022-05-09T23:03:46.649+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/22_x_a_17.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:49.757+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/22_x_a_17.jpg BytesIn=4199266 2022-05-09T23:03:50.141+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/23_x_a_20.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:53.307+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/23_x_a_20.jpg BytesIn=4199266 2022-05-09T23:03:53.849+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/24_x_a_23.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:03:56.933+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/24_x_a_23.jpg BytesIn=4199266 2022-05-09T23:03:57.358+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/25_x_a_26.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:00.585+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/25_x_a_26.jpg BytesIn=4199266 2022-05-09T23:04:00.942+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/26_x_a_29.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:04.174+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/26_x_a_29.jpg BytesIn=4199266 2022-05-09T23:04:04.603+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/27_x_a_32.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:07.771+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/27_x_a_32.jpg BytesIn=4199266 2022-05-09T23:04:08.179+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/28_x_a_35.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:11.279+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/28_x_a_35.jpg BytesIn=4199266 2022-05-09T23:04:11.716+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/29_x_a_38.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:14.853+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/29_x_a_38.jpg BytesIn=4199266 2022-05-09T23:04:15.316+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/2_x_a_7.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:18.435+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/2_x_a_7.jpg BytesIn=4226625 2022-05-09T23:04:18.906+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/30_x_a_41.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:22.140+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/30_x_a_41.jpg BytesIn=4199266 2022-05-09T23:04:22.565+02:00 asdf.f7955cc50d0d1bc4 OPEN 
Path=/beresa-test-komola/asdf/x/bla/31_x_a_18.jpg Mode=CREATE|TRUNCATE|WRITE 2022-05-09T23:04:25.752+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/31_x_a_18.jpg BytesIn=4159129 2022-05-09T23:04:26.141+02:00 asdf.f7955cc50d0d1bc4 ERROR Message="Too many open files in this session, maximum 100" Operation=OPEN Path=/beresa-test-komola/asdf/x/bla/32_x_a_3.jpg Mode=CREATE|TRUNCATE|WRITE ``` As you can see in the logs we are closing each path after opening it - we are uploading one file after the other. What could cause this as we are not even trying to write 100 files during the scp session?
1 answer · 0 votes · 15 views · asked 12 days ago

Not able to do a one-time load from Postgres to OpenSearch using DMS

Trying to migrate existing data from AWS RDS Postgres to AWS managed OpenSearch, but it is not working, no rows were migrated to opensearch, When checking the Cloudwatch log getting below error Bulk request failed. no retry. TotalRecordCount 4080, FailedRecordCount 4080 [1026400] (elasticsearch_bulk_utils.c:181) DMS has the following configuration: { "TargetMetadata": { "TargetSchema": "", "SupportLobs": false, "FullLobMode": false, "LobChunkSize": 0, "LimitedSizeLobMode": false, "LobMaxSize": 0, "InlineLobMaxSize": 0, "LoadMaxFileSize": 0, "ParallelLoadThreads": 5, "ParallelLoadBufferSize": 100, "BatchApplyEnabled": false, "TaskRecoveryTableEnabled": false, "ParallelLoadQueuesPerThread": 0, "ParallelApplyThreads": 0, "ParallelApplyBufferSize": 100, "ParallelApplyQueuesPerThread": 0 }, "FullLoadSettings": { "TargetTablePrepMode": "DO_NOTHING", "CreatePkAfterFullLoad": false, "StopTaskCachedChangesApplied": false, "StopTaskCachedChangesNotApplied": false, "MaxFullLoadSubTasks": 8, "TransactionConsistencyTimeout": 600, "CommitRate": 50000 }, "Logging": { "EnableLogging": true, "LogComponents": [ { "Id": "TRANSFORMATION", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "SOURCE_UNLOAD", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "IO", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "TARGET_LOAD", "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG" }, { "Id": "PERFORMANCE", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "SOURCE_CAPTURE", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "SORTER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "REST_SERVER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "VALIDATOR_EXT", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "TARGET_APPLY", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "TASK_MANAGER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "TABLES_MANAGER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "METADATA_MANAGER", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "FILE_FACTORY", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "COMMON", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "ADDONS", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "DATA_STRUCTURE", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "COMMUNICATION", "Severity": "LOGGER_SEVERITY_DEFAULT" }, { "Id": "FILE_TRANSFER", "Severity": "LOGGER_SEVERITY_DEFAULT" } ], "CloudWatchLogGroup": null, "CloudWatchLogStream": null }, "ControlTablesSettings": { "historyTimeslotInMinutes": 5, "ControlSchema": "", "HistoryTimeslotInMinutes": 5, "HistoryTableEnabled": true, "SuspendedTablesTableEnabled": false, "StatusTableEnabled": true, "FullLoadExceptionTableEnabled": false }, "StreamBufferSettings": { "StreamBufferCount": 3, "StreamBufferSizeInMB": 8, "CtrlStreamBufferSizeInMB": 5 }, "ChangeProcessingDdlHandlingPolicy": { "HandleSourceTableDropped": true, "HandleSourceTableTruncated": true, "HandleSourceTableAltered": true }, "ErrorBehavior": { "DataErrorPolicy": "LOG_ERROR", "EventErrorPolicy": null, "DataTruncationErrorPolicy": "LOG_ERROR", "DataErrorEscalationPolicy": "SUSPEND_TABLE", "DataErrorEscalationCount": 0, "TableErrorPolicy": "SUSPEND_TABLE", "TableErrorEscalationPolicy": "STOP_TASK", "TableErrorEscalationCount": 0, "RecoverableErrorCount": -1, "RecoverableErrorInterval": 5, "RecoverableErrorThrottling": true, "RecoverableErrorThrottlingMax": 1800, "RecoverableErrorStopRetryAfterThrottlingMax": true, "ApplyErrorDeletePolicy": "IGNORE_RECORD", "ApplyErrorInsertPolicy": "LOG_ERROR", "ApplyErrorUpdatePolicy": "LOG_ERROR", "ApplyErrorEscalationPolicy": 
"LOG_ERROR", "ApplyErrorEscalationCount": 0, "ApplyErrorFailOnTruncationDdl": false, "FullLoadIgnoreConflicts": true, "FailOnTransactionConsistencyBreached": false, "FailOnNoTablesCaptured": true }, "ChangeProcessingTuning": { "BatchApplyPreserveTransaction": true, "BatchApplyTimeoutMin": 1, "BatchApplyTimeoutMax": 30, "BatchApplyMemoryLimit": 500, "BatchSplitSize": 0, "MinTransactionSize": 1000, "CommitTimeout": 1, "MemoryLimitTotal": 1024, "MemoryKeepTime": 60, "StatementCacheSize": 50 }, "PostProcessingRules": null, "CharacterSetSettings": null, "LoopbackPreventionSettings": null, "BeforeImageSettings": null, "FailTaskWhenCleanTaskResourceFailed": false, "TTSettings": null } Opensearch have index with following settings { "settings": { "index.max_ngram_diff" :8, "analysis": { "analyzer": { "my_ngram_analyzer": { "type": "custom", "tokenizer": "standard", "filter": [ "lowercase", "mynGram" ] } }, "filter": { "mynGram": { "type": "nGram", "min_gram": 6, "max_gram": 14, "token_chars": [ "letter", "digit", "whitespace", "symbol" ] } } }, "number_of_shards": 6, "number_of_replicas": 1 }, "mappings" : { "properties" : { "created_at" : { "type" : "date" }, "id" : { "type" : "long" }, "name" : { "type" : "text", "analyzer":"my_ngram_analyzer" , "search_analyzer": "my_ngram_analyzer" }, "phone" : { "type" : "text", "analyzer":"my_ngram_analyzer" , "search_analyzer": "my_ngram_analyzer" }, "updated_at" : { "type" : "date" } } } } I have tried to insert a sample document using _bulk API on opensearch console and it worked, below is the thing I had tried over opensearch, which worked POST _bulk {"index":{"_index":"contacts"}} {"name": "name","phone" : "11111111","created_at" : "2021-12-21T12:12:59","updated_at" : "2021-12-21T12:12:59","id": 101}
1 answer · 0 votes · 10 views · asked 24 days ago

AWS Transfer Family - Private SFTP server connection closed

Hi, I'm curently facing a problem trying to create a private SFTP Server (deployed in a VPC) using AWS Transfer Family. So here are the steps I followed: - I started an EC2 in one of three subnets associated with the SFTP server (created in another step) - Those subnets are private - I connected to the EC2 instance using session manager - I created an ssh key named sftp_key to connect to the SFTP server - I Created an IAM role for the transfer service: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "transfer.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "aws:SourceAccount": "<AccountId>" }, "ArnLike": { "aws:SourceArn": "arn:aws:transfer:eu-west-1:<AccountId>:server/*" } } } ] } ``` - Attached an inline policy to this role: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::<BucketName>" ] }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::<BucketName>/*" } ] } ``` - Created a Role for logging management. This role has the following inline policy: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "CreateLogsForTransfer", "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:DescribeLogStreams", "logs:CreateLogGroup", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*" } ] } ``` - Created an SFTP Server using the CLI like this: ``` aws transfer create-server --identity-provider-type SERVICE_MANAGED --protocols SFTP --domain S3 --endpoint-type VPC --endpoint-details SubnetIds=$SUBNET_IDS,VpcId=$VPC_ID,SecurityGroupIds=$SG_ID --logging-role $LOGGINGROLEARN --security-policy-name $SECURITY_POLICY ``` SUBNET_IDS: list of 3 privates subnets ids VPC_ID: the concerned VPC ID SG_ID: ID of a security group. This group allows all access on port 22 (TCP) from the same subnets (SUBNET_IDS) LOGGINGROLEARN: Arn of the logging role SECURITY_POLICY=TransferSecurityPolicy-2020-06 - Created a user with the CLI: ``` aws transfer create-user --home-directory $DIRECTORY --policy file://sftp-scope-down-policy.json --role $ROLEARN --server-id $SERVERID --user-name $1 --ssh-public-key-body "$SSHKEYBODY" ``` DIRECTORY=/<BucketName>/<userName> ROLEARN: Role created before SSHKEYBODY: public key of the ssh key created on the EC2 sftp-scope-down-policy.json content: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::${transfer:HomeBucket}" ], "Condition": { "StringLike": { "s3:prefix": [ "${transfer:UserName}/*", "${transfer:UserName}" ] } } }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:DeleteObjectVersion", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*" } ] } ``` - A VPC endpoint exists for the three subnets for the following services: - com.amazonaws.eu-west-1.ec2 - com.amazonaws.eu-west-1.ssm - com.amazonaws.eu-west-1.ssmmessages ***So here is the problem:*** I tried to connect to the SFTP server from the EC2 launched in the first step using this command: ``` sftp -vvv -i sftp_key <userName>@<ServerPrivateIp> ``` the ssh logs shows that the connection suceeded but after that the connection closed directly. 
```
debug1: Authentication succeeded (publickey).
Authenticated to <ServerPrivateIp> ([<ServerPrivateIp>]:22).
```
No logs are created in CloudWatch Logs and I can see nothing special in the CloudTrail logs. Can someone explain what I missed?
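When the SSH handshake succeeds but the session closes immediately and nothing reaches CloudWatch, the server's logging role and the user's role or home directory are worth re-checking. A minimal diagnostic sketch, assuming boto3 and placeholder server/user identifiers, that pulls both configurations for review:

```
import boto3

transfer = boto3.client("transfer", region_name="eu-west-1")

# Placeholder IDs; substitute the real server ID and user name.
SERVER_ID = "s-1234567890abcdef0"
USER_NAME = "myuser"

server = transfer.describe_server(ServerId=SERVER_ID)["Server"]
user = transfer.describe_user(ServerId=SERVER_ID, UserName=USER_NAME)["User"]

# The logging role must be allowed to write to /aws/transfer/* for session
# logs to appear at all.
print("LoggingRole:", server.get("LoggingRole"))
print("EndpointType:", server["EndpointType"])
print("State:", server["State"])

# The user's role is what Transfer assumes to reach S3; a role that cannot be
# assumed or that denies access can end the session right after authentication.
print("User Role:", user["Role"])
print("HomeDirectory:", user.get("HomeDirectory"))
print("SSH keys attached:", len(user.get("SshPublicKeys", [])))
```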
1 answer · 0 votes · 8 views · asked 2 months ago

Limit SFTP access to specific subfolders only

Hi all, I've set up an SFTP server with AWS Transfer Family, with an "sftp-server" S3 bucket as storage. I created "subfolder01", "subfolder02", "subfolder03", etc. in the bucket. I defined an SFTP user and set "sftp-server" as his restricted home folder, and I want to give him read/write permissions to "subfolder01" and "subfolder02" only, with no access to all the other subfolders. But when the user connects, he sees an empty listing of his home folder, and he can only access the two subfolders if he manually types the "subfolder01/" or "subfolder02/" path in FileZilla. I would like him to see the list of all the subfolders when he connects, or better, to see only the two subfolders that he has access to. This is the policy assigned to the role of the user:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::sftp-server"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:PutObjectAcl",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::sftp-server/subfolder01/*",
        "arn:aws:s3:::sftp-server/subfolder02/*"
      ]
    }
  ]
}
```
and this is the Trusted Entities document of his role:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "transfer.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
Can you please help me?
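The listing behaviour described here is often handled with a logical home directory, which presents only the mapped folders to the user (the same Entry/Target mechanism that appears in the FTPS question above). A minimal sketch of the shape of that call, assuming boto3 and placeholder server/user identifiers; it is not a drop-in fix for this exact setup:

```
import boto3

transfer = boto3.client("transfer")

# Placeholder identifiers; use the real server ID and SFTP user name.
SERVER_ID = "s-1234567890abcdef0"
USER_NAME = "sftpuser"

# With a LOGICAL home directory, the user sees only the mapped entries,
# presented as /subfolder01 and /subfolder02 at the SFTP root.
transfer.update_user(
    ServerId=SERVER_ID,
    UserName=USER_NAME,
    HomeDirectoryType="LOGICAL",
    HomeDirectoryMappings=[
        {"Entry": "/subfolder01", "Target": "/sftp-server/subfolder01"},
        {"Entry": "/subfolder02", "Target": "/sftp-server/subfolder02"},
    ],
)
```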
1 answer · 1 vote · 11 views · asked 2 months ago