
Migration & Transfer

Easily migrate to AWS and see business results faster. We’ve taken our experience with migrations to AWS and developed a broad set of first- and third-party tools and services to help simplify and accelerate migrations. Our migration tool catalog includes an end-to-end set of tools to help ensure your investment achieves your desired business outcomes.

Recent questions


Using DMS and SCT for extracting/migrating data from Cassandra to S3

IHAC who is doing scoping with an architecture using DMS and SCT. I had a few questions I was hoping you could get answered for me.

1. Does AWS DMS support data validation with Cassandra as a source? I don’t see it here - https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.DataValidation - but I do see Cassandra as a valid source here: https://aws.amazon.com/about-aws/whats-new/2018/09/aws-dms-aws-sct-now-support-the-migration-of-apache-cassandra-databases/

2. Does AWS DMS support ongoing replication with Cassandra as a source? Reading the docs, it looks like if I wanted to extract data from Cassandra and write it to S3 (using DMS), then post-process that data into a different format (like JSON) and write it to a different S3 bucket, I could do so by attaching a Lambda to the S3 event from the original DMS extract and drop. Can you confirm my understanding?

3. How is incremental data loaded on an ongoing basis after the initial load from Cassandra (with DMS)? In the docs it looks like it’s stored in S3 in CSV form. Does it write one CSV per source table and keep appending to or updating the existing CSV? Does it create one CSV per row, per batch, etc.? I’m wondering how the event in step 3 would be triggered if I wanted to continuously post-process updates as they come in, in real time, and convert the source data from Cassandra into JSON data I store on S3.
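The Lambda post-processing idea in question 2 can be sketched roughly as below. This is a minimal illustration, not a confirmed DMS integration: the target bucket name and the column list are hypothetical, and DMS's actual CSV layout for a Cassandra source should be verified against the DMS documentation before relying on it.

```python
import csv
import io
import json


def csv_rows_to_json_lines(csv_text, fieldnames):
    """Convert header-less CSV rows into JSON Lines, one object per row.

    `fieldnames` must match the source table's column order (hypothetical here).
    """
    reader = csv.reader(io.StringIO(csv_text))
    return "\n".join(json.dumps(dict(zip(fieldnames, row))) for row in reader)


def handler(event, context):
    """S3-event-triggered Lambda: read the CSV object DMS dropped and write a
    JSON Lines copy to a second bucket (bucket name is a placeholder)."""
    import boto3  # imported lazily so csv_rows_to_json_lines is testable offline

    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        # Column names below are illustrative only.
        out = csv_rows_to_json_lines(body, ["id", "name", "updated_at"])
        s3.put_object(
            Bucket="my-json-bucket",  # hypothetical destination bucket
            Key=key.rsplit(".", 1)[0] + ".jsonl",
            Body=out.encode("utf-8"),
        )
```

For the real-time concern in question 3: S3 emits one event per object created, so however DMS batches its CSV output (per table, per batch), each new object would invoke this handler once.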
0
answers
0
votes
3
views
asked 5 days ago

How do I transfer my AWS account to another person or business?

I am selling my site and need to transfer the AWS account to the buyer's business (the buyers do not use AWS for their other sites, but they want my site to continue on AWS). I cannot figure out how to do it. Do I need to pay for support, and at what level?

This is Amazon's advice on transferring ownership of an account: https://aws.amazon.com/premiumsupport/knowledge-center/transfer-aws-account/

"To assign ownership of an AWS account and its resources to another party or business, contact AWS Support for help: Sign in to the AWS Management Console as the root user. Open the AWS Support Center. Choose Create case. Enter the details of your case: Choose Account and billing support. For Type, choose Account. For Category, choose Ownership Transfer. For all other fields, enter the details for your case. For Preferred contact language, choose your preferred language. For Contact methods, choose your preferred contact method. Choose Submit. AWS Support will contact you with next steps and help you transfer your account ownership."

I have done all this but have not yet been contacted (24 hours). The text seems to suggest that advice on transferring ownership is a necessary part of transferring an AWS root account to a company, and that such advice is provided free by Amazon, since nothing is said about pricing. If, on the other hand, AWS clients must pay for a support package to transfer ownership, which package? The $29 Developer package, the $100 Business package, or some other package? How quickly does AWS respond? How quick is the transfer process? I am finding this very frustrating.
1
answers
0
votes
13
views
asked 7 days ago

FTP Transfer Family, FTPS, TLS resume failed

We have:
- an AWS Transfer Family server with the FTPS protocol
- a custom hostname and a valid ACM certificate, which is attached to the FTP server
- a Lambda for the identity provider

The client is using:
- explicit AUTH TLS
- our custom hostname
- port 21

The problem: the client can connect and the authentication is successful (see below for the auth test result), but during communication with the FTP server a TLS_RESUME_FAILURE occurs. The error in the customer's client is "522 Data connection must use cached TLS session", and the error in the CloudWatch log group of the transfer server is just "TLS_RESUME_FAILURE". I have no clue why this happens. Any ideas?

Here is the auth test result:
```
{
    "Response": "{\"HomeDirectoryDetails\":\"[{\\\"Entry\\\":\\\"/\\\",\\\"Target\\\":\\\"/xxx/new\\\"}]\",\"HomeDirectoryType\":\"LOGICAL\",\"Role\":\"arn:aws:iam::123456789:role/ftp-s3-access-role\",\"Policy\":\"{\"Version\": \"2012-10-17\", \"Statement\": [{\"Sid\": \"AllowListAccessToBucket\", \"Action\": [\"s3:ListBucket\"], \"Effect\": \"Allow\", \"Resource\": [\"arn:aws:s3:::xxx-prod\"]}, {\"Sid\": \"TransferDataBucketAccess\", \"Effect\": \"Allow\", \"Action\": [\"s3:PutObject\", \"s3:GetObject\", \"s3:GetObjectVersion\", \"s3:GetObjectACL\", \"s3:PutObjectACL\"], \"Resource\": [\"arn:aws:s3:::xxx-prod/xxx/new\", \"arn:aws:s3:::xxx-prod/xxx/new/*\"]}]}\",\"UserName\":\"test\",\"IdentityProviderType\":\"AWS_LAMBDA\"}",
    "StatusCode": 200,
    "Message": ""
}
```
1
answers
0
votes
7
views
asked 8 days ago

Help with copying s3 bucket to another location missing objects

Hello All,

Today I was trying to copy a directory from one location to another, using the following command:

aws s3 cp s3://bucketname/directory/ s3://bucketname/directory/subdirectory --recursive

The copy took overnight to complete because it was 16.4 TB in size, but when I got into work the next day it had completed. However, when I compare the two locations I get the following:

bucketname/directory/ - 103,690 objects - 16.4 TB
bucketname/directory/subdirectory/ - 103,650 objects - 16.4 TB

So there is a 40-object difference between the source location and the destination location. I tried using the following command to copy over the files that were missing:

aws s3 sync s3://bucketname/directory/ s3://bucket/directory/subdirectory/

which returned no results. It sat for maybe 2 minutes or so, then just returned to the next line. I am at my wit's end trying to copy over the missing objects, and my boss thinks I lost the data, so I need to figure out a way to get the difference between the source and destination copied over. If anyone could help me with this, I would REALLY appreciate it. I am a newbie with AWS, so I may not understand everything I am told, but I will try anything to get this resolved. I am running all the commands from an EC2 instance that I SSH into, using the AWS CLI.

Thanks to anyone who might be able to help me. Take care,
-Tired & Frustrated :)
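One way to find exactly which 40 keys are missing is to list both prefixes, reduce each listing to relative key names, and diff the two sorted lists. This is a sketch using the placeholder bucket/prefix names from the question; note that `awk '{print $4}'` assumes keys contain no spaces.

```shell
# List both sides and reduce each to a sorted list of relative key names.
# The destination is nested inside the source, so exclude it from the
# source listing to keep the comparison honest.
aws s3 ls s3://bucketname/directory/ --recursive \
  | awk '{print $4}' | sed 's|^directory/||' \
  | grep -v '^subdirectory/' | sort > source.txt

aws s3 ls s3://bucketname/directory/subdirectory/ --recursive \
  | awk '{print $4}' | sed 's|^directory/subdirectory/||' | sort > dest.txt

# Keys present in the source but missing from the destination.
comm -23 source.txt dest.txt
```

Each key this prints can then be copied individually with `aws s3 cp`. Comparing object counts alone can also mislead when the destination sits under the source prefix, since the source count includes the already-copied objects.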
1
answers
0
votes
2
views
asked 8 days ago

AWS SFTP Error "Too many open files in this session, maximum 100"

Since this Monday we have been experiencing a problem when we try to upload a large number of files (49 files, to be exact). After around 20 files the upload fails.
```
s-262d99d7572942eca.server.transfer.eu-central-1.amazonaws.com /aws/transfer/s-262d99d7572942eca asdf.f7955cc50d0d1bc4
2022-05-09T23:02:59.165+02:00 asdf.f7955cc50d0d1bc4 CONNECTED SourceIP=77.0.176.252 User=asdf HomeDir=/beresa-test-komola/asdf Client=SSH-2.0-ssh2js0.4.10 Role=arn:aws:iam::747969442112:role/beresa-test UserPolicy="{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"AllowListingOfUserFolder\",\n \"Action\": [\n \"s3:ListBucket\"\n ],\n \"Effect\": \"Allow\",\n \"Resource\": [\n \"arn:aws:s3:::beresa-test-komola\"\n ],\n \"Condition\": {\n \"StringLike\": {\n \"s3:prefix\": [\n \"asdf/*\",\n \"asdf\"\n ]\n }\n }\n },\n {\n \"Sid\": \"HomeDirObjectAccess\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:PutObject\",\n \"s3:GetObject\",\n \"s3:DeleteObject\",\n \"s3:GetObjectVersion\"\n ],\n \"Resource\": \"arn:aws:s3:::beresa-test-komola/asdf*\"\n }\n ]\n}" Kex=ecdh-sha2-nistp256 Ciphers=aes128-ctr,aes128-ctr
2022-05-09T23:02:59.583+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/10_x_a_10.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:03.394+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/10_x_a_10.jpg BytesIn=4226625
2022-05-09T23:03:04.005+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/11_x_a_1.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:07.215+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/11_x_a_1.jpg BytesIn=4226625
2022-05-09T23:03:07.757+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/12_x_a_37.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:10.902+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/12_x_a_37.jpg BytesIn=4226625
2022-05-09T23:03:11.433+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/13_x_a_13.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:14.579+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/13_x_a_13.jpg BytesIn=4226625
2022-05-09T23:03:14.942+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/14_x_a_43.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:18.016+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/14_x_a_43.jpg BytesIn=4226625
2022-05-09T23:03:18.403+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/15_x_a_34.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:21.463+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/15_x_a_34.jpg BytesIn=4226625
2022-05-09T23:03:21.906+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/16_x_a_44.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:25.025+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/16_x_a_44.jpg BytesIn=4199266
2022-05-09T23:03:25.431+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/17_x_a_2.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:28.497+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/17_x_a_2.jpg BytesIn=4199266
2022-05-09T23:03:28.857+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/18_x_a_5.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:31.947+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/18_x_a_5.jpg BytesIn=4199266
2022-05-09T23:03:32.374+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/19_x_a_8.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:35.504+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/19_x_a_8.jpg BytesIn=4199266
2022-05-09T23:03:35.986+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/1_x_a_16.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:39.104+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/1_x_a_16.jpg BytesIn=4226625
2022-05-09T23:03:39.691+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/20_x_a_11.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:42.816+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/20_x_a_11.jpg BytesIn=4199266
2022-05-09T23:03:43.224+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/21_x_a_14.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:46.274+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/21_x_a_14.jpg BytesIn=4199266
2022-05-09T23:03:46.649+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/22_x_a_17.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:49.757+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/22_x_a_17.jpg BytesIn=4199266
2022-05-09T23:03:50.141+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/23_x_a_20.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:53.307+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/23_x_a_20.jpg BytesIn=4199266
2022-05-09T23:03:53.849+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/24_x_a_23.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:03:56.933+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/24_x_a_23.jpg BytesIn=4199266
2022-05-09T23:03:57.358+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/25_x_a_26.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:04:00.585+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/25_x_a_26.jpg BytesIn=4199266
2022-05-09T23:04:00.942+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/26_x_a_29.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:04:04.174+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/26_x_a_29.jpg BytesIn=4199266
2022-05-09T23:04:04.603+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/27_x_a_32.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:04:07.771+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/27_x_a_32.jpg BytesIn=4199266
2022-05-09T23:04:08.179+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/28_x_a_35.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:04:11.279+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/28_x_a_35.jpg BytesIn=4199266
2022-05-09T23:04:11.716+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/29_x_a_38.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:04:14.853+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/29_x_a_38.jpg BytesIn=4199266
2022-05-09T23:04:15.316+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/2_x_a_7.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:04:18.435+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/2_x_a_7.jpg BytesIn=4226625
2022-05-09T23:04:18.906+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/30_x_a_41.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:04:22.140+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/30_x_a_41.jpg BytesIn=4199266
2022-05-09T23:04:22.565+02:00 asdf.f7955cc50d0d1bc4 OPEN Path=/beresa-test-komola/asdf/x/bla/31_x_a_18.jpg Mode=CREATE|TRUNCATE|WRITE
2022-05-09T23:04:25.752+02:00 asdf.f7955cc50d0d1bc4 CLOSE Path=/beresa-test-komola/asdf/x/bla/31_x_a_18.jpg BytesIn=4159129
2022-05-09T23:04:26.141+02:00 asdf.f7955cc50d0d1bc4 ERROR Message="Too many open files in this session, maximum 100" Operation=OPEN Path=/beresa-test-komola/asdf/x/bla/32_x_a_3.jpg Mode=CREATE|TRUNCATE|WRITE
```
As you can see in the logs, we close each path after opening it - we upload one file after the other. What could cause this, given that we are not even trying to write 100 files during the session?
1
answers
0
votes
8
views
asked 9 days ago
