How to offload historical data to Redshift or S3?


A customer has a MySQL database and wants to offload historical data to another platform for reporting and analysis. They are currently facing performance issues during normal operations.

What do I need to take into account to make the decision? I have seen that one possible approach is to use DMS: https://aws.amazon.com/blogs/database/archiving-data-from-relational-databases-to-amazon-glacier-via-aws-dms/. I wonder whether Data Pipeline can also be used here. The destination is going to be S3 or Redshift, depending on the data. Thanks!

2 Answers
Accepted Answer

AWS DMS and Amazon Kinesis Data Firehose can stream data changes from MySQL to Redshift and S3. This is a common pattern for transactional sources whose tables have primary keys. Here is a blog post that describes how to load ongoing data lake changes from the source using AWS DMS and AWS Glue: https://aws.amazon.com/blogs/big-data/loading-ongoing-data-lake-changes-with-aws-dms-and-aws-glue/
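As a rough illustration of that pattern, here is a minimal boto3 sketch of a DMS task that replicates ongoing changes (CDC) from a MySQL source endpoint to an S3 target. This is not from the blog post; the ARNs, identifiers, schema name, and table-mapping rule are placeholders you would replace with your own resources.

```python
# Hypothetical sketch: create a DMS replication task that streams ongoing
# changes (CDC) from an existing MySQL source endpoint to an S3 target endpoint.
# All ARNs, names, and the schema filter below are placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Select every table in the example schema; narrow this for real workloads.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-historical-schema",
            "object-locator": {"schema-name": "appdb", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-s3-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:mysql-source",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:s3-target",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:example-instance",
    MigrationType="cdc",  # ongoing changes only; "full-load-and-cdc" also copies history
    TableMappings=json.dumps(table_mappings),
)
print(response["ReplicationTask"]["ReplicationTaskArn"])
```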

For the historical data load into S3 and Redshift, use DMS for low-to-moderate data volumes, provided the customer has sufficient network bandwidth to AWS. For larger datasets, on the order of tens of terabytes, exporting the MySQL data to raw files, shipping them on an AWS Snowball device, and importing them into S3 can be more practical and time-efficient.
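Once the historical files are staged in S3 (whether they arrived via DMS or Snowball), the Redshift-bound portion can be loaded with a COPY command. Below is a minimal sketch using the Redshift Data API; the cluster, database, schema, bucket path, and IAM role are all placeholder names, and the files are assumed to be Parquet.

```python
# Hypothetical sketch: run a Redshift COPY for historical files already staged in S3.
# Cluster identifier, database, user, table, bucket path, and IAM role are placeholders.
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

copy_sql = """
    COPY analytics.orders_history
    FROM 's3://example-archive-bucket/mysql/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="reporting-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)
print("Statement id:", response["Id"])
```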

If the customer wants a seamless ETL experience with the ability to source from MySQL into Redshift and S3, they can use a third-party product such as SnapLogic. Here is a blog post: https://aws.amazon.com/blogs/apn/migrating-data-warehouse-workloads-from-on-premises-databases-to-amazon-redshift-with-snaplogic/

AWS
answered 4 years ago

If the database is Aurora MySQL, you can export data directly from the database to S3 with a SELECT ... INTO OUTFILE S3 statement:

```sql
SELECT * FROM users INTO OUTFILE S3 's3://some-bucket-name/users';
```

Or you could use AWS Glue to load the data from RDS to S3: https://www.mssqltips.com/sqlservertip/5918/serverless-etl-using-aws-glue-for-rds-databases/
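For reference, a Glue ETL job along the lines of that article might look like the sketch below. It assumes the RDS/MySQL table has already been crawled into the Glue Data Catalog, and the database, table, and bucket names are made up for illustration.

```python
# Hypothetical Glue ETL script sketch: read an RDS table that has been crawled
# into the Glue Data Catalog and write it to S3 as Parquet.
# Catalog database, table name, and bucket path are placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: the crawled RDS/MySQL table in the Glue Data Catalog.
users = glue_context.create_dynamic_frame.from_catalog(
    database="mysql_catalog_db",
    table_name="appdb_users",
)

# Target: Parquet files in S3, ready for Athena, Redshift Spectrum, or a Redshift COPY.
glue_context.write_dynamic_frame.from_options(
    frame=users,
    connection_type="s3",
    connection_options={"path": "s3://example-archive-bucket/users/"},
    format="parquet",
)

job.commit()
```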

AWS
answered 7 months ago
