How to offload historical data to Redshift or S3?


A customer has a MySQL database and wants to offload historical data to another platform for reporting and analysis. They are currently facing performance issues during normal operations.

What do I need to take into account to make the decision? I have seen that one possible way is to use DMS: https://aws.amazon.com/blogs/database/archiving-data-from-relational-databases-to-amazon-glacier-via-aws-dms/. I wonder if Data Pipeline can also be used here. The destination will be S3 or Redshift, depending on the data. Thanks!

2 Answers
Accepted Answer

AWS DMS and Kinesis Data Firehose can stream data changes from MySQL into Redshift and S3. This is a common pattern for transactional sources whose tables have primary keys. Here is a blog post that describes how to load ongoing changes from a source using AWS Glue and DMS: https://aws.amazon.com/blogs/big-data/loading-ongoing-data-lake-changes-with-aws-dms-and-aws-glue/
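As a sketch of what the DMS side involves (all schema, table, and rule names below are hypothetical), the table-mapping document that tells a replication task which historical tables to copy might look like this:

```python
import json

# Hypothetical DMS table-mapping document: include every table in the
# "sales" schema whose name starts with "orders_" (e.g. monthly history tables).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders-history",
            "object-locator": {"schema-name": "sales", "table-name": "orders_%"},
            "rule-action": "include",
        }
    ]
}

# This JSON string is what you would pass as the TableMappings parameter
# of create_replication_task in the DMS API (e.g. via boto3).
print(json.dumps(table_mappings, indent=2))
```

The same selection-rule format works whether the task target is an S3 endpoint or a Redshift endpoint; only the target endpoint settings change.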

For a historical data load into S3 and Redshift, use DMS for low to moderate amounts of data, provided the customer has significant network bandwidth to AWS. For larger data sets, on the order of tens of TB, exporting the MySQL data into raw files, moving them with AWS Snowball, and importing them into S3 can be more reasonable and time-efficient.
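A quick back-of-the-envelope calculation helps decide between DMS over the network and Snowball. This sketch (the bandwidth and utilization figures are illustrative assumptions, not measurements) estimates how long a pure network transfer would take:

```python
def transfer_days(data_tb: float, bandwidth_mbps: float, utilization: float = 0.8) -> float:
    """Estimated days to push data over the network at a sustained utilization."""
    bits = data_tb * 1e12 * 8                      # decimal terabytes -> bits
    seconds = bits / (bandwidth_mbps * 1e6 * utilization)
    return seconds / 86400

# Example: 20 TB over a 100 Mbps link at 80% sustained utilization
print(round(transfer_days(20, 100), 1))  # about 23 days -> Snowball territory
```

If the estimate runs into weeks, a Snowball export is usually the more practical choice; if it is a day or two, DMS over the network keeps the workflow simpler.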

If the customer wants a seamless ETL experience with sourcing capability from MySQL into Redshift and S3, they can use a third-party product such as SnapLogic. Here is a blog post: https://aws.amazon.com/blogs/apn/migrating-data-warehouse-workloads-from-on-premises-databases-to-amazon-redshift-with-snaplogic/

AWS
answered 4 years ago

To export data from RDS to an S3 file you can use the INTO OUTFILE S3 syntax (note that this is an Aurora MySQL feature and requires an IAM role that allows the cluster to write to the bucket):

SELECT * FROM users INTO OUTFILE S3 's3://some-bucket-name/users';

Or you could use AWS Glue to load the data from RDS to S3: https://www.mssqltips.com/sqlservertip/5918/serverless-etl-using-aws-glue-for-rds-databases/
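For the Glue route, a minimal sketch of defining such a job via boto3 looks like the following. All names here (job name, role ARN, script path, connection name) are placeholders, and the ETL script itself would live separately in S3:

```python
import json

def build_glue_job_definition(job_name, role_arn, script_s3_path, connection_name):
    """Build the keyword arguments for glue.create_job (boto3)."""
    return {
        "Name": job_name,
        "Role": role_arn,
        "Command": {
            "Name": "glueetl",               # Spark ETL job
            "ScriptLocation": script_s3_path,
            "PythonVersion": "3",
        },
        # The Glue connection that holds the RDS MySQL JDBC details
        "Connections": {"Connections": [connection_name]},
        "GlueVersion": "4.0",
        "NumberOfWorkers": 2,
        "WorkerType": "G.1X",
    }

job = build_glue_job_definition(
    "mysql-to-s3-archive",
    "arn:aws:iam::123456789012:role/GlueServiceRole",
    "s3://my-etl-bucket/scripts/archive_users.py",
    "my-mysql-connection",
)
print(json.dumps(job, indent=2))
# With boto3: boto3.client("glue").create_job(**job)
# then boto3.client("glue").start_job_run(JobName=job["Name"])
```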

AWS
answered 7 months ago
