How to copy a large dataset from an on-premises Hadoop cluster to S3?


A customer has a Hadoop cluster in an engineered IBM box with internal InfiniBand connecting the data nodes to the master node. Only the master node (and the slave node) are on the IP network; the data nodes do not have IP addresses assigned and are not reachable from the network. The customer has 50 TB of data (individual files are up to 40 GB each, stored in Hive) to be moved to S3. We have Direct Connect in place and are looking at options to move this data. Time is not a constraint; however, the use of Snowball devices has been ruled out for now.

Normally we could have used DistCp to copy data from the Hadoop cluster to S3. However, since the data nodes are not reachable, the DistCp utility will not work. What other options could work?

  • WebHDFS?

  • HttpFS?

  • Any other option to transfer 50 TB of data that doesn't involve significant work on the customer side (e.g., networking changes)?

Asked 5 years ago · 485 views
1 Answer
Accepted Answer

I understand that the nodes don't have external connectivity except for the master, so you cannot run DistCp even from inside the cluster.
I think the easiest approach would be a script that runs on the master, pulls files from HDFS onto the local disk, and uses the standard aws s3 command line client to upload them (tweaking the bandwidth and parallelism a bit).
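A minimal sketch of such a script, assuming the master has enough local scratch space for one file at a time and the AWS CLI is installed and configured with credentials; the HDFS path, bucket name, and throttling values below are placeholders:

    # Throttle the AWS CLI so the upload doesn't saturate the Direct Connect link
    aws configure set default.s3.max_concurrent_requests 10
    aws configure set default.s3.max_bandwidth 100MB/s

    STAGING=/data/staging                           # scratch directory on the master's local disk
    BUCKET=s3://example-bucket/hive-export          # placeholder destination bucket/prefix

    # Iterate over the files of one table directory (-C prints bare paths only)
    for f in $(hdfs dfs -ls -C /user/hive/warehouse/mydb.db/mytable); do
        name=$(basename "$f")
        hdfs dfs -get "$f" "$STAGING/$name"         # HDFS -> local disk, over the internal cluster network
        aws s3 cp "$STAGING/$name" "$BUCKET/$name"  # local disk -> S3 (multipart upload for large files)
        rm -f "$STAGING/$name"                      # free the staging space before the next file
    done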
The other option, if you don't want the temporary local copy, is to run DistCp in local mode, so that it runs only on the master but can still access HDFS and S3 directly.
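A hypothetical invocation, assuming the S3A connector (the hadoop-aws jar and its matching AWS SDK jar) is available on the master's classpath; the paths, bucket, and credentials are placeholders:

    # Run DistCp with the local job runner so the copy executes only on the master
    hadoop distcp \
      -D mapreduce.framework.name=local \
      -D fs.s3a.access.key=YOUR_ACCESS_KEY \
      -D fs.s3a.secret.key=YOUR_SECRET_KEY \
      hdfs:///user/hive/warehouse/mydb.db/mytable \
      s3a://example-bucket/hive-export/mytable

Running it per table (or per partition) with the -update flag makes it straightforward to resume after a failure.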

AFAIK, the web solutions you propose for accessing the cluster externally would require the DataNodes to be reachable (the master doesn't actually hold the data).
A workaround would be to use a proxy service like Apache Knox, but handling all the security is too much hassle compared with simply running a script on the cluster master.

AWS
Expert
Answered 5 years ago
