How to transfer dataset (whole dataset at once) from Bigquery to S3 by AWS Glue?


Hi Dears

Hope all is great with you.

I am trying to migrate data from Google BigQuery to AWS S3 using AWS Glue. I have an issue on the source side, as shown below (console).

Connection options:

Enter additional key-value pairs for your data source connection:

Key: parentProject, Value: serene-craft-3363XX

Key: table, Value: bigquery-public-data:austin_bikeshare.bikeshare_stations

My question is: if I want to specify a whole dataset instead of a single table, how can I do that?

Note that I tried writing the key as ( dataset ), but it did not work!

Can you advise, please? Thanks in advance, Basem

1 Answer

Accepted Answer

Hi,

If I understand correctly, you are using AWS Glue Studio and the AWS Glue BigQuery connector.

Currently, the Glue BigQuery connector works at the table level (as the BigQuery Spark connector does).

If you want to export all the tables in a dataset, you can edit the script generated by Glue Studio and customize it.

You would first need to add the google.cloud Python library using the method mentioned here.

Then, before you read any table, you retrieve the list of tables in the dataset as described here.

Finally, you iterate over the tables and read/write each one to S3; a rough sketch is shown below.
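As a minimal sketch (assuming a Glue PySpark job, the marketplace BigQuery connector with a connection I'm calling `bigquery-connection`, a hypothetical output bucket `my-output-bucket`, and that google-cloud-bigquery has been added to the job and can authenticate to your GCP project), the customized script could look roughly like this:

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# google-cloud-bigquery must be added to the job first,
# as mentioned above (e.g. via --additional-python-modules)
from google.cloud import bigquery

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# 1. List all tables in the BigQuery dataset
#    (project and dataset names here are just the examples from the question)
bq_client = bigquery.Client(project="serene-craft-3363XX")
tables = bq_client.list_tables("bigquery-public-data.austin_bikeshare")

# 2. Iterate over the tables: read each one through the connector
#    and write it out to S3 (Parquet chosen here as an example)
for table in tables:
    table_ref = f"{table.project}:{table.dataset_id}.{table.table_id}"

    dyf = glueContext.create_dynamic_frame.from_options(
        connection_type="marketplace.spark",
        connection_options={
            "parentProject": "serene-craft-3363XX",
            "table": table_ref,
            "connectionName": "bigquery-connection",  # assumed connection name
        },
        transformation_ctx=f"read_{table.table_id}",
    )

    glueContext.write_dynamic_frame.from_options(
        frame=dyf,
        connection_type="s3",
        connection_options={
            "path": f"s3://my-output-bucket/{table.dataset_id}/{table.table_id}/"
        },
        format="parquet",
        transformation_ctx=f"write_{table.table_id}",
    )

job.commit()
```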

This is one possibility. The other would be to use an orchestrator such as Step Functions (an alternative could be Airflow) to run a Python script that reads the list of tables and then executes your job (once parametrized by table name) in parallel for each table.
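For that second option, a minimal sketch of the driver script (assuming a Glue job named `bigquery-to-s3` that has been parametrized with a hypothetical `--table_name` argument, and that the script can authenticate to BigQuery) could be:

```python
import boto3
from google.cloud import bigquery

glue = boto3.client("glue")
bq_client = bigquery.Client(project="serene-craft-3363XX")

# Read the list of tables in the dataset, then start one Glue job run
# per table; the runs execute in parallel, each exporting one table.
for table in bq_client.list_tables("bigquery-public-data.austin_bikeshare"):
    glue.start_job_run(
        JobName="bigquery-to-s3",  # assumed job name
        Arguments={
            "--table_name": f"{table.project}:{table.dataset_id}.{table.table_id}",
        },
    )
```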

Hope this helps.

AWS
EXPERT
answered 2 years ago
