Is there a reliable way to know that an aws_s3.query_export_to_s3 completed successfully?


I'm working on a feature to archive old data from our Aurora PostgreSQL database to S3 using the aws_s3 extension. The export takes 20-30 minutes, and sometimes my client disconnects and retries. Even when the client disconnects, the aws_s3 export appears to keep running in the background, so on a retry I end up transferring the entire data set twice. I would prefer a reliable way to inspect the S3 objects and confirm that they are a complete copy of the query's data.

The table partitions I am archiving are 15-20 GB each, and I notice that the S3 objects are chunked into 6 GB parts. So the existence of an S3 object following the expected naming convention that is significantly smaller than 6 GB would suggest that an earlier operation ran to completion, but it's hard to be certain.
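For context, when the session does stay connected, aws_s3.query_export_to_s3 returns rows_uploaded, files_uploaded, and bytes_uploaded, which can be recorded at export time for later verification. A minimal sketch of capturing that from Python (the connection string, partition, bucket, and prefix names below are placeholders, not my real ones):

```python
import psycopg2  # any PostgreSQL driver would work; psycopg2 is just an example

# Placeholder DSN
conn = psycopg2.connect("dbname=mydb host=my-aurora-cluster user=archiver")

with conn, conn.cursor() as cur:
    # aws_s3.query_export_to_s3 returns one row of
    # (rows_uploaded, files_uploaded, bytes_uploaded)
    # if the session survives to the end of the export.
    cur.execute("""
        SELECT rows_uploaded, files_uploaded, bytes_uploaded
        FROM aws_s3.query_export_to_s3(
            'SELECT * FROM archive_2024_01',                      -- placeholder partition
            aws_commons.create_s3_uri('my-archive-bucket',        -- placeholder bucket
                                      'exports/archive_2024_01',  -- placeholder prefix
                                      'us-east-1')
        )
    """)
    rows, files, size = cur.fetchone()
    print(f"exported {rows} rows in {files} file(s), {size} bytes")
```

The problem is exactly that this result row is lost when the client disconnects mid-export, which is why I'm looking for a way to verify completion from the S3 side after the fact.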

Asked 2 months ago · 729 views
1 Answer
Accepted Answer

I realized that the S3 SelectObjectContent API (S3 Select) lets us count the rows in objects written as CSV, so the count can be checked against the number of rows the export should have produced. I think it would be awesome if the aws_s3 extension could write some object metadata so we could get this information without scanning the full objects.
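A minimal sketch of that approach with boto3, assuming the export was written as headerless CSV (the bucket and object keys here are placeholders):

```python
import boto3

s3 = boto3.client("s3")

def count_csv_rows(bucket: str, key: str) -> int:
    """Count rows in a CSV object server-side using S3 Select."""
    resp = s3.select_object_content(
        Bucket=bucket,
        Key=key,
        ExpressionType="SQL",
        Expression="SELECT COUNT(*) FROM S3Object",
        InputSerialization={"CSV": {"FileHeaderInfo": "NONE"}},  # assumes no header row
        OutputSerialization={"CSV": {}},
    )
    # The result streams back as Records events; concatenate and parse.
    out = b""
    for event in resp["Payload"]:
        if "Records" in event:
            out += event["Records"]["Payload"]
    return int(out.decode("utf-8").strip())

# Placeholder keys: sum the counts across every chunked part of one export.
total = sum(count_csv_rows("my-archive-bucket", key)
            for key in ["exports/archive_2024_01",
                        "exports/archive_2024_01_part2"])
print(total)
```

If the summed count matches a COUNT(*) over the source partition (or a rows_uploaded value recorded from a completed export), the archive can be treated as complete before deleting the source data.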

Answered 2 months ago



