How to merge 2 CSV files from an S3 bucket using Lambda?


Hi, I'm new to AWS. I have been trying to create a Lambda function that gets triggered every time a new file is uploaded to an S3 bucket which, for the time being, will only ever contain 2 files. The function should join the 2 files. I'm using Python for this and have created a layer to be able to use pandas. My current attempt is posted below and yields an error message about the name of the bucket being wrong.

Any help would be much appreciated!

import boto3
import pandas as pd

def lambda_handler(event, context):

    bucket = 'arn:aws:s3:::name-of-the-bucket/subfolder/'
    filename_1 = 'first_file.csv'
    filename_2 = 'second_file.csv'

    s3 = boto3.client('s3')

    first_obj = s3.get_object(Bucket=bucket, Key=filename_1)
    second_obj = s3.get_object(Bucket=bucket, Key=filename_2)

    first_df = pd.read_csv(first_obj['Body'])
    second_df = pd.read_csv(second_obj['Body'])

    joined_df = first_df.merge(second_df, on=['ID'], how='left')

    return joined_df
Asked 2 years ago · Viewed 2369 times
1 Answer
Accepted Answer

The bucket name should be just the name of the bucket, without the subfolder (and without the ARN prefix). The subfolder is a key prefix, so it belongs in the object key.

Change your code to:

bucket = 'name-of-the-bucket'
filename_1 = 'subfolder/first_file.csv'
filename_2 = 'subfolder/second_file.csv'
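
For reference, a minimal sketch of what the full corrected handler could look like, assuming the bucket is named name-of-the-bucket and the two files sit under subfolder/ (the output key subfolder/joined_file.csv is purely illustrative). Since a pandas DataFrame is not a JSON-serializable return value for Lambda, the sketch writes the merged result back to S3 instead of returning it:

import boto3
import pandas as pd

def lambda_handler(event, context):
    # Bucket name only; the "subfolder" prefix is part of the object keys.
    bucket = 'name-of-the-bucket'
    filename_1 = 'subfolder/first_file.csv'
    filename_2 = 'subfolder/second_file.csv'

    s3 = boto3.client('s3')

    # Fetch both objects; get_object returns a streaming Body that pandas can read.
    first_obj = s3.get_object(Bucket=bucket, Key=filename_1)
    second_obj = s3.get_object(Bucket=bucket, Key=filename_2)

    first_df = pd.read_csv(first_obj['Body'])
    second_df = pd.read_csv(second_obj['Body'])

    # Left join on the shared ID column, as in the original attempt.
    joined_df = first_df.merge(second_df, on=['ID'], how='left')

    # Write the result back to S3 (output key is an assumption for illustration)
    # rather than returning the DataFrame, which Lambda cannot serialize.
    s3.put_object(
        Bucket=bucket,
        Key='subfolder/joined_file.csv',
        Body=joined_df.to_csv(index=False).encode('utf-8'),
    )

    return {'statusCode': 200, 'body': 'joined_file.csv written'}
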

--Syd

Syd
Answered 2 years ago
AWS
EXPERT
Chris_G
Reviewed 2 years ago
