How to merge 2 CSV files from an S3 bucket using Lambda


Hi, I'm new to AWS. I have been trying to create a Lambda function that is triggered every time a new file is uploaded to an S3 bucket, which, for the time being, will only ever contain 2 files. The function should join the 2 files. I'm using Python and have created a layer so I can use pandas. My current attempt is posted below and yields an error message saying the name of the bucket is wrong.

Any help would be much appreciated!

import boto3
import pandas as pd

def lambda_handler(event, context):

    bucket = 'arn:aws:s3:::name-of-the-bucket/subfolder/'
    filename_1 = 'first_file.csv'
    filename_2 = 'second_file.csv'

    s3 = boto3.client('s3')

    first_obj = s3.get_object(Bucket=bucket, Key=filename_1)
    second_obj = s3.get_object(Bucket=bucket, Key=filename_2)

    first_df = pd.read_csv(first_obj['Body'])
    second_df = pd.read_csv(second_obj['Body'])

    joined_df = first_df.merge(second_df, on=['ID'], how='left')

    return joined_df
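Since the function is triggered by an S3 upload, the bucket and key can also be read from the triggering event rather than hard-coded. A minimal sketch (the record layout below is the standard shape of an S3 event notification):

import urllib.parse

def bucket_and_key_from_event(event):
    """Extract the bucket name and object key from an S3 trigger event."""
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    # Object keys arrive URL-encoded (e.g. spaces become '+'), so decode them.
    key = urllib.parse.unquote_plus(record['s3']['object']['key'])
    return bucket, key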
Asked 2 years ago · 2369 views
1 Answer
Accepted Answer

The Bucket parameter should be just the name of the bucket, without the ARN prefix or the subfolder; the subfolder belongs in the object key.

Change your code to:

bucket = 'name-of-the-bucket'
filename_1 = 'subfolder/first_file.csv'
filename_2 = 'subfolder/second_file.csv'
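Putting the fix together, a sketch of a corrected handler (bucket, key, and output names are illustrative, and the pandas layer is assumed). Note that a DataFrame is not JSON-serializable, so the merged result is written back to S3 rather than returned from the handler:

import io
import pandas as pd

def merge_csv_bodies(first_body, second_body):
    """Left-join two CSV file-like objects on their shared 'ID' column."""
    first_df = pd.read_csv(first_body)
    second_df = pd.read_csv(second_body)
    return first_df.merge(second_df, on='ID', how='left')

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime

    bucket = 'name-of-the-bucket'          # bucket name only: no ARN, no subfolder
    key_1 = 'subfolder/first_file.csv'     # the subfolder goes in the key
    key_2 = 'subfolder/second_file.csv'
    out_key = 'subfolder/joined.csv'       # hypothetical output location

    s3 = boto3.client('s3')
    first_obj = s3.get_object(Bucket=bucket, Key=key_1)
    second_obj = s3.get_object(Bucket=bucket, Key=key_2)

    joined_df = merge_csv_bodies(first_obj['Body'], second_obj['Body'])

    # Write the merged CSV back to S3 instead of returning the DataFrame.
    s3.put_object(Bucket=bucket, Key=out_key,
                  Body=joined_df.to_csv(index=False).encode('utf-8'))
    return {'rows': len(joined_df), 'output_key': out_key}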

--Syd

Syd
Answered 2 years ago
AWS Expert
Chris_G
Reviewed 2 years ago
