You may have already figured out the problem.
The code I was using specified "ap-northeast-1" as the S3 region.
"ap-northeast-1" is a region created before 2019, so it worked properly without specifying "endpoint_url".
However, since "eu-central-1" is a region created after 2019, you need to set both the region and "endpoint_url", as shown in the code below.
import boto3
import requests
import os
import datetime

# Create the S3 client pinned to the eu-central-1 regional endpoint.
s3_client = boto3.client(
    's3',
    region_name="eu-central-1",
    endpoint_url="https://s3.eu-central-1.amazonaws.com"
)

def lambda_handler(event, context):
    # bucket_name = 'kobayashi-lambda'
    now = datetime.datetime.now()
    strnow = now.strftime('%Y-%m-%d-%H-%M-%S')
    filepath = f'/tmp/{strnow}.txt'
    object_name = f'{strnow}.txt'
    bucket_name = f'{strnow}-kobayashi-lambda'

    # Create a new bucket in eu-central-1.
    s3_client.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={
            'LocationConstraint': 'eu-central-1'
        }
    )

    # Write a small test file to /tmp.
    with open(filepath, 'w') as f:
        f.write(strnow)

    presigned_url = generate_presigned_url(bucket_name, object_name)

    # Upload the file's contents (not the path string) to the presigned URL.
    with open(filepath, 'rb') as f:
        response = requests.put(presigned_url, data=f)
    print(response.text)
    # print(presigned_url)

def generate_presigned_url(bucket_name, object_name):
    url = s3_client.generate_presigned_url(
        'put_object',
        Params={'Bucket': bucket_name, 'Key': object_name},
        ExpiresIn=3600,
        HttpMethod='PUT'
    )
    return url
So your problem probably isn't a time zone issue, but rather that you haven't set "endpoint_url" in your code.
The following documentation contains the relevant explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html
"For all AWS Regions launched after March 20, 2019 you need to specify the endpoint-url and AWS Region with the request. For a list of all the Amazon S3 Regions and endpoints, see Regions and Endpoints in the AWS General Reference."
Hello.
Why not try setting TZ as a Lambda-wide environment variable instead of setting it inside your code?
You can set Lambda environment variables by following the steps in the document below.
https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html
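If you prefer to script this instead of using the console steps in that document, here is a minimal sketch; the function name and time zone are placeholders, and note that this call replaces the function's existing environment variables:

import boto3

# Sketch only: set TZ as a function-level environment variable via the Lambda API.
# "my-presign-function" and "Europe/Berlin" are placeholder values.
lambda_client = boto3.client('lambda')
lambda_client.update_function_configuration(
    FunctionName='my-presign-function',
    Environment={'Variables': {'TZ': 'Europe/Berlin'}},
)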
Yes, I tried both. I also printed the Lambda time zone and time, and they are correct.
Does the code below work? It works in my AWS account without changing the time zone. Also, instead of changing it in Lambda, couldn't you handle the time zone conversion (for example, to and from UTC) on the client side that makes the request to the URL?
import boto3
import requests
import os
import datetime

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    bucket_name = 'kobayashi-lambda'
    now = datetime.datetime.now()
    strnow = now.strftime('%Y-%m-%d-%H-%M-%S')
    filepath = f'/tmp/{strnow}.txt'
    object_name = f'{strnow}.txt'

    # Write a small test file to /tmp.
    with open(filepath, 'w') as f:
        f.write(strnow)

    presigned_url = generate_presigned_url(bucket_name, object_name)

    # Upload the file's contents to the presigned URL.
    with open(filepath, 'rb') as f:
        requests.put(presigned_url, data=f)

def generate_presigned_url(bucket_name, object_name):
    url = s3_client.generate_presigned_url(
        'put_object',
        Params={'Bucket': bucket_name, 'Key': object_name},
        ExpiresIn=3600,
        HttpMethod='PUT'
    )
    return url
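On the client-side time zone idea above, here is a minimal sketch, assuming Python 3.9+ and a placeholder Europe/Berlin zone: keep the timestamps used for object keys in UTC and convert only for display on the client.

import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Sketch only: generate the key from UTC, convert to local time just for display.
utc_now = datetime.datetime.now(datetime.timezone.utc)
object_name = utc_now.strftime('%Y-%m-%d-%H-%M-%S') + '.txt'   # UTC-based key
local_view = utc_now.astimezone(ZoneInfo('Europe/Berlin'))     # display only
print(object_name, local_view.isoformat())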
The workflow is: an S3 bucket is created, then a file is uploaded to the bucket. To upload the file: client request > Lambda > Lambda returns the URL to the client > the client uses the URL for the PUT. So the PUT request to the URL is made on the client side, not in Lambda.
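For illustration only, a client-side sketch of that workflow, under assumptions not stated in the thread (the Lambda is exposed through an HTTP endpoint, here a hypothetical API_URL, and returns the presigned URL as plain text):

import requests

API_URL = 'https://example.execute-api.eu-central-1.amazonaws.com/prod/presign'  # placeholder

# Ask the Lambda-backed endpoint for a presigned URL.
presigned_url = requests.get(API_URL, timeout=10).text.strip()

# PUT the file's contents (not its path) directly to S3 via the presigned URL.
with open('report.txt', 'rb') as f:  # placeholder local file
    response = requests.put(presigned_url, data=f, timeout=30)
response.raise_for_status()
print('Upload status:', response.status_code)

The important detail is that the PUT goes straight to S3 via the presigned URL; Lambda is only involved in issuing it.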
I uploaded a file from my local PC to the "presigned_url" created with the code below, and it worked fine. So, I think it will work without changing Lambda's time zone from UTC.
import boto3
import requests
import os
import datetime

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    bucket_name = 'kobayashi-lambda'
    now = datetime.datetime.now()
    strnow = now.strftime('%Y-%m-%d-%H-%M-%S')
    # filepath = f'/tmp/{strnow}.txt'
    object_name = f'{strnow}.txt'
    # with open(filepath, 'w') as f:
    #     f.write(strnow)

    presigned_url = generate_presigned_url(bucket_name, object_name)
    # requests.put(presigned_url, data=filepath)
    print(presigned_url)

def generate_presigned_url(bucket_name, object_name):
    url = s3_client.generate_presigned_url(
        'put_object',
        Params={'Bucket': bucket_name, 'Key': object_name},
        ExpiresIn=3600,
        HttpMethod='PUT'
    )
    return url
Yes, as you described, I changed the code to create a new S3 bucket at the time of the request, and I verified that the upload still works. Could you share the code that issues the pre-signed URL and the code that makes the request to the URL?
import boto3
import requests
import os
import datetime

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # bucket_name = 'kobayashi-lambda'
    now = datetime.datetime.now()
    strnow = now.strftime('%Y-%m-%d-%H-%M-%S')
    filepath = f'/tmp/{strnow}.txt'
    object_name = f'{strnow}.txt'
    bucket_name = f'{strnow}-kobayashi-lambda'

    # Create a new bucket in ap-northeast-1 at request time.
    s3_client.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={
            'LocationConstraint': 'ap-northeast-1'
        }
    )

    with open(filepath, 'w') as f:
        f.write(strnow)

    presigned_url = generate_presigned_url(bucket_name, object_name)

    # Upload the file's contents to the presigned URL.
    with open(filepath, 'rb') as f:
        requests.put(presigned_url, data=f)
    print(presigned_url)

def generate_presigned_url(bucket_name, object_name):
    url = s3_client.generate_presigned_url(
        'put_object',
        Params={'Bucket': bucket_name, 'Key': object_name},
        ExpiresIn=3600,
        HttpMethod='PUT'
    )
    return url
If the bucket was recently created and you're using the global bucket name, it can take up to 24 hours for the global DNS name to become available in all Regions.
It could be this rather than a time zone issue.
And is there a workaround for this? What I don't understand is how I can generate the URL, yet the URL doesn't work.
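One possible workaround, echoing the answer at the top of this thread: pin the S3 client to the bucket's Regional endpoint so the presigned URL does not depend on the global bucket hostname having propagated. A minimal sketch with placeholder bucket and key names:

import boto3

# Sketch only: build presigned URLs against the Regional endpoint.
s3_client = boto3.client(
    's3',
    region_name='eu-central-1',
    endpoint_url='https://s3.eu-central-1.amazonaws.com',
)
url = s3_client.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'example-kobayashi-lambda', 'Key': 'example.txt'},
    ExpiresIn=3600,
)
print(url)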
A small correction: eu-central-1, the Frankfurt Region, was launched in October 2014, not after March 2019: https://aws.amazon.com/blogs/aws/aws-region-germany/
@Leo K Thank you for providing the information!