Hi,
From your code, you appear to be using the regular boto3 upload_file call. In that case, pass the real bucket name to this call, not the access point URL or ARN. This call doesn't work with an access point URL, only with the plain bucket name.
Access point URLs are intended for direct HTTP access to the bucket, not for use via the AWS SDK.
Best,
Didier
Thank you for your detailed question about uploading files to S3 using an Access Point URL. I understand you're trying to use the Access Point URL format for your application, which needs to work from an on-premises environment.
While it's possible to use Access Point ARNs for uploading objects to S3, using the Access Point URL in the format you've described (https://access-point-name-account-id.s3-accesspoint.region.amazonaws.com) is not directly supported for object operations like PutObject.
The Access Point URL you're trying to use is primarily intended for DNS resolution and routing purposes, not as an endpoint for API operations. This explains the error you're encountering when trying to use it as an endpoint_url in your boto3 client configuration.
For your use case, where you need to use a URL-style endpoint, I would recommend considering the following alternatives:
- Use S3 Presigned URLs: You can generate a presigned URL for the PutObject operation using your Access Point. This URL can then be used to upload objects directly, even from environments that don't have AWS credentials. Here's how you could modify your code to generate and use a presigned URL:
```python
import boto3
import requests
from botocore.client import Config

s3_client = boto3.client('s3', config=Config(signature_version='s3v4'))

bucket_name = 'your-access-point-arn'
key = 'your-object-key'
file_path = 'path/to/your/file.txt'

# Generate a presigned URL for uploading
presigned_url = s3_client.generate_presigned_url(
    'put_object',
    Params={'Bucket': bucket_name, 'Key': key},
    ExpiresIn=3600  # URL expires in 1 hour
)

# Use the presigned URL to upload the file
with open(file_path, 'rb') as file:
    response = requests.put(presigned_url, data=file)

print(f"File upload status: {response.status_code}")
```
This approach allows you to generate a URL that can be used to upload files directly to your S3 bucket through the specified Access Point, without needing AWS credentials at the time of upload.
- Use S3 Transfer Acceleration: If your use case allows, you could consider using S3 Transfer Acceleration, which provides a different endpoint format that might be more suitable for your needs. However, this would require enabling Transfer Acceleration on your bucket and using a different endpoint format.
- API Gateway + Lambda: If you need more control over the URL format and the upload process, you could create an API Gateway endpoint that triggers a Lambda function. The Lambda function would then handle the S3 upload using the Access Point ARN. This gives you full control over the URL format your application uses.
Remember, when using Access Points, you should use the Access Point ARN (arn:aws:s3:region:account-id:accesspoint/access-point-name) as the bucket name in your S3 operations, not just the access point name.
I hope these alternatives help you find a solution that fits your architectural requirements. If you need further clarification or have additional questions, please don't hesitate to ask.
Sources
Community | Secure File Uploads to S3 Using AWS Lambda
Generating a presigned URL to upload an object to an S3 on Outposts bucket - Amazon S3 on Outposts
Using a bucket-style alias for your S3 on Outposts bucket access point - Amazon S3 on Outposts

Hi Sir, I hope you are doing well. Thank you for your answer. Based on the first answer, I started to investigate, and it seems that an Access Point HTTPS URL may not be what I need, since I can only use S3 operations with the ARN or alias of this resource.
In this particular case, would it be better to use an S3 endpoint directly? For example: https://bucket-name.s3.amazonaws.com