
Using Access Point URL to upload a file


Greetings, I hope you are doing well.

I am trying to build an application that uploads a file from an on-premises environment to an S3 bucket in AWS. For this, I have created an Access Point that allows internet traffic and whose policy includes the PutObject permission.

Firstly, I am trying to test uploading a file using the Access Point URL. By this I mean the constructed URL that the docs indicate, with this format: https://access-point-name-account-id.s3-accesspoint.region.amazonaws.com

From my computer, I have been able to upload a file using the Access Point ARN, but I really need to be able to use this URL, mostly because I have a backend URL inside a DataPower environment that would point to this Access Point URL, as the architecture requires it.

This is the code snippet I am currently testing:

import boto3
from botocore.config import Config

# Create a session
session = boto3.Session(
    aws_access_key_id='your-access-key-id',
    aws_secret_access_key='your-secret-access-key',
    region_name='us-east-1'
)

# Configure the S3 client to use the access point URL
s3_client = session.client(
    's3',
    endpoint_url='https://access-point-name-account-id.s3-accesspoint.region.amazonaws.com',
    config=Config(s3={'addressing_style': 'virtual'})
)

bucket_name = 'access-point-name'
key = 'your-object-key'
file_path = 'path/to/your/file.txt'

# Upload the file
s3_client.upload_file(file_path, bucket_name, key)
print("File uploaded successfully.")

I have tried sending "access-point-name" and also "access-point-name-account-id" in the bucket_name parameter, but I am getting this error: An error occurred (InvalidAccessPoint) when calling the PutObject operation: The specified accesspoint or account is not valid.

Mind you, I am using virtual-hosted style because it is the recommended approach for Access Points and because path style is deprecated. However, if I use path-style addressing I manage to upload the file by tweaking the values of object_key and bucket_name, but using something deprecated isn't an option either.

Is it possible to achieve what I am trying to do here? I also need to point out that the credentials need to be created by my application.

Thank you in advance.

asked a year ago · 1K views
2 Answers

Hi,

From your code, you seem to be using the regular boto3 upload_file verb; in that case, use the real bucket name with this verb, not the access point URL or ARN. This verb doesn't work with an access point, only with a plain bucket name.

Access points should be used when you want direct HTTP access to the bucket rather than going through the AWS SDK.

Best,

Didier

EXPERT
answered a year ago
EXPERT
reviewed a year ago
  • Hi Sir, I hope you are doing well. Thank you for your answer. Based on the first answer I started to investigate, and it seems that an Access Point HTTPS URL isn't what I need, since I can only use S3 operations with the ARN or alias of this resource.

    In this particular case, would it be better to use an S3 endpoint directly? For example: https://bucket-name.s3.amazonaws.com


Thank you for your detailed question about uploading files to S3 using an Access Point URL. I understand you're trying to use the Access Point URL format for your application, which needs to work from an on-premises environment.

While it's possible to use Access Point ARNs for uploading objects to S3, using the Access Point URL in the format you've described (https://access-point-name-account-id.s3-accesspoint.region.amazonaws.com) is not directly supported for object operations like PutObject.

The Access Point URL you're trying to use is primarily intended for DNS resolution and routing purposes, not as an endpoint for API operations. This explains the error you're encountering when trying to use it as an endpoint_url in your boto3 client configuration.

For your use case, where you need to use a URL-style endpoint, I would recommend considering the following alternatives:

  1. Use S3 Presigned URLs: You can generate a presigned URL for the PutObject operation using your Access Point. This URL can then be used to upload objects directly, even from environments that don't have AWS credentials. Here's how you could modify your code to generate and use a presigned URL:
import boto3
import requests
from botocore.client import Config

s3_client = boto3.client('s3', config=Config(signature_version='s3v4'))

# For an access point, the ARN is passed where a bucket name is expected
bucket_name = 'your-access-point-arn'
key = 'your-object-key'
file_path = 'path/to/your/file.txt'

# Generate a presigned URL for uploading
presigned_url = s3_client.generate_presigned_url(
    'put_object',
    Params={'Bucket': bucket_name, 'Key': key},
    ExpiresIn=3600  # URL expires in 1 hour
)

# Use the presigned URL to upload the file
with open(file_path, 'rb') as file:
    response = requests.put(presigned_url, data=file)

print(f"File upload status: {response.status_code}")

This approach allows you to generate a URL that can be used to upload files directly to your S3 bucket through the specified Access Point, without needing AWS credentials at the time of upload.

  2. Use S3 Transfer Acceleration: If your use case allows, you could consider S3 Transfer Acceleration, which provides a different endpoint format that might better suit your needs. However, this requires enabling Transfer Acceleration on your bucket.

  3. API Gateway + Lambda: If you need more control over the URL format and the upload process, you could create an API Gateway endpoint that triggers a Lambda function. The Lambda function would then handle the S3 upload using the Access Point ARN. This gives you full control over the URL format your application uses.

Remember, when using Access Points, you should use the Access Point ARN (arn:aws:s3:region:account-id:accesspoint/access-point-name) as the bucket name in your S3 operations, not just the access point name.

I hope these alternatives help you find a solution that fits your architectural requirements. If you need further clarification or have additional questions, please don't hesitate to ask.
Sources
Community | Secure File Uploads to S3 Using AWS Lambda
Generating a presigned URL to upload an object to an S3 on Outposts bucket - Amazon S3 on Outposts
Using a bucket-style alias for your S3 on Outposts bucket access point - Amazon S3 on Outposts

answered a year ago
EXPERT
reviewed a year ago
