The issue lies in how boto3 handles different AWS regions, and may be unique to usage on AWS GovCloud. Originally I did not have a region configured for S3, but according to the docs an optional environment variable named AWS_S3_REGION_NAME can be set.
AWS_S3_REGION_NAME (optional: default is None) Name of the AWS S3 region to use (eg. eu-west-1)
I reached this conclusion thanks to a Stack Overflow answer I was using to try to manually connect to S3 via boto3. I noticed that it included a region_name argument when creating the session, which prompted me to make sure I had appropriately set the region in my app.settings and environment variables.
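The manual connection attempt looked something along these lines; this is just a sketch, with the region and bucket name as placeholders and credentials taken from the environment:

```python
import boto3

# Placeholders: us-gov-west-1 and BUCKET_NAME. Credentials are assumed to
# come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the environment.
session = boto3.session.Session(region_name="us-gov-west-1")
s3 = session.client("s3")

# Without region_name, the session can fall back to a commercial-partition
# default region, and requests against a GovCloud bucket fail.
for obj in s3.list_objects_v2(Bucket="BUCKET_NAME").get("Contents", []):
    print(obj["Key"])
```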
If anyone has some background on why this needs to be set for GovCloud functionality but apparently not for commercial regions, I would be interested to know.
I also had to specify AWS_S3_SIGNATURE_VERSION in app.settings so boto3 knew to use version 4 of the signature. According to the docs:
As of boto3 version 1.13.21 the default signature version used for generating presigned urls is still v2. To be able to access your s3 objects in all regions through presigned urls, explicitly set this to s3v4. Set this to use an alternate version such as s3. Note that only certain regions support the legacy s3 (also known as v2) version.
Some additional information in this Stack Overflow response details that new S3 regions deployed after January 2014 only support signature version 4 (see the AWS docs notice).
Apparently GovCloud is in this group of newly deployed regions.
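Concretely, the two settings look something like this; the setting names are from the django-storages docs quoted above, and the region value is a placeholder for whichever GovCloud region you use:

```python
# settings.py -- sketch; setting names per django-storages
AWS_S3_REGION_NAME = "us-gov-west-1"   # placeholder GovCloud region
AWS_S3_SIGNATURE_VERSION = "s3v4"      # force SigV4 for presigned URLs
```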
If you do not specify this, calls to the S3 bucket for static files, such as JS scripts, will receive a 400 response during operation of the web application. S3 responds with the error message:
```
<Error>
<Code>InvalidRequest</Code>
<Message>The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.</Message>
<RequestId>#########</RequestId>
<HostId>##########</HostId>
</Error>
```
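For anyone wanting to verify this outside Django, here is a rough sketch that forces SigV4 at the boto3 client level, which is the client-side equivalent of the AWS_S3_SIGNATURE_VERSION setting. The region, bucket, and key are placeholders:

```python
import boto3
from botocore.config import Config

# Placeholders: region, bucket, and key. Config(signature_version="s3v4")
# forces SigV4 regardless of the boto3 default.
s3 = boto3.client(
    "s3",
    region_name="us-gov-west-1",
    config=Config(signature_version="s3v4"),
)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "BUCKET_NAME", "Key": "static/js/app.js"},
    ExpiresIn=3600,
)
print(url)  # a SigV4 URL contains X-Amz-Algorithm=AWS4-HMAC-SHA256
```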
There are a couple of things in the configuration that don't look correct but should not cause the problem you are seeing. For instance, VPC Gateway Endpoints (S3) do not have security groups. Also, you have inbound rules in your security groups for S3 and RDS; these can be eliminated. The VPC Interface Endpoint for RDS is for service calls (443), not database queries (5432). At this point, I think I would resort to a process of elimination, starting with the ECS SG rules.
So you are proposing I change the ECS security group to...
Inbound: allow all traffic from the elb-sg
Outbound: no restrictions
I can also eliminate the RDS VPC Endpoint since I am not making service calls.
Is that correct?
I tried the above changes without any change in behavior.
I also tried using the AWS CLI to connect to S3 from the container (not just the instance running the containers). I am able to successfully execute 'aws s3 ls s3://BUCKET_NAME'. The issue seems to be isolated to the use of boto3 within the Django application. I have confirmed that my access key ID and secret access key are set properly in the Django application.
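To rule the SDK itself in or out, the CLI listing can be reproduced with a few lines of boto3 from a shell inside the container. This is only a sketch; the bucket name and region are placeholders:

```python
import boto3

# Mirrors `aws s3 ls s3://BUCKET_NAME` using the same credentials the
# Django app sees; region is a placeholder.
s3 = boto3.client("s3", region_name="us-east-1")
resp = s3.list_objects_v2(Bucket="BUCKET_NAME", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"])
```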
I am using a "bridge" network, because I have two containers running on the same EC2 instance. One is a "proxy" container running nginx and the other is the "app" container using uwsgi and django. The two containers communicate via a "link" over the bridge network. I am open to other approaches.
On the container, what does the S3 endpoint resolve to?
It resolves to https://BUCKET_NAME.s3.amazonaws.com
Executing
```
wget https://BUCKET_NAME.s3.amazonaws.com
```
in the container shell outputs
```
Connecting to BUCKET_NAME.s3.amazonaws.com (52.217.110.6:443)
```
then the prompt times out...
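A timeout after "Connecting to ..." points at the TCP connection itself rather than DNS or credentials. A small probe run from the container shell can confirm this; the host below is the same placeholder bucket endpoint as in the wget test:

```python
import socket
import ssl

# Placeholder endpoint from the wget test above. A hang here means port
# 443 egress (security group / endpoint routing) is blocked, not an
# application-level problem.
host = "BUCKET_NAME.s3.amazonaws.com"
sock = socket.create_connection((host, 443), timeout=10)
ctx = ssl.create_default_context()
with ctx.wrap_socket(sock, server_hostname=host) as tls:
    print("TLS established:", tls.version())
```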