
Questions tagged with Domain Name System (DNS)


Django App in ECS Container Cannot Connect to S3 in Gov Cloud

I have a container running in an EC2 instance on ECS. The container hosts a Django-based application that uses S3 and RDS for its file storage and database needs, respectively. I have configured my VPC, subnets, VPC endpoints, internet gateway, roles, security groups, and other parameters such that I can host the site, connect to the RDS instance, and even access the site. The issue is the connection to S3. When I run `python manage.py collectstatic --no-input`, which should upload/update any new or modified files to S3 as part of the application setup, the program hangs and will not continue. No files are transferred to the already-created S3 bucket.

**Details of the setup:**

All of the below is hosted on AWS GovCloud.

**VPC and Subnets**

* 1 VPC located in GovCloud East with 2 Availability Zones (AZs) and one private and one public subnet in each AZ (4 subnets total)
* The 3 default routing tables (1 for each private subnet, and 1 for the two public subnets together)
* DNS hostnames and DNS resolution are both enabled

**VPC Endpoints**

All endpoints have the "vpce-sg" security group attached and are associated with the above VPC:

* s3 gateway endpoint (set up to use the two private subnet routing tables)
* ecr-api interface endpoint
* ecr-dkr interface endpoint
* ecs-agent interface endpoint
* ecs interface endpoint
* ecs-telemetry interface endpoint
* logs interface endpoint
* rds interface endpoint

**Security Groups**

* Elastic Load Balancer Security Group (elb-sg)
  * Used for the Elastic Load Balancer
  * Only allows inbound traffic from my local IP
  * No outbound restrictions
* ECS Security Group (ecs-sg)
  * Used for the EC2 instance in ECS
  * Allows all traffic from the elb-sg
  * Allows http:80, https:443 from vpce-sg for S3
  * Allows postgresql:5432 from vpce-sg for RDS
  * No outbound restrictions
* VPC Endpoints Security Group (vpce-sg)
  * Used for all VPC endpoints
  * Allows http:80, https:443 from ecs-sg for S3
  * Allows postgresql:5432 from ecs-sg for RDS
  * No outbound restrictions

**Elastic Load Balancer**

* Set up to use an Amazon Certificate HTTPS connection with a domain managed by GoDaddy, since GovCloud Route 53 does not allow public hosted zones
* Listener on HTTP permanently redirects to HTTPS

**Roles**

* ecsInstanceRole (used for the EC2 instance on ECS)
  * Attached policies: AmazonS3FullAccess, AmazonEC2ContainerServiceforEC2Role, AmazonRDSFullAccess
  * Trust relationships: ec2.amazonaws.com
* ecsTaskExecutionRole (used for executionRole in the task definition)
  * Attached policies: AmazonECSTaskExecutionRolePolicy
  * Trust relationships: ec2.amazonaws.com, ecs-tasks.amazonaws.com
* ecsRunTaskRole (used for taskRole in the task definition)
  * Attached policies: AmazonS3FullAccess, CloudWatchLogsFullAccess, AmazonRDSFullAccess
  * Trust relationships: ec2.amazonaws.com, ecs-tasks.amazonaws.com

**S3 Bucket**

* Standard bucket set up in the same GovCloud region as everything else

**Troubleshooting**

If I bypass the connection to S3, the application launches successfully and I can connect to the website, but since static files are supposed to be hosted on S3, there is less formatting and images are missing.

Using a bastion instance I was able to SSH into the EC2 instance running the container and successfully test my connection to S3 from there using `aws s3 ls s3://BUCKET_NAME`.

If I connect to a shell within the application container itself and try to connect to the bucket using...

```
s3 = boto3.resource('s3')
bucket = s3.Bucket(BUCKET_NAME)
s3.meta.client.head_bucket(Bucket=bucket.name)
```

...I receive a timeout error:

```
File "/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 179, in _new_conn
    raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<botocore.awsrequest.AWSHTTPSConnection object at 0x7f3da4467190>, 'Connection to BUCKET_NAME.s3.amazonaws.com timed out. (connect timeout=60)')
...
File "/.venv/lib/python3.9/site-packages/botocore/httpsession.py", line 418, in send
    raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://BUCKET_NAME.s3.amazonaws.com/"
```

Based on [this article](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#vpc-endpoints-policies-s3), I think this may have something to do with the fact that I am using the GoDaddy DNS servers, which may be preventing proper URL resolution for S3:

> If you're using the Amazon DNS servers, you must enable both DNS hostnames and DNS resolution for your VPC. If you're using your own DNS server, ensure that requests to Amazon S3 resolve correctly to the IP addresses maintained by AWS.

I am unsure how to ensure that requests to Amazon S3 resolve correctly to the IP addresses maintained by AWS. Perhaps I need to set up another private DNS on Route 53? I have tried a very similar setup for this application in AWS non-GovCloud, using Route 53 public DNS instead of GoDaddy, and there is no issue connecting to S3.

Please let me know if there is any other information I can provide to help.
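
One detail the traceback exposes: the client is trying `BUCKET_NAME.s3.amazonaws.com`, the global endpoint, rather than the GovCloud regional endpoint that the gateway endpoint's route table entries cover. A minimal sketch of pinning boto3 to the regional endpoint for comparison, assuming the bucket is in `us-gov-east-1` (the region and bucket name here are placeholders, not taken from the question):

```python
import boto3

# Assumption: GovCloud East (us-gov-east-1); adjust if the bucket lives elsewhere.
# Pinning region_name and endpoint_url forces requests onto the regional
# hostname instead of the global s3.amazonaws.com endpoint.
s3 = boto3.resource(
    "s3",
    region_name="us-gov-east-1",
    endpoint_url="https://s3.us-gov-east-1.amazonaws.com",
)
bucket = s3.Bucket("BUCKET_NAME")
s3.meta.client.head_bucket(Bucket=bucket.name)
```

If django-storages is handling the uploads, the equivalent knobs would be the `AWS_S3_REGION_NAME` and `AWS_S3_ENDPOINT_URL` settings (an assumption about the stack, since the question does not name the storage backend).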
1 answer · 0 votes · 20 views · asked 12 hours ago

Getting AccessDenied Error Trying to Get Wildcard SSL with Certbot and Route53 Plugin

I have been tasked with setting up wildcard SSL for some domains. These domains are hosted through AWS Route 53. I am using **Certbot** on an **Ubuntu 20.04** machine (we're using Lightsail), where the apps are hosted. I have also installed the Route53 DNS plugin for Certbot.

I run this command:

```
sudo certbot certonly --dns-route53 --email 'me@derp.com' --domain 'mywebsite.rocks' --domain '*.mywebsite.rocks' --agree-tos --non-interactive
```

*Real domains removed for security reasons.*

I get this error:

```
An error occurred (AccessDenied) when calling the ListHostedZones operation: User: arn:aws:sts::789148085273:assumed-role/AmazonLightsailInstanceRole/i-0871f2572906140c4 is not authorized to perform: route53:ListHostedZones because no identity-based policy allows the route53:ListHostedZones action
```

Let me explain first how I set up the IAM user in the AWS console.

1. I created a new Policy with this config:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:GetHostedZone",
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets"
            ],
            "Resource": "arn:aws:route53:::hostedzone/WHAT-EVER-MY-ID-IS-HERE"
        },
        {
            "Effect": "Allow",
            "Action": "route53:ListHostedZones",
            "Resource": "*"
        }
    ]
}
```

*Replacing `WHAT-EVER-MY-ID-IS-HERE` with my actual domain's Hosted Zone ID.*

2. I then created a new **IAM User** and, during setup, attached the above Policy to the user.

3. I then created an **Access Key** for my new User and took note of the `AccessKeyId` and `SecretAccessKey`. This has access to be used programmatically.

4. On the server, I created a config file at `/root/.aws/config` as instructed in the documentation. *I also tried `~/.aws/config`*, but as I am using `sudo`, the former seemed the preferred location (I could be wrong, though, and during my tests neither worked anyway).

As mentioned above, I run the command and get the error. I have searched the web high and low for a solution but cannot find one. I appreciate any help I can get from folk.
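
The error message itself narrows this down: the `ListHostedZones` call is arriving as the Lightsail instance role (`AmazonLightsailInstanceRole`), not as the new IAM user, so the keys in the config file are evidently not being picked up. A quick way to see which identity boto-based tools (Certbot's route53 plugin included) will actually use is an STS call; a minimal sketch, assuming boto3 is installed and run with the same `sudo` environment as Certbot:

```python
import boto3

# Prints the ARN of whichever credentials the default chain resolves.
# If this shows the Lightsail instance role rather than the IAM user,
# the shared config/credentials file is not being read.
identity = boto3.client("sts").get_caller_identity()
print(identity["Arn"])
```

One thing worth trying, based on how the default credential chain works: put `aws_access_key_id` and `aws_secret_access_key` in `/root/.aws/credentials` under a `[default]` profile (the `credentials` file rather than `config`), since under `sudo` the chain reads root's home directory.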
0 answers · 0 votes · 3 views · asked 2 months ago

SES DKIM setting is empty, "Required tag not found"

Hello, I have set up SES with my domain. In AWS everything seems fine (green ticks everywhere). In my DNS (Cloudflare) everything is set up as it should be (this isn't my first time doing this), BUT for some reason one of my DKIM keys (aquy2ltuncjmajf4q2s****._domainkey.domain.com) does not validate with any online tool I've checked. I keep getting back errors like these:

* "The syntax and semantics of this tag value before being encoded in base64 are defined by the (k) tag."
* "Required tag not found"
* "DKIM is present but is not valid."

And the 'content' is empty. It seems that the DKIM record doesn't include anything at all. I have removed everything from SES and also from my DNS and re-configured everything (each time I delete and re-create the domain, the same keys are generated). No difference.

The issue which I think is tied to this DKIM problem is that my deliveries (email) add "via amazonses.com" next to the sender email. Here's some original content from the received mail:

```
ARC-Authentication-Results: i=1; mx.google.com;
       dkim=temperror (no key for signature) header.i=@logicalcms.com header.s=xh7cyljyqmodtitmze7ewwckex2y3dmd header.b=SnWLbtp8;
       dkim=pass header.i=@amazonses.com header.s=uku4taia5b5tsbglxyj6zym32efj7xqv header.b=gsfDW8YN;
       spf=pass (google.com: domain of 0102017f117627fc-85c20317-d5e1-4bac-8a24-8da0d032424c-000000@eu-west-1.amazonses.com designates 54.240.7.10 as permitted sender) smtp.mailfrom=0102017f117627fc-85c20317-d5e1-4bac-8a24-8da0d032424c-000000@eu-west-1.amazonses.com
```

Anyone know what to do?
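
For SES Easy DKIM, the `_domainkey` records should be CNAMEs pointing at `<token>.dkim.amazonses.com`, and a `temperror (no key for signature)` often means the record is not resolving publicly at all. A minimal sketch of checking what the selector actually resolves to, assuming the `dnspython` package is available (the selector and domain below are placeholders, not the real values from the question):

```python
import dns.resolver  # pip install dnspython (an assumption, not part of the question)

# Placeholders: substitute the real SES token and the verified domain.
selector = "SELECTOR._domainkey.example.com"

# SES Easy DKIM publishes the key itself; your zone only needs a CNAME
# pointing at Amazon's record.
for record in dns.resolver.resolve(selector, "CNAME"):
    print(record.target)  # expected: something ending in .dkim.amazonses.com.
```

If the CNAME exists in Cloudflare but does not resolve externally, checking that the record is set to "DNS only" rather than proxied is a common step, though that is a guess from the symptoms rather than a confirmed diagnosis.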
0 answers · 0 votes · 3 views · asked 3 months ago

Domain not working after domain migration.

Hello everyone, I used AWS CLI v2 to transfer a domain from one AWS account to another account. The domain transfer was successful. After that, I deleted the hosted zone from my previous AWS account. In my second AWS account, I created a hosted zone and provided my domain name, and AWS provided me with the parameters (NS and SOA). When I compare the NS records of the hosted zone to the name servers of the registered domain, they are different. Also, the new hosted zone created with Route 53 on the new AWS account is unable to resolve DNS.

When I run

```
dig +trace example.com
```

it shows the following output:

```
;; Received 1177 bytes from 192.58.128.30#53(j.root-servers.net) in 303 ms
example.com.    172800  IN  NS  ns-xx.awsdns-xx.com.
example.com.    172800  IN  NS  ns-xx.awsdns-xx.net.
example.com.    172800  IN  NS  ns-xxxx.awsdns-xx.org.
example.com.    172800  IN  NS  ns-xxxx.awsdns-xx.co.uk.
```

However, when I run

```
dig ns example.com
```

it shows this on the command line:

```
; <<>> DiG 9.16.1-Ubuntu <<>> ns example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 15912
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494

;; QUESTION SECTION:
;example.            IN    NS

;; Query time: 1787 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: शुक्र फरवरी 11 09:08:22 +0545 2022
;; MSG SIZE  rcvd: 46
```

I am confused, as I think the hosted zone's NS records and the registered domain's name servers should be the same. But in this case, since they are different, DNS is unable to resolve the domain to the proper address. How can I resolve this issue?
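
The mismatch described here is the expected failure mode: deleting the old hosted zone does not change the delegation stored at the registrar, so the registered domain still points at the old zone's name servers. The registrar-side name servers need to be updated to match the new zone's NS set. A minimal sketch with boto3, assuming the registration now sits in the new account (`HOSTED_ZONE_ID` and `example.com` are placeholders):

```python
import boto3

route53 = boto3.client("route53")
# The Route 53 Domains API is only served out of us-east-1.
domains = boto3.client("route53domains", region_name="us-east-1")

# Fetch the name servers Route 53 assigned to the new hosted zone.
zone = route53.get_hosted_zone(Id="HOSTED_ZONE_ID")
nameservers = zone["DelegationSet"]["NameServers"]

# Re-point the domain's registrar delegation at those name servers.
domains.update_domain_nameservers(
    DomainName="example.com",
    Nameservers=[{"Name": ns} for ns in nameservers],
)
```

The same edit is available in the console under Route 53 > Registered domains. Either way, the old delegation was cached with a 172800-second TTL (visible in the `dig +trace` output), so resolvers can take up to two days to pick up the change.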
1 answer · 0 votes · 4 views · asked 3 months ago