Questions in Networking & Content Delivery


CDK Route 53 zone lookup brings back wrong zone ID

We are attempting to update our IaC code base to CDK v2. Prior to that, we're deploying entire stacks of our system in another test environment. One part of a stack creates a TLS certificate for use with our load balancer.

```
var hostedZone = HostedZone.FromLookup(this, $"{config.ProductName}-dns-zone", new HostedZoneProviderProps
{
    DomainName = config.RootDomainName
});

DnsValidatedCertificate certificate = new DnsValidatedCertificate(this, $"{config.ProductName}-webELBCertificate-{config.Environment}", new DnsValidatedCertificateProps
{
    HostedZone = hostedZone,
    DomainName = config.AppDomainName,
    // Used to implement ValidationMethod = ValidationMethod.DNS
    Validation = CertificateValidation.FromDns(hostedZone)
});
```

For some reason, the synthesized template sets the hosted zone ID on the AWS::CloudFormation::CustomResource to *something other than the actual zone ID* in that account. That causes the certificate request validation to fail (and with it the whole `cdk deploy`), since it cannot find the real zone to place the validation records in. Looking at the individual pending certificate requests on the Certificate Manager page, they can be approved by manually pressing the **Create records in Route 53** button, which does find the correct zone. Not sure where exactly CDK is finding this mysterious zone ID that does not belong to us?

```
"AppwebELBCertificatetestCertificateRequestorResource68D095F7": {
  "Type": "AWS::CloudFormation::CustomResource",
  "Properties": {
    "ServiceToken": {
      "Fn::GetAtt": [
        "AppwebELBCertificatetestCertificateRequestorFunctionCFE32764",
        "Arn"
      ]
    },
    "DomainName": "root.domain",
    "HostedZoneId": "NON-EXISTENT ZONE ID"
  },
  "UpdateReplacePolicy": "Delete",
  "DeletionPolicy": "Delete",
  "Metadata": {
    "aws:cdk:path": "App-webELBStack-test/App-webELBCertificate-test/CertificateRequestorResource/Default"
  }
}
```
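For context on how the lookup is wired, below is a minimal TypeScript sketch (the question's code is C#, but the construct is the same) with the stack pinned to an explicit account and region. It only illustrates the lookup mechanics, not a confirmed fix: `HostedZone.fromLookup` caches its result in `cdk.context.json`, so a stale cached entry from another environment is one possible source of an unexpected zone ID, and `cdk context --clear` resets that cache. The names in the sketch mirror the question's placeholders and are not real values.

```
// Sketch only: the same lookup in TypeScript, with the environment pinned.
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as route53 from 'aws-cdk-lib/aws-route53';
import { Construct } from 'constructs';

class WebElbStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // fromLookup resolves the zone at synth time and caches the result in
    // cdk.context.json; the cached value is what ends up in the template.
    const zone = route53.HostedZone.fromLookup(this, 'dns-zone', {
      domainName: 'root.domain', // placeholder for config.RootDomainName
    });
  }
}

const app = new App();
new WebElbStack(app, 'App-webELBStack-test', {
  // Without a concrete account/region, the lookup cannot target the right zone.
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});
```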
1
answers
0
votes
8
views
asked 2 days ago

Unknown reason for API Gateway WebSocket LimitExceededException

We have several API Gateway WebSocket APIs, all regional. As their usage has gone up, the most used one has started getting `LimitExceededException` when we send data from Lambda, through the socket, to the connected browsers. We are using the JavaScript SDK's [postToConnection](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ApiGatewayManagementApi.html#postToConnection-property) function. The usual behavior is that we will not get this error at all for a while, then we will get several hundred spread out over 2-4 minutes.

The only documentation we've been able to find that may be related to this limit is the [account-level quota](https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-account-level-limits-table) of 10,000 per second (and we're not sure if that's the actual limit we should be looking at). If that is the limit, the problem is that we are nowhere near it. For a single deployed API we're hitting a maximum of 3,000 messages sent through the socket **per minute**, with an overall account total of about 5,000 per minute. So nowhere near the 10,000 per second.

The only thing we think may be causing it is that we have a "large" number of messages going through the socket relative to the number of connected clients. For the API that's maxing out at about 3,000 messages per minute, we usually have 2-8 connected clients. Our only guess is that there may be a lower limit on the number of messages per second we can send to a specific socket connection, but we cannot find any docs on this.

Thanks for any help anyone can provide.
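For reference, here is roughly what the send path looks like with the SDK call mentioned above, wrapped in a backoff-and-retry around `LimitExceededException`. This is only a sketch under the assumption that the errors are throttling-style bursts to a few connections; the endpoint, connection ID, and retry numbers are placeholders, not values from the question.

```
// Sketch: retry postToConnection with exponential backoff on LimitExceededException.
import { ApiGatewayManagementApi } from 'aws-sdk';

const api = new ApiGatewayManagementApi({
  endpoint: 'https://{api-id}.execute-api.{region}.amazonaws.com/{stage}', // placeholder
});

async function sendWithBackoff(connectionId: string, data: string, maxAttempts = 5): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await api.postToConnection({ ConnectionId: connectionId, Data: data }).promise();
      return;
    } catch (err: any) {
      if (err.code === 'LimitExceededException' && attempt < maxAttempts) {
        // Back off before retrying; bursts to a single connection are the
        // suspected trigger described in the question.
        await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** attempt));
        continue;
      }
      throw err;
    }
  }
}
```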
1
answers
0
votes
29
views
asked 2 days ago

Django App in ECS Container Cannot Connect to S3 in Gov Cloud

I have a container running on an EC2 instance in ECS. The container hosts a Django-based application that uses S3 and RDS for its file storage and database needs respectively. I have configured my VPC, subnets, VPC endpoints, internet gateway, roles, security groups, and other parameters such that I am able to host the site, connect to the RDS instance, and access the site. The issue is the connection to S3. When I run `python manage.py collectstatic --no-input`, which should upload/update any new/modified files to S3 as part of the application setup, the program hangs and will not continue. No files are transferred to the already set up S3 bucket.

**Details of the set up:**

All of the below is hosted on AWS GovCloud.

**VPC and Subnets**

* 1 VPC located in GovCloud East with 2 availability zones (AZ) and one private and one public subnet in each AZ (4 total subnets)
* The 3 default routing tables (1 for each private subnet, and 1 for the two public subnets together)
* DNS hostnames and DNS resolution are both enabled

**VPC Endpoints**

All endpoints have the "vpce-sg" security group attached and are associated with the above VPC:

* s3 gateway endpoint (set up to use the two private subnet routing tables)
* ecr-api interface endpoint
* ecr-dkr interface endpoint
* ecs-agent interface endpoint
* ecs interface endpoint
* ecs-telemetry interface endpoint
* logs interface endpoint
* rds interface endpoint

**Security Groups**

* Elastic Load Balancer Security Group (elb-sg)
  * Used for the elastic load balancer
  * Only allows inbound traffic from my local IP
  * No outbound restrictions
* ECS Security Group (ecs-sg)
  * Used for the EC2 instance in ECS
  * Allows all traffic from the elb-sg
  * Allows http:80, https:443 from vpce-sg for S3
  * Allows postgresql:5432 from vpce-sg for RDS
  * No outbound restrictions
* VPC Endpoints Security Group (vpce-sg)
  * Used for all VPC endpoints
  * Allows http:80, https:443 from ecs-sg for S3
  * Allows postgresql:5432 from ecs-sg for RDS
  * No outbound restrictions

**Elastic Load Balancer**

* Set up to use an Amazon Certificate (HTTPS) with a domain managed by GoDaddy, since GovCloud Route 53 does not allow public hosted zones
* Listener on HTTP permanently redirects to HTTPS

**Roles**

* ecsInstanceRole (used for the EC2 instance in ECS)
  * Attached policies: AmazonS3FullAccess, AmazonEC2ContainerServiceforEC2Role, AmazonRDSFullAccess
  * Trust relationships: ec2.amazonaws.com
* ecsTaskExecutionRole (used for executionRole in the task definition)
  * Attached policies: AmazonECSTaskExecutionRolePolicy
  * Trust relationships: ec2.amazonaws.com, ecs-tasks.amazonaws.com
* ecsRunTaskRole (used for taskRole in the task definition)
  * Attached policies: AmazonS3FullAccess, CloudWatchLogsFullAccess, AmazonRDSFullAccess
  * Trust relationships: ec2.amazonaws.com, ecs-tasks.amazonaws.com

**S3 Bucket**

* Standard bucket set up in the same GovCloud region as everything else

**Troubleshooting**

If I bypass the connection to S3, the application launches successfully and I can connect to the website, but since static files are supposed to be hosted on S3 there is less formatting and images are missing.

Using a bastion instance I was able to SSH into the EC2 instance running the container and successfully test my connection to S3 from there using `aws s3 ls s3://BUCKET_NAME`.

If I connect to a shell within the application container itself and try to connect to the bucket using...

```
s3 = boto3.resource('s3')
bucket = s3.Bucket(BUCKET_NAME)
s3.meta.client.head_bucket(Bucket=bucket.name)
```

I receive a timeout error...

```
File "/.venv/lib/python3.9/site-packages/urllib3/connection.py", line 179, in _new_conn
    raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<botocore.awsrequest.AWSHTTPSConnection object at 0x7f3da4467190>, 'Connection to BUCKET_NAME.s3.amazonaws.com timed out. (connect timeout=60)')
...
File "/.venv/lib/python3.9/site-packages/botocore/httpsession.py", line 418, in send
    raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://BUCKET_NAME.s3.amazonaws.com/"
```

Based on [this article](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#vpc-endpoints-policies-s3) I think this may have something to do with the fact that I am using the GoDaddy DNS servers, which may be preventing proper URL resolution for S3.

> If you're using the Amazon DNS servers, you must enable both DNS hostnames and DNS resolution for your VPC. If you're using your own DNS server, ensure that requests to Amazon S3 resolve correctly to the IP addresses maintained by AWS.

I am unsure how to ensure that requests to Amazon S3 resolve correctly to the IP addresses maintained by AWS. Perhaps I need to set up another private DNS on Route 53? I have tried a very similar setup for this application in non-GovCloud AWS using Route 53 public DNS instead of GoDaddy and there is no issue connecting to S3. Please let me know if there is any other information I can provide to help.
3
answers
0
votes
36
views
asked 3 days ago

Encrypted VPN Connectivity from VMC on AWS SDDC to On-Premise DC

Dear Team,

I have the following setup requirements between VMware Cloud on AWS SDDC and an on-premises DC:

1. Need an encrypted VPN solution between the SDDC and the on-premises DC.
2. Need an encrypted VPN solution between the sidecar VPC and the on-premises DC.
3. We have a Direct Connect setup between the DC and AWS.
4. A protected firewall sits behind the edge device in the on-premises DC, and an encrypted VPN setup over DX needs two sets of public IPs. The firewall behind the edge device needs the VPN connectivity, but it cannot be configured with a public IP; the last hop where a public IP can be configured is the edge device on the customer site.

As per my understanding, I can use the public VIF on Direct Connect to set up the encrypted VPN connection between the customer edge device and the AWS router. But the problem statement in this case is:

1. How to set up the encrypted VPN solution for both the SDDC and the sidecar VPC? Can we route the traffic from the SDDC to the VTGW to the TGW (of the sidecar account) and then leverage the public VIF to set up an encrypted VPN from the TGW to the customer edge device?
2. Do we need the DX gateway to set up the encrypted VPN connectivity?
3. An encrypted VPN on DX would need two sets of public IPs. What if the customer firewall does not have the option to configure a public IP for the encrypted VPN?
4. Can I use the DX setup in one OU to create the public VIF for another account in a separate OU? This is required because I am looking to create encrypted VPN connections from two OUs to the DC.

Please advise with your comments or if there is any reference architecture available with VMC/AWS.

Many Thanks
Rio
1
answers
0
votes
4
views
asked 5 days ago

`RequestTimeout`s for S3 put requests from a Lambda in a VPC for larger payloads

# Update

I added a VPC gateway endpoint for S3 in the same region (US East 1). I selected the route table for it that the lambda uses. But still, the bug persists. Below I've included details regarding my network configuration. The lambda is located in the "api" subnet.

## Network Configuration

1 VPC

4 subnets:

* public
  * IPv4 CIDR: 10.0.0.0/24
  * route table: public
  * Network ACL: public
* private
  * IPv4 CIDR: 10.0.1.0/24
  * route table: private
  * Network ACL: private
* api
  * IPv4 CIDR: 10.0.4.0/24
  * route table: api
  * Network ACL: api
* private2-required
  * IPv4 CIDR: 10.0.2.0/24
  * route table: public
  * Network ACL: -

3 route tables:

* public
  * Destination: 10.0.0.0/16, Target: local
  * Destination: 0.0.0.0/0, Target: igw-xxxxxxx
  * Destination: ::/0, Target: igw-xxxxxxxx
* private
  * Destination: 10.0.0.0/16, Target: local
* api
  * Destination: 10.0.0.0/16, Target: local
  * Destination: 0.0.0.0/0, Target: nat-xxxxxxxx
  * Destination: pl-xxxxxxxx, Target: vpce-xxxxxxxx (VPC S3 endpoint)

4 network ACLs:

* public
  * inbound rules: All traffic (allow)
  * outbound rules: All traffic (allow)
* private
  * inbound rules:
    * 100: PostgreSQL TCP 5432 10.0.0.48/32 (allow)
    * 101: PostgreSQL TCP 5432 10.0.4.0/24 (allow)
  * outbound rules:
    * 100: Custom TCP TCP 32768-65535 10.0.0.48/32 (allow)
    * 101: Custom TCP TCP 1024-65535 10.0.4.0/24 (allow)
* api
  * inbound rules: All traffic (allow)
  * outbound rules: All traffic (allow)
* \-
  * inbound rules: All traffic (allow)
  * outbound rules: All traffic (allow)

# Update

I increased the timeout of the lambda to 5 minutes, and the timeout of the PUT request to the S3 bucket to 5 minutes as well. Before this the request itself would time out, but now I'm actually getting a response back from S3. It is a 400 Bad Request response. The error code is `RequestTimeout`, and the message in the payload of the response is "Your socket connection to the server was not read from or written to within the timeout period." This exact same code works 100% of the time for a small payload (on the order of 1 KB), but apparently for payloads on the order of 1 MB it starts breaking. There is no logic in _my code_ that does anything differently based on the size of the payload.
I've read about similar issues that suggest the problem is the wrong number of bytes being provided in the "content-length" header, but I've never provided a value for that header. Furthermore, the lambda works flawlessly when executed in my local environment. The problem definitely appears to be a networking one. At first glance it might seem like this is just an issue with the lambda being able to interact with services outside of the VPC, but that's not the case, because the lambda _does_ work exactly as expected for smaller file sizes (<1 KB). So it's not that it flat out can't communicate with S3. Scratching my head here...

# Original

I use S3 to host images for an application. In my local testing environment the images upload at an acceptable speed. However, when I run the exact same code from an AWS Lambda (in my VPC), the speeds are untenably slow. I've concluded this because I've tested with smaller images (<1 KB) and they work 100% of the time without any changes to the code. Then I use 1 MB payloads and they fail 98 percent of the time. I know the request to S3 is the issue because logs made from within the Lambda indicate the execution reaches the upload request but almost never successfully passes it (it times out).
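The question does not show the upload code itself, so the following is only an assumed shape of it: an AWS SDK for JavaScript v2 client with explicit HTTP timeouts, using `s3.upload()` (which chunks larger bodies into multipart uploads) rather than a single `putObject`. Bucket, key, and timeout values are placeholders.

```
// Hypothetical sketch; assumes the AWS SDK for JavaScript v2 inside the Lambda.
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3({
  httpOptions: {
    connectTimeout: 5000, // fail fast if the route to S3 is broken
    timeout: 120000,      // socket inactivity timeout for the request itself
  },
});

export async function putImage(bucket: string, key: string, body: Buffer): Promise<string> {
  // upload() manages multipart uploads for larger bodies, which can behave
  // better than a single PutObject over a NAT gateway or VPC endpoint.
  const result = await s3.upload({ Bucket: bucket, Key: key, Body: body }).promise();
  return result.Location;
}
```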
1
answers
0
votes
32
views
asked 10 days ago

Non guessable CloudFront URL

I'm wondering if there's a way to make the S3 path unguessable. Suppose I have an S3 path like this: https://s3-bucket.com/{singer_id}/album/song/song.mp3. This file will be served through CloudFront, so the path will be: https://cloundfront-dist-id.com/{singer_id}/album/song/song.mp3?signature=... (I'm using signed URLs).

My question is: is it possible to make the /{singer_id}/album/song/song.mp3 part not guessable by hashing it, for example with a Lambda or Lambda@Edge function, so the client would see a URL like https://cloundfront-dist-id.com/some_hash?signature=... ?

Thanks in advance. https://stackoverflow.com/questions/70885356/non-guessable-cloudfront-url

I am also facing this issue. The question may arise why a hash is needed, since signed URLs are secure. In my case, I need such a URL with the S3 path hidden. I am using the same AWS bucket for retrieving images for internal use without signed URLs and for sharing files with others using signed URLs.

Internal-use CDN without signed URL (after CNAME): https://data.example.com/{singer_id}/album/song/song.mp3

Signed URL: https://secured-data.example.com/{singer_id}/album/song/song.mp3?signature=...&Expires=...

Since both use the same AWS bucket, if someone takes the path from a signed URL, then https://data.example.com/{singer_id}/album/song/song.mp3 opens the file as well. In this scenario, I want to hide {singer_id}/album/song/song.mp3 behind some new value so the file is served under a new name.
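To illustrate the Lambda@Edge idea the question asks about, here is a rough TypeScript sketch of an origin-request handler that maps an opaque token in the viewer URL back to the real object key before CloudFront forwards the request to S3. The token table is a stand-in (a real setup would need a datastore or a reversible encoding), and signed-URL policies would still have to be issued against the opaque path.

```
// Sketch only: origin-request Lambda@Edge handler that rewrites an opaque
// token to the real S3 key. The mapping below is hypothetical.
import { CloudFrontRequestHandler } from 'aws-lambda';

const TOKEN_TO_KEY: Record<string, string> = {
  a1b2c3d4e5: '/{singer_id}/album/song/song.mp3', // placeholder mapping
};

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;
  const token = request.uri.replace(/^\//, '');
  const realKey = TOKEN_TO_KEY[token];

  if (realKey) {
    // CloudFront fetches the real object; the viewer only ever sees the token.
    request.uri = realKey;
  }
  return request;
};
```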
1
answers
0
votes
7
views
asked 11 days ago

My ECS tasks (VPC A) can't connect to my RDS (VPC B) even though the VPCs are peered and networking is configured correctly

Hi,

As mentioned in the question, my ECS tasks cannot connect to my RDS instance. The ECS tasks try to resolve the RDS instance by name, and it resolves to the RDS public IP (the RDS instance has both public and private IPs). However, the security group on the RDS instance doesn't allow open access from all IPs, so the connection fails. I temporarily allowed all connections and could see that the ECS tasks are routing through the open internet to access the RDS instance. Reachability Analyzer from a specific task's Elastic Network Interface to the RDS ENI is successful, using internal routing through the peering connection.

At the same time, I have another server in VPC C that can connect to the RDS instance. All the config is similar between these two apps, including the peering connection, security group policies and routing tables.

Any help is appreciated. Here are some details about the VPCs:

* VPC A - 15.2.0.0/16 [three subnets]
* VPC B - 111.30.0.0/16 [three subnets]
* VPC C - 15.0.0.0/16 [three subnets]
* Peering Connection 1 between A and B
* Peering Connection 2 between C and B

Route table for VPC A:

* 111.30.0.0/16: Peering Connection 1
* 15.2.0.0/16: Local
* 0.0.0.0/0: Internet Gateway

Route table for VPC C:

* 111.30.0.0/16: Peering Connection 2
* 15.2.0.0/16: Local
* 0.0.0.0/0: Internet Gateway

Security groups allow traffic to RDS:

* Ingress:
  * 15.0.0.0/16: Allow DB Port
  * 15.2.0.0/16: Allow DB Port
* Egress:
  * 0.0.0.0/0: Allow all ports

When I add the rule 0.0.0.0/0: Allow DB Port to the RDS security group, then ECS can connect to my RDS instance through its public IP.
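Not a confirmed diagnosis, but the symptom described (the hostname resolving to the public IP from VPC A while Reachability Analyzer shows the peered path works) matches what happens when DNS resolution is not enabled on the peering connection. Below is a hedged TypeScript sketch of toggling that option with the AWS SDK for JavaScript v3; the peering connection ID is a placeholder, and the accepter-side option can only be set from the account that owns that VPC.

```
// Sketch: enable private DNS resolution across the A <-> B peering connection so
// the RDS endpoint resolves to its private IP from the peer VPC.
import {
  EC2Client,
  ModifyVpcPeeringConnectionOptionsCommand,
} from '@aws-sdk/client-ec2';

async function enablePeeringDns(peeringConnectionId: string): Promise<void> {
  const ec2 = new EC2Client({});
  await ec2.send(
    new ModifyVpcPeeringConnectionOptionsCommand({
      VpcPeeringConnectionId: peeringConnectionId, // e.g. 'pcx-xxxxxxxx' (placeholder)
      RequesterPeeringConnectionOptions: { AllowDnsResolutionFromRemoteVpc: true },
      AccepterPeeringConnectionOptions: { AllowDnsResolutionFromRemoteVpc: true },
    })
  );
}
```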
1
answers
2
votes
5
views
asked 17 days ago

ApplicationLoadBalancedFargateService with load balancer, target groups, targets on non-standard port

I have an ECS service that exposes port 8080. I want the load balancer, target groups and targets to use that port as opposed to port 80. Here is a snippet of my code:

```
const servicePort = 8888;
const metricsPort = 8888;

const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef');
const repository = ecr.Repository.fromRepositoryName(this, 'cloud-config-server', 'cloud-config-server');
taskDefinition.addContainer('Config', {
  image: ecs.ContainerImage.fromEcrRepository(repository),
  portMappings: [{containerPort: servicePort, hostPort: servicePort}],
});

const albFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'AlbConfigService', {
  cluster,
  publicLoadBalancer: false,
  taskDefinition: taskDefinition,
  desiredCount: 1,
});

const applicationTargetGroup = new elbv2.ApplicationTargetGroup(this, 'AlbConfigServiceTargetGroup', {
  targetType: elbv2.TargetType.IP,
  protocol: elbv2.ApplicationProtocol.HTTP,
  port: servicePort,
  vpc,
  healthCheck: {path: "/CloudConfigServer/actuator/env/profile", port: String(servicePort)}
});

const addApplicationTargetGroupsProps: elbv2.AddApplicationTargetGroupsProps = {
  targetGroups: [applicationTargetGroup],
};

albFargateService.loadBalancer.addListener('alb-listener', {
  protocol: elbv2.ApplicationProtocol.HTTP,
  port: servicePort,
  defaultTargetGroups: [applicationTargetGroup],
});
```

This does not work. The health check takes place on port 80 with the default path of "/", which fails, and the tasks are constantly recycled. A target group on port 8080, with the appropriate health check, is added, but it has no targets. What is the recommended way to achieve load balancing on a port other than 80?

thanks
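One thing worth noting from the snippet: `ApplicationLoadBalancedFargateService` already creates a listener and a target group wired to the service, which is likely why the hand-built target group ends up with no registered targets. Below is a hedged TypeScript sketch of leaning on the pattern instead, via `listenerPort` plus `configureHealthCheck` on the pattern's own target group. The port and health-check path are taken from the question; `cluster` and `taskDefinition` are the same variables as in the snippet above, and the surrounding stack scope is assumed.

```
// Sketch: let the ecs-patterns construct create the listener/target group on
// the service port, then adjust its health check, instead of wiring a second
// target group by hand.
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

const servicePort = 8888;

const albFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'AlbConfigService', {
  cluster,
  publicLoadBalancer: false,
  taskDefinition,            // the container's port mapping exposes servicePort
  desiredCount: 1,
  listenerPort: servicePort, // listener and target group are created on this port
});

// Tune the health check on the target group the pattern created.
albFargateService.targetGroup.configureHealthCheck({
  path: '/CloudConfigServer/actuator/env/profile',
  port: String(servicePort),
});
```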
1
answers
0
votes
6
views
asked 17 days ago