
Questions tagged with Amazon VPC



aws-sdk V3 timeout in lambda

Hello, I'm using a NodeJS 14.x Lambda to control an ECS service. As I do not need the ECS task to run permanently, I created a service inside the cluster so I can adjust the desired count to start or stop it at will. I also created two Lambdas: one for querying the current desired count and the current public IP, and another for updating the desired count (to 0 or 1, depending on whether I want to start or stop it).

I have packed aws-sdk v3 in a Lambda layer so I don't have to package it with each Lambda. That seems to work fine: I was getting the runtime error "Runtime.ImportModuleError: Error: Cannot find module '@aws-sdk/client-ecs'", but I do not anymore. The code also works fine from my workstation: I can execute it locally and get the desired result (the query to the ECS API works fine). But all I get when testing from the Lambdas are timeouts. It usually executes in less than 3 seconds on my local workstation, but even with the Lambda timeout set to 3 minutes, this is what I get:

```
START RequestId: XXXX-XX-XXXX Version: $LATEST
2022-01-11T23:57:59.528Z XXXX-XX-XXXX INFO before ecs client send
END RequestId: XXXX-XX-XXXX
REPORT RequestId: XXXX-XX-XXXX Duration: 195100.70 ms Billed Duration: 195000 ms Memory Size: 128 MB Max Memory Used: 126 MB Init Duration: 1051.68 ms
2022-01-12T00:01:14.533Z XXXX-XX-XXXX Task timed out after 195.10 seconds
```

The message `before ecs client send` is a console.log I added just before the ecs.send request for debugging purposes. I think I've set up the policy correctly, as well as the Lambda VPC with the default outbound rule allowing all protocols on all ports to 0.0.0.0/0, so I have no idea where to look now. I have not found any way to debug aws-sdk v3 calls like you would in v2 by adding a logger to the config; maybe that could help in understanding the issue.
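A common cause of this exact symptom (an assumption; it is not confirmed in the question) is that a Lambda function attached to a VPC has no route to public AWS API endpoints: the security group's outbound rule is irrelevant if the subnet has no NAT gateway and the VPC has no interface endpoint for the service being called. A minimal sketch of the endpoint approach, assuming placeholder VPC, subnet, security-group IDs, and region, so the SDK's ECS calls stay inside the VPC:

```shell
# Create an interface endpoint for the ECS API in the Lambda's VPC.
# All IDs and the region below are placeholders, not values from the question.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.eu-west-1.ecs \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled
```

The alternative is routing the Lambda's subnets through a NAT gateway; either way, the `before ecs client send` log followed by a hang matches an SDK call that can open no TCP path to the ECS endpoint.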
1 answer · 0 votes · 5 views — asked 5 days ago by Tomazed

Selectively exposing a REST endpoint publicly in an AWS EKS cluster in a private VPC

**Cluster information:** **Kubernetes version: 1.19** **Cloud being used: AWS EKS**

So here is my configuration. I have a private VPC on AWS within which an AWS EKS cluster is hosted. This VPC has public-facing load balancers which are accessible only from specific IP addresses. On this EKS cluster a number of microservices run in their own pods, and each pod exposes a REST endpoint.

Here is my requirement: out of all the REST endpoints we have, I would like to make only one publicly available from the internet. The remaining endpoints should stay private, accessible only from certain IP addresses. What would be the best approach to achieve this? From what I have researched so far, here are my options:

1) Run another instance of the Ingress controller, which deploys a public-facing load balancer to handle requests to this public REST endpoint. This will work; however, I am concerned about the security aspects. An attacker might get into our VPC and create havoc.
2) Create a completely new, public-facing EKS cluster where I deploy this single REST endpoint. This is something I would like to avoid.
3) Use something like AWS API Gateway. I am not sure if this is possible, as I have to research it more.

Does anyone have ideas on how this could be achieved securely? Any advice would be very much appreciated.

Regards,
Kiran Hegde
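Option 3 is workable without exposing anything else: API Gateway can reach a single private service through a VPC link, and only the route you define becomes public. A hedged sketch with the AWS CLI, assuming placeholder subnet and security-group IDs (the link would then be used as the integration target of a single HTTP API route pointing at the internal load balancer):

```shell
# Create a VPC link into the cluster's private subnets (IDs are placeholders)
aws apigatewayv2 create-vpc-link \
  --name eks-private-link \
  --subnet-ids subnet-0aaaa1111 subnet-0bbbb2222 \
  --security-group-ids sg-0cccc3333

# Create the HTTP API that will carry the one public route
aws apigatewayv2 create-api \
  --name public-endpoint \
  --protocol-type HTTP
```

This keeps the cluster itself private; API Gateway handles the public surface, and throttling/WAF can be layered on top of it.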
5 answers · 0 votes · 5 views — asked 5 days ago by AWS-User-1971331

NLB target groups not acting consistently?

Hi, I'm using Terraform to create infrastructure for two environments: develop and production. Both environments consist of a self-hosted Kubernetes cluster on EC2 instances and a self-managed database on an EC2 instance.

The develop env has all of these in private subnets behind a NAT gateway and a network load balancer. There are three target groups: one for HTTP traffic, one for HTTPS traffic pointing to the cluster, and one for the protocol of the database. A few Route53 alias records point to our network load balancer, and the target groups are associated with the right auto scaling groups. The cluster and the database are reachable from the public internet (this is intentional for the time being). This setup works very well.

The problem is that when I tried to reproduce the same setup for the production environment, the database was sometimes unreachable, more often than not, and when it was reachable, the connection would just hang. The only things that differ are names, like the environment name; the configuration is pretty much the same. I can't figure out why it works in one case and not in the other.

I've disabled cross-zone load balancing on both load balancers, so when I execute the dig command on the develop database record I only get one IP address, as I would expect because of the disabled setting. But that isn't the case with the production NLB: I get 3, as many as the number of associated subnets. It's as though the cross-zone load balancing setting were on even though it says it isn't. Has anyone experienced inconsistent behavior like this?

In the end, I had to disassociate the production database from the production NLB target group, put it in a public subnet, and create an A record just for it.
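One point worth checking (an assumption about the cause, not something stated in the question): an NLB's DNS name resolves to one address per enabled availability zone regardless of the cross-zone attribute, so three associated subnets mean three IPs either way; cross-zone only controls whether each node forwards to targets outside its own zone. A sketch to verify both sides, with a placeholder ARN and DNS name:

```shell
# Confirm the actual cross-zone attribute (load balancer ARN is a placeholder)
aws elbv2 describe-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/net/prod-nlb/abc123 \
  --query "Attributes[?Key=='load_balancing.cross_zone.enabled']"

# The A-record count mirrors the number of enabled AZs, not the attribute
dig +short prod-nlb-abc123.elb.REGION.amazonaws.com
```

If the develop NLB happens to have only one associated subnet, that alone would explain the single-IP dig result there.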
0 answers · 0 votes · 3 views — asked 8 days ago by Rocky

Connections time out of a client request to a Network Load Balancer

I connected two AWS accounts with a peering connection. All subnets on each side are allowed to talk with each other. If I communicate between the two sides using the instances' IPs, it works fine.

I added an NLB on one side to avoid raw IPs and use a DNS name as the host. The ECS service registers its IP automatically in the NLB target group to achieve this. The client on one side makes a request through the NLB to the same target as before. The NLB is configured as internal and assigned to 3 AZs, and the target group contains the IP of the target I want to reach. Each AZ contains a subnet with its own small range of IPs (1.0.x.0/20), but all the CIDRs used for the rules use the broader IP range (1.0.0.0/16) to cover them all. There are no overlaps between any IP ranges in either account.

The NLB has 3 private IPs (one per AZ) registered in its DNS entry. Requests to the IP behind the NLB succeed, as do requests to the NLB IP associated with the AZ in which the target IP is located. Requests to the two other IPs of the NLB result in a timeout.

There's one ACL for the whole account which allows all traffic, the default security group allows the traffic of the CIDRs of both accounts, and the routing tables contain an entry routing traffic for the other side's CIDR to the peering connection, plus one route for the local CIDR to "local".

I also tried the Reachability Analyzer with the peering connection as sender and the NLB as receiver, specifying the IP of the target in the target group. This test succeeds because it uses the one NIC which is in the same AZ. Using the peering connection as sender against the other two NICs of the NLB, with the same target IP, fails with NO_PATH.

To me, it looks like the NLB doesn't route the request to the other NIC, but I couldn't find any limitations on this kind of setup in the documentation.
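The observed behavior (only the NLB address in the target's own AZ answers; the other two time out with NO_PATH) matches cross-zone load balancing being disabled, which is the default for NLBs: each load balancer node only forwards to targets registered in its own zone. A hedged sketch of enabling it, with a placeholder load balancer ARN:

```shell
# Allow every NLB node to forward to targets in any AZ
# (ARN below is a placeholder, not from the question)
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/net/my-nlb/abc123 \
  --attributes Key=load_balancing.cross_zone.enabled,Value=true
```

Alternatively, registering a target in each enabled AZ would make all three NLB addresses usable without changing the attribute.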
1 answer · 0 votes · 5 views — asked 11 days ago by AWS-User-5257795

InvalidParameterValue Error in docker compose deploy

I am trying to deploy two Docker containers via docker compose to ECS. This worked before. Now I'm getting the following error:

> **DatabasemongoService TaskFailedToStart: Unexpected EC2 error while attempting to tag the network interface: InvalidParameterValue**

I tried deleting all resources in my account and recreating a default VPC, which docker compose uses to deploy. I tried tagging the network interface via the management web UI, which worked without trouble. I found this documentation about EC2 error codes: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html

> **InvalidParameterValue**: A value specified in a parameter is not valid, is unsupported, or cannot be used. Ensure that you specify a resource by using its full ID. The returned message provides an explanation of the error value.

I don't get any output besides the error above to put my search on a new trail. There is also this entry about the error:

> InvalidNetworkInterface.InUse: The specified interface is currently in use and cannot be deleted or attached to another instance. Ensure that you have detached the network interface first. If a network interface is in use, you may also receive the **InvalidParameterValue** error.

As the compose CLI handles creation and deletion of network interfaces automatically, I assume this is not the problem. Below is my docker-compose.yaml file. I start it via `docker compose --env-file=./config/.env.development up` in the ecs context.
```
version: '3'
services:
  feathers:
    image: xxx
    build:
      context: ./app
      args:
        - BUILD_MODE=${MODE_ENV:-development}
    working_dir: /app
    container_name: 'feather-container'
    ports:
      - ${BE_PORT}:${BE_PORT}
    environment:
      - MODE=${MODE_ENV:-development}
    depends_on:
      - database-mongo
    networks:
      - backend
    env_file:
      - ./config/.env.${MODE_ENV}
  database-mongo:
    image: yyy
    build:
      context: ./database
    container_name: 'mongo-container'
    command: mongod --port ${MONGO_PORT} --bind_ip_all
    environment:
      - MONGO_INITDB_DATABASE=${MONGO_DATABASE}
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASSWORD}
    ports:
      - ${MONGO_PORT}:${MONGO_PORT}
    volumes:
      - mongo-data:/data
    networks:
      - backend
networks:
  backend:
    name: be-network
volumes:
  mongo-data:
```

Any help, idea, or pointer in the right direction is very much appreciated!
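Since the error occurs while tagging a network interface, one debugging step worth trying (an assumption, not something from the question) is to look for leftover ENIs in the VPC that a previous teardown may not have cleaned up; an interface stuck `in-use` from an earlier deployment can surface as `InvalidParameterValue`. A sketch with a placeholder VPC ID:

```shell
# List all network interfaces in the compose VPC with their status
# (VPC ID is a placeholder)
aws ec2 describe-network-interfaces \
  --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'NetworkInterfaces[].{Id:NetworkInterfaceId,Status:Status,Desc:Description}' \
  --output table
```

Any `in-use` interface left over from a previous stack can then be traced via its description before retrying the deploy.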
0 answers · 0 votes · 6 views — asked 12 days ago by jkonrath

What AWS services could be used for hosting a global low-latency udp-based service?

Hi, I'm asking this question for a personal project — something that does not need extensive cloud infrastructure, but I'm wondering if there's a way to configure an existing service relatively easily to get this.

I have set up a UDP-based service (a game server for a bunch of friends), and we connect to it from various parts of the world: the US (spread across), the EU, and Asia-Pacific. I've spun up instances running the service in various regions, and there's always one client region that suffers significantly from latency when connecting across public IP space to a remote region. I have previously done some testing showing that AWS-to-AWS connectivity between regions (even across public IP space) has lower latency than directly connecting to a remote region from a home connection, and I'm trying to think of a way to put this together easily.

I have looked at CloudFront, but it appears to be for web traffic only; something like CloudFront would be ideal if it were configurable for UDP. I also thought I could use NLBs, but if I create an NLB in a remote region, I can't configure its target group to forward to the NLB public IP in the service region.

What other options would be worth considering for an AWS networking solution here? Would an NLB work if VPC peering were set up? I was hoping to avoid that level of complexity (i.e. is there any other AWS service with which this can be achieved as a really simple solution?). Thanks in advance.
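One managed option not mentioned in the question is AWS Global Accelerator: it supports UDP listeners, gives clients two static anycast IPs, and carries traffic onto the AWS backbone at the nearest edge location, which is essentially the AWS-to-AWS path observed in the testing above. A hedged sketch with the AWS CLI (the accelerator name, ARN, and game port are assumptions):

```shell
# Create the accelerator (name is an assumption)
aws globalaccelerator create-accelerator \
  --name game-server \
  --ip-address-type IPV4

# Add a UDP listener on the game port (ARN and port 27015 are placeholders);
# an endpoint group per region then points at the instances or NLBs there
aws globalaccelerator create-listener \
  --accelerator-arn arn:aws:globalaccelerator::123456789012:accelerator/abcd-efgh \
  --protocol UDP \
  --port-ranges FromPort=27015,ToPort=27015
```

Clients in each region connect to the same static IPs and are routed to the nearest healthy endpoint group, avoiding the NLB-to-remote-region forwarding problem entirely.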
1 answer · 0 votes · 10 views — asked 21 days ago by bcave

Users in parts of northern Italy blocked from website access, but no other worldwide locales are blocked

I am supporting a company which has a production EC2 instance in Asia-Pacific (Singapore) running a fairly simple web server. As of sometime Friday (Central European Time), a company employee in the Milan area reported that the instance was unreachable: an attempt to connect times out. Company people in the US, Asia, and central Europe have no issues connecting to the web server. If the user in Milan switches to a TOR browser (therefore a different source IP somewhere in the world), he has no issues accessing our website.

I gave our user a URL with the public IP address of our web server instead of the name in order to validate that this was not a DNS issue, and the result is the same. There is no connection being made at all between his system and our instance via public IP. A traceroute shows that the connection goes through AWS routers with public IP addresses and eventually just never connects.

Our firewall ACLs for the instance in Singapore have no restrictions at all to destination port 443 from 0.0.0.0/0 (everywhere). There have been no changes made to our AWS configuration or the configuration of the web server on the instance for the last several months. It appears quite strongly that there is an internal network routing problem or blockage within AWS which is preventing our user in Milan from reaching our site in Singapore.

We do not have a paid support account which would allow us to create a tech support ticket. Does anyone have any idea how to reach AWS about what appears to be a network infrastructure issue on their side? Does anyone have any other ideas I should pursue to identify what is causing this connection problem specific to the Milan area?
1 answer · 0 votes · 6 views — asked a month ago by Rob Scott

How to connect to a private RDS from localhost

I have a private VPC with private subnets, a private jumpbox in one private subnet, and my private RDS Aurora MySQL serverless instance in another private subnet. I ran these commands on my local laptop to try to connect to my RDS via port forwarding:

```
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["5901"],"localPortNumber"=["9000"] --profile myProfile
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["22"],"localPortNumber"=["9999"] --profile myProfile
aws ssm start-session --target i-0d5470040e7541ab9 --document-name AWS-StartPortForwardingSession --parameters "portNumber"=["3306"],"localPortNumber"=["3306"] --profile myProfile
```

The connection to the server hangs. I got this error on my local laptop:

```
Starting session with SessionId: myuser-09e5cd0206cc89542
Port 3306 opened for sessionId myuser-09e5cd0206cc89542.
Waiting for connections...
Connection accepted for session [myuser-09e5cd0206cc89542]
Connection to destination port failed, check SSM Agent logs.
```

and these errors in `/var/log/amazon/ssm/errors.log`:

```
2021-11-29 00:50:35 ERROR [handleServerConnections @ port_mux.go.278] [ssm-session-worker] [myuser-017cfa9edxxxx] [DataBackend] [pluginName=Port] Unable to dial connection to server: dial tcp :3306: connect: connection refused
2021-11-29 14:13:07 ERROR [transferDataToMgs @ port_mux.go.230] [ssm-session-worker] [myuser-09e5cdxxxxxx] [DataBackend] [pluginName=Port] Unable to read from connection: read unix @->/var/lib/amazon/ssm/session/3366606757_mux.sock: use of closed network connection
```

And I try to connect to RDS like this:

[![enter image description here][1]][1]

I even tried to use the RDS endpoint through an SSH tunnel, but it doesn't work:

[![enter image description here][2]][2]

Are there any additional steps to do on the remote EC2 instance? It seems the connection is accepted but the connection to the destination port doesn't work. Or is there any better way to connect to a private RDS in a private VPC when we don't have a site-to-site VPN or Direct Connect?

[1]: https://i.stack.imgur.com/RwiZ8.png
[2]: https://i.stack.imgur.com/53GIh.png
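The `dial tcp :3306: connect: connection refused` in the agent log suggests (my reading, not stated in the question) that `AWS-StartPortForwardingSession` is dialing port 3306 on the jumpbox itself, where nothing listens — the database lives on the separate RDS endpoint. There is a dedicated SSM document for this case, `AWS-StartPortForwardingSessionToRemoteHost`, which takes the remote host as a parameter. A sketch reusing the instance ID from the question, with a placeholder RDS endpoint:

```shell
# Forward local 3306 through the jumpbox to the RDS endpoint
# (the host value is a placeholder for the real Aurora endpoint)
aws ssm start-session \
  --target i-0d5470040e7541ab9 \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters '{"host":["mydb.cluster-xxxx.us-west-2.rds.amazonaws.com"],"portNumber":["3306"],"localPortNumber":["3306"]}' \
  --profile myProfile
```

This also requires the jumpbox's security group to allow outbound 3306 and the RDS security group to allow inbound 3306 from the jumpbox.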
6 answers · 0 votes · 30 views — asked a month ago by AWS-User-1737129

Not able to connect to AWS VPN from a Windows 10 VirtualBox VM on an Ubuntu host

Hi all,

After ruling out the possibility of connecting to AWS VPN, configured with SAML authentication (OKTA), from my Ubuntu box, my next idea is to use a Windows VM (VirtualBox) as a router/bridge. I'm not sure if this can be done, but my first step, that is, connecting the Windows VM to the VPN, is still not working. I followed the instructions on the Amazon site:

- Installed the Amazon VPN client
- Disabled the Windows firewall
- Allowed incoming traffic on 1194 and 443 (udp/tcp) on the Ubuntu host

The process starts well: it shows me the Okta login in the browser, and then gets stuck at "Waiting for Identity". Looking at the log, I see this sequence repeated over and over:

```
2020-07-10 10:23:36.789 -07:00 [DBG] [TI=9] Process 6472 is owned by SYSTEM
2020-07-10 10:23:37.557 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:37 2020 us=556347 WE_CTL n=0 ev=000000000111D288 rwflags=0x0001 arg=0x0
2020-07-10 10:23:37.559 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:37 2020 us=559262 WE_WAIT enter n=1 to=1000
2020-07-10 10:23:37.559 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:37 2020 us=559262 [0] ev=0000000000000124 rwflags=0x0001 arg=0x0
2020-07-10 10:23:37.791 -07:00 [DBG] [TI=5] IsAlive method called
2020-07-10 10:23:37.834 -07:00 [DBG] [TI=5] Process 6472 is owned by SYSTEM
2020-07-10 10:23:38.568 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:38 2020 us=568870 WE_CTL n=0 ev=000000000111D288 rwflags=0x0001 arg=0x0
2020-07-10 10:23:38.569 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:38 2020 us=569984 WE_WAIT enter n=1 to=1000
2020-07-10 10:23:38.569 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:38 2020 us=569984 [0] ev=0000000000000124 rwflags=0x0001 arg=0x0
2020-07-10 10:23:38.836 -07:00 [DBG] [TI=9] IsAlive method called
2020-07-10 10:23:38.860 -07:00 [DBG] [TI=9] Process 6472 is owned by SYSTEM
2020-07-10 10:23:39.578 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:39 2020 us=578735 WE_CTL n=0 ev=000000000111D288 rwflags=0x0001 arg=0x0
2020-07-10 10:23:39.579 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:39 2020 us=579500 WE_WAIT enter n=1 to=1000
2020-07-10 10:23:39.581 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:39 2020 us=581527 [0] ev=0000000000000124 rwflags=0x0001 arg=0x0
2020-07-10 10:23:39.862 -07:00 [DBG] [TI=5] IsAlive method called
2020-07-10 10:23:39.886 -07:00 [DBG] [TI=5] Process 6472 is owned by SYSTEM
2020-07-10 10:23:40.591 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:40 2020 us=591988 WE_CTL n=0 ev=000000000111D288 rwflags=0x0001 arg=0x0
2020-07-10 10:23:40.591 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:40 2020 us=591988 WE_WAIT enter n=1 to=1000
2020-07-10 10:23:40.591 -07:00 [DBG] [TI=13] [6472] Fri Jul 10 10:23:40 2020 us=591988 [0] ev=0000000000000124 rwflags=0x0001 arg=0x0
2020-07-10 10:23:40.888 -07:00 [DBG] [TI=9] IsAlive method called
```

So I'm not able to understand what the problem is; any help will be greatly appreciated.

Thanks,
Tonio
4 answers · 0 votes · 0 views — asked 2 years ago by tjc

Connecting a Linux box to AWS-VPN using OKTA Authentication/Authorization

First of all, I'm a rookie on VPN/security issues, so please forgive whatever errors I make while describing my problem; I hope I can make it clear.

Our contractors replaced the AVIATRIX-OKTA VPN with AWS VPN using OKTA authentication. They sent us an .ovpn file that works fine on Windows/macOS using the AWS VPN Client application, but a couple of us on Linux boxes (Ubuntu, specifically) run the method described by AWS, which is:

```
openvpn config-file.ovpn
```

but it fails to authenticate. It simply asks for user/password and then fails with an auth error (we use our OKTA credentials); it seems nothing is configured to go to OKTA, open a browser, or whatever it needs to do. As an aside, we can connect without any trouble to our AWS k8s cluster using OKTA client libraries; not sure if this is useful, just in case.

The .ovpn file looks like this:

```
client
dev tun
proto tcp
remote random.cvpn-endpoint-xxxxxx.yyy.clientvpn.us-west-2.amazonaws.com 443
remote-random-hostname
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
verb 5
<ca>
....
....
....
</ca>
auth-user-pass
auth-federate
auth-retry interact
auth-nocache
reneg-sec 0
```

**An interesting thing to notice is that openvpn complains about `auth-federate`**; it seems not to recognize it. So I started using the GNOME network-manager, which seems to accept this configuration, but I get an auth error too. After that I tried **openvpn3**, which didn't complain about the configuration, but I still get the same error.

Any help on how to configure this, or just knowing whether it is possible, would be greatly welcome; there seems to be very little information about this on the net and we are really stuck. We would prefer not to change OS or machines as they are asking us to, or to use a VM just to connect.

Thanks in advance,
3 answers · 0 votes · 0 views — asked 2 years ago by tjc