Elastic Beanstalk can't connect to ElastiCache Redis
I'm having issues connecting from Elastic Beanstalk to ElastiCache Redis. When I SSH into the Elastic Beanstalk instance and try to use redis-cli to connect, it times out.

This is how I set up my environment. I have an existing VPC with two subnets, and I created a security group specifically for this that has an inbound rule for IPv4, Custom TCP, port 6379, source 0.0.0.0/0.

I created an ElastiCache Redis cluster with the following relevant parameters:

* Cluster mode: disabled
* Location: AWS Cloud, Multi-AZ enabled
* Cluster settings: number of replicas - 2
* Subnet group settings: existing subnet group with two associated subnets
* Availability Zone placements: no preference
* Security: encryption at rest enabled, default key
* Security: encryption in transit enabled, no access control
* Selected security groups: the one I described above

As for the Elastic Beanstalk environment, it has this configuration:

* Platform: managed, Node.js 16 on Amazon Linux 2 5.5.3
* Instance settings: Public IP address unchecked, both instance subnets checked
* Everything else left at defaults

After getting all of that set up, I SSH into the Elastic Beanstalk instance and follow the directions here to install redis-cli and try to connect: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/GettingStarted.ConnectToCacheNode.html

I've tried using the primary endpoint, the reader endpoint, and all of the individual node endpoints, but I get a timeout error for all of them. Is there some configuration that I'm missing?
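One detail worth checking given this setup: with encryption in transit enabled, a plain (non-TLS) redis-cli connection can hang in a way that looks like a network timeout. A small diagnostic sketch, using a placeholder endpoint and assuming redis-cli 6.x built with TLS support:

```shell
# With in-transit encryption enabled, redis-cli must be told to use TLS.
# redis-cli 6.x+ needs to be compiled with TLS support (make BUILD_TLS=yes).
# The endpoint below is a placeholder -- substitute your primary endpoint.
redis-cli -h my-cluster.xxxxxx.ng.0001.apne1.cache.amazonaws.com -p 6379 --tls PING

# Separately verify the raw TCP path; this distinguishes a security group /
# routing problem (timeout here too) from a TLS-only problem (this succeeds):
nc -zv -w 5 my-cluster.xxxxxx.ng.0001.apne1.cache.amazonaws.com 6379
```

If `nc` also times out, the issue is networking rather than TLS: confirm the security group you described is actually attached to the cluster, and that the Beanstalk instances and the cluster are in the same VPC.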
How to access a public RDS instance from Lambda without opening the RDS security group's inbound rules to 0.0.0.0/0?
I have a Lambda function that needs to call an RDS instance in a different account. The RDS instance is publicly accessible, but its security group rules are configured so that it is not open to access from anywhere. The Lambda, when not attached to a VPC, has no static IP address that could be added to the RDS security group's inbound rules. **Nor** does a Lambda in a VPC that makes the call through an internet gateway have a static IP address that could be configured in the RDS security group's rules. As for the VPC peering approach, the Lambda does not have a private IP address, and this article https://aws.amazon.com/premiumsupport/knowledge-center/rds-ip-address-issues/ says:

> When you try to connect to your DB instance from resources within the same VPC, your RDS endpoint automatically resolves to the private IP address. When you connect to your DB instance from either outside the VPC or the internet, the endpoint resolves to a public IP address.

How can I make a call from Lambda to a public RDS instance without changing the security group's inbound source to 0.0.0.0/0?
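One common pattern (a sketch, not the only answer): attach the Lambda to a VPC whose private subnet routes through a NAT gateway, which does have a stable Elastic IP, and then allow only that /32 in the RDS security group. All IDs and the port below are hypothetical placeholders:

```shell
# Look up the NAT gateway's Elastic IP (the stable egress address all
# VPC-attached Lambdas in that subnet will present to the internet):
aws ec2 describe-nat-gateways --nat-gateway-ids nat-0123456789abcdef0 \
  --query 'NatGateways[0].NatGatewayAddresses[0].PublicIp' --output text

# Allow only that single address in the RDS security group
# (203.0.113.10 stands in for the EIP returned above; 5432 for your DB port):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5432 \
  --cidr 203.0.113.10/32
```

This keeps the inbound rule scoped to one known address instead of 0.0.0.0/0, at the cost of running a NAT gateway.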
Anything on the roadmap for this limitation "You cannot reference the security group of a peer VPC that's in a different Region. Instead, use the CIDR block of the peer VPC."
Anything on the roadmap for this limitation "You cannot reference the security group of a peer VPC that's in a different Region. Instead, use the CIDR block of the peer VPC." https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
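Until the limitation is lifted, the documented workaround is a CIDR-based rule. A minimal sketch, with a hypothetical group ID and assuming the peer VPC's CIDR is 10.1.0.0/16:

```shell
# Cross-region peering: reference the peer VPC by CIDR block instead of
# by security group ID (which only works for same-region peering).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol tcp --port 443 \
  --cidr 10.1.0.0/16
```

The trade-off is that a CIDR rule admits the whole peer VPC range rather than only members of a specific group, so it needs to be revisited if the peer VPC's addressing changes.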
Elastic Beanstalk Environment Health Severe: Process default has been unhealthy for 51 minutes (Target.ResponseCodeMismatch or Target.TimedOut), with ELB target showing 502 Bad Gateway
One of my environments started going into the Environment Health Severe state with `Target.ResponseCodeMismatch`, while on the ELB target the instance was listed as `HTTP 502: Bad gateway`. I've verified that everything below looks correctly set up, but I'm still not able to get the environment health back to normal:

1. Load balancer settings for the listener and processes are all on HTTP, port 80; the process's health check path is `/health` (the same as my other environment), and the response code was left at the default `200`.
2. The load balancer is in an EB default-created security group, and the EC2 instance also has an EB default-created security group that allows traffic to and from the ELB over port 80.
3. The instance is in a subnet whose network ACL allows traffic on all ports.

I've also restarted and launched new instances a couple of times, but the error is not going away.
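Since the listener, security groups, and ACLs check out, the 502 usually means nginx on the instance can't reach the application process itself. A diagnostic sketch to run on the instance over SSH, assuming the Amazon Linux 2 platform where nginx listens on 80 and proxies to the app (port 8080 by default for Node.js):

```shell
# Compare the response through nginx with the response from the app directly;
# if the first returns 502 and the second fails, the app isn't listening.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/health       # via nginx
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/health  # app directly

sudo ss -ltnp | grep 8080            # is any process listening on the app port?
tail -n 50 /var/log/nginx/error.log  # "connect() failed ... upstream" = app down
tail -n 50 /var/log/web.stdout.log   # app stdout/crash output on AL2 platforms
```

If nothing is listening on the app port, the fix is in the application (it must bind to the `PORT` the platform sets), not in the load balancer configuration.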
EMR Serverless IPv6 connectivity issue in a private VPC subnet
Hi, I've just been experimenting with EMR Serverless this week, since the GA release. I found that I'm not able to download my jar files from S3 when I run an EMR Serverless job from a private VPC subnet. I've already tested connectivity from EC2 using the same subnet and the same security group, but the problem exists only in the EMR Serverless job. From the error logs I can see it's trying to connect to a Spark IPv6 address, and I'm not sure why it can't connect.

Region: ap-northeast-1
Subnet: ap-northeast-1a (I have tried other subnets too)

Here are my tail logs:

```
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/xbean-asm9-shaded-4.20.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/xbean-asm9-shaded-4.20.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/xz-1.8.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/xz-1.8.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/zookeeper-3.6.2.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/zookeeper-3.6.2.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/zookeeper-jute-3.6.2.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/zookeeper-jute-3.6.2.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR file:/tmp/spark-cdc6b2aa-657b-4464-b7d8-3cbe2fea3872/zstd-jni-1.5.0-4.jar at spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/zstd-jni-1.5.0-4.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO SparkContext: Added JAR s3://datalake-cbts-test/spark-jobs/app.jar at s3://datalake-cbts-test/spark-jobs/app.jar with timestamp 1654283322058
22/06/03 19:08:42 INFO Executor: Starting executor ID driver on host
ip-10-0-148-61.ap-northeast-1.compute.internal
22/06/03 19:08:42 INFO Executor: Fetching spark://[2406:da14:5a:5a01:41a7:4134:a18b:f5f8]:42539/jars/protobuf-java-2.5.0.jar with timestamp 1654283322058
22/06/03 19:08:43 ERROR Utils: Aborting task
java.io.IOException: Failed to connect to /2406:da14:5a:5a01:41a7:4134:a18b:f5f8:42539
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:288)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:218)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:230)
    at org.apache.spark.rpc.netty.NettyRpcEnv.downloadClient(NettyRpcEnv.scala:399)
    at org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$openChannel$4(NettyRpcEnv.scala:367)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1508)
    at org.apache.spark.rpc.netty.NettyRpcEnv.openChannel(NettyRpcEnv.scala:366)
    at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:763)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:550)
    at org.apache.spark.executor.Executor.$anonfun$updateDependencies$13(Executor.scala:962)
    at org.apache.spark.executor.Executor.$anonfun$updateDependencies$13$adapted(Executor.scala:954)
    at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
    at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
    at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
    at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
    at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:954)
    at org.apache.spark.executor.Executor.<init>(Executor.scala:247)
    at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:64)
    at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:132)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:582)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2694)
    at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
    at np.com.ngopal.spark.SparkJob.getSession(SparkJob.java:74)
    at np.com.ngopal.spark.SparkJob.main(SparkJob.java:111)
```

Thanks!
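One untested workaround sketch, since the failure is the driver advertising an IPv6 address to itself: force the JVM onto IPv4 via standard Spark/JVM properties when submitting the job. The application ID and role ARN are placeholders; the jar path is the one from the logs above.

```shell
# Pass -Djava.net.preferIPv4Stack=true to driver and executors so Spark binds
# and advertises an IPv4 address instead of the subnet's IPv6 one.
aws emr-serverless start-job-run \
  --application-id <application-id> \
  --execution-role-arn <execution-role-arn> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://datalake-cbts-test/spark-jobs/app.jar",
      "sparkSubmitParameters": "--conf spark.driver.extraJavaOptions=-Djava.net.preferIPv4Stack=true --conf spark.executor.extraJavaOptions=-Djava.net.preferIPv4Stack=true"
    }
  }'
```

This only addresses the driver's self-connection; S3 downloads from a private subnet also require a NAT gateway or an S3 VPC endpoint on that subnet's route table.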
IAM role needed to assign a security group to a running EC2 instance
What are the proper IAM permissions required to assign an existing security group to a running EC2 instance? My current permissions are:

```
AuthorizeSecurityGroupEgress
AuthorizeSecurityGroupIngress
RevokeSecurityGroupEgress
RevokeSecurityGroupIngress
UpdateSecurityGroupRuleDescriptionsEgress
UpdateSecurityGroupRuleDescriptionsIngress
```
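The six actions above all modify the rules *inside* a group; attaching a group to a running instance is an instance modification, so it needs a different action. A sketch with hypothetical IDs:

```shell
# Changing which security groups an instance uses requires
# ec2:ModifyInstanceAttribute (or ec2:ModifyNetworkInterfaceAttribute if you
# change the groups on a specific ENI rather than the instance's primary one).
# Note: --groups REPLACES the full set of groups, so list every group to keep.
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --groups sg-0123456789abcdef0 sg-0fedcba9876543210
```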
Can't access site on EC2 instance via public IPv4 address (using Amazon Linux)
Hi all, I'm not able to figure out what steps need to be done in order to reach my site in a web browser through the instance's public IPv4 address. The browser shows "This site can't be reached." Please help me; I'm new to AWS.

What I have done:

1. Set inbound rules for HTTP (80) and SSH (22).

Thanks in advance!
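Inbound rules are only one piece; a quick diagnostic sketch for the other common gaps (a web server that isn't installed, isn't running, or isn't listening). The public IP is a placeholder, and `httpd` assumes Apache on Amazon Linux:

```shell
# On the instance, over SSH:
sudo ss -ltn | grep ':80'        # is anything listening on port 80?
sudo systemctl status httpd      # Apache on Amazon Linux (use nginx if installed)
sudo systemctl enable --now httpd
curl -s -o /dev/null -w '%{http_code}\n' http://localhost/   # does it serve locally?

# From your own machine -- tests the security group / NACL / routing path:
nc -zv -w 5 <public-ipv4-address> 80
```

If `curl localhost` works but the remote `nc` times out, the problem is networking (security group, NACL, or no public IP); if `curl localhost` fails, the problem is the web server itself. Also make sure the browser URL is `http://`, not `https://`, since only port 80 is open.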
Delete EKS Node Group failed due to Security Group Dependency
I created a node group and specified the wrong security group, one that is used by other resources. As a result, when I try to delete the node group through eksctl or the AWS console, the deletion fails with:

> Ec2SecurityGroupDeletionFailure DependencyViolation - resource has a dependent object

Is it possible to delete this node group while keeping the security group? Thank you in advance for your answers.
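To see what is actually blocking the deletion, it can help to list the network interfaces still attached to the group; these are the "dependent objects" the error refers to. A sketch with a placeholder group ID:

```shell
# Find every ENI that still references the security group. Anything listed here
# (other node groups, load balancers, etc.) must detach before EC2 will let the
# group be deleted as part of the node group's CloudFormation stack teardown.
aws ec2 describe-network-interfaces \
  --filters Name=group-id,Values=sg-0123456789abcdef0 \
  --query 'NetworkInterfaces[].{Id:NetworkInterfaceId,Desc:Description,Status:Status}' \
  --output table
```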
Are security groups enforced when using ssm start-session with port forwarding
Can you tell me whether security groups are still enforced when we connect to an instance via the `ssm start-session` CLI command using the port forwarding option? Or are security groups bypassed when connecting to instances using the SSM CLI?
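For reference, this is the command pattern in question (instance ID and ports are placeholders; the Session Manager plugin for the AWS CLI must be installed). The session rides the SSM agent's *outbound* HTTPS connection to the Session Manager service, which is why no inbound rule for the forwarded port is required on the instance:

```shell
# Forward local port 9999 to port 3389 on the managed instance, tunneled
# through the agent's existing outbound connection -- no inbound SG rule needed.
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["3389"],"localPortNumber":["9999"]}'
```

Security groups still matter for the agent's outbound path (it must be able to reach the SSM endpoints over 443), and for any onward connection the instance makes to other hosts.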
Cannot connect to my EC2 instance
I have created an EC2 instance in us-west-2 (Oregon). It has passed both status checks. I have checked all the steps necessary for connecting to my EC2 instance as well as to the internet, but I couldn't connect. Details:

* The mandatory instance status checks have both passed.
* IAM role: AmazonEC2FullAccess. OS: Ubuntu 20.04.
* The "Get instance screenshot" shows a console with "Ubuntu 22.04 LTS ip-172.31.xx.xx (my private IP) tty1" and "ip-172.31.xx.xx login: _".
* One VPC, with a subnet created under it. An internet gateway is attached to the VPC.
* Only one security group (the default one), with my VPC's ID. Inbound rules: Type: All TCP, Protocol: TCP, Port range: 0-65535, Source: Custom 0.0.0.0/0; Type: SSH, Protocol: TCP, Port 22, same source; HTTP on port 80 and HTTPS on port 443, all other settings the same.
* One route table, attached to the VPC, with an explicit association with the subnet of the EC2 instance.

What else should I check? BTW, I previously created a personal account with a Gmail address; after some initial struggle I could connect to my instance and then to the internet, and install a web server (Apache, etc.). But with the same settings I could not connect to this instance, and I cannot start my project. It would be of great help if anyone could help. Thanks. Deb
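Two things worth verifying that the checklist above doesn't mention explicitly: whether the instance has a public IP at all, and whether the route table has a default route to the internet gateway (attaching the IGW to the VPC is not enough by itself). A diagnostic sketch with placeholder IDs and key path:

```shell
# Does the instance actually have a public IPv4 address?
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].PublicIpAddress'

# Does the subnet's route table include 0.0.0.0/0 -> the internet gateway?
aws ec2 describe-route-tables --route-table-ids rtb-0123456789abcdef0 \
  --query 'RouteTables[].Routes'

# Verbose SSH shows where it fails: a timeout suggests routing/SG/NACL,
# "connection refused" suggests the instance, a key error means networking is fine.
ssh -v -i ~/.ssh/my-key.pem ubuntu@<public-ipv4-address>
```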
RDS - SQL Server Express - Connectivity
Sorry, I've been working at this for the past 4 hours... I created an RDS SQL Server Express DB via the "Easy create" option, and modified the configuration afterwards to allow public access. I'm not able to connect via the "mssql for Visual Studio Code" extension: "mssql: Error: Unable to connect using the connection information provided."

I'm also not able to ping, telnet, et cetera to the endpoint on port 1433. That bit doesn't completely surprise me; I've seen posts stating Amazon blocks that type of traffic. The database is associated with the default VPC, which is wide open; no defaults were changed.

Under "Security Groups" for the database I see "Security group rules (2)", which leads me to believe there are two groups associated, but the list below is completely blank and I cannot edit it (add, delete, or modify). My thought is that those groups are hidden since it's managed. Any ideas? Thank you!
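A diagnostic sketch for separating the network question from the SQL Server one (endpoint and identifier are placeholders). Note that RDS instances don't answer ICMP, so a failed ping proves nothing; a TCP probe of 1433 is the meaningful test:

```shell
# Timeout here = security group or "Publicly accessible = No", not SQL Server:
nc -zv -w 5 mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com 1433

# List the security groups actually attached to the instance, then inspect
# their inbound rules -- 1433 must be open from your client's IP:
aws rds describe-db-instances --db-instance-identifier mydb \
  --query 'DBInstances[0].{Public:PubliclyAccessible,SGs:VpcSecurityGroups}'
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[].IpPermissions'
```

The "Security group rules (2)" panel is readable from the RDS console but editable only from the EC2 console's Security Groups page, which may explain why the list appears locked there.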