The use case you are describing, where all outbound connections from the cluster appear to come from a specific IP, can be achieved with a NAT gateway.
Currently, as you have mentioned, your nodes are in public subnets with an internet gateway in the route table. Launch the nodes into private subnets that have a NAT gateway in their route table. This ensures that any outbound connection from the pods appears to come from the IP of the NAT gateway.
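As a rough sketch of that setup, the NAT gateway is created in a public subnet with a fixed Elastic IP, and the private subnets' default route is pointed at it. The subnet and route table IDs below are placeholders; substitute your own VPC resources.

```shell
# Hypothetical IDs -- replace with your own VPC resources.
PUBLIC_SUBNET_ID=subnet-0pub1234
PRIVATE_ROUTE_TABLE_ID=rtb-0priv1234

# Allocate an Elastic IP; this becomes the fixed outbound IP.
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
  --query AllocationId --output text)

# The NAT gateway itself lives in a PUBLIC subnet
# (it needs the internet gateway to reach the internet).
NAT_ID=$(aws ec2 create-nat-gateway \
  --subnet-id "$PUBLIC_SUBNET_ID" \
  --allocation-id "$ALLOC_ID" \
  --query NatGateway.NatGatewayId --output text)

aws ec2 wait nat-gateway-available --nat-gateway-ids "$NAT_ID"

# Point the PRIVATE subnets' default route at the NAT gateway,
# so all outbound traffic from nodes/pods exits via its Elastic IP.
aws ec2 create-route \
  --route-table-id "$PRIVATE_ROUTE_TABLE_ID" \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id "$NAT_ID"
```

The linked EKS VPC guide's CloudFormation templates create an equivalent topology automatically.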
Refer to https://docs.aws.amazon.com/eks/latest/userguide/creating-a-vpc.html, which gives more information on this.
Irrespective of the SNAT setting, this ensures that outbound traffic to the internet appears to come from the NAT gateway IP.
To answer your specific queries:
- Yes, it can be reverted back to "false" by executing "kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=false"
- No, do not make any change to the public subnet; instead, use a private subnet to launch the nodes.
- Ideally, a private subnet should be used to launch nodes.
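Once the nodes are in the private subnets, one way to confirm the behavior is to check the source IP an external service sees from inside a pod. This is an illustrative check, not part of the fix; it assumes kubectl access to the cluster and uses the public `curlimages/curl` image and the `checkip.amazonaws.com` echo endpoint.

```shell
# Run a throwaway pod and print the source IP seen by an external service.
# The printed address should match the NAT gateway's Elastic IP.
kubectl run ip-check --rm -i --restart=Never \
  --image=curlimages/curl --command -- \
  curl -s https://checkip.amazonaws.com
```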