Issue with EKS VPC CNI Addon: Pods Can Still Communicate with Internet Despite Deny All Egress Network Policy


Hello AWS Community,

I am experiencing an issue with the Amazon EKS VPC CNI addon in my EKS cluster. Despite the addon being active with enableNetworkPolicy set to true, and a "deny all" egress network policy applied to the namespace, pods within that namespace are still able to communicate with the internet.

Here are the details of my setup:

  • Cluster Version: 1.29
  • VPC CNI Addon Version: v1.16.4-eksbuild.2
  • Region: us-east-1
  • Namespace Configuration: The namespace has a network policy that denies all egress traffic.
  • Observation: Pods in the specified namespace can initiate outbound connections to the internet, which contradicts the applied network policy.

Steps to Reproduce:

1. Activate the EKS VPC CNI addon with enableNetworkPolicy set to true.
2. Apply a network policy that denies all egress traffic in a specific namespace.
3. Deploy a pod in that namespace and attempt to access the internet from within the pod.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress: []

Expected Behavior: The pods in the namespace should not be able to communicate with the internet due to the deny all egress network policy.

**Actual Behavior:** Pods are still able to communicate with the internet.

I have already verified that the network policy is applied correctly and that there are no conflicting policies. The IAM roles and security groups are also configured per best practices.
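For reference, these are the kinds of commands that can be used to confirm the policy is in place (the namespace and policy names here match the manifest above, and the container check assumes the default VPC CNI addon installation):

```shell
# Confirm the policy exists in the namespace and inspect its effective spec
kubectl get networkpolicy -n staging
kubectl describe networkpolicy deny-all-egress -n staging

# List the containers in the aws-node DaemonSet; with network policy enabled,
# it should include the network policy node agent container
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
```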

Could someone please help me understand what might be causing this issue and how to resolve it?

Thank you in advance for your assistance!

rhuang
asked a month ago · 207 views
2 Answers
Accepted Answer

Hello,

Greetings for the day!!

From your message, I understand that you are trying to deny all egress traffic in a Kubernetes namespace with the help of network policies and need assistance with the same. Please correct me if I misunderstood anything.

I did some testing on my side and here are the steps:

-First, I created an EKS cluster with version 1.27.
-Next, I created a managed node group with 2 nodes using the Amazon EKS optimized AMI (Linux).
-Next, I checked that the Amazon VPC CNI version was the same as yours, v1.16.4-eksbuild.2.
-Next, I used the below configuration schema from this documentation[1] to enable network policies:

{
    "enableNetworkPolicy": "true",
    "nodeAgent": {
        "enableCloudWatchLogs": "true",
        "healthProbeBindAddr": "8163",
        "metricsBindAddr": "8162"
    }
}

I had used AWS Management Console to do the above.
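For anyone who prefers the CLI, the same configuration can be applied with aws eks update-addon (the cluster name below is a placeholder):

```shell
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"enableNetworkPolicy":"true"}' \
  --resolve-conflicts PRESERVE
```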

-Next, I created 2 namespaces named 'open' and 'close'.
-Next, I created a pod in each of the above namespaces using the below commands:
$ kubectl run netshoot --image=nicolaka/netshoot -n open --command -- sleep 10000
$ kubectl run netshoot --image=nicolaka/netshoot -n close --command -- sleep 10000

-I now have 2 pods named 'netshoot' running, one in each of the namespaces 'open' and 'close' respectively.
-Next, I exec'ed into each netshoot pod and tested internet connectivity using the below commands:

First I exec'ed into the pod in 'open' namespace and I was able to connect to the internet as shown below:

$ kubectl exec -it netshoot -n open -- bash
netshoot:~# curl https://google.com:443
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>
netshoot:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=1.35 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=1.38 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=1.38 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.354/1.372/1.382/0.013 ms

-Next, I exec'ed into the pod in the 'close' namespace (no policy applied yet) and I was able to connect to the internet as shown below:

$ kubectl exec -it netshoot -n close -- bash
netshoot:~# curl https://google.com:443
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>
netshoot:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=1.75 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=58 time=1.75 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=58 time=1.73 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.733/1.744/1.750/0.008 ms

-Next, I created the following network policy on the namespace 'close':

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: close
spec:
  podSelector: {}
  policyTypes:
  - Egress

-Next, I applied the above policy and then exec'ed into the 'netshoot' pod in the 'close' namespace and this time I was not able to connect to the internet as shown below:

$ kubectl exec -it netshoot -n close -- bash
netshoot:~# curl https://google.com:443
curl: (6) Could not resolve host: google.com
netshoot:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7170ms

From the above replication, it is clear that network policies are working on EKS. Please compare the steps I followed with your own setup and look for any inconsistencies.

There could be the following reasons why network policies are not working for you (this is not a complete list):
-Network policies work only on Linux nodes.
-Verify that the target pod is in the correct namespace.
-Verify that the network policy is actually applied.
-Check if you are using a third-party solution to manage network policies in addition to the Amazon VPC CNI.
-Verify the behavior using the test pods that I have shared.
-Ensure that the pod is running on the primary network interface of the worker node instance.
Please refer to this documentation[2] to verify all the considerations.
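As an additional sanity check, you can verify that the node agent has actually reconciled your policy. The commands below assume the default Amazon VPC CNI network policy agent installation, which translates NetworkPolicy objects into PolicyEndpoint custom resources:

```shell
# The aws-node DaemonSet should be healthy on every Linux node
kubectl get daemonset aws-node -n kube-system

# A PolicyEndpoint resource should exist for the deny-all-egress policy
# in the namespace where the policy was created
kubectl get policyendpoints -n close
```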

If the network policies are still not working on your side, then the issue needs to be troubleshot by manually checking every configuration.

Have a fantastic day ahead!!

Reference:
[1] https://docs.aws.amazon.com/eks/latest/userguide/cni-network-policy.html
[2] https://docs.aws.amazon.com/eks/latest/userguide/cni-network-policy.html#cni-network-policy-considerations

AWS
answered a month ago

Hi, I followed your example, and the network policy you attached is working:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: close
spec:
  podSelector: {}
  policyTypes:
  - Egress

My original one was not working because of the additional rule in the egress section, - {}, which is an empty rule that matches (and therefore allows) all egress traffic. An empty egress list ([] or omitted) denies everything; a list containing an empty rule allows everything.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-google
  namespace: close
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - {}
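For comparison, if the intent is to allow only selected destinations while denying everything else, the egress list has to name them explicitly rather than contain an empty rule. A hypothetical example that permits only DNS lookups to cluster DNS in kube-system (the label and port here are assumptions based on common cluster defaults):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-only
  namespace: close
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # Only DNS traffic to kube-system is allowed; all other egress is denied
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```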
rhuang
answered a month ago
