EKS v1.30 VPC-CNI v1.18.1 Network Policies are not working


Hi there, I have tried to implement network policies. My EKS version is 1.30 and the VPC CNI (aws-node DaemonSet) is 1.18.1. I modified the args of the DaemonSet to enable network policies as follows: --enable-network-policy=true; I also have --health-probe-bind-addr=:8163 and --metrics-bind-addr=:8162 there. I have checked that the CRD policyendpoints.networking.k8s.aws exists. However, the following deny-ingress network policy for the default namespace does not work for me:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress

I have double-checked the NetworkPolicy as well as the other parameters of the aws-node DaemonSet, and I kindly ask for your advice. P.S. Here are all the args of the VPC CNI:

args:
  - --enable-ipv6=false
  - --enable-network-policy=true
  - --enable-cloudwatch-logs=true
  - --enable-policy-event-logs=false
  - --metrics-bind-addr=:8162
  - --health-probe-bind-addr=:8163
  - --conntrack-cache-cleanup-period=300
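
A quick way to double-check both the CRD and that the flag actually reached the node agent container is something like the following (the container name aws-eks-nodeagent is an assumption and may differ between VPC CNI versions):

kubectl get crd policyendpoints.networking.k8s.aws
kubectl -n kube-system get ds aws-node \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="aws-eks-nodeagent")].args}'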
Michael
asked a month ago · 313 views
3 Answers
Accepted Answer

I want to give you an update on this. I was able to solve the issue by upgrading the VPC CNI from v1.18.1-eksbuild.3 to v1.18.3-eksbuild.1 and adding the following parameters to the add-on configuration:

{
    "enableNetworkPolicy": "true",
    "nodeAgent": {
        "healthProbeBindAddr": "8163",
        "metricsBindAddr": "8162"
    }
}
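
For an EKS managed add-on, these configuration values can be passed with the AWS CLI, roughly like this (the cluster name is a placeholder; use an add-on version that is available in your account):

aws eks update-addon \
    --cluster-name my-cluster \
    --addon-name vpc-cni \
    --addon-version v1.18.3-eksbuild.1 \
    --configuration-values '{"enableNetworkPolicy":"true","nodeAgent":{"healthProbeBindAddr":"8163","metricsBindAddr":"8162"}}'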

Foreseeing possible accusations of oversight, I must say that before opening this request I manually modified the manifest of the aws-node DaemonSet, changing the value of --enable-network-policy from false to true, and of course I waited for the DaemonSet to roll out the new version. For those who are interested: on the node itself, the logs are written to /var/log/aws-routed-eni/network-policy-agent.log. You can run something like the following to get to them:

POD_HOSTIP_1=$(kubectl get po --selector app.kubernetes.io/component=service -n orders -o json | jq -r '.items[0].spec.nodeName')
kubectl debug node/$POD_HOSTIP_1 -it --image=ubuntu
# inside the debug container the node's root filesystem is mounted under /host
tail -f /host/var/log/aws-routed-eni/network-policy-agent.log
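
Alternatively, the same agent logs can usually be streamed without a debug pod; this assumes the node agent container inside the aws-node DaemonSet is named aws-eks-nodeagent:

kubectl -n kube-system logs -f daemonset/aws-node -c aws-eks-nodeagent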

Anyhow, I am going to close this question, though I am still unsure whether it was the VPC CNI version or just the parameters I added to the add-on configuration that fixed it.

Michael
answered a month ago
EXPERT
reviewed a month ago

My apologies, I believe I just copied the wrong policy. This is the policy to block all ingress traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress

This is how I test it. I have one pod exposing port 8888. Then I started the netshoot pod:

kubectl run netshoot --image=nicolaka/netshoot --command -- sleep 10000

connected to it:

kubectl exec -it netshoot -- bash

and from there I curl the service:

curl http://exposed-service:8888

These two pods reside in the same default namespace.
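
For completeness, a minimal sketch of this kind of test setup (the image, pod, and Service names are illustrative assumptions, not the exact ones used here):

kubectl run web --image=nginx --port=80
kubectl expose pod web --name=exposed-service --port=8888 --target-port=80
# with deny-all-ingress applied in the default namespace, the curl from netshoot should now time out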

Michael
answered a month ago
  • Did you install it as a self-managed add-on or as an EKS add-on? There are some differences in configuration.


Hi,

The policy you created is an egress policy, but you are expecting ingress traffic to be blocked. A default deny-all-ingress policy looks like this, from the k8s docs: https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
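
Once such a policy is applied, one way to confirm that the VPC CNI has picked it up is to look at the derived policyendpoints objects in the same namespace (this sketch assumes the policy was created in default):

kubectl describe networkpolicy default-deny-ingress -n default
kubectl get policyendpoints -n default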
answered a month ago
