Failing to upgrade self-managed kube-proxy add-on to version 1.30

My EKS cluster is on version 1.30 and I am trying to upgrade my self-managed kube-proxy add-on through the AWS console to the compatible version v1.30.0-minimal-eksbuild.3, but it fails with "ConfigurationConflict: Conflicts found when trying to apply. Will not continue due to resolve conflicts mode. Conflicts: ConfigMap kube-proxy-config - .data.config".
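
If it helps, I believe this is roughly the CLI equivalent of what I am attempting in the console (my-cluster is a placeholder for my cluster name). As far as I understand, the conflict resolution mode defaults to NONE, which seems to match the "Will not continue due to resolve conflicts mode" part of the error:

# Rough CLI equivalent of the console upgrade (my-cluster is a placeholder)
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name kube-proxy \
  --addon-version v1.30.0-minimal-eksbuild.3
# --resolve-conflicts is not set here, so EKS stops when it finds drift in kube-proxy-config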

I have already updated the image in my kube-proxy DaemonSet to the latest image, "kube-proxy:v1.30.0-minimal-eksbuild.3".
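
For reference, I made that change roughly like this (the <ecr-registry> prefix stands for the region- and account-specific EKS ECR registry, abbreviated here):

# Sketch of the image bump on the self-managed DaemonSet; <ecr-registry> is a placeholder
kubectl --namespace kube-system set image daemonset/kube-proxy \
  kube-proxy=<ecr-registry>/eks/kube-proxy:v1.30.0-minimal-eksbuild.3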

I can't work out what is incompatible and causing the upgrade to fail. Any help would be much appreciated!

My ConfigMap looks like this:

apiVersion: v1
data:
  config: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig
      qps: 5
    clusterCIDR: ""
    configSyncPeriod: 15m0s
    conntrack:
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249
    mode: "iptables"
    nodePortAddresses: null
    oomScoreAdj: -998
    portRange: ""
    udpIdleTimeout: 250ms
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config":"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nbindAddress: 0.0.0.0\nclientConnection:\n  acceptContentTypes: \"\"\n  burst: 10\n  contentType: application/vnd.kubernetes.protobuf\n  kubeconfig: /var/lib/kube-proxy/kubeconfig\n  qps: 5\nclusterCIDR: \"\"\nconfigSyncPeriod: 15m0s\nconntrack:\n  maxPerCore: 32768\n  min: 131072\n  tcpCloseWaitTimeout: 1h0m0s\n  tcpEstablishedTimeout: 24h0m0s\nenableProfiling: false\nhealthzBindAddress: 0.0.0.0:10256\nhostnameOverride: \"\"\niptables:\n  masqueradeAll: false\n  masqueradeBit: 14\n  minSyncPeriod: 0s\n  syncPeriod: 30s\nipvs:\n  excludeCIDRs: null\n  minSyncPeriod: 0s\n  scheduler: \"\"\n  syncPeriod: 30s\nkind: KubeProxyConfiguration\nmetricsBindAddress: 0.0.0.0:10249\nmode: \"iptables\"\nnodePortAddresses: null\noomScoreAdj: -998\nportRange: \"\"\nudpIdleTimeout: 250ms"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"eks.amazonaws.com/component":"kube-proxy","k8s-app":"kube-proxy"},"name":"kube-proxy-config","namespace":"kube-system"}}
  creationTimestamp: "2022-11-13T19:05:07Z"
  labels:
    eks.amazonaws.com/component: kube-proxy
    k8s-app: kube-proxy
  name: kube-proxy-config
  namespace: kube-system
  resourceVersion: "1612"
  uid: 55ebcccd-5d78-4aad-8091-2d212bc905e3

asked 10 months ago · 329 views
1 Answer

Good Day Jahnavi,

Thanks for sharing this question. I often see customers miss selecting the "Override" conflict resolution option when they move from a self-managed add-on to the managed add-on in the console, so make sure you have selected it. With "Override", Amazon EKS overwrites any conflicting configuration with the add-on's values, which resolves errors like the one you are seeing.
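
If you prefer the AWS CLI over the console, the equivalent is the --resolve-conflicts OVERWRITE flag; something along these lines, with your own cluster name substituted:

# Update the managed kube-proxy add-on and let EKS overwrite conflicting fields
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name kube-proxy \
  --addon-version v1.30.0-minimal-eksbuild.3 \
  --resolve-conflicts OVERWRITE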

Let me know if that doesn't work. Also, as a best practice, don't hand-edit the kube-proxy ConfigMap or the DaemonSet manifest (for example, changing the image tag and saving it); that is not a supported upgrade path, and that kind of drift is exactly what the managed add-on reports as a conflict.
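
Once the update goes through, you can confirm the add-on is on the expected version and healthy with something like this (again, substitute your cluster name):

# Check the managed add-on's version, status, and any remaining health issues
aws eks describe-addon \
  --cluster-name my-cluster \
  --addon-name kube-proxy \
  --query 'addon.[addonVersion,status,health.issues]'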

AWS Support Engineer
answered a month ago
