How to massively increase EKS pod limit
I am trying to move an existing Kubernetes cluster to EKS but I ran into a bit of a snag.
By default a t2.small instance for example has a pod limit of 11 (eleven). That is ridiculously low. On a node with equivalent CPU and RAM I'm currently running 110 pods.
This is mainly because my pods are tiny, plentiful, and mostly idle. I am running review apps for feature branches during software development, so there is one installation per feature branch, and each installation has a couple of pods, one per service. Most of the time those pods do nothing, except for a couple of minutes a day when someone reviews that particular feature branch.
I need to increase the pod limit per instance massively for this to make sense. A possible alternative is of course to deploy a couple of EC2 instances and install k3s on them, but I would prefer to have all of this on EKS.
First of all, generally speaking you should not run 110 pods on a t2.small. You should seriously consider a different way of launching pods: if you don't need the pods, delete them, or find some other way to schedule the jobs.
As for why you hit the limit: by default, each pod gets its IP from the ENIs attached to the node, and there is a limit on how many ENIs and IPs each node can have.
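To make the limit concrete, the default max-pods value follows from the instance's ENI and per-ENI IP limits. A minimal sketch of the calculation, assuming the published limits for t2.small (3 ENIs, 4 IPs each) and m5.4xlarge (8 ENIs, 30 IPs each):

```shell
# Compute the default EKS max-pods value from an instance type's
# ENI and per-ENI IP limits.
max_pods() {
  local enis=$1 ips_per_eni=$2
  # Each ENI's primary IP is reserved for the ENI itself, so only
  # (ips_per_eni - 1) IPs per ENI are available for pods; +2 covers
  # the aws-node and kube-proxy pods that use the host network.
  echo $(( enis * (ips_per_eni - 1) + 2 ))
}

max_pods 3 4    # t2.small: 3 ENIs x 4 IPs   -> 11
max_pods 8 30   # m5.4xlarge: 8 ENIs x 30 IPs -> 234
```

You can look up the real ENI/IP limits for any type with `aws ec2 describe-instance-types --instance-types t2.small --query "InstanceTypes[].NetworkInfo"`.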
One way to work around the issue is to use custom networking with the VPC CNI add-on (see the EKS custom networking documentation). You should be able to get to at least ~110 pods. However, you should still seriously consider your scheduling strategy.
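For reference, the knobs involved look roughly like this (a sketch based on the VPC CNI documentation; the cluster name `my-cluster` is a placeholder, and you should verify the flags against the CNI and AMI versions you actually run):

```shell
# Enable prefix delegation on the VPC CNI (aws-node) daemonset, so each
# ENI IP slot hands out a /28 prefix (16 addresses) instead of one IP.
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true

# Optionally keep a spare prefix warm for faster pod startup.
kubectl set env daemonset aws-node -n kube-system WARM_PREFIX_TARGET=1

# On self-managed nodes, raise the kubelet's own limit too; managed node
# groups compute this automatically for nodes launched after the change.
/etc/eks/bootstrap.sh my-cluster \
  --use-max-pods false \
  --kubelet-extra-args '--max-pods=110'
```

Only nodes launched after enabling prefix delegation pick up the new limit, so you typically recycle the node group afterwards.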
Not sure I get why many pods are bad. A pod is just a process, with minimal overhead. If the executable uses maybe 5 MB of RAM and is mostly idle, why not squeeze them all onto a small instance? Anyway, I will look into "prefix delegation"; that seems to be the magic sauce that will make this work.
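Prefix delegation does change the math substantially. A sketch of the adjusted calculation, assuming each former secondary-IP slot now yields a /28 prefix (16 addresses) and that EKS recommends capping max-pods at 110 for instances with fewer than 30 vCPUs (AWS ships a max-pods-calculator.sh script that does this authoritatively):

```shell
# Estimate max-pods with prefix delegation enabled.
prefix_max_pods() {
  local enis=$1 ips_per_eni=$2 vcpus=$3
  # Each slot that used to hold one secondary IP now holds a /28 (16 IPs).
  local pods=$(( enis * (ips_per_eni - 1) * 16 + 2 ))
  # Recommended cap: 110 pods for instances with fewer than 30 vCPUs.
  if [ "$vcpus" -lt 30 ] && [ "$pods" -gt 110 ]; then pods=110; fi
  echo "$pods"
}

prefix_max_pods 3 4 2   # t3.small: raw capacity 146, capped at 110
```

So even a small burstable instance can reach the standard Kubernetes limit of 110 pods per node, which matches what the question is after.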
Assuming 110 pods need to run in EKS, you need to review how many IPs a node can have.
I had the same situation, and I use m5.4xlarge, which can host a maximum of 234 pods per node (to be cost effective, I can also use two m5.2xlarge instances, which gives me 116 pods in total across the two nodes).
Another option you can consider is multi-container pods, to reduce the number of nodes. Note: the instance type will depend on your application.
Well, sure, but an m5.4xlarge is 50 times the price of a t3.small, and still 2.5 times the price per pod. That makes sense when your pods actually need the performance, but for a bunch of idle pods it seems like a waste. I will look into the solution that Jason_S provided.