Since Amazon EKS Fargate runs only one pod per node, the scenario of evicting pods when a node runs low on resources doesn't occur. All Amazon EKS Fargate pods run with the Guaranteed Quality of Service (QoS) class, so the requested CPU and memory must equal the limits for all of the containers. For more information, see Configure Quality of Service for Pods in the Kubernetes documentation.
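As a minimal sketch of what that looks like in practice (pod name, namespace, image, and resource values below are illustrative, not from your cluster), a pod whose requests equal its limits is assigned the Guaranteed QoS class:

```sh
# Illustrative pod spec: requests equal limits for every container,
# so Kubernetes assigns the Guaranteed QoS class.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
  namespace: my-fargate-namespace
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        memory: "512Mi"
      limits:
        cpu: "250m"
        memory: "512Mi"
EOF

# Verify the QoS class Kubernetes assigned to the pod
kubectl get pod qos-demo -n my-fargate-namespace \
  -o jsonpath='{.status.qosClass}'
```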
AWS Fargate lets you launch containers or pods without the need to manage the underlying servers or nodes. With EKS Fargate, each pod that runs on Fargate has its own isolation boundary: there is a 1:1 correlation between pods and nodes. You won't have multiple pods on a single node as you normally would in Kubernetes.
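You can see this 1:1 mapping in your own cluster; each Fargate pod's NODE column will show a dedicated node (typically following a fargate-ip-* naming pattern), with exactly one node per pod:

```sh
# Compare the NODE column of the pods against the node list:
# each Fargate pod should map to its own dedicated node.
kubectl get pods -o wide
kubectl get nodes
```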
You can review the quotas here - https://docs.aws.amazon.com/eks/latest/userguide/service-quotas.html#service-quotas-eks-fargate. Since many of these quotas are adjustable, you should be able to scale out and run multiple replicas.
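If it helps, the quotas can also be inspected and raised from the CLI via Service Quotas (the service code "fargate" and the quota code placeholder below are assumptions to illustrate the flow; confirm the actual codes in the linked docs):

```sh
# List the current Fargate quotas for your account/region
aws service-quotas list-service-quotas --service-code fargate

# Request an increase once you have the quota code for the limit
# you need raised (L-XXXXXXXX and 100 are placeholders)
aws service-quotas request-service-quota-increase \
  --service-code fargate \
  --quota-code L-XXXXXXXX \
  --desired-value 100
```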
The error you have reported relates to Fargate profiles versus the namespace in which you are trying to create the pod/deployment. Fargate profiles specify which pods run on Fargate, based on the namespaces and selectors defined in the profile, as outlined here. If you have only EKS Fargate configured (i.e., no node groups in the cluster) and you deploy to a namespace that does not have a corresponding Fargate profile, you can end up with the above error. This is also evident from the fact that the default-scheduler is trying to schedule the pods rather than the fargate-scheduler. You either need to launch the deployment/pod in a namespace that has a Fargate profile, or create a new profile for the namespace in question and then delete and retry the deployment; a sketch follows.
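Here is a minimal example of that fix (cluster, profile, namespace, and pod names are hypothetical placeholders). It creates a Fargate profile matching the target namespace, then checks which scheduler picked up the recreated pod:

```sh
# Create a Fargate profile whose selector matches the target namespace,
# so the fargate-scheduler (not the default-scheduler) schedules its pods.
eksctl create fargateprofile \
  --cluster my-cluster \
  --name my-profile \
  --namespace my-namespace

# After deleting and recreating the deployment, confirm the pod was
# picked up by the fargate-scheduler (my-pod is a placeholder name)
kubectl get pod my-pod -n my-namespace \
  -o jsonpath='{.spec.schedulerName}'
```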
--Syd