Pod not scheduled due to FARGATE_CONCURRENT_POD_LIMIT_EXCEEDED


I have just deployed an EKS cluster as described here: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html

In the cluster I have created a Fargate profile with the namespace name "eks-sample-app" and no other tags.
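For what it's worth, the profile's namespace selector can be confirmed from the CLI; the cluster and profile names below are placeholders, not values from my setup:

```shell
# Show the selectors on the Fargate profile to confirm it matches the
# "eks-sample-app" namespace. Replace the names with your own.
aws eks describe-fargate-profile \
  --cluster-name my-cluster \
  --fargate-profile-name my-fargate-profile \
  --query 'fargateProfile.selectors'
```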

I then attempted to deploy the sample application following this: https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html . The yaml file is included below.

I created the cluster with an IAM user in the "System Administrator" group, and since I did not modify ~/.kube/config at all, I am using that same IAM user with the "kubectl apply" command.

This is in us-east-2.

The "kubectl apply -f" command works, but when I check in the AWS console, I can see that the pods are not getting scheduled due to FARGATE_CONCURRENT_POD_LIMIT_EXCEEDED.
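In case it helps with troubleshooting, this is roughly how I checked the scheduling failure from the command line (the label selector matches the manifest below):

```shell
# Recent events in the namespace, newest last, to surface scheduling errors.
kubectl get events -n eks-sample-app --sort-by=.lastTimestamp

# Per-pod detail; the Events section at the bottom shows the failure reason.
kubectl describe pod -n eks-sample-app -l app=eks-sample-linux-app
```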

When I check the Fargate-related service quotas, I see that there are two (Spot and On-Demand), and both show an applied value of 50. The cluster's control plane components appear to be using 2 pods, and there is nothing else running in my account. As far as I can tell, the pod limit has not been exceeded. I am looking for advice on what the real problem is, or on how to troubleshoot this issue.
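This is how the applied Fargate quota values can be listed for the region, for anyone wanting to reproduce the check:

```shell
# List all Fargate service quotas (Spot and On-Demand) in us-east-2.
aws service-quotas list-service-quotas \
  --service-code fargate \
  --region us-east-2 \
  --query 'Quotas[].{Name:QuotaName,Value:Value}' \
  --output table
```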

Here is my deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-sample-linux-deployment
  namespace: eks-sample-app
  labels:
    app: eks-sample-linux-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eks-sample-linux-app
  template:
    metadata:
      labels:
        app: eks-sample-linux-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - name: nginx
        image: public.ecr.aws/nginx/nginx:1.21
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
      nodeSelector:
        kubernetes.io/os: linux
asked 2 years ago · 215 views
1 Answer

I resolved this issue by briefly starting a t2.micro instance in the region where I was trying to deploy the EKS cluster. It appears there is some account-specific initialization that needs to happen. I found a post suggesting this as a solution to a similar, but slightly different, problem: https://repost.aws/questions/QUHp9PA5wIQuG-8FLz-W4pmQ/cannot-deploy-to-fargate-with-4-tasks-limit-reached-for-concurrent-tasks
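A rough sketch of the workaround from the CLI, assuming you want to script it; the AMI ID and any networking defaults are placeholders you must replace with values valid in your own account:

```shell
# Briefly launch a t2.micro in us-east-2, then terminate it.
# ami-xxxxxxxx is a placeholder; use any Amazon Linux AMI for the region.
INSTANCE_ID=$(aws ec2 run-instances \
  --region us-east-2 \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --count 1 \
  --query 'Instances[0].InstanceId' \
  --output text)

aws ec2 terminate-instances \
  --region us-east-2 \
  --instance-ids "$INSTANCE_ID"
```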

answered 2 years ago
