EKS Cluster with Fargate type Pod going to Pending state


I have an EKS cluster that uses Fargate, and I want to run the Jaeger all-in-one image in a pod using a declarative YAML configuration. Whenever I create the deployment, the pod stays in the Pending state. I ran the describe pod command and got the output below:

Name:             jaeger-6ffb9947dd-jv8kj
Namespace:        telemetry
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=jaeger
                  pod-template-hash=6ffb9947dd
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/jaeger-6ffb9947dd
Containers:
  jaeger:
    Image:        jaegertracing/all-in-one:latest
    Port:         16686/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5prvf (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-5prvf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 eks.amazonaws.com/compute-type=fargate:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  59s   default-scheduler  0/8 nodes are available: 8 Too many pods. preemption: 0/8 nodes are available: 8 No preemption victims found for incoming pod.

namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: telemetry

jaeger-deployment.yaml

# jaeger-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger
  namespace: telemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      tolerations:
      - key: "eks.amazonaws.com/compute-type"
        operator: "Equal"
        value: "fargate"
        effect: "NoSchedule"
      containers:
      - name: jaeger
        image: jaegertracing/all-in-one:latest
        ports:
        - containerPort: 16686

---

apiVersion: v1
kind: Service
metadata:
  name: jaeger
  namespace: telemetry
spec:
  selector:
    app: jaeger
  ports:
    - protocol: TCP
      port: 80
      targetPort: 16686

The command "kubectl describe quota -n telemetry" gives the output below:

No resources found in telemetry namespace.

How do I get my pods running? What are the steps to resolve this?

4 Answers

Accepted Answer

This was resolved by creating a Fargate profile in the EKS cluster for the namespace telemetry with the label app=jaeger.

So if you want to run a pod on Fargate, you need to create a Fargate profile and give it the same key-value label that you specified in your deployment.

The lines below are from the AWS documentation at https://docs.aws.amazon.com/eks/latest/userguide/fargate.html#fargate-considerations:

Pods must match a Fargate profile at the time that they're scheduled to run on Fargate. Pods that don't match a Fargate profile might be stuck as Pending. If a matching Fargate profile exists, you can delete pending Pods that you have created to reschedule them onto Fargate.
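As a concrete sketch of the fix described above, assuming eksctl is installed and you have permission to create Fargate profiles; the cluster name "my-cluster" and profile name "telemetry-jaeger" are placeholders:

```shell
# Create a Fargate profile whose selector matches the pod's namespace
# and labels; a pod is placed on Fargate only if it matches a profile.
eksctl create fargateprofile \
  --cluster my-cluster \
  --name telemetry-jaeger \
  --namespace telemetry \
  --labels app=jaeger

# Pods that went Pending before the profile existed are not rescheduled
# automatically; delete them so the ReplicaSet recreates them.
kubectl delete pod -n telemetry -l app=jaeger
```

Per the quoted documentation, deleting the Pending pod once a matching profile exists is what lets the replacement pod be scheduled onto Fargate.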

Atif
answered 5 months ago
Expert
Reviewed a month ago

Check the resource utilization on your nodes. It's possible that the nodes are running out of resources such as CPU or memory. Use tools like kubectl top nodes to view resource usage.

answered 5 months ago
  • kubectl top nodes gave the output below; it seems memory is not the problem:

    NAME                                    CPU(cores)  CPU%       MEMORY(bytes)  MEMORY%
    fargate-ip-10-98-246-134.ec2.internal   12m         0%         144Mi          7%
    fargate-ip-10-98-246-139.ec2.internal   13m         0%         158Mi          8%
    fargate-ip-10-98-246-142.ec2.internal   13m         0%         129Mi          7%
    fargate-ip-10-98-246-168.ec2.internal   16m         0%         131Mi          7%
    fargate-ip-10-98-246-178.ec2.internal   14m         0%         115Mi          6%
    fargate-ip-10-98-246-179.ec2.internal   14m         0%         136Mi          7%
    fargate-ip-10-98-246-181.ec2.internal   13m         0%         150Mi          8%
    fargate-ip-10-98-246-148.ec2.internal   <unknown>   <unknown>  <unknown>      <unknown>
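Since CPU and memory look idle, the scheduler event "8 Too many pods" more likely refers to each node's pod-count limit (on EC2 nodes this is derived from the instance type's ENI/IP capacity, so it can be hit while CPU and memory sit idle). A quick way to compare each node's pod capacity with what is already running on it, sketched with standard kubectl commands:

```shell
# Show each node's allocatable pod count next to its name.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods

# Count non-terminated pods per node and compare with the limit above.
kubectl describe nodes | grep -E '^Name:|Non-terminated Pods'
```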


How did you configure your Fargate profile? It seems your deployment is not targeting Fargate but your EC2 nodes, which are out of capacity.

You can look at this section of the EKS Workshop, which explains how to enable and target Fargate when scheduling EKS pods: https://www.eksworkshop.com/docs/fundamentals/fargate/enabling
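Even with read-only access, you can inspect which Fargate profiles exist and what selectors they match. A sketch using the AWS CLI; "my-cluster" is a placeholder for your cluster name, and "app-fargate-dev" is the profile name mentioned later in this thread:

```shell
# List all Fargate profiles on the cluster.
aws eks list-fargate-profiles --cluster-name my-cluster

# Show the namespace/label selectors of one profile; a pod must match
# one of these selectors to be scheduled onto Fargate.
aws eks describe-fargate-profile \
  --cluster-name my-cluster \
  --fargate-profile-name app-fargate-dev \
  --query 'fargateProfile.selectors'
```

If the selectors require labels your pod template does not carry, the pod falls through to the EC2 node group, which matches the scheduling failure seen in the question.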

AWS
answered 5 months ago
  • Actually, I have very limited EKS permissions granted by my organization, but I can see one Fargate profile and I targeted it in my Jaeger deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jaeger
      namespace: telemetry
      annotations:
        eks.amazonaws.com/fargate-profile: app-fargate-dev
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: jaeger
      template:
        metadata:
          labels:
            app: jaeger
            fargate: 'yes'
        spec:
          tolerations:
          - key: "eks.amazonaws.com/compute-type"
            operator: "Equal"
            value: "fargate"
            effect: "NoSchedule"
          containers:
          - name: jaeger
            image: jaegertracing/all-in-one:latest
            ports:
            - containerPort: 16686
            resources:
              requests:
                memory: "64Mi"
                cpu: "250m"
              limits:
                memory: "128Mi"
                cpu: "500m"
    
    
    
