Hello,
If you are planning to use only Fargate with EKS, look under the heading named "Update CoreDNS" in that documentation. You will need to slightly adjust your Fargate profile and the patch command based on the linked documentation. For example, the patch command according to that documentation is:
kubectl patch deployment coredns \
  -n kube-system \
  --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
Please let me know if this does not work for you and I'll be happy to help.
Edited based on the comment:
Please follow the steps below to troubleshoot your error.
Step 1. Run kubectl describe deployment coredns -n kube-system
and check the Pod Template section. Does it show something like:
Pod Template:
Labels: eks.amazonaws.com/component=coredns
k8s-app=kube-dns
Annotations: eks.amazonaws.com/compute-type: ec2
Service Account: coredns
If the annotations field is present in the deployment, run:
kubectl patch deployment coredns \
  -n kube-system \
  --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
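A side note on the path in that patch command: the `~1` is not a typo. JSON Patch paths are JSON Pointers (RFC 6901), where a `/` inside a key is escaped as `~1` and a `~` as `~0`. A minimal sketch of the decoding, which shows why the annotation key eks.amazonaws.com/compute-type appears as eks.amazonaws.com~1compute-type:

```shell
# Decode a JSON Pointer segment (RFC 6901): "~1" encodes "/", "~0" encodes "~".
# Decoding must replace "~1" before "~0".
encoded='eks.amazonaws.com~1compute-type'
decoded=$(printf '%s' "$encoded" | sed -e 's|~1|/|g' -e 's|~0|~|g')
echo "$decoded"   # eks.amazonaws.com/compute-type
```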
If there is no annotations field, like below:
Pod Template:
Labels: eks.amazonaws.com/component=coredns
k8s-app=kube-dns
Service Account: coredns
then proceed to Step 2, because running the patch command again will result in an error: it tries to remove an annotations field that does not exist in the deployment.
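The decision in Step 1 can also be made programmatically instead of eyeballing the describe output. Here is a minimal sketch of that logic; the annotations value is a hard-coded stand-in so the flow runs without a cluster, but against a live cluster you would populate it with something like kubectl get deployment coredns -n kube-system -o jsonpath='{.spec.template.metadata.annotations}':

```shell
# Stand-in for the live pod-template annotations of the coredns deployment.
annotations='{"eks.amazonaws.com/compute-type":"ec2"}'
if printf '%s' "$annotations" | grep -q 'eks.amazonaws.com/compute-type'; then
  # The annotation exists, so the kubectl patch command applies.
  echo "annotation present: the patch command applies"
else
  # Nothing to remove; patching again would fail.
  echo "annotation absent: skip the patch and go to Step 2"
fi
```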
Step 2. Run kubectl describe pod <your-coredns-podname> -n kube-system
and look at the Events field. Is it showing something like below?
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 41s (x2 over 6m) default-scheduler no nodes available to schedule pods
Based on the events above, you will be able to find the reason why the coredns pods cannot be scheduled.
If the error is similar to the above, create a Fargate profile that targets the coredns pods, like the one below. Replace my-cluster with your cluster name, 111122223333 with your account ID, AmazonEKSFargatePodExecutionRole with the name of your Pod execution role, and 0000000000000001, 0000000000000002, and 0000000000000003 with the IDs of your private subnets. If you don't have a Pod execution role, you must create one first:
aws eks create-fargate-profile \
--fargate-profile-name coredns \
--cluster-name my-cluster \
--pod-execution-role-arn arn:aws:iam::111122223333:role/AmazonEKSFargatePodExecutionRole \
--selectors namespace=kube-system,labels={k8s-app=kube-dns} \
--subnets subnet-0000000000000001 subnet-0000000000000002 subnet-0000000000000003
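After creating the profile, it can take a little while to become ACTIVE, and already-pending coredns pods are not rescheduled automatically. A sketch of the follow-up (the status value is hard-coded here so the flow runs without AWS credentials; with real access it would come from aws eks describe-fargate-profile --cluster-name my-cluster --fargate-profile-name coredns --query fargateProfile.status --output text):

```shell
# Stand-in for the describe-fargate-profile status.
status="ACTIVE"
if [ "$status" = "ACTIVE" ]; then
  # Pending coredns pods can now land on Fargate; print the restart command
  # that re-creates them so the scheduler reconsiders them.
  echo "kubectl rollout restart -n kube-system deployment coredns"
else
  echo "profile still ${status}: wait and re-check"
fi
```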
If you are still not able to troubleshoot with the above, let me know in the comments and I will be happy to help.
Thanks, Manish
Hi Manish, I have posted the same command in the image above, that gave an error too.
Hi, I have edited my answer based on your comment. Please check whether the edited answer helps you move forward.