Does this happen if the user tries to delete the stack while the cluster is being created? The log below shows cluster creation failing because the CloudFormation stack moved to DELETE_IN_PROGRESS ("User Initiated") mid-creation.
.............+++
writing new private key to '/tmp/cert-key-0ab8096a3e18.pem'
-----
Importing self signed SSL certificate to ACM
Successfully imported SSL certificate to ACM with ARN: arn:aws:acm:eu-west-2:047390586134:certificate/eb1871dd-b570-4160-93ec-011b9a52c110
[Errno 2] No such file or directory: '/root/cb_domain'
[Errno 2] No such file or directory: '/root/cb_domain'
Generating infra manifests now..
TemplateName: single_tenant
!!INFO!! [i-version-switch-1.10.2] [T] [image version override prod -> 1.10.2]
Generating autoscaler manifest..
Generating EKS config..
Generating S3 Bucket Properties Config...
0.149.0-dev
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.10-eks-48e63af
Kustomize Version: v4.5.4
[Errno 2] No such file or directory: '/root/cb_domain'
[Errno 2] No such file or directory: '/root/cb_domain'
!!STEP!! [install-eks] [UT] [Creating EKS cluster]
/opt/cloudbridge/etc/eks/eksconfig.yaml exists.
2023-11-30 20:06:09 [ℹ] eksctl version 0.149.0-dev
2023-11-30 20:06:09 [ℹ] using region eu-west-2
2023-11-30 20:06:10 [✔] using existing VPC (vpc-011bddb34827e891c) and subnets (private:map[eu-west-2a:{subnet-0b004173e64c12636 eu-west-2a 172.31.16.0/20 0 } eu-west-2b:{subnet-05c8c3799af11f340 eu-west-2b 172.31.48.0/20 0 }] public:map[eu-west-2a:{subnet-079d14b480ef3cd01 eu-west-2a 172.31.0.0/20 0 } eu-west-2b:{subnet-0af1daf981ee7282d eu-west-2b 172.31.32.0/20 0 }])
2023-11-30 20:06:10 [!] custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
2023-11-30 20:06:10 [ℹ] nodegroup "private-ng-1" will use "" [AmazonLinux2/1.25]
2023-11-30 20:06:10 [ℹ] nodegroup "private-ng-2" will use "" [AmazonLinux2/1.25]
2023-11-30 20:06:10 [ℹ] using Kubernetes version 1.25
2023-11-30 20:06:10 [ℹ] creating EKS cluster "0ab8096a3e18" in "eu-west-2" region with managed nodes
2023-11-30 20:06:10 [ℹ] 2 nodegroups (private-ng-1, private-ng-2) were included (based on the include/exclude rules)
2023-11-30 20:06:10 [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2023-11-30 20:06:10 [ℹ] will create a CloudFormation stack for cluster itself and 2 managed nodegroup stack(s)
2023-11-30 20:06:10 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-2 --cluster=0ab8096a3e18'
2023-11-30 20:06:10 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "0ab8096a3e18" in "eu-west-2"
2023-11-30 20:06:10 [ℹ] CloudWatch logging will not be enabled for cluster "0ab8096a3e18" in "eu-west-2"
2023-11-30 20:06:10 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-west-2 --cluster=0ab8096a3e18'
2023-11-30 20:06:10 [ℹ]
2 sequential tasks: { create cluster control plane "0ab8096a3e18",
2 sequential sub-tasks: {
4 sequential sub-tasks: {
wait for control plane to become ready,
associate IAM OIDC provider,
5 parallel sub-tasks: {
2 sequential sub-tasks: {
create IAM role for serviceaccount "kube-system/aws-load-balancer-controller",
create serviceaccount "kube-system/aws-load-balancer-controller",
},
2 sequential sub-tasks: {
create IAM role for serviceaccount "kube-system/cluster-autoscaler",
create serviceaccount "kube-system/cluster-autoscaler",
},
2 sequential sub-tasks: {
create IAM role for serviceaccount "kube-system/autoscaler-service",
create serviceaccount "kube-system/autoscaler-service",
},
2 sequential sub-tasks: {
create IAM role for serviceaccount "kube-system/ebs-csi-controller-sa",
create serviceaccount "kube-system/ebs-csi-controller-sa",
},
2 sequential sub-tasks: {
create IAM role for serviceaccount "kube-system/aws-node",
create serviceaccount "kube-system/aws-node",
},
},
restart daemonset "kube-system/aws-node",
},
2 parallel sub-tasks: {
create managed nodegroup "private-ng-1",
create managed nodegroup "private-ng-2",
},
}
}
2023-11-30 20:06:10 [ℹ] building cluster stack "eksctl-0ab8096a3e18-cluster"
2023-11-30 20:06:10 [ℹ] deploying stack "eksctl-0ab8096a3e18-cluster"
2023-11-30 20:06:40 [ℹ] waiting for CloudFormation stack "eksctl-0ab8096a3e18-cluster"
2023-11-30 20:06:40 [✖] unexpected status "DELETE_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-0ab8096a3e18-cluster"
2023-11-30 20:06:40 [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
2023-11-30 20:06:40 [✖] AWS::IAM::Policy/PolicyELBPermissions: CREATE_FAILED – "Resource creation cancelled"
2023-11-30 20:06:40 [✖] AWS::EKS::Cluster/ControlPlane: CREATE_FAILED – "Resource creation cancelled"
2023-11-30 20:06:40 [✖] AWS::IAM::Policy/PolicyCloudWatchMetrics: CREATE_FAILED – "Resource creation cancelled"
2023-11-30 20:06:40 [!] AWS::CloudFormation::Stack/eksctl-0ab8096a3e18-cluster: DELETE_IN_PROGRESS – "User Initiated"
2023-11-30 20:06:40 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2023-11-30 20:06:40 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=eu-west-2 --name=0ab8096a3e18'
2023-11-30 20:06:40 [✖] ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "0ab8096a3e18"
[Errno 2] No such file or directory: '/root/cb_domain'
[Errno 2] No such file or directory: '/root/cb_domain'
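The DELETE_IN_PROGRESS / "User Initiated" events in the log suggest that something (a user, or automation around the installer) issued a stack delete while eksctl was still creating the cluster. A minimal way to confirm this and clean up, assuming the AWS CLI is configured for the same account and region (the stack and cluster names below are taken verbatim from the log above):

```shell
# Inspect the cluster stack's events and pull out the delete,
# including its timestamp and status reason ("User Initiated").
aws cloudformation describe-stack-events \
  --region eu-west-2 \
  --stack-name eksctl-0ab8096a3e18-cluster \
  --query 'StackEvents[?ResourceStatus==`DELETE_IN_PROGRESS`].[Timestamp,ResourceStatusReason]' \
  --output table

# Once the delete has finished, remove any leftovers and retry,
# as the eksctl output itself recommends:
eksctl delete cluster --region=eu-west-2 --name=0ab8096a3e18
```

If it is unclear who triggered the delete, the corresponding `DeleteStack` API call is recorded in CloudTrail along with the caller's identity, which should answer whether a user deleted the stack mid-creation.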