Hi, the message asks you to install botocore (used by the AWS CLI and other AWS tools). You may want to start there to get more precise information about the underlying issue.
See https://github.com/boto/botocore
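If botocore is missing, a minimal way to install or upgrade it (assuming a Python environment with pip available):

    pip install --upgrade botocore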
Are you running the failing command from the CLI? If so, can you tell us what it is?
You may also want to check https://stackoverflow.com/questions/72389626/unable-to-access-efs-from-ecs-fargate-task as a possible solution to your problem.
Yes, I am running it from the CLI. I create the EFS file system using the command line. I have a VPC with 3 public subnets, 3 private subnets, and 3 private DB subnets.
a) I use this command to create a security group for the VPC:

    aws ec2 create-security-group --group-name <group-name> --description "development EFS security group" --vpc-id <VPC_ID> --output text --region <REGION_NAME>

b) Add an inbound rule to the security group:

    aws ec2 authorize-security-group-ingress --group-id <SECURITY_GROUP_ID> --protocol tcp --port 2049 --cidr <CIDR_RANGE> --region <REGION_NAME>

c) Create the EFS file system:

    aws efs create-file-system --region <REGION_NAME> --performance-mode generalPurpose --query 'FileSystemId' --tags Key=Name,Value=<VALUE> --output text --encrypted

d) Get the subnets that have the internal-elb tag (this gives me the private subnets):

    aws ec2 describe-subnets --filters "Name=tag:kubernetes.io/role/internal-elb,Values=1" --query "Subnets[*].SubnetId" --output text

e) For each of those subnets, create an EFS mount target:

    aws efs create-mount-target --file-system-id <FILESYSTEM_ID> --subnet-id <SUBNET_ID> --security-groups <SECURITY_GROUP_ID> --region <REGION_NAME>
I check the mount targets and their subnets, and they are correct.
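For reference, one way to verify the mount targets and their subnets from the CLI (same placeholders as above):

    aws efs describe-mount-targets --file-system-id <FILESYSTEM_ID> --region <REGION_NAME> --query 'MountTargets[*].[MountTargetId,SubnetId,LifeCycleState]' --output table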
I have two files. The first is efs.yaml, which defines the StorageClass:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: efs
    provisioner: efs.csi.aws.com

I apply it with kubectl:

    kubectl apply -f efs.yaml
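A quick sanity check that the StorageClass exists (not from the original post):

    kubectl get storageclass efs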
Is the kubelet attached to the same VPC where the EFS is mounted?
I finally figured out the issue: DNS was not enabled on the VPC. Because DNS resolution was disabled in my configuration, the DNS name for the EFS mount target could not be resolved. Once I checked the box to enable DNS for the VPC, the EFS mount in the pod worked.
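For anyone hitting the same thing who prefers the CLI over the console checkbox, a sketch of the equivalent commands (VPC ID is a placeholder; modify-vpc-attribute accepts one attribute per call, and both DNS support and DNS hostnames are typically needed for the EFS DNS name to resolve):

    aws ec2 modify-vpc-attribute --vpc-id <VPC_ID> --enable-dns-support "{\"Value\":true}"
    aws ec2 modify-vpc-attribute --vpc-id <VPC_ID> --enable-dns-hostnames "{\"Value\":true}"
    aws ec2 describe-vpc-attribute --vpc-id <VPC_ID> --attribute enableDnsSupport
    aws ec2 describe-vpc-attribute --vpc-id <VPC_ID> --attribute enableDnsHostnames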
I then use a manifest file to create the PersistentVolume, PersistentVolumeClaim, and efs-app pod. I got the manifest file from GitHub and replaced the EFS file system ID:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity:
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      storageClassName: efs
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: efs.csi.aws.com
        volumeHandle: <EFS_ID>
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: efs-claim
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: efs
      resources:
        requests:
          storage: 5Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: efs-app
    spec:
      containers:
        - name: app
          image: ubuntu
          command: ["/bin/sh"]
          args: ["-c", "while true; do echo $(date -u); sleep 5; done"]
          volumeMounts:
            - name: persistent-storage
              mountPath: /data
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: efs-claim
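Once the pod reaches Running, a quick way to verify the objects and the mount (names taken from the manifests above; add --namespace kube-system if that is where they were applied):

    kubectl get pv efs-pv
    kubectl get pvc efs-claim
    kubectl exec efs-app -- ls -la /data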
But the pod is stuck in the ContainerCreating state:

    kube-system   efs-app   0/1   ContainerCreating   0   6h53m
When I check the pod with the describe command, I get this error:
    kubectl describe pod efs-app --namespace kube-system
    Name:                 efs-app
    Namespace:            kube-system
    Priority:             2000001000
    Priority Class Name:  system-node-critical
    Service Account:      default
    Node:                 fargate-10.0.6.84/10.0.6.84
    Start Time:           Sun, 04 Jun 2023 17:52:16 -0600
    Labels:               eks.amazonaws.com/fargate-profile=fp-default
    Annotations:          CapacityProvisioned: 0.25vCPU 0.5GB
                          Logging: LoggingDisabled: LOGGING_CONFIGMAP_NOT_FOUND
    Status:               Pending
    Events:
      Type     Reason       Age                     From     Message
      Warning  FailedMount  18m (x38 over 6h39m)    kubelet  Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[kube-api-access-w5bxs persistent-storage]: timed out waiting for the condition
      Warning  FailedMount  14m (x139 over 6h52m)   kubelet  Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-w5bxs]: timed out waiting for the condition
      Warning  FailedMount  4m8s (x209 over 6h54m)  kubelet  MountVolume.SetUp failed for volume "efs-pv" : rpc
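Those FailedMount timeouts usually mean the node cannot reach or resolve the EFS mount target. One quick check from inside the cluster (a sketch; the file system ID and region are placeholders, and the mount target DNS name follows the <FILESYSTEM_ID>.efs.<REGION_NAME>.amazonaws.com pattern):

    kubectl run efs-dns-test --rm -it --restart=Never --image=busybox -- nslookup <FILESYSTEM_ID>.efs.<REGION_NAME>.amazonaws.com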
I have defined a VPC with three public subnets, three private subnets, and three private DB subnets. I then define the EFS file system and use the following repository to deploy a sample that uses a persistent volume I created:

    git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git

I navigate to the multiple_pods example directory, update pv.yaml with the file system ID of the EFS, and apply the specs:

    kubectl apply -f specs/pv.yaml
    kubectl apply -f specs/claim.yaml
    kubectl apply -f specs/storageclass.yaml

The test application pod is not running, and it gives me the error that I posted before.
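For reference, a way to check the state of the sample pods from that example (the pod name app1 is my assumption based on the multiple_pods spec, not confirmed from the post):

    kubectl get pods -o wide
    kubectl describe pod app1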