How do I use persistent storage in Amazon EKS?
I want to use persistent storage in Amazon Elastic Kubernetes Service (Amazon EKS).
Short description
Set up persistent storage in Amazon EKS using either of the following options:
- Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver
- Amazon Elastic File System (Amazon EFS) Container Storage Interface (CSI) driver
To use one of these options, complete the steps in either of the following sections:
- Option A: Deploy and test the Amazon EBS CSI driver
- Option B: Deploy and test the Amazon EFS CSI driver
The commands in this article require kubectl version 1.14 or greater. To see your version of kubectl, run the following command:
kubectl version --client --short
Note: It's a best practice to make sure that you install the latest version of the drivers. For more information, see the installation instructions in the GitHub repositories for the Amazon EBS CSI driver and the Amazon EFS CSI driver.
Resolution
Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent version of the AWS CLI.
Before you complete the steps in either section, you must:
1. Set AWS Identity and Access Management (IAM) permissions so that you can create and attach an IAM policy to the CSI driver role.
2. Create your Amazon EKS cluster and join your worker nodes to the cluster.
Note: Run the kubectl get nodes command to verify that your worker nodes are attached to your cluster.
3. Run the following command to verify that your AWS IAM OpenID Connect (OIDC) provider exists for your cluster:
aws eks describe-cluster --name your_cluster_name --query "cluster.identity.oidc.issuer" --output text
Note: Replace your_cluster_name with your cluster name.
4. Run the following command to verify that your IAM OIDC provider is configured:
aws iam list-open-id-connect-providers | grep <ID of the oidc provider>
Note: Replace ID of the oidc provider with your OIDC ID. If you receive a "No OpenIDConnect provider found in your account" error, then you must create an IAM OIDC provider.
5. If you must create an IAM OIDC provider, run the following command:
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve
Note: Replace my-cluster with your cluster name.
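The OIDC ID used in the preceding commands is the last path segment of the issuer URL returned in step 3. As a sketch, with a made-up issuer URL:

```shell
# Example issuer URL as returned by "aws eks describe-cluster" in step 3
# (this value is a placeholder, not a real provider)
ISSUER_URL="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

# The OIDC ID is the fifth "/"-separated field of the issuer URL
OIDC_ID=$(echo "$ISSUER_URL" | cut -d '/' -f 5)
echo "$OIDC_ID"   # EXAMPLED539D4633E53DE1B71EXAMPLE
```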
Option A: Deploy and test the Amazon EBS CSI driver
Deploy the Amazon EBS CSI driver:
1. Create an IAM trust policy file, similar to the one below:
cat <<EOF > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:oidc-provider/oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<XXXXXXXXXX45D83924220DC4815XXXXX>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<XXXXXXXXXX45D83924220DC4815XXXXX>:aud": "sts.amazonaws.com",
          "oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<XXXXXXXXXX45D83924220DC4815XXXXX>:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
        }
      }
    }
  ]
}
EOF
Note: Replace YOUR_AWS_ACCOUNT_ID with your account ID. Replace YOUR_AWS_REGION with your AWS Region. Replace XXXXXXXXXX45D83924220DC4815XXXXX with the OIDC ID from creating your IAM OIDC provider.
2. Create an IAM role named AmazonEKS_EBS_CSI_DriverRole:
aws iam create-role \
  --role-name AmazonEKS_EBS_CSI_DriverRole \
  --assume-role-policy-document file://"trust-policy.json"
3. Attach the AWS managed IAM policy for the EBS CSI Driver to the IAM role you created:
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --role-name AmazonEKS_EBS_CSI_DriverRole
4. Deploy the Amazon EBS CSI driver:
Note: You can deploy the EBS CSI driver using Kustomize, Helm, or an Amazon EKS managed add-on. In the example below, the driver is deployed using the Amazon EKS add-on feature. For more information, see the aws-ebs-csi-driver installation guide.
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::YOUR_AWS_ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverRole
Note: Replace my-cluster with your cluster name and YOUR_AWS_ACCOUNT_ID with your account ID.
Test the Amazon EBS CSI driver:
You can test your Amazon EBS CSI driver with a sample application that uses dynamic provisioning for the pods. The Amazon EBS volume is provisioned on demand.
1. Clone the aws-ebs-csi-driver repository from AWS GitHub:
git clone https://github.com/kubernetes-sigs/aws-ebs-csi-driver.git
2. Change your working directory to the folder that contains the Amazon EBS driver test files:
cd aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/
3. Create the Kubernetes resources required for testing:
kubectl apply -f manifests/
Note: The kubectl command creates a StorageClass (from the Kubernetes website), PersistentVolumeClaim (PVC) (from the Kubernetes website), and pod. The pod references the PVC. An Amazon EBS volume is provisioned only when the pod is created.
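The StorageClass and PVC in the manifests/ folder look roughly like the following. This is a sketch for orientation only; use the files from the cloned repository rather than retyping them:

```yaml
# Sketch of the dynamic-provisioning example resources (not the exact
# files; the repository copies are authoritative)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
```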
4. Describe the ebs-sc storage class:
kubectl describe storageclass ebs-sc
5. Watch the pods in the default namespace and wait for the app pod's status to change to Running. For example:
kubectl get pods --watch
6. View the persistent volume that was created for the pod that references the PVC:
kubectl get pv
7. View information about the persistent volume:
kubectl describe pv your_pv_name
Note: Replace your_pv_name with the name of the persistent volume returned from the preceding step 6. The value of the Source.VolumeHandle property in the output is the ID of the physical Amazon EBS volume created in your account.
8. Verify that the pod is writing data to the volume:
kubectl exec -it app -- cat /data/out.txt
Note: The command output displays the current date and time stored in the /data/out.txt file. The file includes the day, month, date, and time.
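The sample app container keeps appending timestamps in a loop along these lines. This is a behavioral sketch, not the exact pod manifest; in the pod the file lives at /data/out.txt on the mounted volume:

```shell
# Behavioral sketch of the sample app container: append a timestamp
# on each pass. The real pod loops forever and writes to /data/out.txt
# on the EBS-backed volume; three local passes are shown here.
OUT_FILE="out.txt"
for i in 1 2 3; do
  date >> "$OUT_FILE"
  sleep 1
done
cat "$OUT_FILE"
```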
Option B: Deploy and test the Amazon EFS CSI driver
Before deploying the CSI driver, create an IAM role that allows the CSI driver's service account to make calls to AWS APIs on your behalf.
1. Download the IAM policy document from GitHub:
curl -o iam-policy-example.json https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json
2. Create an IAM policy:
aws iam create-policy \
  --policy-name AmazonEKS_EFS_CSI_Driver_Policy \
  --policy-document file://iam-policy-example.json
3. Run the following command to determine your cluster's OIDC provider URL:
aws eks describe-cluster --name your_cluster_name --query "cluster.identity.oidc.issuer" --output text
Note: In step 3, replace your_cluster_name with your cluster name.
4. Create the following IAM trust policy, and then grant the AssumeRoleWithWebIdentity action to your Kubernetes service account. For example:
cat <<EOF > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:oidc-provider/oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<XXXXXXXXXX45D83924220DC4815XXXXX>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<XXXXXXXXXX45D83924220DC4815XXXXX>:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa"
        }
      }
    }
  ]
}
EOF
Note: In step 4, replace YOUR_AWS_ACCOUNT_ID with your account ID. Replace YOUR_AWS_REGION with your Region. Replace XXXXXXXXXX45D83924220DC4815XXXXX with the value returned in step 3.
5. Create an IAM role:
aws iam create-role \
  --role-name AmazonEKS_EFS_CSI_DriverRole \
  --assume-role-policy-document file://"trust-policy.json"
6. Attach your new IAM policy to the role:
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AmazonEKS_EFS_CSI_Driver_Policy \
  --role-name AmazonEKS_EFS_CSI_DriverRole
7. Save the following contents to a file named efs-service-account.yaml.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  name: efs-csi-controller-sa
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AmazonEKS_EFS_CSI_DriverRole
8. Create the Kubernetes service account on your cluster. The Kubernetes service account named efs-csi-controller-sa is annotated with the IAM role that you created.
kubectl apply -f efs-service-account.yaml
9. Install the driver using images stored in the public Amazon ECR registry by downloading the manifest:
kubectl kustomize "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.5" > public-ecr-driver.yaml
Note: You can also install the EFS CSI driver using Helm or Kustomize with either a private or the public Amazon ECR registry. For more information, see the AWS EFS CSI driver documentation.
10. Edit the public-ecr-driver.yaml file and annotate the efs-csi-controller-sa Kubernetes service account section with the ARN of the IAM role that you created:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: aws-efs-csi-driver
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<accountid>:role/AmazonEKS_EFS_CSI_DriverRole
  name: efs-csi-controller-sa
  namespace: kube-system
Deploy the Amazon EFS CSI driver
The Amazon EFS CSI driver allows multiple pods to write to a volume at the same time with the ReadWriteMany mode.
1. To deploy the Amazon EFS CSI driver, apply the manifest:
kubectl apply -f public-ecr-driver.yaml
2. If your cluster contains only AWS Fargate pods (no nodes), then deploy the driver with the following command (all Regions):
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/deploy/kubernetes/base/csidriver.yaml
Create an Amazon EFS file system
1. Get the VPC ID for your Amazon EKS cluster:
aws eks describe-cluster --name your_cluster_name --query "cluster.resourcesVpcConfig.vpcId" --output text
Note: Replace your_cluster_name with your cluster name.
2. Get the CIDR range for your cluster's VPC:
aws ec2 describe-vpcs --vpc-ids YOUR_VPC_ID --query "Vpcs[].CidrBlock" --output text
Note: Replace YOUR_VPC_ID with the VPC ID from the preceding step 1.
3. Create a security group that allows inbound network file system (NFS) traffic for your Amazon EFS mount points:
aws ec2 create-security-group --description efs-test-sg --group-name efs-sg --vpc-id YOUR_VPC_ID
Note: Replace YOUR_VPC_ID with the VPC ID from the preceding step 1. Save the GroupId from the output for later use.
4. Add an NFS inbound rule so that resources in your VPC can communicate with your Amazon EFS file system:
aws ec2 authorize-security-group-ingress --group-id sg-xxx --protocol tcp --port 2049 --cidr YOUR_VPC_CIDR
Note: Replace YOUR_VPC_CIDR with the CIDR range from the preceding step 2. Replace sg-xxx with the security group ID from the preceding step 3.
5. Create an Amazon EFS file system for your Amazon EKS cluster:
aws efs create-file-system --creation-token eks-efs
Note: Save the FileSystemId for later use.
6. To create a mount target for Amazon EFS, run the following command:
aws efs create-mount-target --file-system-id FileSystemId --subnet-id SubnetID --security-groups sg-xxx
Important: Be sure to run the command for each Availability Zone where your worker nodes are running, using a SubnetID from that Availability Zone. Replace FileSystemId with the output of the preceding step 5 (where you created the Amazon EFS file system). Replace sg-xxx with the output of the preceding step 3 (where you created the security group). Replace SubnetID with a subnet used by your worker nodes. To create mount targets in multiple subnets, run the command separately for each subnet ID. It's a best practice to create a mount target in each Availability Zone where your worker nodes are running.
Note: You can create mount targets for all the Availability Zones where worker nodes are launched. Then, all the Amazon Elastic Compute Cloud (Amazon EC2) instances in the Availability Zone with the mount target can use the file system.
The Amazon EFS file system and its mount targets are now running and ready to be used by pods in the cluster.
Test the Amazon EFS CSI driver
You can test the Amazon EFS CSI driver by deploying two pods that write to the same file.
1. Clone the aws-efs-csi-driver repository from AWS GitHub:
git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git
2. Change your working directory to the folder that contains the Amazon EFS CSI driver test files:
cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/
3. Retrieve your Amazon EFS file system ID that was created earlier:
aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text
Note: If the command in step 3 returns more than one result, you can use the Amazon EFS file system ID that you saved earlier.
4. In the specs/pv.yaml file, replace the spec.csi.volumeHandle value with your Amazon EFS FileSystemId from previous steps.
5. Create the Kubernetes resources required for testing:
kubectl apply -f specs/
Note: The kubectl command in the preceding step 5 creates an Amazon EFS storage class, PVC, persistent volume, and two pods (app1 and app2).
6. List the persistent volumes in the default namespace, and look for a persistent volume with the default/efs-claim claim:
kubectl get pv -w
7. Describe the persistent volume:
kubectl describe pv efs-pv
8. Test if the two pods are writing data to the file:
kubectl exec -it app1 -- tail /data/out1.txt
kubectl exec -it app2 -- tail /data/out1.txt
Wait for about one minute. The output shows the current date written to /data/out1.txt by both pods.
Related information
