Hello there, thank you for reaching out :)
To answer your questions:
Theoretically it should be possible to implement a persistent EBS volume claim for Kubernetes pods, but it's unclear whether the multi-attach -- to the main server and to the pods -- would work.
In this case, multi-attaching the volume to both the main server and the pods would not work, because Kubernetes does not support this. There is an open issue tracking it: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/449 . Both the server and the pods might need to perform read-write operations on the EBS volume, which can cause a number of issues, and the EBS CSI storage driver does not support this functionality in the first place.
Referring to the documentation at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html :
"Standard file systems, such as XFS and EXT4, are not designed to be accessed simultaneously by multiple servers, such as EC2 instances. Using Multi-Attach with a standard file system can result in data corruption or loss, so this is not safe for production workloads. You can use a clustered file system to ensure data resiliency and reliability for production workloads."
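For illustration, here is a minimal sketch of what an EBS-backed claim looks like with the aws-ebs-csi-driver (the names `ebs-sc` and `ide-data` are placeholders I've chosen, not from your setup). The point is that the driver only offers single-node access modes such as ReadWriteOnce, so the same volume cannot also be mounted on your main server:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc                  # placeholder name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ide-data                # placeholder name
spec:
  accessModes:
    - ReadWriteOnce             # the EBS CSI driver does not offer ReadWriteMany
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 100Gi
```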
Could there be a viable setup where both the main server and Kubernetes can access one EBS volume at the same time? If so, what would it look like?
The server would be one instance and a Kubernetes worker node would be another instance, so you would run into the same situation described above, with the EBS volume being accessed simultaneously by multiple instances.
I tried starting an NFS server on a separate EC2 instance to which I attached the EBS volume (it supports Multi-Attach) and mounting the NFS share into the pods. But it seems that when an IDE process is started with the volume attached like this, it breaks the NFS server.
In this case, the setup works properly as long as the pods are the only ones using the NFS share, but once the IDE process starts up, the server and the pods are again essentially modifying the same underlying volume.
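For reference, the Kubernetes side of that NFS attempt would look roughly like the sketch below (the server address and export path are placeholders). This only shows how the share is exposed to the pods; it does not solve the multiple-writer problem described above:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-shared              # placeholder name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany             # NFS allows many clients at once
  nfs:
    server: 10.0.0.10           # placeholder: the separate EC2 instance running the NFS server
    path: /export/data          # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # bind statically to the PV above
  resources:
    requests:
      storage: 100Gi
```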
Possible Workaround:
The data in the EBS volume could be transferred to an EFS volume. The EFS volume would then be mounted on a new server machine (with a normal EBS root volume) tasked with running the cloud IDE, and the same EFS volume would also be mounted into Kubernetes as a persistent volume. This is supported because EFS is accessed over NFS rather than as a single-writer XFS or EXT4 file system. This can be done using managed node groups instead of Fargate, which would also be slightly faster.
In this way, the cloud IDE application gets all the necessary data from the mounted EFS file system, and the pods in Kubernetes can use the same data from the same EFS volume.
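As a rough sketch of the Kubernetes side of this workaround, assuming the aws-efs-csi-driver is installed and the placeholder file system ID is replaced with your own, the EFS volume could be exposed to the pods like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc                  # placeholder name
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-shared
spec:
  capacity:
    storage: 100Gi              # required by the API, not enforced by EFS
  accessModes:
    - ReadWriteMany             # EFS supports concurrent access from many clients
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 100Gi            # value is ignored by EFS but must be specified
```

The cloud IDE server would mount the same file system outside the cluster (for example over NFS or with the amazon-efs-utils mount helper), which is what makes the shared access safe in this setup.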
Thanks for the detailed answer! Now it's clear to me that in my case EBS is certainly not the best idea. Even if I manage to set something up, seems like I'll have lots of maintenance work in the future. EFS turned out to be too slow on some common tasks we have, so I gave FSx Lustre a try and it's working fine so far.