I have a CUDA application that requires the 470.x series of NVIDIA's CUDA drivers. The Amazon Linux 2 ECS-optimized GPU AMI was updated a few weeks ago to carry driver version 470.57.02 (up from 460.73.01), which is great. However, a new Batch compute environment configured for p3-family instances still launches instances from the older September AMI, amzn2-ami-ecs-gpu-hvm-2.0.20210916-x86_64-ebs, which carries the old 460.73.01 driver. This suggests that Batch does not directly track the latest recommended ECS-optimized GPU AMI version.
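(For reference, I'm checking which GPU AMI is currently recommended via the public SSM parameter that the ECS documentation points to. A minimal boto3 sketch; the region is just an example:)

```python
import json

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # example region

# Public SSM parameter that tracks the recommended ECS-optimized GPU AMI.
resp = ssm.get_parameter(
    Name="/aws/service/ecs/optimized-ami/amazon-linux-2/gpu/recommended"
)
meta = json.loads(resp["Parameter"]["Value"])
print(meta["image_name"], meta["image_id"])
```

This returns the November AMI, while my Batch compute environment still launches the September one.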
When can I expect Batch to be updated to use the new amzn2-ami-ecs-gpu-hvm-2.0.20211120-x86_64-ebs ECS-optimized GPU AMI with NVIDIA driver version 470.57.02?
In general, does AWS have a policy (official or unofficial) for when the ECS-optimized GPU AMI is updated with new NVIDIA drivers, or for when Batch starts using a new ECS-optimized GPU AMI by default? Knowing this would help me plan around driver version incompatibilities in the future.
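In the meantime, my workaround plan is to pin the AMI explicitly via imageIdOverride in the compute environment's ec2Configuration. A minimal boto3 sketch, assuming the subnet, security group, instance profile, and AMI ID below are placeholders for real values:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")  # example region

batch.create_compute_environment(
    computeEnvironmentName="p3-gpu-pinned-ami",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 64,
        "instanceTypes": ["p3"],  # p3 family, as in my current environment
        "subnets": ["subnet-0123456789abcdef0"],       # placeholder
        "securityGroupIds": ["sg-0123456789abcdef0"],  # placeholder
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",  # placeholder
        "ec2Configuration": [
            {
                "imageType": "ECS_AL2_NVIDIA",
                # Pin the GPU AMI that ships the 470.57.02 driver
                # (AMI ID here is a placeholder).
                "imageIdOverride": "ami-0123456789abcdef0",
            }
        ],
    },
)
```

My understanding is that an existing compute environment keeps the AMI it was created with, so picking up a newer AMI means creating a replacement compute environment, which is exactly the churn I'd like to avoid.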
Thanks.
@jesse where can I read more about using GPUs in AWS Batch? I'm running into quite a few problems: https://repost.aws/questions/QUIQVmco0IRUKkCEoO2PIz1g/use-nvidia-gpu-in-aws-batch Thanks!