Amazon Linux 2 ECS-optimized GPU AMI on AWS Batch - update to NVIDIA R470 drivers


I have a CUDA application that requires version 470.x.x of NVIDIA's CUDA drivers. The Amazon Linux 2 ECS-optimized GPU AMI was updated a few weeks ago to carry driver version 470.57.02 (updated from 460.73.01), which is great. However, I find that a new Batch compute environment configured for p3-family instances launches instances using the older AMI amzn2-ami-ecs-gpu-hvm-2.0.20210916-x86_64-ebs from September, which has the old 460.73.01 driver version. This indicates that Batch does not directly track the latest recommended ECS-optimized GPU AMI version.
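For reference, this is a minimal sketch of how I check which AMI AWS currently recommends, using boto3 and the documented public SSM parameter for the ECS-optimized GPU AMI (the region here is just an example):

```python
import json
import boto3

# Region chosen for illustration only.
ssm = boto3.client("ssm", region_name="us-east-1")

# Public SSM parameter that tracks the recommended ECS-optimized
# GPU AMI for Amazon Linux 2.
param = ssm.get_parameter(
    Name="/aws/service/ecs/optimized-ami/amazon-linux-2/gpu/recommended"
)
recommended = json.loads(param["Parameter"]["Value"])

print("Recommended AMI ID:  ", recommended["image_id"])
print("Recommended AMI name:", recommended["image_name"])
```

Comparing that output against the AMI of the instances Batch actually launches is how I noticed the discrepancy.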

When can I expect Batch to be updated to use the new amzn2-ami-ecs-gpu-hvm-2.0.20211120-x86_64-ebs ECS-optimized GPU AMI with NVIDIA driver version 470.57.02?

In general, does AWS have a policy (official or unofficial) for when the ECS-optimized GPU AMI will be updated to include new NVIDIA drivers, or for when Batch will start using a new ECS-optimized GPU AMI by default? Knowing this would be very helpful for planning around driver version incompatibilities in the future.
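In the meantime, my workaround is to pin the AMI explicitly via `imageIdOverride` in the compute environment's `ec2Configuration`, so I don't depend on whatever default Batch resolves. A minimal boto3 sketch, where the environment name, subnet, security group, role, and AMI ID are all placeholders:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# All names and IDs below are hypothetical placeholders.
batch.create_compute_environment(
    computeEnvironmentName="p3-cuda470",
    type="MANAGED",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 64,
        "instanceTypes": ["p3"],
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "ecsInstanceRole",
        # Pin the ECS GPU AMI explicitly instead of relying on the
        # Batch default; substitute the AMI ID carrying the driver
        # version you need (e.g. the 470.57.02 build).
        "ec2Configuration": [
            {
                "imageType": "ECS_AL2_NVIDIA",
                "imageIdOverride": "ami-0123456789abcdef0",
            }
        ],
    },
)
```

As far as I know, the AMI settings of an existing compute environment can't be changed in place, so this approach means creating a new compute environment whenever the pinned AMI needs to change.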

Thanks.

1 Answer

1. Run `sudo yum update`.

2. Reboot your instance to ensure that you are using the latest packages and libraries from the update.

Azeem
answered 2 years ago
  • Hi Azeem, my question is not about updating anything inside a running instance. I'm asking when the AWS Batch service will start using the latest ECS GPU AMI (with GPU driver version 470.x.x) when launching instances in a Batch compute environment.
