Amazon Linux 2 ECS-optimized GPU AMI on AWS Batch - update to NVIDIA R470 drivers


I have a CUDA application that requires version 470.x.x of NVIDIA's CUDA drivers. The Amazon Linux 2 ECS-optimized GPU AMI was updated a few weeks ago to carry driver version 470.57.02 (up from 460.73.01), which is great. However, a new Batch compute environment configured for p3-family instances still launches instances with the September AMI, amzn2-ami-ecs-gpu-hvm-2.0.20210916-x86_64-ebs, which carries the old 460.73.01 driver. This suggests that Batch does not directly track the latest recommended ECS-optimized GPU AMI.
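
For reference, this is roughly how I confirmed which AMI is in use; the instance ID below is a placeholder:

    # Check which AMI a Batch-launched instance was started from
    # (i-0123456789abcdef0 is a placeholder instance ID).
    aws ec2 describe-instances \
        --instance-ids i-0123456789abcdef0 \
        --query 'Reservations[].Instances[].ImageId' \
        --output text

    # Resolve the September GPU AMI's ID for comparison.
    aws ec2 describe-images \
        --owners amazon \
        --filters 'Name=name,Values=amzn2-ami-ecs-gpu-hvm-2.0.20210916-x86_64-ebs' \
        --query 'Images[].ImageId' \
        --output text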

When can I expect Batch to be updated to use the new amzn2-ami-ecs-gpu-hvm-2.0.20211120-x86_64-ebs ECS-optimized GPU AMI with NVIDIA driver version 470.57.02?
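
In the meantime, one workaround appears to be pinning the newer AMI explicitly via imageIdOverride when creating the compute environment. A sketch only; the environment name, subnet, security group, and AMI ID are placeholders:

    aws batch create-compute-environment \
        --compute-environment-name gpu-ce-nvidia-470 \
        --type MANAGED \
        --compute-resources '{
            "type": "EC2",
            "minvCpus": 0,
            "maxvCpus": 64,
            "instanceTypes": ["p3"],
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroupIds": ["sg-0123456789abcdef0"],
            "instanceRole": "ecsInstanceRole",
            "ec2Configuration": [{
                "imageType": "ECS_AL2_NVIDIA",
                "imageIdOverride": "ami-0123456789abcdef0"
            }]
        }'

The obvious downside is that a pinned AMI no longer picks up future updates automatically, which is exactly what I am trying to avoid.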

In general, does AWS have a policy (official or unofficial) for when the ECS-optimized GPU AMI should be updated to include new drivers from NVIDIA, or for when Batch will start using a new ECS-optimized GPU AMI by default? Knowing this would be very helpful for my planning in order to avoid driver version incompatibility issues in the future.
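
For what it's worth, the currently recommended ECS GPU AMI can be looked up via its public SSM parameter, which at least makes it possible to detect when the recommendation changes:

    # Returns the AMI ID that ECS currently recommends for GPU workloads.
    aws ssm get-parameter \
        --name /aws/service/ecs/optimized-ami/amazon-linux-2/gpu/recommended/image_id \
        --query 'Parameter.Value' \
        --output text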

Thanks.

1 Answer

1. Run the sudo yum update command.

2. Reboot your instance to ensure you are running the latest packages and libraries from the update.
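
For example (run on the instance itself; note that the driver version you end up with depends on what the AMI's configured repositories provide):

    sudo yum update -y   # update all packages, including the NVIDIA driver if a newer one is packaged
    sudo reboot          # reboot so the updated kernel modules are loaded

    # After reconnecting, confirm the driver version:
    nvidia-smi --query-gpu=driver_version --format=csv,noheader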

Azeem
answered 2 years ago
  • Hi Azeem, my question is not about updating anything inside a running instance. I'm asking when the AWS Batch service will start using the latest ECS GPU AMI (with GPU driver version 470.x.x) when launching instances in a Batch compute environment.
