Hi all,
I want to import the openai whisper module (https://github.com/openai/whisper) into a Python Lambda function. The package is large (about 4 GB), so I had to attach an EFS file system to the Lambda function. Everything was fine until I tested the function: I get this error when trying to import the whisper module.
[ERROR] OSError: /mnt/ddv/ddv/nvidia/cufft/lib/libcufft.so.10: failed to map segment from shared object
Traceback (most recent call last):
  File "/var/lang/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/var/task/lambda_function.py", line 9, in <module>
    import whisper
  File "/mnt/ddv/ddv/whisper/__init__.py", line 8, in <module>
    import torch
  File "/mnt/ddv/ddv/torch/__init__.py", line 228, in <module>
    _load_global_deps()
  File "/mnt/ddv/ddv/torch/__init__.py", line 189, in _load_global_deps
    _preload_cuda_deps(lib_folder, lib_name)
  File "/mnt/ddv/ddv/torch/__init__.py", line 155, in _preload_cuda_deps
    ctypes.CDLL(lib_path)
  File "/var/lang/lib/python3.9/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
Does anyone know how to resolve this error?
Thanks in advance
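From the traceback, the failure happens when torch preloads its CUDA dependencies with ctypes.CDLL. As a quick diagnostic, you can try mapping that library directly and inspect the exact OSError: a "failed to map segment" message typically means the dlopen itself failed mid-mapping (often an environment limit such as available memory for mmap, or mount options) rather than a missing file. This is only a sketch; the path below is copied from the traceback, so adjust it to your actual EFS mount.

```python
import ctypes

def try_load(lib_path):
    """Try to dlopen a shared library; return 'loaded' or the OSError message."""
    try:
        ctypes.CDLL(lib_path)
        return "loaded"
    except OSError as exc:
        # "failed to map segment" here usually means the mapping itself
        # failed (e.g. not enough memory), not that the file is absent.
        return str(exc)

# Path taken from the traceback above; adjust to your own mount point.
print(try_load("/mnt/ddv/ddv/nvidia/cufft/lib/libcufft.so.10"))
```

Running this inside a minimal Lambda handler isolates the dlopen step from the rest of the whisper import.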
Hi Riku, how can I check if this module works without a GPU in the Lambda execution environment?
A good place to start would be to create a sample Python script and see if it works on an EC2 instance without a GPU.
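As a first step on that instance, you can sanity-check whether the environment exposes an NVIDIA GPU at all before installing anything heavy. The sketch below uses only the standard library; the device-node and nvidia-smi checks are rough heuristics, not an authoritative CUDA probe (torch.cuda.is_available() would be the definitive check once torch is installed).

```python
import os
import shutil

def gpu_visible():
    """Heuristic GPU check: NVIDIA device nodes in /dev or nvidia-smi on PATH."""
    has_device = (
        any(name.startswith("nvidia") for name in os.listdir("/dev"))
        if os.path.isdir("/dev")
        else False
    )
    has_smi = shutil.which("nvidia-smi") is not None
    return has_device or has_smi

print("GPU visible:", gpu_visible())
```

On a Lambda execution environment or a CPU-only EC2 instance this should report False, which tells you whisper/torch will have to run on the CPU there.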
Did you ever get this working? We are attempting to do something similar, and would be happy to work with you to help debug this and see if we can get it running as a Lambda function. I've gotten Whisper working inside an Anaconda notebook on an M2 MacBook Air, but that does have a built-in GPU, and I'm not entirely sure what hardware the code is executing on. Would love to get this deployed to Lambda.