
AWS Lambda “Cannot load native module 'Cryptodome.Hash._MD5'”


I recently added some dependencies to my serverless project and ran into the following error when invoking my newly deployed Lambda.

I don't encounter this issue on my local dev machine running macOS 10.13.6 and Python 3.6.0.

module initialization error: Cannot load native module 'Cryptodome.Hash._MD5': Trying '': /var/task/vendored/Cryptodome/Util/../Hash/ cannot open shared object file: No such file or directory, Trying '': /var/task/vendored/Cryptodome/Util/../Hash/ cannot open shared object file: No such file or directory, Trying '': /var/task/vendored/Cryptodome/Util/../Hash/ cannot open shared object file: No such file or directory

I did some research on this problem and here's what I've gathered:

- Lambda runs on Linux, so the package above may need to be built in a Linux environment to resolve correctly
- pycryptodome may be a drop-in replacement for pycrypto and may be causing conflicts with Lambda's environment

This dependency stems from one of my other dependencies, and I don't want to manually modify those dependencies to use a different package. I would also prefer not to set up a virtual Linux environment to package this project.

What can I do to better investigate this issue, and ideally, resolve it?

Edited by: JGMeyer on Aug 8, 2019 7:17 PM

asked 3 years ago · 404 views
8 Answers

You can try the virtualenv method described here:

Make sure that when running the pip commands, you specify the flags to get Lambda-compatible wheel packages. For example:

--python-version 27 --only-binary :all: --platform manylinux1_x86_64 --abi cp27mu

for Python 2.7, or

--python-version 36 --only-binary :all: --platform manylinux1_x86_64 --abi cp36m

for Python 3.6.
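Putting those flags together, a full command might look like this (the target directory `vendored` and the package name are just examples; `pip` requires a `--target` directory when cross-platform flags are used):

```shell
# Fetch Lambda-compatible (manylinux) wheels instead of building locally;
# "vendored" is an illustrative target directory for the deployment package.
pip install pycryptodome \
  --python-version 36 \
  --only-binary :all: \
  --platform manylinux1_x86_64 \
  --abi cp36m \
  -t vendored
```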

This will only work if all of the dependencies have compatible wheels. If they don't, you will have to build your deployment package in an environment compatible with Lambda. See here for details:

answered 3 years ago
  • Thanks a lot, I was going crazy for almost 2 weeks because I couldn't find any answer for a related package from pycryptodomex with the same error, but after trying this, everything started to work flawlessly.


Thanks for the information. This definitely gives me a direction to work from.

I tried modifying my setup to run:

python -m pip install -r requirements.txt --python-version 36 --only-binary :all: --platform manylinux1_x86_64 --abi cp36m -t vendored

And I get:

Collecting gmusicapi (from -r requirements.txt (line 2))
  ERROR: Could not find a version that satisfies the requirement gmusicapi (from -r requirements.txt (line 2)) (from versions: none)
ERROR: No matching distribution found for gmusicapi (from -r requirements.txt (line 2))

I still need to read up more on wheels. Am I correct in assuming this is the case where the package doesn't have a compatible wheel? If so, would I need to pull the repo for this package and set up wheels manually?

I'd like to avoid setting up a virtual machine to develop my code if possible, but how much of a rabbit hole would I be jumping into to avoid that?

answered 3 years ago

So I had the idea to pin the resolved dependencies from my first install into a new requirements.txt with pip freeze > requirements.txt.
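For reference, that pinning step can be sketched as (run inside the project's virtualenv, after a normal `pip install -r requirements.txt`):

```shell
# Pin the exact resolved versions so later platform-specific
# rebuilds use the same dependency set
pip freeze > requirements.txt
```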

When I rerun:

python -m pip install -r requirements.txt --no-cache-dir --python-version 36 --only-binary :all: --platform manylinux1_x86_64 --abi cp36m -t vendored

I get:

ERROR: Could not find a version that satisfies the requirement future==0.17.1 (from -r requirements.txt (line 7)) (from versions: none)
ERROR: No matching distribution found for future==0.17.1 (from -r requirements.txt (line 7))

Checking, it looks like "future" does not have a wheel archive. Is this the actual root of my issue?

answered 3 years ago

That's one of those cases where you'll have to build the wheel yourself. If you can determine that the package uses no native code, it doesn't matter where you build it. If it does use native code, a virtual machine may not be enough. I've found it useful to spin up a tiny EC2 instance, build the package there, then save the generated wheel in a local pip repo for future builds.

And for the record, building wheels is fairly easy. Modern pip with the wheel package installed will automatically create and cache wheels after installing packages to speed up future installs. So just pip install wheel and then pip install <packagename>, and that should be it.
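The same flow can also be made explicit with `pip wheel`, which drops the built wheel into a directory you choose (the `wheelhouse` directory name is illustrative):

```shell
# Make sure pip can build wheels at all
pip install wheel

# Build a wheel for the pure-Python "future" package into ./wheelhouse
pip wheel future==0.17.1 -w wheelhouse
```

Since "future" is pure Python, the resulting wheel is platform-independent and safe to build on any OS.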

answered 3 years ago

So since your last response, I've been doing a lot of investigation into using Docker for my deployment. I'm assuming I could use this to replace the EC2 instance in your example. With this workflow, are you suggesting that I could potentially:

  1. spin up a Docker container running amazonlinux
  2. compile the wheel for the problematic pip dependency (in amazonlinux)
  3. copy the contents from the container into my local serverless repo
  4. somehow deploy my serverless app using the new wheel?
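Steps 1–3 above might be sketched roughly like this (image tag, packages, and paths are all assumptions, not a tested recipe):

```shell
# Build Linux-compatible wheels inside an Amazon Linux container and
# write them into a host directory via a bind mount
docker run --rm -v "$PWD/wheelhouse":/wheelhouse amazonlinux:2 bash -c \
  "yum install -y python3 python3-pip gcc && \
   pip3 install wheel && \
   pip3 wheel future==0.17.1 -w /wheelhouse"

# Then vendor the dependency from the locally built wheels only
pip install --no-index --find-links wheelhouse future==0.17.1 -t vendored
```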

I understand this is glossing over your recommendation about a local pip repo; I'm going to do more research to better understand what you mean here. Maybe this would simplify the workflow even further.

answered 3 years ago

That's about the size of it. I'm unfortunately not familiar with the serverless framework, so I don't know how that part would work.

For the local pip repo, it can be as simple as an HTTP server with a proper folder hierarchy. Once you have the properly built wheels in a local repo, you can tell pip to include the local repo in its search.
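A minimal version of that can be sketched with Python's built-in HTTP server (port and directory names are illustrative); pip's `--find-links` accepts a URL to a page of links, which a plain directory listing provides:

```shell
# Serve a directory of pre-built wheels over HTTP;
# the auto-generated directory listing acts as the link page
(cd wheelhouse && python3 -m http.server 8080) &

# Tell pip to consider the local wheel server in addition to PyPI
pip install --find-links http://localhost:8080/ future
```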

answered 3 years ago

Oh cool. So it may be hard to automate as a robust solution, but maybe I can run some tests: building a simple web server, building with Docker, then moving the wheels over to the web server.

I appreciate the help! I've definitely been learning a lot through this whole process. I tried a workflow where I could mimic my environment in Docker and deploy from it directly, but installing Node (to install Serverless) took way too long for dev testing because the Node install is huge. I ended up trying to build with the serverless-python-requirements plugin for Serverless and got most of the way there. Anything I import back to my local repo gets the opposite issue, saying it can't find the macOS .so files. But it's still a work in progress getting it to behave correctly in AWS.

answered 3 years ago

Using the serverless-python-requirements plugin with pipenv and dockerization resolved my issue! I needed to go back and recreate my Pipfile.lock to fix some of my dependencies, but after doing so, I seem to have everything running again in dev and prod!
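For anyone landing here later, the relevant serverless.yml pieces are roughly the following (a sketch based on the serverless-python-requirements plugin's options; adjust to your project):

```yaml
plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    # Build dependencies inside a Lambda-compatible Docker image
    # instead of on the host OS
    dockerizePip: true
```

With a Pipfile present, the plugin resolves dependencies from Pipfile.lock rather than requirements.txt.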

Thank you for all the help diagnosing this problem :)

answered 3 years ago
