Trouble with the AWS Lambda Runtime API with a Docker image


Short version

I am running a Lambda function in a Docker container, and every execution is marked as a failure with a Runtime.ExitError, even though I am calling the Runtime API and the Lambda configured as the on_success destination does run.

Longer version, with context

I have a setup with a bunch of functions chained using API invocations and destinations. One of them requires a custom runtime (the handler is a PHP command), so I have been using a Docker image for it. To get it running, I fetch the request ID in the entrypoint, and in the command I run both my command and a curl to the Runtime API, like so:

CMD ["/bin/bash", "-c", "/app/bin/my-super-command && curl --silent -X POST \"http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response\" -d 'SUCCESS'"]

I know the request ID is correct (I am printing it in the entrypoint), and at the end of the logs I get the following lines (edited, of course):

End of my-super-command
{"status":"OK"}
END RequestId: 123456-abcd-1234-abcd-12345678910
REPORT RequestId: 123456-abcd-1234-abcd-12345678910	Duration: 39626.80 ms	Billed Duration: 39777 ms	Memory Size: 384 MB	Max Memory Used: 356 MB	Init Duration: 149.26 ms
RequestId: 123456-abcd-1234-abcd-12345678910 Error: Runtime exited without providing a reason
Runtime.ExitError
Beginning of the entrypoint

The first line is from my command, the second is the output from the curl (it looks like a success, and the API documentation seems to agree with me), but as we can see, the invocation is marked as failed afterwards.

The weird stuff:

  • The Lambda logs a failure even though the Runtime API returns an OK to my success call
  • The Lambda is marked as failed in the monitoring
  • The function I put after this one in the workflow, as a destination with the on_success condition, runs!

The problems I have had and have since worked through:

  • I am getting the request ID with a combination of grep/sed/trim because there is a \r somewhere; that is not optimal, but I am printing it and it appears correctly (I have printed the full curl command too, just in case)
  • I have had issues with timeout/OOM, but as you can see above, that is not the case here.

Am I missing something here? Maybe I did not understand how the Runtime API is meant to be used. As you can see, the next run seems to be launched but then interrupted, so there might be some timing issue.

asked 2 years ago · 1251 views
1 Answer

If you are creating a custom runtime, you need to call the AWS Lambda Runtime API to get the next invocation (as you did), then call the API to return the result (as you did), but then your process must call the next-invocation API again instead of exiting. Essentially, your code needs to do this in an infinite loop. Because your container's process exits after posting a single response, Lambda reports "Runtime exited without providing a reason" (Runtime.ExitError) even though the response itself was delivered, which is why your on_success destination still fires.
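Roughly, the whole thing becomes a loop like this (a bash sketch; the paths, header parsing, and error payload are illustrative, not taken from your setup):

#!/bin/bash
# Custom runtime loop: fetch an invocation, run the handler, post the result, repeat.
API="http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime"

while true; do
  HEADERS_FILE=$(mktemp)
  # Blocks until the next invocation is available; the request ID comes back in a header.
  curl --silent --output /tmp/event.json --dump-header "$HEADERS_FILE" "${API}/invocation/next"
  REQUEST_ID=$(grep -i 'lambda-runtime-aws-request-id' "$HEADERS_FILE" | cut -d: -f2 | tr -d '[:space:]')

  # Run the actual work and report the outcome instead of letting the process exit.
  if /app/bin/my-super-command; then
    curl --silent -X POST "${API}/invocation/${REQUEST_ID}/response" -d 'SUCCESS'
  else
    curl --silent -X POST "${API}/invocation/${REQUEST_ID}/error" \
      -d '{"errorMessage":"my-super-command failed","errorType":"CommandError"}'
  fi
done

With the loop in place the process never exits between invocations, so Lambda stops reporting Runtime.ExitError.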

When there are no more invocations, AWS Lambda suspends your execution environment and you are not billed for it. When another invocation arrives, it tries to reuse that suspended environment, which lets the service skip initialization for each new invocation and saves on startup time. If requests arrive too infrequently it will start a new container anyway, but when you have lots of requests coming in this can save a lot of billable CPU time.

answered 2 years ago
