Questions tagged with AWS Lambda

How to use Greengrass MQTT to trigger a script

My testing has hit a number of roadblocks:

1. I often run into an issue where my Greengrass device stops accepting new component commands, including deployments from the console. The console reports that all is well, but the instance does not show the updated component list, for example. Restarting the instance and rerunning the commands (or redeploying) fixes it easily, though I don't know why it happens. It occurs after about 1-2 hours of constant use on a t3.large Ubuntu Cloud9 instance. It seems to happen specifically with the Publisher and Subscriber components running, but enough work with any components will cause logs to stop appearing under /greengrass/v2/logs and new commands to be essentially ignored. This is hard to catch, and harder to debug due to the lack of new log files.

2. Just trying to test the Stream Manager component has proven almost impossible: my test case (which I got working just a week ago) no longer runs correctly, and I have no idea how to fix it or whether I'm looking at the wrong logs. I've been running the custom component tutorial here: https://docs.aws.amazon.com/greengrass/v2/developerguide/use-stream-manager-in-custom-components.html, under "Define component recipes that use stream manager" > "Use the Stream Manager SDK for Python" > https://github.com/aws-greengrass/aws-greengrass-stream-manager-sdk-python/blob/main/samples/stream_manager_s3.py. I will include the logs in a response below.

3. Finally, I don't see any documentation on how to trigger a component using MQTT. I've tried importing a Lambda which has the option to be triggered by a local topic, but I haven't been able to test it due to the above. Any documentation on how to subscribe to a topic and run a script on the Greengrass core device when a message comes through from a client device (similar to an SNS topic triggering a Lambda) would be helpful.

PS: I did find some material for V1 of Greengrass, but I'm specifically looking for V2.
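For the MQTT-trigger question in item 3, the usual Greengrass V2 pattern is a long-running custom component that subscribes over IPC and shells out when a message arrives. Below is a minimal sketch using the awsiot.greengrasscoreipc client from the AWS IoT Device SDK v2 for Python; the topic and script path are placeholder assumptions, and the component recipe needs an accessControl entry granting aws.greengrass#SubscribeToTopic on aws.greengrass.ipc.pubsub.

```
# Minimal sketch: Greengrass V2 component that runs a script when a
# message arrives on a local pub/sub topic. Topic and script path are
# placeholders, not values from the question.
import subprocess
import time

import awsiot.greengrasscoreipc
import awsiot.greengrasscoreipc.client as client
from awsiot.greengrasscoreipc.model import (
    SubscribeToTopicRequest,
    SubscriptionResponseMessage,
)

TOPIC = "my/trigger/topic"  # placeholder
TIMEOUT = 10


class TriggerHandler(client.SubscribeToTopicStreamHandler):
    def on_stream_event(self, event: SubscriptionResponseMessage) -> None:
        payload = str(event.binary_message.message, "utf-8")
        # Run whatever script should be triggered; path is a placeholder.
        subprocess.run(["/usr/local/bin/my_script.sh", payload], check=False)

    def on_stream_error(self, error: Exception) -> bool:
        return True  # returning True closes the stream

    def on_stream_closed(self) -> None:
        pass


ipc_client = awsiot.greengrasscoreipc.connect()
request = SubscribeToTopicRequest()
request.topic = TOPIC
operation = ipc_client.new_subscribe_to_topic(TriggerHandler())
operation.activate(request).result(TIMEOUT)

# Keep the component process alive so the subscription stays open.
while True:
    time.sleep(10)
```

For messages that originate on client devices, the MQTT bridge component can relay them into local pub/sub, or the component can use the analogous SubscribeToIoTCore operation for AWS IoT Core topics instead.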
2 answers · 0 votes · 35 views · asked 23 days ago

Multipart upload with aws S3 + checksums

I am trying to implement browser-based multipart upload to an S3 bucket. I need to be able to pause and resume the upload, and I'd also like to automatically generate checksums as I upload. I have tried several approaches and keep hitting a wall:

* Using the Amplify S3 upload. This works well, but has the caveat that I can't generate the checksums automatically; to get them, I run a Lambda function after the file upload. For large files the Lambda function times out, and I'd like to avoid this route anyway, as I believe it's quite computationally expensive.
* Following https://blog.logrocket.com/multipart-uploads-s3-node-js-react/. This is similar to the above, but when I add the checksum algorithm to the upload-part query, I get **checksum type mismatch occurred, expected checksum type sha256, actual checksum type: null**. After a lot of googling, I'm not sure checksums can be computed when uploading through a presigned URL.
* My current approach is to do away with presigned URLs and send the chunked data to a Lambda function, which then writes to the bucket. Since I'm managing everything with Amplify, I run into problems with API Gateway and multipart/form-data. I have set the gateway to accept binary data and followed other fixes I found online, but I'm stuck on **execution failed due to configuration error: unable to transform request**.

How do I fix the above error, and what would be the ideal approach to implement these features (multipart upload with resume support and checksum computation)?
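S3 does support per-part SHA-256 checksums on multipart uploads: the upload is created with a checksum algorithm, and every UploadPart then has to carry a matching checksum value, which is consistent with the "expected checksum type sha256, actual checksum type: null" error appearing when parts arrive without one. Here is a minimal sketch of the call shape in Python/boto3 (server-side, rather than the browser code from the question); the bucket, key, and file name are placeholders:

```
# Sketch: S3 multipart upload with per-part SHA-256 checksums (boto3).
# Bucket, key, and file path are placeholders.
import base64
import hashlib

import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-bucket", "uploads/big-file.bin"
PART_SIZE = 8 * 1024 * 1024  # every part except the last must be >= 5 MiB

# Declare the checksum algorithm when the upload is created...
mpu = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY, ChecksumAlgorithm="SHA256")
upload_id = mpu["UploadId"]

parts = []
with open("big-file.bin", "rb") as f:
    part_number = 1
    while chunk := f.read(PART_SIZE):
        # ...and send a matching base64 SHA-256 with every part.
        checksum = base64.b64encode(hashlib.sha256(chunk).digest()).decode()
        resp = s3.upload_part(
            Bucket=BUCKET,
            Key=KEY,
            UploadId=upload_id,
            PartNumber=part_number,
            Body=chunk,
            ChecksumSHA256=checksum,
        )
        parts.append({
            "PartNumber": part_number,
            "ETag": resp["ETag"],
            "ChecksumSHA256": resp["ChecksumSHA256"],
        })
        part_number += 1

s3.complete_multipart_upload(
    Bucket=BUCKET,
    Key=KEY,
    UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
```

On the presigned-URL route, the same checksum would have to be signed into each part's URL and sent by the browser as the x-amz-checksum-sha256 header; whether a given uploader library exposes that is a separate question.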
0 answers · 0 votes · 24 views · asked 25 days ago

AWS Lambda: [Errno 13] Permission denied: '/tmp/ffmpeg'

Hi, I am trying to use moviepy in my serverless application on AWS Lambda. Since moviepy uses ffmpeg internally, I am hitting ffmpeg-related errors while executing the function.

1. I have created a Dockerfile and deployed the container image for the function. I am using an M1 (arm64) macOS machine as the build platform.
2. I have set the exec variable before importing moviepy, as shown below:
```
os.environ["IMAGEIO_FFMPEG_EXE"] = "/tmp/ffmpeg"
```
3. I have uploaded the ffmpeg executable to S3 and download it to /tmp/ffmpeg in the Lambda function handler.
4. After downloading, I change the permissions of /tmp/ffmpeg to 755.
5. I still get the error shown below:
```
"errorMessage": "[Errno 13] Permission denied: '/tmp/ffmpeg'",
"errorType": "PermissionError",
"stackTrace": [
  File "/var/task/app.py", line 65, in handler
    scaledDownFilename = scaleDown(event["filename"])
  File "/var/task/app.py", line 40, in scaleDown
    clip = mp.VideoFileClip(filename + ".mp4")
  File "/var/task/moviepy/video/io/VideoFileClip.py", line 88, in __init__
    self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
  File "/var/task/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
    infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
  File "/var/task/moviepy/video/io/ffmpeg_reader.py", line 257, in ffmpeg_parse_infos
    proc = sp.Popen(cmd, **popen_params)
  File "/var/lang/lib/python3.8/subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/var/lang/lib/python3.8/subprocess.py", line 1704, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
]
```
**Does anyone know how to resolve this?**
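Two details worth ruling out in steps 2-4 are the chmod mode literal and the ordering of the calls. A minimal sketch of the setup being described, with the S3 bucket and key as placeholders (this is not the asker's actual code):

```
# Sketch of the handler-side setup: env var set before moviepy is
# imported, binary downloaded and made executable before first use.
# Bucket and key are placeholders.
import os

import boto3

os.environ["IMAGEIO_FFMPEG_EXE"] = "/tmp/ffmpeg"  # must precede the moviepy import

import moviepy.editor as mp

s3 = boto3.client("s3")


def ensure_ffmpeg():
    if not os.path.exists("/tmp/ffmpeg"):
        s3.download_file("my-bucket", "bin/ffmpeg", "/tmp/ffmpeg")
    # Use an octal literal: os.chmod(path, 755) passes decimal 755,
    # which sets different bits than the intended rwxr-xr-x.
    os.chmod("/tmp/ffmpeg", 0o755)


def handler(event, context):
    ensure_ffmpeg()
    clip = mp.VideoFileClip("/tmp/" + event["filename"] + ".mp4")
    return {"duration": clip.duration}
```

It is also worth confirming that the ffmpeg binary matches the function's architecture: an image built on an M1 Mac defaults to arm64, and an architecture mismatch usually surfaces as Errno 8 (Exec format error) rather than 13, but it is cheap to rule out. Since the function already ships as a container image, baking ffmpeg into the image and pointing IMAGEIO_FFMPEG_EXE at that read-only path would avoid the S3 download entirely.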
1 answer · 0 votes · 78 views · asked a month ago

Using Cognito and Cloudfront to control access to user files on S3

Hi, I'm putting together a media viewer website for myself to learn how AWS works. My first step was to host a webpage (index.html) on S3, allow image/video uploads to a folder in my bucket via the AWS JavaScript SDK (v2), and have the media viewer on the page access those files directly over HTTP. I have Lambda functions that convert media formats appropriately and hold metadata in DynamoDB, which the website queries through the JavaScript SDK. This all works fine. Now I'd like to make it a bit more secure: support users who log in, give each user an individual directory within the bucket, and control access to the media files so users can only view their own files. The steps I took were the following:

1. Create a user pool and identity pool in Cognito.
2. Add a Google sign-in button, and enable user pool sign-in with it. Google requires the webpage to be served via HTTPS (not HTTP).
3. Since S3 can't serve files via HTTPS, I put the S3 bucket behind CloudFront.
4. Modify my bucket to have a user directory with a subdirectory for each Cognito identity ID, and modify the access policies so that users can only read/write their individual subdirectory and the subset of DynamoDB keyed to their identity ID. The webpage uses AWS JavaScript SDK calls to log in with Cognito, upload to S3, and access DynamoDB. It all appears to work well and seems to give me secure per-user access control.
5. Now, the hole: I want the media viewer portion of my app to access the images/media via https:// links, not via the JavaScript SDK. As currently configured, HTTPS access goes through CloudFront, and CloudFront has access to all the files in the S3 bucket. I'm trying to figure out how to make an HTTPS request via CloudFront (along with a Cognito token), and then have CloudFront inspect the token, determine the user's identity ID, and only serve that user's content if they are logged in. Does this require Lambda@Edge, or is there an easier way? I don't want to use signed URLs, because I anticipate a single user viewing hundreds of URLs at once (in a gallery view), and figure generating signed URLs will slow things down too much.
6. In the future, I may want to enable sharing of files. Could I enable that by having an entry in DynamoDB for every file, and have CloudFront check whether the user is allowed to view the file before serving it? Would this be part of the Lambda@Edge function?

Thanks
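For point 5, inspecting a Cognito token at the edge is exactly what Lambda@Edge on the viewer-request event is for (CloudFront also offers lighter-weight CloudFront Functions, though JWT handling is easier in Lambda@Edge). Below is a rough sketch of the shape; the header name, path layout, and JWT claim are all assumptions, and the decode deliberately skips signature verification, which a real deployment must perform against the user pool's JWKS:

```
# Sketch of a viewer-request Lambda@Edge handler that only serves
# objects under the caller's own prefix. Header name, path layout,
# and the claim used are assumptions; JWKS signature verification is
# omitted here and is REQUIRED before trusting any claim.
import jwt  # PyJWT, bundled into the deployment package

FORBIDDEN = {"status": "403", "statusDescription": "Forbidden"}


def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    auth = headers.get("authorization")
    if not auth:
        return FORBIDDEN
    token = auth[0]["value"].replace("Bearer ", "")

    try:
        # Placeholder decode only -- verify the signature, audience,
        # and expiry against the Cognito JWKS in real code.
        claims = jwt.decode(token, options={"verify_signature": False})
    except jwt.PyJWTError:
        return FORBIDDEN

    # Assumed key layout: user/<id>/... -- adjust to the real prefixes.
    allowed_prefix = "/user/{}/".format(claims["sub"])
    if not request["uri"].startswith(allowed_prefix):
        return FORBIDDEN

    return request  # fall through to CloudFront/S3 as usual
```

One wrinkle: the identity pool IdentityId used for the directories does not appear in user pool tokens, so the prefix check would need to key off sub (or a custom claim added with a pre-token-generation trigger) instead. The sharing scenario in point 6 could hang off the same function: when the prefix check fails, look the URI up in a DynamoDB sharing table before deciding, keeping Lambda@Edge's latency and packaging constraints in mind.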
0 answers · 0 votes · 31 views · rrrpdx · asked a month ago