
Questions tagged with AWS DeepLens



Can we please get firmware to disable Secure Boot on DeepLens?

This is unfortunately directed mostly at AWS employees, as none of the rest of us can do anything about it. My issue is that the DeepLens device is so locked down that it's impossible to run anything other than a release of a distro from 6 years ago. I've dug into this at length and discovered the following:

- It's possible to enter the firmware setup, but there's no way to disable Secure Boot (at least on my v1.0).
- The EFI executable signed by AWS and booted by the firmware is actually a unified kernel image: the Linux kernel, initrd, and command line are all built into it, so there's no way to alter the arguments used to boot the kernel.
- You also can't use `kexec` to warm-boot another kernel as a chain-load workaround. Again, Secure Boot.
- It's not possible to use the various `/sys/firmware/efi` drivers to register new Secure Boot keys.
- As far as I can tell the kernel never actually gets updated by `apt`, because the contents of `/boot` aren't what's booted. Updating the signed image would require AWS to distribute the signing key to devices so they could sign locally-built bundles. Linux 4.13.0 is from November 2018...

I have nothing against Secure Boot, but it's generally implemented so that end users can enroll their own keys or disable it entirely. The DeepLens obviously isn't a Windows-certified device, but it's worth noting that, for all the hate Secure Boot received, the Windows certification process actually requires these features to be present.

In short, it's kind of disingenuous to claim that

```
To protect the AWS DeepLens device from malicious attacks, it is configured to boot securely.
```

I guess technically it boots only the intended kernel, but that kernel is open to any exploit found since its release. To that end, can we _please_ get unlocked firmware? I don't care about the warranty; I want to be able to use the device that I supposedly own in the way I see fit. It seems that AWS isn't interested in keeping the device current, so please allow us to take that on ourselves.
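For anyone who wants to reproduce the first finding, the Secure Boot state can be read straight from efivarfs without any extra tooling. A minimal sketch, assuming a UEFI-booted Linux system with efivarfs mounted (the variable name and GUID are the standard UEFI global-variable ones; the payload layout is efivarfs's 4-byte attribute prefix followed by the variable data):

```python
# Sketch: decode the SecureBoot state as exposed by efivarfs.
# Assumption: a UEFI-booted Linux system with efivarfs mounted at the
# usual /sys/firmware/efi/efivars path.

SECUREBOOT_VAR = ("/sys/firmware/efi/efivars/"
                  "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

def secure_boot_enabled(raw: bytes) -> bool:
    """efivarfs prepends a 4-byte little-endian attribute mask to the
    variable data; for SecureBoot the data itself is one byte (1 = on)."""
    if len(raw) < 5:
        raise ValueError("unexpected efivar payload: %r" % (raw,))
    return raw[4] == 1

def read_secure_boot_state(path=SECUREBOOT_VAR):
    """Read and decode the SecureBoot variable from efivarfs."""
    with open(path, "rb") as f:
        return secure_boot_enabled(f.read())
```

On most distros `mokutil --sb-state` reports the same thing, but reading the variable directly works even on a stripped-down image.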
1 answer · 0 votes · 7 views · asked 3 months ago

DeepLens; Face Detection; send MQTT output locally

Hello, I would like to send MQTT messages locally from the DeepLens to my Raspberry Pi, in parallel to the cloud inference output. In the DeepLens recipes (trash sorting) there is an example of this, which I took as a guide to modify the Face Detection Lambda. I am struggling to get it to work; the version of the code posted below does not send any messages anymore. Could you please help me out? Here is the code I'm trying to use:

```python
#*****************************************************
#                                                    *
# Copyright 2018 Amazon.com, Inc. or its affiliates. *
# All Rights Reserved.                               *
#                                                    *
#*****************************************************
""" A sample lambda for face detection"""
from threading import Thread, Event
import os
import json
import numpy as np
import awscam
import cv2
import greengrasssdk
import time
import mo

class LocalDisplay(Thread):
    """ Class for facilitating the local display of inference results
        (as images). The class is designed to run on its own thread. In
        particular the class dumps the inference results into a FIFO
        located in the tmp directory (which lambda has access to). The
        results can be rendered using mplayer by typing:
        mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/results.mjpeg
    """
    def __init__(self, resolution):
        """ resolution - Desired resolution of the project stream """
        # Initialize the base class, so that the object can run on its own
        # thread.
        super(LocalDisplay, self).__init__()
        # List of valid resolutions
        RESOLUTION = {'1080p' : (1920, 1080), '720p' : (1280, 720),
                      '480p' : (858, 480)}
        if resolution not in RESOLUTION:
            raise Exception("Invalid resolution")
        self.resolution = RESOLUTION[resolution]
        # Initialize the default image to be a white canvas. Clients
        # will update the image when ready.
        self.frame = cv2.imencode('.jpg', 255*np.ones([640, 480, 3]))[1]
        self.stop_request = Event()

    def run(self):
        """ Overridden method that continually dumps images to the desired
            FIFO file.
        """
        # Path to the FIFO file. The lambda only has permissions to the tmp
        # directory. Pointing to a FIFO file in another directory
        # will cause the lambda to crash.
        result_path = '/tmp/results.mjpeg'
        # Create the FIFO file if it doesn't exist.
        if not os.path.exists(result_path):
            os.mkfifo(result_path)
        # This call will block until a consumer is available
        with open(result_path, 'wb') as fifo_file:
            while not self.stop_request.isSet():
                try:
                    # Write the data to the FIFO file. This call will block
                    # meaning the code will come to a halt here until a
                    # consumer is available.
                    fifo_file.write(self.frame.tobytes())
                except IOError:
                    continue

    def set_frame_data(self, frame):
        """ Method updates the image data. This currently encodes the
            numpy array to jpg but can be modified to support other encodings.
            frame - Numpy array containing the image data of the next frame
                    in the project stream.
        """
        ret, jpeg = cv2.imencode('.jpg', cv2.resize(frame, self.resolution))
        if not ret:
            raise Exception('Failed to set frame data')
        self.frame = jpeg

    def join(self):
        self.stop_request.set()

def infinite_infer_run():
    """ Entry point of the lambda function"""
    try:
        # This face detection model is implemented as single shot detector (ssd).
        model_type = 'ssd'
        output_map = {1: 'face'}
        # Create an IoT client for sending messages to the cloud.
        client = greengrasssdk.client('iot-data')
        iot_topic = '$aws/things/{}/infer'.format(os.environ['AWS_IOT_THING_NAME'])
        pi_topic = 'deeplens/infer'
        # Create a local display instance that will dump the image bytes to a FIFO
        # file so that the image can be rendered locally.
        local_display = LocalDisplay('480p')
        local_display.start()
        # The sample projects come with optimized artifacts, hence only the artifact
        # path is required.
        model_path = '/opt/awscam/artifacts/mxnet_deploy_ssd_FP16_FUSED.xml'
        # Load the model onto the GPU.
        client.publish(topic=iot_topic, payload='Loading face detection model')
        model = awscam.Model(model_path, {'GPU': 1})
        client.publish(topic=iot_topic, payload='Face detection model loaded')
        # Set the threshold for detection
        detection_threshold = 0.25
        # The height and width of the training set images
        input_height = 300
        input_width = 300
        # Do inference until the lambda is killed.
        while True:
            # Get a frame from the video stream
            ret, frame = awscam.getLastFrame()
            if not ret:
                raise Exception('Failed to get frame from the stream')
            # Resize frame to the same size as the training set.
            frame_resize = cv2.resize(frame, (input_height, input_width))
            # Run the images through the inference engine and parse the results
            # using the parser API. Note it is possible to get the output of
            # doInference and do the parsing manually, but since it is an ssd
            # model, a simple API is provided.
            parsed_inference_results = model.parseResult(
                model_type, model.doInference(frame_resize))
            # Compute the scale in order to draw bounding boxes on the full
            # resolution image.
            yscale = float(frame.shape[0]) / float(input_height)
            xscale = float(frame.shape[1]) / float(input_width)
            # Dictionary to be filled with labels and probabilities for MQTT
            cloud_output = {}
            # Get the detected faces and probabilities
            for obj in parsed_inference_results[model_type]:
                if obj['prob'] > detection_threshold:
                    # Add bounding boxes to full resolution frame
                    xmin = int(xscale * obj['xmin'])
                    ymin = int(yscale * obj['ymin'])
                    xmax = int(xscale * obj['xmax'])
                    ymax = int(yscale * obj['ymax'])
                    # See https://docs.opencv.org/3.4.1/d6/d6e/group__imgproc__draw.html
                    # for more information about the cv2.rectangle method.
                    # Method signature: image, point1, point2, color, and thickness.
                    cv2.rectangle(frame, (xmin, ymin), (xmax, ymax),
                                  (255, 165, 20), 10)
                    # Amount to offset the label/probability text above the
                    # bounding box.
                    text_offset = 15
                    # See https://docs.opencv.org/3.4.1/d6/d6e/group__imgproc__draw.html
                    # for more information about the cv2.putText method.
                    # Method signature: image, text, origin, font face, font
                    # scale, color, and thickness
                    cv2.putText(frame, '{:.2f}%'.format(obj['prob'] * 100),
                                (xmin, ymin - text_offset),
                                cv2.FONT_HERSHEY_SIMPLEX, 2.5, (255, 165, 20), 6)
                    # Store label and probability to send to cloud
                    cloud_output[output_map[obj['label']]] = obj['prob']
            # Set the next frame in the local display stream.
            local_display.set_frame_data(frame)
            # Send results to the cloud
            client.publish(topic=iot_topic, payload=json.dumps(cloud_output))
            # Send the top k results to the Raspberry Pi via MQTT
            pi_output = {}
            pi_output[output_map[obj['label']]] = obj['prob']
            client.publish(topic=pi_topic, payload=json.dumps(pi_output))
    except Exception as ex:
        client.publish(topic=iot_topic,
                       payload='Error in face detection lambda: {}'.format(ex))

infinite_infer_run()
```

Essentially I added the last few lines and also set up the IoT part with certificates and policies. Thank you for any help or hint.
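One detail worth checking in the posted code: the `pi_output` block sits after the detection `for` loop but still references `obj`. On any frame where no face clears the threshold, `obj` is undefined, so the lambda raises `NameError`, falls into the `except` handler, and stops publishing entirely, which may explain why no messages arrive anymore; otherwise only the last detection is sent. A hedged sketch of a helper (name and shape are illustrative, not part of the DeepLens SDK) that builds the payload from the parsed results instead:

```python
def build_payload(parsed_results, model_type, output_map, threshold):
    """Collect every detection above the threshold into a single
    {label: probability} dict suitable for json.dumps() + publish."""
    payload = {}
    for obj in parsed_results[model_type]:
        if obj['prob'] > threshold:
            label = output_map[obj['label']]
            # Keep the highest probability if a label appears more than once.
            payload[label] = max(obj['prob'], payload.get(label, 0.0))
    return payload
```

Inside the `while` loop this would become `pi_output = build_payload(parsed_inference_results, model_type, output_map, detection_threshold)` followed by the `client.publish(topic=pi_topic, ...)` call, publishing an empty dict (or skipping the publish) when nothing is detected rather than crashing.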
0 answers · 0 votes · 2 views · asked 4 months ago

Error while restoring

Hi guys! Last week I had a problem with my DeepLens: suddenly it was not able to boot, and therefore I cannot use it. So I followed the procedure to perform a factory reset, but once the Ubuntu live image had finished booting, no terminal window popped up. So I manually went to where the NTFS partition is located and executed `./usb_flash.sh`; the output is the following:

```
ubuntu@ubuntu:/media/ubuntu/Image$ ./usb_flash.sh
Sat Nov 7 20:41:23 UTC 2020 USB flash version : V0.7
Sat Nov 7 20:43:07 UTC 2020 Check Bios version match?
Sat Nov 7 20:43:07 UTC 2020 Current BIOS version is 1.0.15 match require
Sat Nov 7 20:43:07 UTC 2020 This is fuse board match require
Sat Nov 7 20:43:07 UTC 2020 USB flash version : V0.7
Sat Nov 7 20:43:07 UTC 2020 Image file /media/ubuntu/Image/image_deepcam_19WW19.5_tpm.img exists.
Sat Nov 7 20:43:07 UTC 2020 Image MD5 : e6bab8e1c0b35d39433feaaaa85b002a.
Sat Nov 7 20:43:07 UTC 2020 PCR_DAT_PATH : /media/ubuntu/Image/APL_1.0.15_19WW19.5_pcr_fuse.dat
Sat Nov 7 20:43:07 UTC 2020 SIGN_PUBLIC_KEY_DER_PATH : /media/ubuntu/Image/pubkey.der.aws
Sat Nov 7 20:43:07 UTC 2020 RANDOM_KEY_PATH :
Sat Nov 7 20:43:07 UTC 2020 TPM_SCRIPT_PATH : /media/ubuntu/Image/seal_and_luksChangeKey.sh
Sat Nov 7 20:43:07 UTC 2020 OTG Function : 0
Sat Nov 7 20:43:07 UTC 2020 Resize Encrype Partition : 1
System image will auto recovery after 10 sec....
Are you sure you want to recovery system image? (y/n)
Sat Nov 7 20:43:17 UTC 2020 Recovery system image...now
Sat Nov 7 20:43:17 UTC 2020 eMMC Path :
Sat Nov 7 20:43:17 UTC 2020 Root Partition Path : p3
Sat Nov 7 20:43:17 UTC 2020 OTG Partition Path : p4
Sat Nov 7 20:43:17 UTC 2020 Can not find GPT partition table
```

It stops there; any help is welcome! Best, Jose
1 answer · 0 votes · 4 views · asked 2 years ago

Deployment of custom model to device failed

Hi, I'm trying to deploy a custom model trained with SageMaker to my DeepLens device. The model is based on MXNet ResNet-50 and makes good predictions when deployed on a SageMaker endpoint. However, when deploying to DeepLens we get errors when the lambda function tries to optimize the model, and no inferences are made by the device. The lambda log shows this (errors reported for mo.py 161/173 — what are these?):

```
[2020-02-09T19:00:28.525+02:00][ERROR]-mo.py:161,
[2020-02-09T19:00:30.088+02:00][ERROR]-mo.py:173,
[2020-02-09T19:00:30.088+02:00][INFO]-IoTDataPlane.py:115,Publishing message on topic "$aws/things/deeplens_rFCSPJQhTGS5Y9NGkJIz8g/infer" with Payload "Loading action cat-dog model"
[2020-02-09T19:00:30.088+02:00][INFO]-Lambda.py:92,Invoking Lambda function "arn:aws:lambda:::function:GGRouter" with Greengrass Message "Loading action cat-dog model"
[2020-02-09T19:00:30.088+02:00][INFO]-ipc_client.py:142,Posting work for function [arn:aws:lambda:::function:GGRouter] to http://localhost:8000/2016-11-01/functions/arn:aws:lambda:::function:GGRouter
[2020-02-09T19:00:30.099+02:00][INFO]-ipc_client.py:155,Work posted with invocation id [158058ef-7386-49c5-791a-9c61bd1b9951]
[2020-02-09T19:00:30.109+02:00][INFO]-IoTDataPlane.py:115,Publishing message on topic "$aws/things/deeplens_rFCSPJQhTGS5Y9NGkJIz8g/infer" with Payload "Error in cat-dog lambda: Model path is invalid"
[2020-02-09T19:00:30.11+02:00][INFO]-Lambda.py:92,Invoking Lambda function "arn:aws:lambda:::function:GGRouter" with Greengrass Message "Error in cat-dog lambda: Model path is invalid"
```

It seems to me that the model optimizer is failing for some reason and not producing the optimized output, but we cannot understand the errors. Is there somewhere we can decipher them? BTW, the device is installed with all the latest updates and the MXNet version on the device is 1.4.0. Many thanks

Edited by: Mike9753 on Feb 10, 2020 1:02 AM
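The "Model path is invalid" payload usually means the optimizer never wrote its artifact, so `awscam.Model` was handed a path that doesn't exist. When sifting through a long Greengrass log for the root cause, it can help to pull out just the ERROR-level entries first. A small sketch, assuming the log format shown in the question (`[timestamp][LEVEL]-source,message`); the helper name is illustrative:

```python
import re

# Each Greengrass log entry looks like:
#   [2020-02-09T19:00:28.525+02:00][ERROR]-mo.py:161,optional message
LOG_LINE = re.compile(
    r'\[(?P<ts>[^\]]+)\]\[(?P<level>[A-Z]+)\]-(?P<src>[^,]+),(?P<msg>.*)')

def error_entries(log_text):
    """Return (source, message) pairs for every ERROR-level entry."""
    errors = []
    for line in log_text.splitlines():
        m = LOG_LINE.match(line.strip())
        if m and m.group('level') == 'ERROR':
            errors.append((m.group('src'), m.group('msg')))
    return errors
```

Separately, if memory serves, the DeepLens inference lambdas invoke the optimizer as `error, model_path = mo.optimize(model_name, input_width, input_height)`; checking the returned error code before constructing `awscam.Model(model_path, ...)` would surface the optimizer failure directly instead of the generic "Model path is invalid" message (treat that call signature as an assumption and verify it against your lambda's code).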
2 answers · 0 votes · 2 views · asked 2 years ago