AWS Rekognition and SageMaker running locally to automate annotation based on video streaming


Hi everyone, I have a project in a manufacturing area that I would like to improve by adding computer vision capability. I already work with AWS services, but I have no experience yet with Rekognition and SageMaker, so I will explain my application below and would like to know whether it is feasible and which key points I need to focus on and study. Today in a manufacturing plant, any time a machine stops, the worker annotates the downtime cause on a tablet (running my app) using predefined options. I'd like to implement computer vision to automate this annotation, or at least suggest the most probable cause. I'm expecting the application to train and improve itself from the historical data, which consists of: the downtime cause annotated by the operator + the period of time when that downtime occurred + the video. I plan to run this on an edge device through Greengrass with a TPU processor.

Thanks in advance.

3 Answers

Amazon Monitron could be a good addition to consider in this scenario. Monitron is an IoT platform that includes a suite of sensors, gateways, and machine learning algorithms that help industrial customers monitor the health and performance of their equipment. Monitron can be used to automatically detect equipment anomalies and predict when maintenance is required. There is also a whole suite of other AWS solutions and services for industrial use cases.

In your case, Monitron could be a complementary solution to implement alongside Amazon Rekognition and SageMaker: Monitron collects the sensor data, Rekognition processes the video data, and SageMaker trains the machine learning models. Or you could use Monitron by itself, as it is an end-to-end system.

Knowledge of AWS services such as Amazon Rekognition, SageMaker, and Greengrass would help. For studying, https://workshops.aws/ has some good hands-on training on these topics. It would also be helpful to study Monitron in more detail and understand how to integrate it with the other AWS services you plan to use.

Here's a blog that talks about using Monitron to reduce unplanned downtime: https://aws.amazon.com/blogs/aws/amazon-monitron-a-simple-cost-effective-service-enabling-predictive-maintenance/

I would also recommend getting in touch with your AWS account team. They may be able to provide some time with a Solutions Architect to dive a little deeper into the architecture with you.

AWS
answered a year ago

If you can have connectivity, then Amazon Rekognition Custom Labels can be a good start: sample frames from the live video stream at the annotation event and classify them. This would be the simplest solution, but with less flexibility in terms of personalizing your model.
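
If it helps, here is a minimal sketch of that pattern with boto3, assuming you already have a trained Rekognition Custom Labels model and capture a frame when the machine stops. The project version ARN, region, and file name are placeholders, and the model version must already be running (via start_project_version) before you call it:

```python
# Sketch: classify a single sampled frame with Rekognition Custom Labels (boto3).
# MODEL_ARN, the region, and the frame path are placeholders for your own setup.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Placeholder ARN of a trained, *running* Custom Labels project version.
MODEL_ARN = "arn:aws:rekognition:us-east-1:123456789012:project/downtime-causes/version/1/1234567890123"

def classify_frame(frame_path: str, min_confidence: float = 60.0):
    """Send one captured video frame (JPEG/PNG bytes) to the Custom Labels model."""
    with open(frame_path, "rb") as f:
        image_bytes = f.read()

    response = rekognition.detect_custom_labels(
        ProjectVersionArn=MODEL_ARN,
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    # Each label carries a Name and a Confidence; the top one is the suggested downtime cause.
    return sorted(response["CustomLabels"], key=lambda l: l["Confidence"], reverse=True)

labels = classify_frame("frame_at_downtime.jpg")
print(labels[0]["Name"] if labels else "no confident match")
```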

Otherwise, if you want your model to work offline, or you want to train a custom computer vision model, use Greengrass for the inference but leverage the power of SageMaker + GPU ML instances for training your model in the cloud. For the latter approach, try the Quick Starts in SageMaker Studio, for example this image classification sample notebook: https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/imageclassification_mscoco_multi_label/Image-classification-multilabel-lst.ipynb Note that in this case you can leverage the SageMaker framework to train faster and optimize the hyperparameters using built-in tools like the "hyperparameter tuning" feature: https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-how-it-works.html
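
For reference, a rough sketch of what training the built-in image-classification algorithm plus hyperparameter tuning could look like with the SageMaker Python SDK; the role ARN, bucket paths, instance type, class counts, and hyperparameter ranges are placeholders you would replace with your own values:

```python
# Sketch: train the SageMaker built-in image-classification algorithm and tune its learning rate.
# Bucket names, role ARN, and hyperparameter values are illustrative placeholders.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder IAM role

image_uri = image_uris.retrieve("image-classification", session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",           # GPU training instance
    output_path="s3://my-bucket/models/",     # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(
    num_classes=8,                # number of downtime causes
    num_training_samples=5000,    # labelled frames available
    image_shape="3,224,224",
    epochs=20,
)

inputs = {
    "train": TrainingInput("s3://my-bucket/frames/train/", content_type="application/x-recordio"),
    "validation": TrainingInput("s3://my-bucket/frames/validation/", content_type="application/x-recordio"),
}

# Optional: let SageMaker search the learning rate instead of fixing it.
tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:accuracy",
    hyperparameter_ranges={"learning_rate": ContinuousParameter(0.001, 0.1)},
    max_jobs=6,
    max_parallel_jobs=2,
)
tuner.fit(inputs)   # or estimator.fit(inputs) for a single training run
```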

A third option could be the Amazon Lookout for Vision Edge Agent component for Greengrass, if you only want to detect whether something is defective or anomalous: https://docs.aws.amazon.com/greengrass/v2/developerguide/lookout-for-vision-edge-agent-component.html

AWS
answered a year ago

Your project idea sounds quite promising! Implementing computer vision in a manufacturing environment to automate downtime cause annotation using AWS services like Rekognition and SageMaker is indeed feasible. Here are key points and considerations for your implementation:

  1. Understanding AWS Rekognition and SageMaker: Familiarize yourself with AWS Rekognition, which offers a range of computer vision capabilities like object detection and image and video analysis, and SageMaker, a platform for building, training, and deploying machine learning models. Learn about the features and limitations of Rekognition for video analysis and how SageMaker can be used for custom model training.

  2. Data Collection and Preprocessing: Gather historical data containing downtime causes annotated by operators, timestamps of downtime occurrences, and the related videos. Preprocess the data into a format suitable for model training, including proper labeling of downtime causes and aligning video segments with the corresponding annotations (see the frame-extraction sketch after this list).

  3. Model Training and Auto-Improvement: Use SageMaker to develop a custom machine learning model that can analyze the videos and correlate downtime causes with visual cues. Implement mechanisms for auto-training the model with new data to improve its accuracy over time. This could involve periodic retraining using new annotations and video data collected from the manufacturing plant (see the retraining sketch after this list).

  4. Integration with Edge Computing (AWS Greengrass): Explore AWS Greengrass for deploying your computer vision model to the edge (such as on local devices in the manufacturing plant) to enable real-time inference without relying solely on cloud services. Use compatible hardware such as a TPU to speed up model inference at the edge (see the edge-inference sketch after this list).

  5. Considerations and Challenges: Ensure data privacy and security measures are in place, especially when dealing with video and sensitive manufacturing data. Address potential challenges related to varying lighting conditions, camera angles, or occlusions in the manufacturing environment that might affect the model's performance.

  6. Continuous Monitoring and Improvement: Establish mechanisms for monitoring the model's performance in real-world scenarios and collecting feedback to further refine and optimize the model's predictions.
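
A minimal sketch for point 2, assuming the annotation export is a CSV listing the video file, the downtime start time and duration in seconds, and the cause; the field names and the one-frame-per-second sampling rate are assumptions, not part of the original question:

```python
# Sketch: cut labelled training frames out of recorded video using the operator's
# annotation (cause + downtime start time + duration). CSV column names are assumed.
import csv
import cv2  # pip install opencv-python
from pathlib import Path

def extract_frames(video_path, downtime_start_s, duration_s, label, out_dir, fps_sample=1):
    """Save one frame per second from [start, start+duration] into a per-label folder."""
    out = Path(out_dir) / label
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    t = downtime_start_s
    while t < downtime_start_s + duration_s:
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000)   # seek to the annotated moment
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(str(out / f"{Path(video_path).stem}_{int(t)}s.jpg"), frame)
        t += 1.0 / fps_sample
    cap.release()

# Assumed annotation export columns: video_file, start_seconds, duration_seconds, cause
with open("annotations.csv") as f:
    for row in csv.DictReader(f):
        extract_frames(row["video_file"], float(row["start_seconds"]),
                       float(row["duration_seconds"]), row["cause"], "dataset/")
```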
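
A minimal sketch for point 3, one way to trigger periodic retraining: count newly labelled frames landing in S3 and relaunch the SageMaker training job (for example the built-in image-classification Estimator sketched in an earlier answer) once a threshold is reached. The bucket, prefix, and threshold are placeholders:

```python
# Sketch: simple retraining trigger, meant to run on a schedule (cron / EventBridge).
# Bucket, prefix, and threshold are placeholders; `estimator` and `inputs` would come
# from your own SageMaker training setup.
import boto3

s3 = boto3.client("s3")
BUCKET, NEW_PREFIX, THRESHOLD = "my-bucket", "frames/incoming/", 500

def count_new_samples():
    """Count objects under the prefix where newly annotated frames are uploaded."""
    paginator = s3.get_paginator("list_objects_v2")
    return sum(page.get("KeyCount", 0)
               for page in paginator.paginate(Bucket=BUCKET, Prefix=NEW_PREFIX))

def maybe_retrain(estimator, inputs):
    """Retrain only when enough new operator annotations have accumulated."""
    n = count_new_samples()
    if n >= THRESHOLD:
        print(f"{n} new labelled frames - launching retraining job")
        estimator.fit(inputs)   # produces a new model artifact to redeploy to the edge
    else:
        print(f"only {n} new frames - skipping retraining")
```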
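
A minimal sketch for point 4, showing on-device inference against a model compiled for a Coral Edge TPU, the kind of script you could package as a Greengrass v2 component. The model file, label file, camera index, and uint8 input assumption are placeholders for your own exported model:

```python
# Sketch: local inference on an Edge TPU using tflite_runtime; assumes a quantized model
# compiled for the Coral Edge TPU and a labels.txt with one downtime cause per line.
import numpy as np
import cv2
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="downtime_classifier_edgetpu.tflite",                  # placeholder model
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],  # Coral TPU delegate
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

labels = open("labels.txt").read().splitlines()

cap = cv2.VideoCapture(0)  # plant camera (index is a placeholder)
ok, frame = cap.read()
if ok:
    # Resize to the model's input shape; note OpenCV gives BGR, so match whatever
    # channel order and scaling your exported model actually expects.
    h, w = inp["shape"][1], inp["shape"][2]
    resized = cv2.resize(frame, (w, h))
    interpreter.set_tensor(inp["index"], np.expand_dims(resized, 0).astype(np.uint8))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    print("suggested cause:", labels[int(np.argmax(scores))])
cap.release()
```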

answered 3 months ago
