Questions tagged with Image Processing & Analytics
AWS rekognition for Identity verification
I want to use Rekognition to build an identity verification workflow like the one in https://docs.aws.amazon.com/rekognition/latest/dg/identity-verification-tutorial.html. I need to validate a selfie against an ID card. My problem is how to recognize that the selfie is not a fake/spoofed image, e.g. an already-taken photo submitted as the selfie, or the ID card used in both images. Specifically, is there a way to perform Presentation Attack Detection (PAD)?
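One angle worth noting: `CompareFaces` only handles the selfie-vs-ID-card match; spoof detection is the job of the separate Rekognition Face Liveness APIs (`CreateFaceLivenessSession` / `GetFaceLivenessSessionResults`, driven by an Amplify front-end component). Below is a minimal sketch of the matching half only, assuming AWS credentials are configured; the 90.0 threshold and function names are illustrative assumptions, not values from the tutorial.

```python
def decide_match(similarity: float, threshold: float = 90.0) -> bool:
    """Business rule: accept a selfie/ID pair only at or above a similarity threshold.

    The 90.0 default is an assumption to tune, not an AWS recommendation.
    """
    return similarity >= threshold


def verify_identity(selfie_bytes: bytes, id_card_bytes: bytes, threshold: float = 90.0) -> bool:
    """Compare a selfie against the face on an ID card with Rekognition CompareFaces.

    Note: this does NOT detect presentation attacks; pair it with the
    Rekognition Face Liveness session APIs for PAD.
    """
    import boto3  # imported lazily so the module loads without the SDK installed

    client = boto3.client("rekognition")
    resp = client.compare_faces(
        SourceImage={"Bytes": selfie_bytes},
        TargetImage={"Bytes": id_card_bytes},
        SimilarityThreshold=threshold,
    )
    matches = resp.get("FaceMatches", [])
    return bool(matches) and decide_match(matches[0]["Similarity"], threshold)
```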
Programmatic CompareFaces gives far different results than demo UI
When using the Python boto3 `compare_faces` function with the same images, I get far different results than when I compare them in the demo UI — for example, ~83 similarity / ~99 confidence with the Python code versus ~1.9 similarity / ~99 confidence in the demo UI. Why? I thought it might have to do with the QualityFilter, but it can't be changed, per what the documentation says: `To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.` Please let me know what I am missing. See screenshots: demo UI ![demo ui](https://repost.aws/media/postImages/original/IMxW_odd05Tcey02F2CHcJeg) programmatically ![programmatically](https://repost.aws/media/postImages/original/IMwYPo3o0HRR2LamQvIK7iiw) Thanks
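One thing worth checking: `CompareFaces` does accept a `QualityFilter` parameter directly ('NONE' | 'AUTO' | 'LOW' | 'MEDIUM' | 'HIGH') — the quoted face-model-version note concerns collection-based operations, and CompareFaces takes no collection. A sketch that sets the filter explicitly and lowers `SimilarityThreshold` to 0 so every candidate match is visible (helpful when comparing against whatever defaults the console demo applies); the file paths are placeholders:

```python
def summarize_matches(face_matches):
    """Reduce a CompareFaces FaceMatches list to (similarity, confidence) pairs,
    rounded to one decimal for easy side-by-side comparison with the console."""
    return [(round(m["Similarity"], 1), round(m["Face"]["Confidence"], 1)) for m in face_matches]


def compare(source_path, target_path, quality_filter="NONE"):
    """Call CompareFaces with an explicit QualityFilter and a zero threshold so
    all candidate matches are returned (assumes AWS credentials are configured)."""
    import boto3  # imported lazily so the module loads without the SDK installed

    client = boto3.client("rekognition")
    with open(source_path, "rb") as s, open(target_path, "rb") as t:
        resp = client.compare_faces(
            SourceImage={"Bytes": s.read()},
            TargetImage={"Bytes": t.read()},
            SimilarityThreshold=0.0,
            QualityFilter=quality_filter,  # 'NONE' | 'AUTO' | 'LOW' | 'MEDIUM' | 'HIGH'
        )
    return summarize_matches(resp["FaceMatches"])
```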
Workflow information needed for AWS Lambda image processing
We have an EC2 instance that handles our image processing using PHP and ImageMagick. Processing 5,000 images takes about 5 hours, so I've been looking at implementing Lambda with Sharp. There are also times we have 10-15k images to process, but that's rare. Currently, when an export is triggered by a user, our steps are: 1) Retrieve each image using its URL and save it to a folder. The images are very big. 2) Resize each image to less than 1500x1500 and less than 600 KB; store it in a secondary folder. 3) Create a CSV file with data for each image, stored in the secondary folder. 4) TAR and GZ the resized photos and CSV, move the tar.gz to an export directory, and update the database. From my reading, the image processing is best handled in Lambda. However, should the retrieval of images still happen on EC2, saving to an S3 bucket that triggers the Lambda? If so, how do I know when all the processing is done, so I can zip, move, and delete all images and folders from the bucket? Or is it better to send each URL via API to Lambda, which processes and saves the image? Can you hit the API 5,000+ times and have Lambda scale? The former solution sounds more reasonable. Anyway, I'm looking for anyone with experience in this to comment. I would appreciate some answers.
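A sketch of step 2 as an S3-triggered Lambda (Python/Pillow here for illustration, though the same shape applies to Node/Sharp). The bucket layout, `resized/` prefix, and JPEG quality are assumptions; completion tracking for the final TAR step is usually handled separately, e.g. with Step Functions or a per-export counter in DynamoDB:

```python
def fit_within(width: int, height: int, max_side: int = 1500):
    """Scale (width, height) down proportionally so neither side exceeds max_side;
    images already small enough are left unchanged."""
    scale = min(1.0, max_side / max(width, height))
    return (int(width * scale), int(height * scale))


def handler(event, context):
    """Hypothetical S3-triggered Lambda: fetch the uploaded original, resize it,
    and write the result under a 'resized/' prefix (names are assumptions).
    Pillow would be bundled as a Lambda layer."""
    import io
    import boto3
    from PIL import Image

    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        img = Image.open(io.BytesIO(body))
        img = img.resize(fit_within(*img.size))
        out = io.BytesIO()
        # Tune quality downward if outputs must stay under 600 KB
        img.save(out, format="JPEG", quality=80, optimize=True)
        s3.put_object(Bucket=bucket, Key=f"resized/{key}", Body=out.getvalue())
```

On the fan-out question: S3 event notifications will invoke one Lambda per uploaded object and scale to thousands of concurrent executions (subject to the account concurrency limit), which is generally simpler than 5,000+ direct API calls.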
Choosing an AMI that suits our project
We are working on automation of CCTV camera videos (RTSP links) using image-processing techniques such as background subtraction, contour detection, and annotation with opencv-python, across 80 sites, so we have to run 80 Python scripts in parallel. I would like to know which AMI suits this project without any drop in output. The outputs are images in JPG format, and the code would run 24x7. Kindly suggest the best AMI so that we can recommend it to our management for purchase.
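For sizing, the per-site workload looks roughly like the sketch below (the `min_area` threshold, output directory, and reconnect handling are assumptions to tune). Note that the AMI choice matters less than the instance type: this pipeline is CPU-bound, so benchmarking one stream's CPU usage and multiplying by 80 is the practical way to pick hardware.

```python
def significant_contour_areas(areas, min_area=500.0):
    """Drop contour areas below a noise threshold (min_area is an assumption to tune)."""
    return [a for a in areas if a >= min_area]


def process_stream(rtsp_url: str, out_dir: str = "frames"):
    """Sketch of one site's loop: background subtraction + contour detection with
    OpenCV. In practice, run one such process (or container) per RTSP feed."""
    import os
    import cv2

    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(rtsp_url)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # a 24x7 deployment needs reconnect logic here
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        areas = significant_contour_areas([cv2.contourArea(c) for c in contours])
        if areas:  # motion detected: annotate and save a JPEG
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imwrite(os.path.join(out_dir, f"frame_{frame_no:08d}.jpg"), frame)
        frame_no += 1
```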
The image cannot be displayed because it contains errors
Hi, I'm working with saving and displaying images in an S3 bucket. I am on a Mac and the images show fine on the Mac. I am able to upload many images to the bucket and then display them using a pre-signed URL. All good... But then I have some other varied images, such as .jpg files, that look fine on the Mac and seem to upload OK, yet do not display from S3 using the pre-signed URL. When viewed in Mac Safari, Chrome, or Firefox I get the broken-image symbol. Firefox also says: The image "https://xxxxxxxxxx" cannot be displayed because it contains errors. Someone suggested that the original file creation might have been strange in some way, such that the Mac can interpret the image but S3 cannot serve it successfully. Possibly this is a cross-platform Windows/Mac/Linux image issue? Test: I took one of the .jpg images that did not show up from S3, opened it in Preview on the Mac, and exported it as a .jpg under a different name. Then I uploaded this new version, and that did seem to fix the problem, because it now displays correctly from S3. However, for what I'm doing, I don't want to have to export and re-save every image in order to get it to S3. Q: Does anybody have any ideas as to why I am getting errors when trying to display some images from S3, and how to fix this? Quick update: in the Mac terminal I tried `file -I ~/Desktop/test.jpg` and surprisingly it came back as image/heic even though the file had a .jpg suffix... Any idea how to get S3 to serve "HEIC files"? Thanks, Dave
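The `file -I` result is the key clue: S3 serves bytes verbatim, so the browser receives HEIC data labelled as JPEG and refuses to render it (macOS Preview decodes HEIC natively, which is why it looks fine locally). A sketch that detects the real type from magic bytes before upload, so mislabelled files can be converted (e.g. with `pillow-heif`) or rejected; the brand list covers the common HEIC/HEIF variants:

```python
def sniff_image_type(data: bytes) -> str:
    """Identify an image by its magic bytes rather than its file extension,
    so a '.jpg' that is really HEIC can be caught before uploading to S3."""
    if data[:3] == b"\xff\xd8\xff":
        return "image/jpeg"
    if data[:8] == b"\x89PNG\r\n\x1a\n":
        return "image/png"
    # ISO-BMFF container: 'ftyp' at offset 4, a HEIC/HEIF brand at offset 8
    if data[4:8] == b"ftyp" and data[8:12] in (b"heic", b"heix", b"mif1", b"msf1"):
        return "image/heic"
    return "application/octet-stream"
```

Browsers other than Safari generally cannot display HEIC at all, so the practical fix is converting such files to JPEG server-side rather than adjusting S3's Content-Type.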
Is one supposed to grayscale and brightness/contrast-process the image before sending it to Textract?
Textract's results on recognizing basic arithmetic seem to degrade with color. This series of images shows Textract **failing unusually in all cases except the one** where the image has been both grayscaled and brightness/contrast adjusted (50/50 and 25/25): - [unedited image from the camera](https://i.gyazo.com/0c6d8126dff5269dbe089a090a9e9d26.png) FAIL - [brightness/contrast applied without grayscale](https://gyazo.com/8e3cc523552449ff54b9ed8fdbe6594f) FAIL - [grayscale](https://gyazo.com/8e3cc523552449ff54b9ed8fdbe6594f) FAIL - [grayscale with brightness/contrast](https://i.gyazo.com/22269131293c8e7b50c7aee7b998554c.png) finally! Is one supposed to grayscale the image before sending it to Textract? Should one also apply brightness/contrast? I assume Textract was trained on grayscale images — if so, shouldn't the service automatically convert input images to grayscale?
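If preprocessing client-side turns out to be necessary, the edit that worked above can be reproduced with Pillow; a minimal sketch, assuming the 50/50 values map onto Pillow's multiplicative enhancement factors (an approximation of GIMP-style percentage sliders, so treat the mapping as tunable):

```python
def percent_to_factor(percent: float) -> float:
    """Map an editor-style brightness/contrast percentage (+50) onto a Pillow
    ImageEnhance factor (1.5). An assumed mapping, not an exact equivalence."""
    return 1.0 + percent / 100.0


def preprocess_for_textract(in_path: str, out_path: str,
                            brightness: float = 50.0, contrast: float = 50.0):
    """Grayscale + brightness/contrast boost before calling Textract, mirroring
    the 50/50 edit that succeeded in the question. Tune per camera/lighting."""
    from PIL import Image, ImageEnhance

    img = Image.open(in_path).convert("L")  # grayscale
    img = ImageEnhance.Brightness(img).enhance(percent_to_factor(brightness))
    img = ImageEnhance.Contrast(img).enhance(percent_to_factor(contrast))
    img.save(out_path)
```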
AI used for recognizing emojis on wallpaper
I want to build an app that recognizes which emojis have been used on a wallpaper. So, for instance, the app will receive an input image like [this](https://i.stack.imgur.com/5wRQm.png) and on output should return an array of names of the recognized emojis: ``` [ "Smiling Face with Sunglasses", "Grinning Face with Smiling Eyes", "Kissing Face with Closed Eyes" ] ``` Of course, the names of these emojis will come from the filenames of the training images. For example, [this](https://i.stack.imgur.com/BaEGG.png) file will be called `Grinning_Face_with_Smiling_Eyes.jpg`. I would like to use `AWS Rekognition Custom Labels`, but it requires a minimum of 10 images of each emoji for training. As you might guess, I can only provide one image per emoji, because there are no more variants; they are 2D ;) Now my question is: What should I do? How can I work around this requirement? Which service should I choose? On Stack Overflow I read that I should, for instance, rotate each image 12 times by 30°, or crop the emoji in half. I have done that, but precision is very low — around `0.3`. PS. In the real business case, instead of emojis there are book covers that the AI has to recognize, again with only one 2D photo per book cover.
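With exactly one reference image per class, classic template or feature matching often works better than training a classifier at all — no 10-image minimum applies. A sketch using OpenCV template matching (the 0.8 threshold is an assumption to tune; this assumes the emojis appear at roughly the template's scale, otherwise add a scale pyramid or switch to ORB feature matching, which also suits photographed book covers):

```python
def label_from_filename(filename: str) -> str:
    """Turn 'Grinning_Face_with_Smiling_Eyes.jpg' into 'Grinning Face with Smiling Eyes'."""
    import os
    return os.path.splitext(os.path.basename(filename))[0].replace("_", " ")


def find_emojis(wallpaper_path: str, template_paths, threshold: float = 0.8):
    """One-reference-image-per-class recognition via normalized template matching.
    Returns the labels of all templates whose best match clears the threshold."""
    import cv2

    scene = cv2.imread(wallpaper_path, cv2.IMREAD_GRAYSCALE)
    found = []
    for path in template_paths:
        template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        if max_val >= threshold:
            found.append(label_from_filename(path))
    return found
```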