Questions tagged with Amazon SageMaker Ground Truth


Custom Post Annotation Lambda Function for Custom Labeling Job

Hello, I have implemented a post-annotation Lambda function for my SageMaker Ground Truth custom job. After the annotations are finished, the consolidated results are saved in a subdirectory called *iteration_X* under *annotations/consolidated-annotation/consolidation-response/*. However, the outcome of the annotations is never successful, and in the log of the post-annotation Lambda function I always receive this type of error:

```
{
    "labeling-job-name": "labeling-job-full-dataset-test-giusy-10",
    "event-name": "ANNOTATION_CONSOLIDATION_LAMBDA_SCHEMA_MATCHING_FAILED",
    "event-log-message": "ERROR: Annotation consolidation Lambda response did not match expected data format for line 1."
}
```

Based on this guide (*https://docs.aws.amazon.com/id_id/sagemaker/latest/dg/sms-custom-templates-step3-lambda-requirements.html*) I made sure that my Lambda function returns:

RESPONSE:

```
[
  {
    "datasetObjectId": "1",
    "consolidatedAnnotation": {
      "content": {
        "annotations": {
          "relations": [
            { "subj": "CW", "predicate": "adjust", "obj": "key" },
            { "subj": "key", "predicate": "with", "obj": "right_hand" },
            { "subj": "key", "predicate": "on", "obj": "lock" }
          ],
          "groundings": {
            "pre_frame": [
              { "object": "right_hand", "left": 776.5, "top": 219.5, "width": 282.52, "height": 246.5 },
              { "object": "lock", "left": 716.4, "top": 255.6, "width": 93.60000000000002, "height": 111.6 }
            ],
            "pnr_frame": [
              { "object": "right_hand", "left": 974.16, "top": 275.14, "width": 287.21, "height": 215.84 },
              { "object": "lock", "left": 914.4, "top": 291.6, "width": 97.20000000000005, "height": 90 }
            ],
            "post_frame": [
              { "object": "right_hand", "left": 858.58, "top": 240.54, "width": 316.28, "height": 209.37 },
              { "object": "lock", "left": 741.6, "top": 237.6, "width": 61.19999999999993, "height": 169.20000000000002 }
            ]
          },
          "timestamp": "0",
          "clip_uid": "undefined"
        }
      }
    }
  }
]
```

I can't figure out how to avoid this type of error and make the annotations go through when the job is finished.
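For reference, the documented response shape from the linked guide can be sketched as a minimal post-annotation handler. This is only a sketch under assumptions: the `datasetObjects` field is a placeholder for data a real function would read from S3 via `event["payload"]["s3Uri"]`, and the exact keys required under `content` depend on the job configuration.

```python
# Minimal sketch of a Ground Truth post-annotation Lambda, based on the
# response format documented at sms-custom-templates-step3-lambda-requirements.
# ASSUMPTION: "datasetObjects" is a stand-in for the annotations a real
# function would download from event["payload"]["s3Uri"] and consolidate.

def lambda_handler(event, context):
    # Ground Truth passes the job's label attribute name in the event.
    label_attribute_name = event["labelAttributeName"]

    consolidated = []
    for dataset_object in event.get("datasetObjects", []):
        consolidated.append({
            # Must identify the dataset object the annotation belongs to.
            "datasetObjectId": dataset_object["datasetObjectId"],
            "consolidatedAnnotation": {
                "content": {
                    # The docs' example keys the content by the job's label
                    # attribute name; a mismatch between the returned shape
                    # and what the service expects per line can surface as
                    # ANNOTATION_CONSOLIDATION_LAMBDA_SCHEMA_MATCHING_FAILED.
                    label_attribute_name: dataset_object["annotations"]
                }
            }
        })
    # The function must return the full list, one entry per dataset object.
    return consolidated
```

One entry is returned per dataset object, and the whole list is the Lambda's return value (not a JSON string).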
2 answers · 0 votes · 43 views · asked 2 months ago

Unable to configure SageMaker execution Role with access to S3 bucket in another AWS account

**Requirement:** Create a SageMaker Ground Truth labeling job with the input/output location pointing to an S3 bucket in another AWS account.

**High-Level Steps Followed:** Let's say *Account_A*: SageMaker Ground Truth labeling job, and *Account_B*: S3 bucket.

1. Create role *AmazonSageMaker-ExecutionRole* in *Account_A* with 3 policies attached:
   * AmazonSageMakerFullAccess
   * Account_B_S3_AccessPolicy: policy with the necessary S3 permissions to access the S3 bucket in Account_B
   * AssumeRolePolicy: assume-role policy for *arn:aws:iam::Account_B:role/Cross-Account-S3-Access-Role*
2. Create role *Cross-Account-S3-Access-Role* in *Account_B* with 1 policy and 1 trust relationship attached:
   * S3_AccessPolicy: policy with the necessary S3 permissions to access the S3 bucket in this Account_B
   * TrustRelationship: for principal *arn:aws:iam::Account_A:role/AmazonSageMaker-ExecutionRole*

**Error:** While trying to create the SageMaker Ground Truth labeling job with the IAM role *AmazonSageMaker-ExecutionRole*, it throws the error: *AccessDenied: Access Denied - The S3 bucket 'Account_B_S3_bucket_name' you entered in Input dataset location cannot be reached. Either the bucket does not exist, or you do not have permission to access it. If the bucket does not exist, update Input dataset location with a new S3 URI. If the bucket exists, give the IAM entity you are using to create this labeling job permission to read and write to this S3 bucket, and try your request again.*
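One thing worth checking: the Ground Truth console validates the input location with the execution role itself, so cross-account access typically also needs a bucket policy on the Account_B bucket that grants the Account_A role direct access, rather than relying only on the assume-role chain above. A sketch of such a policy, built as a Python dict (the account ID, role name, and bucket name are placeholders):

```python
import json

# ASSUMPTION: placeholder identifiers; substitute the real Account_A account
# ID, execution role name, and Account_B bucket name.
ACCOUNT_A_ROLE = "arn:aws:iam::111111111111:role/AmazonSageMaker-ExecutionRole"
BUCKET = "Account_B_S3_bucket_name"

# Bucket policy attached to the Account_B bucket, granting the Account_A
# execution role direct read/write/list access.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSageMakerExecutionRole",
            "Effect": "Allow",
            "Principal": {"AWS": ACCOUNT_A_ROLE},
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",        # ListBucket applies to the bucket ARN
                f"arn:aws:s3:::{BUCKET}/*",      # object actions apply to keys
            ],
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Note that `s3:ListBucket` must target the bucket ARN while the object actions target `bucket/*`; putting all three actions on one resource is a common reason such a policy silently fails.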
2 answers · 0 votes · 99 views · asked 3 months ago

Does Amazon Comprehend custom entity recognition not work with semi-structured data in Spanish?

I want to extract custom entities from custom PDF documents in Spanish. To do so, I am (unsuccessfully) trying to follow this tutorial, [Extract entities from insurance documents using Amazon Comprehend named entity recognition](https://aws.amazon.com/blogs/machine-learning/extract-entities-from-insurance-documents-using-amazon-comprehend-named-entity-recognition/). Just a side note: in order to annotate my custom data, I successfully followed this related tutorial, [Custom document annotation for extracting named entities in documents using Amazon Comprehend](https://aws.amazon.com/blogs/machine-learning/custom-document-annotation-for-extracting-named-entities-in-documents-using-amazon-comprehend/). No issues with that tutorial; I have the annotations output. My issue is with the first tutorial. After filling in all the required fields to create and train a new model, I get an error message like [this](https://pasteboard.co/WVlyHOGwFDz0.png). Given that all my documents are in Spanish, the error message makes sense, but the language restriction seems too restrictive to be right. I see [here](https://docs.aws.amazon.com/comprehend/latest/dg/supported-languages.html) that Spanish is a supported language for the [Custom entity recognition](https://docs.aws.amazon.com/comprehend/latest/dg/custom-entity-recognition.html) feature of Amazon Comprehend... What am I doing wrong? What assumptions am I making that are wrong?
1 answer · 0 votes · 54 views · asked 3 months ago