Multiple vs Single Comprehend Custom Classifiers?


Let's say I want to extract entities (like address) from invoices from many companies and countries around the world. In many cases, I would want to pass the output of Amazon Textract through Amazon Translate and then to Amazon Comprehend, storing the intermediate results in Amazon S3 at each step of the way for resilience.
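To make that pipeline concrete, here is a minimal sketch of the Textract → Translate → Comprehend flow with S3 checkpoints. The bucket name, stage layout, and endpoint ARN parameter are my own placeholders, not anything prescribed by AWS; treat this as an illustration of the shape, not a production implementation.

```python
import json

# Hypothetical bucket name and stage layout -- adjust to your own setup.
BUCKET = "invoice-pipeline-bucket"
STAGES = ("textract", "translate", "comprehend")


def stage_key(doc_id: str, stage: str) -> str:
    """Build the S3 key under which one stage's intermediate result lives."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    return f"invoices/{doc_id}/{stage}.json"


def process_invoice(doc_id: str, bucket: str, key: str, endpoint_arn: str) -> list:
    """OCR -> translate -> classify one invoice, checkpointing each step to S3."""
    import boto3  # imported lazily so the helpers above work without AWS configured

    textract = boto3.client("textract")
    translate = boto3.client("translate")
    comprehend = boto3.client("comprehend")
    s3 = boto3.client("s3")

    def checkpoint(stage: str, payload: dict) -> None:
        # Persist the intermediate result so a failed step can resume from S3.
        s3.put_object(
            Bucket=BUCKET,
            Key=stage_key(doc_id, stage),
            Body=json.dumps(payload).encode("utf-8"),
        )

    # 1. OCR the scanned invoice (synchronous API; large files need the async one).
    ocr = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    text = "\n".join(b["Text"] for b in ocr["Blocks"] if b["BlockType"] == "LINE")
    checkpoint("textract", {"text": text})

    # 2. Normalize to English; "auto" lets Translate detect the source language.
    translated = translate.translate_text(
        Text=text, SourceLanguageCode="auto", TargetLanguageCode="en"
    )["TranslatedText"]
    checkpoint("translate", {"text": translated})

    # 3. Classify with the custom model behind a real-time Comprehend endpoint.
    classes = comprehend.classify_document(
        Text=translated, EndpointArn=endpoint_arn
    ).get("Classes", [])
    checkpoint("comprehend", {"classes": classes})
    return classes
```

In practice each stage would likely be its own Lambda or Step Functions state, with the S3 checkpoint acting as the retry boundary between them.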

This is the kind of input and output I would want to have for Comprehend:

| input key   | input value     | output label |
|-------------|-----------------|--------------|
| Address     | 50 Park Avenue  | Address      |
| Company     | Acme Inc        | Company      |
| Total       | $5000           | Total        |
|             | 100 Main Street | Address      |
| Institution | Dunder Miff     | Company      |
|             | $640            | Total        |
| Invoice     | $10000          | Total        |

But let's say it's only 90% accurate at providing labels like that, because the invoices from some countries use different terminology, even after translation to English. I can see a few options to tackle this:

  1. Provide a custom entity list with a table similar to the one above, being careful to use only 25 different output labels in total, and train a custom classifier with that in Amazon Comprehend.
  2. Include a SageMaker Augmented AI (A2I) step, so that a human can annotate the data with the proper label, ensuring that the labelers can only choose from up to 25 different labels. This has an advantage over option 1, in that the model will continuously improve as it gets more samples from the human review.
  3. And with either option 1 or 2 above, I could use one custom classifier per customer, because customers may have different requirements,
  4. OR I could have one custom classifier for all customers, and limit myself to 25 labels universally. This seems easier to maintain and improves all use cases simultaneously.
  5. I could include the human review after Textract (or after Translate, if the document is non-English) gives a low confidence score.
  6. OR I could include the human review after Comprehend gives a low confidence score. I like this option better for human review placement, because the Comprehend step will get better over time, as it learns from our data, whereas Textract doesn't have that feature. This way, the need for human review should diminish over time.
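To make option 6 concrete, here is a minimal routing sketch: accept high-confidence Comprehend predictions automatically and send the rest to an A2I human loop. The threshold value, flow-definition ARN, and function names are my own assumptions, not anything mandated by the services.

```python
import json
import uuid


def needs_human_review(classes, threshold=0.80):
    """Return True when the top predicted class falls below the confidence bar.

    `classes` mimics the `Classes` list returned by Comprehend's
    ClassifyDocument API: [{"Name": ..., "Score": ...}, ...].
    """
    if not classes:
        return True  # no prediction at all -> always review
    top_score = max(c["Score"] for c in classes)
    return top_score < threshold


def route(doc_id, classes, flow_definition_arn, threshold=0.80):
    """Accept confident predictions; start an A2I human loop for the rest."""
    if not needs_human_review(classes, threshold):
        return "accepted"

    import boto3  # imported lazily so needs_human_review stays testable offline

    a2i = boto3.client("sagemaker-a2i-runtime")
    a2i.start_human_loop(
        HumanLoopName=f"review-{doc_id}-{uuid.uuid4().hex[:8]}",
        FlowDefinitionArn=flow_definition_arn,
        HumanLoopInput={
            "InputContent": json.dumps({"docId": doc_id, "classes": classes})
        },
    )
    return "sent_to_human"
```

The human-corrected labels collected this way can then be folded back into the training set, which is exactly the continuous-improvement loop option 2 describes.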

In conclusion, I would recommend human annotations (2) that happen after (6) a custom Comprehend classifier, which is the same for all customers (4).

My question: am I thinking about this right? Are my conclusions logical?

1 Answer

Option 2 requires human annotation only during training, so it involves less manual intervention, while option 6 could be needed often during inference, whenever a low confidence score is predicted. You would need to evaluate the trade-off between human annotation during training versus inference for your use case. I would suggest starting with option 2, fine-tuning the model using human annotations, and plugging in option 6 if needed.

You can use a flywheel to iterate on your model - https://docs.aws.amazon.com/comprehend/latest/dg/flywheels-about.html

answered a year ago
