Sporadic real-time classification - how to do it cost-efficiently?


Hello,

We want to sporadically classify large documents (up to 40 pages) in real time. It works well using Amazon Comprehend custom classification.

The problem is that you need to keep an endpoint alive all the time for just a couple of requests per day, which is far too expensive.
I am under the impression that synchronous classification was designed for high-volume workloads only and does not offer a cost-effective option for infrequent requests.

Are there any cost-effective alternatives besides building a self-made model on SageMaker?

Kind regards
Thomas

posted 3 years ago · 293 views
2 Answers

Hello. Thanks for reaching out. There are a few options here. If you know when to expect the sporadically high usage, you can set up time-based Application Auto Scaling. This increases your throughput during certain times of the day and scales it back down afterwards.
If your workload is not predictable by time of day, you can set up utilization-based Application Auto Scaling, which increases your throughput when the endpoint reaches a certain target utilization. Both options require you to maintain at least 1 inference unit (IU) of throughput on your endpoint, so you will continue to incur that minimum cost. Both approaches are sketched in the example below.
See here: https://docs.aws.amazon.com/comprehend/latest/dg/comprehend-autoscaling.html
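As a rough illustration of both options, here is a boto3 sketch that registers a Comprehend classifier endpoint with Application Auto Scaling and attaches a scheduled action plus a target-tracking policy. The endpoint ARN, schedules, capacities, and target value are placeholders you would replace with your own:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Placeholder ARN of your Comprehend document classifier endpoint.
ENDPOINT_ARN = (
    "arn:aws:comprehend:us-east-1:123456789012:"
    "document-classifier-endpoint/my-endpoint"
)
DIMENSION = "comprehend:document-classifier-endpoint:DesiredInferenceUnits"

# Register the endpoint's inference units as a scalable target (min 1 IU).
autoscaling.register_scalable_target(
    ServiceNamespace="comprehend",
    ResourceId=ENDPOINT_ARN,
    ScalableDimension=DIMENSION,
    MinCapacity=1,
    MaxCapacity=2,
)

# Time-based option: scale up before the expected busy window, down after it.
autoscaling.put_scheduled_action(
    ServiceNamespace="comprehend",
    ScheduledActionName="scale-up-morning",
    ResourceId=ENDPOINT_ARN,
    ScalableDimension=DIMENSION,
    Schedule="cron(0 8 * * ? *)",   # example: 08:00 UTC daily
    ScalableTargetAction={"MinCapacity": 2, "MaxCapacity": 2},
)
autoscaling.put_scheduled_action(
    ServiceNamespace="comprehend",
    ScheduledActionName="scale-down-evening",
    ResourceId=ENDPOINT_ARN,
    ScalableDimension=DIMENSION,
    Schedule="cron(0 18 * * ? *)",  # example: 18:00 UTC daily
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 1},
)

# Utilization-based option: target tracking on the endpoint's inference utilization.
autoscaling.put_scaling_policy(
    ServiceNamespace="comprehend",
    PolicyName="utilization-target-tracking",
    ResourceId=ENDPOINT_ARN,
    ScalableDimension=DIMENSION,
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # example target utilization (%)
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ComprehendInferenceUtilization"
        },
    },
)
```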

If your workload is not large enough to justify even 1 IU, you could consider programmatically creating the endpoint shortly before your workload is expected and deleting it again afterwards. Note that endpoint creation takes a few minutes, so you need to start it with enough lead time before the requests arrive.
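A minimal boto3 sketch of that create/classify/delete pattern might look like the following. The classifier model ARN and endpoint name are placeholders, and the polling and error handling are deliberately simplified:

```python
import time
import boto3

comprehend = boto3.client("comprehend")

# Placeholder ARN of your trained custom classifier model.
MODEL_ARN = "arn:aws:comprehend:us-east-1:123456789012:document-classifier/my-classifier"


def create_endpoint_and_wait(name="sporadic-classifier-endpoint"):
    """Create a 1-IU endpoint and block until it is ready to serve requests."""
    response = comprehend.create_endpoint(
        EndpointName=name,
        ModelArn=MODEL_ARN,
        DesiredInferenceUnits=1,
    )
    endpoint_arn = response["EndpointArn"]
    # Endpoint creation typically takes several minutes.
    while True:
        status = comprehend.describe_endpoint(EndpointArn=endpoint_arn)[
            "EndpointProperties"
        ]["Status"]
        if status == "IN_SERVICE":
            return endpoint_arn
        if status == "FAILED":
            raise RuntimeError("Endpoint creation failed")
        time.sleep(30)


def classify_and_tear_down(endpoint_arn, text):
    """Run one synchronous classification, then delete the endpoint."""
    result = comprehend.classify_document(EndpointArn=endpoint_arn, Text=text)
    comprehend.delete_endpoint(EndpointArn=endpoint_arn)
    return result["Classes"]
```

You could trigger the creation step from a schedule or from whatever event signals that documents are about to arrive, so the endpoint only exists around the time you actually need it.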

answered 3 years ago

Thank you for the clarification.

answered 3 years ago
