Sporadic real-time classification - how to do it cost-efficiently?


Hello,

We want to sporadically classify large documents (up to 40 pages) in real time. This works well using Comprehend custom classification.

The problem is that you need to keep an endpoint alive around the clock for just a couple of requests per day, which is far too expensive.
I am under the impression that synchronous classification was designed for high workloads only and does not offer a cost-effective option for infrequent requests.
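
For context, this is roughly how we call the endpoint today (a minimal sketch; the region and endpoint ARN are placeholders):

```python
import boto3

comprehend = boto3.client("comprehend", region_name="eu-west-1")

# Real-time custom classification requires an endpoint that is provisioned
# (and billed) around the clock, even if it only serves a few requests per day.
response = comprehend.classify_document(
    Text="... document text ...",
    EndpointArn="arn:aws:comprehend:eu-west-1:123456789012:document-classifier-endpoint/my-endpoint",
)
print(response["Classes"])
```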

Are there any cost-effective alternatives besides building a custom model on SageMaker?

Kind regards
Thomas

asked 3 years ago · 293 views
2 Answers

Hello. Thanks for reaching out. There are a few options here. If you know when to expect your sporadically high usage, you can set up time-based Application Autoscaling. This increases your throughput during certain times of the day and then scales it back down.
If your workload is not predictable by time, you can set up endpoint-utilization-based Application Autoscaling instead. This increases your throughput when the endpoint reaches a certain target utilization. Both of these options require you to maintain at least 1 inference unit (IU) of throughput on the endpoint, so you will continue to incur that minimum cost.
See here: https://docs.aws.amazon.com/comprehend/latest/dg/comprehend-autoscaling.html
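
As a rough sketch of the time-based variant (the endpoint ARN, region, schedule, and capacities below are placeholders; the utilization-based variant uses a target-tracking scaling policy instead, as described in the linked documentation):

```python
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="eu-west-1")

ENDPOINT_ARN = "arn:aws:comprehend:eu-west-1:123456789012:document-classifier-endpoint/my-endpoint"
DIMENSION = "comprehend:document-classifier-endpoint:DesiredInferenceUnits"

# Register the endpoint as a scalable target (the minimum stays at 1 IU).
autoscaling.register_scalable_target(
    ServiceNamespace="comprehend",
    ResourceId=ENDPOINT_ARN,
    ScalableDimension=DIMENSION,
    MinCapacity=1,
    MaxCapacity=2,
)

# Scale up shortly before the expected busy window...
autoscaling.put_scheduled_action(
    ServiceNamespace="comprehend",
    ResourceId=ENDPOINT_ARN,
    ScalableDimension=DIMENSION,
    ScheduledActionName="scale-up-before-workload",
    Schedule="cron(45 7 * * ? *)",  # 07:45 UTC every day
    ScalableTargetAction={"MinCapacity": 2, "MaxCapacity": 2},
)

# ...and back down afterwards.
autoscaling.put_scheduled_action(
    ServiceNamespace="comprehend",
    ResourceId=ENDPOINT_ARN,
    ScalableDimension=DIMENSION,
    ScheduledActionName="scale-down-after-workload",
    Schedule="cron(0 18 * * ? *)",  # 18:00 UTC every day
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 1},
)
```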

If your workload is not large enough to justify maintaining 1 IU, you could consider programmatically creating the endpoint before your workload is expected and deleting it afterwards. Note that endpoint creation takes a few minutes, so start it with enough lead time before the workload arrives.
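
A minimal sketch of that create/use/delete pattern, assuming an already trained classifier (the model ARN, endpoint name, and polling interval are placeholders):

```python
import time
import boto3

comprehend = boto3.client("comprehend", region_name="eu-west-1")

MODEL_ARN = "arn:aws:comprehend:eu-west-1:123456789012:document-classifier/my-classifier"

def create_endpoint_and_wait(name: str, inference_units: int = 1) -> str:
    """Create an endpoint and block until it is IN_SERVICE (takes several minutes)."""
    endpoint_arn = comprehend.create_endpoint(
        EndpointName=name,
        ModelArn=MODEL_ARN,
        DesiredInferenceUnits=inference_units,
    )["EndpointArn"]
    while True:
        status = comprehend.describe_endpoint(
            EndpointArn=endpoint_arn
        )["EndpointProperties"]["Status"]
        if status == "IN_SERVICE":
            return endpoint_arn
        if status == "FAILED":
            raise RuntimeError(f"Endpoint {endpoint_arn} failed to provision")
        time.sleep(30)

# Before the expected workload:
endpoint_arn = create_endpoint_and_wait("sporadic-classifier-endpoint")

# ... run classify_document calls against endpoint_arn ...

# After the workload, delete the endpoint so it stops accruing charges.
comprehend.delete_endpoint(EndpointArn=endpoint_arn)
```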

answered 3 years ago

Thank you for the clarification.

answered 3 years ago
