How does SageMaker batch inference process individual files?


Based on the documentation here, https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html, a large dataset can be structured in a CSV file as shown below. Is it possible to have multiple files in this format for batch inference, and is there a configuration that can be set for it to process multiple files? Also, what other formats besides CSV can batch inference handle?

Record1-Attribute1, Record1-Attribute2, Record1-Attribute3, ..., Record1-AttributeM
Record2-Attribute1, Record2-Attribute2, Record2-Attribute3, ..., Record2-AttributeM
...
...
RecordN-Attribute1, RecordN-Attribute2, RecordN-Attribute3, ..., RecordN-AttributeM  
asked 2 years ago · 520 views
1 Answer

If you have multiple files in an S3 bucket for batch inference, the general guideline is to set the number of workers/instances to a multiple of the number of files in S3. In addition, you can set the BatchStrategy to MultiRecord (with SplitType set to Line) to speed up processing. To enable parallel processing, set MaxConcurrentTransforms to 0 to start off; Amazon SageMaker then checks the container's optional execution-parameters endpoint to determine the settings for your chosen algorithm.
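As a sketch of what that configuration looks like with boto3 (the bucket, model, and job names below are placeholders, and the instance settings are only examples):

```python
import boto3

sm = boto3.client("sagemaker")

# Hypothetical names and S3 URIs; substitute your own.
sm.create_transform_job(
    TransformJobName="csv-batch-example",
    ModelName="my-model",                 # an existing SageMaker model
    MaxConcurrentTransforms=0,            # 0/unset: SageMaker queries the container's
                                          #   optional execution-parameters endpoint
    BatchStrategy="MultiRecord",          # batch several records per request
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",             # every object under the prefix
                "S3Uri": "s3://my-bucket/batch-in/",  # is processed, so multiple files
            }                                         # need no extra configuration
        },
        "ContentType": "text/csv",
        "SplitType": "Line",              # split each file into one record per line
    },
    TransformOutput={"S3OutputPath": "s3://my-bucket/batch-out/"},
    TransformResources={
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 2,               # e.g. a multiple of the number of input files
    },
)
```

Note that with S3DataType set to S3Prefix, the job processes every object under the given prefix, which is how a multi-file input is handled without any per-file configuration.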

AWS
answered 2 years ago
  • @AWS-Anonymous - thanks. So if I have 2 files, I should set the number of instances to 2, 4, 6, and so on? Performance-wise, is it better to have everything in one file, if possible, or to split it up into multiple files? Also, you mentioned "set the MaxConcurrentTransforms to 0 to start off"; does this strategy work when we bring our own container/algorithm?
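For the bring-your-own-container case: when MaxConcurrentTransforms is 0 or unset, SageMaker sends a GET request to the container's /execution-parameters endpoint before the job starts, and the container may answer with the settings it runs best with. A minimal sketch of that optional endpoint (Flask; the values returned are illustrative, not recommendations):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Optional endpoint: SageMaker calls GET /execution-parameters when
# MaxConcurrentTransforms is 0 or unset, so a custom container can
# report its preferred settings. The values below are illustrative.
@app.route("/execution-parameters", methods=["GET"])
def execution_parameters():
    return jsonify({
        "MaxConcurrentTransforms": 8,
        "BatchStrategy": "MULTI_RECORD",
        "MaxPayloadInMB": 6,
    })

# The two endpoints every SageMaker serving container must implement.
@app.route("/ping", methods=["GET"])
def ping():
    return "", 200

@app.route("/invocations", methods=["POST"])
def invocations():
    return "prediction-goes-here", 200  # placeholder inference logic
```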
