Hi,
The problem you are facing is indeed quite strange. If you are passing the same bytes to the Textract API (I assume one of AnalyzeDocument, AnalyzeExpense, or AnalyzeId), the result should be the same regardless of whether the call is made from your local computer or from a Lambda function.
From your description it seems you are performing some redundant steps: since your documents are already on S3, you can pass the S3 object location directly to the Textract APIs and skip the download step entirely:
import boto3

client = boto3.client('textract')

# 'Bytes' and 'S3Object' are mutually exclusive: pass only the S3 reference
response = client.analyze_document(
    Document={
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'  # optional
        }
    },
    FeatureTypes=['TABLES', 'FORMS']  # pick the analyses you need
)
If you have multiple documents to process, you can also use the asynchronous batch operations, such as start_document_analysis.
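As a rough sketch of the asynchronous flow (the bucket/key names, FeatureTypes, and polling interval below are placeholders, not values from your setup): start_document_analysis returns a JobId, and you poll get_document_analysis until the job finishes.

```python
import time


def build_request(bucket: str, key: str) -> dict:
    # DocumentLocation points Textract at the S3 object; FeatureTypes
    # selects which analyses to run (tables and forms assumed here).
    return {
        "DocumentLocation": {"S3Object": {"Bucket": bucket, "Name": key}},
        "FeatureTypes": ["TABLES", "FORMS"],
    }


def analyze_document_async(bucket: str, key: str, poll_seconds: int = 5) -> list:
    """Start an asynchronous Textract job and poll until it completes."""
    import boto3  # imported here so build_request stays usable without the SDK

    client = boto3.client("textract")
    job_id = client.start_document_analysis(**build_request(bucket, key))["JobId"]
    while True:
        result = client.get_document_analysis(JobId=job_id)
        if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
            # Note: large results are paginated; follow result.get("NextToken")
            # to collect all Blocks in a real application.
            return result.get("Blocks", [])
        time.sleep(poll_seconds)
```

In production you would usually replace the polling loop with the SNS notification channel that start_document_analysis supports, but polling is the simplest way to verify the pipeline works.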
Hi, thanks for your response, but that doesn't solve my problem: I already use start_document_analysis to process documents one by one. The issue actually seems to be in Docker. I ran some tests and it looks like the problem is in the Docker environment; I am installing the same versions, but it still doesn't work.
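One quick way to confirm the container really matches the local environment is to print the interpreter and SDK versions in both places and diff the output. A minimal sketch (the package names listed are only an assumption about the usual Textract stack; adjust them to whatever you actually install):

```python
import sys
from importlib import metadata


def report_versions(packages: tuple = ("boto3", "botocore")) -> dict:
    """Collect the Python and package versions so host and container can be diffed."""
    versions = {"python": sys.version.split()[0]}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = "not installed"
    return versions


if __name__ == "__main__":
    # Run this on your machine and inside the Docker image, then compare.
    for name, version in report_versions().items():
        print(f"{name}: {version}")
```

If the versions match, the next things to compare are the base image's system libraries and any environment variables (region, credentials source) that differ between the two runs.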