1 Answer
@grazitti,
Greetings! By any chance, can you provide additional information from the CloudWatch logs (for example, anything pointing to why the sync job failed)? Also, are you following this documentation when performing the sync job?
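If knowledge-base logging is delivered to CloudWatch Logs, you can pull the failed sync events programmatically and summarize them. A minimal sketch with boto3 — the log group name, filter pattern, and event shape below are assumptions based on the events shown in this thread, not an official schema:

```python
import json


def fetch_sync_events(log_group: str, job_id: str, region: str = "us-east-1"):
    """Pull ingestion-job events mentioning `job_id` from CloudWatch Logs.

    Assumes knowledge-base logging is delivered to `log_group`; the
    filter pattern is illustrative.
    """
    import boto3  # imported here so the parser below stays dependency-free

    client = boto3.client("logs", region_name=region)
    resp = client.filter_log_events(
        logGroupName=log_group,
        filterPattern=f'"{job_id}"',  # match events mentioning this job id
    )
    return [json.loads(e["message"]) for e in resp["events"]]


def failed_documents(events):
    """Return (S3 URI, chunk_statistics) pairs for events with status FAILED."""
    out = []
    for ev in events:
        body = ev.get("event", {})
        if body.get("status") == "FAILED":
            uri = (
                body.get("document_location", {})
                .get("s3_location", {})
                .get("uri")
            )
            out.append((uri, body.get("chunk_statistics", {})))
    return out
```

Running `failed_documents` over the fetched events gives you one line per failed document, which is usually enough to spot a pattern (e.g. all failures coming from metadata updates).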
answered 3 months ago
Hi @arjun, Thanks for your response.
Yes, we are following the documentation when performing the syncs: each content file is under 50 MB, each metadata file is under 10 KB, and every key in the metadata file uses only the supported data types (string, number, and Boolean). As for the CloudWatch logs, unfortunately this is all the information we get for the failed syncs:
{
  "event_timestamp": 1724553492795,
  "event": {
    "ingestion_job_id": "XXXX",
    "document_location": {
      "type": "S3",
      "s3_location": { "uri": "s3://XXXX.json" }
    },
    "chunk_statistics": {
      "ignored": 0,
      "metadata_updated": 0,
      "failed_to_update_metadata": 180,
      "deleted": 0,
      "failed_to_delete": 0,
      "created": 0,
      "failed_to_create": 0
    },
    "data_source_id": "XXXX",
    "knowledge_base_arn": "XXXX",
    "status": "FAILED"
  },
  "event_version": "1.0",
  "event_type": "StartIngestionJob.ResourceStatusChanged",
  "level": "INFO"
}
I'm not sure what the exact issue is here. When I ingest fresh data, the sync process never fails; but if I ingest the same files again into the source S3 bucket, the sync fails for those files with the information above.
Let me know if you have some clarity on this, and maybe we can connect on another channel for faster communication. TIA.
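In case it helps anyone hitting the same `failed_to_update_metadata` counts: since the documented limits are a 10 KB metadata file size and attribute values restricted to string, number, and Boolean, a small pre-check over the `.metadata.json` files can rule those out before a sync. A sketch — the `metadataAttributes` key reflects the Bedrock metadata file format as I understand it, so treat this as a rough validator, not official tooling:

```python
import json

MAX_METADATA_BYTES = 10 * 1024           # documented 10 KB limit
ALLOWED_TYPES = (str, int, float, bool)  # string, number, Boolean


def validate_metadata(raw: bytes):
    """Return a list of problems with a .metadata.json payload; empty if OK."""
    problems = []
    if len(raw) > MAX_METADATA_BYTES:
        problems.append(f"file is {len(raw)} bytes, over the 10 KB limit")
    doc = json.loads(raw)
    for key, value in doc.get("metadataAttributes", {}).items():
        if not isinstance(value, ALLOWED_TYPES):
            problems.append(
                f"attribute {key!r} has unsupported type {type(value).__name__}"
            )
    return problems
```

Running this over every metadata file in the source bucket before triggering the sync would at least confirm whether the re-ingested files themselves violate a limit, or whether the failure is on the service side.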