The behavior you're seeing is likely related to how Amazon S3 handles object listings and how AWS Data Exchange (ADX) queries S3 when you browse directories.
S3 doesn't have a true hierarchical folder structure. It uses a flat key namespace, and the folder-like appearance is created by key name prefixes combined with the '/' delimiter. When you browse the bucket structure in ADX, it is essentially performing list operations on those prefixes.
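As a rough illustration, this is what a delimiter-based listing looks like with boto3 (the bucket name and prefix below are placeholders, not values from your setup):

```python
import boto3

s3 = boto3.client("s3")

# Using '/' as the delimiter makes S3 group keys that share the next
# path segment into CommonPrefixes -- this grouping is what produces
# the folder-like view a console such as ADX displays.
resp = s3.list_objects_v2(
    Bucket="example-bucket",   # hypothetical bucket name
    Prefix="level1/level2/",   # hypothetical prefix
    Delimiter="/",
)

for cp in resp.get("CommonPrefixes", []):
    print(cp["Prefix"])        # each entry appears as a "subdirectory"
```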
For performance reasons, S3 list operations return at most 1,000 results per request. If a prefix contains more objects or subdirectories than that, not all of them are returned by the initial request, which could explain why you're not seeing all subdirectories load automatically at the 3rd level.
When you click on the page number or force a reload, you're likely triggering additional list requests to S3, which then return the remaining subdirectories.
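Programmatically, the equivalent of those follow-up requests is handled with a paginator. Here's a minimal sketch in boto3 (again with placeholder bucket and prefix names):

```python
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Each page corresponds to one list request (at most 1,000 keys);
# iterating the paginator issues the same continuation requests that
# a manual reload in the console would otherwise trigger.
prefixes = []
for page in paginator.paginate(
    Bucket="example-bucket",   # hypothetical bucket name
    Prefix="level1/level2/",   # hypothetical prefix
    Delimiter="/",
):
    prefixes.extend(cp["Prefix"] for cp in page.get("CommonPrefixes", []))

print(f"Found {len(prefixes)} subdirectories in total")
```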
To address this issue, you could try the following:
- Ensure that your S3 bucket structure is optimized for listing performance; a very large number of objects under a single prefix slows down list operations.
- If possible, reorganize your data to reduce the depth of nesting or the number of objects at each level.
- If reorganization isn't feasible, account for this behavior in your workflow and expect that manual reloads may be needed for certain directories.
- Check whether any S3 bucket policies or IAM permissions affect how ADX can list objects in certain prefixes (a quick way to probe this is sketched after this list).
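One simple way to test whether listing is actually permitted on a given prefix is a minimal probe request; this sketch uses a placeholder bucket and prefix:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# If this call raises AccessDenied, a bucket policy or IAM policy is
# restricting ListBucket on the prefix, which would also affect how
# ADX lists it.
try:
    s3.list_objects_v2(
        Bucket="example-bucket",          # hypothetical bucket name
        Prefix="level1/level2/level3/",   # hypothetical prefix
        MaxKeys=1,
    )
    print("Listing is permitted for this prefix")
except ClientError as err:
    print("Listing failed:", err.response["Error"]["Code"])
```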
If the issue persists or significantly impacts your work, it's worth contacting AWS Support for further investigation, as there may be console-side optimizations that could improve this experience.