I'm completely stuck here. I just gave my bucket full public permissions and it's still failing with Access Denied.
The policy attached to my job grants "s3:*" permissions and I know it works because if I drop a file in the bucket it copies correctly.
This problem only exists with the files created by redshift.
Any help?
I figured out the problem. It ended up being a bad key: the key string passed in by the S3 event trigger is URL-encoded. These encoded strings look like normal strings when the error message shows up in the Lambda console, but adding debug logging to my script let me see the actual string.
The reason it surfaces as Access Denied rather than Not Found: a GetObject on a key that doesn't exist returns 403 instead of 404 when the caller lacks s3:ListBucket permission, and the encoded key names an object that doesn't exist. Adding a call to urllib.parse.unquote fixed the issue.
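For anyone hitting the same thing, here's a minimal sketch of decoding the key inside a Lambda handler. The event structure (`Records[0].s3.object.key`) is the documented S3 notification shape; `decode_s3_key` and the sample event are made up for illustration. Note that S3 encodes spaces in keys as `+`, so `urllib.parse.unquote_plus` is the safer decode than plain `unquote`:

```python
import urllib.parse

def decode_s3_key(event):
    """Extract and decode the object key from an S3 event notification.

    S3 delivers the key URL-encoded (spaces arrive as '+'), so decode it
    before passing it to boto3 calls -- otherwise lookups use the wrong key.
    """
    raw_key = event["Records"][0]["s3"]["object"]["key"]
    return urllib.parse.unquote_plus(raw_key)

# Hypothetical event fragment with an encoded key ("my report=final.csv"):
event = {"Records": [{"s3": {"object": {"key": "my+report%3Dfinal.csv"}}}]}
print(decode_s3_key(event))  # -> my report=final.csv
```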