I'm completely stuck here. I just gave my bucket full public permissions and it's still failing with Access Denied.
The policy attached to my job grants `s3:*` permissions, and I know it works because if I manually drop a file into the bucket, it copies correctly.
The problem only occurs with files created by Redshift.
Any help?
I figured out the problem. It ended up being a bad key: the key string passed in by the S3 event trigger is URL-encoded. These URL-encoded strings look like normal strings when the error message appears in the Lambda console, but adding debug logging to my script let me see the actual string.
I'm not entirely sure why this surfaced as an Access Denied error rather than a missing-key error, but adding a call to `urllib.parse.unquote` fixed the issue.
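A minimal sketch of the fix, assuming a standard S3 event notification payload (the exact bucket/key names here are made up). Note that S3 event notifications can also encode spaces as `+`, so `unquote_plus` is the safer choice over plain `unquote`:

```python
from urllib.parse import unquote_plus

def extract_key(event):
    """Return the decoded object key from an S3 event record.

    S3 event notifications URL-encode the object key (spaces become
    '%20' or '+'), so decode it before calling s3.get_object, or the
    lookup can fail even though the bucket policy is correct.
    """
    raw_key = event["Records"][0]["s3"]["object"]["key"]
    return unquote_plus(raw_key)

# Hypothetical event payload with an encoded key
event = {"Records": [{"s3": {"object": {"key": "unload/part+0001%20final.csv"}}}]}
print(extract_key(event))  # -> unload/part 0001 final.csv
```

Redshift UNLOAD output names often contain characters that get encoded this way, which is why only the Redshift-created files were affected.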