I'm completely stuck here. I just gave my bucket full public permissions and it's still failing with Access Denied.
The policy attached to my job grants "s3:*" permissions and I know it works because if I drop a file in the bucket it copies correctly.
This problem only occurs with the files created by Redshift.
Any help?
I figured out the problem. It ended up being a bad key: the key string passed in by the S3 event trigger is URL-encoded. These URL-encoded strings look like normal strings in the error message shown in the Lambda console, but adding debug logging to my script let me see the actual string.
I'm not entirely sure why this surfaced as an Access Denied error rather than a not-found error, but adding a call to urllib.parse.unquote fixed the issue.
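For anyone hitting the same thing, here's a minimal sketch of the fix described above. The helper name `key_from_event` and the sample event are my own for illustration; note I use `unquote_plus` rather than plain `unquote`, since S3 event notifications encode spaces in keys as `+`:

```python
import urllib.parse

def key_from_event(event):
    """Extract and decode the object key from an S3 event notification.

    S3 URL-encodes the object key in the event payload, so a key like
    'reports/2024 01.csv' arrives as 'reports/2024+01.csv'. Passing the
    raw string to s3.get_object() asks for an object that doesn't exist,
    which S3 can report as Access Denied instead of Not Found when the
    caller lacks s3:ListBucket on the bucket.
    """
    raw_key = event["Records"][0]["s3"]["object"]["key"]
    # unquote_plus also converts '+' back to spaces; plain unquote
    # would leave the '+' characters in place.
    return urllib.parse.unquote_plus(raw_key)

# Example event fragment (hypothetical key with a space and '=')
event = {"Records": [{"s3": {"object": {"key": "folder/file%3Dname+1.csv"}}}]}
print(key_from_event(event))  # folder/file=name 1.csv
```

The "Access Denied instead of Not Found" behavior is also why the bug is confusing: the error points at permissions when the real problem is the key string.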