Disregard - figured it out.
Per Amazon's instructions, after you create your EKS cluster you have to attach certain policies to the EKSNodeRole created by default. This is outlined in Step 6 of the cluster installation guide: *6. (Recommended) Configure your cluster for the Amazon VPC CNI plugin for Kubernetes before deploying Amazon EC2 nodes to your cluster.*
That's not all, though. In order for that role to provision storage for Prometheus, you also need to create a custom policy that allows it to create, attach, and destroy EC2 volumes.
I created the below policy, attached it to the EKSNodeRole, then deleted my prometheus-server pod. When the pod started back up, everything kicked off like it was supposed to.
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateVolume",
                "ec2:DeleteVolume",
                "ec2:DetachVolume",
                "ec2:AttachVolume",
                "ec2:DescribeInstances",
                "ec2:CreateTags",
                "ec2:DeleteTags",
                "ec2:DescribeTags",
                "ec2:DescribeVolumes"
            ],
            "Resource": "*"
        }
    ]
}
```
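For anyone who wants to script this, the steps above can be sketched roughly as follows. The policy name `EKSVolumeProvisioning` and the pod label selector are assumptions for illustration; your node role may have a different name than `EKSNodeRole`, and the actual `prometheus-server` pod name/namespace depends on how you installed Prometheus.

```shell
# Save the policy from the answer to a file.
cat > ebs-provisioning-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateVolume",
                "ec2:DeleteVolume",
                "ec2:DetachVolume",
                "ec2:AttachVolume",
                "ec2:DescribeInstances",
                "ec2:CreateTags",
                "ec2:DeleteTags",
                "ec2:DescribeTags",
                "ec2:DescribeVolumes"
            ],
            "Resource": "*"
        }
    ]
}
EOF

# Sanity-check the JSON before handing it to IAM.
python3 -m json.tool ebs-provisioning-policy.json > /dev/null && echo "policy JSON OK"

# With AWS credentials configured, create the policy and attach it to the
# node role, then delete the prometheus-server pod so it retries
# provisioning on restart (names below are assumptions -- adjust to yours):
#
# aws iam create-policy --policy-name EKSVolumeProvisioning \
#     --policy-document file://ebs-provisioning-policy.json
# aws iam attach-role-policy --role-name EKSNodeRole \
#     --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/EKSVolumeProvisioning
# kubectl delete pod -l app=prometheus,component=server
```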