Questions tagged with Amazon Elastic Kubernetes Service
Content language: English
Sort by most recent
I have created a Terraform project to build EKS with Karpenter, but when I deploy certain workloads I get the problem shown below. Does anyone know how to fix it, or what Terraform configuration I need to apply to do so?
```
Warning FailedMount 25m kubelet MountVolume.SetUp failed for volume "kube-api-access-xxxxx" : write /var/lib/kubelet/pods/xxxxxx-xxxxx-xxxxxx/volumes/kubernetes.io~projected/kube-api-access-xxxxx/..2023_02_15_09_10_29.2455859137/token: no space left on device
Warning FailedMount 5m57s (x8 over 24m) kubelet Unable to attach or mount volumes: unmounted volumes=[kube-api-access-xxxx], unattached volumes=[kube-api-access-xxxx]: timed out waiting for the condition
Warning FailedMount 3m39s (x13 over 24m) kubelet (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[kube-api-access-xxxxx], unattached volumes=[kube-api-access-xxxxx]: timed out waiting for the condition
```
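A `no space left on device` error on a projected token volume usually means the node's root volume (or its inodes) is exhausted rather than a Kubernetes misconfiguration. One common mitigation is enlarging the root volume in Karpenter's node template. A minimal sketch, assuming the v1alpha1 `AWSNodeTemplate` API and placeholder discovery tags (your Terraform would render this as a manifest or Helm value):

```yaml
# Sketch only: enlarge the node root volume so the kubelet has space for
# projected service-account token volumes. Names and tags are placeholders.
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:
    karpenter.sh/discovery: my-cluster        # placeholder cluster tag
  securityGroupSelector:
    karpenter.sh/discovery: my-cluster        # placeholder cluster tag
  blockDeviceMappings:
    - deviceName: /dev/xvda
      ebs:
        volumeSize: 100Gi
        volumeType: gp3
        deleteOnTermination: true
```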
My message routing destination is a web service on EKS. After updating the EKS version (which changed the message routing destination's domain name resolution), message routing no longer works in AWS IoT Core. Can you help me check?
Could AWS Batch work with Amazon ECS Anywhere or Amazon EKS Anywhere?
In the documentation I do not see a construct to use it like:
```
"platformCapabilities": [
    "EXTERNAL"
]
```
https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#job-definition-parameters-platform-capabilities
Hello, if I have EC2 instances that belong to an EKS node group, and I buy EC2 Instance Savings Plans, is the discount applied, or does it only apply to instances that are not under a managed service?
I have a data science cloud IDE running on an EC2 instance with a multi-attach EBS volume attached. This volume contains user home directories and shared libraries. For the record, the IDE is RStudio Workbench, but I'm not sure if that makes any difference.
I want to setup a Kubernetes integration for this IDE. I'm considering two options:
- EKS managed node groups
- EKS Fargate
For both I was able to implement the integration; that's not the issue. The issue is that, for the integration to be useful, the pods need to be able to mount this volume with the user data and libraries.
If we omit this requirement, Fargate looks like the perfect option for us. But I know that it only supports EFS mounts. I am considering using EFS instead of EBS, but there are concerns about EFS speed: I did some tests and it's noticeable in some day-to-day tasks. So I kind of want to put this option on a shelf and try to figure something out with EKS managed node groups first.
I tried starting an NFS server on a separate EC2 instance to which I attached the EBS volume (it supports multi-attach) and mounting the NFS share in the pods. But it seems that when an IDE process is started with the volume attached like this, it breaks the NFS server.
Theoretically it should be possible to implement a persistent EBS volume claim for the Kubernetes pods, but it's unclear whether the multi-attach -- to the main server and to the pods -- would work.
So I have a general question: am I doing something stupid with these attempts and should I just go for EFS + Fargate? Or could there be a viable setup where both the main server and Kubernetes access the same EBS volume at the same time? If so, what would it look like?
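For what it's worth, referencing an existing multi-attach EBS volume from pods would mean a static PersistentVolume roughly like the sketch below (volume ID and size are placeholders). The catch is that EBS multi-attach only exposes a raw shared block device -- hence `volumeMode: Block` -- with no shared filesystem on top, which is why an NFS-style or EFS layer tends to be needed anyway:

```yaml
# Sketch only: static PV for an existing multi-attach io2 volume.
# Multi-attach gives a raw shared block device, not a shared filesystem.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-ebs
spec:
  capacity:
    storage: 100Gi                      # placeholder size
  accessModes:
    - ReadWriteMany
  volumeMode: Block                     # raw block device, no filesystem
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0 # placeholder volume ID
```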
I want to enable Kubernetes secrets encryption. In order to do that, I need to create a KMS key first.
When creating the KMS key, there is a page to choose which roles will be assigned the administrative and usage permissions.


My question is: which roles should I choose for the administrative and usage permissions?
My assumption is this:
**Administrative**
* Role that will manage the KMS key (update the key policy, add/remove tags, enable/disable automatic key rotation, create/delete aliases, enable/disable the key, delete the key)
* Role that will set up Kubernetes secrets encryption
**Usage**
* Role that will manage the KMS key (update the key policy, add/remove tags, enable/disable automatic key rotation, create/delete aliases, enable/disable the key, delete the key)
* Role that will set up Kubernetes secrets encryption
* EKS service role
Are those assignments correct?
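For context, on an existing cluster the encryption config is attached with a call like the sketch below (cluster name, region, and key ARN are placeholders). The principal making the call needs at least `kms:DescribeKey` and `kms:CreateGrant` on the key, since EKS uses grants to encrypt and decrypt secrets on the cluster's behalf:

```shell
# Sketch only: placeholder cluster name, region, and key ARN.
aws eks associate-encryption-config \
  --cluster-name my-cluster \
  --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555"}}]'
```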
We have an application deployed in EKS that dynamically registers ingress rules in ALB.
Each ingress rule maps to a distinct hostname on a common domain (eg `foo-001.example.com` `foo-002.example.com` etc).
At the moment we are hitting the ALB Target Group limit of 100, as each ingress rule creates both an ALB rule *and* an ALB Target Group. We have had the rule limit increased to 200, but the Target Group limit cannot be changed.
Is there a way to reuse/share Target Groups when creating the EKS ingress objects?
We currently use the following annotation when creating the ingress object:
```
'alb.ingress.kubernetes.io/target-type': 'ip',
```
The documentation implies changing this to `instance` would then allow us to have one Target Group per k8s node the services are deployed to... but we aren't sure.
This is what we're reading: https://catalog.workshops.aws/eks-immersionday/en-US/services-and-ingress/targetgroupbinding
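The workshop page linked above covers the AWS Load Balancer Controller's `TargetGroupBinding` CRD, which binds a Service to a pre-created target group instead of letting each Ingress create its own. A minimal sketch with placeholder Service name and target group ARN (note that each binding still maps one Service to one target group, so this sidesteps Ingress-driven target group creation rather than multiplexing many hostnames into a single group):

```yaml
# Sketch only: bind an existing Service to a pre-created ALB target group.
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: foo-001-binding
spec:
  serviceRef:
    name: foo-001          # placeholder Service name
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/shared-tg/0123456789abcdef  # placeholder ARN
```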
While running EKS CLI commands I am facing the issue below:
```
/usr/local/bin/aws eks list-clusters --region us-east-1

SSL validation failed for https://eks.us-east-1.amazonaws.com/clusters hostname 'eks.us-east-1.amazonaws.com' doesn't match either of '*.us-east-1.es.amazonaws.com', '*.cell-01.us-east-1.es.amazonaws.com'
```
It is not seen when executing other CLI commands (e.g. `aws ec2`).
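A certificate for `*.us-east-1.es.amazonaws.com` being served on the EKS endpoint suggests that DNS resolution or an intercepting proxy is sending the request to an OpenSearch/Elasticsearch endpoint instead. A quick way to check, assuming standard `nslookup` and `openssl` tooling:

```shell
# Check what the EKS endpoint resolves to and which certificate is served.
nslookup eks.us-east-1.amazonaws.com
echo | openssl s_client -connect eks.us-east-1.amazonaws.com:443 \
  -servername eks.us-east-1.amazonaws.com 2>/dev/null \
  | openssl x509 -noout -subject
```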
Hi, I have had an EKS cluster for some time, and now I want to enable cluster secrets encryption using my KMS key. The documentation mentions: **After you enable encryption on your cluster, you must encrypt all existing secrets with the new key.**
But in the console I read that they will be encrypted automatically. What action should I take after enabling this encryption? In my cluster I have a lot of secrets across different namespaces (Argo CD, kube-prometheus-stack, and so on...). I don't want to break anything.
Thank you,
M
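For reference, the Kubernetes documentation's approach to re-encrypting existing secrets after changing the encryption provider is the one-liner below: it replaces each Secret with itself, forcing it to be stored again under the new key. It should be non-disruptive, but it is worth trying on a non-critical namespace first:

```shell
# Read every Secret and write it back unchanged, forcing re-encryption
# with the currently configured provider/key.
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```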
I have enabled EKS audit logs. In CloudWatch Logs, two log streams are being generated for the audit logs alone, each being written to in parallel.
Why are two streams generated, and is there any difference between them?
Is it possible to set up a node with two network interfaces? If so, how?
When I try to configure a node group with a launch template that is configured to use two network interfaces, I can't select the template version because it is grayed out and shows "the launch template version can only have a max of one network interface".
Using an eksctl config file to set up the EKS cluster and add-ons, the add-on entry for the EBS CSI driver usually looks like this:
```
addons:
  - name: aws-ebs-csi-driver
    version: latest
    resolveConflicts: overwrite
```
The eksctl documentation says it is possible to use a `configurationValues:` field to set custom values for the add-on deployment.
But the EBS CSI driver has a Deployment `ebs-csi-controller` in the `kube-system` namespace with the following affinity setup:
```
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values:
                - fargate
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - ebs-csi-controller
          topologyKey: kubernetes.io/hostname
```
So the question is: is it possible to add a `nodeAffinity` to complement the default affinity using the `configurationValues:` field of the eksctl `addons` setup, and if so, how?
For the moment, the "fix" was applied by editing the Deployment object in EKS directly.
The need for this kind of customization arises because, if you have a specific node group to run core services like the EBS CSI driver, a custom affinity is necessary to place its two replicas there.
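Assuming the add-on accepts its Helm-chart-style values (the accepted schema can be inspected with `aws eks describe-addon-configuration --addon-name aws-ebs-csi-driver --addon-version <version>`), a hedged sketch of pinning the controller to a dedicated node group via `configurationValues:` could look like the following; `role: core-services` is a placeholder node-group label, and a `nodeSelector` is used here as a simpler alternative to merging a full `affinity` stanza:

```yaml
# Sketch only: untested; keys must match the add-on's configuration schema.
addons:
  - name: aws-ebs-csi-driver
    version: latest
    resolveConflicts: overwrite
    configurationValues: |
      controller:
        nodeSelector:
          role: core-services   # placeholder node-group label
```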