Questions tagged with Amazon Elastic Kubernetes Service
There was an error creating this change set
Index: 1, Size: 1
I get this very unfriendly error message when trying to import a manually created EKS node group into CloudFormation. I generated the template with Former2.
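For context, the import is driven by a change set roughly like this (stack, change set, and file names are placeholders):
```
# Minimal sketch of an IMPORT change set; all names/files are placeholders.
aws cloudformation create-change-set \
  --stack-name my-eks-stack \
  --change-set-name import-nodegroup \
  --change-set-type IMPORT \
  --resources-to-import file://resources-to-import.json \
  --template-body file://template.yaml
```
Any ideas?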
According to
https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html
a 1.24 EKS cluster on platform version eks.3 should have Kubernetes (control plane) version 1.24.7. However, all our clusters (created using the AWS CDK) report 1.24.8 (v1.24.8-eks-ffeb93d, to be precise). I have re-read that page multiple times and cannot interpret it in any way that makes what we're observing expected behaviour. Could anyone confirm whether this is just unclear wording/documentation, and if so, suggest a way to determine the possible Kubernetes version(s) for a given EKS platform version (i.e. 1.24/eks.3)?
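For reference, this is roughly how we are reading the two versions (the cluster name is a placeholder):
```
# Compare the platform version EKS advertises with the actual control-plane version.
aws eks describe-cluster --name my-cluster \
  --query '{k8s: cluster.version, platform: cluster.platformVersion}'
kubectl version --short   # reports e.g. Server Version: v1.24.8-eks-ffeb93d
```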
Many thanks,
damjan
We are trying to launch a pod in EKS from MWAA. Our EKS cluster is authenticated using aws-iam-authenticator in kube_config.yaml, but MWAA shows the below error in the MWAA log:
kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
MWAA Environment ARN or Name: arn:axxxxxx:environment/airflow-demo
Region: us-east-1
It looks like the DAG is unable to read the config file stored in S3. I am not sure whether this is related to using kube_config.yaml from S3 or to using aws-iam-authenticator. We followed the writeup below, except for the kubeconfig authentication part:
https://blog.beachgeek.co.uk/working-with-amazon-eks-and-amazon-managed-workflows-for-apache-airflow-v2x/
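For context, this is roughly how we generate and stage the kubeconfig (bucket, cluster, and path are placeholders):
```
# Write a kubeconfig for the cluster, then stage it in the MWAA dags folder.
aws eks update-kubeconfig --name my-cluster --region us-east-1 \
  --kubeconfig ./kube_config.yaml
aws s3 cp ./kube_config.yaml s3://my-mwaa-bucket/dags/kube_config.yaml
```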
Can someone help?
Thanks
--Venky
Hi,
We're looking for a solution to remediate excessive IP address consumption by EKS clusters. As our enterprise CIDR ranges are limited and tend to get eaten up quickly by EKS, we are facing IP shortages and overlaps.
We are considering peering two VPCs: one routable, and a second non-routable VPC (the default AWS VPC). We would then publish the IPs we want on the routable one...
Has anyone tried that approach? Is there an alternative solution?
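One mechanism we've seen referenced is the VPC CNI's custom networking mode, which moves pod IPs onto secondary (non-routable) subnets - a minimal sketch, assuming the ENIConfig objects for those subnets are created separately:
```
# Tell the VPC CNI to take pod IPs from ENIConfig-defined (secondary) subnets
# instead of the node's subnet; the ENIConfig setup itself is not shown here.
kubectl set env daemonset aws-node -n kube-system \
  AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
```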
Thanks in advance,
A few days ago attaching EBS volumes suddenly stopped working.
My EKS cluster uses the ebs.csi.aws.com add-on with dynamic provisioning.
Here is my StorageClass config:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
and the volumeClaimTemplates in my StatefulSet config:
```
volumeClaimTemplates:
  - metadata:
      name: log
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```
After deploying the StatefulSet, a PVC, a PV, and a VolumeAttachment are created; however, the pod is stuck in the ContainerCreating state with the error `AttachVolume.Attach failed for volume "pvc-xxx" : rpc error: code = NotFound desc = Instance "i-xxx" not found`.
I triple-checked: the volume is not attached to any other instance, and the instance exists.
One funny thing though - when I describe the created PV, I see this:
```
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            ebs.csi.aws.com
    FSType:            ext4
    VolumeHandle:      vol-xxx
    ReadOnly:          false
    VolumeAttributes:  storage.kubernetes.io/csiProvisionerIdentity=xxx-8081-ebs.csi.aws.com
```
The (unmasked) VolumeHandle does not even exist.
Where might the problem be? As I said earlier, this issue appeared from one day to the next without any config changes.
K8s version: 1.24
EBS CSI driver add-on version: v1.11.5-eksbuild.2 (neither upgrading nor downgrading helped)
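For completeness, the checks I've run so far look like this (IDs are masked placeholders):
```
# Confirm the volume and instance exist in this account/region.
aws ec2 describe-volumes --volume-ids vol-xxx
aws ec2 describe-instances --instance-ids i-xxx
# Inspect the attacher sidecar of the CSI controller for the failing call.
kubectl logs -n kube-system deployment/ebs-csi-controller -c csi-attacher
```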
Thanks
Suppose an EKS cluster was created and no load balancers exist - is there any way to associate SSL policies without a load balancer?
AWS documents a default of 30 managed node groups per EKS cluster. Since this quota is adjustable, I need to know the maximum hard limit for managed node groups per cluster without any performance impact.
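For reference, the quota currently applied to an account can be read from Service Quotas - a sketch, with the quota name match left loose since the exact wording may differ:
```
# List EKS quotas and filter for the managed node group limit.
aws service-quotas list-service-quotas --service-code eks \
  --query 'Quotas[?contains(QuotaName, `node group`)].[QuotaName, Value]'
```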
Hi, I am using the **aws-ebs-csi-driver** add-on. While I was able to input custom JSON configuration yesterday, today I tried to upgrade the add-on to the latest version (*v1.15.0-eksbuild.1*) and got the below error:
`ConfigurationValue is not in valid JSON or YAML format.`
Here is my JSON:
```
{
  "controller": {
    "nodeSelector": {
      "kubernetes.io/os": "linux",
      "aaaaa": "xxx-yyy-zzz",
      "some_other_key": "abcd"
    }
  }
}
```
which seems valid, according to the schema I get from
```
aws eks describe-addon-configuration --addon-name aws-ebs-csi-driver --addon-version v1.15.0-eksbuild.1
```
It's very strange that I was able to input that JSON yesterday but cannot now. Has the updated version broken something in the schema validator? Is this a bug, or is something wrong with the data I'm trying to input?
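For reference, this is roughly how I am applying the configuration (cluster and file names are placeholders):
```
# Update the add-on, passing the custom configuration from a local file.
aws eks update-addon --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver \
  --addon-version v1.15.0-eksbuild.1 \
  --configuration-values file://config.json
```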
We have deployed a Django application in EKS and used RDS PostgreSQL with RDS proxy as a database backend.
Over the last month, we have started noticing occasional 500 "Internal Server Error" responses from our web app with the following error coming from Django:
`django.db.utils.OperationalError: connection to server at "<proxy DNS name>" (<proxy IP address>), port 5432 failed: server closed the connection unexpectedly`
This suggests that RDS proxy closed the client connection. In Django settings, the configured value of `CONN_MAX_AGE` parameter is the default 0, which means Django opens a new database connection for every query - this means that the observed failures cannot be related to RDS proxy's idle client connection timeout setting, which we have set to 30 minutes.
To deal with this issue, we have implemented retries on the service mesh level (Istio). However, we would like to know more about the root cause of the failures and why we have seen an increased frequency of them during the last month - this almost never happened previously.
Looking at the proxy and the database metrics in Cloudwatch, it doesn't look like there was increased traffic during the failures. Nevertheless, could the proxy close a client connection during a scaling operation? How can we get more insight into RDS Proxy internal operations? Turning on Enhanced Logging keeps it enabled only for 24 hours and there is no guarantee that the error will occur during that time window - we are also a bit nervous enabling it on production since it can slow down performance.
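For completeness, this is how we would toggle Enhanced Logging from the CLI if we do decide to capture a window (the proxy name is a placeholder):
```
# Enable debug/enhanced logging on the proxy (RDS auto-disables it after 24h)...
aws rds modify-db-proxy --db-proxy-name my-proxy --debug-logging
# ...and turn it off again once done.
aws rds modify-db-proxy --db-proxy-name my-proxy --no-debug-logging
```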
Team, I am building a CloudFormation stack that creates an EKS cluster and then, after cluster creation, deploys Fluent Bit into the cluster using the "AWSQS::Kubernetes::Helm" resource type.
```
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  pEKSClusterName:
    Description: Name of the EKS Cluster
    Type: String
    Default: EKSCluster
  VPCID:
    Description: VPC ID
    Type: AWS::EC2::VPC::Id
    AllowedPattern: ".+"
Resources:
  fluentbitagent:
    Type: "AWSQS::Kubernetes::Helm"
    Properties:
      TimeOut: 10
      ClusterID: !Ref pEKSClusterName
      Name: fluent-bit
      Namespace: aws-cloudwatch
      Repository: https://aws.github.io/eks-charts
      Chart: eks/aws-for-fluent-bit
      Values:
        image.repository: !FindInMap [RegionMap, !Ref "AWS::Region", cwrepo]
      ValueYaml: !Sub
        - |
          clusterName: ${ClusterName}
          serviceAccount:
            create: false
            name: aws-logs
          region: ${AWS::Region}
          vpcId: ${VPCID}
        - ClusterName: !Ref pEKSClusterName
          VPCID: !Ref VPCID
Mappings:
  RegionMap:
    us-east-1:
      cwrepo: public.ecr.aws/aws-observability/aws-for-fluent-bit
```
I want to pass custom Helm values for Fluent Bit - for example, FluentBitHttpPort='2020'. TIA :-)
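A rough sketch of how I've been trying to find the right chart key to mirror into the Values map - the exact key name is an assumption to be confirmed against the chart's values.yaml:
```
# Discover which chart value controls the HTTP port, then trial it with helm;
# the same dotted key would then go into the template's Values map.
helm repo add eks https://aws.github.io/eks-charts
helm show values eks/aws-for-fluent-bit | grep -i -n port
# The key name below is a placeholder taken from the grep output above:
helm upgrade --install fluent-bit eks/aws-for-fluent-bit \
  --namespace aws-cloudwatch --set <discovered.key>=2020
```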
I have an Angular and a Spring Boot application in an EKS cluster. Spring Boot connects to RDS in a private subnet in the same VPC as my cluster. I have created one ALB ingress controller for my two deployment services; my frontend is at http://albdns/health and my backend is at http://albdns/user/app. How do I enable communication between the backend and frontend?
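A minimal sketch of the shape I have in mind - a single ALB Ingress routing both services on the same host, so the frontend can call the backend by path (service names, ports, and paths are assumptions):
```
# Apply one ALB Ingress that fans out to both services by path.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /user
            pathType: Prefix
            backend:
              service:
                name: backend-svc      # placeholder service name
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc     # placeholder service name
                port:
                  number: 80
EOF
```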
I followed the instructions at https://aws.amazon.com/blogs/containers/introducing-amazon-cloudwatch-container-insights-for-amazon-eks-fargate-using-aws-distro-for-opentelemetry/ to deploy Container Insights on EKS Fargate, but nothing shows up in the CloudWatch -> Container Insights dashboard. Is it supported on EKS Fargate?
I also tried to deploy the CloudWatch agent for Prometheus on EKS Fargate by following https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-Setup.html. I still could not see anything in the CloudWatch -> Container Insights dashboard; it says "You have not enabled insights on your containers".
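Some sanity checks that might narrow this down - a sketch, with pod and namespace names to be adapted to whatever the linked instructions actually created:
```
# Is the collector running at all, and is it exporting without errors?
kubectl get pods --all-namespaces | grep -i -e adot -e otel -e cloudwatch
kubectl logs <collector-pod> -n <collector-namespace> | grep -i error
# Are any Container Insights metrics arriving in CloudWatch?
aws cloudwatch list-metrics --namespace ContainerInsights --region us-east-1
```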