All Questions

Content language: English

Sort by most recent

Browse through the questions and answers listed below or filter and sort to narrow down your results.

How to open port 25 from an instance
1
answers
0
votes
16
views
asked a day ago
Configured the ingress controller using the following configuration:

```
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user
  namespace: frontend
  annotations:
    alb.ingress.kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  # ingressClassName: alb
  rules:
    - host: "app-dev.marcelo.ai"
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: user-app
                port:
                  number: 80
```

When checking the logs, I am getting the following error:

```
{"level":"error","ts":1680300069.0612311,"logger":"controller.ingress","msg":"Reconciler error","name":"user","namespace":"frontend","error":"ValidationError: 1 validation error detected: Value 'app**' at 'tags.2.member.value' failed to satisfy constraint: Member must satisfy regular expression pattern: ^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$\n\tstatus code: 400, request id: 8c37758c-ba2d-4fea-825b-62f60df0a426"}
```
0
answers
0
votes
15
views
asked a day ago
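The ValidationError in the log above is about the rejected tag value `app**`, not the Ingress spec itself. A quick local way to check candidate tag values against that constraint is a sketch like the following; note that Python's `re` module has no `\p{L}`/`\p{Z}`/`\p{N}` classes, so this approximates them with `\w` and `\s`, and the function name is my own:

```python
import re

# Approximation of the pattern from the error log:
# ^([\p{L}\p{Z}\p{N}_.:/=+\-@]*)$  (letters, separators, digits, _.:/=+-@)
TAG_VALUE_RE = re.compile(r"^[\w\s.:/=+\-@]*$")

def is_valid_tag_value(value: str) -> bool:
    """Return True if the value satisfies the tag-value character constraint."""
    return TAG_VALUE_RE.fullmatch(value) is not None

print(is_valid_tag_value("app**"))  # → False: '*' is not an allowed character
print(is_valid_tag_value("app"))    # → True
```

This suggests looking for a `*` in whatever is supplying the controller's tags (e.g. an `alb.ingress.kubernetes.io/tags` annotation or controller default tags).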
I have an existing virtual interface connected to a Direct Connect connection. When I try to create a new virtual interface, I get this error: "Exceeded the maximum number of virtual interfaces on [Connection_ID]. The limit is 1". How do I increase this limit? It is very strange, because this is what I see in the documentation: "You can create 50 VIFs per Direct Connect connection, allowing you to connect to a maximum of 50 VPCs (one VIF provides connectivity to one VPC). There is one BGP peering per VPC."
2
answers
0
votes
20
views
2sb
asked a day ago
Hi, has anyone faced this problem while learning AWS with Terraform? I am following the book to practice Terraform and I am getting the error below:

```
curl: (7) Failed to connect to <<<removed Public IP Address of EC2 >> port 8080 after 49 ms: Couldn't connect to server
```

Please advise if there has been any recent change. My code is below:

```
provider "aws" {
  region = "us-east-2"
}

resource "aws_security_group" "instance" {
  name = "terraform-example-instance"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "example" {
  ami                    = "ami-0a695f0d95cefc163"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.instance.id]

  user_data = <<-EOF
    #!/bin/bash
    echo "Hello, World" > index.html
    nohup busybox httpd -f -p 8080 &
  EOF

  tags = {
    "Name" = "terraform-example"
  }
}
```
2
answers
0
votes
33
views
asked a day ago
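One common cause of `curl: (7)` in the example above is timing: `terraform apply` returns as soon as the instance launches, but the `user_data` script that starts `busybox httpd` runs during first boot and can take a minute or two. A hypothetical helper (names and retry counts are my own) that polls the endpoint instead of failing on the first attempt:

```python
import time
import urllib.request
import urllib.error

def wait_for_http(url: str, tries: int = 12, delay: float = 1.0) -> bool:
    """Poll `url` until it returns any HTTP response, or give up after `tries`."""
    for _ in range(tries):
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except (urllib.error.URLError, OSError):
            time.sleep(delay)
    return False
```

Usage would be something like `wait_for_http("http://<instance-public-ip>:8080", tries=30, delay=10)`. If it still never connects, the next things to check are the AMI (is it valid in us-east-2?) and whether `busybox` exists on that AMI.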
Trying to remove an ACL rule, I get the following 2 error messages. AWS WAF region: east-1. Error message 1: WAFInternalErrorException: AWS WAF couldn't perform the operation because of a system problem. Retry your request. Error message 2: ThrottlingException: Rate exceeded 1491/1500 WCUs. Note: on west-1, with the same rule set, I can add and remove as expected. I tried creating a new rule set, and after adding the rule I get the same error message in region east-1. Any suggestions?
0
answers
0
votes
7
views
asked a day ago
Hello, brand new EKS cluster, latest version. I followed the first example in this guide: https://docs.aws.amazon.com/eks/latest/userguide/cross-account-access.html

I created an OIDC identity provider on Account1 accepting requests from the EKS cluster on Account2. In the EKS cluster, my k8s ServiceAccount resource has an annotation eks.amazonaws.com/role-arn pointing to an IAM role in Account1. The application running in the pod is a .NET 6 app using the AWSSDK.DynamoDBv2 NuGet package to make DynamoDB queries. It worked for a while, until at some point I got this exception:

```
Amazon.Runtime.AmazonClientException: Error calling AssumeRole for role arn:aws:iam::AcccountNumber:role/EKS-ServiceAccount
 ---> Amazon.SecurityToken.Model.ExpiredTokenException: Token expired: current date/time 1680295159 must be before the expiration date/time 1680281898
 ---> Amazon.Runtime.Internal.HttpErrorResponseException: Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown.
```

Doing a kubectl describe on my pod, I see this information:

```
Environment:
  AWS_ACCESS_KEY_ID:
  AWS_SECRET_KEY:
  AWS_STS_REGIONAL_ENDPOINTS:   regional
  AWS_DEFAULT_REGION:           us-east-1
  AWS_REGION:                   us-east-1
  AWS_ROLE_ARN:                 arn:aws:iam::AcccountNumber:role/EKS-ServiceAccount
  AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
  /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mq27b (ro)
Volumes:
  aws-iam-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
```

I also found [this page](https://docs.aws.amazon.com/eks/latest/userguide/pod-configuration.html) mentioning the token should be renewed at 80% of its expiration time, and [this page](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html) with the minimum required SDK version. I can confirm I use AWSSDK.DynamoDBv2, AWSSDK.SecurityToken and AWSSDK.Core, all at versions later than that (3.7.100.14). I was expecting the EKS cluster to automatically renew the token from the OIDC provider. Why isn't it doing so?
0
answers
0
votes
13
views
Dunge
asked a day ago
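The two epoch timestamps in the exception above make the staleness easy to quantify; this is just arithmetic over the values in the post, with the 80% refresh threshold taken from the linked EKS docs:

```python
from datetime import timedelta

current = 1680295159     # "current date/time" from the exception
expiration = 1680281898  # "expiration date/time" from the exception

# How long the token had already been expired when the call failed.
stale_for = timedelta(seconds=current - expiration)
print(stale_for)  # → 3:41:01 (about 3.7 hours past expiry)

token_lifetime = 86400               # TokenExpirationSeconds on the pod
refresh_after = token_lifetime * 0.8 # refresh threshold per the linked docs
print(refresh_after)                 # → 69120.0 seconds
```

A token ~3.7 hours stale, against a 24-hour lifetime that should refresh at ~19.2 hours, points at the projected token file on disk not being re-read or re-projected, rather than a marginal clock-skew issue.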
Possibly related to https://repost.aws/questions/QUqYIZ6_LdQomBCbJz0_63Uw/jdbc-enforce-ssl-doesnt-work-for-cloudformation-type-aws-glue-connection

As described [here](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-glue-connection-connectioninput.html), JDBC_ENFORCE_SSL is an optional property when creating a Glue Connection. However, if this value is left unspecified, the created connection does not receive a default value of 'false', and any attempt to use the connection results in the following error:

```
JobName:ExampleGlueJob and JobRunId:jr_12345 failed to execute with exception Unable to resolve any valid connection (Service: AWSGlueJobExecutor; Status Code: 400; Error Code: InvalidInputException; Request ID: abcde-12345; Proxy: null)
```

Editing the connection and saving it via the Web GUI results in the `JDBC_ENFORCE_SSL: false` property being set on the connection, and it can then be used without further errors.

Example CFN template:

```
rGlueConnection:
  Type: 'AWS::Glue::Connection'
  Properties:
    CatalogId: !Ref 'AWS::AccountId'
    ConnectionInput:
      ConnectionType: JDBC
      ConnectionProperties:
        JDBC_CONNECTION_URL: !Ref pJDBCConnectionURL
        USERNAME: !Sub '{{resolve:secretsmanager:${pSecretsManagerName}:SecretString:username}}'
        PASSWORD: !Sub '{{resolve:secretsmanager:${pSecretsManagerName}:SecretString:password}}'
      Name: !Ref pGlueConnectionName
      PhysicalConnectionRequirements:
        SecurityGroupIdList: !Ref pSecurityGroupIds
        SubnetId: !Ref pSubnet
```

Connection after creation (no JDBC_ENFORCE_SSL specified; jobs with the connection attached fail to run):

```
ConnectionProperties:
  JDBC_CONNECTION_URL: jdbc:redshift://example.com:5439/example
  PASSWORD: 123
  USERNAME: abc
ConnectionType: JDBC
CreationTime: '2023-03-23T13:40:36.839000-07:00'
LastUpdatedTime: '2023-03-23T13:40:36.839000-07:00'
Name: ExampleConnection
PhysicalConnectionRequirements:
  SecurityGroupIdList:
    - sg-1234
  SubnetId: subnet-12345
```

Connection after opening and re-saving in the Web Console (JDBC_ENFORCE_SSL: 'false' specified; no error on job run):

```
ConnectionProperties:
  JDBC_CONNECTION_URL: jdbc:redshift://example.com:5439/example
  PASSWORD: 123
  USERNAME: abc
  JDBC_ENFORCE_SSL: 'false'
ConnectionType: JDBC
CreationTime: '2023-03-23T13:40:36.839000-07:00'
LastUpdatedTime: '2023-03-23T13:40:36.839000-07:00'
Name: ExampleConnection
PhysicalConnectionRequirements:
  SecurityGroupIdList:
    - sg-1234
  SubnetId: subnet-12345
```
0
answers
0
votes
15
views
asked a day ago
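Given the behavior described above, an obvious workaround is to stop relying on a default and set the property explicitly in the template. This is an untested sketch against the same example template; it only adds the key the Web Console writes when re-saving:

```
ConnectionInput:
  ConnectionType: JDBC
  ConnectionProperties:
    JDBC_CONNECTION_URL: !Ref pJDBCConnectionURL
    JDBC_ENFORCE_SSL: 'false'  # set explicitly; leaving it out reproduces the error above
    USERNAME: !Sub '{{resolve:secretsmanager:${pSecretsManagerName}:SecretString:username}}'
    PASSWORD: !Sub '{{resolve:secretsmanager:${pSecretsManagerName}:SecretString:password}}'
```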
In my DynamoDB stream object, I have a field that is an array of strings, i.e. attribute type SS:

```
"FOO": {"SS": ["hello"]},
```

I want to filter out the event if any string in that array matches one of "x", "y", or "z" (placeholder values). I can't figure out the correct filter pattern syntax here, but it does seem possible based on the answer in https://repost.aws/questions/QUgqGseyltTceWNYpMF_2tXw/how-to-create-dynamo-db-stream-event-filter-for-a-field-from-array-of-objects. Here's what I've tried:

```
"FOO": {
  "SS": {
    "anything-but": ["x", "y", "z"]
  }
}
```

Can anyone advise on what the filter pattern should look like?
1
answers
0
votes
16
views
asked 2 days ago
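It may help to model the two candidate semantics locally. In EventBridge-style filter patterns, a pattern applied to an array generally matches if any element matches, which is different from the "drop if any element is blocked" behavior the question asks for. This is a minimal local model, not the Lambda filter engine; the function names and the `BLOCKED` set are illustrative:

```python
BLOCKED = {"x", "y", "z"}  # the placeholder values from the question

def any_element_passes(ss):
    """EventBridge-style array matching: the anything-but pattern matches
    if ANY element is outside the blocked list."""
    return any(v not in BLOCKED for v in ss)

def wanted_behavior(ss):
    """What the question asks for: keep the event only if NO element
    is in the blocked list."""
    return all(v not in BLOCKED for v in ss)

print(any_element_passes(["x", "hello"]))  # → True: "hello" satisfies anything-but
print(wanted_behavior(["x", "hello"]))     # → False: "x" is blocked
print(wanted_behavior(["hello"]))          # → True
```

The divergence on `["x", "hello"]` shows why the attempted pattern can still let mixed arrays through even if the syntax is accepted.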
Hi, I followed a YouTube video to set up an OpenVPN EC2 instance and tunnel my home network through it, and it was working fine. Now, a month later, the VPN server is still running fine and I see the payment amount increasing, with more forecasted for next month, but when I log into my AWS account and go to EC2 I don't see any instances running: 0 instances. Yet the VPN works fine. So I wonder how I can be sure that the VPN I am using is mine and not a hacker's. And why is the bill adding up? Any help for this novice will be appreciated. Thanks, Repost
1
answers
0
votes
7
views
asked 2 days ago
I tried to find a solution but didn't find a response for my case. I already have a Compute Environment, Job Queue, and Job Definition created with the required configuration, and I can successfully submit a job manually; it works as wanted. My Job Queue and Compute Environment go DISABLED automatically when they are idle; I think that's how AWS Batch works to optimize costs (maybe?). I configured a rule (cron) in EventBridge to submit a job (using the job queue and job definition mentioned above), and it works fine, but I have to manually ENABLE the Compute Environment and Job Queue every time, which is not what I want. I thought of creating another rule in EventBridge to run a Lambda function that enables my resources before submitting the job, but that seems overengineered for such a simple task. I think I'm missing something here; can you give me suggestions, or correct me if I'm missing something in this simple use case? Thanks!
1
answers
0
votes
6
views
asked 2 days ago
Our SageMaker Studio service is broken in one of our AWS accounts in some deep way. Our original domain hit an "Update_Failed" status when attempting to attach a new custom Docker image. Using describe-domain via the CLI, we see that the "FailureReason" is just "InternalFailure". This issue also somehow affects brand new, entirely separate SageMaker Studio domains that we create. It occurs only in our one (data science development) AWS account; repeating the process in other accounts works as expected.
0
answers
0
votes
15
views
Everett
asked 2 days ago
Hello there, AWS team! I'm looking for the correct way to provision my devices to AWS IoT Core. It seems provisioning by claim can do the trick, but I'm using an ESP32 with the Arduino platform, which means I don't have access to ESP-IDF. Is it possible to do provisioning by claim in the Arduino environment? If so, can you share a link or documentation about it? Thanks a lot in advance :)
1
answers
0
votes
18
views
asked 2 days ago