All Questions
We have a load balancer serving traffic in the us-east-1 region. To improve latency for European users, I was thinking of using Global Accelerator's anycast and creating another load balancer in a European region (e.g. eu-central-1). To save costs, I would like not to duplicate the infrastructure in the new region; basically, I want to redirect traffic from the European load balancer to the machines running in us-east-1.
I would like to clarify the following points:
1. Does it make sense to use Global Accelerator in such a scenario? My reasoning is that with anycast, a European user will be directed to an AWS entry point closer to their location, and from that point on the traffic travels inside the AWS backbone (which should be faster than reaching the US entry point over the public internet).
2. Is it possible to redirect traffic from a load balancer in one region to machines (a target group) in another region?
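For context on point 1, the accelerator setup can be sketched with the CLI. Since Global Accelerator moves users onto the AWS backbone at the edge location nearest to them regardless of where the endpoint group lives, a single endpoint group in us-east-1 pointing at the existing load balancer may already capture most of the latency win without a second ALB. This is only a sketch; every name and ARN below is a placeholder:

```shell
# Sketch: a single accelerator fronting the existing us-east-1 ALB.
# <accelerator-arn>, <listener-arn> and <alb-arn> are placeholders.
aws globalaccelerator create-accelerator \
    --name eu-latency-accel --ip-address-type IPV4
aws globalaccelerator create-listener \
    --accelerator-arn <accelerator-arn> \
    --protocol TCP --port-ranges FromPort=443,ToPort=443
aws globalaccelerator create-endpoint-group \
    --listener-arn <listener-arn> \
    --endpoint-group-region us-east-1 \
    --endpoint-configurations EndpointId=<alb-arn>,Weight=128
```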
Hi,
On the free tier, I created IAM roles under my root user, logged in with those IAM roles, and created an EC2 instance and a security group for it. I also created a DynamoDB table. But when I copy the public IPv4 URL into the Chrome/Mozilla browser, I get a reload error. What might be wrong, and what is the solution?
Note: the operating system I selected is Windows Server (free tier), FYI.
Is it possible to get a renewal at a later time?
I didn't have time to use it, and I would like to use it later when I can.
I know it's somewhat unreasonable; I shouldn't have registered when I didn't have the time to use it.
Thank you in advance.
I have installed the latest version of the AWS CLI. When I try to open the installed CLI, a window pops up for a second and disappears instantly. I don't know what the possible reason for that is.
Here is the output when I enter the command `aws --version`:
aws-cli/2.11.8 Python/3.11.2 Windows/10 exe/AMD64 prompt/off
N.B.: I am using Windows 10 Pro Education (64-bit).
The action failed because either the artifact or the Amazon S3 bucket could not be found. Name of artifact bucket: codepipeline-ap-south-1-521473578238. Verify that this bucket exists. If it exists, check the life cycle policy, then try releasing a change.
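One quick check the error message itself suggests (a sketch; it assumes the current credentials are allowed to read the bucket) is verifying the artifact bucket from the message actually exists and is reachable:

```shell
# Sketch: head-bucket returns successfully only if the bucket exists
# and the caller has permission to access it.
aws s3api head-bucket --bucket codepipeline-ap-south-1-521473578238
```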
I tried to install AWS CLI v2 on a Raspberry Pi 4 Model B+ with Raspbian GNU/Linux 10, following the steps below.
But I ran into a `/usr/local/bin/aws: No such file or directory` error when checking the AWS CLI version with the `aws --version` command.
Is it possible to install AWS CLI v2 on a Raspberry Pi 4 Model B+ with Raspbian OS for ARM64?
Install step
```
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin
```
Error message
```
./aws/install: 78: ./aws/install: /home/pi/aws/dist/aws: not found
You can now run: /usr/local/bin/aws --version
```
```
$ aws --version
/usr/local/bin/aws: No such file or directory
```
Supplemental information for further analysis follows.
```
$ uname -a
Linux rapsberrypi4 6.1.20-v8+ #1638 SMP PREEMPT Tue Mar 21 17:16:29 GMT 2023 aarch64 GNU/Linux
$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
```
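One possible angle (an assumption, not a confirmed diagnosis): "No such file or directory" on a binary that is present usually means the dynamic loader for that binary's architecture is missing, and Raspbian 10 ships a 32-bit (armhf) userland even under a 64-bit kernel like the `v8+` one in the `uname` output above. Comparing kernel and userland architectures shows whether the aarch64 installer was the wrong pick:

```shell
# Compare kernel vs. userland architecture. If the kernel reports aarch64
# but the userland is armhf, the aarch64 AWS CLI build cannot run and an
# armhf route (e.g. pip) would be needed instead.
uname -m
# dpkg is Debian-specific; guard in case it is absent
command -v dpkg >/dev/null 2>&1 && dpkg --print-architecture || echo "dpkg not available"
```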
I want to prevent my Elastic Beanstalk application from triggering degraded health when there are 404 responses.
This has been [answered on StackOverflow](https://stackoverflow.com/a/38233065/3130281), and I can indeed see the setting:

However, once I click the "Edit" button, this setting is nowhere to be found:

If I switch over to the "new Elastic Beanstalk console", that setting is missing there as well.
How do I turn off health degradation on 404 responses?
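Console aside, the setting from the linked answer can also be applied from the CLI (a sketch only: the environment name is a placeholder, and the `ConfigDocument` keys assume the enhanced-health rules format shown in that answer):

```shell
# Sketch: disable the 4xx rule in enhanced health reporting.
# "my-env" is a placeholder environment name.
aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --option-settings Namespace=aws:elasticbeanstalk:healthreporting:system,OptionName=ConfigDocument,Value='{"Version":1,"Rules":{"Environment":{"Application":{"ApplicationRequests4xx":{"Enabled":false}}}}}'
```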
After compiling the RTL, I can get a DCP. Then an AFI is generated using the `aws ec2 create-fpga-image` API. I wonder whether a bitstream is generated during this process, and if so, whether the bitstream is encrypted.
Hi AWS,
I have created an EC2 instance and its key pair using Terraform. The code is:
```
resource "aws_instance" "test_ec2_instance_production" {
  ami                         = var.ami_id
  instance_type               = var.instance_type
  subnet_id                   = aws_subnet.public_subnet.0.id
  vpc_security_group_ids      = [aws_security_group.test_security.id]
  key_name                    = var.generated_key_name
  associate_public_ip_address = true
  monitoring                  = true

  tags = {
    Name = "${var.default_tags.project_name}-${var.default_tags.environment}-ec2-instance"
  }
}

// Create a key pair for the EC2 instance
resource "tls_private_key" "prod_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated_key" {
  key_name   = var.generated_key_name
  public_key = tls_private_key.prod_key.public_key_openssh

  provisioner "local-exec" {
    command = <<-EOT
      echo '${tls_private_key.prod_key.private_key_pem}' > test-prod-keypair.pem
      chmod 400 test-prod-keypair.pem
    EOT
  }
}
```
I generated the keys using the command `ssh-keygen -t rsa -m PEM`.
Now I am trying to provide the private key in the SSH server configuration of Jenkins, and I am getting this error: **jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection Message [Auth fail]**
I am also unable to log in to the EC2 instance over SSH, as the key is broken, and I get this error:
**ec2-user@ec2-x-xxx-xx-xxx.us-east-2.compute.amazonaws.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic)**
The issue is that this is a production environment and the key is broken. Is there any way to replace the key with a new one without terminating the instance? Further down the line I need a proper RSA key that I can add to the Jenkins SSH remote host to build my pipeline, and as you know, Jenkins doesn't accept the OpenSSH key format.
I also need to know the steps to generate the RSA key and to copy it into the .pem file that we will use for the SSH connection to EC2. Please help!
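On the key-generation part, a minimal sketch (the file name is arbitrary): `-m PEM` is the flag that makes `ssh-keygen` emit the classic `BEGIN RSA PRIVATE KEY` PEM header that the Jenkins publish-over-SSH plugin expects, rather than the newer OpenSSH format:

```shell
# Sketch: generate a 4096-bit RSA key in PEM format with no passphrase.
# "test-prod-keypair" is an arbitrary file name.
ssh-keygen -t rsa -b 4096 -m PEM -f test-prod-keypair -N ""
head -n 1 test-prod-keypair    # PEM keys start with "-----BEGIN RSA PRIVATE KEY-----"
chmod 400 test-prod-keypair    # SSH refuses private keys with loose permissions
# the matching public key, to be registered on the instance:
cat test-prod-keypair.pub
```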
How do I open port 25 from my instance?
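If this is about inbound SMTP, a security group rule is the usual mechanism (a sketch; the group ID is a placeholder). Note that *outbound* port 25 from EC2 is separately throttled by AWS by default and requires a removal request to AWS, which no security group rule can change:

```shell
# Sketch: allow inbound TCP 25 on the instance's security group.
# sg-0123456789abcdef0 is a placeholder group ID.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 25 \
    --cidr 0.0.0.0/0
```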
I configured the ingress controller using the following configuration:
```
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user
  namespace: frontend
  annotations:
    alb.ingress.kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  # ingressClassName: alb
  rules:
    - host: "app-dev.marcelo.ai"
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: user-app
                port:
                  number: 80
```
When checking the logs, I am getting the following error:
```
{"level":"error","ts":1680300069.0612311,"logger":"controller.ingress","msg":"Reconciler error","name":"user","namespace":"frontend","error":"ValidationError: 1 validation error detected: Value 'app**' at 'tags.2.member.value' failed to satisfy constraint: Member must satisfy regular expression pattern: ^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$\n\tstatus code: 400, request id: 8c37758c-ba2d-4fea-825b-62f60df0a426"}
```
I have an existing virtual interface attached to a Direct Connect connection.
When I try to create a new virtual interface, I get this error:
"Exceeded the maximum number of virtual interfaces on [Connection_ID]. The limit is 1"
How do I increase this limit?
It is very strange, because this is what I see in the documentation:
"You can create 50 VIFs per Direct Connect connection, allowing you to connect to a maximum of 50 VPCs (one VIF provides connectivity to one VPC). There is one BGP peering per VPC."
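One hedged explanation for the discrepancy: the 50-VIF figure applies to dedicated Direct Connect connections, while hosted connections (provisioned through a partner, typically sub-1G) allow only a single VIF. Inspecting the connection can help tell the two apart (a sketch; the connection ID is a placeholder):

```shell
# Sketch: describe the connection; a hosted connection typically shows a
# partnerName and sub-1G bandwidth, which would explain the 1-VIF limit.
aws directconnect describe-connections --connection-id dxcon-EXAMPLE
```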