Questions tagged with Amazon EC2


I am new to launching EC2 servers. I set up the AMI and then started the server, but shortly afterwards I saw a second server starting up that calls itself a DockerReact-env server. What is DockerReact-env and why is it starting up?
1 answer · 0 votes · 11 views · asked 18 days ago
Hello Team, I have migrated from CEDR to DRS and launched a test server successfully. Now I want to perform reverse data replication from the recovery instance, but it is failing with the error below: ![Enter image description here](/media/postImages/original/IM4hvA3jI4RGKuTG3j7WNBDg) I have checked whether the DRS agent is running on the recovery instance, and the output is: ![Enter image description here](/media/postImages/original/IM3kpUP0JRQHWFHmDd-hw8kA) I am able to reach the internet, and I have configured the DRS replication settings for my target Region accordingly. Please guide me on fixing this issue. Do I need to install the DRS agent on the recovery instance for reverse replication? I appreciate your help. Thank you.
0 answers · 0 votes · 17 views · asked 18 days ago
We have created a new RDS database and have its public DNS endpoint for connecting. How can I find its private IP so that I can connect from within the AWS network in the same zone?
2 answers · 0 votes · 33 views · asked 18 days ago
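A minimal sketch for the RDS question above: from an EC2 instance inside the same VPC, the RDS endpoint's DNS name resolves to the database's private IP, so a simple lookup is usually enough (the endpoint name below is a hypothetical placeholder).

```
# Run from an EC2 instance in the same VPC; the endpoint name is a placeholder.
dig +short mydb.abc123example.us-east-1.rds.amazonaws.com

# Alternatively:
nslookup mydb.abc123example.us-east-1.rds.amazonaws.com
```

From outside the VPC the same name only resolves to a public IP if the database is marked publicly accessible, which is why resolving it from inside the network is the reliable check.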
Hello, we have two app servers in the us-east-1b AZ. As part of the DR plan, the app team wants redundancy across Availability Zones, so they want to move one of the app servers from one AZ to another (from us-east-1b to us-east-1c) in the same Region. As per my understanding, I will perform the steps below:

1. Shut down the existing app server.
2. Take an AMI backup of the server.
3. Use that AMI to launch a new server in the other AZ by selecting a subnet from that AZ.
4. Complete the move.

My questions are: while I perform these steps, does AWS copy the entire AMI to the other AZ? And will the new server (both compute and EBS volumes) run in the new AZ, so that, as required, one server runs in us-east-1b and the other in us-east-1c? Note: the app team does not want Region-level redundancy.
2 answers · 0 votes · 38 views · asked 18 days ago
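A minimal AWS CLI sketch of the AMI-based move described in the question above; all IDs, names, and the instance type are hypothetical placeholders, and the new instance simply needs a subnet that lives in us-east-1c.

```
# 1. Create an AMI from the existing (stopped) app server.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "app-server-pre-az-move" \
  --description "Backup before moving to us-east-1c"

# 2. Launch a new instance from that AMI into a subnet in us-east-1c.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.large \
  --subnet-id subnet-0abc1234def567890 \
  --security-group-ids sg-0abc1234def567890 \
  --key-name my-key-pair
```

An AMI is stored at the Region level rather than per AZ, so no separate copy step is needed; the new instance and the EBS volumes created from the AMI's snapshots will live entirely in the AZ of the chosen subnet.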
I've installed CVAT on the EC2 instance, but when I go to the public IP address nothing comes up. I copied and pasted the commands from the CVAT installation guide, so it should be pretty straightforward. Any advice, please?
1 answer · 0 votes · 34 views · asked 19 days ago
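A short troubleshooting sketch for the CVAT question above, assuming the standard Docker Compose install; the CVAT_HOST variable and port 8080 reflect CVAT's default compose setup (an assumption here), and the instance's security group must also allow the port.

```
# Set the host CVAT should serve on before bringing the stack up (placeholder IP).
export CVAT_HOST=<ec2-public-ip>
docker compose up -d

# Confirm the containers are running and something is listening on the expected port.
docker compose ps
sudo ss -tlnp | grep 8080
```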
Hello, I have an on-demand c6g.xlarge EC2 instance running, and I have decided to purchase a Standard RI or an EC2 Instance Savings Plan. Before buying, I would like to know a few things:

1. Which of them allows changing the OS in the future?
2. Does either of them allow changing the instance family, from c6g to c7g or from c6g to c6a?
3. If I buy a c6g.xlarge Standard RI for one year today and want to resize it to 2xlarge after, say, 6 months (as per the AWS docs, I can do this by buying another c6g.xlarge; say I buy it for 3 years), what will be the validity of the resulting c6g.2xlarge RI: 6 months, or another three years from the day of the later purchase?
4. What happens with an EC2 Instance Savings Plan in the same scenario as question 3?

Thanks
2 answers · 1 vote · 23 views · asked 19 days ago
![t2.micro](/media/postImages/original/IMEQAisvnPRha7WAm-gHo6Qw) [The official docs only say that t2.micro network performance is "Low to Moderate"](https://aws.amazon.com/ec2/instance-types/), which I find very confusing. I want to know the specific amount: 1 Mbps, 1 Gbps, or something else? I ask because I want to do TCP optimization and tune my Debian server, like [this tutorial](https://cloud.google.com/architecture/tcp-optimization-for-network-performance-in-gcp-and-hybrid) and [this tutorial](https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-linux-ec2).
5 answers · 0 votes · 73 views · asked 19 days ago
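The docs the question links only give the qualitative label, so one practical approach is to measure throughput directly. A rough iperf3 sketch, assuming two instances in the same VPC with the benchmark port open in their security groups:

```
# On the "server" instance (Debian/Ubuntu shown; open TCP 5201 in the security group):
sudo apt-get update && sudo apt-get install -y iperf3
iperf3 -s -p 5201

# On the "client" instance: 30-second test with 4 parallel streams (placeholder IP).
iperf3 -c <server-private-ip> -p 5201 -t 30 -P 4
```

This mirrors the approach in the second linked tutorial and gives a concrete baseline to compare against before and after any TCP tuning.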
I'm following this [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-access-aws-services/) to make ECS tasks access other AWS services using task role credentials. When I run `curl http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` it returns a JSON document that looks something like this:

```
{
    "RoleArn": "arn:aws:iam::718304...",
    "AccessKeyId": "ASIA2...",
    "SecretAccessKey": "BNZD...",
    "Token": "IQoJ...",
    "Expiration": "2023-03-03T17:56:46Z"
}
```

My ECS instances are long running, which means they will outlive the expiration timestamp in the result above. **Do I need to poll that endpoint regularly to avoid expired credentials? If so, is there a way to extend the credentials' lifetime?**
1 answer · 0 votes · 37 views · asked 19 days ago
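A small sketch related to the credentials question above: the AWS SDKs' default credential provider chain re-reads this container endpoint and picks up fresh keys automatically before they expire, so manual polling is normally only needed if you cache the raw JSON yourself. To inspect the current expiry from inside the container (assumes jq is installed, which is not part of the original setup):

```
# Fetch the task-role credentials and print only their expiration timestamp.
CREDS_URI="http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"
curl -s "$CREDS_URI" | jq -r '.Expiration'
```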
Hello, for my work I need to simulate phishing attacks for my clients. This is an activity that I am contractually authorized to do. I have several questions regarding the use of Amazon SES and its services:

- Is such activity allowed by Amazon SES (knowing that the account will never be used to send spam to anyone other than those designated by the client company)?
- Is it possible to have a dedicated IP address even for a low volume of email (a fixed dedicated IP is necessary so that customers can whitelist it in their corporate anti-spam system)?
- Will sending only a few e-mails be a problem for message delivery (it doesn't matter if a message takes a few minutes to arrive, as long as it arrives)?
- Amazon blocks SMTP port 25 on EC2 instances by default. If it is not possible to use the SES service, is my activity allowed and legal enough to request unblocking of SMTP port 25 on the EC2 instance that will send the simulated phishing campaigns to my customers?

Thank you!
2 answers · 0 votes · 22 views · asked 19 days ago
I have one t3.micro instance running a sample Java application. Java is consuming approximately 60% of memory and other processes use around 20%, so in total the instance sits at about 80% memory utilization. When I looked at the instance it was flagged as under-provisioned, and when I dug into the details in the metrics tab I saw that memory is under-provisioned: it shows two graphs, where the current one is at 85-87% and option 1 is at 40-43%. I am confused because I have another instance with the same setup that is not shown as under-provisioned. What should I do now? Should I upgrade it?
0 answers · 0 votes · 7 views · asked 19 days ago
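For the memory question above, a quick sketch to cross-check what the instance itself reports against the under-provisioned finding; note that memory metrics only reach CloudWatch when the CloudWatch agent is installed, which may explain differences between otherwise identical instances.

```
# Overall memory usage on the instance, in MB.
free -m

# Top processes by memory share.
ps -eo pid,comm,%mem --sort=-%mem | head -n 10
```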
I have a Django app which uses Celery beat to scan the DB and trigger tasks accordingly. I want to deploy this to Elastic Beanstalk, but simply applying leader_only to my Celery beat invocation won't be enough: we need a way to ensure the beat instance is not killed during autoscaling events. So far I've found the following options online:

1. Run a separate EC2 instance that runs Celery beat. Not ideal, but I could make this a cheap instance since the functionality required is so simple and lightweight. I assume that if I point this at an SQS queue and have my workers pulling from that queue, everything will work fine. However, it's not clear to me how to have this instance discover the tasks from my Django app short of deploying the app again to the second instance and having that beat instance interact with my queue.
2. Use some sort of leader-election Lambda, as described here (https://ajbrown.org/2017/02/10/leader-election-with-aws-auto-scaling-groups.html), for my EB Auto Scaling group. This seems overly complicated; to implement it, I'm guessing the idea is to have a script in my container commands that checks whether the instance is the leader (as assigned by the leader tag in the tutorial above) and only executes Celery beat if so.
3. Ditch SQS and use an ElastiCache Redis instance as my broker, then install the RedBeat scheduler (https://github.com/sibson/redbeat) to prevent multiple instances of a beat service from running. I assume this wouldn't affect the tasks it spawns though, correct? My beat tasks spawn several tasks of the same 'type' with different arguments (I'd appreciate a sanity check on this if possible).

My question is: can anyone help me assess the pros and cons of these implementations in terms of cost and functionality? Is there a better, more seamless way to ensure that Celery beat runs on only one instance while my Celery workers scale with my autoscaling infrastructure? I'm an AWS newbie, so I would greatly appreciate any help!
0 answers · 0 votes · 11 views · asked 19 days ago
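A minimal sketch of the RedBeat option from the question above, assuming Redis (e.g. ElastiCache) is configured as the broker and `myproject` is a hypothetical Celery app module; the Redis URL used for the lock comes from the Celery/RedBeat settings. RedBeat holds a Redis lock so only one beat process schedules at a time, while the tasks it enqueues are still consumed by however many workers are running.

```
# Install Celery with the RedBeat scheduler (versions not pinned here).
pip install celery redbeat

# Run beat with RedBeat as the scheduler; the Redis lock prevents duplicate schedulers.
celery -A myproject beat -S redbeat.RedBeatScheduler --loglevel=info
```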
I am trying to run batch files to stop/start console apps within a Windows EC2 instance upon deployment using CodeDeploy. For testing, I wrote a batch script that runs a program which produces a simple text file. This batch script is listed under the ApplicationStart hook of my appspec.yml file. For some reason the text file is not being produced; my assumption is that the script isn't being run. I'm not sure how to resolve this issue.

appspec.yml:

```
version: 0.0
os: windows
files:
  - source: /
    destination: /source/app
hooks:
  ApplicationStart:
    - location: .\application_start.bat
      timeout: 300
```

application_start.bat:

```
START C:\source\app\ConsoleApp1\bin\Release\ConsoleApp1.exe
```

I checked the script after deploying it and was able to run it manually just fine. What might I be missing?
0 answers · 0 votes · 37 views · asked 19 days ago
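A small sketch for the CodeDeploy question above: the deployment's lifecycle-event status usually shows whether ApplicationStart ran at all and what the script returned. The deployment and instance IDs below are placeholders to be taken from your own deployment.

```
# Overall deployment status.
aws deploy get-deployment --deployment-id <deployment-id>

# Per-instance lifecycle events (ApplicationStart success/failure and any script error).
aws deploy get-deployment-target --deployment-id <deployment-id> --target-id <instance-id>
```

The CodeDeploy agent's script logs on the instance itself are another place to confirm whether the hook actually executed.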