Questions tagged with Amazon EC2
Content language: English
Sort by most recent
Hi AWS, I know this might not be the right question for the community here, but the point is: my VPC has an IPv4 CIDR block of 172.31.0.0/16 and the Atlas VPC CIDR block is 10.8.0.0/21. The peering connection is available, and I have even allowed access from anywhere in the MongoDB Atlas UI for the cluster, but I am still experiencing the same issue. The EC2 instance is in the VPC, in a public subnet.
I have tried every way possible, but the same issue persists. Please help.
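A typical first check for a peering path like the one above is whether the EC2 subnet's route table actually routes the Atlas CIDR through the peering connection. A sketch with the AWS CLI, using hypothetical resource IDs (only the two CIDRs come from the question):

```shell
# Traffic only flows across a peering if the subnet's route table has a
# route for the remote CIDR pointing at the peering connection.
VPC_CIDR="172.31.0.0/16"                 # EC2-side VPC (from the question)
ATLAS_CIDR="10.8.0.0/21"                 # Atlas-side VPC (from the question)
PEERING_ID="pcx-0123456789abcdef0"       # hypothetical peering connection ID
ROUTE_TABLE_ID="rtb-0123456789abcdef0"   # hypothetical route table of the EC2 subnet

# Does the route exist? (empty output means it is missing)
aws ec2 describe-route-tables --route-table-ids "$ROUTE_TABLE_ID" \
  --query "RouteTables[].Routes[?DestinationCidrBlock=='$ATLAS_CIDR']"

# If it is missing, add it:
# aws ec2 create-route --route-table-id "$ROUTE_TABLE_ID" \
#   --destination-cidr-block "$ATLAS_CIDR" \
#   --vpc-peering-connection-id "$PEERING_ID"
```

The security group's outbound rules must also allow TCP 27017 to the Atlas CIDR (outbound is open by default unless it was restricted), and the Atlas-side route table needs the mirror route back to 172.31.0.0/16.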
I have a .NET Windows application (an .exe) that I would like to run on my Windows EC2 instance.
I can run it on my computer at home, but on the instance it doesn't even load (no error or any message) when I try to run it.
The application connects to a remote API and database. (It has no publisher credentials.)
I am new to this. What are the general/most common settings I am missing here to get it to work? Both AWS and Windows Firewall settings, for example.
We have a CentOS Linux server in AWS that runs monitoring software. This server ran exceptionally well for a few years until a few weeks ago, when it started experiencing high loads even when sitting idle. The previous time this occurred, we troubleshot it, upgraded the software and the OS, and felt like it was potentially fixed. With a high load yet low CPU, low memory, and zero swap used, the assumption was maybe some disk I/O issue in the server farm that resolved itself.

The problem resurfaced late yesterday, and after digging more this evening, we noticed the CPU steal time ('st' in the top command) is absurdly high. CPU steal time is time the hypervisor, or other servers in the shared virtual environment, cause this system to spend waiting for CPU. If the system is simply rebooted, the problem remains. The only way we have resolved it, both times, is to stop the instance, wait a few minutes, and then start it again.

Is it possible the instance is moved to a different server farm *without* a resource hog when we perform the extended shutdown? Does this mean AWS has over-allocated resources? Are there any other ways to prevent this from happening again, beyond paying for a dedicated server? Any help would be appreciated!
```
top - 21:47:37 up 10 days, 1:30, 2 users, load average: 18.77, 11.27, 11.48
Tasks: 155 total, 5 running, 150 sleeping, 0 stopped, 0 zombie
%Cpu(s): 18.3 us, 4.1 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 77.6 st
KiB Mem : 3878444 total, 1144048 free, 910772 used, 1823624 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 2483948 avail Mem

top - 21:47:46 up 10 days, 1:30, 2 users, load average: 17.20, 11.18, 11.45
Tasks: 157 total, 9 running, 148 sleeping, 0 stopped, 0 zombie
%Cpu(s): 19.0 us, 4.2 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 76.7 st
KiB Mem : 3878444 total, 1120580 free, 934224 used, 1823640 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 2460496 avail Mem
```
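For reference, the stop/start cycle described above can be scripted with the AWS CLI; a full stop (unlike a reboot) releases the underlying host, so the instance typically comes back up on different hardware. The instance ID here is hypothetical:

```shell
# Stop, wait for the stopped state, then start again. A stop releases the
# underlying physical host; a reboot does not.
INSTANCE_ID="i-0123456789abcdef0"   # hypothetical instance ID

aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```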
My EC2 instance has gone down a few times in recent months. It works again every time after rebooting it. I am running an m5.2xlarge Ubuntu instance. The memory usage is 9%.
Any help regarding the possible cause would be most appreciated, thanks.

Hi,
my current setup:
* EC2 with ARM
* Docker installed in EC2
* Spring + Java app in one container
* MySQL in another container
When I run it all on the EC2 instance it works like a charm, but the problem occurs when I try to connect the MySQL storage to an attached EBS volume.
My docker run command for MySQL:
`docker run -d -p 3306:3306 -v /dev/xvdf/mysql:/var/lib/mysql:rw -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=erdeldb mysql:8`
When setting the volume as `/dev/sdf/mysql` I get an error saying `it is not a directory`. I also cannot open that directory in the console; `cd /dev/sdf/` returns the same `not a directory` error.
When setting the volume as `/dev/xvdf/mysql` I get a storage issue: not enough space.
When I check the storage of /dev/xvdf after I have attached the EBS volume, I see 4.0 MB.

I am not sure what I am doing wrong. I haven't deployed before, I'm just learning. Any input is appreciated, thanks.
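The paths above point at a raw block device: `/dev/xvdf` is the device node for the EBS volume, not a directory, which is why both bind mounts fail. A minimal sketch of the usual flow, assuming the volume is attached as /dev/xvdf and is brand new (mkfs destroys any existing data), with a hypothetical mount point:

```shell
# A block device needs a filesystem and a mount point before Docker can
# bind-mount a directory that lives on it.
MOUNT_POINT="/mnt/ebs"   # hypothetical mount point

sudo mkfs -t xfs /dev/xvdf              # one-time; DESTROYS existing data
sudo mkdir -p "$MOUNT_POINT"
sudo mount /dev/xvdf "$MOUNT_POINT"
sudo mkdir -p "$MOUNT_POINT/mysql"

# Bind-mount the mounted path, not the device node:
docker run -d -p 3306:3306 \
  -v "$MOUNT_POINT/mysql":/var/lib/mysql:rw \
  -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=erdeldb mysql:8
```

An `/etc/fstab` entry would be needed for the mount to survive a reboot.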
Hello, I'm trying to work through Exercise 3 (creation of an EC2 instance) as part of an online training. All instructions are described here:
https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DEV-AWS-MO-GCNv2/exercise-3-compute.html
I'm able to successfully create the EC2 instance, but the page which should be available is still unreachable.
I'm able to connect to the Linux server where the code below should be executed as part of instance creation (according to the instructions):
```
#!/bin/bash -ex
wget https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DEV-AWS-MO-GCNv2/FlaskApp.zip
unzip FlaskApp.zip
cd FlaskApp/
yum -y install python3 mysql
pip3 install -r requirements.txt
amazon-linux-extras install epel
yum -y install stress
export PHOTOS_BUCKET=${SUB_PHOTOS_BUCKET}
export AWS_DEFAULT_REGION=<INSERT REGION HERE>
export DYNAMO_MODE=on
FLASK_APP=application.py /usr/local/bin/flask run --host=0.0.0.0 --port=80
```
However, nothing from this is executed on the server.
When I try to execute it manually, one command at a time, I have to additionally use sudo and install pip3 individually, and amazon-linux-extras is still not executable.
In the end the page is still not reachable.
I think the provided instructions for this exercise are really outdated.
So, can anybody help me to successfully complete this exercise so that the page becomes reachable? What is the proper set of commands which have to be executed?
And why is none of them executed during boot, given that they are part of the user data box during instance creation?
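One note on the sudo question above: user data runs once, as root, at the instance's first boot via cloud-init, so sudo is not needed inside the script itself, and pasting user data into an already-running instance does nothing by default. cloud-init logs everything it ran, and the log usually shows exactly which line failed; a debugging sketch using the standard Amazon Linux log locations:

```shell
# Check what cloud-init actually did with the user data.
CLOUDINIT_LOG="/var/log/cloud-init-output.log"   # standard cloud-init output log
sudo tail -n 50 "$CLOUDINIT_LOG"

# cloud-init saves the user-data script locally; it can be re-run by hand
# as root for testing:
# sudo bash /var/lib/cloud/instance/scripts/part-001
```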
Thank you in advance for any answer.
Regards
I am having trouble setting up a working WireGuard VPN server on an EC2 instance. I created the `wg0.conf` file with the following contents:
```
[Interface]
Address = 10.10.0.1/24
ListenPort = 10001
PrivateKey = <server_private_key>
SaveConfig = false
PostUp = /etc/wireguard/helper/add_nat.sh
PostDown = /etc/wireguard/helper/del_nat.sh
[Peer]
PublicKey = <removed>
AllowedIPs = 10.10.0.2/32
```
The contents of `add_nat.sh`:
```
#!/bin/bash
IPT="/sbin/iptables"
IN_FACE="ens5" # NIC connected to the internet
WG_FACE="wg0" # WG NIC
SUB_NET="10.10.0.0/24" # WG IPv4 sub/net aka CIDR
WG_PORT="10001" # WG udp port
## IPv4 ##
$IPT -t nat -I POSTROUTING 1 -s $SUB_NET -o $IN_FACE -j MASQUERADE
$IPT -I INPUT 1 -i $WG_FACE -j ACCEPT
$IPT -I FORWARD 1 -i $IN_FACE -o $WG_FACE -j ACCEPT
$IPT -I FORWARD 1 -i $WG_FACE -o $IN_FACE -j ACCEPT
$IPT -I INPUT 1 -i $IN_FACE -p udp --dport $WG_PORT -j ACCEPT
```
Then I enabled IP forwarding by setting `net.ipv4.ip_forward=1` in `/etc/sysctl.conf`. I also allowed port 10001 over UDP using the command `ufw allow 10001/udp`, and I added that port rule to the inbound rules of the EC2 security group.
On my laptop I configured `wg0.conf` like so:
```
[Interface]
PrivateKey = <laptop_private_key>
Address = 10.10.0.2/24
DNS = 8.8.8.8
[Peer]
PublicKey = <server_public_key>
AllowedIPs = 10.10.0.0/24
Endpoint = <ec2_elastic_ip>:10001
PersistentKeepalive = 10
```
Trying to ping the server from my laptop results in 100% packet loss, and the same happens from the server side.
Is there something I am missing, or are there any errors in my configuration?
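A typical first round of server-side checks for a silent tunnel like this might look as follows (interface and port names taken from the scripts above; the commands are standard wg/sysctl/iptables/tcpdump usage):

```shell
WG_PORT="10001"   # from the [Interface] ListenPort above
IN_FACE="ens5"    # from add_nat.sh above

# 1. Is the tunnel up and is a handshake ever completing?
sudo wg show wg0   # "latest handshake" should update after a ping attempt

# 2. Is forwarding actually enabled at runtime? (sysctl -p reloads the file)
sysctl net.ipv4.ip_forward   # should print: net.ipv4.ip_forward = 1

# 3. Did the PostUp script really run? The MASQUERADE rule must be present.
sudo iptables -t nat -L POSTROUTING -n -v

# 4. Are the laptop's packets reaching the instance at all?
sudo timeout 10 tcpdump -c 5 -ni "$IN_FACE" udp port "$WG_PORT"
```

If step 4 shows no packets, the problem is upstream of WireGuard (security group, network ACL, or the endpoint address); if packets arrive but step 1 shows no handshake, it is usually a key mismatch.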
Hello, I need to solve this problem for my bachelor thesis. I want to run an Ubuntu AMI on EC2 and use it as a remote DHCP server. This has to stay within the free tier, so I can't use Elastic IPs or NAT. I'm running Ubuntu 22.04 LTS with a public IPv4 address assigned and an access security group configured. Should I use the EC2 public IPv4 address as the default gateway when configuring DHCP? In theory it should work with the IP helper address configured on the remote router (my local router). Do I need to worry about AWS only routing unicast? If you have any advice I will be grateful. Thank you very much for any answers.
I just created an AWS account for the purpose of using an AWS EC2 virtual machine at some point in the future.
However, I don't need to use it yet (probably not for the next 6-12 months). How can I pause the 12-month free tier until I am ready to use EC2?
Thanks,
We are currently using Amazon Elastic Compute Cloud t2.large with Windows Server 2012 R2. Microsoft support for Windows Server 2012 R2 ends October 10, 2023. Does that mean we'll need to upgrade our current AWS setup? If so, what is the cutoff date for when this needs to get done?
I have a sample Node.js application running on Elastic Beanstalk on a single instance, without a load balancer. I read through the Free Tier documents to make sure I am following the guidance to stay in the free tier, but I still keep getting these small charges despite the changes I have made.
I have a single t2.micro instance running. I have attached the breakdown of the billing below. For some reason I am also being charged per hour for running the t2.micro as a single instance.
Is there anything I am doing wrong? I know the charged amount is small, but it should still fall under the free tier. Any help will be appreciated.
I have attached screenshots of the EC2 dashboard and billing details below.
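Small residual charges on a nominally free-tier account often come from resources outside the one visible instance. A sketch of CLI calls that list the usual suspects (an extra instance, leftover EBS volumes, or an unassociated Elastic IP):

```shell
# List every instance, volume, and Elastic IP in the current region;
# anything unexpected here is a candidate for the charges.
QUERY='Reservations[].Instances[].[InstanceId,InstanceType,State.Name]'

aws ec2 describe-instances --query "$QUERY" --output table
aws ec2 describe-volumes --query 'Volumes[].[VolumeId,Size,State]' --output table
aws ec2 describe-addresses --query 'Addresses[].[PublicIp,AssociationId]' --output table
```

Note that these calls are per-region, so a resource forgotten in another region would need the same check there.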


Hello everyone,
I have a private EKS cluster. I want to access the cluster from a new EC2 instance that has kubectl and the AWS CLI installed. Previously everything was fine: I was able to access my EKS cluster and run kubectl commands. But I accidentally deleted the aws-auth-cm.yml file, after which I get the error: "You must be logged in to the server (Unauthorized)".
After that, I created a new EKS cluster with the same name, configuration, and roles, and deleted the previous one. Could you kindly guide me, step by step, on how to access my EKS cluster now?
I have studied a lot of articles and posts, but the problem is not solved.
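One relevant detail for the situation above: the IAM principal that created an EKS cluster always has admin access, even with no aws-auth ConfigMap present. So from a machine using that principal's credentials, access can usually be restored along these lines (cluster name and region are hypothetical):

```shell
# Refresh the kubeconfig for the new cluster; this must run as the IAM
# user/role that created it.
CLUSTER_NAME="my-eks-cluster"   # hypothetical
REGION="us-east-1"              # hypothetical

aws eks update-kubeconfig --name "$CLUSTER_NAME" --region "$REGION"
kubectl get nodes   # should succeed for the cluster creator

# To grant access to other roles (e.g. the node instance role), re-create
# the aws-auth mapping; the ARN below is hypothetical:
# eksctl create iamidentitymapping --cluster "$CLUSTER_NAME" --region "$REGION" \
#   --arn arn:aws:iam::111122223333:role/eksNodeRole \
#   --group system:bootstrappers,system:nodes \
#   --username 'system:node:{{EC2PrivateDNSName}}'
```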