Questions tagged with Amazon EC2


I kill it and it restarts itself. I cannot work. Please help, this is an emergency.
0
answers
0
votes
2
views
asked an hour ago
Hello Team, in the new Amazon Linux AMI (AMI ID ami-02f3f602d23f1659d, al2023-ami-2023.0.20230315.0-kernel-6.1-x86_64), launched on 15th March 2023, the Instance Metadata Service comes with version 2 by default, where HttpTokens is mandatory. A direct `curl http://169.254.169.254/latest/meta-data/instance-id` won't work here; for IMDSv2, we have to fetch the data through token authentication. For reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html My question is: since this new AMI comes with Instance Metadata Service version 2 by default, could you please confirm whether the new AMIs released by Amazon from here on will also default to IMDSv2?
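For reference, a minimal sketch of the token flow described in the linked documentation (the TTL value is just an example):

```
# Request a session token (valid for up to 21600 seconds / 6 hours)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Pass the token on subsequent metadata requests
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id
```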
0
answers
0
votes
4
views
asked 2 hours ago
I am performing the activity below: I am collecting memory metrics from an EC2 instance via the CloudWatch agent, and I will then use those metrics to build dashboards. Please let me know whether this activity will incur any charges.
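For context, memory metrics collected this way are published as custom metrics. A minimal agent configuration for memory collection might look like this sketch (the path is the agent's default on Linux, and `mem_used_percent` is one of the agent's supported measurements):

```
# Write a minimal CloudWatch agent config that collects memory usage every 60 seconds
cat <<'EOF' | sudo tee /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"],
        "metrics_collection_interval": 60
      }
    }
  }
}
EOF
```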
1
answers
0
votes
6
views
asked 3 hours ago
I have used an **ml.g4dn.2xlarge** instance on SageMaker to test the GPT-J 6B model from HuggingFace using the Transformers library. I am using `revision=float16` and `low_cpu_mem_usage=True` so that the model is only 12 GB. It is downloaded, but ***after*** that it suddenly crashes the kernel. Please share a workaround. The instance has 32 GB of memory with 8 vCPUs.

```python
!pip install transformers accelerate  # accelerate is required for low_cpu_mem_usage=True
from transformers import AutoTokenizer, AutoModelForCausalLM  # "Causal", not "Casual"

# Tip: passing torch_dtype=torch.float16 may avoid upcasting the checkpoint to float32 in RAM
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", low_cpu_mem_usage=True)  # It crashes here
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
```

It downloads the 12 GB model, but after that it crashes. I tried to follow the thread [here](https://repost.aws/questions/QUsO3sfUGpTKeHiU8W9k1Kwg/why-does-my-kernal-keep-dying-when-i-try-to-import-hugging-face-bert-models-to-amazon-sage-maker) but I still can't update sentencepiece there. Please help. Thanks
0
answers
0
votes
4
views
EM_User
asked 4 hours ago
How can I find out the four new name servers for our hosted zone on Route 53? Currently the old name servers can't find the hosted record via nslookup, and the hosted website is not responding on the Internet. Thank you.
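For reference, a minimal sketch using the AWS CLI (the zone ID below is a hypothetical placeholder; `DelegationSet.NameServers` holds the zone's four assigned name servers):

```
# Find the hosted zone ID
aws route53 list-hosted-zones

# Show the four name servers assigned to the zone (hypothetical zone ID)
aws route53 get-hosted-zone --id Z0123456789ABCDEFGHIJ --query 'DelegationSet.NameServers'
```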
2
answers
0
votes
12
views
asked 6 hours ago
Hi AWS, I know this might not be the right question for the community here, but the point is that my VPC has an IPv4 CIDR block of 172.31.0.0/16 and the Atlas VPC CIDR block is 10.8.0.0/21. The peering connection is available, and I have even allowed access from anywhere in the MongoDB Atlas UI for the cluster, but I am still experiencing the same issue. The EC2 instance is in the VPC, in a public subnet. I have tried every way possible, but the same issue persists. Please help.
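One thing worth verifying when a peering connection shows as available but traffic still fails is whether the subnet's route table has a route to the peer CIDR, and whether the port is reachable at all. A sketch (the VPC ID and Atlas node IP are hypothetical placeholders):

```
# Check for a route to the Atlas CIDR (10.8.0.0/21) pointing at the peering connection
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'RouteTables[].Routes[]'

# From the EC2 instance, test reachability to MongoDB's default port
nc -zv -w 5 10.8.0.10 27017
```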
0
answers
0
votes
5
views
asked 6 hours ago
I have a .NET Windows application (exe) that I would like to run on my Windows EC2 instance. I can run it on my computer at home, but it doesn't even load (no error or any message) when I try to run it on the instance. The application connects to a remote API and database. (It has no publisher credentials.) I am new to this. What are the general/most common settings I am missing here to get it to work? Both AWS and Windows firewall settings, for example.
1
answers
0
votes
13
views
asked 6 hours ago
We have a CentOS Linux server in AWS that runs monitoring software. This server ran exceptionally well for a few years until a few weeks ago, when it started experiencing high loads even when sitting idle. The previous time this occurred, we troubleshot it, upgraded the software and the OS, and felt it was potentially fixed. With a high load yet low CPU, low memory, and zero swap used, the assumption was maybe some disk I/O issue in the server farm that resolved itself.

The problem resurfaced late yesterday, and after digging more this evening, we noticed the CPU steal time ('st' in the top command) is absurdly high; CPU steal time is where the hypervisor or other servers in the shared virtual environment are causing this system to wait on CPU time. If the system is simply rebooted, the problem remains. The only way we have resolved it both times is to stop the instance, wait a few minutes, and then start it again.

Is it possible the instance is switched to a different server farm *without* a resource hog when we perform the extended shutdown? Does this mean AWS has over-allocated resources? Are there any other ways to prevent this from happening again beyond paying for a dedicated server? Any help would be appreciated!

```
top - 21:47:37 up 10 days, 1:30, 2 users, load average: 18.77, 11.27, 11.48
Tasks: 155 total, 5 running, 150 sleeping, 0 stopped, 0 zombie
%Cpu(s): 18.3 us, 4.1 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 77.6 st
KiB Mem : 3878444 total, 1144048 free, 910772 used, 1823624 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 2483948 avail Mem

top - 21:47:46 up 10 days, 1:30, 2 users, load average: 17.20, 11.18, 11.45
Tasks: 157 total, 9 running, 148 sleeping, 0 stopped, 0 zombie
%Cpu(s): 19.0 us, 4.2 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 76.7 st
KiB Mem : 3878444 total, 1120580 free, 934224 used, 1823640 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 2460496 avail Mem
```
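As a side note, steal time can be tracked over intervals without watching top. A small sketch assuming the sysstat package:

```
# Install sysstat on CentOS if it is not already present
sudo yum -y install sysstat

# Report per-CPU utilization every 5 seconds; %steal is the column in question
mpstat -P ALL 5
```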
0
answers
0
votes
6
views
asked 7 hours ago
My EC2 instance has gone down a few times in recent months. It works again every time after rebooting it. I am running an m5.2xlarge Ubuntu instance, and the memory usage is 9%. Any help regarding the possible cause would be most appreciated, thanks. ![Enter image description here](/media/postImages/original/IM64Qdr-HnQxW3Cd4bClI_QQ)
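When an instance hangs like this, the status checks and console output are the usual first places to look. A sketch with a hypothetical instance ID:

```
# Distinguish host/hardware (system) problems from OS-level (instance) problems
aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0

# Pull the console output, which often captures kernel panics or OOM kills
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text
```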
1
answers
0
votes
9
views
asked 17 hours ago
Hi, my current setup:

* EC2 with ARM
* Docker installed on the EC2 instance
* Spring + Java app in one container
* MySQL in another container

When I run it all on the EC2 instance it works like a charm, but the problem occurs when I try to connect the MySQL storage to an attached EBS volume. My docker run command for MySQL: `docker run -d -p 3306:3306 -v /dev/xvdf/mysql:/var/lib/mysql:rw -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=erdeldb mysql:8` When setting the volume as `/dev/sdf/mysql` I get an error saying `it is not a directory`. I also cannot open that directory in the console; `cd /dev/sdf/` returns the same `not a directory` error. When setting the volume as `/dev/xvdf/mysql` I get a storage issue: not enough space. When I check the storage of /dev/xvdf after attaching the EBS volume, I see 4.0 MB ![dir size](/media/postImages/original/IMLO9fiaDsTPu-KQE8KCPFvw) I am not sure what I am doing wrong. I haven't deployed before, I am just learning. Any input is appreciated, thanks.
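For what it's worth, `/dev/xvdf` is a raw block device, not a directory, so it can't be bind-mounted into a container directly; it needs a filesystem and a mount point first. A sketch of that flow (the `/data` mount point is an assumption; confirm the device name with lsblk):

```
# Confirm the attached device name (it may appear as /dev/xvdf or /dev/nvme1n1)
lsblk

# Create a filesystem on the new volume (only on an EMPTY volume; this erases data)
sudo mkfs -t xfs /dev/xvdf

# Mount it at a regular directory and use THAT path as the Docker bind mount
sudo mkdir -p /data
sudo mount /dev/xvdf /data
sudo mkdir -p /data/mysql

docker run -d -p 3306:3306 -v /data/mysql:/var/lib/mysql:rw \
  -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=erdeldb mysql:8
```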
1
answers
0
votes
14
views
asked 18 hours ago
Hello, I am trying to complete exercise 3 (creation of an EC2 instance) as part of an online training. All instructions are described here: https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DEV-AWS-MO-GCNv2/exercise-3-compute.html I am able to successfully create the EC2 instance, but the page which should be available is still unreachable. I am able to connect to the Linux server where the code below should be executed as part of instance creation (according to the instructions):

```
#!/bin/bash -ex
wget https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DEV-AWS-MO-GCNv2/FlaskApp.zip
unzip FlaskApp.zip
cd FlaskApp/
yum -y install python3 mysql
pip3 install -r requirements.txt
amazon-linux-extras install epel
yum -y install stress
export PHOTOS_BUCKET=${SUB_PHOTOS_BUCKET}
export AWS_DEFAULT_REGION=<INSERT REGION HERE>
export DYNAMO_MODE=on
FLASK_APP=application.py /usr/local/bin/flask run --host=0.0.0.0 --port=80
```

However, none of this is executed on the server. When I try to execute it manually, line by line, I additionally have to use sudo and install pip3 individually, and amazon-linux-extras is still not executable. In the end, the page is still not reachable. I think the provided instructions for this exercise are really outdated. So can anybody help me to successfully complete this exercise so the page becomes reachable? What is the proper set of commands that has to be executed? And why is none of them executed during boot, given they are part of the user data box during instance creation? Thank you for any answer. Regards
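As a general debugging note, cloud-init records whether and how the user data ran, which usually explains why nothing appears to execute at boot. A sketch (log paths are the cloud-init defaults on Amazon Linux):

```
# Output (stdout/stderr) of the user-data script
sudo cat /var/log/cloud-init-output.log

# cloud-init's own log, useful when the script never started at all
sudo grep -i userdata /var/log/cloud-init.log
```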
0
answers
0
votes
11
views
asked 19 hours ago
I am having trouble setting up a working WireGuard VPN server on an EC2 instance. I created the `wg0.conf` file with the following contents:

```
[Interface]
Address = 10.10.0.1/24
ListenPort = 10001
PrivateKey = <server_private_key>
SaveConfig = false
PostUp = /etc/wireguard/helper/add_nat.sh
PostDown = /etc/wireguard/helper/del_nat.sh

[Peer]
PublicKey = <removed>
AllowedIPs = 10.10.0.2/32
```

The contents of `add_nat.sh`:

```
#!/bin/bash
IPT="/sbin/iptables"

IN_FACE="ens5"           # NIC connected to the internet
WG_FACE="wg0"            # WG NIC
SUB_NET="10.10.0.0/24"   # WG IPv4 sub/net aka CIDR
WG_PORT="10001"          # WG udp port

## IPv4 ##
$IPT -t nat -I POSTROUTING 1 -s $SUB_NET -o $IN_FACE -j MASQUERADE
$IPT -I INPUT 1 -i $WG_FACE -j ACCEPT
$IPT -I FORWARD 1 -i $IN_FACE -o $WG_FACE -j ACCEPT
$IPT -I FORWARD 1 -i $WG_FACE -o $IN_FACE -j ACCEPT
$IPT -I INPUT 1 -i $IN_FACE -p udp --dport $WG_PORT -j ACCEPT
```

Then I enabled IP forwarding by setting `net.ipv4.ip_forward=1` in `/etc/sysctl.conf`. I also allowed port 10001 over UDP using the command `ufw allow 10001/udp`, and I added that port rule to the inbound rules of the EC2 security group. On my laptop I configured `wg0.conf` like so:

```
[Interface]
PrivateKey = <laptop_private_key>
Address = 10.10.0.2/24
DNS = 8.8.8.8

[Peer]
PublicKey = <server_public_key>
AllowedIPs = 10.10.0.0/24
Endpoint = <ec2_elastic_ip>:10001
PersistentKeepalive = 10
```

Trying to ping the server from my laptop results in 100% packet loss, and the same happens from the server side. Is there something I am missing, or are there any errors in my configuration?
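Two quick checks that usually narrow this down: whether UDP traffic reaches the instance at all, and whether a handshake ever completes. A sketch (interface names follow the configuration in the question):

```
# On the EC2 instance: watch for incoming WireGuard packets on the public NIC
sudo tcpdump -ni ens5 udp port 10001

# On either side: a working tunnel shows a recent "latest handshake" line
sudo wg show wg0
```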
1
answers
0
votes
35
views
Salem
asked 2 days ago