Questions tagged with Amazon EC2

I am looking to have MySQL installed if I set the Environment parameter to "prod", and MariaDB installed if it is "dev", from the user data script of this CloudFormation template, but it is not happening. Please guide me on how to do this.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  Environment:
    Description: "The environment to deploy to (dev or prod)"
    Type: String
    Default: "dev"
Resources:
  EC2Instance:
    Type: "AWS::EC2::Instance"
    Properties:
      InstanceType: "t2.micro"
      ImageId: "ami-0f8ca728008ff5af4"
      KeyName: "devops"
      SecurityGroupIds:
        - "sg-02464c840862fddaf"
      SubnetId: "subnet-0b2bbe1a860c1ec8f"
      UserData: !Base64 |
        #!/bin/bash
        if [ "${Environment}" == "prod" ]; then
          # Install MySQL on production instances
          sudo apt-get update
          sudo apt install mysql-server -y
          sudo systemctl restart mysql
          sudo systemctl enable mysql
        elif [ "${Environment}" == "dev" ]; then
          # Install MariaDB on development instances
          sudo apt-get update
          sudo apt install mariadb-server mariadb-client -y
          sudo systemctl enable mariadb
        fi
      Tags:
        - Key: "Name"
          Value: "MyNewInstance"
```
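One thing worth noting: a plain `!Base64 |` block does not substitute `${Environment}`; the text is base64-encoded as-is, and at boot the shell sees `${Environment}` as an undefined, empty variable, so neither branch runs. The usual pattern is to wrap the script in `Fn::Base64` around `Fn::Sub` so CloudFormation injects the parameter. A quick way to confirm what the instance actually received (the instance ID below is a placeholder):

```bash
# Pull the user data the instance was launched with and decode it;
# if ${Environment} still appears literally, no substitution happened.
aws ec2 describe-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --attribute userData \
  --query 'UserData.Value' --output text | base64 -d
```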
1 answer · 0 votes · 22 views · user01 · asked 6 days ago
I kill it and it restarts itself. I cannot work! Please help, this is an emergency.
1 answer · 0 votes · 17 views · asked 6 days ago
Hello Team, the new Amazon Linux AMI ami-02f3f602d23f1659d (al2023-ami-2023.0.20230315.0-kernel-6.1-x86_64), launched on 15th March 2023, has the Instance Metadata Service set to version 2 by default, where HttpTokens is mandatory. A direct `curl http://169.254.169.254/latest/meta-data/instance-id` command won't work here. For IMDSv2, we have to fetch the data through token authentication. For reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html My question is: since this new AMI defaults to Instance Metadata Service version 2, could you please confirm whether all new AMIs released by Amazon from now on will have IMDSv2 as the default version?
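For reference, this is the documented token-then-request pattern for IMDSv2, run from the instance itself:

```bash
# Request a session token (valid up to 6 hours), then pass it with the metadata call
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id
```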
1 answer · 0 votes · 22 views · asked 6 days ago
I am performing the activity below: I am collecting memory metrics from an EC2 instance via the CloudWatch agent, and I will then use those metrics to build dashboards. Please let me know whether this activity will incur any charges.
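For context, the CloudWatch agent publishes memory metrics as custom metrics (under the `CWAgent` namespace by default), and custom metrics and dashboards are the items CloudWatch pricing applies to. A quick way to see which custom metrics are actually being published (the metric name assumes the agent's default Linux configuration):

```bash
# List the memory metrics published by the CloudWatch agent
aws cloudwatch list-metrics --namespace CWAgent --metric-name mem_used_percent
```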
1 answer · 0 votes · 15 views · asked 6 days ago
I have used an **ml.g4dn.2xlarge** instance on SageMaker to test the GPT-J 6B model from Hugging Face using Transformers. I am using `revision=float16` and `low_cpu_mem_usage=True` so that the model is only 12 GB. It is downloaded, but ***after*** that it suddenly crashes the kernel. Please share a workaround. The memory of that instance is 32 GB with 4 vCPUs.

```python
!pip install transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", revision="float16", low_cpu_mem_usage=True
)  # It crashes here
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
```

It downloads the 12 GB model, but after that it crashes. I tried to follow the thread [here](https://repost.aws/questions/QUsO3sfUGpTKeHiU8W9k1Kwg/why-does-my-kernal-keep-dying-when-i-try-to-import-hugging-face-bert-models-to-amazon-sage-maker) but I still can't update sentencepiece there. Please help. Thanks
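One diagnostic note: a kernel that dies right after the weights finish downloading is very often the Linux out-of-memory killer rather than a Python error, and that leaves a trace in the system log. A quick check from a notebook terminal (a sketch; log access can vary by image):

```bash
# Check remaining RAM, then look for OOM-killer entries after a crash
free -h
sudo dmesg | grep -iE "out of memory|killed process" | tail -n 20
```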
0 answers · 0 votes · 27 views · EM_User · asked 6 days ago
How can I find out the four new name servers for our hosted zone on Route 53? Currently the old name servers can't find the hosted record via nslookup, and the hosted website is not responding on the Internet. Thank you.
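A minimal sketch with the AWS CLI (the domain and hosted-zone ID below are placeholders): the zone's `DelegationSet.NameServers` are the four name servers the domain's registrar or parent zone should point at.

```bash
# Find the hosted zone ID for the domain
aws route53 list-hosted-zones-by-name --dns-name example.com
# Read the four name servers assigned to that zone
aws route53 get-hosted-zone --id Z0123456789ABCDEFGHIJ \
  --query 'DelegationSet.NameServers'
```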
2 answers · 0 votes · 17 views · asked 6 days ago
Hi AWS, I know this might not be the right question for the community here, but the point is that my VPC has an IPv4 CIDR block of 172.31.0.0/16 and the Atlas VPC CIDR block is 10.8.0.0/21. The peering connection is available and I have even allowed access from anywhere in the MongoDB Atlas UI for the cluster, but I am still experiencing the same issue. The EC2 instance is in the VPC, in a public subnet. I have tried every way possible but the same issue persists. Please help.
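One common gap with an otherwise healthy peering connection is the return route: the route table of the subnet the EC2 instance sits in needs a route for the Atlas CIDR that targets the peering connection, and the Atlas side needs the reverse. A sketch of the check on the AWS side (the VPC ID is a placeholder):

```bash
# Does any route table in this VPC send 10.8.0.0/21 through the peering connection?
aws ec2 describe-route-tables \
  --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query 'RouteTables[].Routes[?DestinationCidrBlock==`10.8.0.0/21`]'
```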
1 answer · 0 votes · 12 views · asked 6 days ago
I have a .NET Windows application (exe) which I would like to run on my Windows EC2 instance. I can run it on my computer at home, but it doesn't even load (no error or any message) when I try to run it on the instance. The application connects to a remote API and database. (It has no publisher credentials.) I am new to this. What are the general/most common settings I am missing here to get it to work? Both AWS and Windows firewall settings, for example.
2 answers · 0 votes · 26 views · asked 6 days ago
We have a CentOS Linux server in AWS that runs monitoring software. This server has run exceptionally well for a few years, until a few weeks ago when it started experiencing high loads even when sitting idle. The previous time this occurred, we troubleshot it, upgraded the software and the OS, and felt like it was potentially fixed. With a high load yet low CPU, low memory, and zero swap used, the assumption was maybe some disk I/O issue in the server farm that resolved itself. The problem resurfaced late yesterday, and after digging more this evening, we noticed the CPU steal time ('st' in the top command) is absurdly high; CPU steal time is where the hypervisor or other servers in the shared virtual environment are causing this system to wait on CPU time. If the system is simply rebooted, the problem remains. The only way we have resolved it both times is to stop the instance, wait a few minutes, and then start it again. Is it possible the instance is switched to a different server farm *without* a resource hog when we perform the extended shutdown? Does this mean AWS has over-allocated resources? Are there any other ways to prevent this from happening again beyond paying for a dedicated server? Any help would be appreciated!

```
top - 21:47:37 up 10 days,  1:30,  2 users,  load average: 18.77, 11.27, 11.48
Tasks: 155 total,   5 running, 150 sleeping,   0 stopped,   0 zombie
%Cpu(s): 18.3 us,  4.1 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si, 77.6 st
KiB Mem :  3878444 total,  1144048 free,   910772 used,  1823624 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  2483948 avail Mem

top - 21:47:46 up 10 days,  1:30,  2 users,  load average: 17.20, 11.18, 11.45
Tasks: 157 total,   9 running, 148 sleeping,   0 stopped,   0 zombie
%Cpu(s): 19.0 us,  4.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si, 76.7 st
KiB Mem :  3878444 total,  1120580 free,   934224 used,  1823640 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  2460496 avail Mem
```
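One hedged possibility worth ruling out: if this happens to be a burstable (T2/T3) instance, exhausted CPU credits show up as high steal time in top, and a stop/start can appear to fix it for reasons unrelated to a noisy neighbour. Checking the credit balance around the incident would confirm or eliminate that (the instance ID and time range below are placeholders):

```bash
# Plot or inspect CPUCreditBalance around the time of the high load
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2023-03-20T00:00:00Z --end-time 2023-03-21T00:00:00Z \
  --period 3600 --statistics Average
```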
0 answers · 0 votes · 14 views · asked 6 days ago
My EC2 instance has gone down a few times in recent months. It works again every time after rebooting it. I am running an m5.2xlarge Ubuntu instance. The memory usage is 9%. Any help regarding the possible cause would be most appreciated, thanks. ![Enter image description here](/media/postImages/original/IM64Qdr-HnQxW3Cd4bClI_QQ)
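When an instance goes down but recovers after a reboot, the status checks and the console output from around the failure are worth pulling first; a sketch with a placeholder instance ID:

```bash
# A failed system status check points at the underlying host; a failed instance check points at the OS
aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0 \
  --query 'InstanceStatuses[].{System:SystemStatus.Status,Instance:InstanceStatus.Status}'
# Most recent console output -- kernel panics and OOM messages often show up here
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --latest --output text
```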
1 answer · 0 votes · 17 views · asked 7 days ago
Hi, my current setup:
* EC2 with ARM
* Docker installed on the EC2 instance
* Spring + Java app in one container
* MySQL in another container

When I run it all on the EC2 instance it works like a charm, but the problem occurs when I try to put the MySQL storage on an attached EBS volume. My docker run command for MySQL: `docker run -d -p 3306:3306 -v /dev/xvdf/mysql:/var/lib/mysql:rw -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=erdeldb mysql:8` When setting the volume as `/dev/sdf/mysql` I get an error saying `it is not a directory`. I also cannot open that directory in the console; `cd /dev/sdf/` returns the same `not a directory` error. When setting the volume as `/dev/xvdf/mysql` I get a storage issue, not enough space. When I check the storage of /dev/xvdf after I have attached the EBS volume, I see 4.0 MB ![dir size](/media/postImages/original/IMLO9fiaDsTPu-KQE8KCPFvw) I am not sure what I am doing wrong. I haven't deployed before, I am just learning. Any input is appreciated, thanks.
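For context, `/dev/xvdf` is a raw block device, not a directory, so Docker cannot bind-mount paths underneath it; the usual pattern is to create a filesystem on the volume, mount it somewhere, and bind-mount that mount point. A sketch, assuming a brand-new, empty volume attached as `/dev/xvdf` (device name and mount point are illustrative):

```bash
lsblk                                   # confirm the device name (Nitro instances may show it as /dev/nvme1n1)
sudo mkfs -t xfs /dev/xvdf              # ONLY on a new, empty volume -- this erases any existing data
sudo mkdir -p /mnt/mysql-data
sudo mount /dev/xvdf /mnt/mysql-data
docker run -d -p 3306:3306 \
  -v /mnt/mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=erdeldb \
  mysql:8
```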
1 answer · 0 votes · 28 views · asked 7 days ago
Hello, I am trying to complete Exercise 3 (creation of an EC2 instance) as part of an online training. All the instructions are described here: https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DEV-AWS-MO-GCNv2/exercise-3-compute.html I am able to successfully create the EC2 instance, but the page which should be available is still unreachable. I am able to connect to the Linux server where the code below should be executed as part of instance creation (according to the instructions):

```bash
#!/bin/bash -ex
wget https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DEV-AWS-MO-GCNv2/FlaskApp.zip
unzip FlaskApp.zip
cd FlaskApp/
yum -y install python3 mysql
pip3 install -r requirements.txt
amazon-linux-extras install epel
yum -y install stress
export PHOTOS_BUCKET=${SUB_PHOTOS_BUCKET}
export AWS_DEFAULT_REGION=<INSERT REGION HERE>
export DYNAMO_MODE=on
FLASK_APP=application.py /usr/local/bin/flask run --host=0.0.0.0 --port=80
```

However, none of this is executed on the server. When I try to execute it manually, command by command, I have to additionally use sudo and install pip3 separately, and amazon-linux-extras is still not executable. In the end the page is still not reachable. I think the provided instructions for this exercise are really outdated. Can anybody help me to successfully complete this exercise so the page becomes reachable? What is the proper set of commands that have to be executed? And why is none of them executed during boot, given that they are part of the user data box during instance creation? Thank you for any answer. Regards
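A quick way to see whether the user data ran at all, and where it stopped, is the cloud-init logs on the instance (the paths below are the usual Amazon Linux locations). Also note that user data runs as root, so `sudo` is not needed inside the script, and by default it only runs on the very first boot.

```bash
# Output (stdout/stderr) of the user-data script's commands
sudo cat /var/log/cloud-init-output.log
# cloud-init's own log, useful if the script never started at all
sudo cat /var/log/cloud-init.log
```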
1 answer · 0 votes · 23 views · asked 7 days ago