Questions tagged with Compute
Hello Team,
In the new Amazon Linux AMI with ID ami-02f3f602d23f1659d (al2023-ami-2023.0.20230315.0-kernel-6.1-x86_64), launched on 15 March 2023, the Instance Metadata Service defaults to version 2 (IMDSv2), where HttpTokens is mandatory.
A direct `curl http://169.254.169.254/latest/meta-data/instance-id` command won't work here.
For IMDSv2, we have to fetch the data through token authentication, right?
For reference
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
My question is:
For the new AMI, I am seeing that it comes with Instance Metadata Service version 2 by default. Could you please confirm whether every new AMI released by Amazon from now on will have IMDSv2 as its default?
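For anyone comparing, the token flow from the linked documentation can be sketched with just the Python standard library (this only works from inside an EC2 instance; the helper name is my own):

```python
import urllib.request

IMDS_BASE = "http://169.254.169.254/latest"

def get_metadata(path, token_ttl=21600):
    # Step 1: request a session token via a PUT with a TTL header (IMDSv2).
    token_req = urllib.request.Request(
        f"{IMDS_BASE}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(token_ttl)},
    )
    token = urllib.request.urlopen(token_req).read().decode()
    # Step 2: present the token on every metadata request.
    meta_req = urllib.request.Request(
        f"{IMDS_BASE}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(meta_req).read().decode()

if __name__ == "__main__":
    print(get_metadata("instance-id"))  # prints the instance ID when run on EC2
```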
I have used an **ml.g4dn.2xlarge** instance on SageMaker to test the GPT-J 6B model from Hugging Face using Transformers.
I am using `revision="float16"` and `low_cpu_mem_usage=True` so that the model is only about 12 GB.
It is downloaded, but ***after*** that it suddenly crashes the kernel.
Please share a workaround. That instance has 32 GB of memory with 4 vCPUs.
```python
!pip install transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",
    low_cpu_mem_usage=True,
)  # It crashes here
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
```
It downloads the 12 GB model, but after that it crashes.
I tried to follow the thread [here](https://repost.aws/questions/QUsO3sfUGpTKeHiU8W9k1Kwg/why-does-my-kernal-keep-dying-when-i-try-to-import-hugging-face-bert-models-to-amazon-sage-maker), but I still can't update sentencepiece there.
Please help.
Thanks
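(For context, a rough back-of-the-envelope estimate, assuming roughly 6 billion parameters, shows why this instance is tight: the weights alone take over 22 GiB in float32, so any extra copy made while loading can exhaust 32 GB of RAM.)

```python
# Rough memory footprint of the GPT-J-6B weights (parameter count approximate).
params = 6_050_000_000          # ~6 billion parameters (assumption)
fp32_gib = params * 4 / 2**30   # float32: 4 bytes per parameter
fp16_gib = params * 2 / 2**30   # float16: 2 bytes per parameter
print(f"fp32: {fp32_gib:.1f} GiB, fp16: {fp16_gib:.1f} GiB")
```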
My EC2 instance has gone down a few times in recent months. It works again every time after rebooting. I am running an m5.2xlarge Ubuntu instance, and memory usage is at 9%.
Any help regarding the possible cause would be most appreciated, thanks.

Hi there,
I am currently struggling to communicate between my Lightsail container instances. In one container I have a React app, in the other a Java Spring Boot backend. I can curl the backend API from my local machine and get a success response, but when I try to make the same call programmatically from the front end, it fails.
The documentation is super unclear on this point; can anyone point me in the right direction?
I have tried using the container's public domain and its private domain, with and without the open port. None of this has worked, and it is always unable to resolve the domain.

We are currently using an Amazon Elastic Compute Cloud t2.large instance with Windows Server 2012 R2. Microsoft support for Windows Server 2012 R2 ends on October 10, 2023. Does that mean we'll need to upgrade our current AWS setup? If so, what is the cutoff date by which this needs to be done?
When I try to log in with FileZilla using the credentials that I created for FTP on an EC2/Lightsail (Ubuntu) instance, it does not connect; however, when I use a key pair, it works.
Please explain why, and give a proper procedure for creating an FTP server on EC2/Lightsail.
Thank you in advance.
I want to completely remove Python 3.7 from my Amazon Linux instance and install Python 3.9 using yum commands.
Why is Fail2Ban completely missing from the AL2023 repos? Are there instructions, including dependencies, for installing it by hand on AL2023? Why would Amazon leave this standard component of basic intrusion prevention and security out of the stack?
I have a string-typed date column, and some values in that column are the word 'None'.
My query for casting the date is below, *getting only the month and year from it*:
```sql
date_format(cast(c.enddate as date), '%M') as "Month",
date_format(cast(c.enddate as date), '%Y') as "Year"
```
The error prompted:
INVALID_CAST_ARGUMENT: Value cannot be cast to date: None
Can somebody help me with this problem so that I can still get only the month and year?
Thank you in advance!
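A possible workaround (a sketch, assuming this is Athena/Presto SQL, where `nullif` and `try_cast` are available) is to turn the literal string 'None' into a real NULL before casting:

```sql
-- nullif yields NULL when enddate equals the string 'None';
-- try_cast yields NULL instead of erroring for any other uncastable value.
date_format(try_cast(nullif(c.enddate, 'None') as date), '%M') as "Month",
date_format(try_cast(nullif(c.enddate, 'None') as date), '%Y') as "Year"
```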
Hi all!
We have a t3.2xlarge EC2 instance running Windows Server 2019 in the Europe (Frankfurt) region. We want to downsize it to a t3.xlarge, but this is refused with the following error message:
"Failed to modify instance type for instance xxxx.
The instance configuration for the AWS Marketplace product is not supported. Please see the AWS Marketplace site for more information about supported instance types, regions, and operating systems."
I can successfully change it to an m5.xlarge, so I don't understand why not to a t3.xlarge.
I'm doing this from the web interface.
I've looked into different documentation pages about this type of instances but cannot find this limitation.
Any advice would be welcome. I thank you in advance.
Hello,
I am working on deploying an application that is packaged using Docker onto Elastic Beanstalk with a single EC2 instance currently.
I have a multi-stage Dockerfile that is as small as I could possibly make it. Initially, I tried to deploy it to Elastic Beanstalk by deploying my Dockerfile, but the builds took too long so it would fail.
So currently, I am building my image locally, pushing it to an AWS ECR repository, then deploying to Elastic Beanstalk with a Dockerrun.aws.json file. This, however, still hits timeout errors on deployment! Looking at the logs, the deployment is stopped because the command that pulls my pre-built image takes too long for some reason. So is there any way to increase this timeout?
I have already tried running eb deploy with the --timeout flag, but it doesn't seem to change anything. I have also tried making a config file to increase the timeout:
.ebextensions/increase-timeout.config
```
option_settings:
  - namespace: aws:elasticbeanstalk:command
    option_name: Timeout
    value: 1800
```
But that also fails to change the 300-second timeout.
Does anyone have any idea of how I could fix this? Thanks!
Hi there,
I have some Linux instances running on AWS, but sometimes when we try to connect to an instance it says "Network connection failure", and when I check the AWS EC2 console it shows 1/2 status checks failed.
After rebooting, the instance sometimes works perfectly, but sometimes it stays completely disconnected; we then have to recover the data from the EBS volumes and create a new instance, because the previous one no longer works.
Please provide a solution. Why is this happening?