Questions tagged with Amazon EC2
I'm trying to install the CodeDeploy agent. As a prerequisite, CodeDeploy requires Ruby. When I tried to install Ruby with `yum install ruby`, it installed version 3.2, but the CodeDeploy agent does not support 3.2. Is there another way to install the CodeDeploy agent?
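For reference, AWS's documented install path fetches an install script from a Regional S3 bucket rather than using a distro package, and newer agent releases add support for newer Ruby versions. A sketch, assuming Amazon Linux and the us-east-1 bucket (substitute your own Region in the URL):

```shell
# Install prerequisites, then fetch and run the official install script.
sudo yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto

# Verify the agent is running.
sudo systemctl status codedeploy-agent
```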
I installed AWS Instance Scheduler via a CloudFormation stack. I also configured a schedule and a period, and added the tag to the EC2 instance. I have verified that all the settings are correct, and I can see the job running on schedule, but it isn't stopping or starting the instance.
Hello, I created this instance: https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#InstanceDetails:instanceId=i-0ca9edd25728c4f62
The goal is to create an AD user from that instance, so both the AD and the EC2 instance are in the same VPC.
Question 1: I couldn't connect to the EC2 instance with RDP. I configured the ACLs on both the subnet and the EC2 instance to accept RDP connections, with no effect.
Question 2: Eventually, I'd like to use this server https://us-east-1.console.aws.amazon.com/transfer/home?region=us-east-1#/servers/s-d0e008162fc04aa1a to receive FTP file drops from the AD user.
Is the network configured correctly?
Thank you!
I need to know my vCPU usage across all my accounts and Regions.
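One way to gather this per account is to loop over Regions with the AWS CLI and sum each running instance's `CoreCount × ThreadsPerCore`. A sketch, assuming credentials for one account at a time (repeat per account, or via assumed roles):

```shell
# For each Region, total the vCPUs of running instances.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  aws ec2 describe-instances --region "$region" \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[].Instances[].[InstanceId,CpuOptions.CoreCount,CpuOptions.ThreadsPerCore]' \
    --output text |
  awk -v r="$region" '{v += $2 * $3} END {print r ": " v+0 " vCPUs"}'
done
```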
I created an instance; how do I connect to it using PuTTY?
Please also let me know if there is another method.
I just started using AWS and don't know how to get started or use it.
I am a student and not familiar with AWS.
Can anybody help and guide me on how to start?
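For context, connecting with PuTTY means converting the `.pem` key pair file to `.ppk` with PuTTYgen, then opening a session to the instance's public DNS name. From any machine with OpenSSH, an equivalent sketch (the key file name and hostname are placeholders, and the default user depends on the AMI — `ec2-user` for Amazon Linux, `ubuntu` for Ubuntu):

```shell
# SSH requires the private key to be readable only by you.
chmod 400 my-key.pem

# Connect to the instance's public DNS name (shown in the EC2 console).
ssh -i my-key.pem ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com
```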
I'm having trouble adding storage to instances I created and have been using for a while. Storage was low, so I decided to add more: I added 30 GB to each instance by changing the General Purpose SSD (gp2) volume from 30 GiB to 60 GiB. But when I sign in to the server, the extra space doesn't show up in the storage; Disk Management instead shows 30 GB as unallocated. What do I need to do to make the 30 GB I added usable on the server itself?
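Enlarging the EBS volume only grows the disk; the partition and filesystem inside Windows must then be extended into the unallocated space. A sketch in PowerShell, assuming the volume to grow is drive C (Disk Management's "Extend Volume" action does the same thing interactively):

```powershell
# Run on the instance as Administrator.
# Find the largest size the partition can grow to, then extend it.
$size = Get-PartitionSupportedSize -DriveLetter C
Resize-Partition -DriveLetter C -Size $size.SizeMax
```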
Hello everybody, I subscribed to Docker Engine - Enterprise for Windows Server 2019 and followed this video:
https://www.youtube.com/watch?v=7eObt3MSzWw&ab_channel=CloudInfrastructureServices
but I encountered the error below:
```
Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
False   Maybe          Failed    {}

Install-WindowsFeature : A prerequisite check for the Hyper-V feature failed.
1. Hyper-V cannot be installed: The processor does not have required virtualization capabilities.
```
Please advise.
I have attached private subnet 1c to a public Application Load Balancer. What will happen to the server that is in public subnet 1c? Will traffic reach that server?
I followed this blog to try the Hudi connector: [Ingest streaming data to Apache Hudi tables using AWS Glue and Apache Hudi DeltaStreamer](https://aws.amazon.com/cn/blogs/big-data/ingest-streaming-data-to-apache-hudi-tables-using-aws-glue-and-apache-hudi-deltastreamer/).
But when I start the Glue job, I always get this error log:
```
2023-03-28 12:39:33,136 - __main__ - INFO - Glue ETL Marketplace - Preparing layer url and gz file path to store layer 8de5b65bd171294b1e04e0df439f4ea11ce923b642eddf3b3d76d297bfd2670c.
2023-03-28 12:39:33,136 - __main__ - INFO - Glue ETL Marketplace - Getting the layer file 8de5b65bd171294b1e04e0df439f4ea11ce923b642eddf3b3d76d297bfd2670c and store it as gz.
Traceback (most recent call last):
File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 361, in <module>
main()
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 351, in main
res += download_jars_per_connection(conn, region, endpoint, proxy)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 304, in download_jars_per_connection
download_and_unpack_docker_layer(ecr_url, layer["digest"], dir_prefix, http_header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 168, in download_and_unpack_docker_layer
layer = send_get_request(layer_url, header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 80, in send_get_request
response.raise_for_status()
File "/home/spark/.local/lib/python3.7/site-packages/requests/models.py", line 941, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://prod-us-east-1-starport-layer-bucket.s3.us-east-1.amazonaws.com/6a636e-709825985650-a6bdf6d5-eba8-e643-536c-26147c8be5f0/84e9f346-bf80-4532-ac33-b00f5dbfa546?X-Amz-Security-Token=....Ks4HlEAQcC0PUIFipDGrNhcEAVTZQ%3D%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20230328T123933Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=%2F20230328%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=c28f35ab3b3c
Glue ETL Marketplace - failed to download connector, activation script exited with code 1
LAUNCH ERROR | Glue ETL Marketplace - failed to download connector.Please refer logs for details.
Exception in thread "main"
java.lang.Exception: Glue ETL Marketplace - failed to download connector.
at com.amazonaws.services.glue.PrepareLaunch.downloadConnectorJar(PrepareLaunch.scala:1043)
at com.amazonaws.services.glue.PrepareLaunch.com$amazonaws$services$glue$PrepareLaunch$$prepareCmd(PrepareLaunch.scala:759)
at com.amazonaws.services.glue.PrepareLaunch$.main(PrepareLaunch.scala:42)
at com.amazonaws.services.glue.PrepareLaunch.main(PrepareLaunch.scala)
```
I guess the root cause is one of these:
1. The Glue job cannot pull the connector image from AWS Marketplace.
2. The connector image cannot be stored in the S3 bucket.
So I tried these fixes:
1. Granting permissions to the job's IAM role: I attached `AWSMarketplaceFullAccess, AmazonEC2ContainerRegistryFullAccess, AmazonS3FullAccess`, which I would think is definitely enough.
2. Making the S3 bucket public: I turned off `Block public access` on the related S3 bucket.
But even after doing this, I still get the same error. Can someone offer any suggestions?
Hello, I would like to expand a partition on an instance, but for some reason the growpart command does not exist on this instance.
I'm following the usual procedure from https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html, which has worked on other instances in the past, but on the current one I fail at step 2c because the "growpart" command apparently does not exist on my instance?!?
This is the current situation at this instance:
```
$ sudo lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
`-xvda1 202:1    0   5G  0 part /
xvdb    202:16   0  15G  0 disk
`-xvdb1 202:17   0  15G  0 part /IOL
```
So the xvda drive has already been increased from 5 to 8 GB.
But when I try to enlarge it using growpart, the result looks like this:
```
$ sudo growpart /dev/xvda 1
sudo: growpart: command not found
```
And indeed, there is obviously none:
```
$ which growpart
/usr/bin/which: no growpart in (/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin)
```
Maybe you can imagine the look on my face when I saw this message? 8:o
Where did it go?! Are there alternative ways to grow the partition to the full drive size without growpart?
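On Amazon Linux and other RHEL-family distros, `growpart` ships in the `cloud-utils-growpart` package, which may simply be missing from this instance. A sketch (the filesystem-resize step depends on what `/` is formatted with — check with `df -T`):

```shell
# Install the package that provides growpart.
sudo yum install -y cloud-utils-growpart

# Grow partition 1 of /dev/xvda into the new space,
# then grow the filesystem on it.
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1    # for ext2/ext3/ext4
# sudo xfs_growfs -d /       # for XFS, instead of resize2fs
```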
I received an email from Amazon informing me that the instance was scheduled for retirement. After I started the instance again, I found that the network interface, security group, and Elastic IP information associated with the instance could no longer be found. (The instance was stopped by Amazon.)
How can I get this information back?
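One place the old associations may still be visible is CloudTrail, which records API activity for roughly the last 90 days. A sketch for listing recent events that touched the instance (the instance ID is a placeholder — substitute your own):

```shell
# Look up recent API events referencing the instance; the event
# records may show which ENI, security group, and Elastic IP were
# attached before the stop, and what detached them.
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=ResourceName,AttributeValue=i-0123456789abcdef0 \
  --max-results 50
```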