Questions tagged with Amazon EC2
I've created a new EBS volume, formatted it as ext4, and mounted it to an Ubuntu instance. There is no significant read/write load, about 1% of the volume is used, nothing special at all. After 1 week, it reported an I/O error and was made read-only. The following details were reported in syslog:

```
end_request: I/O error, dev xvdf, sector 0
Buffer I/O error on device xvdf, logical block 0
lost page write due to I/O error on xvdf
EXT4-fs error (device xvdf): ext4_journal_start_sb:327: Detected aborted journal
EXT4-fs (xvdf): Remounting filesystem read-only
EXT4-fs (xvdf): previous I/O error to superblock detected
```

**This is the second time this has happened in the last 2 weeks.** The volume above replaced an older volume that had the same issue. The server has been in service for a few years with no such issues in the past. What might be going wrong? What could I try?
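A first diagnostic pass might look like the following, assuming the volume is still attached as `/dev/xvdf` (the device name, mount point, and volume ID below are placeholders; adjust them to your setup):

```shell
# Check the kernel log for the underlying error (device name is an assumption)
dmesg | grep -i xvdf

# Ask AWS whether the volume itself is reporting impairment
aws ec2 describe-volume-status --volume-ids vol-0123456789abcdef0

# Snapshot the volume BEFORE any repair so the current state is preserved
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "pre-fsck backup"

# Unmount, then run a read-only check first; repair only after reviewing it
sudo umount /mnt/data
sudo fsck.ext4 -n /dev/xvdf
sudo fsck.ext4 -y /dev/xvdf
```

If `describe-volume-status` shows the volume as impaired, recreating it from the snapshot in a different Availability Zone is one way to rule out underlying hardware.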
Hi! We currently have 5 Elastic IPs. I am trying to request 1 additional Elastic IP for a total of 6. However, I am confused about what I should put in the "New limit value" field. I'm afraid that if I put 1, our EIP quota will be reduced to just 1, and that if I put 6, we will end up with a total of 11 EIPs (5+6). Sorry for my ignorant question, and thank you for answering. :)
Hi: I had a snapshot of an old server that I needed to spin back up, so I created an image from the snapshot, then created an EC2 instance from the image. Now Windows won't activate. The activation page in Windows does show the last 5 digits of the Product Key (8XDDG), but says it cannot activate, giving error code 0xC004F074. It previously ran Windows Server 2016 Datacenter and still does, i.e. I just re-created the EC2 instance from the image; I did not change the Windows version. More info that might be helpful: I've followed the guide at https://aws.amazon.com/premiumsupport/knowledge-center/windows-activation-fails/ to activate manually, but in step 6, telnet to 169.254.169.250 fails, and the PowerShell command `Test-NetConnection 169.254.169.250 -Port 1688` also fails. In step 7 I've verified that the registry keys are correct for KeyManagementServiceName (169.254.169.250) and KeyManagementServicePort (1688). Telnet and Test-NetConnection to .251 also fail. The security group assigned to the instance has the default outbound rule that allows everything. Thank you!
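When connections to 169.254.169.250/.251 fail like this, a common cause is that the static routes to those link-local KMS addresses are missing from the instance's routing table. A sketch of how to check and re-add them from an elevated command prompt (the gateway 10.0.0.1 is a placeholder; use the default gateway of your primary network interface):

```shell
:: Show whether a route to the KMS endpoint exists
route print 169.254.169.250

:: Re-add persistent routes via the subnet's default gateway
:: (10.0.0.1 is an assumed value, not from the original post)
route -p add 169.254.169.250 mask 255.255.255.255 10.0.0.1
route -p add 169.254.169.251 mask 255.255.255.255 10.0.0.1
```

After the routes are in place, re-running the Test-NetConnection check from the guide should confirm whether port 1688 is now reachable.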
Hi. We're running four T3 instances in two Availability Zones in the Asia Pacific (Sydney) region. The AWS documentation on instance types (https://aws.amazon.com/ec2/instance-types/) states that the CPUs for T3 instances will be "Up to 3.1 GHz Intel Xeon Scalable processor (Skylake 8175M or Cascade Lake 8259CL)". Our t3.small and two t3.micro instances have the Cascade Lake 8259CL processor. Our t3.2xlarge instance - which does most of our work - has a Skylake 8175M processor. From my simple benchmark tests it looks like the Cascade Lake 8259CL processor is something like 2-3 times faster than the Skylake 8175M processor. I'd dearly love the t3.2xlarge instance to acquire the Cascade Lake 8259CL processor; the performance benefit for our application would be terrific! What determines the processor type selected for T3 instances? How can I get our t3.2xlarge instance onto the faster processor? Thanks for any help!
Looking at the Resource summary page for EC2, I just found out that I have VPCs, subnets, and security groups active in various AWS regions. I honestly don't remember creating them, so I wonder if they get created automatically in some way? Do I need them? I only have EC2 instances in us-east-1. Am I going to be charged for them? If so, how can I do some clean-up? Thanks
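These are most likely the default VPC (with its default subnets and security group) that AWS creates automatically in every region; they carry no charge on their own. A sketch for auditing them across all regions with the AWS CLI:

```shell
# For every region, print the ID of the default VPC (if one exists).
# Default VPCs are created automatically by AWS and are free until
# you launch billable resources (instances, NAT gateways, etc.) in them.
for region in $(aws ec2 describe-regions \
                  --query 'Regions[].RegionName' --output text); do
  echo "== $region =="
  aws ec2 describe-vpcs --region "$region" \
      --filters Name=is-default,Values=true \
      --query 'Vpcs[].VpcId' --output text
done
```

Deleting a default VPC is possible but irreversible without a support request to recreate it, so it is usually safer to leave unused ones in place.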
I have an app running on EC2 instances. Requests to the app go through an ALB. The ALB is public to the internet, but I would like to restrict access so that only IPs from Mexico can reach the app. Which AWS service is recommended for our architecture?
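AWS WAF attached to the ALB is a common fit for this: a web ACL with a geo-match rule can allow only requests whose source country is Mexico. A minimal sketch with the AWS CLI (the ACL name, metric names, and ARNs are placeholders):

```shell
# Create a regional web ACL that blocks by default and
# allows only requests originating from Mexico (country code MX).
aws wafv2 create-web-acl \
  --name mexico-only-acl \
  --scope REGIONAL \
  --default-action Block={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=mexicoOnly \
  --rules '[{
    "Name": "allow-mx",
    "Priority": 0,
    "Statement": { "GeoMatchStatement": { "CountryCodes": ["MX"] } },
    "Action": { "Allow": {} },
    "VisibilityConfig": {
      "SampledRequestsEnabled": true,
      "CloudWatchMetricsEnabled": true,
      "MetricName": "allowMx"
    }
  }]'

# Associate the web ACL with the ALB
aws wafv2 associate-web-acl --web-acl-arn <acl-arn> --resource-arn <alb-arn>
```

Note that geo-matching is based on IP geolocation, which is approximate; users behind VPNs or foreign proxies may be misclassified.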
I cannot make my website live. The instance is running perfectly, and I've followed all the necessary steps, but I couldn't find the problem. Is there something in the settings I need to change to make it public?
One of the few things I do not like about the AWS EC2 service is that all available images (AMIs) used to launch new instances have a single partition, with the root filesystem mounted on it. In my opinion this approach is not appropriate, and there are also a few security standards that require a specific partitioning scheme. Is there any documentation about creating an instance/AMI (with Terraform, CloudFormation, or Packer) with a good partitioning scheme?
Hey all! Hope you are doing well. I have been trying to write a query service for some internal databases in my VPC. My current setup is API Gateway with a Lambda that queries the database, which works fine, but unfortunately I ran into two issues:

- The API Gateway default timeout is 30s, which is not very long for queries.
- The Lambda response size limit is 6 MB, which is fine in general but not suitable for the biggest queries.

Are there any serverless services I can use to solve this problem? I do require a custom domain and authentication. Some solutions I thought of were:

- Chunking requests, which should work, but I think 30s is still not very long. It is a temporary solution for now.
- Using an ALB as an "API" to trigger Lambdas, which would fix the timeout, but the response size limit is still 6 MB.
- Hosting my own API on an EC2 instance/container, which I can do, but I prefer serverless solutions.
- Using WebSockets, but it seems harder to attach existing apps to a WebSocket API than to a REST API.

If somebody has some input I would really appreciate it! Thanks in advance.
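One pattern worth considering for the size limit (an assumption on my part, not from the original post): have the Lambda write large query results to S3 and return only a presigned URL, so the response body stays tiny regardless of result size. The equivalent CLI call for generating such a URL (bucket and key are placeholders):

```shell
# Generate a time-limited (15 min) download URL for a result object
aws s3 presign s3://my-query-results/result-1234.json --expires-in 900
```

Combined with an asynchronous flow (submit the query, poll or get notified when the object exists), this also sidesteps the 30s API Gateway timeout, since the initial request returns immediately.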
Hello, we just transformed a system into a multi-tenant one and encountered an issue with Elastic Beanstalk's automatic updates. There is one backend per tenant, each hosted in Beanstalk. We have a shared load balancer that uses one target group per instance to forward traffic there based on custom ports. Everything worked fine until a platform update was applied to all Beanstalk instances: it removed all targets from the target groups, so no instance was available after that. The targets had to be manually added back to each target group, which is not a sustainable solution in the long term. I could set the Beanstalk update window to Sunday and then have a Lambda function verify that the target groups are not empty afterwards, re-populating them if needed, but I'd rather avoid using custom logic here. Is there a way to make the target groups always include these Beanstalk instances, even after their platform updates?
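For reference, the manual re-registration step can be scripted; a sketch of what the Lambda (or a stopgap script) would run per tenant, with placeholder ARN, instance ID, and port:

```shell
# Re-register an instance with its tenant's target group on the custom port
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:ap-southeast-2:111111111111:targetgroup/tenant-a/abc123 \
  --targets Id=i-0123456789abcdef0,Port=8081

# Verify the target passes health checks again
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:ap-southeast-2:111111111111:targetgroup/tenant-a/abc123
```

This is the custom-logic fallback the post hopes to avoid, but it documents exactly what the platform update undoes.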
Do I need to be in the Management account to use Systems Manager / Patch Manager to patch instances across an Organization?
I see the blog posts about being able to patch across an AWS Organization; I'm just wondering whether you need to do that from the Management account or whether you can do it from a different account. So far it seems like you need to do it from the Management account, and it looks like you need to enable a few other services (like Config), which I can do; but I already have a delegated account for Config, so I would need to move that back to the Management account if I have to patch from there.
How can I configure the same pool of instances for multiple ALBs, using target groups and Auto Scaling groups?
My SaaS hosts multiple domains. Each Application Load Balancer can host up to 20 domains, so I have to create multiple load balancers for my application (1 for each 20 domains). What I want is a single pool of EC2 instances available to autoscale for traffic coming from these multiple ALBs. Here is what I did to avoid the complexity (and additional cost) of setting up an NLB:

ALB-1 -> Auto Scaling Group-1 -> Target Group-1 -> Instance-1
ALB-2 -> Auto Scaling Group-2 -> Target Group-2 -> Instance-1

I.e. both target groups have the same instance, serving different load balancers and autoscalers. The reason I do this is: why should I use separate pools of instances that would sit idle? Sharing and autoscaling one pool is more efficient. Would this design work? What types of issues would I run into?

**Edit: the problem is solved by using a single ASG for both target groups, as follows:**

Auto Scaling Group-1 -> Target Group-1 and Target Group-2
ALB-1 -> Target Group-1 -> Instance-1
ALB-2 -> Target Group-2 -> Instance-1

**Here is a key follow-up question:** Are there implications if I place the same instance "Instance-1" into 2 target groups associated with 2 ALBs? The primary reason I do this is that the first ALB/ASG/Target Group/Instance is configured by Elastic Beanstalk. I target the same instances from the other ALBs because, if I manually add a separate instance to the 2nd target group, that instance will not have the application backend stack that Elastic Beanstalk auto-installs. Also, at what point would the network traffic saturate such that I need to add an NLB to this design? Thanks!
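The single-ASG setup described in the edit can be wired up from the CLI: attaching the second target group to the existing Auto Scaling group makes the ASG register every instance it launches (or terminates) with both ALBs automatically. A sketch, with placeholder ASG name and target group ARN:

```shell
# Attach a second target group to the existing Auto Scaling group so that
# instances are kept registered with both ALBs through scale events.
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-beanstalk-asg \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/alb2-tg/def456
```

One caveat worth checking: if the ASG is managed by Elastic Beanstalk, out-of-band attachments like this may be reverted when Beanstalk rebuilds the environment, so it is safer to apply the change through the environment's configuration where possible.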