Questions tagged with Availability
AWS us-west-2 server connections to DB have become slow
We got an email from AWS saying they would replace the underlying hardware for one of our servers, and that did happen. However, since then we are seeing latency when connecting to the DB from that server. Even with a command-line SQL client there is a visible difference in how quickly the SQL prompt comes back: the other server returns quickly, but the server that was migrated to new hardware shows a noticeable lag. In fact, the server showing the issue is in the same Availability Zone as the RDS instance. This is in us-west-2. Is anyone else seeing this? Also, what are the ways to debug and find the root cause? Nothing has changed on our side for at least two months; the only new event is the change of underlying hardware.
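One way to narrow down where the lag comes from is to time each stage of the connection separately from both servers; the hostname and credentials below are placeholders:

```shell
# RDS endpoint (placeholder)
DB_HOST=mydb.xxxxxxxx.us-west-2.rds.amazonaws.com

# DNS resolution time, in case the resolver path changed with the new host
time dig +short "$DB_HOST"

# TCP handshake only, to separate network latency from login overhead
time nc -zv "$DB_HOST" 3306

# Full client connect plus a trivial query (the whole login path)
time mysql -h "$DB_HOST" -u myuser -p -e 'SELECT 1;'
```

Comparing the numbers between the healthy server and the migrated one should show whether the extra time is spent in DNS, the network round trip, or the authentication handshake.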
Cannot modify RDS SQL Server Web instance class in us-east-1e
We've been trying for weeks to change the RDS instance class from `db.t2.medium` to `db.t3.medium` or `db.t3.large`, but we keep getting the same error. Does anyone know when t3 instances will be available in us-east-1e? Thank you.

```
Engine: SQL Server Web Edition
Engine version: 12.00.6433.1.v1
Region: us-east-1e
```

Error message:

```
We're sorry, your request to modify DB instance <instance name> has failed. Cannot modify the instance class because there are no instances of the requested class available in the current instance's availability zone. Please try your request again at a later time.
```
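You can check from the CLI which instance classes RDS can actually offer for this engine version, and in which AZs:

```shell
# List t3 instance classes orderable for SQL Server Web at this engine
# version, together with the AZs where each class is offered.
aws rds describe-orderable-db-instance-options \
  --engine sqlserver-web \
  --engine-version 12.00.6433.1.v1 \
  --region us-east-1 \
  --query 'OrderableDBInstanceOptions[?contains(DBInstanceClass, `t3`)].[DBInstanceClass, AvailabilityZones[].Name]'
```

If us-east-1e never shows up for the t3 classes, modifying in place will keep failing; one workaround is to move the instance to another AZ first.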
Amazon GameLift now supports AWS Local Zones
Hello GameLift Devs, Today, the GameLift team is excited to announce the general availability of AWS Local Zones. With this update, you can seamlessly provide gameplay experiences across 8 new AWS Local Zones in Chicago, Houston, Dallas, Kansas City, Denver, Atlanta, Los Angeles, and Phoenix. Along with the updated support for Local Zones, we are adding new instance types specifically supported in the various Local Zones, including C5d and R5d instance types. Additionally, we are adding support for the next-generation [C6a](https://aws.amazon.com/ec2/instance-types/c6a/) and [C6i](https://aws.amazon.com/ec2/instance-types/c6i/) instance types. Amazon EC2 C6i instances are powered by 3rd Generation Intel Xeon Scalable processors, deliver up to 15% better price performance compared to C5 instances for a wide variety of workloads, and are ideal for highly scalable multiplayer games. You can find updated pricing on the [GameLift pricing page](https://aws.amazon.com/gamelift/pricing/) as well as in the [AWS Pricing Calculator](https://calculator.aws/#/addService/GameLift). For more information, please refer to our [Release Notes](https://docs.aws.amazon.com/gamelift/latest/developerguide/release-notes.html#release-notes-summary) and [What’s New post](https://aws.amazon.com/about-aws/whats-new/2022/08/amazon-gamelift-supports-aws-local-zones/). Mark Choi, GameLift PM
Using an NLB for HA
Hi Team, In my architecture I will use an NLB: API GW => VPC Link => NLB => ECS Fargate. For high availability in the prod environment, do I need to spin up 2 NLBs, one in each AZ, so that the NLB is not a single point of failure? Or is the AWS NLB highly available by default, so that I need only one NLB in my architecture for the whole region? Thank you.
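For reference, a single NLB spans multiple AZs when you attach one subnet per AZ at creation time; the subnet IDs below are placeholders:

```shell
# One NLB with a node in each of two AZs (subnet IDs are placeholders)
aws elbv2 create-load-balancer \
  --name my-nlb \
  --type network \
  --scheme internal \
  --subnets subnet-0aaa1111bbb22222c subnet-0ccc3333ddd44444e
```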
Do health check requests coming from the ELB cost money?
I have an Application Load Balancer that routes traffic to two EC2 instances in a target group. The load balancer periodically checks the availability of the EC2 instances by doing a health check: it makes a request to a health check endpoint, and when it gets status code 200 it knows the instance is still functioning. The default interval for these health checks is 30 seconds, but I thought that was a bit too long, so I've set the health check interval to 5 seconds instead. This made me wonder: do these health checks cost money? Is it more expensive to do health checks every 5 seconds as opposed to the default 30 seconds? I would also like to know what the optimal health check interval would be when processing requests for a social media website. Thanks
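For anyone looking for the knob itself, the interval is a target group setting; the ARN below is a placeholder:

```shell
# Set the health check interval on an ALB target group to 5 seconds
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0123456789abcdef \
  --health-check-interval-seconds 5
```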
Failed to send request - Lambda can't connect to External API
I have two Lambda functions. One connects to an ECS instance using a VPC. The second should also connect to that same ECS instance and to an external API (api.twitter.com). The first function works perfectly; I've been able to write and read files within the instance. However, the second can connect to the API without the VPC, but as soon as the VPC is added it gives me:

```
"errorMessage": "Failed to send request: HTTPSConnectionPool(host='api.twitter.com', port=443)
```

**About my VPC**
- I have 2 public subnets and 2 private subnets, in two separate Availability Zones
- I have an Internet Gateway attached to my VPC
- I have 2 route tables (public and private)
- My Internet Gateway is routed in my public route table
- I also have a NAT Gateway in my public subnet
- That NAT Gateway is routed in my private route table

I have gone back and forth with all the connections highlighted above trying to solve this issue. I need my function to access ECS and also connect to the internet API. Please help me.
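One thing worth verifying is that the route table actually associated with the Lambda's subnets has a default route to the NAT gateway; the subnet ID below is a placeholder:

```shell
# Show the routes for the route table associated with a given private subnet.
# A working setup should include 0.0.0.0/0 -> nat-xxxxxxxx here.
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-0private11112222 \
  --query 'RouteTables[].Routes[].[DestinationCidrBlock,NatGatewayId,GatewayId]'
```

Also note that if the Lambda is attached to the public subnets instead of the private ones, it will have no path to the internet at all, since Lambda ENIs never get public IPs.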
Which OpenSearch instance type should I choose for a new web application with little data?
Amazon recommends running an OpenSearch domain with 3 data nodes and 3 master nodes distributed across 3 AZs. The lowest instance type that is still suitable for a production environment is `t3.medium.search`. I ran this setup for about 1.5 days, using `t3.small.search` instead of `medium`. When I looked at the bill afterwards, I could see that running it for merely 1.5 days already cost 9 dollars. That's way too expensive for me. According to the [amazon cost calculator](https://calculator.aws/#/addService/OpenSearchService), the monthly cost for this setup would have been well over 350 dollars.

My web application will use the OpenSearch server only for serving autocomplete suggestions and finding documents whose coordinates fall within a certain geographical area. When the web application is launched, the OpenSearch server will start out with only 5 indexes containing a small number of documents, no more than 200 MB in total. Of these, only one index is used to perform geospatial queries on. I don't think I need a t3.medium instance for this.

So my question is: what kind of OpenSearch instance can I start out with? The setup needs to be economical, because it will take a while before my web application starts making money. I was thinking about setting up a `t2.micro.search` domain with 2 micro master instances and 2 micro data instances. That would cost me about 50 dollars a month in total. Could this be a good setup to start with? If so, I would like to know how I can set up a domain that uses `t2.micro.search` instances. When I go to the domain creation page in my AWS console, I'm not able to select `t2.micro.search` from the instance type list. The smallest I can select is `t3.small.search`, but that's already too expensive for me, because I want to run nodes in at least two Availability Zones.

I could opt for running only one `t3.small.search` master node and one data node, which would cost 50 dollars a month as well, but then the domain is no longer highly available. If the Availability Zone it sits in goes down, I can't serve autocomplete suggestions anymore, nor can I return documents based on their coordinates. I'd love to hear your opinions on this. Thank you
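For what it's worth, a minimal two-AZ layout with small data nodes can be created from the CLI; the domain name, engine version, and sizes below are illustrative:

```shell
# Two t3.small.search data nodes spread across two AZs, no dedicated masters
aws opensearch create-domain \
  --domain-name my-autocomplete \
  --engine-version OpenSearch_2.11 \
  --cluster-config "InstanceType=t3.small.search,InstanceCount=2,ZoneAwarenessEnabled=true,ZoneAwarenessConfig={AvailabilityZoneCount=2}" \
  --ebs-options "EBSEnabled=true,VolumeType=gp3,VolumeSize=10"
```

Dropping the dedicated master nodes roughly halves the node count compared to the recommended production layout, at the cost of less resilience during node failures.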
AWS Cloud Design - Please review
I am working on building a design for an enterprise and am looking for input.

**Use Case Scenario:** An organization runs its system on a LAMP stack in an on-prem data center. They are building a new service using similar technologies and expect rapid growth in traffic in the next 1 to 3 months. The organization wants the system to address the following:
- Scalability
- Self-healing infrastructure
- Lack of a DR capability
- High availability, at low cost
- Security of data at rest and in transit
- Data migration from on-prem

**Objective:** Migrate to a new AWS platform while delivering business value along the way. Build a scalable, elastic, and redundant architecture that allows the application to organically scale with the increase in its use.

**Solution Options:**

System Design #1
![System Design #1](https://repost.aws/media/postImages/original/IMFwm7NJSIRyqpCWXnkJjlfw)

System Design #2
![System Design #2](https://repost.aws/media/postImages/original/IMetkTOHgiT3Ke82P1Ji0vbw)

Please review whether the services have been used wisely and provide your comments.
RDS Postgres + cluster + pglogical?
I have an on-prem database which I am trying to replicate (using `pglogical`) to RDS, preferably an RDS cluster. I was able to replicate to a "Single DB instance" by setting `rds.logical_replication` in the parameter group, but my question is whether I can replicate to a "Multi-AZ DB Cluster"'s writer. When I create the cluster, I notice that the setting `rds.logical_replication` is not available in the "DB Cluster Parameter Group". Is there an equivalent setting for a cluster? I was able to add pglogical to `shared_preload_libraries` in the DB Cluster Parameter Group, but I'm not sure whether all of the settings I need are available. Since starting down this journey, it turns out that Postgres 14.2 is not available for the cluster, so my question is not as critical. But I'd still like to know, because we might want to use a cluster once 14.2 is available. Thanks for any help.
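One way to see which replication-related parameters the cluster parameter group actually exposes is to list them from the CLI; the group name below is a placeholder:

```shell
# List parameters in the cluster parameter group whose names mention
# "logical", with their current values and modifiability.
aws rds describe-db-cluster-parameters \
  --db-cluster-parameter-group-name my-cluster-pg \
  --query "Parameters[?contains(ParameterName, 'logical')].[ParameterName,ParameterValue,IsModifiable]"
```

If `rds.logical_replication` doesn't appear as modifiable there, that would confirm the console is not simply hiding the setting.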
m1.small capacity issues in use1-az6
There seems to be some issue with capacity, at least for m1.small instances, in the use1-az6 Availability Zone. I had several scheduled retirements for instances, and when I went to start the instances again after stopping them to clear the retirement, I got InsufficientInstanceCapacity errors. It hadn't occurred to me to create capacity reservations before; we're fairly small and have never had capacity issues before. I have now created a capacity reservation with as many m1.small as it would let me (which is actually still fewer than are currently running). Mainly I'm posting here to make sure folks at AWS are aware of this issue and are hopefully working to reallocate capacity to alleviate it. I had to switch some instance types around to get my required instances running again after stopping them for the retirements, and I'd like to get them back to their proper types ASAP. Also, you never know when more retirements will come in.
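For reference, a reservation pinned to the affected AZ ID can be created like this; the instance count is illustrative:

```shell
# Reserve m1.small capacity in use1-az6 ahead of the next retirement window
aws ec2 create-capacity-reservation \
  --instance-type m1.small \
  --instance-platform Linux/UNIX \
  --availability-zone-id use1-az6 \
  --instance-count 4
```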
[EC2 ICE] Is c5a series more prone to capacity issues? Is there an EC2 capacity heatmap?
Today I encountered an Insufficient Capacity Error while launching 3x c5a.8xlarge in the Sydney region (AZ apse2-az2). After some retrying, I managed to launch c5a.12xlarge in the same AZ. A month or two ago I encountered the same issue with c5a.8xlarge in the Mumbai region, and managed to switch some instances to c5 (without the "a"). My use case is online events: I need to spin up a number of EC2 instances on short notice. The max so far has been about 10 EC2 instances (I don't think that's a big number for AWS). I am using different regions depending on the location of the end user. I am now using the c5a series (usually c5a.8xlarge) instead of c5 to save on cost. However, if another instance series is more reliable (i.e. not getting ICE'd), I would consider changing to a different series. Is the c5a series more prone to ICE? Is there a 'capacity heatmap' available, where I can see EC2 availability across instance series, regions, and AZs? Ideally I would have access to a 'weather forecast' showing current and expected capacity. In my current architecture I can change instance series / AZ a few days in advance of an event, but it's stressful to make changes within the hour before my customer's event starts.
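As far as I know there is no public capacity heatmap, but the CLI can at least show in which AZs a type is offered at all (offered is not the same as capacity being available right now):

```shell
# List the AZ IDs in ap-southeast-2 where c5a.8xlarge is offered
aws ec2 describe-instance-type-offerings \
  --location-type availability-zone-id \
  --filters Name=instance-type,Values=c5a.8xlarge \
  --region ap-southeast-2 \
  --query 'InstanceTypeOfferings[].Location'
```

For guaranteed capacity at a known time, an On-Demand Capacity Reservation created a few days before the event is the mechanism AWS provides.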
Our CloudSearch Domain now has 0 instances and no way to increase them
We have a CloudSearch 2011 domain which has been operating fine, but recently lost all its instances. It now reports "Your domain is deployed on a total of 0 search.m2.2xlarge instance". As 2011 is self-managing, any ideas how to get it to allocate a node at this point? We havv