Encountering double latency after migrating from EC2 to Elastic Beanstalk


Our services were initially deployed manually on EC2 instances. We recently migrated to Elastic Beanstalk while maintaining the same number of instances and configuration.

Since the migration, we've observed a significant increase in API latency, in some cases almost doubling.

Here's a summary of our previous and current configurations:

RDS: default VPC, private subnet.

Old EC2 Configuration:

  1. 4 EC2 instances (c5n.4xlarge) in the default VPC and public subnet.
  2. Load Balancer in the default VPC and public subnet.

Elastic Beanstalk Configuration:

  1. 4 EC2 instances (c5n.4xlarge) in the production VPC and private subnet.
  2. Load Balancer in the production VPC and public subnet.
  3. VPC peering to connect to the database

Any insight into what could be causing this increase would be appreciated.

Thanks,

2 Answers

The increased latency you're experiencing after migrating to Elastic Beanstalk could be due to several factors related to your new network configuration and deployment setup. Here are some potential causes and suggestions for investigation:

  1. Network Path: Your new setup involves VPC peering and private subnets, which may introduce additional network hops compared to your previous configuration. This could contribute to increased latency, especially if the VPC peering connection isn't optimized.

  2. Cross-AZ Communication: If your Elastic Beanstalk instances and RDS are in different Availability Zones (AZs), this could lead to higher latency. Cross-AZ communication typically incurs additional latency compared to same-AZ communication.

  3. Load Balancer Configuration: Ensure that your new Load Balancer in the production VPC is properly configured. Check its health checks, timeout settings, and ensure it has sufficient capacity to handle your traffic.

  4. Instance Network Performance: While you're using the same instance type (c5n.4xlarge), verify that the network performance settings are identical to your previous setup. Ensure that Enhanced Networking is enabled and the Elastic Network Adapter (ENA) is properly configured.

  5. Application Configuration: Review your application's configuration in Elastic Beanstalk. Ensure that any environment variables, connection strings, or other settings are optimized for the new environment.

  6. Database Connection: Check the connection between your Elastic Beanstalk instances and the RDS database. Ensure that the VPC peering is correctly set up and that there are no unnecessary network hops or security group restrictions.

  7. Monitoring and Metrics: Use Amazon CloudWatch to monitor the performance of your Elastic Beanstalk environment, including CPU utilization, network performance, and application-specific metrics. This can help identify bottlenecks.

  8. Elastic Beanstalk Specific Issues: Sometimes, Elastic Beanstalk's additional layer of abstraction can introduce overhead. Review your Elastic Beanstalk configuration to ensure it's optimized for your use case.
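To put numbers on the database path specifically (points 2 and 6 above), here's a minimal sketch that times raw TCP connects from an instance to the DB endpoint. The endpoint and port below are hypothetical placeholders; run it from one of the Beanstalk instances and compare against a run from the old environment:

```python
# Minimal sketch: time repeated TCP connects to the RDS endpoint to isolate
# network latency from application overhead. Endpoint below is a placeholder.
import socket
import statistics
import time

def tcp_connect_latency_ms(host, port, samples=20):
    """Time repeated TCP connects to host:port, returning milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection closed immediately; we only time the handshake
        timings.append((time.perf_counter() - start) * 1000)
    return timings

def summarize(timings):
    """Return median, approximate p95, and max of a latency sample."""
    ordered = sorted(timings)
    return {
        "p50_ms": statistics.median(ordered),
        "p95_ms": ordered[min(len(ordered) - 1, int(len(ordered) * 0.95))],
        "max_ms": ordered[-1],
    }

if __name__ == "__main__":
    # Replace with your real RDS endpoint and engine port.
    print(summarize(tcp_connect_latency_ms("my-db.example.rds.amazonaws.com", 3306)))
```

Connect time alone won't capture query execution, but if the p50 here roughly matches the per-request latency increase you're seeing, the network path (peering or cross-AZ) is the likely culprit.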

To troubleshoot further:

  1. Use tools like AWS X-Ray to trace requests and identify where the latency is occurring.
  2. Temporarily test with instances in public subnets to isolate whether the private subnet configuration is contributing to the latency.
  3. Check application logs for any warnings or errors that might indicate issues with connections or resource utilization.
  4. Consider using Amazon ECS or EKS for more granular control over your container deployments if Elastic Beanstalk proves to be a bottleneck.
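Related to the cross-AZ point above, the AZ placement of instances versus the database can also be checked programmatically rather than by eye. A hedged sketch using boto3 (assumes default credentials and region; the environment and DB names are placeholders, and the filter relies on the `elasticbeanstalk:environment-name` tag that Beanstalk applies to its instances):

```python
# Sketch: list Beanstalk instances whose AZ differs from the RDS instance's AZ.
# Assumes boto3 and default AWS credentials; resource names are placeholders.

def cross_az_instances(instance_azs, db_az):
    """Given {instance_id: az} and the DB's AZ, return the mismatched IDs."""
    return sorted(iid for iid, az in instance_azs.items() if az != db_az)

def fetch_azs(eb_env_name, db_identifier):
    import boto3  # deferred so the pure helper above has no AWS dependency
    ec2 = boto3.client("ec2")
    rds = boto3.client("rds")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:elasticbeanstalk:environment-name",
                  "Values": [eb_env_name]}]
    )["Reservations"]
    instance_azs = {
        inst["InstanceId"]: inst["Placement"]["AvailabilityZone"]
        for res in reservations
        for inst in res["Instances"]
    }
    db = rds.describe_db_instances(DBInstanceIdentifier=db_identifier)
    return instance_azs, db["DBInstances"][0]["AvailabilityZone"]

if __name__ == "__main__":
    azs, db_az = fetch_azs("my-eb-env", "my-db-instance")
    print("cross-AZ instances:", cross_az_instances(azs, db_az))
```

Any instance IDs printed are talking to the database across an AZ boundary on every query.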

Remember to test any changes in a staging environment before applying them to production. If the issue persists after investigating these areas, it may be worthwhile to engage AWS support for a more in-depth analysis of your specific setup.
Sources
High Latency When Querying Aurora MySQL from Cross-AZ EC2 Instance | AWS re:Post
Latency issue in Peered connection | AWS re:Post
Troubleshoot high response times and ECS latency issues | AWS re:Post
Troubleshoot latency on calls or requests to Amazon ECS tasks | AWS re:Post

answered 7 days ago

Hi Vinit,

Latency jumps after migrating from EC2 to Elastic Beanstalk can definitely be frustrating — especially when instance types and counts remain unchanged. Given your setup, here are several areas worth double-checking and tuning:

  1. VPC Peering Overhead: Even though VPC peering is fast and private, it still adds an extra hop compared with staying in the same VPC. If your old EC2 setup and RDS lived in the same VPC (and now don't), you're likely incurring additional latency on every DB query.

Recommendation: Try running a temporary EC2 instance in the new Beanstalk VPC and benchmark RDS query latency to isolate VPC peering as a factor.

  2. AZ Mismatch: Are your RDS and Beanstalk EC2 instances in different Availability Zones? Cross-AZ traffic isn't just more expensive; it's also slower.

Recommendation: Check subnet mappings and Availability Zones across both environments. If the new architecture scatters traffic across multiple AZs, consider pinning resources to the same AZ for performance-critical paths.

  3. Load Balancer Health & Settings: Elastic Beanstalk may have spun up a new ALB with different timeout thresholds or listener rules. Double-check health checks, idle timeouts, and routing rules.

  4. Enhanced Networking + ENA Drivers: Make sure your c5n.4xlarge instances in Beanstalk are leveraging enhanced networking. While the instance type supports it, the AMI and configuration must explicitly enable the Elastic Network Adapter (ENA).

  5. Beanstalk Overhead or App Config Drift: Elastic Beanstalk adds a layer of abstraction. Make sure your environment variables, container settings, and startup scripts match your original EC2 deployment.

Tip: Watch for things like connection pooling configs or hardcoded IPs in older EC2 scripts that may not map well to Beanstalk's lifecycle.

  6. Tracing & Telemetry: Use AWS X-Ray to trace API calls across the new architecture. Also enable CloudWatch custom metrics and alarms for:

  - Latency (across all hops)
  - Database connection time
  - Response time variance by AZ
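Those per-AZ measurements can be published to CloudWatch as custom metrics so alarms can compare AZs directly. A minimal sketch (assumes boto3 and credentials; the namespace and dimension name are illustrative choices, not a fixed convention):

```python
# Sketch: publish per-AZ latency as a CloudWatch custom metric so alarms can
# compare AZs side by side. Namespace and dimension names are illustrative.

def latency_datum(metric_name, latency_ms, az):
    """Build one CloudWatch metric datum tagged with the instance's AZ."""
    return {
        "MetricName": metric_name,
        "Dimensions": [{"Name": "AvailabilityZone", "Value": az}],
        "Unit": "Milliseconds",
        "Value": latency_ms,
    }

def publish(data, namespace="MyApp/Latency"):
    import boto3  # deferred so latency_datum stays testable without AWS
    boto3.client("cloudwatch").put_metric_data(Namespace=namespace,
                                               MetricData=data)

if __name__ == "__main__":
    publish([latency_datum("DbConnectMs", 12.4, "us-east-1a")])
```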

What You Can Try Now:

  1. Launch a temporary EC2 instance in the Beanstalk VPC and test latency to RDS directly.
  2. Run a simple load test against the Beanstalk ALB (versus the old EC2 load balancer) to compare head-to-head.
  3. Validate whether autoscaling, load distribution, or target response times changed during the migration.
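The head-to-head load test mentioned above can be sketched with the standard library alone. The two URLs are placeholders, and a real comparison should hit the same endpoint on both load balancers with identical payloads after a warm-up run:

```python
# Minimal concurrent HTTP timing harness for comparing two load balancer
# endpoints head-to-head; the URLs below are placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_requests(url, n=50, concurrency=5):
    """Issue n GET requests with limited concurrency; return sorted ms timings."""
    def one(_):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return (time.perf_counter() - start) * 1000
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sorted(pool.map(one, range(n)))

def percentile(ordered, q):
    """Approximate percentile of an already-sorted sample."""
    return ordered[min(len(ordered) - 1, int(len(ordered) * q))]

if __name__ == "__main__":
    old = timed_requests("http://old-lb.example.com/health")
    new = timed_requests("http://new-alb.example.com/health")
    print("p95 old/new (ms):", percentile(old, 0.95), percentile(new, 0.95))
```

Comparing p95 rather than the mean keeps a few slow outliers from hiding a systematic shift.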

If latency is mission-critical and Beanstalk doesn’t give you the control you need, you might consider migrating to ECS Fargate or EKS, where you can define the networking layer in more granular terms.

Let me know if you'd like help interpreting X-Ray traces or performance metrics!

— (Shared to help others who may face performance shifts post-Beanstalk migration.)

answered 5 days ago
