What are the optimal settings for using Apache or NGINX as a backend server for ELB?

I want to use an Amazon Elastic Compute Cloud (Amazon EC2) instance running Apache or NGINX as my backend server for Elastic Load Balancing (ELB). But, I don't know what settings to use for the best performance.

Resolution

The best settings for a load balancer depend on your use case. For the best performance, analyze the response times of your backend application and the requirements of your clients.

If the backend application is running Apache or NGINX, then review the following parameters:

Client header timeout (Timeout in Apache; client_header_timeout in NGINX):
Set your application timeout to a value that's higher than the idle timeout value of the load balancer. This makes sure that the load balancer, not the backend, closes idle connections. If the backend server terminates a connection without properly notifying the load balancer, then you might receive an HTTP 504 error.
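
For example, with the default load balancer idle timeout of 60 seconds, a backend timeout of 120 seconds satisfies this guidance. The following values are examples only; adjust them to your own idle timeout setting:

# Apache (httpd.conf)
Timeout 120

# NGINX (http or server context)
client_header_timeout 120s;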

Keep-alive (KeepAlive in Apache; keepalive_disable in NGINX):
Turn on keep-alive to reduce CPU utilization and improve response times. With keep-alive on, the load balancer doesn't need to establish a new TCP connection for every HTTP request.
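
For example, the following turns on keep-alive in Apache. In NGINX, keep-alive is on by default; the keepalive_disable directive only turns it off for specific browsers, so the value none keeps it on for all clients:

# Apache (httpd.conf)
KeepAlive On

# NGINX (http, server, or location context)
keepalive_disable none;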

Keep-alive timeout (KeepAliveTimeout in Apache; keepalive_timeout in NGINX):
When the keep-alive option is turned on, choose a keep-alive timeout that's longer than the load balancer idle timeout.
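
For example, if the load balancer idle timeout is the default 60 seconds, a keep-alive timeout of 120 seconds on the backend meets this guidance. The values are examples only:

# Apache (httpd.conf)
KeepAliveTimeout 120

# NGINX (http, server, or location context)
keepalive_timeout 120s;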

Read timeouts (RequestReadTimeout in Apache; client_header_timeout and client_body_timeout in NGINX):
Set read timeouts that fit your application's response times and your clients' behavior. This makes sure that the backend server keeps the connection open long enough to receive both the header and the body of the request.

Warning: Make sure that the load balancer idle timeout value is lower than the backend timeout.
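
As an example, the following settings allow up to 90 seconds to receive the request header and body. These values are illustrative; confirm that they fit your clients before you use them. In Apache, RequestReadTimeout comes from the mod_reqtimeout module, which most distributions load by default:

# Apache (mod_reqtimeout)
RequestReadTimeout header=90 body=90

# NGINX (http or server context)
client_header_timeout 90s;
client_body_timeout 90s;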

Maximum number of keep-alive requests (MaxKeepAliveRequests in Apache; keepalive_requests in NGINX):
This option sets how many requests a single TCP connection serves when keep-alive is on. For better resource usage, set the maximum number of keep-alive requests to 100 or higher.
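
For example, the following allows up to 1,000 requests per keep-alive connection. The value is illustrative:

# Apache (httpd.conf)
MaxKeepAliveRequests 1000

# NGINX (http, server, or location context)
keepalive_requests 1000;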

AcceptFilter (AcceptFilter in Apache; accept_filter in NGINX):
AcceptFilter is turned on by default and instructs Apache to use the TCP_DEFER_ACCEPT socket option for its connections. This setting can cause the TCP socket to be in a "half-open" state. In this state, the load balancer assumes that the connection is established, but the backend instance doesn't. Half-open connections are more common on low-volume load balancers, where connections have time to age before they're used.
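
If you see half-open connections on a low-volume load balancer, you can turn off TCP_DEFER_ACCEPT on the backend. The following Apache lines are one way to do that; in NGINX on Linux, TCP_DEFER_ACCEPT is controlled by the deferred parameter of the listen directive and is off by default:

# Apache (httpd.conf) - turn off TCP_DEFER_ACCEPT
AcceptFilter http none
AcceptFilter https none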

Logging: Add the %{X-Forwarded-For}i option to your log format so that Apache records the X-Forwarded-For header that the load balancer adds to each request. This header contains the IP address of the original client. The %D option adds the time that it takes to complete each request, in microseconds, to the access logs:

LogFormat "%{X-Forwarded-For}i %h %l %u %t \"%r\" %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" combined

Apache: The Apache MPM event module can close connections from load balancers prematurely. Prematurely closed connections generate HTTP 502 errors for Application Load Balancers and HTTP 504 errors for Classic Load Balancers. It's a best practice to use the MPM worker module instead to reduce this behavior.
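
How you switch MPMs depends on your distribution. For example, on Debian or Ubuntu you can run a2dismod mpm_event followed by a2enmod mpm_worker. On Amazon Linux or RHEL, the MPM is typically selected in /etc/httpd/conf.modules.d/00-mpm.conf (the exact path can vary), as in this sketch:

# /etc/httpd/conf.modules.d/00-mpm.conf
# Comment out the event MPM and load the worker MPM instead
#LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule mpm_worker_module modules/mod_mpm_worker.so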

Note: After you update your configuration, restart Apache or NGINX.
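
For example, on a systemd-based distribution you can validate the configuration and then restart the service. The service name is typically httpd on Amazon Linux and RHEL, and apache2 on Debian and Ubuntu:

# Apache
sudo apachectl configtest && sudo systemctl restart httpd

# NGINX
sudo nginx -t && sudo systemctl restart nginx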


Related information

Registered instances for your Classic Load Balancer

Configure your Classic Load Balancer
