Perform these tests through the Application Load Balancer. Then, perform the tests while bypassing the Application Load Balancer to targets. This approach helps to isolate the component that's inducing latency. For more information on curl features, see How to use curl features.
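The comparison above can be sketched with curl's built-in timing variables. The URLs below are placeholders: substitute your load balancer's DNS name and a target instance's address.

```shell
#!/bin/sh
# Compare response-time breakdowns through the load balancer and direct to a
# target. Both URLs are placeholders for your own ALB DNS name and target IP.
ALB_URL="http://my-alb-1234567890.us-east-1.elb.amazonaws.com/"
TARGET_URL="http://10.0.1.25/"

# -w prints curl's timing variables; -s and -o /dev/null suppress other output.
FMT='dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n'

echo "Through the Application Load Balancer:"
curl -s -o /dev/null -w "$FMT" "$ALB_URL"

echo "Direct to target:"
curl -s -o /dev/null -w "$FMT" "$TARGET_URL"
```

A large gap between `ttfb` through the load balancer and `ttfb` direct to the target points at the load balancer hop; similar values point at the backend.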
3. Check the Average statistic of the Amazon CloudWatch TargetResponseTime metric for your Application Load Balancer. If the value is high, there's likely a problem with the backend instances or application dependency servers.
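You can retrieve this statistic with the AWS CLI. The LoadBalancer dimension value below (app/my-alb/1234567890abcdef) is a placeholder; use the value from your own load balancer's ARN.

```shell
#!/bin/sh
# Average TargetResponseTime over the last hour in 5-minute periods.
# Requires AWS credentials; uses GNU date syntax for the time range.
aws cloudwatch get-metric-statistics \
  --namespace AWS/ApplicationELB \
  --metric-name TargetResponseTime \
  --dimensions Name=LoadBalancer,Value=app/my-alb/1234567890abcdef \
  --statistics Average \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300
```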
4. Determine which backend instances are experiencing high latency by checking the access log entries for your Application Load Balancer. Check target_processing_time to find backend instances with latency issues. Also, review the request_processing_time and response_processing_time fields to verify any issues with the Application Load Balancer.
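As a sketch of that log analysis, the following command averages target_processing_time per target. It assumes a downloaded, uncompressed access log (the path is a placeholder) and relies on the ALB access log field order: field 5 is target:port, field 6 is request_processing_time, field 7 is target_processing_time, and field 8 is response_processing_time.

```shell
#!/bin/sh
# Rank targets by average target_processing_time. Entries with a value of -1
# (connection to the target couldn't be established) are skipped.
LOG=alb-access.log   # placeholder path to an uncompressed access log file

awk '$7 != -1 { sum[$5] += $7; n[$5]++ }
     END { for (t in sum)
             printf "%s avg_target_processing_time=%.3fs requests=%d\n",
                    t, sum[t]/n[t], n[t] }' "$LOG" |
  sort -t= -k2 -rn | head
```

Targets that stand out at the top of this list are the ones to inspect in the following steps.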
5. Check the CloudWatch CPUUtilization metric of your backend instances. Look for high CPU utilization or spikes in CPU utilization. For high CPU utilization, consider upgrading your instances to a larger instance type.
6. Check for memory issues by reviewing the Apache processes on your backend instances.
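One way to review those processes is to total their resident memory. The process name below assumes httpd (Amazon Linux and Red Hat-based distributions); on Debian-based distributions, substitute apache2.

```shell
#!/bin/sh
# Sum the resident set size (RSS) of all Apache worker processes, in MB.
PROC="${1:-httpd}"

ps -C "$PROC" -o rss= |
  awk '{ total += $1; n++ }
       END { printf "%d processes, %.1f MB total RSS\n", n, total/1024 }'
```

If the total approaches the instance's available memory, the server may be swapping, which shows up as high disk I/O in step 9.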
7. Check the MaxClients setting for the web servers on your backend instances. This setting defines how many simultaneous requests the instance can serve. (In Apache 2.4 and later, the directive is named MaxRequestWorkers.) For instances with appropriate memory and CPU utilization that still experience high latency, consider increasing the MaxClients value.
Compare the number of processes generated by Apache (httpd) with the MaxClients setting. If the number of Apache processes frequently reaches the MaxClients value, consider increasing the value.
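The comparison can be sketched as follows. The configuration path is a placeholder and varies by distribution, and the directive is MaxClients on Apache 2.2 but MaxRequestWorkers on Apache 2.4 and later, so the script checks for both.

```shell
#!/bin/sh
# Compare running Apache workers against the configured connection limit.
CONF=/etc/httpd/conf/httpd.conf   # placeholder; varies by distribution

workers=$(ps -C httpd --no-headers | wc -l)
limit=$(awk '/^[[:space:]]*(MaxClients|MaxRequestWorkers)[[:space:]]/ { print $2 }' "$CONF")

echo "running workers: $workers / configured limit: ${limit:-unknown}"
```

If the worker count regularly sits at the limit, requests queue behind busy workers, which shows up as latency at the load balancer.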
8. Check for dependencies of your backend instances that might be causing latency issues. Dependencies might include shared databases or external resources (such as Amazon S3 buckets). Dependencies might also include external resource connections, such as network address translation (NAT) instances, remote web services, or proxy servers.
9. Use the following Linux tools to identify performance bottlenecks on the server.
uptime – Shows load averages to help determine the number of tasks (processes) waiting to run. On Linux systems, this number includes processes waiting to run on the CPU, as well as processes blocked in uninterruptible I/O (usually disk I/O). This data provides a high-level look at resource load (or demand) that must be interpreted using other tools. When Linux load averages increase, there's a higher demand for resources. To determine which resources are in higher demand, you must use other metrics. For example, for CPUs you can use mpstat -P ALL 1 to measure per-CPU utilization, or top or pidstat 1 to measure per-process CPU utilization.
mpstat -P ALL 1 – Shows CPU time breakdowns per CPU, which you can use to check for an imbalance. A single hot CPU might be evidence of a single-threaded application.
pidstat 1 – Shows per-process CPU utilization and prints a rolling summary that's useful for watching patterns over time.
dmesg | tail – Shows the last 10 system messages, if there are any. Look for errors that might cause performance issues.
iostat -xz 1 – Shows the workload applied for block devices (disks) and the resulting performance.
free -m – Shows the amount of free memory. Check that free memory isn't near zero; running out of memory leads to higher disk I/O (confirm using iostat) and decreased performance.
sar -n DEV 1 – Shows network interface throughput (rxkB/s and txkB/s) as a measure of workload. Check if any limits have been reached.
sar -n TCP,ETCP 1 – Shows key TCP metrics, including: active/s (number of locally-initiated TCP connections per second), passive/s (number of remotely-initiated TCP connections per second), and retrans/s (number of TCP retransmits per second).
iftop – Shows the connections between your server and a remote IP address that are consuming the most bandwidth. iftop is available in a package with the same name on Red Hat and Debian-based distributions. However, on Red Hat-based distributions, you might instead find iftop in a third-party repository.