Based on the information provided, there are a few potential reasons why you're seeing CPU/Memory metrics for the DataDog agent container but not for your main Kong container:
- **Container Insights behavior:** Container Insights doesn't automatically generate metrics at the individual container level; it typically provides metrics at the cluster, service, and task levels. The container-level metrics you see for the DataDog agent may come from its own configuration or from how it reports metrics.
- **Task definition configuration:** Nothing in your task definition would specifically prevent Container Insights from collecting metrics for the Kong container. However, you should ensure the Kong container has the permissions and configuration needed to report metrics.
- **ARM64 architecture:** There are no known widespread issues with Container Insights on ARM64 that would cause this specific problem, though edge cases or particular configurations could still affect metric collection.
- **Debugging metrics collection:** Container Insights stores its data as performance log events in CloudWatch Logs. You can query those logs with CloudWatch Logs Insights to see exactly what data is being collected for each container.
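One way to see what Container Insights is actually recording per container is to query the performance log group directly. A minimal sketch with boto3, assuming the default log group naming for ECS Container Insights (`/aws/ecs/containerinsights/<cluster>/performance`); adjust the cluster and container names for your setup:

```python
import time


def build_container_query(container_name: str) -> str:
    """Build a Logs Insights query for container-level performance events.

    Container-level events in the Container Insights performance log group
    carry Type = "Container" plus per-container CPU/memory fields.
    """
    return (
        'fields @timestamp, TaskId, ContainerName, CpuUtilized, MemoryUtilized\n'
        f'| filter Type = "Container" and ContainerName = "{container_name}"\n'
        '| sort @timestamp desc\n'
        '| limit 20'
    )


def run_query(cluster: str, container_name: str):
    """Run the query and poll for results (requires AWS credentials)."""
    import boto3  # imported here so the pure helper above stays dependency-free

    logs = boto3.client("logs")
    # Assumption: your cluster uses the default performance log group name.
    log_group = f"/aws/ecs/containerinsights/{cluster}/performance"
    start = logs.start_query(
        logGroupName=log_group,
        startTime=int(time.time()) - 3600,  # look back one hour
        endTime=int(time.time()),
        queryString=build_container_query(container_name),
    )
    while True:
        result = logs.get_query_results(queryId=start["queryId"])
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            return result["results"]
        time.sleep(1)
```

If the Kong container never appears in these results, collection itself is failing; if rows do appear, the data exists and you are looking at a dashboard/visibility issue instead.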
Here are some suggestions to troubleshoot and potentially resolve the issue:
- Check whether the metrics are collected but simply not shown in the default dashboard: query the raw data with CloudWatch Logs Insights to see if metrics for the Kong container exist but aren't displayed.
- Ensure the Kong container is not throttled or restricted in a way that prevents it from reporting metrics; check its resource allocations and any constraints.
- Verify that the task execution role has the permissions needed to publish metrics for all containers in the task.
- Consider running the CloudWatch agent as a sidecar container in your task definition. This can provide more detailed metrics, including CPU and memory for individual containers.
- If you're using custom metrics or the CloudWatch embedded metric format (EMF), ensure these are properly configured for the Kong container.
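If you go the embedded metric format route, the shape of the log event matters: CloudWatch only extracts metrics when the `_aws` metadata block is well-formed. A minimal sketch of building such an event in Python (the namespace and metric name here are illustrative choices, not anything Container Insights defines):

```python
import json
import time


def emf_event(namespace: str, container: str, cpu_percent: float) -> str:
    """Build a CloudWatch embedded metric format (EMF) log line.

    When a line like this lands in a CloudWatch log group, CloudWatch
    extracts "CpuPercent" as a metric with a ContainerName dimension.
    """
    event = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # milliseconds since epoch
            "CloudWatchMetrics": [
                {
                    "Namespace": namespace,
                    "Dimensions": [["ContainerName"]],
                    "Metrics": [{"Name": "CpuPercent", "Unit": "Percent"}],
                }
            ],
        },
        # Dimension values and metric values live at the top level,
        # referenced by name from the _aws block above.
        "ContainerName": container,
        "CpuPercent": cpu_percent,
    }
    return json.dumps(event)


# Example: this JSON line would be written to stdout (shipped to CloudWatch
# Logs by the awslogs driver or FireLens) or sent via PutLogEvents.
line = emf_event("Custom/Kong", "kong", 12.5)
```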
If the issue persists after trying these steps, you may need to contact AWS support for further assistance, as there might be an underlying issue with how Container Insights is interacting with your specific setup.
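Before opening a support case, it can help to confirm from the API whether any per-container metrics exist at all. A sketch using boto3's `list_metrics` against the `ECS/ContainerInsights` namespace; note that a per-container `ContainerName` dimension is generally only published with Container Insights with enhanced observability enabled, so an empty result here is itself informative:

```python
def container_metrics(metrics: list, container: str) -> list:
    """Filter a list_metrics response for entries carrying a matching
    ContainerName dimension."""
    return [
        m for m in metrics
        if any(d["Name"] == "ContainerName" and d["Value"] == container
               for d in m.get("Dimensions", []))
    ]


def list_container_metrics(cluster: str, container: str):
    """Query CloudWatch for per-container metrics (requires AWS credentials)."""
    import boto3  # lazy import so the pure helper above has no dependencies

    cw = boto3.client("cloudwatch")
    found = []
    for page in cw.get_paginator("list_metrics").paginate(
        Namespace="ECS/ContainerInsights",
        Dimensions=[{"Name": "ClusterName", "Value": cluster}],
    ):
        found.extend(container_metrics(page["Metrics"], container))
    return found
```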
Sources
- Metrics for Amazon ECS - AWS Prescriptive Guidance
- Monitor Amazon ECS containers using Container Insights with enhanced observability - Amazon Elastic Container Service
- Viewing Container Insights metrics - Amazon CloudWatch