1 Answer
- Identify Affected Instances: Begin by identifying which instances or services are exhibiting unusual behavior, particularly those that relate to the high RAM usage and the security incident reported by AWS.
- Review Security Groups and Network ACLs: Check your security groups and network ACLs for any rules that might allow unauthorized access. Tighten these configurations to limit access only to trusted IPs and essential ports.
- Audit Logs: Examine logs from your EC2 instances, VPC Flow Logs, and any application logs that might give insight into unauthorized access or unusual activity. Look for patterns or IP addresses that might be linked to the exploit. CloudTrail logs are particularly helpful for understanding API calls made in your account, including those that started or stopped services (see the CLI sketch after this list).
- Memory and Process Analysis: Since you're observing high RAM usage despite stopping your services, it's important to inspect the processes running on the system directly. Connect to your instance over SSH and use tools like top, htop, or ps aux to identify processes that are consuming excessive memory and are not recognized or expected (a short command sketch follows this list).
- Check for Malware: The exploit mentioned (CVE-2019-9082) is a ThinkPHP vulnerability that can allow remote code execution. Run a thorough scan using an updated anti-virus tool or Amazon Inspector to check for malware or other intrusions.
- Patch and Update: Ensure that all software on your instances is up to date, particularly patches for known vulnerabilities such as CVE-2019-9082. Update your operating system, applications, and any dependencies to their latest secure versions (example update commands follow this list).
- Isolate Affected Instances: If possible, isolate affected instances to prevent further exploitation or lateral movement. This could involve network isolation (for example, swapping to a restrictive security group, as sketched after this list) or temporarily taking the instance offline.
- Forensic Analysis: If you have snapshots or backups, consider mounting them in an isolated environment for a detailed analysis. This might help in understanding what was compromised without risking further damage to your production environment.
- Recovery and Rebuild: For compromised instances, it might be safer to rebuild them rather than trying to clean them up. This can help ensure that no traces of the attacker remain. Restore from known good backups if necessary and apply strict security measures before reconnecting them to the network.
- Notify AWS Support: Inform AWS of the steps you've taken and any findings. This can be important both for compliance and for potential assistance from AWS.
- Enhance Monitoring and Security Posture: Going forward, enhance your monitoring with Amazon CloudWatch and other security tooling, and regularly review your security policies and practices to prevent similar issues (an example alarm is sketched below).
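
For the security-group review and the CloudTrail part of the log audit, here is a minimal AWS CLI sketch; the security group ID and event name are placeholders, and it assumes the CLI is configured with read access to EC2 and CloudTrail:

```bash
# Inspect the inbound/outbound rules on a suspect security group
# (replace sg-0123456789abcdef0 with your own group ID)
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0

# List recent StopInstances API calls recorded by CloudTrail
# (swap the event name to audit other actions, e.g. AuthorizeSecurityGroupIngress)
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=StopInstances \
  --max-results 20
```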
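For the memory and process check, a quick sketch of the commands mentioned above (run over SSH on the affected instance):

```bash
# Show the processes using the most resident memory, highest first
ps aux --sort=-rss | head -n 15

# Interactive view; inside top, press Shift+M to sort by memory usage
top

# Look for listening sockets owned by processes you don't recognize
sudo ss -tulpn
```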
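For the patching step, the exact commands depend on your distribution; as a sketch:

```bash
# Amazon Linux / RHEL-family
sudo yum update -y

# Debian / Ubuntu
sudo apt-get update && sudo apt-get upgrade -y
```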
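For isolation and the forensic snapshot, a hedged AWS CLI sketch; the instance, volume, and security group IDs are placeholders, and the isolation group is assumed to be one you create beforehand with no inbound rules:

```bash
# Move the instance onto a restrictive "isolation" security group
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --groups sg-0aaaabbbbccccdddd

# Snapshot the root volume so it can be mounted read-only elsewhere for analysis
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "Forensic copy before rebuild"
```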
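As one example of tightening monitoring, the sketch below creates a basic CloudWatch CPU alarm; the instance ID and SNS topic ARN are placeholders, and note that memory metrics are not published by EC2 by default and require the CloudWatch agent:

```bash
aws cloudwatch put-metric-alarm \
  --alarm-name ec2-high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```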
Hi, thank you for the reply.
I have taken a backup and recovered our application on a new instance. We had not opted for AWS monitoring such as CloudWatch or Inspector. Unfortunately I am not able to find the cause: the top command sums up to only about 500 MB of RAM, there is no trace of the IP in the nginx access log, and the journal logs contain these entries:
RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks
Apr 20 05:47:17 ip-xx kernel destination_ip: 13.212.145.90:80 GET /?=/Index/\think\app/invokefunction&function=call_user_func_array&vars[0]=md5&vars[1][]=r8yd2zqa HTTP/1.1 host: 13.212.145.90:80
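
The GET request in that journal excerpt matches the public ThinkPHP invokefunction exploit pattern associated with CVE-2019-9082, while the RETBleed line is only a generic kernel mitigation warning, not evidence of the web exploit itself. As a minimal sketch, assuming a default nginx log location and the systemd journal, you could look for similar probes like this:

```bash
# Search current nginx logs for ThinkPHP invokefunction probes
sudo grep -ri "invokefunction" /var/log/nginx/ 2>/dev/null

# View the RETBleed kernel warning in context (it is a CPU-mitigation notice,
# unrelated to the HTTP request above)
journalctl -k | grep -i -A 2 retbleed
```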