I suggest finding the Tomcat catalina.out logs for your application, which should reveal more specific error messages - see here for log locations. It is possible the problem is resource related and some of the Tomcat memory parameters need updating, e.g. the max heap size, but you need to be sure this is actually the cause before changing them.
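Before tuning anything, it can help to confirm what heap limits the JVM actually received. A minimal sketch (not from this thread; the class name is just for illustration) that prints the effective heap settings:

```java
// Logs the heap limits the running JVM actually received, so you can
// check whether the configured -Xmx matches what you expect.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        System.out.printf("max heap (-Xmx):    %d MB%n", rt.maxMemory() / mb);
        System.out.printf("committed heap:     %d MB%n", rt.totalMemory() / mb);
        System.out.printf("free in committed:  %d MB%n", rt.freeMemory() / mb);
    }
}
```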
Some more information. Output of `ps xo rss,user,args | sort -nr`:

```
4368 ec2-user sshd: ec2-user@pts/0
3768 ec2-user ps xo rss,user,args
3756 ec2-user -bash
 916 ec2-user sort -nr
 RSS USER COMMAND
```

Output of `cat /proc/meminfo`:

```
MemTotal: 1006128 kB
MemFree: 64844 kB
MemAvailable: 156516 kB
Buffers: 0 kB
Cached: 219336 kB
SwapCached: 0 kB
Active: 743940 kB
Inactive: 120988 kB
Active(anon): 644676 kB
Inactive(anon): 12776 kB
Active(file): 99264 kB
Inactive(file): 108212 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 36 kB
Writeback: 0 kB
AnonPages: 645648 kB
Mapped: 61564 kB
Shmem: 13420 kB
Slab: 43328 kB
SReclaimable: 23708 kB
SUnreclaim: 19620 kB
KernelStack: 7100 kB
PageTables: 10140 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 503064 kB
Committed_AS: 3829616 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 47104 kB
DirectMap2M: 1001472 kB
```
Thank you for your response. I do not see catalina.out in the tomcat folder when pulling the full logs from Elastic Beanstalk. However, there is a warning in the catalina.2022-03-09.log file that might indicate a memory leak:

```
Mar 9 13:00:19 ip-[] server: 09-Mar-2022 13:00:19.795 WARNING [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [ROOT] appears to have started a thread named [mysql-cj-abandoned-connection-cleanup] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
Mar 9 13:00:19 ip-[] server: java.base@11.0.14.1/java.lang.Object.wait(Native Method)
Mar 9 13:00:19 ip-[] server: java.base@11.0.14.1/java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:155)
Mar 9 13:00:19 ip-[] server: com.mysql.cj.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:91)
Mar 9 13:00:19 ip-[] server: java.base@11.0.14.1/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
Mar 9 13:00:19 ip-[] server: java.base@11.0.14.1/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
Mar 9 13:00:19 ip-[] server: java.base@11.0.14.1/java.lang.Thread.run(Thread.java:829)
Mar 9 13:00:19 ip-[] server: WARNING: An illegal reflective access operation has occurred
```
I will research this particular warning further, but it seems like this might be the culprit.
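For reference, the usual mitigation for this specific warning is to stop the Connector/J cleanup thread explicitly when the web application shuts down. A minimal sketch, assuming MySQL Connector/J 8.x (which provides AbandonedConnectionCleanupThread.checkedShutdown()) and a javax.servlet-based Tomcat platform; the listener class name is illustrative only:

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

import com.mysql.cj.jdbc.AbandonedConnectionCleanupThread;

// Stops Connector/J's abandoned-connection cleanup thread on undeploy,
// so it no longer pins the webapp classloader and leaks memory on redeploy.
@WebListener
public class MysqlCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Nothing to do at startup.
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // checkedShutdown() only stops the thread if it was started from
        // this application's classloader, so it is safe alongside other webapps.
        AbandonedConnectionCleanupThread.checkedShutdown();
    }
}
```

An alternative is to move the MySQL driver jar out of the WAR and into Tomcat's shared lib directory, in which case the thread is not tied to the webapp classloader in the first place.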