1 Answer
Ouch, I hate running into memory problems. I always ask myself why the JVM can't just automatically allocate more space for my objects!!
First, figure out the cause of the issue:
- Are you replicating large data files, or A LOT of files?
- Is the agent actually configured for the specific workload you're running?
- Does the source server have enough physical memory and CPU resources? (A few quick commands for checking are just below.)
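If you're not sure where to look on the source server, these are generic Linux diagnostics (nothing DRS-specific) that show memory and CPU headroom:
free -h          # physical memory and swap, total vs. used
vmstat 5         # memory, swap, and CPU activity sampled every 5 seconds
top -o %MEM      # running processes sorted by memory usage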
Some solutions to look into:
- You can try increasing the heap size by modifying the startup script or configuration file of the Elastic Disaster Recovery agent (a fuller sketch follows this list), e.g.,
java -Xmx4g -jar cloudendure-agent.jar
- Make sure the source hardware is actually sized to handle the replication load; upgrade it if it isn't!
- Consider segmenting. I know... more work... it sucks, but see if you can process the data in smaller chunks to reduce memory usage (a quick split example follows this list).
- Use a Java profiler to monitor heap usage, or try switching to a different garbage collector such as G1 (see the jstat sketch after this list):
-XX:+UseG1GC
- Last, but not least, turn on more detailed logging in the agent you're using.
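On the heap-size point, here's a slightly fuller sketch, assuming the agent runs as a plain JVM process whose launch command you can edit (the jar name is just copied from the example above, and the real startup script location will vary by install). The extra flags tell the JVM to write a heap dump you can analyze if it runs out of memory again:
java -Xmx4g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar cloudendure-agent.jar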
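On the segmenting point: if one huge file is what's blowing up memory, the standard split utility can break it into fixed-size pieces you can handle one at a time (the file names here are just placeholders):
split -b 512M bigfile.dat bigfile.part_    # produces bigfile.part_aa, bigfile.part_ab, ...
cat bigfile.part_* > bigfile.dat           # reassemble later if you need the original back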
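And on the profiler point: before reaching for a full profiler, jstat (it ships with the JDK) will show you the heap and GC behavior of the running agent; replace <pid> with the agent's process ID:
jstat -gcutil <pid> 5000    # live heap occupancy (%) and GC counts, sampled every 5 seconds
jstat -gccapacity <pid>     # configured capacity of each heap generation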
If all else fails, just blame it on me. But in all seriousness, reach out to AWS Support.
answered 2 months ago