Our local atom has been crashing on a fairly regular basis. At this point, we need to reboot it daily just to keep it functioning.
I suspect a memory leak based on the symptoms: the atom slows down more and more over time until it stops responding entirely and takes the whole server down with it. In our situation, we're just finishing rolling out a SuccessFactors implementation, and over the last few months the issue has gotten worse as each additional district goes live and more documents flow through the system. The point is that the symptoms scale with demand: when we had two districts, we needed to reboot about once every month and a half; now that we have 26, we need to reboot daily.
On top of this, we have a second test local atom with the exact same configuration. The server is set up identically. The only difference between the two is that the test atom currently has nothing scheduled; it's just sitting there running. Even with no demand on it at all, you can open Windows Task Manager and watch the memory climb.
I shut down the service an hour ago and started it back up; it initially used about 104 MB of RAM. Over the last hour I've watched it climb steadily to 114.8 MB, and it's still climbing, with no integrations scheduled or running.
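To put a number on the climb, one option is to sample the service's memory periodically (Task Manager, `tasklist`, or `Get-Process`) and fit a growth rate to the readings. A minimal sketch in Python, using only the two readings mentioned above (the function name and sample values are just for illustration):

```python
def leak_rate_mb_per_hour(samples):
    """Estimate memory growth via a least-squares line fit.

    samples: list of (seconds_since_start, mb_used) tuples.
    Returns the fitted slope converted to MB per hour.
    """
    n = len(samples)
    sum_t = sum(t for t, _ in samples)
    sum_m = sum(m for _, m in samples)
    sum_tm = sum(t * m for t, m in samples)
    sum_tt = sum(t * t for t, _ in samples)
    # Standard least-squares slope, in MB per second.
    slope = (n * sum_tm - sum_t * sum_m) / (n * sum_tt - sum_t ** 2)
    return slope * 3600

# The two readings from above: 104 MB at restart, 114.8 MB one hour later.
print(leak_rate_mb_per_hour([(0, 104.0), (3600, 114.8)]))  # ≈ 10.8 MB/hour
```

Logging a day's worth of samples and fitting them the same way would show whether the growth is steady (typical of a leak) or steps up with activity.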
I'm wondering if anyone else is having similar issues, and if anyone has ideas on how to solve it, or even where to begin troubleshooting it.
Details on our system, in case it's helpful:
- Boomi version: 63088 (2015-09-11 09:05:57 AM)
- 03/03/2016 08:49 AM UTC-6
- Windows Server 2012 R2
- Java HotSpot(TM) 64-Bit Server VM