EHC ❯ Optimizing mem-usage on Flushing Elements to DiskStore
Type: Bug
Status: Closed
Resolution: Fixed
Assignee: drb
Reporter: sourceforgetracker
Created: September 21, 2009
Votes: 0
Watchers: 0
Updated: September 22, 2009
Resolved: September 22, 2009
Description
We encountered strange OutOfMemoryErrors in our DB cache using ehcache 1.0. We use a lot of caches for various methods, and some of those cached methods (big-data elements) can return rather big values. We measured the maximum size per method and set maxElementsInMemory and the expiryThreadInterval to keep at least a good-enough reserve of heap space, taking into account that serialization takes place in memory, so you know how big your storage block must be. Periodically we were still getting out-of-memory errors and were stuck.
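For illustration, one way to take that per-method measurement, assuming plain Java serialization (the SizeProbe helper below is hypothetical, not part of ehcache): serializing a sample value into a ByteArrayOutputStream and reading off its size gives roughly the transient heap the DiskStore needs while flushing such an element.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    final class SizeProbe {
        /** Serializes 'value' in memory, the same way the DiskStore does,
         *  and returns the number of bytes it occupies on the heap. */
        static int serializedSize(Serializable value) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(value);
            out.close();
            return bytes.size();
        }
    }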
So I looked into the way you serialize and write to disk:
    final ObjectOutputStream objstr = new ObjectOutputStream(outstr);
    objstr.writeObject(element);
    objstr.close();
– so far so good, but
    buffer = outstr.toByteArray();
– here a copy of the byte[] in the ByteArrayOutputStream is created
– I understand that this copy is needed to write the block to the file
    randomAccessFile.seek(diskElement.position);
    randomAccessFile.write(buffer);
– but wouldn’t it make more sense, in terms of sparing that byte[] copy, to use a custom OutputStream that writes the ByteArrayOutputStream’s contents directly to the file:
    rafOS = new RandomAccessFileOutputStream(randomAccessFile);
    outstr.writeTo(rafOS);
– That RandomAccessFileOutputStream would just write through to the RandomAccessFile. This would reduce memory usage when serializing big elements, which can add up to a lot when many disk stores run in one VM.
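A minimal sketch of such an adapter, assuming the caller has already positioned the RandomAccessFile with seek(); the class below follows the suggestion in this report and is not an existing ehcache class:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.RandomAccessFile;

    /** Exposes a RandomAccessFile as an OutputStream so that
     *  ByteArrayOutputStream.writeTo() can drain straight to the data
     *  file, skipping the toByteArray() copy. */
    final class RandomAccessFileOutputStream extends OutputStream {

        private final RandomAccessFile file;

        RandomAccessFileOutputStream(RandomAccessFile file) {
            this.file = file;
        }

        public void write(int b) throws IOException {
            file.write(b);
        }

        public void write(byte[] b, int off, int len) throws IOException {
            // writeTo() uses this bulk variant, so the serialized bytes
            // reach the file in a single write call.
            file.write(b, off, len);
        }
    }

With that in place the flush step would become:

    randomAccessFile.seek(diskElement.position);
    outstr.writeTo(new RandomAccessFileOutputStream(randomAccessFile));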
Another improvement would be to reuse the ByteArrayOutputStream via reset(), at least within one flush run. I know that this might have memory consequences as well, but those could be handled by checking against some maximum size (and then deciding between “reuse” and “release and reallocate”).
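A sketch of that reuse policy; the threshold MAX_RETAINED_BYTES is a made-up number standing in for whatever “maximum size” a given cache would choose:

    import java.io.ByteArrayOutputStream;

    final class SerializationBuffer {

        /** Hypothetical cap on how big a buffer is worth keeping around. */
        private static final int MAX_RETAINED_BYTES = 512 * 1024;

        private ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        /** Returns an empty stream for the next element, reusing the
         *  old backing array when the previous element was small. */
        ByteArrayOutputStream acquire() {
            if (buffer.size() > MAX_RETAINED_BYTES) {
                // The last element grew the backing array past the cap:
                // release and reallocate rather than pin the memory.
                buffer = new ByteArrayOutputStream();
            } else {
                // Keep the backing array; reset() just rewinds the count.
                buffer.reset();
            }
            return buffer;
        }
    }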
Regards, Jens Kleemann
–
Sourceforge Ticket ID: 1478444 - Opened By: nobody - 28 Apr 2006 14:53 UTC
Re-opening so that I can properly close out these issues and have correct Resolution status in Jira