java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?

The Problem

The Java Virtual Machine (JVM) crashed. The fatal error log file, hs_err_pid[pid].log, shows an error message like the one below:

# A fatal error has been detected by the Java Runtime Environment:
# java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
#  Internal Error (allocation.cpp:166)
#  Error: ChunkPool::allocate
# JRE version: 6.0_23-b05
# Java VM: Java HotSpot(TM) Server VM (19.0-b09 mixed mode solaris-x86 )

The Solution

The malloc() call in the ChunkPool::allocate() function failed to allocate new memory, causing the JVM process to exit with the error "ChunkPool::allocate". In other words, the HotSpot JVM ran out of native memory in the native heap (C-heap), the area it reserves for internal work such as compiling Java methods.

The vast majority of service calls are due to the application reaching the 4 GB address space limit for 32-bit processes, rather than to a shortage of installed memory or swap space. On Windows and on Linux, the process address space limit can be as low as 2 GB or 3 GB.

Reasons for reaching this limit include:

  • Memory leaks in calls to native libraries
  • Loading many native libraries that fill up process address space
  • Suboptimal JVM configurations due to changed memory requirements for an application
  • Direct Buffers that are allocated outside of the garbage-collected heap (by java.nio.ByteBuffer.allocateDirect)
  • Mapped Buffers, created by mapping a region of a file into memory (using java.nio.channels.FileChannel.map)
  • Files mapped into memory by native code using mmap
  • Any other native resources that tie up the address space
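Two of the items above, Direct Buffers and Mapped Buffers, are easy to demonstrate. A minimal sketch (class name is illustrative) showing that a direct buffer's backing storage lives outside the garbage-collected Java heap, so it consumes process address space that -Xmx does not account for:

```java
import java.nio.ByteBuffer;

// Sketch: a direct buffer is backed by native (malloc-style) memory outside
// the Java heap, while a normal buffer is an ordinary heap object.
public class DirectBufferDemo {
    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(1024 * 1024); // 1 MB of native memory
        ByteBuffer heap   = ByteBuffer.allocate(1024 * 1024);       // 1 MB inside the Java heap

        System.out.println("direct.isDirect() = " + direct.isDirect());
        System.out.println("heap.isDirect()   = " + heap.isDirect());
    }
}
```

Many such allocations in a 32-bit process shrink the room left for the Java heap, thread stacks, and the JIT compiler's working memory.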

It is possible to have crashes due to native memory allocation (malloc) for ChunkPool::allocate when running on Solaris 11 with a huge heap (28 GB or more) and the G1 garbage collector. The crashes are due to a lack of free space for further native [heap] expansion. This is because the G1 heap, as an [anon] segment, is allocated next to the native [heap] instead of at higher addresses (FFFFFFF*).

Although a shortage of installed memory or swap space is not usually the cause of the problem described in this article, it is worthwhile checking the system’s memory load and capacity, and adjusting it if necessary. Such checks are usually straightforward, while a thorough investigation of an application that appears to be leaking native memory is considerably more involved. Utilities such as pmap(1) can be applied to the running JVM to check whether its overall memory usage is in the region of the 4 GB process size limit.
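A quick sketch of such a check. The pid used below ($$, the current shell) is a stand-in so the commands are runnable as-is; substitute the real JVM pid, e.g. from pgrep:

```shell
# Stand-in pid; in practice use the JVM's pid, e.g. JVM_PID=$(pgrep -f java)
JVM_PID=$$

# pmap -x prints every mapping; the final "total" line approximates the
# overall process size in kB. Guarded in case pmap is not installed.
command -v pmap >/dev/null && pmap -x "$JVM_PID" | tail -n 1

# On Linux, /proc gives a similar figure directly: VmSize is the process's
# virtual address-space usage in kB.
grep VmSize "/proc/$JVM_PID/status"
```

If the total is approaching 4 GB (or 2–3 GB on Windows/Linux), the process is hitting the address-space ceiling rather than running the machine out of memory.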

If the problem appears to be the result of approaching this limit then the following suggestions may apply:

  • Find any native memory leaks in the application and fix them
  • Find any native code in the application that consumes native memory and optimize it
  • Monitor Direct Buffers and Mapped Buffers
  • Decrease the Java heap (-Xms/-Xmx)
  • Decrease the permanent generation space (-XX:MaxPermSize)
  • Decrease the stack size for Java threads (-Xss)
  • Decrease the number of threads
  • Increase HotSpot’s code cache space
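For the "Monitor Direct Buffers and Mapped Buffers" suggestion, JVMs from Java 7 onward expose the NIO buffer pools through JMX, so their native footprint can be observed without native tools. A sketch (class name is illustrative; note this API is not available on the Java 6 release shown in the crash log above):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.List;

// Sketch: list the "direct" and "mapped" buffer pools and how much native
// memory each currently holds.
public class BufferPoolMonitor {
    public static void main(String[] args) {
        ByteBuffer.allocateDirect(1024 * 1024); // make the direct pool non-empty

        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.printf("%-8s count=%d used=%d bytes capacity=%d bytes%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}
```

The same MBeans are visible remotely under java.nio:type=BufferPool in tools such as JConsole.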

For Java 6 the default minimum code cache free space (-XX:CodeCacheMinimumFreeSpace) is 500 KB and the default reserved code cache size (-XX:ReservedCodeCacheSize) is 32 MB. Sometimes these values are too small. You could try increasing them by setting the corresponding JVM options, for example:

-XX:CodeCacheMinimumFreeSpace=2M -XX:ReservedCodeCacheSize=64M

With process space being limited, running a demanding application under high load often involves finding an appropriate trade-off between native heap availability, Java heap availability, and threads. Reducing the Java heap (-Xmx) or the permanent generation (-XX:MaxPermSize) will make more native heap available, but at the expense of the number of Java objects or classes that can be accommodated. Reducing the stack size (-Xss) will also reduce the load on the native heap, but will reduce the available call stack depth, which might cause heavily recursive code to fail. Reducing the number of threads is usually only possible by reducing the load on the application.
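As a purely illustrative configuration fragment (the class name and every value below are hypothetical, not recommendations), such a trade-off on a 32-bit JVM might look like:

```shell
# Hypothetical example: smaller Java heap, permgen, and stacks to leave
# native headroom, plus a larger code cache, on a 32-bit HotSpot JVM.
java -Xmx1200m \
     -XX:MaxPermSize=128m \
     -Xss256k \
     -XX:ReservedCodeCacheSize=64m \
     com.example.MyApp
```

The right numbers depend entirely on measuring the application's own heap, stack, and native usage under load.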

If an application or library is leaking memory in the native heap, then tuning or reconfiguration will usually only delay the failure, not prevent it. If the failure is not the result of a memory leak, and the application cannot be further optimized to reduce memory needs, then you might consider a move to a 64-bit JVM. However, this will inevitably use more memory than the same application on a 32-bit JVM, and the system memory must be sized appropriately.