
I'm running a live CD Linux distro and I'm getting out-of-memory errors from the JVM.

    > java -version
    Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000646e00000, 264241152, 0) failed; error='Cannot allocate memory' (errno=12)
    #
    # There is insufficient memory for the Java Runtime Environment to continue.
    # Native memory allocation (mmap) failed to map 264241152 bytes for committing reserved memory.
    # An error report file with more information is saved as:
    # /tmp/hs_err_pid50274.log

I ran the free -m command and it shows ~250 MB of free RAM and 19 GB used for cache.

    > free -m
                 total       used       free     shared    buffers     cached
    Mem:         24128      23827        301          0         15      18929
    -/+ buffers/cache:       4881      19247
    Swap:            0          0          0

Here is the memory dump:

    --------------- S Y S T E M ---------------
    OS: RapidLinux 20151103
    uname: Linux 3.18.22 #1 SMP Fri Oct 9 19:28:11 UTC 2015 x86_64
    libc: glibc 2.21 NPTL 2.21
    rlimit: STACK 8192k, CORE infinity, NPROC 96487, NOFILE 4096, AS infinity
    load average: 2.08 1.73 1.30

    /proc/meminfo:
    MemTotal:       24708040 kB
    MemFree:          307572 kB
    MemAvailable:     173696 kB
    Buffers:           15612 kB
    Cached:         19383916 kB
    SwapCached:            0 kB
    Active:          3784768 kB
    Inactive:       19327244 kB
    Active(anon):    3742084 kB
    Inactive(anon): 19303520 kB
    Active(file):      42684 kB
    Inactive(file):    23724 kB
    Unevictable:       15016 kB
    Mlocked:           15016 kB
    SwapTotal:             0 kB
    SwapFree:              0 kB
    Dirty:                96 kB
    Writeback:             0 kB
    AnonPages:       3727472 kB
    Mapped:            55972 kB
    Shmem:          19327344 kB
    Slab:             671580 kB
    SReclaimable:     116376 kB
    SUnreclaim:       555204 kB
    KernelStack:       23664 kB
    PageTables:        24588 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:    12354020 kB
    Committed_AS:   28666748 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:      738156 kB
    VmallocChunk:   34346400260 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:         0 kB
    DirectMap4k:       11748 kB
    DirectMap2M:     2072576 kB
    DirectMap1G:    23068672 kB

    Memory: 4k page, physical 24708040k(307572k free), swap 0k(0k free)

I tried to clear the cache by running sync; echo 3 | sudo tee /proc/sys/vm/drop_caches as a sanity check, and, surprise surprise, the cache did not go down at all, even though the command completed successfully.

There were a ton of old logs that I deleted (from the aufs /, which should be in RAM), then ran the command to clear the cache again, but still nothing.

The rest of the file system takes up only ~9 GB. How can I force my cache to clear?

  • Please edit your question and show us the output of free -m. Also, tell us who, exactly, is giving the out of memory exceptions. The cache shouldn't be an issue here at all. That's memory that is still available for programs if any ask for it. Commented Mar 14, 2016 at 18:25
  • Likely relevant information can be found at linuxatemyram.com Commented Mar 14, 2016 at 18:30
  • Looked at linuxAteMyRam already. The twist in my case is that this is a live CD - there is no hard drive. (Actually there is, but the 'distro' is running off of RAM, the HDD is just for test data) Commented Mar 14, 2016 at 18:36

2 Answers


You have 19GB of RAM free for programs to use. There is no need to clear the disk cache: the system will reclaim it if it needs the memory for some other purpose such as running a program. The only thing you can do by clearing the disk cache is make your machine slower.

Disk space is irrelevant. Deleting files won't help you.

You have 19GB of RAM and the program claims to fail to allocate ~252MB (264241152 bytes). Do the math: 252MB < 19GB. This is a bug in the program, either in the way it allocates memory or in the way it reports errors. Check /tmp/hs_err_pid50274.log to see if it holds more clues.
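For scale, the byte count from the os::commit_memory line in the question converts as follows (a quick sanity check, nothing more):

```python
# The JVM log reports the failed mmap request in bytes.
request_bytes = 264241152  # from the os::commit_memory line in the question
print(request_bytes / (1024 * 1024), "MiB")  # prints: 252.0 MiB
```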

  • Highly unlikely the JRE would have a bug that will be manifested by running "java -version". Disk space is irrelevant - it's a live CD so there isn't any, the filesystem is entirely in memory. Commented Mar 15, 2016 at 15:58

    Shmem:          19327344 kB

You have 19GB in tmpfs, or some other shared memory object.

It's often tmpfs. Check df -h -t tmpfs.
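Note that tmpfs/shared-memory pages are counted inside the Cached figure in /proc/meminfo (and in free's cache column), which is why drop_caches appears to do nothing here. A minimal sketch of the arithmetic, using the Cached and Shmem values from the question's meminfo dump:

```python
# Sketch: "Cached" in /proc/meminfo includes tmpfs/shm pages (reported
# separately as "Shmem"), and drop_caches cannot free those.  The sample
# below uses the two relevant lines from the question's meminfo dump.
sample = """\
Cached:         19383916 kB
Shmem:          19327344 kB
"""

def parse_meminfo(text):
    """Parse /proc/meminfo-style lines into {field: value_in_kB}."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, rest = line.split(":", 1)
            info[key.strip()] = int(rest.split()[0])
    return info

m = parse_meminfo(sample)
reclaimable = m["Cached"] - m["Shmem"]  # page cache that drop_caches can free
print(f"Reclaimable page cache: ~{reclaimable} kB")  # ~56572 kB, i.e. only ~55 MB
```

So of the ~19 GB "cache", only about 55 MB is ordinary page cache; the rest is tmpfs data that can only be freed by deleting (and fully closing) the files that live there.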

System V shared memory can be shown by ipcs -m.

Some, but not all, other shared memory can be found by scanning /proc. There is a small python script in my answer here: Can I see the amount of memory which is allocated as GEM buffers?

EDIT: in your case, you might have something holding open a deleted file. This would mean the space usage shows up in df but not in du. It is possible to scan /proc for deleted files. I don't have a handy script for this, but you can try ls -l /proc/*/fd | less and search for the string (deleted) using the / command of less. You may well find that your deleted log files are still being held open by some process.
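A minimal sketch of that /proc scan (assumes Linux; run as root to see other users' processes; this is my own illustration, not the GEM-buffers script linked above):

```python
import os

def deleted_open_files():
    """Scan /proc/*/fd for open descriptors whose target file was deleted.

    Returns a list of (pid, target) tuples.  Descriptors we cannot read
    (permission denied, process already exited) are silently skipped.
    """
    results = []
    if not os.path.isdir("/proc"):  # not a system with Linux procfs
        return results
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        fd_dir = os.path.join("/proc", pid, "fd")
        try:
            fds = os.listdir(fd_dir)
        except OSError:
            continue
        for fd in fds:
            try:
                # The kernel appends " (deleted)" to the symlink target
                # when the underlying file has been unlinked.
                target = os.readlink(os.path.join(fd_dir, fd))
            except OSError:
                continue
            if target.endswith(" (deleted)"):
                results.append((int(pid), target))
    return results

if __name__ == "__main__":
    for pid, path in deleted_open_files():
        print(pid, path)
```

If one of the listed processes is holding your old logs, restarting it (or truncating the file via /proc/PID/fd/N) will actually release the space.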
