“Linux Is Eating My RAM” Myth Busted

by Maaz Shah  October 21, 2014

In my experience as a system administrator, I have been asked a lot of questions about the system eating up all the memory. In all fairness, it is not uncommon for an operating system to occupy memory and then release it when required.


There is a saying in the Linux community: “Free memory is wasted memory.”

What Linux does is borrow unused memory for disk caching. This makes it look as if the operating system is consuming the memory, but in reality it is not “eating up RAM.”

But the question stands: why do we think Linux is being mean when it devours RAM?

Linux is not mean; what it does, it does for the sake of speed. Keeping the cache in RAM makes disk access much faster. The kernel acts like a “bank” that holds on to all the spare memory: whenever an application needs more memory, a chunk of the cache is handed over immediately, and it flows back into the cache once the application is done with it.

The common Linux utilities for checking free memory report this a bit confusingly. In the example below, you can see that the amount of free memory is only 168 MB, whereas the disk cache is using 1261 MB of RAM.
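On a machine like the one described, the output of free -m looks something along these lines (the numbers here are illustrative, matching the figures above):

    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:          1504       1336        168          0         62       1261
    -/+ buffers/cache:         13       1491
    Swap:         1019          0       1019

The line to look at is “-/+ buffers/cache”: from an application’s point of view, roughly 1491 MB is available, because buffers and cache are handed back whenever a program asks for memory.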

You don’t really need to clear the disk cache, but if you insist, you can do it with the following commands:
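The usual way is the kernel’s drop_caches interface. It has to be run as root; with sudo, use echo 3 | sudo tee /proc/sys/vm/drop_caches instead, because the redirection itself needs root privileges:

    sync                                 # flush dirty pages to disk first
    echo 3 > /proc/sys/vm/drop_caches    # drop pagecache, dentries and inodes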

Once the cache is emptied, running free again shows that the amount of free memory has increased.


Another question that I get (and I don’t know why I get it) is: How can we stop it?

My answer is simple: “WHY in the world would you want to stop it?”

This seemingly “used” memory is invested in caching, a technique that improves performance; stopping it will only slow things down. Disk cache makes applications load faster and run smoother, but it NEVER EVER takes memory away from them! Therefore, there’s absolutely no reason to disable it!

Let’s try an experiment:

Here we have a small script that will keep on consuming memory until something stops it. Let us see how it goes. First, we will check the free memory.
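One way to build such a memory hog is a throwaway Python one-liner that keeps grabbing chunks of roughly 10 MB and never lets go of them (a minimal sketch, not necessarily the exact script behind the numbers below):

    $ free -m      # note the "free" figure before we start
    $ python -c 'import itertools; hog = [" " * 10 * 1024 * 1024 for i in itertools.count()]'

The list comprehension never finishes, so the process keeps allocating memory until something puts an end to it.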

In this run, the free memory shown is 156 MB. The OOM killer should simply end the greedy process, and hopefully everything else will remain undisturbed. We need to disable swap for this.
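Turning swap off for the duration of the test is a single command (run as root); swapon -a brings it back afterwards:

    swapoff -a    # disable all swap so the OOM killer, not swapping, has to handle the pressure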

Here is what you will observe: even though free showed only 156 MB as “free”, that didn’t stop the application from grabbing 1347 MB.
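If you want to confirm that it really was the OOM killer that ended the script, the kernel log records the kill:

    $ dmesg | grep -i "out of memory"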

Afterwards, the cache is pretty empty, but it will gradually fill up again as files are read and written.
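You can watch the cache filling back up with either of these standard tools:

    $ watch -n 1 free -m    # the "cached" column creeps back up
    $ vmstat 1              # or keep an eye on the "cache" column here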

However, you do need to understand that you cannot keep running on the same amount of RAM as you grow. Each visitor uses a small amount of RAM while browsing your website. As your website becomes popular and attracts more visitors, there will come a time when you need more RAM. On Cloudways, you can increase RAM by scaling your server size from the Vertical Scaling section inside the Server Management tab.


At Cloudways, we want to provide the easiest cloud hosting platform ever. If you are interested, sign up for a free trial.


Many thanks to LinuxAteMyRam for this study.


About Maaz Shah

Maaz Shah works as a System Engineer for Cloudways. His days are spent tackling technical troubles.


Comments

  • aaronvincentrobertson

    I have a few servers that I run with “ram disks” where the OS and all applications are stored in RAM. Would disabling this caching improve the performance of such systems?

    • Peter Cralen

      It just can’t (my logical thoughts): even if it is read from the fastest SSD, it will never reach the speed of RAM.
      As the article puts it: “How can we stop it? My answer is simple: WHY in the world would you want to stop it?”

      • aaronvincentrobertson

        This may help clear up my question.
        http://en.wikipedia.org/wiki/RAM_drive

        The servers in question never actually read or write to a hard drive; they operate fully “in-memory”.

    • knuthf

      Yes – for sure.
      If you know your RAM is large enough, reduce swap pressure or drop swapping altogether. Linux should use whatever is left over as disk buffers. A disk buffer is a cache with “write through” but read-copy, so the RAM buffer will have no effect if you watch a movie (the same page will not be read over and over), but frequently used data will go much faster. Drop swapping and you remove the LRU paging.

  • Maaz Shah

    I can’t say it will improve the performance, but I believe it will not decrease it either.

    • Peter Cralen

      How do you mean? I think anything in RAM improves performance compared to reading it from disk … and improves it drastically. Or am I missing something?

      • Maaz Shah

        You are correct that reads/writes from primary memory are way faster than from secondary memory,
        but your OS is on a RAM disk, so I believe it will not be using much of the disk cache. Most importantly, the OS only lends the RAM for disk caching and takes it back as and when required.

        • Peter Cralen

          I somehow got lost in the comments; now I see that your comment probably belongs as a reaction to aaronvincentrobertson 🙂

          • Maaz Shah

            My bad, I think I too got lost in the comments. As you said, this was meant for aaronvincentrobertson 🙂

  • TempleOS allocates fixed sizes upfront. It does not use paging, more or less.

  • Sam CDU

    Re, “Disk cache makes applications load faster and run smoother, but it NEVER EVER takes memory away from them!”. My understanding of the “swappiness” setting renders this statement usually untrue. Your test script doesn’t appear to check how much of the RAM assigned to the process has been swapped out over time due to the disk activity going on in parallel. With a default setting of swappiness 60, as soon as you have less than about 60% free memory (memory not consumed by buffers/cache), the oldest application memory pages will start to be swapped out to disk in favour of disk caching. You can see evidence of this behaviour by looking at “top” and configuring it to sort by swap to see which processes have swapped out to disk (even when you still have plenty of free memory).

    This could be real bad news for an application server, particularly a JVM based one that thinks it has plenty of RAM. When the JVM garbage collector kicks in only to find that much of the RAM allocated to the JVM process has been swapped out to disk, this can have a devastating impact. You can actually imagine how allocating more RAM to such a JVM could increase this impact, rather than reduce it.

    The solution isn’t to disable the disk cache though, just make sure you drop the swappiness according to the purpose of the server and factor it in when allocating RAM.

    So in fact, Linux does eat your RAM while you’re not looking and it’s very sneaky about the way it does so. So much so that most sys admins I know are convinced that it does not.