NUMA and database headaches

Posted October 14, 2013 | One Comment

NUMA stands for non-uniform memory access. It is a memory architecture for multi-CPU systems in which some memory is local to a specific CPU, and accessing remote memory (memory that is local to another CPU) is slower. It is commonly found in multi-socket servers, such as 8-CPU systems.

Apparently many databases have problems using NUMA.

Oracle has built-in support for NUMA servers. However, it comes at a cost: higher CPU utilisation and slower file-system utilities. So by default it is disabled in Oracle databases. This is explained in gory detail by Kevin Closson.

Read Michael Wilson’s blog for instructions for 11gR2. It’s worse on 10gR2. I don’t know about Oracle 12c, but I suspect NUMA support is disabled by default there too.
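As a rough sketch of what Michael Wilson’s instructions boil down to, the 11gR2 switch is a hidden (underscore) initialization parameter; the parameter name below is my understanding and anything hidden like this should only be changed under guidance from Oracle Support:

```sql
-- Hidden parameter controlling NUMA support in 11gR2 (off by default).
-- Requires an instance restart to take effect.
ALTER SYSTEM SET "_enable_NUMA_support" = TRUE SCOPE = SPFILE;
```

On 10gR2 the equivalent knob is named differently, which is part of why it is messier there.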


One Comment

  • Lie says:

    2-CPU systems use the NUMA architecture as well; anything with more than one CPU does.
    Yes, the system slows down significantly when it performs remote-memory access to another CPU’s memory, but this is still better than the CPU running out of physical memory: the system (Unix, for example) may thrash with heavy paging activity, and when it runs out of paging space the kernel may panic and start killing processes seemingly at random! Aha!

    From the hardware perspective, to address issues with NUMA:
    1. Intel has implemented a dual QPI (QuickPath Interconnect) link, a pair of high-speed interconnects between CPUs for remote memory access; this is implemented on their new Sandy Bridge E5 processors. Can’t wait to see this happen on the E7 series processors!

    2. Follow the best-practice guidelines closely when you install DIMMs (Dual Inline Memory Modules) into the server. A few rules of thumb to remember, especially on the Intel platform:
    - Remember, you want to avoid remote memory access, so you need to distribute/spread your DIMMs across all memory channels in the system!
    - After distributing the DIMMs across all channels and all CPUs, make sure each channel ends up with the same total DIMM capacity.
    E.g.: if you are using an E5-2600 series processor with 4 memory channels:
    CPU1:
    Channel 1: 1x1GB = 1GB
    Channel 2: 2x512MB = 1GB
    Channel 3: 2x512MB = 1GB
    Channel 4: 1x1GB = 1GB

    CPU2:
    Channel 1: 1x1GB = 1GB
    Channel 2: 2x512MB = 1GB
    Channel 3: 1x1GB = 1GB
    Channel 4: 1x1GB = 1GB

    As we can see, each channel can hold a different number of DIMMs (some use one 1GB module, some use two 512MB modules), but every channel has the same total capacity of 1GB!
    - Always install DIMMs starting from the last slot of each memory channel.
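The balancing rule above is easy to check mechanically. A minimal sketch in Python, using the example channel layouts from the comment (capacities in MB; the data structure and function name are illustrative, not from any vendor tool):

```python
# Example DIMM layouts from the comment above: capacities per channel, in MB.
cpu1 = {
    "Channel 1": [1024],       # 1 x 1GB
    "Channel 2": [512, 512],   # 2 x 512MB
    "Channel 3": [512, 512],   # 2 x 512MB
    "Channel 4": [1024],       # 1 x 1GB
}
cpu2 = {
    "Channel 1": [1024],
    "Channel 2": [512, 512],
    "Channel 3": [1024],
    "Channel 4": [1024],
}

def channels_balanced(cpu):
    """True when every memory channel holds the same total capacity,
    regardless of how many DIMMs make up that total."""
    totals = [sum(dimms) for dimms in cpu.values()]
    return len(set(totals)) == 1

print(channels_balanced(cpu1))  # True: each channel totals 1GB
print(channels_balanced(cpu2))  # True
```

Both layouts pass because the rule cares about per-channel totals, not DIMM counts; a layout with, say, 1GB on one channel and 512MB on another would fail.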
