At Epsio Labs, we develop an incremental SQL engine — in brief, an SQL engine that operates on changes instead of reprocessing the whole dataset each time a query runs. I’ll leave explaining why that’s useful to another blog post of ours — Why We Built a Streaming SQL Engine.
In the early days of Epsio, one of our first ways to stress test our engine was to run the TPC-DS benchmark queries (an industry standard benchmark) with a large data scale (100 TB of records).
We ran one of our newly born tests, and after several minutes, our test machine completely froze for 10 minutes!
Afterwards, we noticed our engine process got killed (some would say brutally murdered). Looking at our system metrics and logs, we saw the system memory was at 99%, which was much higher than expected.
In this blog post, we will navigate the uncertainty of memory utilization, unveil the mystery of our process’s memory bloat, and discuss how we tackled it.
What happens when the machine is out of memory?
Data- and memory-intensive programs such as ours often reach high memory usage. If the workload is large enough and no memory regulation is in place, it might push your system close to 100% memory utilization.
In order to keep our machines running even in such scenarios, Linux gives us two configurable tools:
- Swap memory — A file or partition on disk that’s used as a supplement to physical memory (RAM). When more RAM is needed than what’s available, inactive pages in memory are moved to the swap space in order to free memory space. We chose to disable swap in our test machine because we consider extreme memory utilization a bug; we wouldn’t want to rely on swap to save us due to its performance impact. Also, Epsio might be deployed on a machine that has swap disabled.
- OOM Killer — The Out of Memory Killer is a mechanism run by the Linux kernel when system memory is extremely low. It reviews all the running processes and assigns each a score based on memory usage and other parameters, then chooses one or more of them to be killed. [1] [2]
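If you’re curious how likely the kernel is to pick a particular process, you can peek at (and even bias) the score it keeps in procfs. The process name below is just a placeholder:

```bash
# Higher score = more likely to be chosen by the OOM Killer
cat /proc/$(pidof my_engine)/oom_score

# The score can be biased per process; -1000 effectively exempts it from OOM kills
echo -500 | sudo tee /proc/$(pidof my_engine)/oom_score_adj
```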
The culprit of our process’s assassination is of course the OOM Killer, who chose to kill the engine process because of its high memory usage.
To confirm our accusation, we can look at the system logs with journalctl (a utility for querying the systemd journal, including kernel messages):
(You can ignore the nasty command)
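A command along these lines does the job (not necessarily the exact one we ran):

```bash
# Search the kernel messages captured by journald for OOM Killer activity
sudo journalctl -k --no-pager | grep -iE "out of memory|oom-killer|killed process"
```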
Let’s focus on the output:
Our process got killed by the OOM Killer while having 14.7 GB allocated to it in physical memory (see “anon-rss” value in the output).
So the OOM Killer was spawned, which means our system was out of memory. When you get that close to the sun, bad things start to happen — my machine was unresponsive via ssh, application logs were not sent, and everything seemed to be frozen. It took 10 minutes until my machine was responsive again, which is weird. You’d think the OOM Killer would kill a process in a millisecond.
Apparently, this is a known phenomenon in Linux, sometimes called “OOM hangs” or “OOM livelocks”. The reason for it is a bit complicated, but in short: when memory is requested and there’s not enough available, the kernel first tries to reclaim memory pages that are backed by a file on disk (such as memory-mapped files) before resorting to the OOM Killer.
If each reclaim attempt frees just enough memory, the OOM Killer is never spawned. But the reclaimed pages are often needed again right away and have to be read back from disk, so the cycle repeats over and over, leaving the system slow and unresponsive.
Interestingly enough, SSDs’ fast performance makes this phenomenon even worse, since reclaim keeps “succeeding” just quickly enough to keep the OOM Killer at bay. [3]
We definitely never want to reach an unresponsive state because of OOM hangs. We can avoid this by killing our process before the system memory is at full capacity (see earlyoom). For the simple task of making our stress test machine want to talk to us, we chose to add a cgroup to limit the process’s memory (the process is killed when the limit is exceeded).
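On a machine with cgroup v2, such a limit takes only a few writes to the cgroup filesystem; the sketch below is illustrative (the group name, the limit, and $ENGINE_PID are placeholders):

```bash
# Create a cgroup with a hard memory limit and move the engine process into it
sudo mkdir /sys/fs/cgroup/epsio-test
echo 16G | sudo tee /sys/fs/cgroup/epsio-test/memory.max
echo $ENGINE_PID | sudo tee /sys/fs/cgroup/epsio-test/cgroup.procs

# When the group exceeds memory.max, the kernel OOM-kills a process inside the
# group instead of dragging the whole machine down
```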
This mitigation makes our machine happy; however, the engine process will still be abruptly killed because of memory bloat, which takes us to the root cause of the problem: Why did Epsio need such an extreme amount of memory in that test?
Epsio View And Its Big Pipeline
The failed test creates an Epsio view for a TPC-DS query. An Epsio view is an incremental view; it shows up-to-date results of the query. When an Epsio view is created, a new Epsio engine process is spawned. The engine first calculates the current results of the query — we call this stage population, and its purpose is similar to what a traditional SQL engine does when executing a query.
After population, the engine begins to consume row changes (e.g. inserts, deletes, updates) and update the results accordingly — we call this stage CDC Streaming (Change Data Capture).
The specific test that failed created an Epsio view with a query similar to the one below (this is a simplified version):
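Roughly speaking, the query joins catalog_sales with item and promotion and filters on promotion columns. The sketch below captures that shape; the exact columns and predicates are illustrative, not the real TPC-DS query:

```sql
-- Illustrative only: a simplified join of the three tables in the pipeline,
-- with a filter on promotion columns
SELECT i_item_id,
       SUM(cs_ext_sales_price) AS total_sales
FROM   catalog_sales
JOIN   item      ON cs_item_sk  = i_item_sk
JOIN   promotion ON cs_promo_sk = p_promo_sk
WHERE  p_channel_email = 'N'
    OR p_channel_event = 'N'
GROUP BY i_item_id;
```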
Our SQL engine constructs a data pipeline for the query. Each node in the pipeline corresponds to an SQL operator. Here is an illustration:
Each root node (catalog_sales table, item table, promotion table) is an input node that yields a batch of rows, either from the population or the CDC stage. If we processed one batch of rows at a time and waited for it to complete, we would leave CPU cores idle and increase the latency of the next batch. That’s why we want to stream multiple batches of rows in parallel.
Now, try to think of all the ways memory might bloat when we stream these batches of thousands of records:
- We might stream too many batches too fast, which is bad because memory will become scarce.
- Each join node might output more rows than the sum of its inputs. For example, if the first join key in our query is cs_item_sk = i_item_sk, then for a batch of 10,000 rows from the left side of the join that have cs_item_sk = 1 and another 10,000 rows from the right side that have i_item_sk = 1, the result would be the cross product of the rows (100,000,000 rows!).
- A bottleneck node might cause a “traffic jam” in the pipeline, causing memory to be occupied for more time. This might not seem like a problem, but remember that multiple Epsio views can run simultaneously, and if all of them have lots of batches in memory waiting for the bottleneck to finish, system memory will become scarce.
Those disasters were already foretold; to tackle these situations, we:
- Limited the amount of parallel batches.
- Because a node like Join might output larger batches than its input, we introduced a stalling mechanism that will wait for more memory to be available before streaming batches between nodes.
- Added a spill-to-disk mechanism that can choose to move stalled batches from memory to disk.
- Introduced more optimizations to our SQL query planner in order to have a more efficient pipeline. For example, because our query’s filter node has predicates (filters) relevant to the promotion table’s columns, we can move it to be before the join node. That way, we avoid unnecessary joins between rows that would be filtered out later anyway, resulting in less memory usage and better performance. This optimization is known as “Predicate Pushdown” because it pushes the predicates (filters) closer to the data source (table input).
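To make the predicate pushdown example concrete, here is what it looks like on the illustrative query from earlier: the filter on promotion columns moves from after the join to directly on top of the promotion scan.

```sql
-- Before pushdown: every catalog_sales row is joined with its matching
-- promotion row, and only then filtered
SELECT i_item_id, SUM(cs_ext_sales_price) AS total_sales
FROM catalog_sales
JOIN item      ON cs_item_sk  = i_item_sk
JOIN promotion ON cs_promo_sk = p_promo_sk
WHERE p_channel_email = 'N' OR p_channel_event = 'N'
GROUP BY i_item_id;

-- After pushdown: the (much smaller) promotion table is filtered first,
-- so far fewer rows ever flow through the join nodes
SELECT i_item_id, SUM(cs_ext_sales_price) AS total_sales
FROM catalog_sales
JOIN item ON cs_item_sk = i_item_sk
JOIN (SELECT p_promo_sk
      FROM promotion
      WHERE p_channel_email = 'N' OR p_channel_event = 'N') AS p
  ON cs_promo_sk = p.p_promo_sk
GROUP BY i_item_id;
```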
Some of these mitigations are good examples of memory regulation. However, even with the mitigations in place, the test caused our process memory to bloat. The process memory usage was much higher than expected and exceeded our memory regulation limits by far. At this point, we decided to go more in depth.
Malloc And Its Shenanigans
To understand our process memory usage, we have to understand what happens when our process allocates dynamic memory.
In Rust, dynamic memory allocations use the global memory allocator, which in our case is the malloc allocator implemented in glibc 2.35.
So under the hood, our process basically uses malloc and free as any normal C program does.
Malloc Being Greedy
The allocator asks the OS for memory using a syscall, but it strives to reduce the number of such calls for performance reasons. Hence, when free is called, it doesn’t necessarily return the freed memory chunk to the OS; instead, it keeps several data structures for the bookkeeping of free chunks. On the next malloc call, the allocator looks for a suitable free chunk to return. If one is found, it returns a pointer to that chunk and marks it as used. Otherwise, it issues another syscall for memory.
From the OS’ perspective, the user process still occupies the memory in the free chunks.
Memory Profiling
Interesting stuff! Now that we understand a bit more about malloc, let’s profile our process memory usage to see how many free chunks there are. We can use the libc mallinfo2 function (__libc_mallinfo2), which returns a struct containing information about memory allocations.
From the mallinfo man page:
The structure fields contain the following information:
..
uordblks
The total number of bytes used by in-use allocations.
fordblks
The total number of bytes in free blocks.
Let’s create a task that will log mallinfo results every 10 seconds:
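Here’s a minimal standalone sketch of such a task, using std::thread and the mallinfo2 struct layout from the glibc man page (not necessarily the exact code we ran):

```rust
use std::{thread, time::Duration};

/// Mirror of glibc's `struct mallinfo2` (all fields are size_t).
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct Mallinfo2 {
    arena: usize,    // non-mmapped space allocated from the OS
    ordblks: usize,  // number of free chunks
    smblks: usize,
    hblks: usize,
    hblkhd: usize,
    usmblks: usize,
    fsmblks: usize,
    uordblks: usize, // total bytes in in-use allocations
    fordblks: usize, // total bytes in free chunks
    keepcost: usize,
}

extern "C" {
    // Provided by glibc >= 2.33
    fn mallinfo2() -> Mallinfo2;
}

fn spawn_mallinfo_logger() {
    thread::spawn(|| loop {
        let info = unsafe { mallinfo2() };
        println!(
            "malloc stats: in-use = {} bytes, free chunks = {} bytes",
            info.uordblks, info.fordblks
        );
        thread::sleep(Duration::from_secs(10));
    });
}
```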
When we put uordblks and fordblks on a graph, we get:
So we are not even using most of the occupied memory; ~11 GB out of 15 GB are actually free chunks.
To shed more light on the matter, I used the malloc_info libc function, which prints a huge, nasty XML document with many more details:
It turns out our process has more than 60 arenas (corresponding to the XML’s heap elements). But what the hell is an arena? To explain that, we need to understand how malloc deals with multiple threads.
Malloc And Multi-Threading
In order to handle multi-threaded applications efficiently and safely, glibc’s malloc needs to tackle the situation where two threads allocate or free memory at the same time (which could otherwise cause race conditions). Therefore, the allocator segments the memory into different regions. These regions of memory are called “arenas”.
Each arena structure has a mutex in it, which is used to control access to that arena. This mutex ensures that two different threads won’t modify the same arena at the same time. Contention for this mutex is the reason multiple arenas are created: when a thread finds its arena locked, malloc may assign it a new one.
By default, the number of arenas is capped at eight times the number of CPU cores (on 64-bit systems).
Memory Fragmentation
It’s important to note that each arena has a heap (sometimes several) — a contiguous memory region holding the memory chunks (used or free). The main heap can be grown or shrunk using the brk/sbrk syscalls, while the heaps of additional arenas are created with mmap.
As our process continues to stream row batches through the data pipeline, many small mallocs and frees are called, and the heaps may get fragmented — gaps of free chunks start to form between used chunks. Adjacent free chunks are coalesced into bigger free chunks, but a heap can shrink only from its “top” chunk — the free chunk at the current end of the heap. Free chunks trapped between allocated chunks might serve the next allocation, but they will not be returned to the OS. We lovingly refer to this situation as memory fragmentation.
With multiple arenas and heaps, memory fragmentation is even more prevalent because each heap gets fragmented on its own. There are fewer chances to coalesce free chunks and to find a suitable free chunk for an allocation.
To illustrate how costly memory fragmentation can be, I wrote a small C program. The program allocates 1,000,000 small buffers and then frees all of them except the last one:
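It went more or less like this (the 1 KB buffer size is an assumption; any size well below M_MMAP_THRESHOLD behaves the same way):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NUM_BUFFERS 1000000
#define BUFFER_SIZE 1024 /* small enough that malloc serves it from the heap, not mmap */

int main(void) {
    char **buffers = malloc(NUM_BUFFERS * sizeof(char *));

    /* Allocate ~1 GB in small heap chunks and touch the memory so it becomes resident. */
    for (size_t i = 0; i < NUM_BUFFERS; i++) {
        buffers[i] = malloc(BUFFER_SIZE);
        memset(buffers[i], 1, BUFFER_SIZE);
    }

    /* Free everything except the LAST buffer. It sits near the top of the heap,
     * so the heap cannot shrink, and the ~1 GB below it stays "used" from the
     * OS's point of view even though it is almost entirely free chunks. */
    for (size_t i = 0; i < NUM_BUFFERS - 1; i++) {
        free(buffers[i]);
    }

    puts("freed all buffers except the last one; check `free` now");
    sleep(600); /* keep the process alive so we can watch its memory usage */
    return 0;
}
```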
I compiled (`gcc -o memtest memtest.c`), ran the program, and watched my machine’s memory with `watch free`. Here is the output I got:
What’s even more interesting is that if we change the program to free all the buffers except the first one, the memory is released back to the OS. Another adjustment that achieves the same outcome is increasing the buffer size to 1MB and reducing the number of buffers to 1,000. That’s because above a certain allocation size (M_MMAP_THRESHOLD), malloc uses mmap (a syscall that can allocate memory outside of the heap), and mmapped buffers are easily returned to the OS with the munmap syscall.
I invite you to try it on your machine (be careful not to trigger an OOM).
Back to our process’s malloc_info XML… Our process had 64 arenas (the maximum for an 8-core machine), and each one had between 100 and 1,000 MB of free chunks. This strongly implies that malloc’s greediness, combined with memory fragmentation amplified by our use of multi-threading, is the reason we have so much memory wasted on free chunks.
Switching Memory Allocator
Memory fragmentation is a hard problem that most memory-intensive programs face. To solve it, we could have tuned malloc with configurations or changed our application’s allocation patterns (which is a great topic for another blog post).
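For reference, glibc exposes knobs for this as tunables; capping the number of arenas or lowering the trim threshold can already reduce fragmentation. The values below are purely illustrative:

```bash
# Cap malloc at 2 arenas and trim the heap back to the OS more aggressively
GLIBC_TUNABLES="glibc.malloc.arena_max=2:glibc.malloc.trim_threshold=131072" ./my_engine
```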
Even so, like many other memory-desperate developers, we started seeking a better-suited allocator.
In an academic research article called “Experimental Study of Memory Allocation for High-Performance Query Processing”, we read about other memory allocators and their pros and cons. The Jemalloc allocator was the star of the show. It was noted that one of its strong suits is memory fairness, which was defined as “returning freed memory back to the OS”.
One of the ways Jemalloc achieves memory fairness is a time-based decay mechanism that releases free memory back to the OS with the madvise/munmap syscalls (it relies heavily on mmap for its allocations).
We switched our global allocator to Jemalloc, which is very easy in Rust:
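Using, for example, the tikv-jemallocator crate (a sibling of the tikv_jemalloc_ctl crate mentioned below), the switch boils down to a few lines:

```rust
// Cargo.toml:
// [dependencies]
// tikv-jemallocator = "0.5"

use tikv_jemallocator::Jemalloc;

// Every heap allocation in the process (Box, Vec, String, ...) now goes through jemalloc
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;
```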
We ran the stress test, and surprisingly, it passed successfully. There was no notable performance difference in any of our stress tests.
We used Jemalloc stats (from the great tikv_jemalloc_ctl crate) in order to monitor memory fragmentation:
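Reading the two stats we care about looks roughly like this (jemalloc caches its statistics, so the epoch has to be advanced before each read):

```rust
use tikv_jemalloc_ctl::{epoch, stats};

fn log_jemalloc_stats() {
    // jemalloc aggregates its statistics lazily; bumping the epoch refreshes them
    epoch::advance().unwrap();

    let allocated = stats::allocated::read().unwrap(); // bytes in in-use allocations
    let resident = stats::resident::read().unwrap();   // bytes in physically resident pages
    println!("allocated = {} bytes, resident = {} bytes", allocated, resident);
}
```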
resident — Total number of bytes in physically resident data pages mapped by the allocator.
allocated — Total number of bytes allocated by the application. (in-use allocations)
resident minus allocated should be approximately equal to the free memory held by our process. So free-chunk memory consumption is low, which is great! There’s a small problem with the graph — it keeps growing steadily. This can be explained by the fact that we haven’t yet reached the limit at which we start stalling our data stream (12 GB).
Ultimately, the test passed, and the system memory seemed to be lower by more than half compared to the same test with glibc.
So we just made our stress test pass, and the system can now process one incremental view with billions of records! But will it hold up if a thousand incremental views are created?
Multi-Process Architecture Memory Implications
As you probably guessed, we had a stress test that spawned a thousand incremental views, and it crashed miserably because of high memory utilization.
At the time, we were using a multi-process architecture; each incremental view had a dedicated engine process that maintained it — received row changes (CDC), and ran them through the data pipeline.
Unfortunately, the existence of 1,000 engine processes presented a new hurdle. While each process had its own memory regulator limiting its usage to 4GB (for example), nothing prevented many processes from approaching that limit at the same time, which together would far exceed the system’s memory and result in an OOM.
Moving To Single-Process Architecture
It became more and more evident that an architecture change was needed.
A process per incremental view provides isolation between them; however, a single-process architecture holds many benefits for us in terms of performance and capabilities:
- Memory fragmentation is lower.
- Memory utilization does not depend on the number of incremental views.
- Memory regulation in our data pipeline is easier. In a multi-process architecture, different processes are not aware of other processes’ resource consumption and needs.
- Only one embedded RocksDB instance.
- Overall decrease in memory overhead.
- Communication between incremental views and with their coordinator is easy and fast.
For these reasons, we decided to move to a single-process architecture where all the incremental views are just async tasks maintained by one process. Now, instead of depending on the number of incremental views, our memory utilization depends on the total number of SQL operations.
Is Single-Process Architecture Better For SQL Engines?
It’s important to note that a single-process architecture isn’t necessarily better than a multi-process one; as with everything in life, there are tradeoffs.
Just to name a few:
- It may be harder to implement. For example, aborting a query (or an incremental view, in our case) is harder; in a multi-process architecture, it is as easy as killing the process. This is less critical for us since incremental views are seldom aborted.
- Different query executions share the same address space, so a memory bug in one query might affect other queries. Fortunately, this is of lesser concern for us, thanks to Rust.
PostgreSQL, for example, uses a multi-process architecture for its worker processes (the processes in charge of executing queries).
In the Let’s make PostgreSQL multi-threaded thread, there’s an excellent comment that portrays the negative impact of single-process architecture:
OOM and local backend memory consumption seems to be one of the main
challenges for multi-threadig model:
right now some queries can cause high consumption of memory. work_mem is
just a hint and real memory consumption can be much higher.
Even if it doesn’t cause OOM, still not all of the allocated memory is
returned to OS after query completion and increase memory fragmentation.
Right now restart of single backend suffering from memory fragmentation
eliminates this problem. But it will be impossible for multi-threaded Postgres.
Basically, because each backend is a separate process, a backend that suffers from irregular physical memory usage or memory fragmentation can simply be restarted once its query completes. That way, Postgres eliminates both of these problems.
Having said that, while a query’s execution time is usually somewhere between milliseconds and hours, an incremental view’s execution rarely completes — it should stay up until the view is dropped by a user or Epsio is shut down. Therefore, this mitigation is also less relevant for us.
Conclusion
Memory-intensive applications can pose challenges. It’s crucial to consider the internal behavior of your allocator, regulate memory usage, and not rely solely on the OS to save the day. Experimenting with different inputs and scenarios produces different allocation patterns, which may or may not cause fragmentation and bloat.
If you want to try Epsio’s blazingly fast (and super memory regulated) SQL engine, check us out here.
References:
- https://www.kernel.org/doc/gorman/html/understand/understand016.html
- https://lwn.net/Articles/761118/
- https://developers.redhat.com/blog/2014/10/02/understanding-malloc-behavior-using-systemtap-userspace-probes
- https://news.ycombinator.com/item?id=35469632
- https://stackoverflow.com/questions/39753265/malloc-is-using-10x-the-amount-of-memory-necessary
- https://linkthedevil.gitbook.io/all-about-vpp/chapter1
- https://engineering.linkedin.com/blog/2021/taming-memory-fragmentation-in-venice-with-jemalloc