CMock Under The Hood: Part 1 - Memory Management

The average user of CMock is never going to need to know how it really works under the hood. There's enough to learn already. If you know it's a strange collection of Ruby scripts that transform header files into Mocks and Stubs to be used in your tests, you've got much of what you need. You might have noticed that CMock is also a C module with a few header files... but there's rarely a need to crack them open and look at the guts.

But, for the three people who DO need this information, and the dozen more who are interested, I present this lovely little series of articles (complete with partial historical context. woo!)

CMock handles two fundamental things in C: Memory Management and Chaining. Wanna know how they work? Today, we're going to talk about Memory Management.

Aren't there a million memory manager libraries for C out there already? Well yes, but now there are a million and one! The one in CMock is very specialized. By specialized I mean not very featureful. By not featureful I mean it's almost stupid simple.

In the early days of CMock, we used the old C standby: malloc. It worked... fine... most of the time. There were really two problems with our old friend malloc.

(1) We're often writing Embedded Software. How many 8-bit processors have enough room for a HEAP? Even though we typically run tests in a simulator, is our memory map really big enough for this? And then it often gets all fragmented and kludgy. We sit down with our graph paper and draw beautiful diagrams of all the objects we've given to CMock in our assertions and are left wondering why our 80 kilobytes of data won't fit in our 128K heap.

(2) Malloc is so slow! Malloc is a nice old guy, but seriously? You ask him for 8 bytes, so he searches around and finds a nice chunk of 64 bytes for you. He gives you a part of that. A moment later you ask him for another 8 bytes, but he's forgotten all about the last time you chatted. He searches through all his notes all over again, eventually finding what's left of that 64 bytes... or maybe not... He might find you 8 bytes from somewhere else.

The second reason was frustrating, but the first was the big motivator to consider other options. I began to investigate. I had experience writing partition managers and other such interfaces... how hard could it be, right?

While those are often famous last words, in this case I was rescued mid-search by a minor revelation. Memory allocation in CMock followed some constraints that made it very different from most situations where you are dynamically allocating memory:

(1) Fixed Sizes

Once allocated, a block of memory never needed to be resized. CMock allocates memory when we make Expect calls. It allocates memory for notes about the call itself. It allocates memory for a copy of each of the expected arguments. But once allocated, these things never need to be resized! They are allocated and filled. Later they are checked and then discarded. Done. There's never a point where CMock says, "Hey, you know when I said to Expect an integer of 42 for the first argument? I really meant it was an ARRAY of arguments. Can you change that for me please?"

(2) The Big Clear

When using CMock, there is a pattern that shows up 99.999% of the time (Go ahead and double-check that statistic. Sure, I just made it up... but I'm 99.999% confident it's correct. Feel free to call this parenthetical statement recursively until you blow your stack).

Anyway, the pattern is this: Expect, Expect, Expect, Expect, FunctionUnderTest, Repeat (with the next test or next part of this one). The thing is, all the Expects (or Ignores or whatever) allocate memory. The FunctionUnderTest then uses all that memory. Then it's done with them and it can discard them. ALL of them.


This was a huge breakthrough moment in the way CMock handles memory. A normal heap needs to worry about fragmentation. It has to worry about giving you memory chunks in an optimal order to try to maximize the free space left for future segments, while balancing this with the potential that you might ask some of those segments to expand in size at some point. It's a nightmare!

But CMock's Memory Manager doesn't have to worry about this. When CMock asks for 14 bytes of memory, it gives the next 14 bytes. It can treat the entire memory space as one huge contiguous blob with a single pointer which points at the current "end" of the allocated memory. As the Expect and Ignore calls roll in, it just keeps grabbing the next chunk and moving the "end" pointer along. It's easy to know when it's full too... is the next allocation going to move the pointer past the real end? If so, it's full. If not, do it. Fast.

When each test has completed, CMock frees ALL of the memory, preparing itself for the next test. This "free" is as simple as resetting the pointer back to the beginning of the memory block. Done. Super Fast!

The cmock.h API gets a little more complicated than that, but this is because of the second primary feature: chaining. We're going to talk about chaining next time. For now, there are a couple more neat tricks that CMock handles for us in the Memory Manager.


If you've pointed CMock at a simulator before, you've probably already run across the CMOCK_MEM_ALIGN option. This option makes CMock a little smarter about how it hacks off chunks from that big blob of available memory. If you tell CMock that you are 4-byte aligned (that's a setting of CMOCK_MEM_ALIGN = 2), then its Memory Manager will always skip to the next aligned boundary before returning a chunk of data for you.

Let's say we make a call to this function:

Investigate_ExpectAndReturn(uint8 Place, int Times, short Retval)

In addition to some other internal accounting, we know it will need to allocate memory for each of these arguments. With our 4-byte alignment, it's going to properly allocate 12 bytes (5 of which go unused... but that's the price of keeping some processors happy).

Place:  [X][ ][ ][ ]
Times:  [X][X][X][X]
Retval: [X][X][ ][ ]

This setting isn't strictly necessary for some platforms, like x86 processors. For targets like an ARM Cortex, though, it's incredibly important (unless you enjoy data access exceptions!).


Often unit tests are built on systems with much more power and memory than our end target. This is good. As you've seen, we've worked hard to make CMock as memory efficient as we can, but let's be real: We're burning CMock memory for almost everything that would usually be pushed onto the stack. This is even worse when we start dealing with situations where we have asked CMock to copy everything it finds at the end of our pointers!

It's convenient to think that we have infinite memory when we are performing tests. We're focused on testing our actual release code. We don't want to waste our time thinking about how much test memory we have.

This is why CMock gives us a few options in where it gets that memory pool from. For smaller targets, we can tell CMock to allocate a static buffer. It will happily create one of any size we wish, so long as our linker agrees. This requires a little guesswork (how much memory will we need vs how much will fit?), but isn't too bad. If we get it wrong, CMock will abort tests that run over the limit and tell us what happened.

We can avoid this headache on systems that have more memory to spare. CMock's Memory Manager supports using Malloc under the hood... but not the old-school way. Instead, it Mallocs one big chunk of memory. If it burns through all that memory, it uses Realloc to get a bigger chunk. How much it asks for at any time is up to the developer, allowing you to balance speed efficiency with space efficiency. Most likely, though, the defaults will work just fine.


There we are. The million and first memory manager in the wild. We've looked at why CMock has a custom Memory Manager and how it's a simple little animal. It likely isn't portable to many other applications... but it works well for this application. I hope you found this informative, or at least amusing. Join me for part 2 when we look at Chaining.