
Spin locks and the vicious multi-threading cycle

Multi-threading is a subject that is really hard for me to keep straight in my head. This is a typical timeline of my neuron activity for any multi-threading topic beyond the absolute basics.

Time X: Awareness of a sore lack of understanding about topic.

Time X+1: Read the wikipedia article and top few Google search results about the topic. Convince myself that I kinda-sorta understand what it all means.

Time X+2: Promise myself to read up more thoroughly on the topic. After a hurried stack overflow or Google search of how to learn multi-threading, try to bookmark various excellent resources on multi-threading like C# threading, the little book of semaphores or the Java concurrency book. Fail on realizing that all of them are already filed under ‘toread’ in delicious / pinboard.

Time X+3: Berate myself for wasting time.

Time X+4: With 33% probability, read the first chapter of one of the materials above. With exponentially decreasing probability, read the next chapter(s), never getting past the third. In any case, promise myself that when the C++ concurrency book is finally out, I will buy and read it.

Time X+(5…N-2): Real life interrupt.

Time X+(N-1): (Read inspiring but confusing blog post about how company X increased performance by multithreading) OR (come across a tough interview question on multithreading) OR (have a discussion with the significant other about spin locks) OR (…)

Time X+N: Time = Time – N

As you can probably guess, I’m in the penultimate step of that cycle. In an effort to take a little break from it without committing the time needed for reading a book, I’ve decided to write a bit about my understanding of spin locks, hoping it at least helps make N larger.

We need locks to guard against concurrent access to a resource or a value. Suppose an ATM reads a value M for someone’s bank balance at the same time as an account-transfer request reads that value: both the ATM and the transfer code could let M dollars be withdrawn, resulting in 2*M dollars where there were only M before. That’s a bug. Except if you’re the Federal Reserve, but that’s another story.

Implicit in the need for locks is the fact that the code we’re running is multi-threaded. That doesn’t imply parallel processing; even on a single-core CPU, multiple threads are switched in and out fast enough to give the illusion of parallelism. The switching is done either by the kernel or by a thread library, in accordance with some scheduling algorithm.

So when thread X is running some code and requests a resource that is locked by some other thread Y, thread X then has to wait for thread Y to release the lock. Waiting can take two different forms, depending on the locking mechanism. If the resource to be acquired has a spin-lock, thread X must continuously poll a boolean guard variable or flag (e.g. is_locked) until it can acquire the lock, a process known as busy-waiting. Conceptually, this is like the following:

while (true) {
  if (!is_locked) {
    is_locked = true; // acquire lock

    // work with lock

    is_locked = false; // release lock
    break;
  }
}

In reality the code above won’t work, since the check and the subsequent lock acquisition aren’t atomic: two threads can both see is_locked as false and both “acquire” the lock. Proper busy-waiting usually involves an atomic test-and-set operation, which has to be supported in hardware. The upshot, however, is that the thread never really stops; it keeps looping until the resource in question is released.

The other option is for thread X to sleep on encountering a locked resource, and for the kernel or thread library to resume (wake up) the thread once the resource is available. This is how most multi-threading primitives, like mutexes or condition variables, operate. Unlike with spin-locks, the thread is not scheduled to run until there is a change in the status of the resource it was requesting.

Which one is better depends primarily on how long the expected wait is. On the face of it, spin-locking seems more expensive, as the thread keeps running (and therefore consuming CPU cycles) even while waiting, whereas a thread encountering a mutex-locked resource is suspended until it can do useful work, freeing the CPU for other tasks. The latter has costs of its own, though: putting the thread on the suspend queue and bringing it back, and the kernel’s bookkeeping of the resource state so it can wake the suspended thread. If your thread expects an extremely short wait, then, a spin-lock may be more efficient.

I don’t have a very good idea of the numbers involved, but a good lower bound for a thread state change is the time it takes to do a context switch, which is on the order of a few hundred CPU cycles; a really short period of time, about the same as RAM latency. A disk seek, by comparison, has a latency measured in millions of cycles (see this excellent article for more info). The bottom line: you shouldn’t be using spin-locks unless you’re writing very low-level code and know exactly what you’re doing; but in that case you don’t need me to be telling you these things!


  1. Very nice post! Your understanding of spin locks and mutexes seems just fine, no need to read any more books :)

    Thursday, January 27, 2011 at 4:27 pm | Permalink
  2. Ashwin wrote:

    Simply put, spinlocks are evil.

    BTW, I bought this, and it is a good resource.

    Thursday, January 27, 2011 at 6:05 pm | Permalink
  3. Deepak wrote:

    On unicore systems, wouldn’t the thread polling the spinlock also be continually swapped in & out of context?

    Thursday, January 27, 2011 at 5:21 pm | Permalink
  4. Deepak wrote:

    Also, a good resource to periodically blow your mind is to read random posts from Jeremy Manson’s blog –

    Thursday, January 27, 2011 at 5:23 pm | Permalink
  5. Anshul wrote:

    @Deepak, @Ashwin, thanks for the resources.

    @Deepak, definitely the thread polling the spinlock will continually swap in and out. In fact IMO it will probably be swapped in and out even on multicore systems, unless the scheduler decides to dedicate a core indefinitely to that thread, which is very unlikely.

    Thursday, January 27, 2011 at 8:29 pm | Permalink
  6. Abhijeet wrote:

    Other than expected busy wait time, there are other reasons that will dictate which primitive to use. For instance, there are certain contexts in which you cannot go to sleep (interrupts), so spin lock is the only option there. Another twist to the story is that a lower priority thread/interrupt can grab a spin lock and be swapped out because a higher priority interrupt fires : the classic priority inversion problem. So there is a variant of spin lock that also disables interrupts in the local CPU. Here are some simple rules:

    If you are in a thread or process context and the critical section lasts longer than the cost of sleeping, use a mutex and/or semaphores. If you are in a context that does not allow sleeping, like hard and soft interrupts, then use a spin lock. If there is a chance of priority inversion, say because you are synchronizing between a kernel thread and an interrupt, use a spin lock with interrupts disabled.

    Sunday, September 11, 2011 at 1:31 am | Permalink
