Go sync.Pool and the Mechanics Behind It

Phuong Le - Sep 17 - Dev Community

This is an excerpt of the post; the full post is available here: https://victoriametrics.com/blog/go-sync-pool/


This post is part of a series about handling concurrency in Go.

In the VictoriaMetrics source code, we use sync.Pool a lot, and it's honestly a great fit for how we handle temporary objects, especially byte buffers or slices.

It is commonly used in the standard library. For instance, in the encoding/json package:

package json

var encodeStatePool sync.Pool

// An encodeState encodes JSON into a bytes.Buffer.
type encodeState struct {
    bytes.Buffer // accumulated output

    ptrLevel uint
    ptrSeen  map[any]struct{}
}

In this case, sync.Pool is being used to reuse *encodeState objects, which handle the process of encoding JSON into a bytes.Buffer.

Instead of just throwing these objects away after each use, which would only give the garbage collector more work, we stash them in a pool (sync.Pool). The next time we need something similar, we just grab it from the pool instead of making a new one from scratch.
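Inside the package, a helper pulls an encodeState from the pool when one is available and only allocates when the pool comes up empty. Roughly, it looks like this (a simplified sketch; the real newEncodeState in encoding/json carries a few extra checks):

func newEncodeState() *encodeState {
    if v := encodeStatePool.Get(); v != nil {
        e := v.(*encodeState)
        e.Reset() // drop any output left over from the previous use
        e.ptrLevel = 0
        return e
    }
    return &encodeState{ptrSeen: make(map[any]struct{})}
}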

You'll also find multiple sync.Pool instances in the net/http package, which are used to optimize I/O operations:

package http

var (
    bufioReaderPool   sync.Pool
    bufioWriter2kPool sync.Pool
    bufioWriter4kPool sync.Pool
)

When the server reads request bodies or writes responses, it can quickly pull a pre-allocated reader or writer from these pools, skipping extra allocations. Furthermore, the two writer pools, bufioWriter2kPool and bufioWriter4kPool, are set up to serve different buffer sizes.

func bufioWriterPool(size int) *sync.Pool {
    switch size {
    case 2 << 10:
        return &bufioWriter2kPool
    case 4 << 10:
        return &bufioWriter4kPool
    }
    return nil
}
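A matching helper then tries the right pool first and only allocates a fresh writer on a miss; roughly (a simplified sketch of net/http's newBufioWriterSize):

func newBufioWriterSize(w io.Writer, size int) *bufio.Writer {
    pool := bufioWriterPool(size)
    if pool != nil {
        if v := pool.Get(); v != nil {
            bw := v.(*bufio.Writer)
            bw.Reset(w) // point the recycled writer at the new destination
            return bw
        }
    }
    return bufio.NewWriterSize(w, size)
}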

Alright, that's enough of the intro.

Today, we're diving into what sync.Pool is all about: the definition, how it's used, what's going on under the hood, and everything else you might want to know.

By the way, if you want something more practical, there's a good article from our Go experts showing how we use sync.Pool in VictoriaMetrics: Performance optimization techniques in time series databases: sync.Pool for CPU-bound operations

What is sync.Pool?

To put it simply, sync.Pool in Go is a place where you can keep temporary objects for later reuse.

But here's the thing: you don't control how many objects stay in the pool, and anything you put in there can be removed at any time, without any warning; you'll see why in the last section.
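You can see that eviction in action with a tiny experiment (a minimal sketch; the exact behavior is an implementation detail of the runtime, not a guarantee):

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    var pool sync.Pool // no New function: Get returns nil when the pool is empty

    pool.Put("keep me")
    fmt.Println(pool.Get()) // likely prints: keep me

    pool.Put("keep me")
    runtime.GC()
    runtime.GC() // since Go 1.13, pooled items survive one GC in a victim cache
    fmt.Println(pool.Get()) // likely prints: <nil>, the pool was emptied
}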

The good news is that the pool is built to be thread-safe, so multiple goroutines can tap into it simultaneously. Not a big surprise, considering it's part of the sync package.

"But why do we bother reusing objects?"

When you've got a lot of goroutines running at once, they often need similar objects. Imagine running go f() multiple times concurrently.

If each goroutine creates its own objects, memory usage can quickly increase, and this puts a strain on the garbage collector, which has to clean up all those objects once they're no longer needed.

This situation creates a cycle where high concurrency leads to high memory usage, which then slows down the garbage collector. sync.Pool is designed to help break this cycle.

type Object struct {
    Data []byte
}

var pool = sync.Pool{
    New: func() any {
        return &Object{
            Data: make([]byte, 0, 1024),
        }
    },
}

To create a pool, you can provide a New() function that returns a new object when the pool is empty. This function is optional; if you don't provide it, the pool just returns nil when it's empty.

In the snippet above, the goal is to reuse the Object struct instance, specifically the slice inside it.

Reusing the slice helps reduce unnecessary growth.

For instance, if the slice grows to 8192 bytes during use, you can reset its length to zero before putting it back in the pool. The underlying array still has a capacity of 8192, so the next time you need it, those 8192 bytes are ready to be reused.

func (o *Object) Reset() {
    o.Data = o.Data[:0]
}

func main() {
    testObject := pool.Get().(*Object)

    // do something with testObject

    testObject.Reset()
    pool.Put(testObject)
}

The flow is pretty clear: you get an object from the pool, use it, reset it, and then put it back into the pool. Resetting can be done either before you put the object back or right after you get it from the pool; it's not mandatory, but it's common practice.
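Because the pool is safe for concurrent use, many goroutines can follow that same flow at once; a minimal sketch, reusing the Object, Reset, and pool definitions from above:

func worker(wg *sync.WaitGroup) {
    defer wg.Done()

    obj := pool.Get().(*Object)
    defer func() {
        obj.Reset()
        pool.Put(obj)
    }()

    // do something with obj, e.g. accumulate some bytes
    obj.Data = append(obj.Data, "some payload"...)
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go worker(&wg)
    }
    wg.Wait()
}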

If you're not a fan of the type assertion pool.Get().(*Object), there are a couple of ways to avoid it:

  • Use a dedicated function to get the object from the pool:
func getObjectFromPool() *Object {
    return pool.Get().(*Object)
}
  • Create your own generic version of sync.Pool:
type Pool[T any] struct {
    sync.Pool
}

func (p *Pool[T]) Get() T {
    return p.Pool.Get().(T)
}

func (p *Pool[T]) Put(x T) {
    p.Pool.Put(x)
}

func NewPool[T any](newF func() T) *Pool[T] {
    return &Pool[T]{
        Pool: sync.Pool{
            New: func() any {
                return newF()
            },
        },
    }
}

The generic wrapper gives you a more type-safe way to work with the pool, avoiding type assertions.
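Usage then looks like this (objectPool is just an illustrative name):

var objectPool = NewPool(func() *Object {
    return &Object{Data: make([]byte, 0, 1024)}
})

func main() {
    obj := objectPool.Get() // already an *Object, no assertion needed

    // do something with obj

    obj.Reset()
    objectPool.Put(obj)
}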

Just note that it adds a tiny bit of overhead due to the extra layer of indirection. In most cases this overhead is minimal, but if you're in a highly CPU-sensitive environment, it's a good idea to run benchmarks to see if it's worth it.

But wait, there's more to it.

sync.Pool and the Allocation Trap

As you may have noticed in many of the previous examples, including those in the standard library, what we store in the pool is typically not the object itself but a pointer to the object.

Let me explain why with an example:

var pool = sync.Pool{
    New: func() any {
        return []byte{}
    },
}

func main() {
    bytes := pool.Get().([]byte)

    // do something with bytes
    _ = bytes

    pool.Put(bytes)
}

We're using a pool of []byte. Generally (though not always), passing a value to an interface may cause the value to be placed on the heap. That happens here too, and not just with slices: it applies to anything you pass to pool.Put() that isn't a pointer.

If you check using escape analysis:

// escape analysis
$ go build -gcflags=-m

bytes escapes to heap

Now, I wouldn't say that our variable bytes moves to the heap; more precisely, the value of bytes escapes to the heap through the interface.

To really get why this happens, we'd need to dig into how escape analysis works (which we might cover in another article). However, if we pass a pointer to pool.Put(), there is no extra allocation:

var pool = sync.Pool{
    New: func() any {
        return new([]byte)
    },
}

func main() {
    bytes := pool.Get().(*[]byte)

    // do something with bytes
    _ = bytes

    pool.Put(bytes)
}

Run the escape analysis again and you'll see the value no longer escapes to the heap. If you want to know more, there is an example in the Go source code.
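If you'd rather measure it than read compiler output, an allocation benchmark makes the difference obvious; a minimal sketch (the pool and benchmark names here are made up for this example):

package pool_test

import (
    "sync"
    "testing"
)

var valuePool = sync.Pool{
    New: func() any { return make([]byte, 0, 1024) },
}

var ptrPool = sync.Pool{
    New: func() any {
        b := make([]byte, 0, 1024)
        return &b
    },
}

// Each Put copies the slice header into the interface: expect ~1 alloc/op.
func BenchmarkPutValue(b *testing.B) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        buf := valuePool.Get().([]byte)
        valuePool.Put(buf)
    }
}

// The pointer fits in the interface without a new allocation: expect 0 allocs/op.
func BenchmarkPutPointer(b *testing.B) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        buf := ptrPool.Get().(*[]byte)
        ptrPool.Put(buf)
    }
}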

sync.Pool Internals

Before we get into how sync.Pool actually works, it's worth getting a grip on the basics of Go's PMG scheduling model, which is really the backbone of why sync.Pool is so efficient.

There's a good article that breaks down the PMG model with some visuals: PMG models in Go

If you're feeling lazy today and looking for a simplified summary, I've got your back:

PMG stands for P (logical processors), M (machine threads), and G (goroutines). The key point is that each logical processor (P) can only have one machine thread (M) running on it at any time. And for a goroutine (G) to run, it needs to be attached to a thread (M).
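For reference, the number of Ps defaults to the number of logical CPUs, and you can inspect it at runtime:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Println("logical CPUs:", runtime.NumCPU())
    fmt.Println("Ps (GOMAXPROCS):", runtime.GOMAXPROCS(0)) // argument 0 just queries the current value
}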

[Figure: the PMG model]

This boils down to 2 key points:

  1. If you've got n logical processors (P), you can run up to n goroutines in parallel, as long as you've got at least n machine threads (M) available.
  2. At any one time, only one goroutine (G) can run on a given processor (P). So while P1 is busy with a G, no other G can run on P1 until the current one gets blocked, finishes up, or something else happens to free it up.

But the thing is, a sync.Pool in Go isn't just one big pool; it's actually made up of several 'local' pools, with each one tied to a specific processor context (P) that Go's runtime is managing at any given time.

[Figure: local pools, one per P]

When a goroutine running on a processor (P) needs an object from the pool, it'll first check its own P-local pool before looking anywhere else.


The full post is available here: https://victoriametrics.com/blog/go-sync-pool/
