Concurrency Patterns in Go: Beyond Goroutines
Exploring advanced concurrency primitives - from worker pools and fan-out/fan-in pipelines to context-driven cancellation - and when each pattern truly shines.
Go’s concurrency model is one of its strongest features, but reaching for go func() everywhere leads to race conditions and resource leaks. Let’s explore patterns that scale.
Worker Pool Pattern
The worker pool bounds concurrency to a fixed number of goroutines processing jobs from a shared channel:
```go
package main

import (
	"fmt"
	"sync"
)

func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs { // exits when jobs is closed and drained
		fmt.Printf("Worker %d processing job %d\n", id, j)
		results <- j * 2
	}
}

func main() {
	const numWorkers = 5
	jobs := make(chan int, 100)
	results := make(chan int, 100) // buffered large enough to hold every result
	var wg sync.WaitGroup

	for w := 1; w <= numWorkers; w++ {
		wg.Add(1)
		go worker(w, jobs, results, &wg)
	}

	for j := 1; j <= 50; j++ {
		jobs <- j
	}
	close(jobs) // signals workers that no more jobs are coming
	wg.Wait()
	close(results)
}
```
Fan-Out / Fan-In
Fan-out starts multiple goroutines reading from the same input channel; fan-in merges their output channels back into one. The pattern is ideal when each unit of work is independent and CPU-bound, since the work spreads naturally across cores.
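A minimal sketch of the two halves (the `fanOut` and `fanIn` names are mine, not a standard API):

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut starts n workers that each apply fn to values from in,
// returning one output channel per worker.
func fanOut(in <-chan int, n int, fn func(int) int) []<-chan int {
	outs := make([]<-chan int, n)
	for i := 0; i < n; i++ {
		out := make(chan int)
		go func() {
			defer close(out)
			for v := range in {
				out <- fn(v)
			}
		}()
		outs[i] = out
	}
	return outs
}

// fanIn merges several channels into one, closing the merged channel
// only after every input channel has been drained.
func fanIn(ins ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, in := range ins {
		wg.Add(1)
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(in)
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	in := make(chan int)
	go func() {
		for i := 1; i <= 10; i++ {
			in <- i
		}
		close(in)
	}()

	sum := 0
	for v := range fanIn(fanOut(in, 4, func(x int) int { return x * x })...) {
		sum += v
	}
	fmt.Println("sum of squares:", sum) // 1² + … + 10² = 385
}
```

Because fan-in interleaves results from four workers, output order is nondeterministic; aggregate with an order-insensitive operation (like the sum here) or tag each value with its index.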
Context-Driven Cancellation
Always use `context.Context` to propagate cancellation:
```go
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

select {
case result := <-doWork(ctx):
	fmt.Println("Result:", result)
case <-ctx.Done():
	fmt.Println("Timed out:", ctx.Err())
}
```
The rule of thumb: never start a goroutine without knowing how it will stop.