The filestate backend's Upgrade method currently
does some manual goroutine management
to ensure that it uses a fixed number of goroutines
as it attempts to upgrade all stacks in parallel.
This was fine while the upgrade step was just one phase.
Roughly:
for _, stack := range stacks {
    go upgrade(stack)
}
// Using a pool instead of a new goroutine for each upgrade.
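That pool looks roughly like the following hand-rolled pattern,
where numWorkers and the Stack type are stand-ins for the real names,
not the actual code:

tasks := make(chan Stack)
var wg sync.WaitGroup
for i := 0; i < numWorkers; i++ { // fixed number of goroutines
    wg.Add(1)
    go func() {
        defer wg.Done()
        for stack := range tasks {
            upgrade(stack)
        }
    }()
}
for _, stack := range stacks {
    tasks <- stack
}
close(tasks)
wg.Wait()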
However, this will not suffice with upcoming changes to address #12600,
because the upgrade process will then have multiple phases:
gather information, fill in missing information with a prompt, and upgrade.
Only the first and last of these phases are parallelizable.
Expressing that with the existing pattern in Upgrade
would lead to code that is quite difficult to read,
so this change introduces a simple shared worker pool abstraction.
It operates like a mix of `sync.WaitGroup` and [errgroup][1].
Namely:
- supports multiple `Wait` and `Enqueue` phases (like WaitGroup)
- supports functions that return errors (like errgroup)
[1]: https://pkg.go.dev/golang.org/x/sync@v0.1.0/errgroup
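A minimal sketch of what such a pool could look like,
assuming errors are collected with the standard library's `errors.Join`;
the names and details here are illustrative
and may differ from the actual implementation in this change:

// workerPool runs enqueued tasks on a fixed number of goroutines
// and can be reused across multiple Enqueue/Wait phases.
// (Uses only the standard "sync" and "errors" packages.)
type workerPool struct {
    tasks   chan func() error
    pending sync.WaitGroup // tasks enqueued but not yet finished
    workers sync.WaitGroup // worker goroutines
    mu      sync.Mutex
    errs    []error
}

// newWorkerPool spawns numWorkers goroutines that wait for tasks.
func newWorkerPool(numWorkers int) *workerPool {
    p := &workerPool{tasks: make(chan func() error)}
    p.workers.Add(numWorkers)
    for i := 0; i < numWorkers; i++ {
        go func() {
            defer p.workers.Done()
            for task := range p.tasks {
                if err := task(); err != nil {
                    p.mu.Lock()
                    p.errs = append(p.errs, err)
                    p.mu.Unlock()
                }
                p.pending.Done()
            }
        }()
    }
    return p
}

// Enqueue schedules a task on one of the pool's workers.
func (p *workerPool) Enqueue(task func() error) {
    p.pending.Add(1)
    p.tasks <- task
}

// Wait blocks until all tasks enqueued so far have finished
// and reports any errors they returned.
// The pool remains usable for further Enqueue/Wait phases.
func (p *workerPool) Wait() error {
    p.pending.Wait()
    p.mu.Lock()
    defer p.mu.Unlock()
    err := errors.Join(p.errs...)
    p.errs = nil
    return err
}

// Close stops the workers. Enqueue must not be called after Close.
func (p *workerPool) Close() {
    close(p.tasks)
    p.workers.Wait()
}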
This makes it very easy to adapt code that looks like the following:
wg := &errgroup.Group{}
for _, x := range xs {
    x := x
    wg.Go(func() error { return f(x) })
}
if err := wg.Wait(); err != nil {
    return err
}

wg = &errgroup.Group{}
for _, y := range ys {
    y := y
    wg.Go(func() error { return f(y) })
}
if err := wg.Wait(); err != nil {
    return err
}
Into the following:
pool := newWorkerPool(..)
defer pool.Close()

for _, x := range xs {
    x := x
    pool.Enqueue(func() error { return f(x) })
}
if err := pool.Wait(); err != nil {
    return err
}

for _, y := range ys {
    y := y
    pool.Enqueue(func() error { return f(y) })
}
if err := pool.Wait(); err != nil {
    return err
}
The workerPool-based version looks similar,
but it spawns a fixed number of goroutines once at the start
and reuses them for all tasks.
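For illustration, the multi-phase upgrade described above
could then be shaped roughly like this,
where gatherInfo, fillMissing, upgradeStack, and stackInfo
are placeholder names rather than the actual code:

pool := newWorkerPool(numWorkers)
defer pool.Close()

// Phase 1: gather information about each stack in parallel.
infos := make([]stackInfo, len(stacks))
for i, stack := range stacks {
    i, stack := i, stack
    pool.Enqueue(func() error {
        info, err := gatherInfo(stack)
        infos[i] = info
        return err
    })
}
if err := pool.Wait(); err != nil {
    return err
}

// Phase 2: fill in missing information by prompting the user;
// this phase is interactive and runs serially.
for i := range infos {
    if err := fillMissing(&infos[i]); err != nil {
        return err
    }
}

// Phase 3: perform the upgrades in parallel,
// reusing the same worker goroutines as phase 1.
for _, info := range infos {
    info := info
    pool.Enqueue(func() error { return upgradeStack(info) })
}
return pool.Wait()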