Go Track - Bank Account - Benchmark Issue

The benchmarks in Go - Bank Account incorrectly show that a single global mutex for all accounts is more performant than a mutex per account.

I am happy to open an Issue and PR to fix this, but I am not sure if that is the proper process now that we are on the discourse forum.

In the benchmarks below, we probably need to move Open and Close inside b.RunParallel(func(pb *testing.PB) { so that each goroutine works on a different account. As written, the benchmark operates on a single account, so we are essentially measuring serialized behavior.

Here are the results for the old and proposed benchmarks.

BenchmarkAccountOperations-10                    40172013     27.08 ns/op    0 B/op    0 allocs/op
BenchmarkAccountOperationsParallel-10             5577504    215.1  ns/op    0 B/op    0 allocs/op
BenchmarkAccountOperationsParallelProposed-10   377330408      3.224 ns/op   0 B/op    0 allocs/op
func BenchmarkAccountOperationsParallel(b *testing.B) {

	a := Open(0)    // to be moved
	defer a.Close() // to be moved

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			a.Deposit(10)
			a.Deposit(-10)
		}
	})
}

func BenchmarkAccountOperationsParallelProposed(b *testing.B) {

	b.RunParallel(func(pb *testing.PB) {

		a := Open(0)    // moved
		defer a.Close() // moved

		for pb.Next() {
			a.Deposit(10)
			a.Deposit(-10)
		}
	})
}

Thanks! I’m sure @andrerfcsantos (or @june when she’s well) will pick this up!

I have done a number of Issues and PRs in the Go track already, but I'm now trying to suss out how we want to handle things going forward.

I owe the go track more cycles than I have been able to give it the past few weeks. Maybe next week I’ll get in some significant work.


Gotcha. I’d say for regular contributors whose PRs are getting merged, and who are pretty sure the PR is valid, opening a PR directly is great.

For first-time or irregular contributors, people whose PRs haven’t been merged, or cases where the issue warrants discussion rather than a PR, I’d suggest starting a conversation here first :slightly_smiling_face:

So it sounds like in your case, doing the PR would be great! :)

Thanks for taking a look at this @W8CYE.

Some of the code for these benchmarks and tests is quite old and it’s always nice to revisit them with a more modern and lessons-learned perspective.

After taking a closer look, I suggest keeping the old benchmark but also adding your proposed one. They both have value.

The goal of that particular benchmark is not to show that a single global mutex for all accounts is more performant than a mutex per account - the intent is the exact opposite. The test file has this comment there:

// You will notice that parallelism does not increase speed in this case, in
// fact it makes things slower! This is because none of the operations in our
// Account benefit from parallel processing. We are specifically protecting
// the account balance internals from being accessed by multiple processes
// simultaneously. Your protections will make the parallel processing slower
// because there is some overhead in managing the processes and protections.

BenchmarkAccountOperations benchmarks serialized behavior with locks in a single account. But in this benchmark there isn’t a lot of lock contention, so things are relatively fast.
By working on a single account, BenchmarkAccountOperationsParallel is also testing serialized behaviour. But since multiple goroutines are performing operations on that account, it is slower than BenchmarkAccountOperations: goroutines now have to wait for each other, and with more goroutines to manage and more contention on the lock, the Go scheduler and the mutex code have to put in extra work. As the comment in the test file states, this is exactly the value of this benchmark: to show the overhead of concurrency and that it is not always a win.
The value your suggested benchmark adds is showing that if you want to reduce lock contention, but still use concurrency to increase operation throughput, you can give each goroutine its own account and work on separate accounts concurrently.

So, my suggestion would be:

  • Add the additional benchmark
  • Maybe rename the current benchmarks to better describe them.
    • BenchmarkAccountOperations → BenchmarkSingleAccountSequentialOperations
    • BenchmarkAccountOperationsParallel → BenchmarkSingleAccountParallelOperations
    • The new benchmark could be BenchmarkMultipleAccountsParallelOperations
  • In a similar spirit to the comments already in the file, add a comment explaining the intent of the new benchmark. It may also be a good idea to split the big comment that’s there now and put the appropriate bits next to the functions where they are relevant.

That is very insightful. Renaming the benchmarks will help clarify the intent for users, who often don’t look inside the test files.

I would also like to remove the following code block, if you are supportive.

	if testing.Short() {
		b.Skip("skipping benchmark in short mode.")
	}

Please don’t remove it as part of this PR for now.

june and I are also a bit puzzled about why that bit of code is needed. We are trying to ask some people who were involved with the track at the time why it’s there. Maybe it’s something we are not considering. Until we are sure, let’s leave it in place. If and when we do remove it, we want to do so in all exercises, not just this one.