A couple of days ago, a Reddit thread discussed an article whose author claimed that using panics instead of returning errors was up to 40% faster. The reactions were, as expected, mostly negative; after all, two of the Go proverbs are “Errors are values” and “Don’t panic”.

I’m not here to argue the merits of using panics vs. errors from a syntactic point of view. I actually believe that if you decided to use panics instead of returning errors, your code aesthetics wouldn’t change much.

That’s because most of the standard and third-party libraries are built around multiple return values, which forces the consumer to check the error and only then panic.
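
To make that concrete, here is the kind of call-site code I mean. The helper name openOrPanic is hypothetical, not something from the article or the standard library:

func openOrPanic(path string) *os.File {
	// os.Open returns (*os.File, error), so even a caller that wants to
	// panic has to check the error value first.
	f, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	return f
}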

Leaving that aside, the point of this article is to verify the bold claim that using panics would give you a 40% performance improvement.

Before doing any measurement, let’s think about what we should expect. Returning an error means returning one extra value and, at every call site, an if err != nil branch that runs even when nothing went wrong. A panic-based API avoids that branch on the happy path; the expensive part, unwinding the stack, is only paid when a panic actually fires, and installing a deferred recover adds only a small fixed cost.

Armed with this intuition, we should expect code that uses panics to be faster than code that uses the traditional error-handling techniques, at least in cases where no error occurs. To test this hypothesis, I’ve built a function called Divide that divides two integers only if the divisor is not zero and the numerator is at least as large (in absolute value) as the denominator. Note: I know the second check is not strictly necessary, but I wanted the function to be non-trivial.

//go:noinline
func Divide(a, b int) int {
	if b == 0 || math.Abs(float64(b)) > math.Abs(float64(a)) {
		return 0
	}
	return a / b
}

This is our baseline: there is no error handling and no panicking. I wanted to see the effect on performance of adding error handling, so I built a second function that returns an error and called it Divide2.

//go:noinline
func Divide2(a, b int) (int, error) {
	if b == 0 || math.Abs(float64(b)) > math.Abs(float64(a)) {
		return 0, ErrDivideByZero
	}
	return a / b, nil
}
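
One thing the snippets don’t show is the declaration of ErrDivideByZero. Presumably (this is my assumption, not code from the article) it is a package-level sentinel error:

// Assumed declaration: the sentinel error that Divide2 returns and
// DividePanic panics with; it does not appear in the original snippets.
var ErrDivideByZero = errors.New("divide by zero")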

Finally, I’ve added a function that panics instead of returning an error value. Note that I’ve tried to mimic the behavior of the previous function.

//go:noinline
func DividePanic(a, b int) int {
	if b == 0 || math.Abs(float64(b)) > math.Abs(float64(a)) {
		panic(ErrDivideByZero)
	}
	return a / b
}
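
For completeness, this is roughly how a caller would turn DividePanic back into something error-shaped. The wrapper name safeDivide is mine and is not part of the benchmark code:

// safeDivide is a hypothetical wrapper: it calls DividePanic and uses
// defer/recover to convert the panic back into an error value.
func safeDivide(a, b int) (res int, err error) {
	defer func() {
		if r := recover(); r != nil {
			// DividePanic panics with an error value, so the
			// type assertion is safe here.
			err = r.(error)
		}
	}()
	return DividePanic(a, b), nil
}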

Tests

Using these functions, I set up micro-benchmarks to measure the cost of adding error values, checking errors, using panics, and using panics with a defer/recover block. Each test performs a thousand iterations and is set up in a way that guarantees no error will occur.
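
The actual benchmark code is in the gist linked at the end of the article; the sketch below is only my reconstruction of what the error-checking and panic/recover variants might look like, with arguments chosen so that no error occurs and no panic fires:

func BenchmarkDivide2Check(b *testing.B) {
	for n := 0; n < b.N; n++ {
		for i := 1; i <= 1000; i++ {
			// The divisor is never zero and never larger than the
			// numerator, so Divide2 never returns an error.
			if _, err := Divide2(i*2, i); err != nil {
				b.Fatal(err)
			}
		}
	}
}

func BenchmarkPanicRecover(b *testing.B) {
	for n := 0; n < b.N; n++ {
		for i := 1; i <= 1000; i++ {
			func() {
				// Install a recover even though the panic never fires,
				// to pay the same defer cost a real caller would.
				defer func() {
					if r := recover(); r != nil {
						b.Fatal(r)
					}
				}()
				_ = DividePanic(i*2, i)
			}()
		}
	}
}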

Without further ado, here are the results:

goos: darwin
goarch: amd64
cpu: Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
=== RUN   BenchmarkDivide
BenchmarkDivide
BenchmarkDivide-8                 109248             10878 ns/op               0 B/op          0 allocs/op
=== RUN   BenchmarkDivide2
BenchmarkDivide2
BenchmarkDivide2-8                110515             10728 ns/op               0 B/op          0 allocs/op
=== RUN   BenchmarkDivide2Check
BenchmarkDivide2Check
BenchmarkDivide2Check-8           108925             10947 ns/op               0 B/op          0 allocs/op
=== RUN   BenchmarkPanic
BenchmarkPanic
BenchmarkPanic-8                  110127             10797 ns/op               0 B/op          0 allocs/op
=== RUN   BenchmarkPanicRecover
BenchmarkPanicRecover
BenchmarkPanicRecover-8           110226             10724 ns/op               0 B/op          0 allocs/op

Note: at the end of the article you’ll find a link to a gist with the benchmark code.

Observations

From the results we can observe the following: all five benchmarks land within roughly 2% of each other (about 10,700–10,950 ns/op), a spread that is well inside micro-benchmark noise; adding an error return (Divide2) and checking it (Divide2Check) costs essentially nothing compared to the baseline; the panic-based variants, with and without defer/recover, come out marginally faster on this run; and none of the variants allocate.

Conclusion

As we suspected, using panic instead of returning errors is slightly faster. However, this was measured using a micro-benchmark; for real programs, I suspect you won’t be able to measure any significant difference.

If we are to believe the results of the original article, I am more inclined to think they are due to other compiler optimizations rather than the removal of if-statements.