I'm pretty sure my code won't be affected, nor will such trivial examples, but there's a lot of code out there, and I imagine the problem can occur by accident. Take the following scenario. A program has an array of buffers, each with its own lock. An initial function spins off goroutines, handing a buffer to each one, hoping to distribute the buffers evenly over the goroutines to minimize lock contention. However, because of the loop-variable bug, it hands the same buffer to every goroutine. Then there's another implementation bug that happens to work fine, either because locking the buffer prevents it, or because the buffer contains the correct content: somewhere, someone forgot to copy it, but since there's only one buffer, no-one ever noticed. Is that so unlikely?
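To make the scenario concrete, here's a hypothetical sketch (not anyone's real code). Before Go 1.22, `for _, buf := range buffers` declared one variable for the entire loop, so every closure captured the same storage; the explicit outer variable below simulates those old semantics so the result is the same on any Go version:

```go
package main

import (
	"fmt"
	"sync"
)

// Buffer is a stand-in for the scenario's lockable buffer.
type Buffer struct {
	mu   sync.Mutex
	data []byte
}

func main() {
	buffers := make([]Buffer, 4)

	var wg sync.WaitGroup
	var seenMu sync.Mutex
	seen := make(map[*Buffer]bool) // which buffer each goroutine actually received

	// The outer `buf` models the single pre-Go 1.22 loop variable:
	// every closure captures this one variable, not a fresh copy.
	start := make(chan struct{})
	var buf *Buffer
	for i := range buffers {
		buf = &buffers[i]
		wg.Add(1)
		go func() {
			<-start // wait until the loop finishes, as a slow goroutine would
			seenMu.Lock()
			seen[buf] = true
			seenMu.Unlock()
			wg.Done()
		}()
	}
	close(start)
	wg.Wait()

	// Intended: 4 distinct buffers. Actual: every goroutine got the last one.
	fmt.Println("distinct buffers handed out:", len(seen))
}
```

Under the new per-iteration semantics the range form hands out four distinct buffers, which is exactly why previously "passing" tests can start failing: they finally exercise the code they meant to test.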
Another comment says that Russ Cox did the experiment, and did encounter a few problems. Not many, but they do exist.
Er... your scenario (or something of that nature) is literally mentioned in the post:
> Of the failures, 36 (62%) were tests not testing what they looked like they tested because of bad interactions with t.Parallel: the new semantics made the tests actually run correctly, and then the tests failed because they found actual latent bugs in the code under test.
And if you want to keep your old bugs in under-tested programs, you can: that's why the new behaviour is opt-in.