The actual rule
- Software is written by Humans.
- Humans are creatures of habit and pattern.
- The bug you're looking at was written by a human.[^1]
- Where else did they (or other humans with similar habits, such as yourself) write the same bug?
We expect people to learn, and we expect people to be at different points on their individual learning curves. Here, we take advantage of the unpopular realization that programmers are actually people, and look for bugs not as instances of lack of care but as mislearnings that can be adjusted.
Note that even your most senior developers (and your most pretentious authors, yours truly included) can fall into this trap; at higher levels of experience, it looks more like overgeneralization - or, in the extreme case, a generalization that would be right in a better world, but in this particular instance the details unfortunately don't fit an otherwise sensible model.
Technically, rule 3 is specifically about cases where a person is considering another person's way of thinking - which isn't something we have tooling for. However, it's also worth pondering whether there are ways in which tool assistance could have caught the problem - static type analysis, better unit test automation, etc. - since that would probably have helped the developer realign their mental model earlier (in their own sandbox). In other words, it's reasonable to apply the rule at a different level: did this problem occur because someone needed to actually think, when it could have been handled mechanically?
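As a sketch of what "handled mechanically" can look like (all names here are hypothetical, not from any particular codebase): instead of relying on every developer to share the same mental model of a value, encode the constraint in a type, so the runtime - and a static checker such as mypy - catches the mismatch before a reviewer ever has to.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Percentage:
    """A discount on a 0-100 scale (hypothetical example type)."""
    value: float

    def __post_init__(self):
        # The machine enforces the mental model so a reviewer doesn't have to.
        if not 0 <= self.value <= 100:
            raise ValueError(f"percentage out of range: {self.value}")

def apply_discount(price: float, discount: Percentage) -> float:
    return price * (1 - discount.value / 100)

print(apply_discount(200.0, Percentage(25)))  # 150.0
# apply_discount(200.0, 0.25) - the classic "is it a fraction or a
# percentage?" confusion - is now flagged by a type checker outright.
```

The point isn't this particular wrapper; it's that the constraint lives in one place instead of in every caller's head.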
How To Apply This
- If you're far enough along (or precocious enough) that you already have a change review process, make it a standard review question: after "what new tests did you write to make sure this stays fixed?", ask whether the thing the original Human was mistaken about has a recognizable shape, and whether you can find that shape elsewhere.[^2] You aren't necessarily on the hook for fixing all of the other cases, but you've just cheaply discovered an entire class of bugs, and it's valuable not to just drop that on the floor.
- Sometimes it's as simple as "oh, the current version of that API disallows this kind of argument" - just look for other uses of that method and see whether they have the same problem. Or perhaps the code was inherited or converted from some other environment - is there other code that came along a similar path that you should check?
- Sometimes it's broader in scope - did the problem involve an assumption about names? Look at the hugely popular Falsehoods Programmers Believe About Names and see whether any of them look familiar.[^3]
- The essence of this concept is that bugs can have patterns the same way anything else in software can - because the patterns are really in the people developing the bugs.
- An interesting side effect of recognizing the pattern is that you (the reviewer) likely also have a mental map of the connection between the shape of the problem and the shape of the repair - so once you find these other "rule 3" instances, you're already mentally prepared to fix them, using your understanding of the case that triggered the insight in the first place.
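For the "other uses of that method" hunt, even a tiny script can do the mechanical part. A minimal sketch, assuming a hypothetical project-local helper `parse_date()` whose current version no longer accepts empty strings - the goal is just a list of call sites that deserve a second look:

```python
import ast

# Stand-in for real project source; in practice you'd read files from disk.
source = '''
parse_date("")
x = parse_date(user_input)
other_helper("")
'''

tree = ast.parse(source)
suspect_lines = [
    node.lineno
    for node in ast.walk(tree)
    # Collect every plain call to the function implicated in the bug.
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id == "parse_date"
]
print(suspect_lines)  # every line that calls parse_date
```

A plain `grep -rn 'parse_date(' src/` gets you most of the way too; the AST version just avoids false hits in strings and comments.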
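In the names case, the recognizable shape is often an unpacking or split that assumes names have a fixed structure. A hedged illustration (`greet` is a made-up function, not from any real codebase):

```python
# Buggy shape: assumes every name has exactly two space-separated parts.
# "Madonna" (one part) and "Ana de la Cruz" (four parts) both crash it.
def greet_buggy(full_name: str) -> str:
    first, last = full_name.split(" ")
    return f"Hello, {first}!"

# Repaired shape: treat the name as an opaque string and don't decompose it.
def greet(full_name: str) -> str:
    return f"Hello, {full_name}!"

print(greet("Madonna"))         # Hello, Madonna!
print(greet("Ana de la Cruz"))  # Hello, Ana de la Cruz!
```

Once you've seen the `first, last = full_name.split(" ")` shape once, a search for `.split(" ")` near name handling turns up its siblings.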
[^1]: AI code is not an exception to this; it was merely misappropriated by the AI, which adds no new understanding and conceals both the provenance and any hints at the mental model of the author.
[^2]: Sometimes the misunderstanding isn't about how the language or environment works, but is rather a misinterpretation of the requirements (this is particularly an issue when you have unusual business rules or regulatory requirements) - but the same approach works for those.
[^3]: There are a number of other "Falsehoods Programmers Believe" guides - names, dates, geography - it's basically this past decade's version of the earlier "(thing) Considered Harmful" articles.