6 Ways to Increase Confidence in Code
I want to write about some of the techniques I've learned or honed while working at Chegg. All of these fall under the category of "increasing confidence in code". In some ways, almost all of the work engineers do is done because it increases confidence in code. Let's list some of the things a software engineer does day-to-day:
- Writing code
- Whiteboard brainstorming
- Writing or reviewing design documents
- Writing unit tests
- Reviewing code
- Attending agile meetings
- Lunch
All of these (including lunch), with the exception of actually writing code, could be described as mechanisms for increasing confidence in some code in some way.
- Whiteboard brainstorming and design documents increase your confidence in the ideas behind your present or future code
- Unit tests increase confidence that the code under test behaves as expected
- Reviewing code increases the reviewer's confidence in the code along a variety of axes: quality, correctness, conformance to the house style, etc. In turn, the reviewer's comments and discussion should increase the author's confidence both in their code and their ability to write better code.
- Attending meetings (assuming they're not bullshit meetings) increases your confidence that the whole team's code is aligned, or that the purpose of the code is correct.
- I don't think I need to explain why lunch increases confidence. (My post-burrito code has noticeably fewer bugs.)
The eagle-eyed reader may have spotted a little sophistry in the above examples: it seems like "confidence in code" is a vague and flexible notion that can be twisted into any software-related shape. That would be a fair criticism if I were trying to make some profound point about the nature of software engineering, but really I'm just trying to provide a lens through which to look at best practices. Rather than seeing them as ancient wisdom about the mysteries of computers, it's better to see these activities as ways for humans to become more sure of themselves.
Anyway, on to the techniques!
1. Think real hard
I'm serious! Especially when working on something algorithmically complex, I like to draw diagrams, trace the execution of my code by hand, and spend lots of time staring at the code with my brow furrowed.
After finishing a chunk of code, I leave my desk and get a drink of water or something. When I get back, I re-read my code as if someone else had written it. I find that, since adopting this as a deliberate practice, it's driven up the quality of my code. It increases my confidence because I'm able to assess the code from a (short) distance.
2. Write tests first, especially when you don't want to
I've been a proponent of test-driven development ever since I first tried it, but it was only at Chegg that I understood its true power is unlocked when I really, really don't want to do it.
For me, the greatest value of test-driven development is found when I'm wrestling with a really knotty problem, and I'm struggling to keep a model of the code in my head. This is when I feel like there's no mental room for tests, yet forcing myself to write them anyway often reveals that my code is stuck in a "local maximum" of correctness - by trying to write code that conforms to my convoluted mental model, I've forgotten to check that the model really needs to be that convoluted. Once I simplify it, the code seems to flow more easily. Plus then the tests are already done!
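To make the test-first rhythm concrete, here's a minimal sketch in Python (the `merge_ranges` problem is my own invented example, not from any real codebase): the test is written before the implementation exists, which forces me to pin down the model of what the code should do.

```python
# Step 1: write the test first. Deciding on these cases is where the
# mental model gets pinned down (and often simplified).
def test_merge_ranges():
    # Overlapping ranges collapse into one
    assert merge_ranges([(1, 3), (2, 5)]) == [(1, 5)]
    # Disjoint ranges are left alone, but come back sorted
    assert merge_ranges([(6, 8), (0, 2)]) == [(0, 2), (6, 8)]
    assert merge_ranges([]) == []

# Step 2: only now write the implementation the test demands.
def merge_ranges(ranges):
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous range: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

test_merge_ranges()
```

The payoff is exactly the one described above: writing the cases first forced me to notice that sorting the input up front makes the whole model simpler.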
3. Be serious about code reviews
By serious, I mean: think as hard about code reviews as you do about writing code. When reviewing code, there are many guidelines, but in my opinion two things are paramount:
- Ensure it actually does what it's meant to
- Ensure it doesn't make the codebase worse
As I mentioned above, this increases both participants' confidence in their code.
4. Increase your confidence in your unit tests
This is a "second-order" effect: increasing confidence in something that itself increases confidence in code. "Testing" unit tests might seem pretty far from actually writing useful code, but it can serve as an anchor for your tests, preventing you from drifting out to a comfortable sea of low-efficacy tests, which you will regret when you're eaten by the shark of ill-defined behaviour. Or something.
My point is: make use of code coverage tools for unit tests at the very least. It may also be worth considering mutation testing as a more semantic way of checking that your tests cover a wide area in behaviour-space. I personally found it too slow to be useful, but YMMV.
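A toy illustration (my own example) of why coverage alone can mislead, and what mutation testing adds:

```python
# This test gives `is_adult` 100% line coverage, yet a mutant that
# changes `>=` to `>` would survive, because no test probes the
# boundary at exactly 18. A mutation testing tool (e.g. mutmut, for
# Python) is designed to report exactly this kind of surviving mutant.

def is_adult(age):
    return age >= 18

def test_is_adult():
    assert is_adult(30) is True   # well above the threshold
    assert is_adult(5) is False   # well below it
    # Missing: the boundary case, is_adult(18).
    # Adding `assert is_adult(18) is True` kills the `>` mutant.

test_is_adult()
```

Coverage says this test suite is perfect; mutation testing says otherwise. That gap is the "second-order" confidence this section is about.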
5. Replay the past
As a "brute-force" approach to regression testing, it can be useful to replay entire days' or weeks' worth of user requests against a staging environment to check that nothing that used to work has stopped working.
At Chegg Math, I found this was most useful when we ran some version of this "history replayer" overnight, with any alarming results reported to Slack the following morning. When nothing came through, I certainly had more confidence that the previous day's changes hadn't introduced hidden bugs. Seeing the number of Slack reports decrease over time after its introduction also gave me confidence in our ability to continually learn and improve.
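The core of the idea can be sketched in a few lines of Python. This is my own rough sketch, not the actual replayer: the log format and the injected `fetch` function are placeholders, and a real version would diff response bodies, not just status codes.

```python
import json

def replay(log_lines, fetch):
    """Re-issue each logged request via `fetch(path) -> status` (which
    would hit staging) and collect the requests whose status no longer
    matches what the log recorded."""
    regressions = []
    for line in log_lines:
        entry = json.loads(line)
        status = fetch(entry["path"])
        if status != entry["expected_status"]:
            regressions.append((entry["path"], status))
    return regressions  # e.g. post these to a Slack webhook overnight

# Injecting `fetch` keeps the replayer itself testable with a fake:
log = [
    '{"path": "/solve", "expected_status": 200}',
    '{"path": "/graph", "expected_status": 200}',
]
fake_staging = {"/solve": 200, "/graph": 500}.get
print(replay(log, fake_staging))  # only the /graph regression shows up
```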
6. Test invariant properties with random property testing
The final technique is something I'd like to write a full blog post about someday. For now, I'll just say that, although often quite frustrating, investing time in writing good property tests using SwiftCheck and Hypothesis has really boosted my confidence in a way that regular old unit tests never could.
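Until that full post exists, here's a small taste of the technique using Hypothesis (the invariants here are my own toy example): whatever list of integers Hypothesis generates, sorting it must be idempotent and must preserve the multiset of elements.

```python
from collections import Counter
from hypothesis import given, strategies as st

# Hypothesis generates many random input lists; if any property
# fails, it shrinks the input to a minimal counterexample.
@given(st.lists(st.integers()))
def test_sort_invariants(xs):
    out = sorted(xs)
    assert sorted(out) == out              # sorting is idempotent
    assert Counter(out) == Counter(xs)     # no elements gained or lost

test_sort_invariants()  # calling the decorated test runs all generated cases
```

The confidence boost comes from stating *what must always hold* rather than enumerating examples by hand, which is exactly what ordinary unit tests can't give you.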