
Make educated guesses, even if they turn out to be wrong


One observation I have made working with fellow developers, especially those with less experience or just getting started, is a lack of confidence in making educated, informed guesses. I encourage the junior developers I work with to make informed guesses even when those guesses turn out to be wrong. It is an ability that helps you break down problems by making you feel a bit more in control of how you acquire knowledge or understand the world. Trying to understand why something isn’t working can often feel daunting or stressful.


Code is meant for humans.

Author’s note: I drafted this article before AI agents were a thing. I’m confident that code is still meant for humans, but I cannot say to what degree.

When people make libraries or build tools, they generally build for other humans to consume (that is, read or maintain) and for machines to execute. Understanding this, we can be sure of a few things:

  1. People are going to try to use familiar conventions, idioms, words, or domain language.

  2. People are going to try to make things easier to use, for themselves or for others.

What this leads to is that, for a lot of things, if you can make reasonable guesses about what’s expected to happen, you will be right or close to it about 50% of the time (I don’t have hard numbers, nor have I done a study on this — just anecdata). The rest of the time, when you aren’t on point, you can refer to the docs to realign how you understand the library, tool, or the world. Sometimes you will simply correct your assumptions; other times the gap could lead to you contributing a feature to the tool or project (I am thinking particularly of open source here); and in other cases it could lead to you creating your own spin on the problem and innovating a new approach.
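As a small illustration of guessing by convention (my example, not the author’s): if you know Python’s `json` module parses a string with `loads`, naming conventions alone let you guess that the inverse is probably called `dumps` — and that guess happens to be right.

```python
import json

# Known: json.loads() turns a JSON string into Python data.
data = json.loads('{"name": "ada"}')

# Educated guess from naming convention: the inverse is json.dumps().
# The guess pays off — dumps() serializes the data back to a string.
text = json.dumps(data)
print(text)  # {"name": "ada"}
```

The same instinct transfers across libraries: pickle, marshal, and many third-party serializers follow the identical `loads`/`dumps` pairing, so one correct guess compounds.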

You cannot live on assumptions alone

The most important point to make and emphasize here is that you have to verify your assumptions. Usually this comes from using the tool or library and observing its output, writing tests, reading the documentation, or diving into the source code, if that is available. Assumptions are a first step toward melding your mental model of the thing with how it actually works for you.
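Verifying a guess can be as cheap as a few lines of test. A sketch (my example): suppose you assume Python’s `sorted()` is stable — that items with equal keys keep their original relative order. Rather than trusting the guess, check it directly:

```python
# Assumption to verify: sorted() is stable, i.e. items that compare
# equal keep their original relative order.
pairs = [("b", 2), ("a", 1), ("b", 1), ("a", 2)]
result = sorted(pairs, key=lambda p: p[0])

# If the guess holds, the two "a" items and the two "b" items each
# appear in the same order they had in the input list.
assert result == [("a", 1), ("a", 2), ("b", 2), ("b", 1)]
```

Here the assertion passes (Python documents its sort as stable), so the guess graduates into knowledge. When an assertion like this fails, that is equally valuable: it tells you exactly where your mental model diverges from reality.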

What happens when AI is writing the code? Should I still make assumptions? Well, kinda.

I’d be remiss if I didn’t talk about how AI fits into or affects this line of thinking. If most of the code you are consuming was generated by AI, it may be a bit more difficult to make assumptions, since LLMs are… LLMs. It means you may need not only to make assumptions but also to verify them more thoroughly, because AI can make mistakes, as we’ve been told countless times by LLM service providers.