Bugs most commonly come from somebody’s failure to reduce complexity. Less commonly, they come from the programmer’s misunderstanding of something that was actually simple.
Other than typos, I’m pretty sure that those two things are the source of all bugs, though I haven’t yet done extensive research to prove it.
When something is complex, it’s far too easy to misuse it. If there’s a black box with millions of unlabeled buttons on it, and 16 of them blow up the world, somebody’s going to blow up the world. Similarly, in programming, if you can’t easily understand the documentation of a language, or the actual language itself, you’re going to misuse it somehow.
There’s no right way to use a box with millions of unlabeled buttons, really. You could never figure it out, and even if you wanted to read the 1000-page manual, you probably couldn’t remember the whole thing well enough to use the box correctly. Similarly, if you make anything complex enough, people are more likely to use it wrongly than to use it correctly. If you have 50, 100, or 1000 of these complex parts all put together, they’ll never work right, no matter how brilliant the engineer who puts them together.
So do you start to see here where bugs come from? Every time you added some complexity, somebody (and “somebody” could even be you, yourself) was more likely to misuse your complex code. Every time it wasn’t crystal clear exactly what should be done and how your code should be used, somebody could have made a mistake. Then you put your code together with some other code, and there was another chance for mistakes or misuse. Then you put more pieces together, and so on.
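To make this concrete, here’s a small sketch of an “unlabeled buttons” interface in code. These function names and parameters are invented purely for illustration; the point is only that an unclear interface invites misuse at the call site, while a clearer one makes misuse visible:

```python
# A hypothetical "unlabeled buttons" interface: nothing tells the caller
# what the positional arguments mean or what order they go in.
def resize(image, a, b, c):
    # Returns (width, height, quality) -- but who could guess that
    # a=width, b=height, c=quality just from reading a call site?
    return (a, b, c)

# A caller can silently swap arguments and nothing complains:
thumbnail = resize("photo.png", 90, 120, 80)  # was 90 the width, or the quality?

# The same operation with labeled "buttons": the * forces keyword
# arguments, so every call site documents itself and a swapped
# argument is obvious (or simply won't run).
def resize_image(image, *, width, height, quality=80):
    return (width, height, quality)

thumbnail = resize_image("photo.png", width=120, height=90)
```

Both functions do the same trivial thing; the difference is how likely the next programmer is to use them correctly without reading a manual.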
Often, this sort of situation happens: the hardware designer made the hardware really complicated. So it had to have a complicated assembly language. This made the programming language and the compiler really complicated. By the time you got on the scene, you had no hope of writing bug-free code without ingenious testing and design. And if your design was less than perfect, well…suddenly you have lots of bugs.
This is also a matter of understanding the viewpoint of other programmers. After all, something might be simple to you, but it might be complex to somebody who isn’t you.
If you want to understand the viewpoint of somebody who doesn’t know anything about your code, find the documentation of a library that you’ve never used, and read it.
Also, find some code you’ve never read, and read it. Try to understand not just the individual lines, but what the whole program is doing and how you would modify it if you had to. That’s the same experience people are having reading your code. You might notice that the complexity doesn’t have to get very high before it becomes frustrating to read other people’s code.
Now, once in a while, something is really simple, and the programmer just misunderstood it. That’s another thing to watch for. If you catch a programmer explaining something to you in a way that makes no sense, perhaps that programmer misunderstood something somewhere along the line. Of course, if the thing he was studying was extremely complex, he had basically no hope of fully understanding it without a PhD in that thing.
So these two things are very closely related. When you write code, it’s partially your responsibility that the programmer who reads your code in the future understands it, and understands it easily. Now, he could have some critical misunderstanding—maybe he never understood what “if” meant. That’s not your responsibility. Your responsibility is writing clear code, with the expectation that the future programmer reading your code understands the basics of programming and the language you’re using.
So, there are a few interesting rules that you can get out of this one:
The simpler your code is, the fewer bugs you will have.
Always work to simplify everything about your program.
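As one small, hypothetical illustration of these rules, the two functions below do exactly the same thing. The complicated version gives a reader (and a future editor) several places to misunderstand or break it; the simple version leaves almost nothing to misread:

```python
# Complicated: nested conditions, redundant comparisons to True/False,
# and four separate return statements -- four chances to get it wrong.
def can_vote_complex(age, is_citizen, is_registered):
    if is_citizen == True:
        if age >= 18:
            if is_registered != False:
                return True
            else:
                return False
        else:
            return False
    else:
        return False

# Simple: one expression that reads like the rule it implements.
def can_vote(age, is_citizen, is_registered):
    return is_citizen and age >= 18 and is_registered
```

The simpler version isn’t just shorter; it’s harder to misuse and harder to misunderstand, which is exactly where the bugs were coming from.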