Okay, so if we never change our software, we can entirely avoid defects. But change is inevitable! Particularly if we’re going to add new features. And after all, one of our goals was to make software easy to maintain, and to maintain software, it has to be changed here and there. In other words, we will be making changes. So “don’t change anything” can’t be the ultimate defect-reduction technique.
Well, like I said in my design philosophy, it helps to keep your changes small. But if you want to avoid even more defects, and eliminate them even from your small changes, there’s another law that can help you. And it doesn’t just reduce defects–it keeps things maintainable, makes it easy to add new features, improves the overall understandability of your code, and, all around, knowing it helps you make better software. This Fourth Law of Software Design is:
The maintainability of a system is inversely proportional to the complexity of its individual pieces.
Where “maintainability” means “ease of maintenance.” Extreme unmaintainability would be the total inability to maintain some part or the whole of a piece of software. Perfect maintainability would be the ability to make any change, or add any amount of new code, with no difficulty at all. That’s impossible, but it’s the goal you strive for.
This law is largely empirical, meaning that I figured it out by observation, not by logic. However, it does have a logical basis:
- The simpler something is, the easier it is to understand. For example, a beach ball is very simple–a single large round object that you throw around–and is something that anybody can understand.
- The more complex something is, the harder it is to understand. For example, a jet plane is very complex, and takes extensive training to use and understand. Complexity is not the only factor that makes things hard to understand, but with enough complexity, anything can become hard to understand.
- The less you understand something, the harder it is to fix or modify it.
- Thus: The more complex something becomes, the harder it is to modify (maintain) it.
However, you’ll notice that the law doesn’t say anything about the complexity of the whole system–it only mentions the system’s individual pieces. Why is that?
Well, an average-sized computer program is so complex that no human being could comprehend it all at once in their mind. It’s only possible to comprehend pieces of it. So we actually always have some large, complex structure for our whole program. What then becomes important is that the pieces can be understood when we look at them, and that we understand how the pieces relate to each other. The easier it is to understand the pieces, the more likely it is that any given person will understand them. That’s particularly important when you’re handing your code off to other people, or when you go away from your code for a few months and then have to come back and “re-learn” what you did, by reading your own code.
Let’s make an analogy, to demonstrate the principle. Imagine that you’re building a 30-foot tall steel structure. There are two ways to make it–you could make it out of a bunch of small girders, or you could try to forge three huge pieces of steel and put them together. With the girders approach, it’s easy to make or buy the individual pieces. The three huge pieces, on the other hand, have to be carefully custom-made and worked on extensively. With the girders, if one breaks you just replace it with an identical spare part. With the “huge pieces” approach, when one breaks you have to evacuate the structure, remove 1/3 of it, create a whole new custom piece, and then add that back in without collapsing the whole structure. The girders are simple, the huge pieces are complex.
So why do people sometimes write software with the “huge pieces” approach instead of the girders approach? It’s because there’s a perceived savings of time when you’re first creating the software, with the “huge pieces” method. With a bunch of small pieces, there is a lot of time spent putting them together. You don’t see that with the huge pieces–there’s three of them, they snap together, and that’s it. But the part that’s missed here is that it took way more time to create the three huge pieces than it did to create the girders. When you’re making a huge, complex single piece, any tiny error means that the whole thing has to be fixed or re-worked. And per observation in the practical world of programming, you will spend far more time fixing and re-working those huge pieces than you will putting together the small girders. So even though the time spent creating the “huge pieces” might seem like “productive, important time” and the time spent putting together the girders might seem like “busywork” or “wasted time,” the “girders” approach is actually more efficient.
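The same contrast shows up in code. Here’s a hedged sketch–the report functions and their data format are hypothetical, invented just for illustration–of one “huge piece” next to the equivalent “girders”:

```python
# Hypothetical example: formatting a simple sales report.

# The "huge piece" approach: one function that filters, totals,
# and formats all at once. Any change to one behavior means
# re-working and re-testing the whole thing.
def report_huge_piece(rows):
    lines = []
    total = 0
    for r in rows:
        if r["amount"] > 0:
            total += r["amount"]
            lines.append(f"{r['name']}: {r['amount']}")
    lines.append(f"TOTAL: {total}")
    return "\n".join(lines)

# The "girders" approach: the same behavior built from small,
# simple pieces. Each one can be understood, tested, and
# replaced on its own--like swapping out a single girder.
def valid_sales(rows):
    return [r for r in rows if r["amount"] > 0]

def total_amount(rows):
    return sum(r["amount"] for r in rows)

def format_line(r):
    return f"{r['name']}: {r['amount']}"

def report_girders(rows):
    sales = valid_sales(rows)
    lines = [format_line(r) for r in sales]
    lines.append(f"TOTAL: {total_amount(sales)}")
    return "\n".join(lines)
```

Both produce identical output–the difference only appears when something has to change. To adjust what counts as a valid sale, the girders version touches one four-line function; the huge-piece version means re-reading and re-verifying everything.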
I could go on and on about this, but you can find out about it for yourself. If you don’t believe me, spend a few years working on a software project where all the parts are very complex. I don’t recommend that you do that, but if you need any proof of this law for yourself, that would be a good (if painful) way to get it. Of course, you could also just apply the law and see if your software keeps on being maintainable–that’s a much less painful demonstration.
So how do we use this law, in the practical world of programming? Well, generally I recommend that people make the individual components of their code as simple as possible. Ideally this would start way down at the assembly-language level, but you don’t always get simplicity there. Nor do you always get a simple programming language. But with what you have, strive for simplicity. Make everything as simple as possible. Don’t be afraid to be stupid, dumb simple. There is a limit to how simple you can make something–if you go too far, your “simplicity” will itself start becoming complex. (In other words, you’ll be overengineering.) So just be as simple as you can possibly be, and if you overdo it (which almost never happens), it’ll be pretty obvious.
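What does “stupid, dumb simple” look like in practice? Here’s a small hedged sketch–the permission-check functions are hypothetical, made up for this illustration–of a “clever” compressed condition next to its dumb-simple equivalent:

```python
# Hypothetical example: may this user edit this post?

# A "clever" compressed version--short, but every reader has to
# untangle the boolean algebra each time they touch it.
def can_edit_clever(user, post):
    return bool(user) and bool(
        user.get("admin") or user.get("id") == post.get("author_id")
    )

# The dumb-simple version: each condition named by its own
# statement. Longer on the page, instantly clear in the mind,
# and trivial to modify one rule without disturbing the others.
def can_edit_simple(user, post):
    if user is None:
        return False            # anonymous visitors can't edit
    if user.get("admin"):
        return True             # admins can edit anything
    return user.get("id") == post.get("author_id")  # authors edit their own
```

Both behave identically today; the simple version is the one you’ll still understand when you come back to it in six months.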