Our next law is, once again, axiomatic, and needs no derivation:
It is impossible to introduce new defects in your software if you do not change anything about it.
This is important–and categorized as a law–because defects violate our purpose of helping people. If something is a defect, by definition it is not helpful to people, and we need to avoid it.
This is also sometimes stated more informally as “You can’t introduce new bugs if you don’t add or modify code.” I’m not sure that “code” entirely covers “anything about it,” so I didn’t state it that way.
Of course, the reverse would be:
It is possible to introduce defects into your software if you change something about it.
Which leads to:
The more changes you make, the more likely you are to introduce a defect.
The funny thing is that this seems to be in conflict with the second law, and in fact it is. It’s the balancing act between the second and third laws that requires your intelligence as a software designer.
Combining all three laws, we get:
The best design is the one that allows for the most change in the environment with the least change in the software.
And that, pretty simply, sums up my design philosophy.
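To make that principle concrete, here is a small, purely hypothetical sketch (the function and field names are my own, not from any real system). Both designs format a user record; when the environment changes and a new field is required, the first design forces an edit to every function like it, while the second absorbs the change in one small place.

```python
# Design A: every field is hardcoded. Adding a field means editing
# this function (and every function like it) each time the
# requirements change.
def format_user_hardcoded(user):
    return f"Name: {user['name']}\nEmail: {user['email']}"


# Design B: the list of fields lives in one data structure. Adding a
# field is a one-line change, so the software changes very little
# when the environment (the requirements) changes.
USER_FIELDS = ["name", "email"]  # append "phone" here when needed

def format_user(user, fields=USER_FIELDS):
    return "\n".join(f"{f.capitalize()}: {user[f]}" for f in fields)
```

Both produce the same output today; the difference only shows up when a change arrives, which is exactly when the third law says defects get introduced.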
However, it’s important to limit that somewhat. Although that may be the best code design, that rule doesn’t necessarily lead to the best user-facing design. An equivalent law for users would be something like, “If you never use the program it won’t break,” but I’m not sure that’s so useful. This third law is about preventing bugs, not about making things work nicely. You still want things to work nicely and do what people want–I’m just telling you here how to avoid bugs.
Another thing to know here is that, given our first two laws, it’s an error to write a system that “does everything we could ever possibly need” without making it flexible enough to cope with future change. That might seem like a good way to “avoid future changes in the software,” but really you’re just pulling all of that change into the present, introducing the same number of bugs, and leaving yourself no room to grow. And no program will ever do everything you could possibly need–there will always be future requirements that you cannot predict. This is covered more in Designing Too Far Into The Future.
On the other hand, you can overengineer to the point where your design is so flexible that creating and maintaining it becomes extremely difficult. That is the point where you have reached a level of flexibility that the real future of your software will never require (think about this in relation to the first law).
However, overengineering is a much less common error than designing too far into the future. When in doubt, expect change, and plan your code in ways that will make change as simple and small as possible.
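As a small, hypothetical sketch of what “plan your code in ways that will make change as simple and small as possible” can mean in practice (the names and the tax-rate scenario are my own invention): isolate the thing most likely to change behind one small seam, rather than scattering it through the codebase (no flexibility) or building a whole plugin framework around it (overengineering).

```python
# The value most likely to change lives in exactly one place.
TAX_RATE = 0.08

def with_tax(price, rate=TAX_RATE):
    """Return price with tax applied, rounded to cents."""
    return round(price * (1 + rate), 2)

def order_total(prices):
    # Every caller goes through with_tax(), so a change in tax law
    # touches one line of this program instead of many.
    return round(sum(with_tax(p) for p in prices), 2)
```

The flexibility here is proportionate: it covers a change we can reasonably expect (the rate changes) without speculating about ones we can’t.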