So now we know that there is more future time than present time and that software will change as time goes on.
Our next law is, once again, axiomatic, and needs no derivation:
It is impossible to introduce new defects in your software if you do not change anything about it.
This is important, and categorized as a law, because defects violate our purpose of helping people. If something is a defect, then by definition it is not helpful to people, and we need to avoid it.
This is also sometimes stated more informally as “You can’t introduce new bugs if you don’t add or modify code.” I’m not sure that “code” entirely covers “anything about it,” so I didn’t state it that way.
Of course, the reverse would be:
It is possible to introduce defects into your software if you change something about it.
Which leads to:
The more changes you make, the more likely you are to introduce a defect.
The funny thing is that this seems to be in conflict with the second law, and in fact it is. It’s the balancing act between the second and third law that requires your intelligence as a software designer.
Combining all three laws, we get:
The best design is the one that allows for the most change in the environment with the least change in the software.
And that, pretty simply, sums up my design philosophy.
However, it’s important to limit that somewhat. Although that may be the best code design, that rule doesn’t necessarily lead to the best user-facing design. An equivalent law for users would be something like, “If you never use the program, it won’t break,” but I’m not sure that’s so useful. This third law is about preventing bugs, not about making things work nicely. You still want things to work nicely and do what people want; I’m just telling you here how to avoid bugs.
Another thing to know here is that, given our first two laws, it’s an error to write a system that “does everything we could ever possibly need” without making it flexible enough to cope with future change. That might seem like a good way to “avoid future changes in the software,” but really you’re just pulling all of that change into the present, introducing the same number of bugs, and then not allowing any room to grow. And no program will ever do everything you could possibly need; there will always be future requirements that you cannot predict. This is covered more in Designing Too Far Into The Future.
On the other hand, you can overengineer to the point where your design is so flexible that creating and maintaining it is extremely difficult. That is the point where you’ve reached a level of flexibility that the real future will never require (think about this in relation to the First Law).
However, overengineering is a much less common error than designing too far into the future. When in doubt, expect change, and plan your code in ways that will make change as simple and small as possible.
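As a concrete sketch of “the most change in the environment with the least change in the software,” one common approach is to isolate the environment-dependent detail behind a single small interface. Everything below (the config format, the function names) is a hypothetical illustration, not something from the original text:

```python
# Hypothetical sketch: isolating an environment-dependent detail
# (here, a config file format) behind one small function, so a
# change in the environment means a one-function change in the code.

import json

def parse_config(text):
    """The only place in the program that knows the on-disk format.

    If the environment later switches from JSON to some other format,
    only this function has to change; every caller keeps working
    untouched.
    """
    return json.loads(text)

def get_max_connections(config_text):
    # Callers depend on the *meaning* of the config, not its format.
    return parse_config(config_text)["max_connections"]

print(get_max_connections('{"max_connections": 10}'))  # prints 10
```

The point of the sketch is only that the amount of code that must change tracks the size of the interface, not the size of the program.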
“The best design is the one that allows for the most change in the environment with the least change in the software.”
This almost sounds like a definition of “modularity”. 🙂
Hahaha, yeah. 🙂 At the very least, it explains why modularity is a good idea. 🙂
How about switching to a new version of your compiler? How about a Firefox extension that breaks with a new update of Firefox? How about an application that worked perfectly until MS decided to limit the maximum number of new connections in XP SP2? Of course, you could say that these defects were always there (but in the case of FF, that really doesn’t have to be true; sometimes the internal API changes and your extension just stops working without you doing anything). Or maybe the defect isn’t in my software, because I haven’t changed a line, but in “the environment”?
Or maybe I’ve just been overexposed to rigorous mathematical texts that really don’t leave any room for doubt (once you actually manage to understand the three-page proof of some seemingly simple theorem that builds on at least four previously proven theorems…), and I should give your laws a little more breathing space. I mean, empirical science (as opposed to pure mathematics) rarely works with axiomatic laws and logical derivations; its laws are “based on a sufficiently large number of empirical observations that [they are] taken as fully verified.” Which is maybe why I’m a bit confused. You use axioms and logical derivation as in pure math (but because this is more of an empirical science, they lack rigorous proofs); on the other hand, for it to be an empirical science, it lacks concrete data. I confess, I’ve used Wikipedia’s article on science to put my thoughts into words (even to the point of citing it), but it seems to nicely express the source of my doubts.
If you feel that I’m generating too much stop energy, feel free to ignore me 😉 I mean, the resulting law is very interesting; it’s just the process you used to arrive at it that seems a bit weird. Besides, you could arrive at the last law using other, more empirically proven methods (and methods that have been studied much more deeply), such as some sort of “economics”:
1. If a program satisfies a demand, it generates a benefit (or profit, but it doesn’t have to be $$).
2. If a program stops satisfying a demand, it generates less benefit.
3. Less benefit is bad (all of these laws are axiomatic).
4. To keep a program generating benefit, it needs to keep satisfying a demand.
5. To keep satisfying a demand, it has to be developed, which has an associated cost (this could be disputed; there are other, more sinister ways to force users to be satisfied by a particular program than making the program better).
6. If we develop it too much, the cost of development will be higher than the benefit (this is bad).
7. We need to develop the program in such a way that it still satisfies a demand, but the cost doesn’t outweigh the benefit.
8. Ergo, the less we have to develop while still keeping stable benefits, the better.
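The trade-off in points 6–8 can be put into a toy calculation. All of the numbers here are invented purely for illustration; the only claim is the shape of the arithmetic, benefit minus development cost:

```python
# Toy model (all numbers invented): cumulative net benefit of keeping
# a program satisfying demand, under two designs with different
# per-change development costs.

def cumulative_net_benefit(benefit_per_period, cost_per_change, periods):
    # Assume one environment change per period, each requiring development.
    return periods * (benefit_per_period - cost_per_change)

rigid_design    = cumulative_net_benefit(100, 90, periods=5)   # 5 * 10  = 50
flexible_design = cumulative_net_benefit(100, 20, periods=5)   # 5 * 80  = 400
print(rigid_design, flexible_design)  # prints 50 400
```

Same demand, same benefit; the design that makes each change cheap is the one that keeps the net benefit positive as changes accumulate.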
Now we could go into a lengthy analysis of the ways in which a program stops generating benefit (defects, changes in the environment, change in demand), or say that there is a correlation between the design of a program and its development costs… but it just comes down to cost-benefit analysis (and it also explains the recent downfall of AllPeers, which was shut down for lack of funding. Would a better “code design” of the AllPeers software have made it still exist today?)
Switching the compiler would be changing something about your software. The Firefox one is a good question. You didn’t really introduce a defect. It’s more of a Second Law thing: that you have to change.
Yes, it is fundamentally an empirical science. But for me to say that something is a law, I personally like to have a philosophical basis for it; that is, I like to know why it’s true, not just that it is.
I did actually arrive at all of the laws empirically, and in fact in my original drafts for my book, I called all of these “rules” instead of “laws”, because I couldn’t prove them sufficiently for me. The final test of any engineering law is “Does it work?”, but I personally just like to have a “why” before I say that something is really a law.
I think your economic derivation is very interesting! I suspect many of these laws could be derived in various ways. And of course, as I said, I actually observed them first and derived them second.
I can’t really say much about AllPeers. Did it become unmaintainably expensive because of the code design? I don’t know. I’d guess that operating costs would be high in any case, but I really have no idea. There are definitely factors outside of design that can make a project fail, but that’s outside of the area of these laws. That’d be more of a management issue, and less a technical issue.
Also, I’m less trying to prove these laws by these derivations, and more trying to help people understand them, by giving some logical background for them.
However, it is possible to have previously-hidden defects in your software exposed by other software changing around it, viz. GCC 4.3 and the Direction Flag in the Linux kernel.
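The GCC 4.3 / Direction Flag case is about an unstated assumption shared between two pieces of software: one side relies on a default that the other side used to guarantee. A hypothetical, much-simplified Python analogue (the `Library` class and price formatting are invented for illustration only):

```python
# Hypothetical sketch of a latent defect exposed by an environment
# change: the caller silently assumes a default it never sets itself,
# much as the kernel assumed the Direction Flag was already cleared.

# "The environment": a library with a module-level setting.
class Library:
    decimal_places = 2  # today's default

    @classmethod
    def format_price(cls, value):
        return f"{value:.{cls.decimal_places}f}"

# "Our software": code that relies on the default without saying so.
def price_label(value):
    return "$" + Library.format_price(value)

print(price_label(3.5))      # "$3.50" -- works, defect still hidden

# The environment changes; not one line of our code did.
Library.decimal_places = 0
print(price_label(3.5))      # "$4" -- the latent defect is now visible
```

The defect (depending on an invariant nobody promised) was there all along; the environment change only made it observable, which is exactly the distinction being drawn here.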