Ideally, any science should have, as its base, a series of unbreakable laws from which others are derived. What is a law? Well, in the field of science, it’s something that:
- Is universally true, without exception.
- Predicts phenomena that, when looked for, will be found to exist in the real world.
Some of the best laws are axiomatic, a big word meaning “obviously true.” For example, “Yesterday happened before today” is an axiomatic statement–the definition of the word “yesterday” makes that obviously true.
For the science of software design, we are lucky to have an axiomatic basic law which is senior to all others:
There is more future time than there is present time.
This is obviously true. “Now” is an infinitely small moment that quickly becomes another “now.” The future is infinite.
So, since the future is infinitely large and the present is infinitely small, we can derive another obvious statement:
The future is more important than the present.
Now, when we’re talking about an infinitely small present and an infinite future, what I’ve said there is pretty obvious. In the real world, though, we have to ask the question: how much future are we talking about? What’s more important, the next five minutes or the next ten years?
Well, in order to answer that question, we have to make one assumption: that you want your program to continue to exist and be used in the future. If you only want it to exist for the next five minutes, then those are the most important. If you want it to continue to be around for 10 years, then those 10 years are the most important.
So all together, this tells us how good our software design needs to be–it needs to be exactly as good as there is future time in which our software must exist.
If you’re writing a program that’s only going to be used once, you don’t have to worry about its design. If you’re writing a program that’s going to be used and modified by astronauts on a 100-year voyage to Alpha Centauri, you have to be really good.
So, this science of software design is a thing where you have choices–if you follow and understand all of its laws, you will have a program that survives very well into the future. If you follow none of them, your program won’t continue to exist very long, or at the very least, it will become more and more difficult to ensure its continued existence.
Now, keep in mind that sometimes in life, there are situations where what you do for the next five minutes determines whether or not you live or die. Similarly, in the real world of software, there can be situations where what your organization does right now determines whether you go bankrupt or not. These are totally valid decisions according to this law, because if your software falls entirely out of existence right now, then the next ten years don’t matter. There is no future existence for something that no longer exists. (Another axiomatic statement!)
However, such situations are generally the exception, not the rule. If you found yourself constantly in a situation where the next five minutes determined your life or death, wouldn’t you want to get out of that? Similarly, if you find yourself in an organization that always insists that the next five minutes are more important than the next ten years, perhaps you should consider leaving.
Now, don’t pass any of this off as unimportant just because it’s “obvious.” What matters isn’t the statement itself, but the importance placed upon the statement. This is the senior law of software design, from which all other aspects of good design flow without exception. If you know of any exceptions, I’d be happy to hear them so that I could refine the science using a higher primary law. But I’m not aware of any exceptions.
When thinking about this law, I usually apply it from the viewpoint of a programmer, not as a user. Sure, a program that is written now and can still be used in 20 years would be fantastically designed from a user’s perspective. But what I’m concerned about mostly is whether or not a software developer will be able to fix or modify this program 20 years from now. That’s an entirely realistic concern–there are programs that have been around for 20 years or more.
There are also other important things to think about in relation to this law–what you can and can’t know about the future. But that’s a subject for another blog.
Are you familiar with the concept of “present value” in economics? It explains why $200 now might be worth more than $5 every year forever, depending on the interest rate. This idea seems to conflict with your “axiomatic law” that the future is always more important than the present.
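For readers who want the arithmetic behind that comparison, here is a minimal sketch (the specific interest rates below are illustrative, not from the comment): the present value of a perpetuity paying C per year at interest rate r is C/r, so which option wins depends entirely on r.

```python
# A minimal sketch of the "present value" comparison above.
# The $200 lump sum, the $5/year perpetuity, and the interest
# rates are illustrative numbers.

def perpetuity_present_value(annual_payment: float, rate: float) -> float:
    """Present value of a fixed payment received every year forever.

    Follows from summing the geometric series
    C/(1+r) + C/(1+r)^2 + ... = C/r.
    """
    if rate <= 0:
        raise ValueError("rate must be positive for the series to converge")
    return annual_payment / rate

pv_high_rate = perpetuity_present_value(5, 0.10)  # about $50: take the $200 now
pv_low_rate = perpetuity_present_value(5, 0.01)   # about $500: the perpetuity wins
print(pv_high_rate, pv_low_rate)
```

So the objection only bites at sufficiently high interest rates: at anything below 2.5%, the $5-forever stream is worth more than $200 today.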
Yeah, I think that’s an interesting point. There are definitely times when present investments make more sense, and that’s basically the “next five minutes” scenario I went over. One makes an investment now because we take into consideration the future and realize that this is what will be more valuable in that future–an investment right now.
But isn’t that $200 only important now because of its capabilities for the future?
I think that “the next five minutes” scenario happens more often than you give it credit for. Any startup has a limited amount of time to prove its worth before it loses its funding. Every vendor has a certain amount of time to get things right before the client goes in a different direction.
Well, sure, perhaps for reasons of market pressure, particularly for new organizations, this tends to happen. The most valid case is competitive pressure: being “the first kid on the scene” can be a big competitive advantage. Of course, I don’t think that advantage justifies throwing away quality. The Sega Genesis came out before the Super Nintendo, but Sega eventually left the console business while Nintendo is still quite alive.
However, it’s also important to remember that most startups fail. If there is a behavior common to most startups, it is probably (though not definitely) a bad behavior.
And yes, it’s true that there is pressure from the clients to “get that feature out.” However, I’d say that continuously releasing a poorly-designed product is far more dangerous to an organization than holding off a feature until it’s actually ready.
Granted, some engineers have too much perfection in their idea of “ready”, and that has to be balanced by the economic reality of delivering a product. But in my experience, the thing that kills software in the long run is “no attention to the future.”
There are examples quite in my favor, too, going back once again to the video game industry. Half-Life 2 and Team Fortress 2 took quite a long time to come out, but their attention to detail made them very successful when they were released. They didn’t ship a half-done product just to “get something out there now”; they waited until they had something polished that would survive into the future.
To make a science, won’t you need scientific definitions of terms? By making assumptions about software design without having a good, scientific definition of what software design is, you’re building on brittle foundations.
You even introduce a metric: the goodness of a software design (and postulate that it correlates with the amount of future time a program “exists” and is “in use”). By the way, what does “in use” (even from a programmer’s standpoint) mean? How many people? How many companies?
While your articles are really interesting (and I’ll be thinking about them a lot), I wouldn’t call them science. You could easily substitute the goodness of software design with “the strength of love” and “program” with “partnership”; the result would be equally scientifically sound.
Sure, I think a scientific definition of terms is a good idea! 🙂 I’ll think about that. 🙂
You’re right that I have not scientifically defined the term “software design” other than what it already means in the field of software engineering–the fashion in which your software is constructed.
Your analogy is slightly flawed in that “love” and “partnership” are not necessarily related, whereas “the design of my software” and “my software” are more clearly related.
“In use” means just that: a program that’s not “in use” is not being used by anybody. But I think I sometimes assume that other people fundamentally think with systems of logic that are not widely known. For example, anything can be more or less “in use,” just as anything can be more or less “good.” To me that is obvious and doesn’t need to be stated, but perhaps that’s not the case for everybody.
Also note that I don’t use any of those words in the laws themselves, only in my discussion of them. That is, the stuff that follows the laws is somewhat more “an application of this law to show you how it can be used” than “an extension of this law.”
My criteria for a science are usually that it’s applicable to and embracive of the field it describes, and I think the law itself is good enough to meet those criteria. One reader, however, pointed out to me that I do need to define the word “important,” which I might do by editing this blog or posting another one. Essentially, “important” means “deserves more attention than other things.”
It’s true that these might not be the most fundamental laws, but this (and the others that will be coming in future blogs) are the closest that I have been able to get. If somebody evolves something better, I’d definitely be happy to see it.
Reading your comment, however, makes me think that perhaps I should say “be useful” as opposed to “be used.” I’m not sure either really gives us the specificity we’d like to see, but “be useful” might be slightly more descriptive. 🙂
This is one reason I post these things as blogs–to get this sort of feedback! 🙂
Actually, “useful” is even harder to define than “in use”. Is a program useful if it is useful to at least one person? What if only a part of a program is useful to a single person, but the other parts are not useful to anybody?
What I miss most from this post is a sound connection between the law and the assumptions you’re making. I could rephrase the basic “law” in terms of numbers, and it would still be true:
There are more natural numbers larger than X than there are numbers with the value X.
The derivative would be:
Numbers larger than X are more important than X.
This is obviously nonsense. So, what is the reasoning for the future being more important than the present? Of course, you could state that “the future is more important than the present” is an axiom, not a derivative, but axioms don’t really have to be true. Just take the various non-Euclidean geometries. So you could build one theory based on the future being more important, and another in which the future is not more important.
Okay, I think you make a lot of good points, here! I will definitely take all of this into account. 🙂
1) One minute in the present is usually more important than one minute in the distant future – we should remember all those measurements are non-linear properties.
2) In software development, it doesn’t help if your design is good for being understood and further modified in, say, 10 years if you never get to ship a stable release; see the Netscape 5 (or 6) story. [Well, the code is still out there and better than ever, but Netscape died from reworking Mozilla.] The hard thing is finding a balance between future-proof software design and shippable releases, and that’s hard to get right in many cases.
Hrm, I think 1 is an interesting proposition. I don’t think you can actually separate out “1 minute in the future” and “1 minute 10 years from now”, because time is linear. That is, “1 minute in the future 10 years from now” is actually always “10 years from now.” You can’t jump around in time, so it’s not very useful to look at it in any non-linear fashion.
As far as 2 goes, you’re definitely right! Probably what my laws are missing is “the purpose of software” underneath them, which would be something like “to help people.” That would be the guiding principle, so obviously a piece of software that never ships isn’t nearly as helpful as a piece of software that ships.
My personal experience in that area is that failing to design future-proof software makes it harder and harder to make shippable releases as time goes on, so they’re very much related.
Hmmm. So as always, it’s a matter of trying to find a golden mean between the two, which is very hard 🙂
I thought some more about the assumption you made, and tried to apply some math and mathematical logic to it.
So (if I understand it correctly) you’re saying
If I want my software to exist for time t, then the goodness g of the software’s design needs to be (exactly) G(t).
I think we can reasonably assume that G() is monotonically rising, which means that for two times x < y it holds that G(x) <= G(y). I myself have written programs that I specifically designed for a single task that was bothering me at the time. So I wanted my software to exist for time t1, and its “goodness” was G(t1). After a few years (t2), I happened to need the program again, and lo and behold, I hadn’t deleted it, and it was still there. I was even able to modify the program to fit my new needs. Had I originally wanted the program to be there in a few years (t2), I would have designed it to be G(t2) good. But since it would have been essentially the same program that was around at t2, solving the single problem I had at that time, apparently G(t2) is no better than G(t1), and since G is monotonically rising, it follows that G(t1) = G(t2). Since we can’t really say anything specific about t1 and t2 (what we could say would be only statistical guesswork), this reasoning applies to all t1 < t2, which means that G(t1) = G(t2) = const: the goodness of the design doesn’t depend on the time I want the program to exist.
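Spelled out formally (in the commenter’s own notation, where G(t) is the design goodness required for a program to exist for time t), the argument runs roughly:

```latex
\begin{align*}
\text{Monotonicity assumption: } & t_1 < t_2 \implies G(t_1) \le G(t_2) \\
\text{Observation (the same program sufficed at both times): } & G(t_2) \le G(t_1) \\
\text{Combining the two: } & G(t_1) = G(t_2) \\
\text{Since } t_1 < t_2 \text{ were arbitrary: } & G(t) = \text{const}
\end{align*}
```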
So I probably instinctively designed the program better than G() requires, and the original assumption should be “at least as good as G(t)”. Let’s say the actual goodness of a program is therefore g = G(t) + p, where p is a goodness factor by which the program was overdesigned (and which doesn’t depend on the time I want it to exist). Unfortunately, we don’t know anything about this factor (and the law about the future being more important than the present doesn’t help us here, since the factor specifically doesn’t depend on the future). It would be possible, for any two programs, to tweak their respective values of p so that p >> G(t), which again makes the contribution of the time we want the program to exist negligible.
What remains? Apparently the goodness function depends not only on the time we want the program to exist, but also on the program itself: G(p,t). But the program depends on its design, which means that p = f(G()). So, to evaluate the “goodness” function for a program, we need the program itself, which doesn’t help us with designing programs. (The only thing it does is point toward iterative software development methodologies, especially agile ones that “embrace change” and accept that no design is perfect.)
So, no matter how I look at the assumption, it doesn’t really work out (and the law from which it stems has no real influence on this, so it’s not very scientific, at least in the mathematical sense). Besides, since it’s an implication, it tells us nothing about the situation where we don’t want a program to exist for a specific future time. Neither does it help in the quite common case where a programmer simply supposes the program will exist “forever”, which would mean its design needs to be “the best possible”: sup(G()).
I could make a different assumption about programs: a program will be used (or be useful) indefinitely, regardless of how well it was designed, unless the hardware changes, the operating system changes, the problem the program solves changes, or a “better” program that solves the same problem appears (or a bunch of other things “go wrong”, for example a web service going bankrupt). This directly contradicts your original assumption, yet it cannot be disproved (or proved) simply by the fact that the future is more important than the present. Actually, this is “the way of Microsoft”: they don’t necessarily make the program itself better; sometimes it is enough that there isn’t any competition. (Sometimes this doesn’t work out, as with IE, and I’m not sure how Vista vs. XP fits in.)
Using “the time we want the program to exist” could maybe be the basis of a new software development methodology, “program lifetime expectancy centered design”, but methodologies aren’t science.
Wow, I can see that you really put some thought into this!
Well, if we want to talk about it all this way, you did point out that all that needs to hold is G(x) <= G(y), so if G(x) = G(y), then our condition is still true. It's more of a statistical thing than an absolute.

Perhaps what would be easier to prove is the reverse: that given a poor enough software design, you will have to rewrite or redesign your software after time t (or your software will no longer be able to continue to exist and be useful after time t).

As far as change goes, you're actually leaking right into my second law, and a few other things that I have in draft. 🙂 I think really one of the things you're running into here is that I haven't created a whole logical system yet with just this law, only a single statement without anything to compare it to.

As far as "the way of Microsoft" goes, I'd ask where it's gotten them. I think IE5 and IE6 drove them into quite a hole, and the time it took to release Vista saw other technologies really impinge upon their market share.

By the way, everything you're saying is definitely making me think, and I've already got a new article that "steps back" a bit that will post on Monday.

-Max
Yes, but if G(x) and G(y) are equal for all x < y, it would mean that there is only one “goodness” of design, which is constant. That would mean that “all programs are equally well (or badly) designed, irrespective of the time they were made to exist”, which doesn’t help us design or evaluate programs (and actually reinforces my assumption that all programs are used indefinitely).
As for the reverse: there are programs with extremely bad (even deliberately bad) design, and yet they will probably exist and be used indefinitely, if only for entertainment purposes. The OMGWTF contest displays a host of abominations, and none of them will cease to exist unless the site goes down (or something else happens, but it will not be because of their bad design). Generally I think Worse Than Failure is a good site for finding software that should probably never have been written, yet it continues to exist, be used, and be the source of nightmares for users and developers alike. And I’ve come to believe that these “worse than failures” are actually the norm rather than statistical outliers (the worst stuff just doesn’t happen in open source development).
But I think I’m getting a bit sidetracked, so back on topic.
I thought about the difference between software developers and architects or builders. They have real, physical, quantitative measures. Will a house stand in 5 years, or will it fall down? How much heat does it dissipate? If it rains, does it get wet inside? Those are all real, physically measurable things. Unfortunately, for a program there is no way to verify any of this. Is the program correct? We don’t know; there is no algorithm that can decide this in general. Will it exist 5 years from now? Maybe yes, maybe no, but unless the equivalent of an earthquake happens, the program will indeed exist 5 years from now. Further, no two programs are the same. Architects have a big advantage here: their experiments are repeatable. If they construct a single good house, they can construct it again the same way, and it will be a net gain. But if I successfully write a hello world program, what’s the point of writing the same program again?
Well, if you’d rather pay attention to the present than the future, that is absolutely your choice.
This would seem to contradict the principle of “worse is better”, which Mozilla has used quite productively over its ten years (and whose neglect early on nearly doomed the project). It also smacks of premature optimization. There may well be times when it’s better to design like this, but it seems less a law than a strategy whose utility depends on circumstance.
Finding some principles to help guide developers in determining which situations call for which approaches would be super helpful, however, so it’s a worthy investigation!
Hey Myk! Yes, actually I totally agree with you. 🙂 I’m pretty much in favor of “worse is better.” Actually, as the articles go on, you’ll see how I can derive a philosophy somewhat like that based on this principle. 🙂
Thank you for saying it’s a worthy investigation! 🙂 I hope that I can come up with some helpful things!
Great article! It has brought me to reflection, which, by the way, I think is the goal of this first law.
For me, the first law inspires us, as software developers, to make a decision about how good our design must be now, for a particular problem. (For me, the goodness of a design is how easy, in terms of effort, it is to change the software to adapt to its environment, which is by definition dynamic → second law.)
About the statement “The future is more important than the present”: I don’t think it really helps us reach agreement about the first law. Personally, I always doubt panaceas, or solutions that claim to be good for everything. In this context, saying “the future is more important than the present” sounds dangerously like “the need for a good design is more important than the problem to solve,” with which I think everybody would disagree. Personally, I don’t think we have to debate whether possible future needs are more important than the current need to release. For me it would be a success if we all agreed that at least thinking about the future is a requirement for providing good software, regardless of the goodness of the design we finally implement.
I will keep an eye on your articles; I think it is very interesting to try to reach an agreement on these issues.
Hey Jaime. 🙂 Thanks for all the positive feedback.
Providing excellent software that really helps people should always be the end result of good software design. It’s important to think more about providing that result in the future than in the present, though. All too often what happens is that projects short-sightedly sacrifice long-term excellence for short-term success, and ultimately that leads to their downfall.
I totally agree with you that what’s most important, though, is *thinking* about the future. It’s not that we should be trying to *predict* the future or imagine what it will bring, but simply that we should remember that there is going to *be* a future, and know that we should be striving for our software to be *better* then, not *worse*. 🙂