I think that I have come up with a way to explain exactly how hard it is to predict the future of software. The most basic version of this theory is:
The accuracy of future predictions decreases relative to the complexity of the system and the distance into the future you are trying to predict.
As your system becomes more and more complex, you can predict smaller and smaller pieces of the future with any accuracy. As it becomes simpler, you can predict further and further into the future with accuracy.
For example, it’s fairly easy to predict the behavior of a “Hello, World” program quite far into the future. It will, most likely, continue to print “Hello, World” when you run it. Remember that this is a sliding scale, sort of a probability of how much you can say about what the future holds. You could be 99% sure that it will still work the same way two days from now, but there is still that 1% chance that it won’t.
However, after a certain point, even the behavior of “Hello, World” becomes unpredictable. For example, here’s “Hello, World” in Python 2.0, back in the year 2000:
print "Hello, World!"
But if you tried to run that in Python 3, it would be a syntax error. In Python 3 it’s:
print("Hello, World!")
You couldn’t have predicted that in the year 2000, and even if you had, there isn’t anything you could have done about it ahead of time. With things like this, your only hope is keeping your system simple enough that you can update it easily to use the new syntax. Not “flexible,” not “generic,” but simply simple to understand and modify.
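As a concrete sketch, one minimal way to handle that particular change is the standard __future__ import, which makes print behave as a function in Python 2.6 and later, so the same file runs under both Python 2 and Python 3:

from __future__ import print_function  # gives Python 2.6+ the Python 3 print function
print("Hello, World!")  # this exact line now works in both Python 2 and Python 3

The specific import isn’t the point; the point is that a program this simple can absorb a syntax change with a one-line edit instead of a rewrite.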
In reality, there’s a longer, more detailed logical sequence behind the rule above.