So, as a little digression from our normal content, I felt like writing a list of the top 10 reasons to work on open-source software…but being a born Californian, I felt I had to pay a little respect to my roots. So here we have the top 10 reasons to work on open-source…as said by, like, a dude from Cali (with translations underneath 🙂 ). Continue reading
There’s a strange sort of social disease going around in technology circles today, and it all centers around this word “innovation.”
Everybody wants to “innovate.” The news talks about “who’s being the most innovative.” Marketing for companies insists that they are “innovating.”
Except actually, it’s not innovation that leads to success. It’s execution.
It doesn’t matter how good or how new my idea is. It matters how well I carry it out in the real world.
Now, our history books worship the inventors, not the executors. We are taught all about the people who invent new things, come up with new ideas, and blaze new trails. But look around you at the present and the recent past, and you'll see that the most successful people are the ones who carried out an idea really well, not the people who came up with it.
Elvis didn’t invent rock and roll. Ford didn’t invent the automobile or the assembly line. Apple didn’t invent the GUI. Webster didn’t invent dictionaries. Maytag didn’t invent the washing machine. Google didn’t invent web searching. I could go on and on and on.
Granted, sometimes the innovator also is an excellent executor (Alexander Graham Bell being an example), but usually that’s not the case. Most inventors don’t turn out to be the most successful people in their field (or even successful at all).
So stop worrying about “coming up with something new.” You don’t have to do that. You just have to execute an already existing idea really, really well. You can add your own flair to it, maybe, or fix it up a little, but you don’t have to have something brand new.
There are so many examples proving this that it's hard to look anywhere without seeing one. Just look, and you'll see.
Now, I’m not saying that people shouldn’t innovate. You should! It’s fun, and it advances the whole human race a tiny step every time you do. But it’s not the path to long-term success for you or for any group you belong to. That’s all in execution.
So, you might have heard that Google released a web browser.
Designing for performance is something that Google does often. Now, designing for performance usually leads to complexity. So, being a major supporter of software simplicity, I’m opposed, in a theoretical sense, to designing for performance.
However, Google is in an interesting situation. Essentially, we live in the Bronze Age of computing (or perhaps the Silicon Age, as I suspect future historians may call this period of history). Our computers are primitive compared to what we are likely to have 50 to 100 (or 1000!) years from now. That may seem hard to believe, but it's always hard to imagine the far future. Google operates at a scale that exceeds what our current hardware technology comfortably supports, and so their design methods unfortunately can't live in a theoretical fairy-land where hardware is always "good enough." (However, I personally like to live in that land when possible, because hardware will improve as time goes on, an important fact to understand if you're going to have a long-lived software project.)
What is it about our computers that makes them so primitive? Well, usually I don’t go about predicting the future in this blog, or even talking about too many specifics, because I want to keep things generally applicable (and also because the future is hard to predict, particularly the far future). But I will talk about some of my thoughts here on this, and you can agree with them or not, as you please. Continue reading
I don’t know if this has become clear to everybody yet, but you really need to design from the start. You need to be working on simplicity and the other Laws of Software Design from the very beginning of your project.
My policy on projects that I control is that we never add a feature unless the design can support it simply. This drives some people crazy, notably people who have no concept of the future. They start to foam at the mouth and say things like, “We can’t wait! This feature is so important!” or “Just put it in now and we’ll just clean it up later!” They don’t realize that this is their normal attitude. They’re going to say the same thing about the next feature. If you give in to them, then all of your code will be poorly designed and much too complex. It’ll be Frankenstein’s monster, jammed together out of broken parts. And just like the friendly green giant, it’ll be big, ugly, unstable, and harmful to your health. Continue reading
I have come up with an analogy that should make the basic principles of software design understandable to everybody. The great thing about this analogy is that it covers basically everything there is to know about software design. Continue reading
So, I just had my talk, Code Simplicity: Software Design In Open Source Projects at OSCON 2008. It went really well!
I’ll be talking on Thursday, July 24, at 3:25pm, in either room E143 or E144 at the Oregon Convention Center.
I’ll tell you basically everything I know and have learned about software design, and then how it applies to Open Source software, all in about 45 minutes. Should be fun and interesting!
Bugs most commonly come from somebody’s failure to reduce complexity. Less commonly, they come from the programmer’s misunderstanding of something that was actually simple.
Other than typos, I’m pretty sure that those two things are the source of all bugs, though I haven’t yet done extensive research to prove it.
When something is complex, it's far too easy to misuse it. If there's a black box with millions of unlabeled buttons on it, and 16 of them blow up the world, somebody's going to blow up the world. Similarly, in programming, if you can't easily understand the documentation of a language, or the actual language itself, you're going to misuse it somehow.
There's no right way to use a box with millions of unlabeled buttons, really. You could never figure it out, and even if you wanted to read the 1000-page manual, you probably couldn't remember the whole thing well enough to use the box correctly. Similarly, if you make anything complex enough, people are more likely to use it wrongly than to use it correctly. If you have 50, 100, or 1000 of these complex parts all put together, they'll never work right, no matter how brilliant the engineer who puts them together.
So do you start to see where bugs come from? Every time you added some complexity, somebody (and "somebody" could even be you, yourself) was more likely to misuse your complex code. Every time it wasn't crystal clear exactly what should be done and how your code should be used, somebody could have made a mistake. Then you put your code together with some other code, and there was another chance for mistakes or misuse. Then you put more pieces together, and so on.
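To make that concrete, here's a small, hypothetical sketch (the function names and options are invented for illustration). The same operation is exposed two ways: the first is a miniature version of the box of unlabeled buttons, and the second labels every button.

```python
# Hypothetical example: the same "save" operation, exposed two ways.

def save_v1(data, overwrite, backup, compress, verify):
    """Four anonymous booleans: a caller must memorize their order."""
    return {"data": data, "overwrite": overwrite, "backup": backup,
            "compress": compress, "verify": verify}

def save_v2(data, *, overwrite=False, backup=True, compress=False, verify=True):
    """Keyword-only options with safe defaults: each call site documents itself."""
    return {"data": data, "overwrite": overwrite, "backup": backup,
            "compress": compress, "verify": verify}

# With save_v1, swapping two booleans is silent and entirely plausible:
oops = save_v1("report.txt", True, False, True, False)  # which flag was which?

# With save_v2, the same mistake can't happen without being visible in the code:
ok = save_v2("report.txt", overwrite=True, compress=True)
```

Nothing about the second version is cleverer than the first; it just removes the opportunity for misuse, which is exactly where the bugs were going to come from.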
Often, this sort of situation happens: the hardware designer made the hardware really complicated. So it had to have a complicated assembly language. This made the programming language and the compiler really complicated. By the time you got on the scene, you had no hope of writing bug-free code without ingenious testing and design. And if your design was less than perfect, well…suddenly you have lots of bugs.
This is also a matter of understanding the viewpoint of other programmers. After all, something might be simple to you, but it might be complex to somebody who isn’t you.
If you want to understand the viewpoint of somebody who doesn’t know anything about your code, find the documentation of a library that you’ve never used, and read it.
Also, find some code you’ve never read, and read it. Try to understand not just the individual lines, but what the whole program is doing and how you would modify it if you had to. That’s the same experience people are having reading your code. You might notice that the complexity doesn’t have to get very high before it becomes frustrating to read other people’s code.
Now, once in a while, something is really simple, and the programmer just misunderstood it. That’s another thing to watch for. If you catch a programmer explaining something to you in a way that makes no sense, perhaps that programmer misunderstood something somewhere along the line. Of course, if the thing he was studying was extremely complex, he had basically no hope of fully understanding it without a PhD in that thing.
So these two things are very closely related. When you write code, it’s partially your responsibility that the programmer who reads your code in the future understands it, and understands it easily. Now, he could have some critical misunderstanding—maybe he never understood what “if” meant. That’s not your responsibility. Your responsibility is writing clear code, with the expectation that the future programmer reading your code understands the basics of programming and the language you’re using.
So, there are a few interesting rules that you can get out of this one:
The simpler your code is, the fewer bugs you will have.
Always work to simplify everything about your program.
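As a tiny illustration of those rules (the code is invented for this post), here is the same permission check before and after simplification. Both versions behave identically, but the simpler one leaves far fewer places for a bug to hide when somebody modifies it later:

```python
from collections import namedtuple

# Invented data types for the example.
User = namedtuple("User", ["name", "is_admin"])
Doc = namedtuple("Doc", ["owner", "locked"])

# Before: nested conditions, easy to misread and easy to break when editing.
def can_edit_v1(user, doc):
    if user is not None:
        if user.is_admin:
            return True
        else:
            if doc.owner == user.name:
                if not doc.locked:
                    return True
                else:
                    return False
            else:
                return False
    else:
        return False

# After: the rule stated directly, in one readable expression.
def can_edit_v2(user, doc):
    if user is None:
        return False
    return user.is_admin or (doc.owner == user.name and not doc.locked)
```

The simplification didn't add any new behavior; it just made the existing behavior obvious, which is the whole point.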
Okay, most programmers know the story—way back when, somebody found an actual insect inside a computer that was causing a problem. (Actually, apparently engineers have been calling problems “bugs” since earlier than that, but that story is fun.)
But really, when we say “bug” what exactly do we mean?
Here’s the precise definition of what constitutes a bug. Either:
- The program did not behave according to the programmer’s intentions.
- The programmer’s intentions did not fulfill common and reasonable user expectations.
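Here's a minimal, hypothetical illustration of both kinds of bug (the functions are invented for this example). The first function fails the first test: it doesn't do what the programmer intended. The third shows the second kind: even when the code matches the intention, the intention itself can violate reasonable user expectations.

```python
def average(numbers):
    # Kind 1: the program doesn't behave according to the programmer's
    # intentions. The programmer meant the arithmetic mean, but this
    # divides by the wrong count.
    return sum(numbers) / (len(numbers) - 1)  # bug: should be len(numbers)

def average_fixed(numbers):
    # Now the program matches the intention.
    return sum(numbers) / len(numbers)

def average_friendly(numbers):
    # Kind 2: suppose the programmer intended average_fixed([]) to just
    # crash with an obscure division error. The code would match that
    # intention perfectly, and it would still be a bug, because users
    # reasonably expect a clear report of what they did wrong:
    if not numbers:
        raise ValueError("average() requires at least one number")
    return sum(numbers) / len(numbers)
```

In both cases, the definition above tells you where to look: first compare the behavior to the intention, then compare the intention to what a reasonable user would expect.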
In The Never-Shipping Product, I mentioned seven ways to add complexity, and one of them was “Lock-In To Bad Technologies.” But what’s a “bad” technology? Is it all just based on opinion? Should we throw our hands up in the air and give in to the whims of our junior developer who thinks writing the application in BASIC is a great idea?
Well, okay, maybe it's not all opinion. There must be some way to tell a good technology from a bad one (besides looking back after five years of development and saying, "Wow, we really shouldn't have decided to base our product on Microsoft Bob").
When I’m evaluating a technology for inclusion in one of my projects, I look particularly at the technology’s survival potential, interoperability, and attention to quality. Continue reading