Code Simplicity

Designing for Performance, and the Future of Computing

So, you might have heard that Google released a web browser.

One of the features of this web browser is its JavaScript engine, called V8, which is designed for performance.

Designing for performance is something that Google does often. Now, designing for performance usually leads to complexity. So, being a major supporter of software simplicity, I’m opposed, in a theoretical sense, to designing for performance.

However, Google is in an interesting situation. Essentially, we live in the Bronze Age of computing (or perhaps the Silicon Age, as I suspect future historians may call this period of history). Our computers are primitive, compared to what we are likely to have 50 to 100 (or 1000!) years from now. That may seem hard to believe, but it’s always hard to imagine the far future. Google is operating at a scale that far exceeds what our current hardware technology can comfortably handle, so their design methods unfortunately can’t live in a theoretical fairy-land where hardware is always “good enough.” (I personally like to live in that land when possible, though, because hardware will improve as time goes on–an important fact to understand if you’re going to have a long-lived software project.)

What is it about our computers that makes them so primitive? Well, usually I don’t go about predicting the future in this blog, or even talking about too many specifics, because I want to keep things generally applicable (and also because the future is hard to predict, particularly the far future). But I will share some of my thoughts on this here, and you can agree with them or not, as you please.

Our Current Computers

First, you have to understand that to me, anything that could be running this web browser right now, that could be showing me my desktop, and that I could be typing into right now–I’d consider that a computer. It actually doesn’t matter what it’s doing underneath, or how it works. A good, offhand definition of a computer, then, would be:

Any piece of matter which can carry out symbolic instructions and compare data in assistance of a human goal.

Currently computers do this using math, which they do with transistors, digitally. The word “digital” comes from “digit”, a word meaning, basically, “fingers or toes.” For most people, your fingers and toes are separate, individual items. They don’t blend into each other. “Digitally” means, basically, “done with separate, individual numbers.” You know, like 1, 2, 3, 4–not 1.1, 1.2, 1.3. You can hold up 1 finger or 2 fingers, but not (normally) 1.5 fingers.

Said another way, current computers change from one fixed state to another, very fast. They follow instructions that change their state. I don’t really care if we say that the state is “transistor one is on, transistor two is off, transistor three is on…” or “Bob has a cat, Mary has a dog, Jim has a cat…”; either way, it’s all a description of a state. What we care about ultimately is the total current state. If there are 1,000,000 possible states (and there are far, far, far more in a current computer), then we can say that “we are at state 10,456” and that’s all we really need to know.
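If it helps to make that concrete, here’s a tiny sketch in Python. The “instructions” here are invented purely for illustration and look nothing like real hardware instructions; the point is only that the machine’s whole configuration is a single number, and each instruction is just a rule for moving from one numbered state to the next:

    # A toy "computer": its entire configuration is one number, and each
    # instruction is a rule that turns the current state into the next one.
    NUM_STATES = 1_000_000

    def increment(state):
        return (state + 1) % NUM_STATES

    def reset_if_even(state):
        return 0 if state % 2 == 0 else state

    # A "program" is just a sequence of instructions applied to the total state.
    program = [increment, increment, reset_if_even, increment]

    state = 10_456
    for instruction in program:
        state = instruction(state)
    print("we are at state", state)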

The problem with current computers is the pace of improvement we get from Moore’s Law. We don’t need computers that are twice as fast. We need computers that are about a million times faster than the ones we have. We need computers so fast that software engineers never have to worry about performance ever again, and can just design their code to be the sanest, most maintainable thing possible. With that kind of performance, we could design almost any software imaginable.

At Moore’s Law rates, we’re not going to get computers 1,000 times faster for about 20 years, and we won’t get to 1,000,000 times faster for about 40 years. And there’s a chance that the laws of physics will stop us dead in our tracks before that point, anyhow.
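If you’re curious, here’s the back-of-the-envelope arithmetic behind those numbers, assuming the popular reading of Moore’s Law that computing power doubles roughly every two years (Moore actually spoke about transistor counts, but the doubling-every-two-years approximation is what matters here):

    # Rough arithmetic: how long until computers are N times faster,
    # assuming performance doubles about every two years.
    YEARS_PER_DOUBLING = 2

    def years_to_speedup(factor):
        """How many years of doubling it takes to reach a given speedup."""
        years = 0
        speedup = 1
        while speedup < factor:
            speedup *= 2
            years += YEARS_PER_DOUBLING
        return years

    print(years_to_speedup(1_000))      # 10 doublings (2**10 = 1024), about 20 years
    print(years_to_speedup(1_000_000))  # 20 doublings (2**20 = 1,048,576), about 40 years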

Future Computers

So, for the near future, let’s stick with the idea of a machine that changes states, because I can’t think of any other clever way to make a computer that would satisfy my definition from above. There are three problems, then:

  1. How many states can we represent?
  2. How many physical resources does it require to represent all those states (including space, power, etc.)?
  3. How quickly can we change between states?

And then “How many states can we represent at once?” might also be a good question–we’re seeing this come up more and more with dual-core and quad-core processors (and other technologies before that, but I don’t want to assume that everybody reading my blog is an expert in hardware architecture, and I don’t want to explain those technologies).

So the ideal answers are:

  1. We can represent an infinite number of states.
  2. It requires no physical resources to represent them.
  3. We can change between them in no time at all.

And then also “We can represent an infinite number of different states at once.”

In theory, we could already represent an infinite number of states; we’d just have to keep adding transistors to our chips. So really, the question becomes, “How many states can we represent with how many physical resources?” Currently we can fit two states into 32 nanometers. (That’s one transistor.)

My suspicion is that the future is not in fitting two states into a continually smaller space, but in fitting a near-infinite number of states into a slightly larger space. Electricity and other force waves can be “on” or “off”, but they also have lots of other properties, many of which are sufficient to represent an infinity (or near-infinity). The frequency of any wave represents an infinity, for example–you can have a 1 Hz wave, a 1.1 Hz wave, a 1.15 Hz wave, a 1.151 Hz wave, etc. So, that basically answers Question 1 ideally–you can have an infinite number of states; you just need some sufficiently small device in which a wave’s properties can be modified and measured by electronics, optics, or some other such technology.
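As a toy illustration of that idea (not a proposal for how such a device would actually be built), here’s a sketch that encodes a symbol as the frequency of a wave and then recovers it by measuring which frequency is strongest. It assumes Python with numpy, and the specific numbers are arbitrary; the finer you can make the frequency step and the measurement, the more states a single wave can carry:

    # A toy encoding: the "state" is carried by the frequency of a wave.
    # SAMPLE_RATE, BASE_HZ, and STEP_HZ are arbitrary numbers chosen for the demo.
    import numpy as np

    SAMPLE_RATE = 10_000   # measurements per second
    DURATION = 1.0         # how long we observe the wave, in seconds
    BASE_HZ = 100.0
    STEP_HZ = 1.0          # the finer this step, the more states fit in one wave

    def encode(symbol):
        """Represent an integer symbol as a sine wave at BASE_HZ + symbol * STEP_HZ."""
        t = np.arange(0, DURATION, 1.0 / SAMPLE_RATE)
        return np.sin(2 * np.pi * (BASE_HZ + symbol * STEP_HZ) * t)

    def decode(wave):
        """Recover the symbol by finding the strongest frequency in the wave."""
        spectrum = np.abs(np.fft.rfft(wave))
        freqs = np.fft.rfftfreq(len(wave), 1.0 / SAMPLE_RATE)
        peak_hz = freqs[np.argmax(spectrum)]
        return round((peak_hz - BASE_HZ) / STEP_HZ)

    assert decode(encode(42)) == 42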

You’ll notice that we’ve also conveniently answered our bonus question: once each individual component of our system can represent an infinity all by itself, we can represent quite a few different states at once.

If we want to look a bit further into the future, our second question can be answered by the fact that waves take up essentially no space (only the medium they travel through takes up space). Our understanding of physics is not (as far as I know) good enough yet to create structures out of pure force, but such structures would come quite close to taking up “no physical resources.”

And beyond that (how we get the state changes to happen in no time at all), I have no idea. That question may be unanswerable, and may only be resolvable by turning computers into something other than mathematical devices. (That is, machines not involved with states at all, but using some other method of following instructions and comparing data.) But the better our components become, the closer we can get to “no time.”

The Roundup

So those are my thoughts for the day on the future of computing. Sometimes designing software for performance is a necessary evil (but really only where it’s an extreme issue, like with Google’s products, or the great new need for speed in JavaScript nowadays, or in other low-level places), but I hope that future changes in the fundamental architecture of computers will make that necessity obsolete.

-Max
