So, you might have heard that Google released a web browser.
One of the features of this web browser is its JavaScript engine, called V8, which is designed for performance.
Designing for performance is something that Google does often. Now, designing for performance usually leads to complexity. So, being a major supporter of software simplicity, I’m opposed, in a theoretical sense, to designing for performance.
However, Google is in an interesting situation. Essentially, we live in the Bronze Age of computing (or perhaps the Silicon Age, as I suspect future historians may call this period of history). Our computers are primitive, compared to what we are likely to have 50 to 100 (or 1000!) years from now. That may seem hard to believe, but it’s always hard to imagine the far future. Google is operating on a level far exceeding our current hardware technology, really, and so their design methods unfortunately can’t live in a theoretical fairy-land where hardware is always “good enough.” (However, I personally like to live in that land, when possible, because hardware will improve as time goes on–an important fact to understand if you’re going to have a long-lived software project).
What is it about our computers that makes them so primitive? Well, usually I don’t go about predicting the future in this blog, or even talking about too many specifics, because I want to keep things generally applicable (and also because the future is hard to predict, particularly the far future). But I will talk about some of my thoughts here on this, and you can agree with them or not, as you please.
Our Current Computers
First, you have to understand that to me, anything that could be running this web browser right now, that could be showing me my desktop, and that I could be typing into right now–I’d consider that a computer. It actually doesn’t matter what it’s doing underneath, or how it works. A good, offhand definition then of a computer would be:
Any piece of matter which can carry out symbolic instructions and compare data in assistance of a human goal.
Currently computers do this using math, which they do with transistors, digitally. The word “digital” comes from “digit”, a word meaning, basically, “fingers or toes.” For most people, your fingers and toes are separate, individual items. They don’t blend into each other. “Digitally” means, basically, “done with separate, individual numbers.” You know, like 1, 2, 3, 4–not 1.1, 1.2, 1.3. You can hold up 1 or 2 fingers, but not 1.5 fingers.
Said another way, current computers change from one fixed state to another, very fast. They follow instructions that change their state. I don’t really care if we say that the state is “transistor one is on, transistor two is off, transistor three is on…” or “Bob has a cat, Mary has a dog, Jim has a cat…” it’s all a description of a state. What we care about ultimately is the total current state. If there are 1,000,000 possible states (and there are far, far, far more in a current computer), then we can say that “we are at state 10,456” and that’s all we really need to know.
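(If you like seeing this concretely, here is a toy sketch of that idea. It is my own illustration, not a description of any real hardware: the whole machine is just a number identifying its current state, plus instructions that move it to the next one.)

```python
# Toy illustration of "a machine that changes from one fixed state to another."
# The entire machine is described by a single state number; each step of the
# program is just an instruction that says which state comes next.

def run(transitions, state, steps):
    """Follow the transition table for a given number of steps and return the final state."""
    for _ in range(steps):
        state = transitions[state]  # one instruction: jump to the next total state
    return state

# A machine with three possible total states, cycling 0 -> 1 -> 2 -> 0.
toy_machine = {0: 1, 1: 2, 2: 0}
print(run(toy_machine, state=0, steps=7))  # prints 1: "we are at state 1" is all we need to know
```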
The problem with current computers is the pace of Moore’s Law. We don’t need computers that are twice as fast. We need computers that are about a million times faster than the ones we have. We need computers so fast that software engineers never have to worry about performance ever again, and can just design their code to be the sanest, most maintainable thing possible. With that kind of performance, we could design almost any software imaginable.
The problem is that with Moore’s Law, we’re not going to get computers 1000 times faster for about 20 years. We’re going to get to 1,000,000 times faster in about 40 years. And there’s a chance that the laws of physics will stop us dead in our tracks before that point, anyhow.
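(The rough arithmetic behind those numbers, assuming speed doubles about every two years:)

\[
2^{10} \approx 1{,}000 \;\Rightarrow\; 10 \text{ doublings} \approx 20 \text{ years},
\qquad
2^{20} \approx 1{,}000{,}000 \;\Rightarrow\; 20 \text{ doublings} \approx 40 \text{ years}.
\]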
Future Computers
So, let’s stick with the idea of a machine that changes states for the near future, because I can’t think of any other clever way to make a computer that would follow my definition from above. There are three problems, then:
- How many states can we represent?
- How many physical resources does it require to represent all those states (including space, power, etc.)?
- How quickly can we change between states?
And then “How many states can we represent at once?” might also be a good question–we’re seeing this come up more and more with dual-core and quad-core processors (and other technologies before that, but I don’t want to assume that everybody reading my blog is an expert in hardware architecture, and I don’t want to explain those technologies).
So the ideal answers are:
- We can represent an infinite number of states.
- It requires no physical resources to represent them.
- We can change between them instantly, taking no time at all.
And then also “We can represent an infinite number of different states at once.”
Currently we theoretically could represent an infinite number of states–we’d just have to keep adding more transistors to our chips. So really, the question becomes, “How many states can we represent with how many physical resources?” Currently we can fit two states into 32 nanometers. (That’s one transistor.)
My suspicion is that the future is not in fitting two states into a continually smaller space, but in fitting a near-infinite number of states into a slightly larger space. Electricity and other force waves can be “on” or “off”, but they also have lots of other properties, many of which are sufficient to represent an infinity (or near-infinity). Frequency of any wave represents an infinity, for example–you can have a 1 Hz wave, a 1.1 Hz wave, a 1.15 Hz wave, a 1.151 Hz wave, etc. So, that basically answers Question 1 ideally–you can have an infinite number of states, you just have to have some device which is sufficiently small, in which a wave can have its properties modified and measured by electronics, optics, or some other such technology.
You’ll notice that we’ve also conveniently answered our bonus question: once each individual component of our system can represent an infinity all by itself, we can represent quite a few different states at once.
If we want to look a bit further into the future, our second question can be answered by the fact that waves take up essentially no space (only the medium they vibrate in takes up space). Our understanding of physics is not (as far as I know) currently good enough to create structures out of pure force just yet, but such structures would come quite close to taking up “no physical resources.”
And beyond that (how we get the state changes to happen in no time at all), I have no idea. That question may be unanswerable, and may only be resolvable by changing computers into something other than mathematical devices. (That is, not involving states at all, but using some other method of following instructions and comparing data.) But the better our components become, the closer we can get to “no time.”
The Roundup
So those are my thoughts for the day on the future of computing. Sometimes designing software for performance is a necessary evil (but really only where it’s an extreme issue, like with Google’s products, the great new need for speed in JavaScript nowadays, or other low-level places), but I hope that future changes in the fundamental architecture of computers will make that necessity obsolete.
-Max
I’m not so sure designing for performance necessarily means driving up complexity. When you look at Google and their products in the grand scheme, they all exhibit exactly one thing – simplicity. Even internally, they do not rely on huge expensive SANs and databases; they “simply” developed their own distributed filesystem to run on top of cheap clusters. Is this complexity or simplicity?
Given how Netscape’s JavaScript, originally a scripting bastard child of Java, has effectively pushed client-side Java completely out of the picture, it makes sense to do this now, much the same as we saw when Java went from purely interpreted execution to JIT compilation. The complicated thing comes only when you start to mix and aggregate features.
I prefer the CLR over the JVM, for instance, because the former is designed under some nice and simple criteria, some of which are:
– It is never interpreted (always JIT’ed)
– There’s no class loader hierarchy (once a class is loaded, it’s there for good)
– Not tied to a particular language (it has a very low-level bytecode).
Many of these were also used in Google’s Android VM. So I don’t think, in this particular case anyway, we have much to fear. Besides, it might actually make other aspects simpler, one of them most certainly being security.
You know, I think you actually make a good point. I usually design with that assumption–that simplicity will lead to performance and security. I suppose it’s that I object to the general notion that “performance” or “security” come first and “simplicity” comes second, instead of the other way around. I’d love to see people advertise that the good qualities of their system come from its essential simplicity, but I suppose that might not make the best marketing.
-Max
There’s one small problem here. Waves do in fact take up space: to talk usefully about a wave you have to have a space of order wavelength over 2. To separate close-by frequencies, you have to be able to detect the beat frequency, which is the difference of the two frequencies. So in fact there are some obvious physical limits for how big your device has to be once you decide on your operational frequency range and resolution.
So the only way to get a large number of distinct wave states into a small device is to make sure that your wavelength is pretty small. In fact, if you want to fit in N different wave states, and assuming that your device boundaries are fixed closed (allowing them to vary from open to closed changes things by a constant factor of about 4), you need your biggest wavelength to be twice the device size and your smallest wavelength will then be 2/N times the device size.
I’m assuming we’re talking electromagnetic waves here, and for those energy scales as 1/wavelength. So the states have energies proportional to things from 1/(2*size) to N/(2*size). Now you want a large N and a small size… which means very high energies and worse yet high-energy transitions (which are likely to be lossy).
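In symbols, with a device of size L and the closed boundaries assumed above (this is just restating the scaling already described, nothing new):

\[
\lambda_n = \frac{2L}{n}, \quad n = 1,\dots,N,
\qquad
E_n = \frac{hc}{\lambda_n} = \frac{n\,hc}{2L},
\]

so the state energies run from \(hc/(2L)\) up to \(N\,hc/(2L)\), growing linearly with the number of states you want to pack in.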
It’s that same physics that ends up biting us with Moore’s Law.
That’s a really good point. So our scaling is still limited in terms of physical space. I figured that would always be a limit–that’s sort of the nature of the universe. However, at least we’d have an infinite number of states in that space, instead of just two. If we could have an entire processor in the space that we currently fit a transistor, I don’t think Moore’s Law would fundamentally be a problem anymore.
-Max
To get a large number of states in a small size would, as I said, require each state to have a very high energy. Then you run into the problems of storing that much energy in that small a space without leaking, as well as, of course, creating the energy density to start with.
At current transistor sizes we’re not talking terribly high energies for N == a few. In fact, we’re talking basically visible light energies. But once N == 1000 we’re talking hard X-rays…
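Putting rough numbers on that, taking a device of roughly the 32 nm size mentioned in the post (a back-of-envelope estimate only):

\[
E_n = \frac{n\,hc}{2L} \approx n \times \frac{1240\ \text{eV·nm}}{64\ \text{nm}} \approx 19\,n\ \text{eV},
\]

so for N equal to a few the energies are tens of eV (ultraviolet territory, in the general neighborhood of visible light), while N = 1000 gives roughly 19 keV, which is hard X-ray territory.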
Sure. There are definitely a lot of technical problems that would have to be solved. It’s one of those things that wouldn’t be very operational for the first few years of research and production–it’d have a ways to go before it could actually catch up to current transistor technology.
-Max
I think your allegations about Google are rather unfair, since Google is famously built on the concept that hardware is slow and unreliable. Hence they build everything distributed and fault-tolerant, building the speed and reliability into the software rather than relying on any piece of hardware. Chrome arguably does this too, since the use of processes could mean that in the future each tab could be running on multiple remote machines, giving excellent performance even on mobile devices (since the number crunching would be taking place elsewhere) and high reliability (since one machine exploding would not affect the rest, which would carry on where that one left off). Erlang is an interesting language in this respect, allowing more and more machines to be thrown at a problem until it gets solved.
In regards to the waves-as-storage idea, I’m afraid I feel compelled to carry on where Boris left off and show its physical impossibility. Although the argument for packing more and more waves into a space seems to make sense and would work in a classical Universe, the difficult fact to swallow is that the Universe doesn’t really follow common sense and that classical physics is wrong. Both quantum physics and relativity can destroy your argument in a few different ways (all of which are fundamental properties of the Universe, ie. you can’t just use a better microscope; you’d have to make a different Universe held together by different physical laws):
Firstly, there would be a limit to how tiny your wavelengths could be due to the Planck length, the size at which trying to read information (either stored or computed) is impossible since it creates black holes which consume the information, matter, energy and everything else in the area. At this length the Schwarzschild radius (below which a black hole is formed) for any particle used to read the information is the same size as the particle, thus it all essentially ceases to exist, including any waves you try to make.
Secondly, the smaller you make your wavelengths the higher the energy they must have as Boris has already said (just like ultraviolet radiation has more energy than infrared). Since E=mc^2 this energy has mass. Once your waves have a large enough amount of energy their mass will be so great that your computer will collapse into a black hole. This upper limit on energy actually coincides with wavelengths of the Planck length, see above 🙂 (Physics fits neatly together like that)
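A rough way to see that coincidence (order-of-magnitude only, dropping numerical factors): a wave of wavelength \(\lambda\) carries energy \(E \sim hc/\lambda\), and that energy has a Schwarzschild radius

\[
r_s = \frac{2GE}{c^4} \sim \frac{2Gh}{\lambda c^3};
\]

setting \(r_s \sim \lambda\) (the wave sits inside its own black-hole radius) gives \(\lambda \sim \sqrt{Gh/c^3}\), which is the Planck length up to factors of order one.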
There would also be a maximum wavelength possible since for a wave to exist in your computer it would need to have at least half its wavelength (one peak or one trough) fit inside the computer. As Boris says, your waves could be no bigger than twice the size of your computer. Without this you can’t have the standing wave you need (just try picking up one end of a rope slowly, then put it down again. You don’t get a wave, you just move the rope).
Since there are limits on how big and how small your wavelengths could be, you might think about making smaller adjustments to the amount of energy each wave has, cramming more into the space between biggest and smallest. However, here you would come across the uncertainty principle between energy and time. The briefer the lifetime of the wave, the less certainly you can ever know what energy it had. This means that to cram the wavelengths (energies) closer and closer together, they would need to hang around for longer and longer in order for them to be distinguished from each other. If you wanted to make them infinitely close to each other then it would take you an infinite amount of time to read a bit.
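In symbols, that is the standard energy–time uncertainty relation:

\[
\Delta E \,\Delta t \gtrsim \frac{\hbar}{2},
\]

so distinguishing states whose energies differ by \(\Delta E\) takes a measurement time of at least roughly \(\hbar/(2\,\Delta E)\); letting \(\Delta E\) go to zero means the readout time goes to infinity, exactly as described above.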
Quantum mechanics actually shows that the wavelengths you could use occur at discrete energies (ie. energy exists in certain quantities, hence the term quantum). Therefore to put more and more waves into your computer you’d have to step higher and higher up this ladder of possible energies, forcing you to reach the uppermost limits I mentioned first where your computer would turn into a black hole.
Particle-wave duality shows that every wave is also a particle and that every particle is also a wave. Attempting to store more and more waves in a computer is equivalent to attempting to store more and more particles in the computer. Pauli’s exclusion principle means that no two fermions (eg. electrons) can occupy the same quantum state (ie. be in the same place with the same energy, the same angular momentum, the same spin, etc.). This principle is the reason that solid things cannot pass through each other and is the basis for all chemistry, and would mean that your computer could only fit a certain number of electrons inside (since there are only two spin states, the angular momentum is limited by the energy, the space is constrained to fit inside your computer and the energy has an upper limit as I said above). This does not prevent you from using bosons (eg. photons) instead, which can exist in the same state as each other. However, this would still be constrained by Bose-Einstein statistics, which says that two particles/waves in the same state are indistinguishable from each other and thus contain the same information (ie. adding more waves wouldn’t add any more computational power or storage capacity once every state has been filled).
Attempting to contain your waves inside your hypothetical computer would also be impossible, due to quantum tunnelling. As I said before, there is always uncertainty in a wave’s energy unless it stays at that energy for an infinite amount of time. This means that your waves might have enough energy to get out of your computer, thus statistically they manage to (the longer you leave them in there the more definite their energy becomes, but this is exactly counteracted by the longer opportunity they have to break free). Any confinement you try to use, no matter what super future materials or force fields, all it can ever do is change the probability of where your waves/particles are. Part of your waves will always exist outside the confinement and thus be useless to the running of your computer (this leaking of electrons is what makes CPUs get hot). The more energy you give to your waves/particles the more chance they’ll tunnel through your confinement, and the tiny probability of escape becomes significant when huge numbers are involved (in your ideal machine it would be infinite).
Essentially the electronics we use now could do pretty much what you describe with capacitors to store the electrons (which are waves). If we attempted to use visible light to do the same thing then we’d be using wavelengths of about 5*10^-7 metres. The wavelength of atoms is around 1*10^-10 metres. Gamma rays would allow wavelengths of around 1*10^-11 metres. Electrons can be measured accurately to about 2*10^-12 metre wavelengths; below this wavelength the amount of energy is so great that it would create another electron (of mass m=E/(c^2)) and thus any calculation being performed or bit of information being stored would be lost. The same thing happens when trying to separate quarks from one another, since they’re so light that even pulling them apart needs enough energy that it ends up making another quark, which then sticks to the one you were trying to get on its own, thus leaving you with two lumps of quarks and none on their own, hence they couldn’t be used in a computer since they couldn’t be isolated to find out what they’re storing/calculating. This means that for sheer information density, electrons are the way to go because they need less room than anything else and are still measurable. Such a machine would be formidable compared to today’s electronics, but not too different. It would still be finite in what it could do, however. The fastest computer would be a black hole, the limit you would reach with short wavelengths. The computational power of a black hole
There’s an interesting and not-too-difficult paper on the subject here http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.6055 covering potential upper-limits on ‘information flux’ (ie. data transfer speed), information storage density, sustained computational speed and the stability of stored information. You may also be interested in reading up on Shannon and information theory.
Hey, thanks for all the information! 🙂 That was very detailed, and yet quite readable.
I agree that my ideal machine may be impossible to achieve, but I also agree that a wave-based machine has the potential to be quite formidable compared to what we currently have, even if it’s not perfect.
I’ve read a bit of Shannon’s work (or perhaps summaries of his work, I forget) in the past. Definitely interesting.
As far as the black hole bit goes, my understanding is that that is a mathematical theory at this point, yes? That is, can we actually reproducibly create black holes in that fashion?
-Max
It seems that your ideal machine would work in a world governed by classical physics, but in such a world we could also keep making transistors smaller and smaller forever, and never need to change the technology we use. Our world is quantum, though, and since this is the problem facing transistor-based technology, the same constraints must also apply to this wave model of computing.
If we think of a wave-based computer with different waves for different states, then this is a classic “potential well” problem in quantum mechanics (and usually the first problem looked at when teaching the subject). For a simplified version see http://en.wikipedia.org/wiki/Particle_in_a_box and for a more realistic version see http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/pbox.html .
Essentially only a certain number of wavelengths will fit in any space (like here http://upload.wikimedia.org/wikipedia/en/a/a8/Particle_in_a_box_wavefunctions.png ) and thus only certain frequencies and energies can exist in that space. When the confinement is not infinitely strong (which is the simplification used initially) then some of the wavefunction can leak out. If another box is reached before the particle runs out of energy then it can tunnel to that other box (eg. the bottom part of this http://www.nanoscience.com/education/i/wavefunc.gif ). The amplitude of the wavefunction essentially gives the probability that you’ll find the particle there.
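For the textbook infinite-well case linked above, the allowed states of a particle of mass \(m\) in a box of width \(L\) are (standard result, quoted here just for scale):

\[
\lambda_n = \frac{2L}{n}, \qquad E_n = \frac{n^2 h^2}{8 m L^2}, \quad n = 1, 2, 3, \dots
\]

so shrinking the box, or asking for more states, pushes the energies up quadratically.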
Since a real computer could not have infinitely strong confinement there would be a limit to the energy the waves could have before they would not be confined (eg. http://www.nextnano.de/nextnano3/images/tutorial/1Deffective_mass_vs_8x8kp/5nm_quantum_well.jpg ), and all of the time there would be losses as bits of your waves tunnel out of the computer into the surroundings (ie. you’ll need a processor fan 😛 ).
The fundamental point of all of this, however, is that you cannot fit an infinite number of waves inside a computer: you’ll always have the ground state, and then the more energy you add, the higher the potential number of states, until it all spills out of the top of the graph (ie. your waves are too energetic to be contained).
For sure. Basically, my thought process went like this:
1) If we were ternary instead of binary, would we get a speedup? Maybe.
2) How far can we extend that principle?
I don’t actually need an infinite number of distinguishable states to make a computer better than what we have right now. I just want the system that will give me the largest number of distinguishable states in the smallest space, as long as we’re going to use states as the “compare data” part of a computer.
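(For what it’s worth, here’s the information-theory bookkeeping on that first question: a symbol with b distinguishable states carries \(\log_2 b\) bits, and

\[
\log_2 3 \approx 1.58,
\]

so a ternary digit holds about 58% more information than a bit, and the payoff keeps growing, slowly and logarithmically, as you add more distinguishable states per component.)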
-Max
Oops, I meant to say at the end of the second-to-last paragraph that the computational power of a black hole has been calculated to be 10^51 operations per second for a black hole with a mass of 1kg. 1GHz is 10^9 operations per second, so that means a 1kg black hole would be about 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000 times the speed of our current computers.
Also, you may want to look into dimensional analysis (eg. http://en.wikipedia.org/wiki/Dimensional_analysis ). I say this since while some of your assumptions are wrong (for example there are not an infinite number of usable frequencies between, say 1Hz and 2Hz, as a quantum harmonic oscillator will show you), there are others which just don’t appear to mean anything to me. For example “create structures out of pure force” seems to be a meaningless sentence. Force, outside Star Wars, is a convenient name used to denote mass*length/(time^2). Disregarding for a moment the nonsensical word “pure”, how could a ‘structure’ be made out of a quantity of mass*length/(time^2)? It seems nonsense to me, like asking for the length of a ruler in Volts.
Interesting to know about the black holes.
Yes, the force bit is too far in the future to describe accurately. But some waves seem to need/have no medium, and yet can be modified en route, so there’s some possibility that computation could be done without a solid medium. It would still, at least theoretically, have some mass and take up some space, though.
-Max
Indeed a medium is not required. It does take energy to represent information, though. For example, filling a ZFS filesystem to its theoretical maximum size, even with the most efficient storage device that you could conceive, would take more energy to store the information than it would take to boil the oceans ( http://en.wikipedia.org/wiki/ZFS#Capacity ) 🙂
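The bound behind that kind of estimate is Landauer’s limit, the minimum energy needed to erase (or irreversibly set) one bit at temperature \(T\):

\[
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \text{J at room temperature},
\]

which sounds tiny until you multiply it by the astronomical number of bits a maxed-out 128-bit storage pool represents.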
Space-wise a black hole would probably be the smallest computer you could have, since it could exist beyond the Planck length. However getting the results of your calculations out would be a problem (it would rely on information being preserved and on Hawking radiation existing, both of which are theoretical and have no kind of evidence yet).
I disagree on “no kind of evidence” of Hawking radiation…
Numerical Evidence of Hawking Radiation (see section 4/page 7)
Also, a Slashdot article on the subject.