
Computers in 2027

williamedwardscoder:

We all remember how fast computers were 15 years ago, right?  What might they be like in 15 years’ time?  I’ll take a stab at predicting it, and at what it means for programmers.

15 or more years ago I used a monster of a PC: a Pentium Pro at 200MHz with 384MB of RAM.

Could we, 15 years ago, have predicted the computers of today?  That’s like asking if we can predict the computers of 15 years from now, only with hindsight :)

Could we have predicted that machines now would be faster than 3GHz?  That entry-level laptops would have 4GB of RAM and servers 96GB?  Probably; we had Moore’s Law, after all.  We’d just have been predicting more of the same, only more of it.

Other parts of the puzzle are only slightly bigger jumps.  Multi-core is not so hard to imagine; it’s even easy to understand why: when you can’t scale up, you have to scale out.

Slightly harder to imagine predicting is how completely x86 (and its 64-bit rebirth) now dominates, when 15 years ago SPARC was the exciting thing.  Could you have predicted GPGPUs?  Perhaps that’s harder; then again, perhaps they aren’t exactly mainstream-useful.  ‘Vector processors’ were the high end 15 years ago too.

The only thing that’s much the same is disk speed.  Disk isn’t much faster now.  Did we imagine faster disks?  Yes we did; we’d have imagined them keeping up.  But in a way storage is faster today: it’s called SSDs, and if we’re lenient we can imagine that in just a couple of years it’ll be mainstream in platter-sized quantities.  Therefore, again, storage is perhaps predictable, even if it’s solved using new technology.

Still, I think you could have seen it all, roughly, if you were counting bogomips.  (Fun fact: I acquired the bogomips user account name on SourceForge.  I don’t use it for much, but it does sound cool.)

I wonder, though: if you’d called it exactly as it turned out, would you have been a bit underwhelmed?  Perhaps we hoped for more from progress?

My first computer was a ZX-81 with 4KB of RAM.  It was made in 1981; I got it second-hand a bit later, sans cassette player (meaning I had to type my games in before I could play them).  Things moved pretty quickly between the ZX-81 and the Pentium Pro.  They moved pretty quickly from there to the computers of today.

We might have hoped we’d be wearing computers by now.  I remember working on secret Motorola phone-in-a-watch projects and seeing other dream projects canceled.  Well, none of it ever happened.  The surf-tablet is great for consuming data, and your smartphone is seriously powerful (multi-core, catching up with the desktops while the desktops tread water, and managing it on a much better power story!); but where is our speech recognition and virtual reality?  The only thing we use the cloud for is storing things we’d otherwise have stored locally.

So where will computers be 15 years from now?  Let me make some foolish predictions:

First, big disks get bigger.  Not faster, only bigger.  Even tape gets faster.  Memory is arranged in a hierarchy, and there will be more and more disk at the bottom of it all.  Terabytes of it; petabytes on servers.

What will we use all this space for?  Really, if you imagine the 24-megapixel video we might be watching, how will you distribute it?  It’s for human consumption, and humans can only watch as much as they watch now, so your video player really doesn’t have infinite storage needs.  I think the biggest consumer of disk is going to be companies recording everything about you, especially your time in physical shops and online, so they can crunch it and detect that you’re pregnant.
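To put a rough number on that, here’s a back-of-envelope sketch in Python.  Every figure in it (frame rate, codec ratio, viewing hours) is just a guess for illustration:

```python
# Back-of-envelope: storage demand of 24-megapixel video.
# All the numbers here are guesses.
pixels = 24e6                 # 24 megapixels per frame
fps = 60.0                    # assumed frame rate
bytes_per_pixel = 3           # uncompressed 24-bit colour

raw_rate = pixels * fps * bytes_per_pixel   # bytes/second, uncompressed
compressed_rate = raw_rate / 200            # assume a 200:1 codec

hours_per_day = 4                           # assumed viewing time
daily = compressed_rate * 3600 * hours_per_day
yearly_tb = daily * 365 / 1e12

print("raw: %.1f GB/s" % (raw_rate / 1e9))                      # ~4.3 GB/s
print("compressed: %.0f Mbit/s" % (compressed_rate * 8 / 1e6))  # ~173 Mbit/s
print("a year of viewing: ~%.0f TB" % yearly_tb)                # ~114 TB
```

Terabytes a year, not petabytes; a handful of drives covers a decade of you watching things.  It’s the profiling databases that are bottomless.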

But the second thing about storage is the big deal: everything above the big-disk level is going to be memristors!  Memristors are a real game-changer: RAM that is persistent and fast.  Eventually as fast as L1 cache is today, effectively replacing it.  If HP don’t overprice it, they’ll be printing money.  And, excitingly, this will improve everything massively for normal computer use.  It means that unless you’re developing shopping-profiling software, you basically never see a disk/RAM divide.

What this means for us programmers is very interesting, though.  I think the current programming model will just extend to it; rather than thinking about RAM differently, you’ll simply omit the load/save part!  You don’t need a traditional disk-based B+tree database; your in-memory balanced tree is actually durable.  The tyranny of fsync is over!  Disk doesn’t exist outside the server room and long-term storage.  Most systems will not need spinning platters.
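To make the contrast concrete, here’s a sketch in Python.  The first half is what durability costs us today; the second half is hypothetical, of course (DurableTree is a made-up stand-in), but that’s the point: there’s nothing left to call.

```python
import os

# Today: durability means serialising state to a file and paying
# for fsync before you can believe the write happened.
def save_durably(path, data):
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # wait for the platter: the slow part

# Memristor world (hypothetical): the structure you mutate *is* the
# durable copy.  A plain dict stands in for a persistent balanced tree.
class DurableTree(dict):
    pass                       # imagine this living in persistent RAM

db = DurableTree()
db["key"] = "value"            # already durable: no file, no fsync
```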

Of course, there will still be traditional file systems and virtualisation of traditional transient RAM, just so today’s apps still run; but it’ll all be virtualised over memristors.  The fun and the performance are for those of us who give that up.

This raises interesting questions about program reliability.  Right now, operating systems don’t actually give you the RAM you’ve asked for until you use it.  Sometimes, rarely, they won’t actually have the RAM to give you when you do try to use it, and you die.  This fallacy in the OutOfMemoryException is my way of forgiving those who don’t write robust software ;)  The thing is, if you have no classic load and save steps and your working state is persistent, you suddenly have to be much more reliable.
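You can watch this happen today on Linux.  A sketch (the 64GB figure assumes a machine with less physical RAM than that, and the exact behaviour depends on the kernel’s overcommit policy):

```python
import mmap

# Ask for far more anonymous memory than the machine has.  Under
# Linux's default overcommit policy this can "succeed": the kernel
# hands out address space, not RAM.
size = 64 << 30                      # 64 GiB (assumed > physical RAM)
buf = mmap.mmap(-1, size)
print("allocation granted")

# RAM is only committed as pages are first touched.  Touch enough of
# them and the OOM killer kills the process outright; there is no
# tidy OutOfMemoryException to catch at that point.
for offset in range(0, size, mmap.PAGESIZE):
    buf[offset] = 1                  # fault in one page at a time
```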

We need much tighter coding practice; we likely need this tightness in the frameworks and even at the language level (someone please write Clojure in Python style?!), because the one person in the equation we trust least is the app programmer.

Maybe we’ll have enough memristor RAM for checkpointing systems (built into pauseless GC?).  Not such a big jump.  You can imagine Erlang/Mnesia moving seamlessly into this world.
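Here’s roughly what that looks like if you fake it by hand today (all the names are made up for illustration; in the memristor world the runtime or GC would do this for you, transparently):

```python
import os
import pickle

# Poor-man's checkpointing, 2012-style: snapshot the whole working
# state atomically, so a crash restarts from the last good state.
class Checkpointer(object):
    def __init__(self, path):
        self.path = path
        self.state = self._restore()

    def _restore(self):
        try:
            with open(self.path, "rb") as f:
                return pickle.load(f)
        except IOError:                # no checkpoint yet
            return {}

    def checkpoint(self):
        tmp = self.path + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump(self.state, f)
            f.flush()
            os.fsync(f.fileno())       # the very fsync we hope to retire
        os.rename(tmp, self.path)      # old or new, never half-written

cp = Checkpointer("world.ckpt")
cp.state["counter"] = cp.state.get("counter", 0) + 1
cp.checkpoint()
```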

Goodbye fsync! :)

As you might reasonably expect, Charles Stross has written on this topic once or twice. I don’t have much to say beyond what he’s already said, except that I’m a bit more optimistic about fusion power than he is, as I couldn’t help but be, given my long-standing involvement with the hobbyist fusion reactor community. Yes, there are people who build fusion reactors for fun; it’s the year 2012!

Hard physical limits:

Storage limits are far off. Some moderately fancy molecular nanotechnology (MNT) tricks can get storage density down to one bit per atom: on the order of a hundred billion terabytes per ounce. NAND-flash MicroSD cards are nowhere near this; we’ve got plenty of room.
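The one-bit-per-atom figure is easy to sanity-check (I’m assuming carbon here; the choice of element only changes the answer by a small factor):

```python
# Sanity check: one bit per atom, for an ounce of carbon.
AVOGADRO = 6.022e23      # atoms per mole
MOLAR_MASS = 12.011      # grams per mole (carbon: an assumption)
OUNCE = 28.35            # grams per ounce

atoms = OUNCE / MOLAR_MASS * AVOGADRO    # ~1.4e24 atoms
terabytes = atoms / 8 / 1e12             # bits -> bytes -> TB
print("~%.1e TB per ounce" % terabytes)  # ~1.8e+11 TB
```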

Computing limits are near. We hit the frequency limit back in 2006, and the physical feature limit is up next; after all, a wire can only be so thin: one atom wide. Barring any really surprising breakthroughs with qubits or reversible logic (read that article! It’s interesting as shit!), we’ll hit the buffers with transistor logic around 2030 and have to come up with something else.

Wireless communication limits are very close. WiFi is already exhausted in any moderately dense environment, as anyone who’s been to a trade show knows. Fancy MIMO tricks only go so far. Stross guesses a ceiling of 2 terabytes/second. You can only use so much frequency: the higher you go, the more complicated things get. And there’s a bunch of insanely thorny technical problems posed by line-of-sight laser networking, none of which anyone knows how to solve without liberal application of magic mature machine-phase molecular nanotechnology.
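The ceiling comes from Shannon’s channel capacity: throughput grows only linearly with bandwidth and logarithmically with signal power. A quick illustration (the spectrum and SNR figures below are my guesses, not Stross’s model):

```python
import math

# Shannon-Hartley: capacity = bandwidth * log2(1 + SNR).
def capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log(1 + snr_linear, 2)

# Suppose a cognitive radio could grab 100 GHz of usable spectrum at
# a generous 30 dB signal-to-noise ratio (both numbers assumed):
snr = 10 ** (30 / 10.0)             # 30 dB is a factor of 1000
c = capacity_bps(100e9, snr)
print("~%.2f TB/s" % (c / 8e12))    # ~0.12 TB/s, shared by everyone nearby
```

However much cleverness you add, those are the only two knobs, and one of them is a logarithm.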

But! That 2 TB/s number assumes cognitive radios with free rein over the entire radio spectrum. And, as anybody who’s been watching the legal knife-fight over white spaces knows, that is extremely unlikely. The ITU does not move with nimble speed, and the radio field is almost a perfect example of an industry dominated by entrenched players backed up with significant lobbyist punch. The cell phone companies are, nominally, on our side; but then they have the temerity to ask us to pay them money to use their spectrum, and of course they waste a lot of time fighting with each other.

So the cell phones of 2027 will have piles of storage and plenty of cores, but they won’t be a whole lot faster than today’s.
