

posted by janrinok on Tuesday March 11 2014, @04:13PM
from the well-its-worth-a-try dept.

AnonTechie writes:

"Physicist proposes a new type of computing at SXSW (South-by-SouthWest Interactive), known as orbital computing. From the article:

A physicist from SLAC who spoke at SXSW Interactive has proposed using state changes in the orbits of electrons as a way to build faster computers. The demand for computing power is constantly rising, but we're heading toward the edge of the cliff in terms of increasing performance, both in the physics of cramming more transistors onto a chip and in power consumption. We've covered plenty of different ways that researchers are trying to keep Moore's Law going (the idea that the number of transistors, and thus the performance, of a chip doubles roughly every 18 months), especially the far-out efforts that dump traditional computer science and electronics in favor of magnetic spin, quantum states, or probabilistic logic.

A new impossibility might become possible thanks to Joshua Turner, a physicist at the SLAC National Accelerator Laboratory, who has proposed using the orbits of electrons around an atom's nucleus as a new means of generating the binary states (the charge, or lack of charge, that transistors use today to represent ones and zeros) we use in computing."
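
As a rough illustration of the concept, here is a toy Python sketch of a bit stored in which orbital an electron occupies rather than in stored charge. The OrbitalBit class and its pulse operation are purely hypothetical stand-ins for illustration, not Turner's actual scheme:

```python
# A toy model of the idea: a bit encoded in an electron's orbital state,
# flipped by a (hypothetical) pulse. The OrbitalBit class and its
# operations are illustrative assumptions, not Turner's design.

GROUND, EXCITED = 0, 1  # two orbital states standing in for binary 0 and 1

class OrbitalBit:
    def __init__(self) -> None:
        self.state = GROUND

    def pulse(self) -> None:
        """Flip the electron between the two orbital states."""
        self.state = EXCITED if self.state == GROUND else GROUND

    def read(self) -> int:
        """Read the bit out of the current orbital state."""
        return self.state

bit = OrbitalBit()
bit.pulse()
print(bit.read())  # 1
```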

 
  • (Score: 2) by Grishnakh (2831) on Wednesday March 12 2014, @10:46AM (#15294)

    Even here I see problems with our technological development. We've managed to develop incredibly fast and power-efficient CPUs and memory; however, our storage technology leaves a lot to be desired when it comes to holding all that "Big Data".

    Right now, we store data mostly in one of four ways:
    1) Hard drives, aka spinning rust-covered platters. This is a very mature technology capable of extremely dense storage, but it's a bit on the slow side, and individual drives fail often enough that we invented RAID (see the parity sketch after this list). Worse, because it's not "the next big thing", only a couple of companies in the world are left making them, largely in a third-world location prone to flooding. That's not a good strategy for meeting the world's data storage needs. Hard drives also aren't very power-efficient, and since they rely on moving parts, they can fail at any time.
    2) (NAND) Flash memory (aka SSDs when used in place of #1). This isn't all that mature or reliable, but it's very fast for reads and draws much less power than #1. It's "the next big thing", so lots of companies have jumped on the bandwagon, but NAND has fundamental problems: each cell survives only a finite number of write cycles, which makes wear-leveling algorithms necessary (see the sketch after this list) and imposes a fixed lifetime on the media, dependent on the number of writes. SSDs have also been known to die suddenly and mysteriously. You'd think something solid-state would be far more reliable than spinning platters, but that simply isn't so.
    3) Optical media, aka CD-ROM, DVD-ROM, and BD-ROM. This mostly just sucks. CD-ROM is too small to be very useful these days (700MB), DVD-ROM isn't much better (4.3GB), and even BD-ROM (50GB) looks tiny next to modern hard drives (1TB+). Read speeds are terrible, write speeds are worse, and the media is ridiculously fragile (no protective sleeve) and prone to early and/or random failure: one disc might last 10 years with no trouble while another is unreadable within months. The price per GB isn't very good, and for backups optical is just too expensive, unreliable, and small (a single backup can require dozens of discs or more). Discs like this used to be useful for distributing data, but with so much delivery moving to the network, people don't bother much any more. Even USB flash drives are generally far preferable in size, speed, reliability, and convenience (it's easy to modify data on a USB stick, not nearly as easy or fast on a DVD-RW).
    4) Tape (e.g. LTO-4, LTO-5, LTO-6). This is generally the preferred solution for backups: capacities are very large (one tape can hold at least the contents of a typical hard drive, maybe several), and the cost per GB is quite small (significantly less than actual hard drives). However, the speed is a bit on the slow side, reads and writes are sequential (you can't run rsync against a tape), and the drives are hideously expensive, so only larger businesses bother with them. Individuals and small businesses instead just use extra hard drives for backups, which isn't the greatest solution.
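
    As a concrete aside on the RAID point in item 1: parity-based arrays like RAID 5 get their redundancy from a plain XOR across drives, so any one lost disk can be rebuilt from the survivors. A minimal sketch, assuming equal-sized blocks; the names are illustrative, not any real controller's interface:

    ```python
    # A minimal sketch of RAID-5-style parity, assuming equal-sized blocks.

    def parity(blocks: list[bytes]) -> bytes:
        """XOR all blocks together to produce the parity block."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    def reconstruct(survivors: list[bytes], parity_block: bytes) -> bytes:
        """Rebuild the single missing block: XOR of parity and all survivors."""
        return parity(survivors + [parity_block])

    # Three data blocks on three drives, parity stored on a fourth.
    d0, d1, d2 = b"spin", b"ning", b"rust"
    p = parity([d0, d1, d2])

    # The drive holding d1 dies; XOR recovers its contents exactly.
    assert reconstruct([d0, d2], p) == d1
    ```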
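
    And on the wear-leveling point in item 2: because each NAND block survives only a finite number of erase cycles, the drive's firmware steers writes toward the least-worn blocks. A toy sketch under that assumption; FlashBlock and ERASE_LIMIT are illustrative, not a real flash translation layer's API:

    ```python
    # A toy dynamic wear-leveling allocator, not a real FTL interface.

    ERASE_LIMIT = 3000  # assumed per-block endurance, typical order for MLC NAND

    class FlashBlock:
        def __init__(self, block_id: int) -> None:
            self.block_id = block_id
            self.erase_count = 0

    def pick_block_for_write(blocks: list[FlashBlock]) -> FlashBlock:
        """Steer each write to the least-worn block so wear spreads evenly."""
        candidate = min(blocks, key=lambda b: b.erase_count)
        if candidate.erase_count >= ERASE_LIMIT:
            raise IOError("all blocks at end of rated life")
        candidate.erase_count += 1  # each write costs an erase cycle here
        return candidate

    blocks = [FlashBlock(i) for i in range(4)]
    for _ in range(10):
        pick_block_for_write(blocks)

    # Wear is spread across blocks instead of hammering block 0.
    print([b.erase_count for b in blocks])  # [3, 3, 2, 2]
    ```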

    One thing all of these methods handle poorly is longevity. Could you put any of them in a box in the attic, find them 100 years later, and still read them without errors? I doubt it. We're amassing enormous amounts of data, but we're storing it on media with poor reliability and short lifetimes, which forces rotating backup strategies (one is sketched below) to avoid data loss. Instead of putting so much effort into faster CPUs, we should put more effort into better ways to store, access, and archive all the data we've created.
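
    For a sense of what a rotating backup strategy looks like, here's a sketch of the classic grandfather-father-son rotation; the tier rules below are one common convention, assumed here for illustration:

    ```python
    # A sketch of a grandfather-father-son rotation, one common rotating
    # backup scheme. The tier rules are an assumed convention.

    import datetime

    def backup_tier(day: datetime.date) -> str:
        """Classify which rotation set a given day's backup belongs to."""
        if day.weekday() == 6 and day.day <= 7:
            return "grandfather"  # first Sunday of the month: monthly full, kept longest
        if day.weekday() == 6:
            return "father"       # other Sundays: weekly full, kept for a month
        return "son"              # weekdays: daily incremental, recycled each week

    print(backup_tier(datetime.date(2014, 3, 12)))  # son
    ```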
