posted by janrinok on Tuesday March 11 2014, @04:13PM   Printer-friendly
from the well-its-worth-a-try dept.

AnonTechie writes:

"Physicist proposes a new type of computing at SXSW (South-by-SouthWest Interactive), known as orbital computing. From the article:

A physicist from SLAC who spoke at SXSW Interactive has proposed using the state changes in the orbits of electrons as a way to build faster computers. The demand for computing power is constantly rising, but we're heading toward the edge of the cliff in terms of increasing performance - both in the physics of cramming more transistors onto a chip and in power consumption. We've covered plenty of different ways that researchers are trying to continue advancing Moore's Law - the idea that the number of transistors (and thus the performance) on a chip doubles every 18 months - especially the far-out efforts that take traditional computer science and electronics and dump them in favor of using magnetic spin, quantum states or probabilistic logic.

A new impossible might become possible thanks to Joshua Turner, a physicist at the SLAC National Accelerator Laboratory, who has proposed using the orbits of electrons around the nucleus of an atom as a new means of generating the binary states (the charge or lack of charge that transistors use today to produce zeros and ones) we use in computing."

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by Grishnakh on Tuesday March 11 2014, @04:24PM

    by Grishnakh (2831) on Tuesday March 11 2014, @04:24PM (#14823)

    I have a better idea: leave CPUs where they are except for continuing refinements in power consumption (today's desktop and mobile CPUs are getting better and better this way), and let's concentrate on improving software. Software engineering is a total disaster as a profession, and we could get far better performance and reliability by going through all our software engineering practices and all the code we've built up and optimizing it. For proof, just look at the complete clusterfuck that is the Win32 API. "That's legacy", some might say, but we're all still using that POS. What's the point of improving CPU performance if you're going to saddle it with a crufty mountain of crap code?

    I say we freeze CPU performance altogether, maybe even cut it back some, while improving energy efficiency (no point in wasting energy), to force the software profession to clean up its act.

    • (Score: 5, Insightful) by oodaloop on Tuesday March 11 2014, @04:33PM

      by oodaloop (1982) <jkaminoffNO@SPAMzoho.com> on Tuesday March 11 2014, @04:33PM (#14828)

      Yeah, all those physicists should stop their research and become software developers. Or, you know, maybe we as a society could do both.

      --
      Many Bothans died to bring you this comment.
      • (Score: 4, Funny) by Grishnakh on Tuesday March 11 2014, @04:54PM

        by Grishnakh (2831) on Tuesday March 11 2014, @04:54PM (#14848)

        They could work on something more useful, like better battery technology or artificial gravity.

      • (Score: 3, Insightful) by Boxzy on Tuesday March 11 2014, @05:21PM

        by Boxzy (742) on Tuesday March 11 2014, @05:21PM (#14876)

        But that's Grishnakh's point: while we allow bad programming to be an option, there WILL be no improvements.

        I cut my teeth on assembly; I dread to think what the result would be with 21st-century programmers forced to optimize, even with today's environments.

        --
        Go green, Go Soylent.
    • (Score: 5, Insightful) by maxim on Tuesday March 11 2014, @04:42PM

      by maxim (2543) <maximlevitsky@gmail.com> on Tuesday March 11 2014, @04:42PM (#14834)

      While I agree with you, this will not help with CPU-heavy calculations.
      Plenty of scientific and other CS-related tasks don't really use any OS/library services, but rather revolve around tightly looped algorithms that just need enormous processing power. Think ray tracing, for instance, or physics simulations, or optimization problems, etc.

      So I would say that you are both right and wrong at the same time. You are right that if we started to care about performance instead of bragging about 'How I implemented this in JavaScript', it would boost performance by a lot for most of the everyday tasks we do.

      But for CPU-heavy algorithmic tasks, only increases in CPU power can make things faster.
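
      To make the distinction concrete, here is a minimal illustrative sketch (hypothetical, not from TFA) of the kind of tight, compute-bound loop meant here: naive O(n^2) N-body accelerations. Almost all the time goes into arithmetic rather than OS or library calls, so cleaning up the software stack barely helps; only faster hardware or a smarter algorithm (e.g. Barnes-Hut) does.

      # Hypothetical sketch: a tight, compute-bound inner loop (naive O(n^2) N-body
      # accelerations). Runtime is dominated by floating-point arithmetic, not by
      # OS/library overhead, so only more CPU power or a better algorithm speeds it up.
      import math

      def accelerations(positions, masses, g=6.674e-11, eps=1e-9):
          """Acceleration on each body due to every other body (brute force)."""
          n = len(positions)
          acc = [[0.0, 0.0, 0.0] for _ in range(n)]
          for i in range(n):
              xi, yi, zi = positions[i]
              for j in range(n):
                  if i == j:
                      continue
                  dx = positions[j][0] - xi
                  dy = positions[j][1] - yi
                  dz = positions[j][2] - zi
                  r2 = dx * dx + dy * dy + dz * dz + eps  # softening avoids divide-by-zero
                  f = g * masses[j] / (r2 * math.sqrt(r2))
                  acc[i][0] += f * dx
                  acc[i][1] += f * dy
                  acc[i][2] += f * dz
          return acc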

      • (Score: 3, Funny) by Grishnakh on Tuesday March 11 2014, @05:01PM

        by Grishnakh (2831) on Tuesday March 11 2014, @05:01PM (#14856)

        Simple solution: a logic block needs to be added to the CPU to detect if the OS is running the Win32 API or other select bits of highly-ubiquitous crap code, and if that's detected, reduce the CPU clock to 1% of its maximum speed.

      • (Score: 4, Funny) by VLM on Tuesday March 11 2014, @05:08PM

        by VLM (445) on Tuesday March 11 2014, @05:08PM (#14863)

        "So I would say that you are both right and wrong in same time."

        Sounds like a quantum computing problem. Better call D-Wave.

      • (Score: 4, Informative) by wjwlsn on Tuesday March 11 2014, @06:05PM

        by wjwlsn (171) on Tuesday March 11 2014, @06:05PM (#14899) Homepage Journal

        Exactly... a physicist proposing this is probably not concerned with speeding up shitty consumer software and web services. He's more likely looking for significant improvements in the speed of scientific/numerical algorithms. With much higher speeds, we could stop using algorithms based on simplifying assumptions, and/or we could greatly improve the time/space/energy resolution of the algorithms we already have (e.g., shorter time steps, smaller grid meshes, bigger solution domains).
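
        As a back-of-the-envelope illustration of why raw speed matters for resolution (assuming, hypothetically, an explicit 3D solver whose stable time step shrinks along with the mesh spacing - the exact factors depend on the scheme):

        # Hypothetical cost model: in an explicit 3D simulation, halving the mesh
        # spacing gives 8x the cells and (under a CFL-like stability limit) roughly
        # 2x the time steps, so each refinement level costs about 16x the work.
        def relative_cost(refinement_levels, dims=3):
            cells = 2 ** (dims * refinement_levels)  # cell count per halving of spacing
            steps = 2 ** refinement_levels           # time steps roughly double per halving
            return cells * steps

        for level in range(4):
            print(f"halve the spacing {level} time(s) -> ~{relative_cost(level)}x the work")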

        --
        I am a traveler of both time and space. Duh.
    • (Score: 0, Insightful) by Anonymous Coward on Tuesday March 11 2014, @05:01PM

      by Anonymous Coward on Tuesday March 11 2014, @05:01PM (#14854)

      You can fix bad software with a faster CPU and more memory. You can have fast, flexible, and cheap software: pick any two.

      • (Score: 2) by Grishnakh on Wednesday March 12 2014, @10:53AM

        by Grishnakh (2831) on Wednesday March 12 2014, @10:53AM (#15299)

        Faster CPUs and more memory won't fix bad software; they'll just make it faster. Faster doesn't equal more reliable, and one huge problem I see with software these days is horrifically bad reliability. There are definitely some huge exceptions, such as the Linux kernel, but overall software has very poor quality. It's not helpful to have a faster CPU when all that does is make your program crash faster. Some reliable modules aren't all that useful when other modules are crap. My Android phone has a Linux kernel which I'm sure is very reliable, but the phone itself is crap because the software is so bad, with various crashes, freezes, pauses, etc., which I'm sure can be blamed on higher levels of software, or perhaps some lackluster closed-source device drivers (I'm thinking of the touchscreen driver here; many times it misses touches).

        Worse, rarely does anyone bother to go back and improve software, because they want to move on to the next new thing; everyone wants to do new development rather than bug-fixing, rigorous testing, comprehensive validation, etc. It's not just developers that are like this; companies are the same way: they want to sell some shiny new bug-ridden POS rather than fix the bugs in the POS they already sold you.

    • (Score: 2, Informative) by frojack on Tuesday March 11 2014, @05:03PM

      by frojack (1554) on Tuesday March 11 2014, @05:03PM (#14859)

      while improving energy efficiency (no point in wasting energy), to force the software profession to clean up its act.

      This is quite true: the layers upon layers of code that modern software is built upon often have humongous inefficiencies built in. Sometimes it's because the code has to make allowances for different processors, but a lot of the time it is just crappy code pushed into a library, which everyone then links in without giving it a second thought.

      But as to this story: using electron orbits to do computing? Isn't Schrödinger turning over in his grave at this point? (To say nothing of his cat...)

      The machinery needed to change an electron's orbit might just be a tweak of energy, but testing the state - well then, that gets tricky...

      --
      Discussion should abhor vacuity, as space does a vacuum.
      • (Score: 2) by TheLink on Tuesday March 11 2014, @10:49PM

        by TheLink (332) on Tuesday March 11 2014, @10:49PM (#14997)

        If we are looking for new ways to do computing, I think more research needs to be done on more fully understanding how single-celled creatures do stuff like create elaborate AND species-distinctive shells, and how they decide whether or not to reproduce (some don't if there is insufficient shell material for the daughter shell: http://biostor.org/reference/7123 [biostor.org] ).
        See also: http://www.ncbi.nlm.nih.gov/pubmed/24444111 [nih.gov]

        We examined shell construction process in P. chromatophora in detail using time-lapse video microscopy. The new shell was constructed by a specialized pseudopodium that laid out each scale into correct position, one scale at a time. The present study inferred that the sequence of scale production and secretion was well controlled.

    • (Score: 1) by technopoptart on Wednesday March 12 2014, @01:30AM

      by technopoptart (1746) <jamesNO@SPAMtheorangecrush.co> on Wednesday March 12 2014, @01:30AM (#15042)

      Amen, we have come a long way in our capability these last two decades. But we don't really understand or maximize the little silicon wonders a few gifted people manage to craft.

      We are more worried about "Big Data" and amassing it because 'we can or will ...some day... "analyze it"' or whatever. We are overflowing with 'data' but have no way to smartly compute it. Just keep amassing marketing (surveillance) data on everyone and every pet.

      • (Score: 2) by Grishnakh on Wednesday March 12 2014, @10:46AM

        by Grishnakh (2831) on Wednesday March 12 2014, @10:46AM (#15294)

        Even here I see problems with our technological development. We've managed to develop incredibly fast and power-efficient CPUs and memory; however, our storage technology leaves a lot to be desired for storing all that "Big Data".

        Right now, we store data mostly in one of 4 ways:
        1) Hard drives, aka spinning rust-covered platters. This is a very mature technology, fairly reliable and capable of extremely dense storage. It's a bit on the slow side, however, and not reliable enough on its own, which is why we invented RAID. Worse, because it's not "the next big thing", there are only two companies in the world left making the things, largely in a third-world location prone to flooding. This is not a good strategy for meeting the world's data storage needs. It's also not that power-efficient, and since it relies on moving parts, can fail at any time.
        2) (NAND) Flash memory (aka SSDs when used in place of #1). This isn't all that mature, and not all that reliable, but it's very fast for reads and much lower-power than #1. It's "the next big thing", so lots of companies have jumped on the bandwagon, but NAND has fundamental problems, namely a very finite number of write cycles, which makes wear-leveling algorithms necessary (a toy sketch of the idea follows at the end of this comment) and imposes a fixed lifetime on the media (dependent on the number of writes). SSDs have also been known to mysteriously die all of a sudden. You'd think something solid-state would be far more reliable than spinning platters, but this simply isn't so.
        3) Optical media, aka CD-ROM, DVD-ROM, and BD-ROM. This mostly just sucks. CD-ROM is too small to be very useful these days (700MB), DVD-ROM isn't much better (4.3GB), and BD-ROM isn't all that great either (50GB?) compared to modern hard drive sizes (1TB+). Read speeds are terrible, write speeds are worse, the media is ridiculously fragile (no protective sleeve) and prone to early and/or random failure (you might have a disc that lasts 10 years with no trouble, and another that's unreadable in a few months). The price per GB isn't very good, and for backup purposes it's just too expensive, unreliable, and small (requiring dozens of discs or more for a single backup). Discs like this used to be useful for distributing data, but with so much data delivery moving to the network, people just don't bother much any more. Even USB flash drives are generally far preferable, for physical size, speed, reliability, and convenience (it's easy to modify data on a USB stick; not quite as easy or fast on a DVD-RW).
        4) Tape (e.g. LTO4, LTO5, LTO6). This is generally the preferred solution for backups, as the sizes are very large (one tape can generally hold at least the capacity of a typical HD, maybe several) and the cost per GB is quite small (significantly less than actual HDs). However, the speed is a bit on the slow side, reads and writes are sequential (you can't rsync to it), and the drives are hideously expensive, so only larger businesses bother with them. Because of this, individuals and small businesses don't bother, and just use extra HDs for backups, which isn't the greatest solution.

        One thing that's bad with all these methods is longevity. Can you put any of these things in a box in the attic, then find them 100 years later, and still be able to read them without errors? I doubt it. We're amassing so much data, but we're storing it on media that has poor reliability and lifetime, which requires rotating backup strategies to avoid data loss. Instead of putting so much effort into faster CPUs, we should be putting some more effort into better storage methods to store and access and archive all this data we've created.
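
        To illustrate what the "wear-leveling" mentioned in item 2 actually has to do, here is a toy sketch (hypothetical; real flash translation layers also handle garbage collection, bad blocks, ECC, and much more) that simply steers every write to the least-worn erase block:

        # Toy wear-leveling sketch: map logical pages to physical blocks and always
        # place new writes on the least-erased block so write cycles stay even.
        class ToyFTL:
            def __init__(self, num_blocks):
                self.erase_counts = [0] * num_blocks  # wear per physical block
                self.mapping = {}                     # logical page -> (block, data)

            def write(self, logical_page, data):
                block = min(range(len(self.erase_counts)),
                            key=self.erase_counts.__getitem__)
                self.erase_counts[block] += 1         # each write eventually costs an erase
                self.mapping[logical_page] = (block, data)

            def read(self, logical_page):
                return self.mapping[logical_page][1]

        ftl = ToyFTL(num_blocks=4)
        for i in range(12):
            ftl.write(logical_page=0, data=f"rev{i}")  # rewriting one page still spreads wear
        print(ftl.erase_counts)                        # -> [3, 3, 3, 3]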

    • (Score: 0) by Anonymous Coward on Wednesday March 12 2014, @04:36AM

      by Anonymous Coward on Wednesday March 12 2014, @04:36AM (#15100)

      So you are saying we should all get rid of our fancy time-saving abstraction layers in favor of bare metal ultra optimised code for every application? Sure, we could do that. I assume you'll be willing to put your money where your mouth is and wait longer while paying one or two orders of magnitude more for your software and games than you do right now?

      No?

      I thought so.

      • (Score: 2) by Grishnakh on Thursday March 13 2014, @09:47AM

        by Grishnakh (2831) on Thursday March 13 2014, @09:47AM (#15852)

        I never said anything about writing bare-metal code. In fact, modern compilers often get better performance out of RISC processors than assembly programmers can. The problem isn't abstraction, it's crappy software. For proof, look at the Win32 API. There's lots more code out there just like that, or worse: layers upon layers of cruft and garbage, which no one bothers to look at and improve unless major flaws show themselves.

        For a great example, look at Windows Update in WinXP, IIRC. It wasn't apparent when WinXP was brand-new, but 10+ years later everyone started noticing how dog-slow Windows Update was. (There was a Slashdot article about this a few months ago, I believe.) The problem turned out to be that some moron programmer at MS used a horrible algorithm, so as more and more updates were applied to the system, the runtime got exponentially worse. That isn't a case of abstraction layers; it's a case of lousy programming, and if it can go undetected that long in the world's most popular OS, there's no telling how much other crap code lurks out there, slowing our computers down.
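
        Purely as an illustration of how this kind of thing bites (a made-up example, not the actual Windows Update code): an O(n^2) pairwise scan over the update list feels instant with 50 updates and slows to a crawl once the list grows into the thousands, while an O(n) version using a set stays fast - which is exactly why the flaw can hide for a decade.

        # Made-up illustration (not the real Windows Update algorithm): quadratic vs
        # linear "is this update superseded?" checks over a growing update list.
        import time

        def superseded_quadratic(updates):
            # For each update, scan the whole list looking for one that replaces it.
            result = []
            for u in updates:
                for v in updates:
                    if v["replaces"] == u["id"]:
                        result.append(u["id"])
                        break
            return result

        def superseded_linear(updates):
            # One pass to index who-replaces-what, one pass to look each update up.
            replaced = {u["replaces"] for u in updates}
            return [u["id"] for u in updates if u["id"] in replaced]

        updates = [{"id": i, "replaces": i - 1} for i in range(5000)]
        for fn in (superseded_quadratic, superseded_linear):
            start = time.perf_counter()
            fn(updates)
            print(fn.__name__, f"{time.perf_counter() - start:.3f}s")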

  • (Score: -1, Offtopic) by dyingtolive on Tuesday March 11 2014, @04:39PM

    by dyingtolive (952) on Tuesday March 11 2014, @04:39PM (#14832)

    And I apologize, and will gladly take the karma hit, but do you realize the best part of SN?

    Bennett's not here.

    • (Score: 1, Funny) by Anonymous Coward on Tuesday March 11 2014, @04:44PM

      by Anonymous Coward on Tuesday March 11 2014, @04:44PM (#14836)

      Shh, don't invoke him.

    • (Score: 1) by rts008 on Tuesday March 11 2014, @04:47PM

      by rts008 (3001) on Tuesday March 11 2014, @04:47PM (#14839)

      Pardon my ignorance, but just who is this Bennett, and why does it make a difference whether Bennett is here, or not?

      • (Score: 0) by Anonymous Coward on Tuesday March 11 2014, @04:50PM

        by Anonymous Coward on Tuesday March 11 2014, @04:50PM (#14844)

        You know that guy on SN that's been going around and posting long-winded rambles that are semi-on topic? Basically that, but getting article status.
        Very similar to the 'journalist' who 'found' Satoshi Nakamoto.

      • (Score: 4, Informative) by Anonymous Coward on Tuesday March 11 2014, @05:01PM

        by Anonymous Coward on Tuesday March 11 2014, @05:01PM (#14857)

        He just posted an article on the other site with a long-winded explanation of how he discovered some negative DNS cache entries on Comcast's DNS, didn't understand what they were, and went to great lengths to declare them signs of incompetence.

        The part where he bitched about the fact that the support line wouldn't let him talk to Comcast higher ups was rich too. He's basically some caricature of an old person, given just enough knowledge to be dangerous, and dead set on the idea that he's right.

  • (Score: -1, Troll) by Anonymous Coward on Tuesday March 11 2014, @04:42PM

    by Anonymous Coward on Tuesday March 11 2014, @04:42PM (#14835)

    Does anyone else get a raging boner hearing an Asian woman squeal when a dick is forcibly inserted into her anus?

  • (Score: 2, Interesting) by TestablePredictions on Tuesday March 11 2014, @05:02PM

    by TestablePredictions (3249) on Tuesday March 11 2014, @05:02PM (#14858)

    Ok, terahertz lasers are to be used as the gate control signal for switching electron orbital states. Fine. What atom-sized engineering primitive creates a terahertz laser beam out of an electron orbital state?

    MOSFETs work well because they are voltage controlled voltage sources. The input and output are of the same "type".

    • (Score: 2) by VLM on Tuesday March 11 2014, @05:11PM

      by VLM (445) on Tuesday March 11 2014, @05:11PM (#14867)

      SIMD - single instruction, multiple data. I'm guessing the architecture would look really weird: build 2 to the 32 "devices" and whack them with a single wave of light, and the one that's "on" is your answer.

      There's some weird crypto analogy where you build a cracker into Chinese radios, and when the light turns on, the winner turns the radio in for a huge reward, or something like that.
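
      A minimal sketch of that "one instruction, 2^N devices" intuition in conventional SIMD terms (hypothetical, with NumPy's vectorized operations standing in for the exotic hardware): evaluate one predicate over every candidate at once, then see which one "lit up".

      # Hypothetical SIMD-style brute-force search: one vectorized "instruction"
      # tests every candidate, and the index that lights up is the answer.
      import numpy as np

      candidates = np.arange(2**20, dtype=np.uint64)          # all 2^20 "devices"
      secret = 1234567
      target = (secret * 2654435761) % 2**32                  # value we are searching for

      lit = (candidates * np.uint64(2654435761)) % np.uint64(2**32) == target
      print(np.nonzero(lit)[0])                               # -> [1234567]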

  • (Score: 4, Interesting) by kodiaktau on Tuesday March 11 2014, @05:07PM

    by kodiaktau (615) on Tuesday March 11 2014, @05:07PM (#14862)

    I am not sure the original article actually specifies the process by which this takes place, which is frustrating. On the face of it, the concept that electrons have an "orbit" is patently wrong; electrons don't orbit. http://www.chemguide.co.uk/atoms/properties/orbitsorbitals.html [chemguide.co.uk] What those circles actually describe is the energy level of the electron. In fact, the Heisenberg Uncertainty Principle tells us we cannot know exactly how those work, so trying to implement this seems premature. Smoke and mirrors.
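
    For reference, the "energy level" picture can be made concrete with the textbook hydrogen-atom formula E_n = -13.6 eV / n^2 (a standard result, nothing specific to TFA's proposal): the "orbits" are really a ladder of discrete energies, and a transition between rungs absorbs or emits a photon carrying exactly the difference.

    # Textbook hydrogen energy levels, E_n = -13.6 eV / n^2 (standard physics,
    # not specific to TFA). Transitions between levels exchange photons whose
    # energy equals the difference between the two rungs.
    RYDBERG_EV = 13.6057

    def energy_ev(n):
        return -RYDBERG_EV / n**2

    for n in (1, 2, 3):
        print(f"n={n}: {energy_ev(n):+.3f} eV")

    # n=2 -> n=1 transition (Lyman-alpha): about 10.2 eV
    print(f"2->1 photon: {energy_ev(2) - energy_ev(1):.2f} eV")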

    • (Score: 3, Informative) by frojack on Tuesday March 11 2014, @05:33PM

      by frojack (1554) on Tuesday March 11 2014, @05:33PM (#14884)

      Setting an energy level seems pretty easy; it all comes down to adding energy at some point.

      Testing the energy level, now that can be problematic. Heisenberg and Schrödinger both have something to say about the issue.

      --
      Discussion should abhor vacuity, as space does a vacuum.
      • (Score: 4, Funny) by c0lo on Tuesday March 11 2014, @09:54PM

        by c0lo (156) on Tuesday March 11 2014, @09:54PM (#14983)

        Heisenberg and Schrödinger both have something to say about the issue.

        Yeah, don't know about others, but whenever I'm trying to distinguish which of them says what, the meaning of what they say is blown up. And the reverse: if one gets the meaning of what they say, the info on who said it is lost.
        To make matters worse, both being bosons, one doesn't do any better by attempting to involve Pauli.

      • (Score: 3, Informative) by stormwyrm on Tuesday March 11 2014, @10:40PM

        by stormwyrm (717) on Tuesday March 11 2014, @10:40PM (#14995)

        This article [extremetech.com] seems to give more technical detail about what's being discussed than TFA. The way I understand it, it's not a scheme for making actual qubits and a quantum computer but for storing and processing classical information, so the exact quantum state of the atoms involved doesn't need to be tested.

        --
        Give me six lines written by the hand of the most honest man, and I will find something in them with which to hang him.