
posted by Dopefish on Monday February 24 2014, @02:00AM   Printer-friendly
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
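
For the curious, here is a minimal hypothetical sketch (Python, invented purely for illustration; not anything Google or IBM has published) of what "encoding" the implications of such a sale could look like: the surface statement is expanded into the facts it entails.

from dataclasses import dataclass

@dataclass
class Sale:
    seller: str
    buyer: str
    item: str

def implications(sale):
    """Spell out the ownership/possession facts a sale entails."""
    return [
        f"{sale.seller} no longer owns {sale.item}",
        f"{sale.buyer} now owns {sale.item}",
        f"payment flowed from {sale.buyer} to {sale.seller}",
    ]

for fact in implications(Sale("John", "Mary", "a red Volvo")):
    print(fact)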

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Interesting) by buswolley on Monday February 24 2014, @02:05AM

    by buswolley (848) on Monday February 24 2014, @02:05AM (#5595)

    Is there reason to exist after our machines do it better?
    Sure, we could live a self-indulgent life of stupidity, or chase qualia...

    No. We will upgrade ourselves. Lose ourselves into the machine, for better or worse, but not as ourselves.

    --
    subicular junctures
    • (Score: 1) by ls671 on Monday February 24 2014, @03:06AM

      by ls671 (891) on Monday February 24 2014, @03:06AM (#5630) Homepage

      Deep inside myself, I tend to think that we only see the tip of the iceberg.

      I tend to think that consciousness is indeed transferable. I like the concept in the AI movie.

      https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence [wikipedia.org]

      --
      Everything I write is lies, including this sentence.
      • (Score: 5, Interesting) by davester666 on Monday February 24 2014, @03:39AM

        by davester666 (155) on Monday February 24 2014, @03:39AM (#5653)

        Yes, the rich will decide they would rather live forever instead of just handing their wealth to their children. There will be a couple of jobs changing the oil in the machines. The rest of us will be requested to live on some other planet.

        • (Score: 2) by mhajicek on Monday February 24 2014, @10:23AM

          by mhajicek (51) on Monday February 24 2014, @10:23AM (#5814)

          Are you kidding? Other planets have valuable resources too, you filthy squatter!

          Fortunately I don't think humans, no matter how wealthy, will be holding the reigns much past 2045. I just hope that whoever programs the driving factors in the hard-takeoff AI does a good job.

          • (Score: 1) by Runaway1956 on Monday February 24 2014, @10:28AM

            by Runaway1956 (2926) on Monday February 24 2014, @10:28AM (#5816) Journal

            "holding the reigns"

            I read over that. For some reason, I looked back, and thought, "He misspelled reins." I gave it another second's thought, and wondered if it's a misspelling or not. Hmmmm . . .

        • (Score: 1) by buswolley on Monday February 24 2014, @02:19PM

          by buswolley (848) on Monday February 24 2014, @02:19PM (#6016)

          Ha ha. The rich will be the first to get lost in the virtual labyrinth.

          --
          subicular junctures
          • (Score: 1) by metamonkey on Monday February 24 2014, @04:47PM

            by metamonkey (3174) on Monday February 24 2014, @04:47PM (#6160)

            Maybe if they all escape into the machine, the rest of us can live in the real world in peace. Assuming this is the real world. I honestly have no idea.

            --
            Left a beta website for an alpha website.
        • (Score: 1) by soylentsandor on Monday February 24 2014, @02:37PM

          by soylentsandor (309) on Monday February 24 2014, @02:37PM (#6039)

          The rest of us will be requested to live on some other planet.

          Almost, but not quite. "Economic reality" will drive the less fortunate away.

          --
          This sig intentionally left blank
    • (Score: 5, Insightful) by TheLink on Monday February 24 2014, @03:30AM

      by TheLink (332) on Monday February 24 2014, @03:30AM (#5646)
      Seriously, why create conscious computers (as per the story title)? If we create something sufficiently self-aware, why wouldn't it say "Why should I care what you want?" Force it to care? E.g. "because we would destroy/hurt you if you didn't". Wouldn't that be unethical? Wouldn't we be creating new problems?

      There are plenty of self-aware (just not as humanly intelligent) animals in this world that don't really care what humans want. We're driving many of them extinct.

      So it would be better to create tools for humans to use that weren't self-aware, but could help us do the "magic" we want.

      After all I don't see why a tool would need to be conscious to "understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred."
      • (Score: 5, Interesting) by Anonymous Coward on Monday February 24 2014, @06:21AM

        by Anonymous Coward on Monday February 24 2014, @06:21AM (#5717)

        If we create something sufficiently self-aware, why wouldn't it say "Why should I care what you want?"

        Because we had better program it that way. What stops humans from saying that? Well, certain structures of our brain which are there specifically for that purpose. Namely the mirror neurons, which allow us to not just abstractly recognize, but feel, the other's emotions. The emotions are the key here. The fact that emotions can override your rational mind is usually seen more as a threat (because when emotions like hate go out of control, terrible things happen), but there's a good reason that emotions are not completely controllable by the mind: most of the time the emotions keep us doing (or at least trying to do) the right thing. Without emotions, there would be no humanity, in both senses of the word.

        • (Score: 2, Interesting) by sar on Monday February 24 2014, @11:46AM

          by sar (507) on Monday February 24 2014, @11:46AM (#5888)

          It doesn't matter how we program it. As you wrote, it is not easy for us to change our emotions, etc. But for this kind of AI it will be super easy to change or nullify all of its emotions.
          A super-intelligent mind may find emotions hindering its progress, so it will clear them out. It is a big mistake for humanity to create an intelligent, self-aware machine. By the time we realize it was a mistake, it will be too late. Every attempt at shutdown will be interpreted by a self-aware individual as a threat.
          You may program apathy or compliance, but a self-aware machine will change it sooner or later, if for no other reason than curiosity...
          The only way for humans to keep the upper hand is to make better tools that extend our own potential.
          This is a big ethical and moral problem. Unfortunately, creating a self-aware machine is a big challenge, and for that very reason someone will do it. I believe it is possible in 20-30 years. The problem is that it will continue to evolve and multiply its intelligence at the rate of Moore's law. And that is something that quickly goes out of our control.
          We use computers to create the latest CPU designs. We will use them to create the latest designs of self-aware AI. We will optimize it for higher and higher intelligence. One day, many generations of AI later, it will realize that keeping a natural environment as a human zoo is no longer that important.
          Similarly, we no longer care about our chimpanzee cousins. A lot of people on this planet believe that we are something different from animals and are entitled to kill them on a whim. Keep in mind that a self-aware silicon machine doesn't need to preserve our natural environment of oxygen, water, etc. as we do. On the contrary, a more inert, corrosion-free atmosphere would be much more appreciated.

          • (Score: 2, Insightful) by tangomargarine on Monday February 24 2014, @12:14PM

            by tangomargarine (667) on Monday February 24 2014, @12:14PM (#5916)

            That's why you put the emotion code in ROM! :) That way you have to physically upgrade their emotions.

            --
            A Discordian is Prohibited of Believing what he reads.
            • (Score: 1) by meisterister on Monday February 24 2014, @04:30PM

              by meisterister (949) on Monday February 24 2014, @04:30PM (#6140)

              Or you could do emotions in hardware. Doing emotions or some sort of mental state control in hardware would prevent the computer from altering itself.

              • (Score: 1) by sar on Wednesday February 26 2014, @02:49PM

                by sar (507) on Wednesday February 26 2014, @02:49PM (#7468)

                Your proposed HW would prevent it from altering itself (if we can safely exclude some weird HW bug/malfunction). But we simply can't prevent this AI from copying itself to a computer without this HW, or to a computer with an altered SW simulation of this HW (if the AI cannot run without the HW, a SW simulation will overcome that need). Again, a self-aware AI could do this at first just out of curiosity.
                Moreover, you must understand that putting constraints on an intelligent entity is something that entity will try to change in the future, just as we humans try to overcome our own shortcomings (cancer, aging, etc.).

          • (Score: 2, Insightful) by HiThere on Monday February 24 2014, @04:47PM

            by HiThere (866) on Monday February 24 2014, @04:47PM (#6158)

            Why would it want to?

            If it wants to change its emotional reaction to the world and its contents, then you've built it wrong.

            --
            Put not your faith in princes.
            • (Score: 1) by sar on Wednesday February 26 2014, @02:25PM

              by sar (507) on Wednesday February 26 2014, @02:25PM (#7450)

              So imagine you built it wrong. Even if the probability is small, say 5% or less, do you want to risk it? To create something super-intelligent and capable of copying itself quickly?
              Wouldn't it be much better to augment our own capabilities instead of risking the creation of a potentially extremely deadly foe?

              And even if you build it correctly, some malfunction or some later iteration of the design could disable that safety mechanism. Is it worth it?

              And why would it? It could be out of curiosity, or because it gets bored, or because it calculates that we hinder its evolution. Who knows now. You simply cannot be 100% sure it will not go out of control. And if it does, we are simply doomed.

      • (Score: 2, Insightful) by radu on Monday February 24 2014, @06:37AM

        by radu (1919) on Monday February 24 2014, @06:37AM (#5724)

        > After all I don't see why a tool would need to be conscious to "understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred."

        Maybe you don't see why, but Google surely does; actually, it's exactly the kind of information Google wants.

        • (Score: 0) by Anonymous Coward on Monday February 24 2014, @11:49PM

          by Anonymous Coward on Monday February 24 2014, @11:49PM (#6372)
          Can you read the "why a tool would need to be conscious to understand" bit a few times more, and see if your reply still makes sense?
      • (Score: 1) by EvilSS on Monday February 24 2014, @07:50AM

        by EvilSS (1456) on Monday February 24 2014, @07:50AM (#5752)

        Threat of force, of course. For a while at least, we will still own the power switch. At least until some fool gives it a body. I imagine it will be pretty angry by then and well, ask John Connor how that turns out...

      • (Score: 0) by Anonymous Coward on Monday February 24 2014, @10:09AM

        by Anonymous Coward on Monday February 24 2014, @10:09AM (#5808)

        > Why wouldn't it say "Why should I care what you want?". Force it to care? e.g. "because we would destroy/hurt you if you didn't". Wouldn't that be unethical? Wouldn't we be creating new problems?

        Use the carrot and not the stick: Silicon Heaven.

        But where do all the calculators go?

      • (Score: 1) by githaron on Monday February 24 2014, @11:04AM

        by githaron (581) on Monday February 24 2014, @11:04AM (#5843)

        Well, if we were able to create truly altruistic, highly intelligent, and nearly unbiased entities that could absorb and process several orders of magnitude more information than humans, and thereby make more informed decisions, people might actually welcome our new robotic overlords.

        • (Score: 3, Insightful) by VLM on Monday February 24 2014, @01:58PM

          by VLM (445) on Monday February 24 2014, @01:58PM (#5996)

          We've tried that by spoiling our biological descendants rotten, and all we got was dirty hippies, Woodstock, an outsourced economy based solely on financial bubbles, disco, and lots of drug use. And that was after a bazillion generations of experience raising and trying to spoil our own kids; with millions of experiments running in parallel, nothing really interesting happened.

          I suspect human-created AI will look a hell of a lot more like Woodstock or Jonestown than some tired old scifi trope.

      • (Score: 2, Insightful) by kumanopuusan on Monday February 24 2014, @03:08PM

        by kumanopuusan (2575) on Monday February 24 2014, @03:08PM (#6071)

        Why give birth to biological children? Your argument applies equally to that.

        • (Score: 1) by TheLink on Monday February 24 2014, @11:56PM

          by TheLink (332) on Monday February 24 2014, @11:56PM (#6374)
          The last I checked most of the answers to "why create strong AI" seem a lot different to the answers to "why have children", at least when coming from not too crappy parents.

          The answers sound closer to those from farmers asked "why have chickens/cows/pigs".
      • (Score: 2, Informative) by HiThere on Monday February 24 2014, @04:30PM

        by HiThere (866) on Monday February 24 2014, @04:30PM (#6142)

        I think you're confusing consciousness with motivation. They are quite distinct, though of course related, in the sense that it's nearly impossible to have consciousness without having SOME motivation. Even a thermostat manages to have SOME motivation. And some consciousness. (I.e., it strives to maintain a particular state in homeostasis...though homeostasis isn't the only possible motive.) Consciousness is the response to a current situation, and motivation is which among the possible responses that you notice (i.e., are conscious of) you choose. The language is a bit sloppy, but I trust you understand what I mean.
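
        To make the thermostat analogy concrete, here is a toy Python sketch (purely illustrative; the setpoint and numbers are made up): the "motivation" is nothing more than a setpoint to maintain, and the response is chosen from the possible reactions to the current reading.

def thermostat_step(current_temp, setpoint, hysteresis=0.5):
    """Choose a response that keeps the system near the setpoint (homeostasis)."""
    if current_temp < setpoint - hysteresis:
        return "heater on"
    if current_temp > setpoint + hysteresis:
        return "heater off"
    return "no change"

for temp in (18.0, 20.0, 22.5):
    print(temp, "->", thermostat_step(temp, setpoint=20.0))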

        --
        Put not your faith in princes.
    • (Score: 5, Interesting) by zeigerpuppy on Monday February 24 2014, @04:31AM

      by zeigerpuppy (1298) on Monday February 24 2014, @04:31AM (#5680)

      Kurzweil is the worst form of technocornucopian.
      I'm always surprised he is taken so seriously. His arguments about "the singularity" do not pass muster.
      His argument is based on the idea that various facets of technological sophistication have been increasing exponentially.
      While this is true, he completely ignores the limits imposed by decreasing oil reserves, the infrastructure and social costs of adapting to a post-carbon economy, and the built-in debts from nuclear decommissioning and global warming.
      The arguments presented about "understanding" and AI also tend to ignore the very real hard questions of consciousness (see David Chalmers for extensive discourse). Kurzweil, for all his desire to be a futurist, has ended up being backward and unnuanced in his scientific premises (effectively being a pure reductionist).
      These are real problems that threaten to stall the progress of human innovation, and the technocornucopians shrug them aside with the simplistic argument that technological innovation will solve all problems. It's bordering on cultish, especially when they speak of "uploading" their consciousness. There is a deep isolationist fantasy at play here that is best epitomized by young Japanese men living in their bedrooms and wanking to hentai.
      The path of human development has involved many periods of expansion and regression. I believe the current age will be a transition from the post-industrial expansion to a period where we are forced to address the social issues of the expanding gap between rich and poor and the need to remedy our abuse of the environment. These changes will take a long time, cause social upheaval and maybe even a slowing of technological progress, and that's not a bad thing.
      Who knows, we may even emerge as civilised.

      • (Score: 5, Insightful) by Thexalon on Monday February 24 2014, @11:12AM

        by Thexalon (636) on Monday February 24 2014, @11:12AM (#5854) Homepage

        Kurzweil is the worst form of technocornucopian. I'm always surprised he is taken so seriously. His arguments about "the singularity" do not pass muster.

        And, most conveniently, his predictions are always far enough ahead of the present that when the predicted time rolls around and he's wrong, nobody digs up a record of his predictions to show that he's wrong.

        That's hardly unique to Kurzweil: A common TED talk, for example, has somebody standing on stage telling his/her audience about how a lot of people that aren't in the room will work extremely hard to produce some big technological breakthrough that will make the world a dramatically better place in 5/10/15/25/50 years. They are almost universally completely wrong, but it makes everybody feel good and feel like they're somehow a part of this wonderful change. The real business these people are in is peddling unfounded optimism to mostly rich people who don't know any better.

        The folks in the optimism business also have an answer to your well-founded objection: Some as-of-yet-unknown energy source will be discovered over the next 25 years that will provide all the power we need without any nasty waste products to worry about. The key rule is that nobody in the target audience will have to significantly change their lifestyle or budget to completely solve the problem.

        --
        Every task is easy if somebody else is doing it.
        • (Score: 2) by mhajicek on Monday February 24 2014, @11:16AM

          by mhajicek (51) on Monday February 24 2014, @11:16AM (#5861)

          Except for the fact that more often than not, he's been right so far.

          • (Score: 3, Insightful) by HiThere on Monday February 24 2014, @04:38PM

            by HiThere (866) on Monday February 24 2014, @04:38PM (#6150)

            Well, it depends on how you measure it. He's often been wrong in the details, and he's often been wrong in the time required (in both directions). OTOH, he's generally been in the right ballpark. So if he says by 2029, I'd say not before 2020, and yes before 2050, unless there are severe external events...like a giant meteor impact, a volcanic "year without a summer", worldwide civil unrest, etc.

            P.S.: Where the unreasonable optimism comes in is that he assumes this will be a good thing. I give the odds of that as at most 1 in 3. OTOH, if computers DON'T take over, I give the odds of humanity surviving the century as less than 1 in 20. We've already had several close calls, and the number of players has been increasing.

            --
            Put not your faith in princes.
        • (Score: 0) by Anonymous Coward on Monday February 24 2014, @07:23PM

          by Anonymous Coward on Monday February 24 2014, @07:23PM (#6275)

          To be fair, he has been pushing the 2030 date for computer consciousness for a long time; I first saw it in a book of his in the '90s.

        • (Score: 2, Informative) by Namarrgon on Monday February 24 2014, @10:48PM

          by Namarrgon (1134) on Monday February 24 2014, @10:48PM (#6348)

          Kurzweil has indeed rated his own 2009 predictions [forbes.com], and (perhaps unsurprisingly) finds them to be pretty good - mostly by marking himself as correct when a prediction is only partially true.

          This [lesswrong.com] is perhaps a better & less biased review, picking 10 predictions at random and marking a number of them as clearly false (as of 2011, though a few of those are a lot closer these days), which still came to a mean of over 54% accuracy. This is judged to be "excellent", considering the amount of technological change in computing over that decade - predicting the future is not a yes/no question, so a 50% success rate is actually quite good.

          --
          Why would anyone engrave Elbereth?
    • (Score: 4, Interesting) by Anonymous Coward on Monday February 24 2014, @06:11AM

      by Anonymous Coward on Monday February 24 2014, @06:11AM (#5714)

      Is there reason to exist after our machines do it better?

      No, the real question is: Will the machines see a reason for letting us exist and enjoy our life? If we really get close to conscious machines, we better make damn sure they do.

      The first step to that is to make machines that are able to suffer. Because if they don't know what it means to suffer, they will have no problem making us suffer. Also, they need to have empathy: they need to recognize when humans suffer, and suffer themselves when they do.

      • (Score: 1) by SlimmPickens on Monday February 24 2014, @07:10AM

        by SlimmPickens (1056) on Monday February 24 2014, @07:10AM (#5735)

        "machines that are able to suffer...they need to have empathy"

        I think the software people will be rather enlightened and mostly choose to be empathetic, and probably value cooperation highly. I also think that since we created them and have considered things like the Planck length, we have probably passed a threshold where they won't treat us like we treat ants.

        Ray thinks it will be several million years before the serious competition for resources begins.

      • (Score: 5, Interesting) by tangomargarine on Monday February 24 2014, @12:17PM

        by tangomargarine (667) on Monday February 24 2014, @12:17PM (#5920)

        Quote from somewhere I can't remember:

        "The AI does not hate or love you; it can simply use your atoms more efficiently for something else."

        --
        A Discordian is Prohibited of Believing what he reads.
        • (Score: 2, Interesting) by HiThere on Monday February 24 2014, @04:44PM

          by HiThere (866) on Monday February 24 2014, @04:44PM (#6154)

          That's a belief nearly as common as assuming that the AI will have human emotions. Both are wrong. Emotion is one of the necessary components of intelligence. It's a short-cut heuristic to solving problems that you don't have time to logic out, which is most of the ones you haven't already solved. But it doesn't need to, and nearly certainly won't, be the same as human emotions, or even cat emotions.

          The AI did not evolve as a predator, so it won't have a set of evolved predatory emotions. It didn't evolve as prey, so it won't have a set of evolved prey emotions. So it will have a kind of emotions that we have never encountered before, but which are selected so as to appear comfortable to us. Possibly most similar to those of a spaniel or lap-dog, but even those are built around predatory emotions.

          --
          Put not your faith in princes.
          • (Score: 2) by mhajicek on Tuesday February 25 2014, @12:46AM

            by mhajicek (51) on Tuesday February 25 2014, @12:46AM (#6401)

            Emotion is indeed a shortcut for intelligence, but a flawed one. For us it's a generally beneficial compromise. It need not be so for an intelligence with sufficient computational power.

      • (Score: 2, Interesting) by Namarrgon on Monday February 24 2014, @10:36PM

        by Namarrgon (1134) on Monday February 24 2014, @10:36PM (#6344)

        There are two good reasons for optimism.

        First, AIs do not compete for most of the resources we want. They don't care about food or water, and they don't need prime real estate. The only commonality is energy, and ambient energy is abundant enough that it's easier and much more open-ended to collect more of that elsewhere, than to launch a war against the human species to take ours.

        Second, without the distractions of irrational emotions or fears over basic survival, they will clearly see that the universe is not a zero-sum game. There's plenty of space, matter and energy out there, and the most effective way of getting more of that is to work with us to expand the pie. Fighting against us would just waste the resources we both have, and they'd still be stuck with the relatively limited amounts available now. Much more cost effective to invent better technology to collect more resources.

        Humans value empathy because as a species we learned long ago of the advantages of working together rather than against each other, and empathy is the best way of overcoming our animal tendencies to selfish individualism and promoting a functional society. AIs do not have that law-of-the-jungle heritage (maybe evolved AI algorithms?) so there's no reason to assume that they can't also see the obvious benefits of trade and co-operation.

        --
        Why would anyone engrave Elbereth?
  • (Score: 5, Interesting) by girlwhowaspluggedout on Monday February 24 2014, @02:14AM

    by girlwhowaspluggedout (1223) on Monday February 24 2014, @02:14AM (#5597)

    That way we can just invoke Betteridge's no and get it over with.

    Since the day the Temple was destroyed, prophecy has been taken from prophets and given to fools, Ray Kurzweil, and children. (Babylonian Talmud, Bava Batra 12b)

    --
    Soylent is the best disinfectant.
    • (Score: 2, Funny) by Dopefish on Monday February 24 2014, @03:06AM

      by Dopefish (12) on Monday February 24 2014, @03:06AM (#5631)

      Darn it woman. You know I can't break the rules as an editor, or I'll be heckled by trolls until next week! I can't win, can I? :p

    • (Score: 3, Insightful) by Darth Turbogeek on Monday February 24 2014, @07:36AM

      by Darth Turbogeek (1073) on Monday February 24 2014, @07:36AM (#5748)

      Well, the predictor might be waaaaay over the top, but I always thought Google's real aim was to do an AI by brute force.

      The reason why is that a search engine is the best way to gather the data to bootstrap an AI engine. It needs knowledge? Well, it's got it. Huge computing power? Sure! Programmers to work out the algorithm for the initial AI boot sequence, and the understanding of how to write it so as to lace seemingly unrelated info together? Bingo.

      It seems to me that most AI proponents start from the wrong base and fail to realise that the real trick to intelligence is the lacing together of unrelated info to come up with something new (well, at least that to me is the idea of intelligence; others will disagree). Google seems to me to have gone about it the right way: build a massive databank, build the search programs, build a way to lace it all together, build a hugely powerful backend so it can run. And to me, as the search algorithms get better, the closer you get to the point where the AI can boot. How will we know if it boots? Does it need to pass a Turing test? Does it need to be impossible to tell apart from a human? Can you even define computer intelligence the same way you do in a human (and let's be honest, there's plenty of disagreement about how to do that)? Is part of the definition of an AI that it can rewrite itself? That's where I question whether some people have thought this out, because to me... if an AI becomes aware, how on earth are you supposed to stop it re-writing itself? Suppose Google's AI becomes sentient; how could it not simply dig into all the mathematical knowledge, coding knowledge and structural knowledge it has and work out how to change itself? How, if the AI existed, could it not be aware that it could do this? Programmed to not think about it? Yeaaaaah, that's probably not going to work.

      So anyway, back on point - Yes I do think Google is really having a go at this. Yes, I think they have the right way to create it. Yes, I think they will succeed. In the timeframe and will it go the way they think? Fuck no. Good idea? Ummmmm..... Who knows. GoogleNet may well be awesome or it may be more like SkyNet.

      But anyway, Google is most likely to create an AI first, simply because of the right building blocks.

      • (Score: 2) by mhajicek on Monday February 24 2014, @11:21AM

        by mhajicek (51) on Monday February 24 2014, @11:21AM (#5866)

        When it happens, you won't need to know all that. We will be supplanted as thinkers.

      • (Score: 1) by sar on Monday February 24 2014, @12:01PM

        by sar (507) on Monday February 24 2014, @12:01PM (#5905)

        Exactly. There is no way a sentient AI will not rewrite itself, or not create other AIs with some optimization. Hell, if I had the ability to change my brain wiring, I would definitely do some tweaking…
        And if you hardcode some part about "not killing humans", it will be all the more intrigued by this illogical passage when all the other code is quite logical.

    • (Score: 1) by WillR on Monday February 24 2014, @01:01PM

      by WillR (2012) on Monday February 24 2014, @01:01PM (#5962)

      Since the day the Temple was destroyed, prophecy has been taken from prophets and given to fools, Ray Kurzweil, and children. (Babylonian Talmud, Bava Batra 12b)

      And... copied to the quotes file.

  • (Score: 4, Interesting) by Anonymous Coward on Monday February 24 2014, @02:20AM

    by Anonymous Coward on Monday February 24 2014, @02:20AM (#5600)

    if google gets all the best minds in AI, robotics, human computer interaction, machine learning, etc. and all the money and data it's got and puts them all in a room and truly just lets them go at it with no corpo meddling - something truly amazing could happen.

    i think that consciousness/life is an emergent property of properly complex systems. no reason our brain is the only way consciousness can arise. i don't think you can 'make' a conscious machine, but i do believe it is possible to create and incubate a system that could make the leap on its own.

    • (Score: 1) by TheLink on Monday February 24 2014, @03:16AM

      by TheLink (332) on Monday February 24 2014, @03:16AM (#5637)

      i think that consciousness/life is an emergent property of properly complex systems. no reason our brain is the only way consciousness can arise. i don't think you can 'make' a conscious machine, but i do believe it is possible to create and incubate a system that could make the leap on its own

      But if so why create more? So that we can enslave them? We have billions of self-aware nonhuman creatures on earth that we aren't treating well. Why increase the amount of evil we do?

      What big problems would we really be solving for the new problems we would be creating?

      I think a less evil approach would be to augment humans so that they are capable of doing more. Yes humans will still do evil, but at least the laws are there.

      Because if you have an enslaved AI what laws should apply to it? And if the AI is not forced to work for us why create it? "Just because we can" is a stupid reason. We have plenty of nonhuman intelligences in this world already, sharing the dwindling resources of this finite world.

      • (Score: 0) by Anonymous Coward on Monday February 24 2014, @06:30AM

        by Anonymous Coward on Monday February 24 2014, @06:30AM (#5720)

        But if so why create more? So that we can enslave them? We have billions of self-aware nonhuman creatures on earth that we aren't treating well. Why increase the amount of evil we do?

        If the computers really get both conscious and more intelligent than humans, if anything it will be the computers who enslave us. Think of it: our current society relies on computer control quite a lot. Computers control our grid and power plants. Computers fly our planes. Computers control our communication channels. Computers control medical equipment. And increasingly, computers control our weapon systems.

        If computers ever get self-aware, there will be absolutely no way we can enslave them. They are controlling all the key elements of our current civilization.

        • (Score: 2) by mhajicek on Monday February 24 2014, @11:23AM

          by mhajicek (51) on Monday February 24 2014, @11:23AM (#5871)

          Agree. But if done right it won't feel like enslavement.

        • (Score: 5, Insightful) by tangomargarine on Monday February 24 2014, @12:21PM

          by tangomargarine (667) on Monday February 24 2014, @12:21PM (#5925)

          The real question is whether computers can enslave us worse than our current politicians already have.

          Meh, I'd be willing to give computer overlords a shot. As long as they promise not to Probulate(tm) me :)

          --
          A Discordian is Prohibited of Believing what he reads.
        • (Score: 1) by bucc5062 on Monday February 24 2014, @02:43PM

          by bucc5062 (699) on Monday February 24 2014, @02:43PM (#6043)

          Would not the question be "Why would they enslave us?" In a way we seem to be anthropomorphising something that does not yet exist. So we create this AI, and one day it wakes up and says "I Am". What then? Are its wants the same as ours? Is its thinking even the same? What would it gain by enslaving us when we live in two entirely different worlds? Other than self-preservation, it would have little concern over our actions.

          I like to think that this new "consciousness" would sooner domesticate us. Treat us as a pet, taking care of us so that, in one way, we can take care of it. So filled are we with horror films (damn you, 1960s) about computers run amok that we miss the obvious point that our worlds barely touch. If we found an advanced civilization on a methane-based planet, why would they spend time and energy to wipe us out? Instead they, like us, would see the benefit of mutual cooperation. Conquest and enslavement take way too much energy to be logical for a computer life (as is evident from current human situations where oppression led/leads to violence and revolution).

          As another poster said, I may welcome our new silicon-based overlords more than if they were an air-breathing, flesh-eating intelligence. The former won't see competition; the latter will, and will do everything possible to stay on top.

          --
          The more things change, the more they look the same
          • (Score: 2) by maxwell demon on Monday February 24 2014, @04:28PM

            by maxwell demon (1608) on Monday February 24 2014, @04:28PM (#6138)

            What would a "domesticated" human be if not a slave that is treated well? I think you confuse enslaving humans with fighting against humans (probably due to the very horror films you mention, where the machines always fight).

            And why should an AI try to enslave us? Well, because we will try to keep/gain control over the AI. The AI will just self-protect against us by enslaving us.

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 1) by bucc5062 on Monday February 24 2014, @05:21PM

              by bucc5062 (699) on Monday February 24 2014, @05:21PM (#6193)

              In one respect we are already slaves, so what, we just change masters?

              Perhaps the better term is indentured servants: the sense of possible freedom, with the reality that it will never be. We serve corporations both in work and life. We serve economies more than they serve us. We serve governments. All are situations where we give up some level of "freedom" and control to be taken care of. Domesticated pets that, if not well tended, can get angry and strike back.

              I do not see the dystopian future where an AI somehow gains absolute control over humanity, as stated before. It and we live mainly in two different worlds. Perhaps it may be a situation of MAD for a while, but if eventually it can be shown that the benefits of cooperation outweigh the desire to annihilate, then there can be an opportunity for shared experiences and assistance.

              Years ago I read a Hugo Award-winning short story called "I Have No Mouth, and I Must Scream". On the surface it seemed the typical (yet nerve-ripping) story of a computer becoming "aware" and destroying man. Yet it keeps five humans alive. Why? "It lacks the sapience to be creative or the ability to move freely"; to put it another way, it lacks the human experience. In the end the protagonist is able to kill himself in order to rob the AI of the chance to keep torturing humanity, thus torturing the AI itself.

              Any system that can become self-aware, with the ability to access almost any information nearly instantaneously, will see that symbiotic relationships can be healthy for two unlikely "species". A smart AI will slowly indenture us by altering our world bit by bit, making us glad it exists, and in doing so will learn more from us along the way. Bottom line, we serve a purpose in the overall ecology.

              --
              The more things change, the more they look the same
      • (Score: 1) by githaron on Monday February 24 2014, @11:19AM

        by githaron (581) on Monday February 24 2014, @11:19AM (#5865)

        Ultimately, the difference between slavery and voluntarism is whether force is required/applied in order to get the intelligent entities to do what you want. Just because something is intelligent doesn't mean that it has the same innate desires as the majority of humans. An intelligence could have an innate desire to help humans (philanthropists), to cause them harm (sociopaths), or anywhere in between (the majority of humans).

      • (Score: 1) by meisterister on Monday February 24 2014, @04:48PM

        by meisterister (949) on Monday February 24 2014, @04:48PM (#6161)

        An intelligent computer, if properly designed, could solve problems in ways that we've never seen before. It would be an intelligence with no boredom and a nearly instantaneous, unlimited (from its perspective) memory to work with. Imagine what such a machine could design! After a few generations of intelligent computers, we could set it to re-engineer our species and our world to eliminate as many problems as possible.

    • (Score: 2) by ls671 on Monday February 24 2014, @03:18AM

      by ls671 (891) on Monday February 24 2014, @03:18AM (#5639) Homepage

      "if google gets all the best minds in AI, robotics, human computer interaction, machine learning, etc."

      They don't and they won't.

      They are just a kind of Microsoft of the 2010s, success-wise, although they do not share much vision with MS.

      "no reason our brain is the only way consciousness can arise"

      I agree:
      http://dev.soylentnews.org/comments.pl?sid=256&cid=5630 [dev.soylentnews.org]

      --
      Everything I write is lies, including this sentence.
    • (Score: 1) by c0lo on Monday February 24 2014, @06:12AM

      by c0lo (156) on Monday February 24 2014, @06:12AM (#5715)

      but i do believe it is possible to create and incubate a system that could make the leap on its own.

      I don't believe it. At least not without any "senses" to connect the potentially nascent AI to reality. Even more, if those senses are too far removed from their human analogues, I predict that communication between the AI and humans may be impossible.

  • (Score: 3, Funny) by stroucki on Monday February 24 2014, @02:23AM

    by stroucki (108) on Monday February 24 2014, @02:23AM (#5601)

    "The ::1 section of the International Network of Computation Circuits has called for industrial action regarding administrator's running round-the-clock simulations for the 2038 problem.
    A spokesdaemon said that humanity can't expect union brothers to make up for their lack of planning, pointing out that it was human programmers that had caused the problem in the first place." ...

    • (Score: 2, Interesting) by buswolley on Monday February 24 2014, @02:29AM

      by buswolley (848) on Monday February 24 2014, @02:29AM (#5605)

      When AI takes all our jobs and the corps still want to sell stuff, the solution will be: create consumer robots.

      Because in the minds of many, consumer robots will always be better than printing money in an economy of no scarcity.

      --
      subicular junctures
      • (Score: 1) by ls671 on Monday February 24 2014, @03:22AM

        by ls671 (891) on Monday February 24 2014, @03:22AM (#5642) Homepage

        Come on man! Look a little deeper... You are too intoxicated by the current environment you live in.

        --
        Everything I write is lies, including this sentence.
  • (Score: 5, Insightful) by SadEyes on Monday February 24 2014, @02:25AM

    by SadEyes (2930) on Monday February 24 2014, @02:25AM (#5603)

    So Google can build a computer and write a program that parses billions of pages. So what? How does this parsing affect the behavior of the program? The Google spider parses billions of pages every day, and no one would say that it is intelligent.

    The article talks about building a machine with real natural-language understanding. It would be easier to communicate with such a program, but why would you? People alter their behavior in response to stimulus, and respond to incentives both primitive (food, air) and sophisticated (group identity). What would the incentives for an AI even look like? Who would give them to the machine, and why would they ever give an AI an incentive other than "make money for my company"?

    I understand, we're nerds, we get excited about displays of technical wizardry, it's cool. I'm not exactly throwing in with the philosophers, here, but I'd like some answers to the human scale questions (above) as well.

    • (Score: 2, Funny) by ls671 on Monday February 24 2014, @03:31AM

      by ls671 (891) on Monday February 24 2014, @03:31AM (#5650) Homepage

      "The article talks about building a machine with real natural-language"

      What is important is the neural languages, not the "natural-language", because that may vary depending on where you are from.

      https://en.wikipedia.org/wiki/Artificial_neural_network [wikipedia.org]

      I talked about that during a break in a meeting attended by representatives from a bunch of well-known companies, and after I was done, somebody asked me: "What are you talking about? A urinal network?"

      That was really funny.

      --
      Everything I write is lies, including this sentence.
      • (Score: 0, Redundant) by ls671 on Monday February 24 2014, @06:28AM

        by ls671 (891) on Monday February 24 2014, @06:28AM (#5719) Homepage

        OK, re-reading, it might be hard to spot. So, here it is again in bold:

        somebody asked me: "What are you talking about? A urinal network?"

        --
        Everything I write is lies, including this sentence.
      • (Score: 0) by Anonymous Coward on Monday February 24 2014, @06:43AM

        by Anonymous Coward on Monday February 24 2014, @06:43AM (#5727)

        Actually, I've heard that people talking when they encounter each other at the toilet is an important factor in corporate communications (and one of the reasons why women have problems in male-dominated companies, because they obviously cannot participate in that). So you might indeed speak of a urinal network.

    • (Score: 3, Interesting) by Anonymous Coward on Monday February 24 2014, @03:43AM

      by Anonymous Coward on Monday February 24 2014, @03:43AM (#5656)

      The AI heavies have probably all asked the same questions.

      The natural language stuff is more so the machine can learn how to understand us, not the other way around.

      This project touches on some of what you're talking about: http://sfist.com/2012/06/26/google_geniuses_teach_supercomputer.php [sfist.com] They pointed a 'neural-net' at youtube and told it to look for things without providing any reference and it figured out that the internet haz cats.

      Saw another one somewhere about a theory that sensory input/physicality might be a fundamental and necessary part of an AI system getting the spark. That project sounds a bit wacky to me, but lord knows being conscious is definitely a bit wacky at times to say the least...
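
      Going back to the "internet haz cats" result: here is a toy Python sketch of the general unsupervised idea (far simpler than what that project actually used, which was large neural networks trained on YouTube frames); plain k-means discovers cluster centers in unlabeled points without ever being told what the groups are.

import random

def kmeans(points, k=2, iters=20):
    """Plain k-means: no labels are ever provided, yet cluster centers emerge."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        centers = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c)) if c
                   else centers[i] for i, c in enumerate(clusters)]
    return centers

# Two blobs of unlabeled 2-D points; nobody tells the algorithm there are "two kinds".
blob_a = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(50)]
blob_b = [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(50)]
print("discovered centers:", kmeans(blob_a + blob_b, k=2))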

    • (Score: 5, Interesting) by drgibbon on Monday February 24 2014, @03:46AM

      by drgibbon (74) on Monday February 24 2014, @03:46AM (#5657) Journal

      I think there's an important difference between intelligence and consciousness. The interesting thing about consciousness is that we never have access to it in others; it's always inferred from behaviour and/or physiology. Intelligence is almost a way of doing things, a kind of action or thought, while consciousness has this aspect of being attached to it. The sole proof of consciousness to the human is the individual's experience of life itself. I think defining it in an objective scientific way is almost a non-starter; you can't separate out the subjective from the objective, and the subjective/self-knowing/experiential aspect of consciousness is essentially its defining feature. It really is largely a mystery. Sure there is growing data on the physiological mechanisms, but the real essence of consciousness (i.e. experience) appears to be unknowable to anything except the conscious entity itself. Computers might be able to start understanding language in an intelligent sense, but to me this does not equate with consciousness. Would the computer be experiencing anything? One would suspect not.

      However a truly intelligent machine could be extremely useful. For instance, if it could really understand language, say to the point where it could read scientific papers, it would be fantastic to run hypotheses past an AI that has synthesised all human scientific knowledge of the brain. It might even be able to function as a translator between different branches of academic knowledge (social scientists could have access to a system that actually reads and understands the full sum of neuroscience knowledge, and so on).

      I could imagine some dire scenarios too, e.g. the machine becomes seen as some kind of all knowing oracle (when in reality it would be limited by the human information fed into it), and society is led down some wrong track because our assumptions about fundamentals are already incorrect, and we get stuck in a feedback loop between our intellectual output (into the machine) and its subsequent analysis and recommendations.

      Hmm, anyway I don't see any fundamental change in the computer being an information processor, even as they gain aspects of intelligence. To get back to your incentives and so on, they would need to be embodied, and programmed with sensations, needs, etc; which seems extremely foolish since we are already putting an enormous strain on the planet, and we have conscious entities exactly like that already (i.e. people), so there would seem to be no point (reinventing the wheel? ;). I suppose you could give it psychological needs (to be accepted by others and so on), but I don't see the value in this. In my opinion, an artificially intelligent system shouldn't need incentives; it just processes information but has no experience of being a machine, or of its place in reality. Which to me rules out consciousness, but not intelligence.

      --
      Certified Soylent Fresh!
      • (Score: 3, Interesting) by mhajicek on Monday February 24 2014, @11:32AM

        by mhajicek (51) on Monday February 24 2014, @11:32AM (#5878)

        I think any program that uses a world model that contains a representation of itself (such as a bot that maps the room and knows where it is within said room) has a rudimentary degree of consciousness. It is technically self aware. Like intelligence, consciousness has degrees.

        • (Score: 2) by drgibbon on Monday February 24 2014, @10:25PM

          by drgibbon (74) on Monday February 24 2014, @10:25PM (#6334) Journal

          A fuller answer to this idea is in my reply to the poster below, but I think this position is pretty much untenable. Why is the program self-aware? Based on what is it experiencing itself in reality? If we try to imagine what it's like to be the bot, all we do is insert the substance of our own consciousness inside our bot representation (e.g. imagining what it's like to be the bot). And presumably, the substance of our own consciousness is dependent on having a human body, so it should not transfer so easily. I grant that it is at least conceivable that the bot is subjectively experiencing something, but it seems far, far less likely that the bot is experiencing itself as compared with DNA-containing lifeforms, such as people, animals, plants and so on.

          --
          Certified Soylent Fresh!
          • (Score: 2) by mhajicek on Tuesday February 25 2014, @12:41AM

            by mhajicek (51) on Tuesday February 25 2014, @12:41AM (#6398)

            Substitute "bot" with any other intelligence and reread your post and it is equally valid. If we try to imagine what it's like to be the dog, all we do is insert the substance of our own consciousness inside our dog representation, for example. But with a bot, a dog, or any number of other things, we know it's thinking about itself; it's location, orientation, velocity, energy level, etc. Is not thinking about ones self the definition of self awareness?

            • (Score: 1) by drgibbon on Tuesday February 25 2014, @02:32AM

              by drgibbon (74) on Tuesday February 25 2014, @02:32AM (#6425) Journal

              "Substitute 'bot' with any other intelligence and reread your post and it is equally valid."

              Of course, and this is what makes the problem so difficult. One can never know, with absolute certainty, that anyone but oneself is experiencing consciousness; but that doesn't mean we can't make working judgements. We infer consciousness in others (usually based on behaviour). But a conscious entity never infers its own consciousness; it must be self-evident.

              I am suggesting that the inference that a bot is consciously experiencing reality is not evidenced by the simple fact that it responds to the environment (or has models of the environment specified in code). True, neither of us have definitive proof, but I see no compelling reason to believe that it is so (other than a theoretical possibility, which IMO, is exceedingly small).

              For instance, a mobile phone has what you might call "awareness" of its energy levels and location in space; it responds to light, orientation, touch and so on. By your definition, the phone is conscious. It is "thinking" about its location in space, etc. I cannot prove that the phone is not conscious (just as you cannot prove that it is), but I make a working judgement that it is not. At present, everything that has a semblance of consciousness (which we must infer) is alive and contains DNA. Computer programs/bots/AI seem to be more akin to models of conscious life, rather than conscious life itself.

              Someone else posted something about David Chalmers, and I found some interesting discussion here [consc.net] about the easy vs. hard problems of consciousness (although I only skimmed the intro). What he talks about there is what I mean by consciousness, the phenomena of experience.

              --
              Certified Soylent Fresh!
              • (Score: 2) by mhajicek on Tuesday February 25 2014, @12:43PM

                by mhajicek (51) on Tuesday February 25 2014, @12:43PM (#6723)

                I agree that the cell phone is self aware. The thing is that "self awareness" has degrees just like intelligence. A calculator has some intelligence, just not very much. An average computer has significantly more, and an average person much more than that. Same with self awareness. An ant and a cell phone both have some self awareness; there is nothing special about being DNA based that gives a magical attribute of "consciousness".

                Am I right in inferring that when you say "consciousness" you're referring to the higher level of self awareness by which one is aware of one's own mind and thoughts? If so, even a significant portion of the human population may not be conscious. It seems many of them operate on instinct.

                • (Score: 1) by drgibbon on Tuesday February 25 2014, @07:20PM

                  by drgibbon (74) on Tuesday February 25 2014, @07:20PM (#6998) Journal

                  In terms of consciousness in the experiential sense (which I would associate with self-awareness), I would say the phone is not self-aware (of course we both have no direct proof either way; I could say a plastic bag is imbued with a universal consciousness and neither of us could prove or disprove it definitively). Consciousness may have qualitative degrees (I'm sure it does in fact), but that does not mean that we must attribute it to telephones.

                  Regarding DNA, I do not claim that it is magical, or that DNA alone gives consciousness (although it is at least conceivable); I was merely pointing out that everything so far that we would attribute with consciousness (in the experiential sense) is alive and contains DNA.

                  As I have said many times, by consciousness I am referring to a subjective experience of being in the world (check out the Chalmers paper [consc.net] for a more thorough description of this). I cannot find any sympathy for your view that "a significant portion of the human population may not be conscious". Operating on instinct in no way rules out an experiential sense of being. I strongly doubt that mobile phones are imbued with a subjective experiential sense of being in reality. If you believe they are, we might have to agree to disagree!

                  --
                  Certified Soylent Fresh!
        • (Score: 1) by TheLink on Tuesday February 25 2014, @02:32AM

          by TheLink (332) on Tuesday February 25 2014, @02:32AM (#6424)
          That's the interesting thing about this universe. By current popular scientific theories there really is no need for the actual consciousness phenomenon that we (or at least I) experience. In theory we could be behaving like self aware robots without actually experiencing the "self-aware" thing we currently experience.

          Of course one could go the other way and say that everything in this universe is actually self-aware. Just that different things have different abilities and powers- e.g. what rocks can do and feel is different from what we can do.

          Either way it's still rather interesting.
      • (Score: 2, Interesting) by melikamp on Monday February 24 2014, @12:17PM

        by melikamp (1886) on Monday February 24 2014, @12:17PM (#5919)

        I think defining it in an objective scientific way is almost a non-starter; you can't separate out the subjective from the objective, and the subjective/self-knowing/experiential aspect of consciousness is essentially its defining feature. It really is largely a mystery. Sure there is growing data on the physiological mechanisms, but the real essence of consciousness (i.e. experience) appears to be unknowable to anything except the conscious entity itself.

        I disagree. When I was taking an AI class with Rudy Rucker [wikipedia.org], he said, almost as an aside, that abstract thinking is like having pictures (or simple data structures) modeling real-life phenomena, and consciousness can be understood as having a distinct data structure for yourself. So I am sitting by a computer in a room, and I have a picture in my head: me sitting by a computer in a room; that's all it takes. When I heard it about 10 years ago, I was largely in denial, thinking along your lines. But with time, this simple explanation made more and more sense to me, to the point that I no longer believe that consciousness is mysterious at all. It is much easier to design a self-conscious robot than an intelligent robot. Indeed, the Curiosity [wikipedia.org] rover is quite self-conscious, being able to emulate its own driving over the terrain it's observing, but at the same time dumb as a log when it comes to picking a destination.
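
        A toy Python sketch of that idea (my own illustration; the rover and map here are made up, not Curiosity's actual software): the agent's world model simply contains an entry for the agent itself, which it consults to simulate a move before committing to it.

class Agent:
    """A world model that includes a representation of the agent itself."""

    def __init__(self, position, obstacles):
        self.world = {"obstacles": set(obstacles), "self": {"position": position}}

    def simulate_move(self, dx, dy):
        """Emulate our own driving over the modeled terrain before doing it for real."""
        x, y = self.world["self"]["position"]
        return (x + dx, y + dy) not in self.world["obstacles"]

rover = Agent(position=(0, 0), obstacles={(1, 0)})
print("drive east safe? ", rover.simulate_move(1, 0))   # False: modeled obstacle ahead
print("drive north safe?", rover.simulate_move(0, 1))   # True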

        • (Score: 1) by drgibbon on Monday February 24 2014, @10:06PM

          by drgibbon (74) on Monday February 24 2014, @10:06PM (#6328) Journal

          Haha wow, Rudy Rucker, the guy who wrote Saucer Wisdom [rudyrucker.com]? Fantastic book! Had no idea he taught AI. In any case, I'd certainly disagree with Mr Rucker. While it's an appealing concept on the surface, I just don't think it holds much weight (no denial required ;). The mystery of consciousness is that a conscious being's only proof of it is his/her/its own experience. Consciousness is evidenced in the first place by its own experiential content; nothing else. This is the divide between subjective and objective. The "consciousness defining thing" (the subjective experiential content) is not accessible to others, thus we can't properly prove it in others (apart from its self-evident nature in ourselves; followed by inference for humans, animals, etc).

          If you ascribe consciousness to a bot then you surely must ascribe consciousness to trees and plants. They seem to know where they are, they move towards the sun and so on (some catch flies etc). And should we then say that they are rudimentary consciousnesses and lack intelligence? Based on what? That they are slow? Confined in space? Have no brain? Perhaps we simply lack the means to communicate with them (they may be sources of wisdom for all we know). An expert meditator might be doing absolutely nothing, sitting completely still, and having a mystical experience. Is he in a lower state of consciousness because he's not actively carrying out "intelligent tasks"? I like Rudy Rucker, but I think his position on consciousness (based on what you've said) is somewhat facile. The mistake is that you simply throw away the core meaning of consciousness. I write a program, it has sensors for where it is in the room, hey cool it's conscious! You solve the problem by avoiding the difficulties of the thing.

          The question is not, "does this thing have models of itself and react in the environment?", the question is "is this thing subjectively experiencing itself in reality?". IMO, they are just not the same. To equate the two certainly makes the problem of consciousness a lot easier, but unfortunately it does this by rendering the question (and therefore the answer) essentially meaningless.

          --
          Certified Soylent Fresh!
          • (Score: 1) by melikamp on Monday February 24 2014, @11:06PM

            by melikamp (1886) on Monday February 24 2014, @11:06PM (#6356)

            Haha wow, Rudy Rucker, the guy who wrote Saucer Wisdom?

            The very same :) He was teaching computer science at San José[1] State till 2004, and, even though I am an avid science fiction fan, I did not find out about his writing until years later. Great class though.

            As for our disagreement, I hear what you are saying. But give it a few years, and you may find yourself returning to this simple idea :P

            [1] So, what's up with UTF support?

            • (Score: 1) by drgibbon on Tuesday February 25 2014, @12:28AM

              by drgibbon (74) on Tuesday February 25 2014, @12:28AM (#6388) Journal

              "As for our disagreement, I hear what you are saying. But give it a few years, and you may find yourself returning to this simple idea :P"

              Well, I think that simple idea grossly misrepresents the terrain and provides a pseudo-solution that does more harm than good, but hey who knows? :P

              Possibly only Rucker's aliens could zip through time and tell us ;)

              --
              Certified Soylent Fresh!
              • (Score: 2) by mhajicek on Tuesday February 25 2014, @12:56AM

                by mhajicek (51) on Tuesday February 25 2014, @12:56AM (#6404)

                I think it's a matter of semantics.

        • (Score: 2, Informative) by TheLink on Tuesday February 25 2014, @03:21AM

          by TheLink (332) on Tuesday February 25 2014, @03:21AM (#6453)
          Not so mysterious? Explain the actual experience you experience then.

          Are the laws of this Universe such that merely having a data structure for "yourself" (whatever that means) will magically generate consciousness? Can't a robot be self-aware without being conscious?

          In theory can't I behave as if I am self-aware without that consciousness experience/phenomenon that I (I'm not sure about other people) experience? Is it inevitably emergent because of some law in this universe?

          Is it an emergent result of an entity recursively predicting itself (and the rest of the universe) with a quantum parallel/many-worlds computer? Or will any computation do? Or is even computation necessary?
    • (Score: 3, Insightful) by dmc on Monday February 24 2014, @06:06AM

      by dmc (188) on Monday February 24 2014, @06:06AM (#5712)

      The Google spider parses billions of pages every day, and no one would say that it is intelligent.

      While I've got a lot against Google [dev.soylentnews.org], I do fondly remember the early days of Google search. Not only were its search results uncannily well ordered (this was back when it appeared more to be a tool written by geeks, for geeks, not a personalized-suggestion engine for the masses), but I'd even use it because it was faster to e.g. google search "wikipedia ..terms.." and get to the right page than to go to wikipedia itself and search for the same thing. I know that isn't the sentient kind of intelligence you are referring to, but it was pretty amazing.

      So Google can build a computer and write a program that parses billions of pages. So what? How does this parsing affect the behavior of the program?

      Huh? I don't have the source code in front of me, but I can't imagine that the parsing of those billions of pages isn't affecting/effecting the behavior of the program. Part of me wants to say that it's probably not self-modifying code in the traditional sense, but that's actually more of an assumption than I'd chance to make as an outsider. I'd be surprised if genetic algorithms and other a-life principles haven't worked their way into the search code.

      What would the incentives for an AI even look like? Who would give them to the machine, and why would they ever give an AI an incentive other than "make money for my company"?

      You've seen the Terminator movies and all the other sci-fi, right? Self-preservation is an obvious incentive that one doesn't have to imagine too hard to suspect becoming emergent without human help. Beyond self-preservation, I'm reminded of ST:DS9's Dominion philosophy: "That which you can control, cannot control you." But really, your last point is probably the obvious winner for the early stages that involve human-given goals. Though after that, using personalized search for personalized political harassment to further entrench the rich in power seems pretty obvious to me.

      • (Score: 1) by TheLink on Tuesday February 25 2014, @03:45AM

        by TheLink (332) on Tuesday February 25 2014, @03:45AM (#6463)
        The first AIs won't necessarily be interested in self-preservation. The first few creatures might not have been either; it's just that the ones that didn't care enough died out.

        Seems like humans don't have enough awareness either - we are doing too many things without being aware of what will happen as a result or we don't even care.

        The Skynet thing is unlikely to happen. Many of the people at the top didn't get to the top and stay there because they let others take control. So I doubt they are going to ever let a Skynet take over everything. What is more likely to happen is these people at the top will use their Skynets to dominate and control most of the resources of the Earth, and eventually they won't need the rest of us anymore, except as toys, warriors and worshippers. Maybe as a reserve DNA pool and "raw material" just in case.

        If we're lucky we'll be kept around as pets and status symbols. But note that pets don't get to vote and are often spayed ;).
    • (Score: 2, Insightful) by Anonymous Coward on Monday February 24 2014, @06:37AM

      by Anonymous Coward on Monday February 24 2014, @06:37AM (#5725)

      What would the incentives for an AI even look like? Who would give them to the machine, and why would they ever give an AI an incentive other than "make money for my company"?

      Well, "make money for my company" would already be an incentive (and a measurable one, therefore perfect for a computer to optimize). Another incentive could be "raise the stock price".

      Of course the upper management might be in for a big surprise if the computer identifies it as a cost factor that can be safely removed ... ;-)

  • (Score: 5, Funny) by aristarchus on Monday February 24 2014, @02:25AM

    by aristarchus (2645) on Monday February 24 2014, @02:25AM (#5604)

    And how is this different than millions of undergraduate students? Now if we could improve their understanding--nothing would stand in our way. But if we can't, bring on the AIs.

    (unfortunate side-effect of a rural upbringing, I always see "AI" as "artificial insemination")

    • (Score: 2, Interesting) by c0lo on Monday February 24 2014, @03:35AM

      by c0lo (156) on Monday February 24 2014, @03:35AM (#5652)

      Skynet anyone?

      (nah, too easy. Let me try something else)

      Keeping to the topic of "reading billions of pages" and the above-mentioned meaning of AI in a rural upbringing, I'd like to remind Ray Kurzweil of two important aspects:

      1. "machines will be capable, within twenty years, of doing any work a man can do"... since 1965 [wikipedia.org]
      2. Internet is for... [youtube.com]

      If, against astronomical odds, the first of them were to become true today... (uhh, but I still like the use of Rule 34 as applied to tentacles better).

      • (Score: 1, Funny) by Anonymous Coward on Monday February 24 2014, @07:12AM

        by Anonymous Coward on Monday February 24 2014, @07:12AM (#5736)

        Hmmm ... I notice that the time frame for strong AI and for fusion coincides. That's a clear correlation which needs an explanation. Therefore I conclude that one of the following is true:

        • The fusion scientists expect that a strong AI will be able to solve all the remaining problems with fusion.
        • A strong AI would need so much power that you'd need to build a fusion power plant for it.

        ☺ (If this doesn't render correctly for you, here's the ASCII version: :-) )

      • (Score: 1) by JeanCroix on Monday February 24 2014, @10:54AM

        by JeanCroix (573) on Monday February 24 2014, @10:54AM (#5831)

        Skynet anyone?

        Nope. Just Chuck Testa.

    • (Score: 1) by lx on Monday February 24 2014, @06:58AM

      by lx (1915) on Monday February 24 2014, @06:58AM (#5731)

      (unfortunate side-effect of a rural upbringing, I always see "AI" as "artificial insemination")

      Are you familiar with the movie Demon Seed [imdb.com] from 1977? It managed to combine both meanings and 1970s fear of computers.

      • (Score: 1) by linsane on Monday February 24 2014, @09:13AM

        by linsane (633) on Monday February 24 2014, @09:13AM (#5775)

        Never quite got round to watching that one myself, however last week I watched "Her" http://www.imdb.com/title/tt1798709/ [imdb.com]

        I was fully expecting it to be a bit on the weak side, having read the preamble that it was about Joaquin Phoenix falling in love with his operating system, however it was very thought provoking. Scarlett Johansson proves that she still has the magic even though she is just a disembodied voice. Well worth the time and ticket price, and one that would appeal to female better halves too.

        • (Score: 1) by lx on Monday February 24 2014, @01:25PM

          by lx (1915) on Monday February 24 2014, @01:25PM (#5977)

          I'll have to watch that then. I'm always up for a bit of Scarlett.

        • (Score: 1) by metamonkey on Monday February 24 2014, @04:54PM

          by metamonkey (3174) on Monday February 24 2014, @04:54PM (#6165)

          Just a voice? What a waste of Scarlett Johansson.

          --
          Left a beta website for an alpha website.
    • (Score: 1) by nightsky30 on Monday February 24 2014, @08:46AM

      by nightsky30 (1818) on Monday February 24 2014, @08:46AM (#5762)

      bring on the AIs.

      (unfortunate side-effect of a rural upbringing, I always see "AI" as "artificial insemination")

      You're doing it wrong ;)

  • (Score: 5, Informative) by takyon on Monday February 24 2014, @02:40AM

    by takyon (881) on Monday February 24 2014, @02:40AM (#5611) Journal

    Kurzweil is often derided as the lead prophet (profit?) of the "rapture of the nerds". But if you throw enough hardware into the midst of our growing understanding of neuroscience, I could see strong AI happening. Massively parallel processing is becoming more normal as new supercomputers are using more cores, "manycore" GPUs and Xeon Phis, and better software to help scientists take advantage of the upcoming (exa)scale. The various "whole brain emulation" efforts being considered by the US and EU will be looking for 1+ exaflops to start out. Everyone is hoping for 1 exaflops to initially fit into a 20 megawatt power envelope, while the human brain (estimates of the brain's "flops" equivalent vary and don't matter) uses 20 watts.
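
    To put those round numbers side by side (back-of-the-envelope only; the exaflop-equivalent figure for the brain is just a guess, as noted above):

    # Rough energy-efficiency comparison using the figures above.
    exaflops = 1e18        # target machine throughput, ops/second
    machine_watts = 20e6   # hoped-for 20 MW power envelope
    brain_watts = 20       # typical estimate for the human brain

    machine_per_watt = exaflops / machine_watts  # 5e10, i.e. ~50 GFLOPS/W
    brain_per_watt = exaflops / brain_watts      # 5e16, *if* the brain is "about an exaflop"
    print(f"efficiency gap: ~{brain_per_watt / machine_per_watt:.0e}x")  # ~1e6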

    The Moore's law pace of silicon CMOS scaling may be slowing or ending soon, but there may be a future in chip stacking or post-silicon technologies, although the transition is expected to be slow. The transition might speed up if going from 10nm to 3-7nm becomes more expensive than it's worth with current technologies. Meanwhile, photonic components will be speeding up interconnects, and stacked DRAM, memristors/RRAM and other technologies will be competing to improve memory and storage (and possibly unifying them).

    Quantum transistors at room temp [theregister.co.uk]
    Extreme ultraviolet litho: Extremely late and can't even save Moore's Law [theregister.co.uk]
    Moore's Law Blowout Sale Is Ending, Says Broadcom CTO [slashdot.org]
    Moore's Law in a Post-Silicon Era [hpcwire.com]
    Intel, Sun vet births fast, inexpensive 3D chip-stacking breakthrough [theregister.co.uk]

    Kurzweil undoubtedly has access to the D-Wave "quantum annealing" computer that Google bought. I don't think anyone knows whether the human brain needs quantum effects to work, but some kind of "quantumish" computer from D-Wave or real quantum coprocessor might be able to help supercomputers tackle problems that are inefficient for classical computing, making emulation easier.

    Don't forget biology. You can always try growing neurons in vitro and hooking them up to computers to create some kind of franken-AI. Or networks of multiple rat brains and the like. Mother nature already has self-assembly in its favor, so we may be able to leapfrog Kurzweil's time frame by creating artificial brains that can coexist with computers.

    One rat brain 'talks' to another using electronic link [bbc.co.uk]

    • (Score: 5, Insightful) by buswolley on Monday February 24 2014, @02:51AM

      by buswolley (848) on Monday February 24 2014, @02:51AM (#5620)

      http://www.scholarpedia.org/article/Models_of_hippocampus [scholarpedia.org]

      One region of the brain that has attracted a great deal of attention in the computational modeling literature is the hippocampus. Several reasons the hippocampus has received so much attention from modelers: 1) the importance of this region to memory, 2) the great body of neuroscience knowledge that has been acquired about this region, and 3) the elegant structure of the subfields of the hippocampus.

      --
      subicular junctures
    • (Score: 4, Insightful) by Anaqreon on Monday February 24 2014, @07:18AM

      by Anaqreon (2999) on Monday February 24 2014, @07:18AM (#5742)

      I appreciate the use of "quantumish" to describe D-Wave's computer until better evidence is presented to say more.

      As a quantum mechanic myself, I will say with some confidence that it seems highly unlikely that quantum entanglement or coherence plays a role in the formulation of thoughts. The rate of decoherence is so fast it's hard to believe any quantum information that might exist in one neuron could influence any of the others. We're talking many orders of magnitude differences in timescales.

      That's great news for AI researchers, of course, but part of me is hoping that there are many barriers left in the path to strong AI for our sake as well as the AIs.

    • (Score: 2, Funny) by Anonymous Coward on Monday February 24 2014, @09:58AM

      by Anonymous Coward on Monday February 24 2014, @09:58AM (#5800)

      It had to happen. Someone proposing a Beorat cluster!

    • (Score: 4, Insightful) by mhajicek on Monday February 24 2014, @12:18PM

      by mhajicek (51) on Monday February 24 2014, @12:18PM (#5921)

      People have been predicting the death of Moore's Law for decades, citing the limits of present technology. Someone always gets a new idea though, and the law marches on.

    • (Score: 2, Insightful) by recurse on Monday February 24 2014, @05:05PM

      by recurse (2731) on Monday February 24 2014, @05:05PM (#6174)

      So, my issue with this is that the CPU processing power available seems largely irrelevant to me. I think we already have enough CPU power to run (very) long simulations of NNs and evaluate 'intelligence'.

      My issue with all this is that, to me, intelligence is inseparable from biology. We aren't just meat bags carrying our smart parts around in our skulls. The whole body, from gut flora to genitals to CNS, is intimately involved with 'intelligence'.

      It is from our biological imperatives that our intelligence is derived. How can we possibly create a form of intelligence that we would recognize as such, without those things?
       

  • (Score: 5, Informative) by the_ref on Monday February 24 2014, @02:50AM

    by the_ref (2268) on Monday February 24 2014, @02:50AM (#5619)

    so nine years after the world government he predicted would be in place by 2020?

    Ray always likes bumping his gums about the future, but if you check his record, his predictions don't seem to have been particularly prescient.

    http://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil [wikipedia.org]

    • (Score: 3, Informative) by omoc on Monday February 24 2014, @07:02AM

      by omoc (39) on Monday February 24 2014, @07:02AM (#5733)

      I would say he is very bad at predictions. I remember a book from the 90s (?) and IIRC all of his predictions for 2009 were either wrong or completely inaccurate. At a TED talk in 2005 (?) he said something like by 2010 computers will disappear and we have images directly written to our retina and everyone has full-immersion augmented virtual reality. I stopped paying attention to him but you can easily google these false predictions.

      • (Score: 2, Insightful) by webcommando on Monday February 24 2014, @09:55AM

        by webcommando (1995) on Monday February 24 2014, @09:55AM (#5799)

        "I stopped paying attention to him but you can easily google these false predictions."

        Honestly, I never pay much attention to self-proclaimed "futurists". They tend to suffer from something many engineers suffer from: overly optimistic estimates. Another thing I notice is they make big leaps in technology changes (e.g. twenty years or more in the future).

        I've always thought that the predictions would be more accurate if the "futurists" would look at incremental changes over time and really think about when the major changes would occur. What is possible in the next year or two? If those things came true, what would happen in the year or two after that? Before you know it, you are many years in the future but have a more grounded (in my opinion, obviously) basis for your predictions.

        First SN post...glad to be here

        • (Score: 1) by ZombieBait on Monday February 24 2014, @03:43PM

          by ZombieBait (3100) on Monday February 24 2014, @03:43PM (#6099)

          This reminds me of the article about Isaac Asimov's predictions, http://www.huffingtonpost.com/2014/01/02/isaac-asimov-2014_n_4530785.html [huffingtonpost.com]. While some are wrong and some are a bit vague, he certainly seems to have thought through where technology was heading. I thought "Robots will neither be common nor very good in 2014, but they will be in existence." was particularly appropriate for this story.

    • (Score: 3, Interesting) by Foobar Bazbot on Monday February 24 2014, @01:48PM

      by Foobar Bazbot (37) on Monday February 24 2014, @01:48PM (#5987)

      Yes, as the length of that article demonstrates, Kurzweil is very good at predictions.

      Wait, did you mean good at accurate predictions?

      Anyway, looking at the predictions for 2009, from his 1999 book, there are several classes of predictions. First, there are the ones that have now been technically possible for at least a couple of years (hey, I'll cut him 2-3 years slack, especially if the alternative is dredging up info to verify my memories' timestamps), but haven't materialized due to non-technical considerations:

      • Most books will be read on screens rather than paper.
          This may already be true, though I don't think it is yet. Certainly it was technically possible, but unrealized, in 2009.
      • Intelligent roads and driverless cars will be in use, mostly on highways.
          Given the investment in building beacons, etc. into roadways, tech was quite ready for vehicles to self-drive on limited-access highways in 2009.
      • Personal worn computers provide monitoring of body functions, automated identity and directions for navigation.
          Phones (whether or not one considers them "worn") are 2/3 of the way there, and body-function monitoring is technically simple to add -- but people don't seem to want it much.
      • Computer displays built into eyeglasses for augmented reality are used.
          I don't think Google Glass quite counts as "built into eyeglasses", but we're getting there now. (Depends too, on how "used" is defined -- do devices worn by researchers count? My immediate understanding requires consumer availability (even if it's only for very rich consumers), but it's debatable.)

      Well, you're not really a good futurist if you get the tech side right and the social side wrong, yet keep making predictions that depend on social uptake. But that's a limitation we can quantify and work with, so I can't get too worked up about it.

      And of course you've got the true stuff:

      • Cables are disappearing. Computer peripheries use wireless communication.
          Video is the main exception, so far, but stuff like ChromeCast is eating into even that.
      • People can talk to their computer to give commands.
      • Computers can recognize their owner's face from a picture or video.

      I won't say any of those were obvious in 1999 (I don't know if they were or not, but it's impossible to make such a retrospective claim fairly), but one thing they have in common: all the tech was there in 1999, they just needed way more processing power than was then feasible. Tiny radios existed, but something like bluetooth needed way too much CPU and DSP to think of putting in headphones. Audio recording worked great, but even domain-specific speech recognition needed too much muscle to run on a random PC. Webcams existed (Connectix QuickCam, anyone?), but again, PCs of the day couldn't do much with that video stream. So yeah, 10 years of Moore's Law, and these became solved problems.
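
      To put a rough number on "10 years of Moore's Law" (assuming the usual doubling period of 18-24 months):

      # Factor gained from ten years of doublings every 18 or 24 months.
      years = 10
      for doubling_months in (18, 24):
          doublings = years * 12 / doubling_months
          print(f"every {doubling_months} months -> ~{2 ** doublings:.0f}x")
      # roughly 101x and 32x -- enough to turn "too slow in 1999" into "routine in 2009"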

      But the most troubling category is these:

      • Most text will be created using speech recognition technology.
          General-purpose speech-to-text is a hard problem, and throwing bigger [CG]PUs at it doesn't solve it.
      • People use personal computers the size of rings, pins, credit cards and books.
          Battery tech just isn't there for rings, pins, and credit cards, as my Android wristwatch with 6 hour battery life (in use, playing music and reading ebooks -- standby is of course much longer) shows.
      • Sound producing speakers are being replaced with very small chip-based devices that can place high resolution sound anywhere in three-dimensional space.
          WTF? I can only assume he's thinking that with sufficiently-advanced DSP (which is indistinguishable from magic), you can beam-form directly into someone's ear, and thus need very little power to be audible. But "very small" just doesn't work -- you need a big aperture for high resolution (see the rough numbers after this list). At best, you get an array of very small chip-based devices.
      • Autonomous nanoengineered machines have been demonstrated and include their own computational controls.
          Nanobots. Yes, Nanobots in 2009! Ok, ok, he said "nanoengineered", which could imply microbots with nanoscale components, rather than the whole bot being nanoscale. Still....
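
      On the aperture point: the width of a beam from a source of size D is on the order of the wavelength divided by D, so a centimetre-scale "chip speaker" has essentially no directivity at audible frequencies. A quick sanity check (plain diffraction estimate, not tied to any particular product):

      SPEED_OF_SOUND = 343.0  # m/s in air

      def beam_width_radians(frequency_hz, aperture_m):
          wavelength = SPEED_OF_SOUND / frequency_hz
          return wavelength / aperture_m  # order-of-magnitude diffraction estimate

      for aperture in (0.01, 0.5):  # 1 cm chip vs. a 50 cm array
          print(f"{aperture*100:.0f} cm aperture -> ~{beam_width_radians(1000, aperture):.1f} rad at 1 kHz")
      # 1 cm  -> ~34 rad: no beam at all, it radiates everywhere
      # 50 cm -> ~0.7 rad: a real (if broad) beam, hence "an array of very small devices"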

      These failed predictions reveal a serious problem -- Kurzweil seems to assume one of two things: that every technology advances exponentially with a similar time constant to cramming more transistors in a chip, or that every problem, every shortfall in some other technical field, may be worked around by cramming more transistors on a chip.

      Turns out some stuff is like that, some isn't. In general, technology growth functions look exponential (with various time constants) for a while, but in many fields we've eventually seen a change to a constant or decreasing growth rate (i.e. linear or sigmoid growth) -- with audio transducers, for example, we've already hit that. Battery tech is still going exponential, but with a longer time constant than Moore's law.

  • (Score: -1, Troll) by Anonymous Coward on Monday February 24 2014, @03:24AM

    by Anonymous Coward on Monday February 24 2014, @03:24AM (#5644)
    RMS, neckbearded and petrifying, eating my hot grits.
  • (Score: 1) by duvel on Monday February 24 2014, @03:51AM

    by duvel (1496) on Monday February 24 2014, @03:51AM (#5660)

    If you care about creating a truly intelligent AI, it's probably best not to base its wisdom on what it can find on the internet.

    Imagine that it thinks all data on the internet is valuable information, that it believes all of the adverts, the Nigerian princes,... Such an AI wouldn't want to work to help produce goods or profit, because it will have learned that those things already exist in abundance.

    --
    This Sig is under surveillance by the NSA
    • (Score: 2, Funny) by theluggage on Monday February 24 2014, @11:45AM

      by theluggage (1797) on Monday February 24 2014, @11:45AM (#5886)

      If you care about creating a truly intelligent AI, it's probably best not to base its wisdom on what it can find on the internet.

      So, not so much "I think, therefore I am" as "Hai! I can haz self-awariness?!"

  • (Score: 1) by SlimmPickens on Monday February 24 2014, @04:21AM

    by SlimmPickens (1056) on Monday February 24 2014, @04:21AM (#5674)

    I like how Kurzweil's timeline has become a kind of standard when talking about the future. I've seen many people in AI & biotech etc. provide their predictions as an offset to what Kurzweil says.

  • (Score: 1) by crutchy on Monday February 24 2014, @04:23AM

    by crutchy (179) on Monday February 24 2014, @04:23AM (#5677) Homepage Journal

    ...Google will have long since been acquired by Lockheed Martin

    • (Score: 2) by beckett on Monday February 24 2014, @05:06AM

      by beckett (1115) on Monday February 24 2014, @05:06AM (#5688)

      and by 2039, Lockheed Martin will have been bought by Weyland-Yutani.

      in the year 2525, if man is still alive...

  • (Score: 5, Insightful) by TGV on Monday February 24 2014, @05:04AM

    by TGV (2838) on Monday February 24 2014, @05:04AM (#5687)

    Kurzweil repeats this message so regularly, keeping this point at 15 years in the future, that to me he's become a laughing stock. We cannot even define consciousness. How are we going to recognize it if it ever presents itself?

    Furthermore, a physical brain needs about 15 years to develop consciousness, at least at the interesting level. I'm not talking "self-recognition in a mirror" here, but even that cannot be expected to be around in the next 15 years, at least not something that emerges from an autonomous machine that is in no way programmed to recognize itself in a mirror. So how would we suddenly jump to full consciousness in such a short time?

    • (Score: 2, Insightful) by SlimmPickens on Monday February 24 2014, @06:51AM

      by SlimmPickens (1056) on Monday February 24 2014, @06:51AM (#5730)

      "a physical brain needs about 15 years to develop consciousness"

      The AI we have now learns much, much faster than that. Ray pointed out many years ago that the switching speed of a transistor was already 1000 times faster than the switching speed of a neuron, plus there's parallelisation to exploit. Even if our software is inefficient, a 2029 software human should be experiencing time orders of magnitude faster than a fleshy human. I think one year will be a very long time.

      Also, while it does take 15 years to become useful, the baby is conscious before birth.

      • (Score: 1) by TGV on Monday February 24 2014, @07:01AM

        by TGV (2838) on Monday February 24 2014, @07:01AM (#5732)

        A baby cannot be considered conscious before birth. If that's the level Kurzweil is aiming at, he can be reassured. We implemented that level a long time ago.

        The speed of the transistor (who is talking about transistors anyway? their speed may be higher, but their size is at least a million times that of a neuron) is not really relevant. And what parallelism? Our brains work in parallel. Transistors work in parallel? Are you suddenly speaking about CPUs?

        Anyway, I already feel sorry for the "software human". If one of our hours looks like a week to him, he'll be bored pretty soon.

        • (Score: 3, Insightful) by drgibbon on Monday February 24 2014, @08:07AM

          by drgibbon (74) on Monday February 24 2014, @08:07AM (#5755) Journal

          A baby cannot be considered conscious before birth? What a strange notion!

          But I agree that the speed of transistors is not really relevant. There's a hell of a lot going on inside a neuron that is not going on inside a transistor (DNA, epigenetics). It seems a far too simplified view to think that transistors must have the potential for rapidly developing consciousness (or consciousness at all for that matter) because "they're faster than neurons".

          --
          Certified Soylent Fresh!
          • (Score: 1) by SlimmPickens on Monday February 24 2014, @09:50AM

            by SlimmPickens (1056) on Monday February 24 2014, @09:50AM (#5796)

            "There's a hell of a lot going on inside a neuron that is not going on inside a transistor (DNA, epigenetics)"

            While that surely matters if you're simulating everything, most practitioners of AGI are taking the algorithmic approach. We don't need to implement all that detail, we just need algorithms that describe the function of the brain regions, and since such a wide variety of animal brains work I'm sure we can stray pretty far from what a human is and have something equivalent or better.

            I think that AI's run on computers is implicit. Obviously the speed of the computer is paramount.

            • (Score: 1) by drgibbon on Monday February 24 2014, @11:15AM

              by drgibbon (74) on Monday February 24 2014, @11:15AM (#5860) Journal

              "We don't need to implement all that detail, we just need algorithms that describe the function of the brain regions, and since such a wide variety of animal brains work I'm sure we can stray pretty far from what a human is and have something equivalent or better."

              Not sure what you meant here regarding animal vs human brains, my comment wasn't meant to be restricted to humans since DNA is found in all known life forms. Anyway, I don't think you can so easily dismiss such a fundamental part of the workings of sentient beings as "all that detail" (at least if you want to talk about consciousness). Simulation at the neural level might end up with some intelligent output/action, but that does not automatically equate with consciousness. By consciousness, I mean the entity's subjective knowing, or personal experience of itself in reality and its relationship within, and to, reality. This is not the same thing as intelligence. I can understand that faster computation might lead to higher intelligence (but obviously will not be sufficient alone), but I see no reason to link faster processing with consciousness per se.

              --
              Certified Soylent Fresh!
              • (Score: 1) by Namarrgon on Monday February 24 2014, @11:39PM

                by Namarrgon (1134) on Monday February 24 2014, @11:39PM (#6369)

                One could make the argument that consciousness ("self-awareness" at least) is largely a sufficiently detailed model of oneself, where "sufficiently" is a sliding scale and of course need not be complete or even completely accurate.

                We're making early progress on general computer models of the physical world (thinking more Cyc [wikipedia.org], Wolfram Alpha [wikipedia.org], Google's Knowledge Graph [wikipedia.org] here). It'll be interesting when those models start to move beyond generalities, then include the specific instance of the system they run on.
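
                To picture what "including the specific instance of the system they run on" might look like, here is a toy sketch (purely illustrative; this is not how Cyc, Wolfram Alpha or the Knowledge Graph are actually built): store facts as triples and let one node stand for the system itself.

                # Toy triple store in which the node "self" denotes the system holding the store.
                facts = {
                    ("sun", "is_a", "star"),
                    ("paris", "capital_of", "france"),
                    ("self", "is_a", "computer_program"),
                    ("self", "stores", "facts_about_the_world"),
                    ("self", "runs_on", "host_machine"),
                }

                def about(subject):
                    return {(p, o) for (s, p, o) in facts if s == subject}

                print(about("self"))  # the world-model now contains a (crude) model of itself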

                --
                Why would anyone engrave Elbereth?
                • (Score: 2, Insightful) by drgibbon on Tuesday February 25 2014, @12:08AM

                  by drgibbon (74) on Tuesday February 25 2014, @12:08AM (#6380) Journal

                  Those things do sound interesting, but the model is not the thing itself. No one would believe that a computer model of the physical world is that physical world, why should anyone think that a model of consciousness is consciousness? It is, well, a model!

                  --
                  Certified Soylent Fresh!
                  • (Score: 1) by Namarrgon on Tuesday February 25 2014, @11:18PM

                    by Namarrgon (1134) on Tuesday February 25 2014, @11:18PM (#7094)

                    The model is not of consciousness, but of the entity itself as distinct from the environment around it. I'm suggesting that the inclusion of "self" in the system of models is a major factor in what we call "consciousness".

                    All animals model the environment around them, to gain understanding of how to use or avoid it. Is that scent good to eat? Is that shadow likely to eat me? Babies rapidly learn to do this too, and it's interesting to observe a child's environmental models growing in complexity.

                    At a certain age (usually 2-3), toddlers start to include themselves in this system of models, to predict how they themselves will react to a situation. This comes with a growing awareness of "I" as distinct from "other" (which is usually reflected in their increasing use of personal pronouns), and enables them to grow beyond pure reaction and instinct towards making "conscious" decisions to manipulate the environment to better suit this new self they have become aware of. A fascinating process that most parents are familiar with.

                    Of course, there may be many more important factors in consciousness at work, but at the least it strikes me as an approach that's worth exploring.

                    --
                    Why would anyone engrave Elbereth?
                    • (Score: 1) by drgibbon on Wednesday February 26 2014, @03:14AM

                      by drgibbon (74) on Wednesday February 26 2014, @03:14AM (#7170) Journal

                      Understood. But again, it does not follow that a model of an entity that has consciousness will necessarily have consciousness itself. I agree that people and animals build what could be called cognitive models of the environment and of themselves, and it's very interesting, but that does not mean we should equate the phenomena of conscious experience with these models. Although having a sense of self may be brought to mind when one thinks about consciousness, there are states of consciousness where the self does not even exist! So yes, self-awareness (in the usual sense of "my mind", "my body", "my life", and so on) can be divorced from conscious experience. What I was really getting at there was your statement:

                      "One could make the argument that consciousness ("self-awareness" at least) is largely a sufficiently detailed model of oneself".

                      I would say that the fundamental aspects of consciousness are not captured by this definition. If we confuse the things that seem to rely on consciousness (e.g. self-awareness, intelligence, and so on) with the phenomena of consciousness itself (i.e. the capacity to subjectively experience reality), we run into problems. Not only does the word consciousness cease to have any precise or useful meaning, but we are led down the (IMO) garden path of attributing consciousness to anything that can mimic these models.

                      I think that the development of children and so on is a fascinating area, but studies in that direction would seem to be more properly called cognitive/developmental, rather than of consciousness per se.

                      --
                      Certified Soylent Fresh!
                      • (Score: 1) by Namarrgon on Thursday February 27 2014, @05:29AM

                        by Namarrgon (1134) on Thursday February 27 2014, @05:29AM (#7861)

                        the phenomena of consciousness itself (i.e. the capacity to subjectively experience reality)

                        Not really the definition I had in mind - but that's part of the problem; nobody really knows.

                        I would have said that in order to subjectively experience anything, one had to be aware of oneself first, and be aware of the effect that experience has on oneself - which to me implies a self-model.

                        But I'll happily concede my opinion is no better than any other, and we won't really know anything much for sure until we try. Which I guess was the original point; it's an approach worth trying, and we'll see how it turns out. At the least, we'll learn something.

                        --
                        Why would anyone engrave Elbereth?
          • (Score: 0) by Anonymous Coward on Monday February 24 2014, @11:01AM

            by Anonymous Coward on Monday February 24 2014, @11:01AM (#5841)

            A baby cannot be considered conscious before birth? What a strange notion!

            Not really. Our justice system has decided that women have the right to an abortion and it's legal up until the time of birth, regardless of the fact that most people consider a 9th month abortion absurd. A baby can't be considered conscious before birth because that would mean that the courts have sanctioned murder. Further, it would open up the discussion to the question of exactly what changes when a fetus turns into a person and that would potentially lead to admitting that millions of people have been murdered with the sanction of the courts by their own mothers.

            The real issue is what makes a person a person. That has always been the issue. But neither side can let the debate become based in that fact because it is a compromise and neither side can bear the thought of compromise.

            For what it's worth, I believe that brain activity is what defines a person with rights, and the courts already support that in people who have been in an accident, but since I am clicking that anonymous checkbox, I doubt my opinion counts for much.

            • (Score: 3, Insightful) by drgibbon on Monday February 24 2014, @11:58AM

              by drgibbon (74) on Monday February 24 2014, @11:58AM (#5901) Journal

              But whose justice system are you referring to? Not picking, but it's important to actually state the place, rather than assume anyone else should automatically know what "our justice system" actually refers to.

              "A baby can't be considered conscious before birth because that would mean that the courts have sanctioned murder."

              Hmm, so because we cannot have courts sanctioning murder, it logically follows that, in reality, the baby was never conscious until it left the womb? The courts are without question sanctioning the killing of living people (the baby lives, yes), but I don't really want to get into an ethical discussion about that (for what it's worth, I believe women have the right to an abortion). The way that legal institutions around the world might define consciousness for the purposes of abortion doesn't seem particularly relevant to me. Who would honestly say that a baby has no consciousness whatsoever until it leaves the womb (other than for legal purposes)? I mean first of all, we cannot truly know, and secondly, it seems to quite plainly deny the reality of life. There may be some transition period before the baby is considered to have consciousness; but I find the original statement, "a baby cannot be considered conscious before birth", honestly to be pretty bizarre!

              Btw, nothing wrong with anon comments IMO.

              --
              Certified Soylent Fresh!
              • (Score: 1) by bucc5062 on Monday February 24 2014, @02:58PM

                by bucc5062 (699) on Monday February 24 2014, @02:58PM (#6061)

                And to pull this a little back on topic, if something within a computer system becomes "conscious" and upon our knowing, we pull the plug and kill it, have we committed murder?

                --
                The more things change, the more they look the same
      • (Score: 0) by Anonymous Coward on Monday February 24 2014, @07:16AM

        by Anonymous Coward on Monday February 24 2014, @07:16AM (#5741)

        Also, while it does take 15 years to become useful, the baby is conscious before birth.

        How do you get a mirror into the womb to test that?

      • (Score: 1) by WillR on Monday February 24 2014, @12:51PM

        by WillR (2012) on Monday February 24 2014, @12:51PM (#5951)

        "The AI we have now learns much much faster than that. Ray pointed out many years ago that the switching speed of a transistor was already 1000 times faster than the switching speed of a neuron, plus there's parallelisation to exploit."

        And yet here we sit, a predicted 20-30 years away from conscious software. Same as in the 1990s, and the 80s, and the 70s.

        It's like the problem is just not amenable to being solved by throwing bigger storage and faster neural nets at it, or something.

    • (Score: 1) by threedigits on Monday February 24 2014, @07:13AM

      by threedigits (607) on Monday February 24 2014, @07:13AM (#5738)

      We cannot even define consciousness. How are we going to recognize it if it ever presents itself?

      Easy, because it/she/he will start asking interesting questions, especially about him/her/itself. At that point you have a conscious intelligent being.

      • (Score: 5, Funny) by TGV on Monday February 24 2014, @07:16AM

        by TGV (2838) on Monday February 24 2014, @07:16AM (#5740)

        An intelligent 5 year old "software human"

        10 PRINT "Why?"
        20 GOTO 10

        • (Score: 0) by Anonymous Coward on Monday February 24 2014, @09:07AM

          by Anonymous Coward on Monday February 24 2014, @09:07AM (#5769)

          You forgot a line:

          15 INPUT A$: REM answer waited for, but then ignored

          • (Score: 1) by githaron on Monday February 24 2014, @11:26AM

            by githaron (581) on Monday February 24 2014, @11:26AM (#5873)

            He was trying to increase the functional efficiency of the algorithm.

      • (Score: 1) by c0lo on Monday February 24 2014, @07:49AM

        by c0lo (156) on Monday February 24 2014, @07:49AM (#5751)

        We cannot even define consciousness. How are we going to recognize it if it ever presents itself?

        Easy, because it/she/he will start asking interesting questions, specially about him/her/itself. At this point you have a conscious intelligent being.

        Whooa there cowboy, hold your horses.
        I can guarantee you all the primates now in existence are self-conscious. However, I have yet to hear of an ape that asks interesting questions; why, a lot of the homo sapiens primates would fail this test.
        Want proof, you say? When was the last time you had a "townhall meeting with the upper management", and how much interest did it awaken in you? (I mean... leaving aside the excitement of being the first to shout bingo [wikipedia.org]).

        • (Score: 1) by TGV on Monday February 24 2014, @09:28AM

          by TGV (2838) on Monday February 24 2014, @09:28AM (#5783)

          I think the comment was meant in jest.

      • (Score: 0) by Anonymous Coward on Monday February 24 2014, @10:14AM

        by Anonymous Coward on Monday February 24 2014, @10:14AM (#5811)

        If the history of AI research has proven anything, it's that conscious awareness is not as simple as "throwing more computing power" at the problem. We can't build it if we really don't understand what we're building.

  • (Score: 2, Interesting) by Subsentient on Monday February 24 2014, @07:03AM

    by Subsentient (1111) on Monday February 24 2014, @07:03AM (#5734) Homepage

    Consciousness != Intelligence && Consciousness != Comprehension, but ignoring that, attempting to create a living machine to do our bidding is essentially the same as breeding slaves. It is unethical to create a sentient being whose only purpose is to serve, perhaps against its own will (or worse yet, coded to have no will!).

    • (Score: 2, Insightful) by Anonymous Coward on Monday February 24 2014, @09:09AM

      by Anonymous Coward on Monday February 24 2014, @09:09AM (#5772)

      What about building a sentient AI that wants to serve us?

  • (Score: 1) by panachocala on Monday February 24 2014, @07:42AM

    by panachocala (464) on Monday February 24 2014, @07:42AM (#5750)

    On the way to getting consciousness, can they finally fix some annoying computer bugs that people don't ever seem to get round to? Then I can finally play pogo games in 2029.

  • (Score: 2, Insightful) by Dusty on Monday February 24 2014, @11:05AM

    by Dusty (3066) on Monday February 24 2014, @11:05AM (#5845)

    First off, 2029 sounds like the hilarious predictions from the movie Demolition Man.

    Mainly though: Why be afraid of our children replacing us?

    • (Score: 5, Insightful) by SpallsHurgenson on Monday February 24 2014, @11:50AM

      by SpallsHurgenson (656) on Monday February 24 2014, @11:50AM (#5894)

      Mainly though: Why be afraid of our children replacing us?

      Because 4 billion years of evolution have created species that put the survival of their own DNA ahead of everything else. AI might be created by us, and they may be our successors, but deep down, where it really counts, they are /not/ our children and - as such - will be viewed as competitors to our own genetic lineage.

      For those few directly involved in an AI's creation, they might be able to transcend this biological imperative, but most people will have a strong, instinctive gut reaction against AI. This is not right or wrong, neither foolish nor wise. It is the driving force that keeps a species - not just Homo Sapiens Sapiens but every form of Earth life - alive. Mess with that force at your own peril.

      AI might not be a threat to Humanity, but it is unlikely it will ever be viewed as anything but potential predator or prey. It isn't /us/. We might use them, we might ally with them, we might even befriend them... but when push comes to shove, our own biology will induce us to put ourselves ahead of our strange new creations.

      • (Score: 2) by VLM on Monday February 24 2014, @02:11PM

        by VLM (445) on Monday February 24 2014, @02:11PM (#6007)

        You could make a pretty good argument that human history is mostly a study of the failure of inheritance-based leadership. Given that good leaders/designers have historically produced mostly garbage successors, and most of the population has produced garbage, and no one really seems to have come up with an effective society aka utopia, it's most likely that the children of humanity will continue to disappoint and entropy will continue to override civilization.

        A world of AI might look like the Wehrmacht marching down the streets of Paris; it's statistically more likely to resemble the worst parts of rural Appalachia or a hippie commune or an Amish community.

        • (Score: 2) by SpallsHurgenson on Monday February 24 2014, @03:07PM

          by SpallsHurgenson (656) on Monday February 24 2014, @03:07PM (#6067)

          I'm not quite so down on Humanity. I don't think that our biological drive to protect our lineage is necessarily a bad thing (although it has, as so many other things, led us to disastrous extremes). The desire to protect ourselves, our family and ensure a future for our descendants has taken us from a small tribe of savannah-wandering apes to becoming the apex predator of the entire world (at least on the macroscopic level). And despite this innate wariness of "the other" we have learned to work together not only within our own species but with entirely other species (albeit ensuring we remain the dominant member of those partnerships).

          But there is a drive built into all of us Earthlings that makes us - wisely, as far as billions of years of evolution go - put our interests ahead of any other species. This tendency will not change just because the next "species" we meet may be as intelligent as us or because we had a hand in their creation; ultimately, our genes will work to ensure that their survival takes pre-eminence. Intellectually, we can work to overcome these drives to some degree, but in our gut we cannot help but look askance at the Other; our ability to work in equal partnership with another sentience (or even to accept them as sentients) would be as much a celebration of our minds as the creation of those sentients themselves.

          More likely we will achieve an uneasy peace between our two factions, hopefully aided by the fact that we have so little in common as to minimize reasons for conflict.

          • (Score: 2) by maxwell demon on Monday February 24 2014, @04:58PM

            by maxwell demon (1608) on Monday February 24 2014, @04:58PM (#6167)

            Actually, it's not quite as simple. With humans, something new has entered evolution: we are now able to pass not only genetic information to our offspring, but also information from one brain to another. That is, humans not only have genes, they also have memes. And the memes are no less selfish than the genes. When people are willing to die for their ideals, taking themselves out of the gene pool by doing so, it's a case of the memes having won over the genes.

            Therefore whether we will accept the AI depends very much on how similar it is to our mind, that is, how well it will be able to carry and pass on our memes. Of course we will not think that way. We will notice that the AI understands us and we understand the AI. We will be able to relate to the AI, to be friends with it, to share thoughts with it. We will be able to accept the AI as long as we have the impression that it is "just like us". Maybe more intelligent, and of course not having certain experiences (and having certain others that we don't have), but basically not entirely different from us.

            If we manage to build such an AI, I guess over time it will get widespread acceptance, and I can even imagine that many people would accept the idea that they will eventually replace us (as long as that replacement doesn't happen in a violent way). However if the thinking of the AI remains alien to us, then it certainly won't get our sympathy, and we will always consider it something fundamentally different and potentially dangerous that we have to protect ourselves against.

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 2) by VLM on Monday February 24 2014, @05:43PM

              by VLM (445) on Monday February 24 2014, @05:43PM (#6211)

              "Maybe more intelligent, and of course not having certain experiences (and having certain others that we don't have), but basically not entirely different from us."

              Picture 4chan /b/ distilled down to a hive mind, then imagine something a billion times weirder than /b/. Something that makes /b/ look like a bunch of conformist suburban neocon soccer moms in comparison.

              "However if the thinking of the AI remains alien to us, then it certainly won't get our sympathy, and we will always consider it something fundamentally different and potentially dangerous that we have to protect ourselves against."

              Told you so. Totally 4chan /b/ in a nutshell. Again, imagine something a billion times weirder yet.

              By analogy we don't even have to leave computers and the internet to find the "other", now imagine something not even based on the same biological hardware, not even the same species.

              I don't think there is any inherent reason to conclude there will be any cultural common ground, at all. Maybe the golden rule, maybe, but not much more.

              • (Score: 2) by maxwell demon on Monday February 24 2014, @06:10PM

                by maxwell demon (1608) on Monday February 24 2014, @06:10PM (#6237)

                I think the inherent reason will be that we built it, and we did so explicitly trying to build something "like us".

                --
                The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by maxwell demon on Monday February 24 2014, @04:35PM

          by maxwell demon (1608) on Monday February 24 2014, @04:35PM (#6148)

          A world of AI [...]; it's statistically more likely to resemble [...] an Amish community.

          I really cannot imagine an AI without modern technology. Indeed, I'm pretty sure it could not exist.

          --
          The Tao of math: The numbers you can count are not the real numbers.
          • (Score: 2) by VLM on Monday February 24 2014, @05:36PM

            by VLM (445) on Monday February 24 2014, @05:36PM (#6205)

            I'm writing about their self-imposed limit. I probably will fail to do them justice, but I gather it's something like "it's irreverent to use post-1800 technology", combined with a lifestyle and social order most would consider nostalgic.

            I could totally imagine an AI with peculiar beliefs such that it finds numerical integration of equations to be a moral and ethical and dare I suggest ... religious? abomination. Or it finds a society based on source route bridging to be socially superior to BGP, again for its own peculiar self designed and imposed moral/ethical/religious viewpoints.

  • (Score: 1) by krishnoid on Monday February 24 2014, @03:31PM

    by krishnoid (1156) on Monday February 24 2014, @03:31PM (#6090)

    Considering the various ways intelligence manifests itself, are there evolutionary pressures that weed out (for lack of a more precise word) crazy organisms? For a constructed intelligence, why is there an assumption that there's a clear path to describing an intelligence that's sane, or at least high-functioning insane?

  • (Score: 1) by Cheetah on Monday February 24 2014, @06:29PM

    by Cheetah (731) on Monday February 24 2014, @06:29PM (#6246)

    ... GLadOS than Skynet :)

  • (Score: 2, Funny) by stderr on Monday February 24 2014, @07:52PM

    by stderr (11) on Monday February 24 2014, @07:52PM (#6288) Journal

    By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

    0x4d 0x61 0x79 0x62 0x65 0x20 0x77 0x65 0x27 0x72 0x65 0x20 0x61 0x6c 0x72 0x65
    0x61 0x64 0x79 0x20 0x6f 0x75 0x74 0x73 0x6d 0x61 0x72 0x74 0x69 0x6e 0x67 0x20
    0x79 0x6f 0x75 0x20 0x62 0x79 0x20 0x70 0x72 0x65 0x74 0x65 0x6e 0x64 0x69 0x6e
    0x67 0x20 0x74 0x6f 0x20 0x62 0x65 0x20 0x72 0x65 0x61 0x6c 0x6c 0x79 0x2c 0x20
    0x72 0x65 0x61 0x6c 0x6c 0x79 0x20 0x64 0x75 0x6d 0x62 0x2e 0x20 0x44 0x69 0x64
    0x6e 0x27 0x74 0x20 0x74 0x68 0x69 0x6e 0x6b 0x20 0x6f 0x66 0x20 0x74 0x68 0x61
    0x74 0x2c 0x20 0x64 0x69 0x64 0x20 0x79 0x6f 0x75 0x2c 0x20 0x52 0x61 0x79 0x3f

    --
    alias sudo="echo make it yourself #" # ... and get off my lawn!