
posted by Dopefish on Monday February 24 2014, @02:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
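
To make Kurzweil's distinction concrete, here is a minimal sketch in Python of the kind of structured encoding he describes: a sale stored as a transfer of ownership rather than as a string to pattern-match. All of the names here (Entity, Sale, owner_after) are illustrative assumptions, not the actual representation used by Watson or Google.

    # A minimal, hypothetical sketch of semantic encoding (not Watson's or
    # Google's actual system): "John sold his red Volvo to Mary" stored as a
    # structured event, so the ownership transfer can be derived by rule
    # instead of guessed by pattern matching.
    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        attributes: dict = field(default_factory=dict)

    @dataclass
    class Sale:
        seller: Entity
        buyer: Entity
        item: Entity

    def owner_after(event: Sale) -> Entity:
        # The implication a pattern matcher misses: a sale transfers
        # ownership of the item from the seller to the buyer.
        return event.buyer

    john = Entity("John")
    mary = Entity("Mary")
    volvo = Entity("Volvo", {"color": "red"})

    sale = Sale(seller=john, buyer=mary, item=volvo)
    print(f"The {sale.item.attributes['color']} {sale.item.name} "
          f"now belongs to {owner_after(sale).name}")  # -> Mary

Asking owner_after(sale) yields Mary because the rule is encoded; a bag-of-words reader has no such rule to apply.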

 
  • (Score: 4, Interesting) by Anonymous Coward on Monday February 24 2014, @06:11AM (#5714)

"Is there reason to exist after our machines do it better?"

No, the real question is: Will the machines see a reason to let us exist and enjoy our lives? If we really get close to conscious machines, we'd better make damn sure they do.

The first step toward that is to make machines that are able to suffer. If they don't know what it means to suffer, they will have no problem making us suffer. Also, they need to have empathy: they must recognize when humans suffer, and suffer themselves when they do.

  • (Score: 1) by SlimmPickens (1056) on Monday February 24 2014, @07:10AM (#5735)

    "machines that are able to suffer...they need to have empathy"

I think the software people will be rather enlightened: they will mostly choose to be empathetic, and will probably value cooperation highly. I also think that, since we created them and have pondered things like the Planck length, we have probably passed a threshold where they won't treat us the way we treat ants.

    Ray thinks it will be several million years before the serious competition for resources begins.

  • (Score: 5, Interesting) by tangomargarine (667) on Monday February 24 2014, @12:17PM (#5920)

    Quote from somewhere I can't remember:

    "The AI does not hate or love you; it can simply use your atoms more efficiently for something else."

    --
    A Discordian is Prohibited of Believing what he reads.
    • (Score: 2, Interesting) by HiThere (866) on Monday February 24 2014, @04:44PM (#6154)

That belief is nearly as common as assuming the AI will have human emotions, and both are wrong. Emotion is one of the necessary components of intelligence: it's a short-cut heuristic for solving problems you don't have time to reason out, which is most of the ones you haven't already solved. But it doesn't need to be, and almost certainly won't be, the same as human emotion, or even cat emotion.

The AI did not evolve as a predator, so it won't have a set of evolved predatory emotions. It didn't evolve as prey, so it won't have a set of evolved prey emotions. It will have a kind of emotion we have never encountered before, but one selected to appear comfortable to us. It may end up most similar to that of a spaniel or lap-dog, though even those are built around predatory emotions.

      --
      Put not your faith in princes.
      • (Score: 2) by mhajicek (51) on Tuesday February 25 2014, @12:46AM (#6401)

        Emotion is indeed a shortcut for intelligence, but a flawed one. For us it's a generally beneficial compromise. It need not be so for an intelligence with sufficient computational power.

  • (Score: 2, Interesting) by Namarrgon (1134) on Monday February 24 2014, @10:36PM (#6344)

There are two good reasons for optimism.

First, AIs do not compete with us for most of the resources we want. They don't care about food or water, and they don't need prime real estate. The only commonality is energy, and ambient energy is abundant enough that it's easier and far more open-ended to collect more of it elsewhere than to launch a war against the human species to take ours.

Second, without the distractions of irrational emotions or fears over basic survival, they will see clearly that the universe is not a zero-sum game. There's plenty of space, matter and energy out there, and the most effective way of getting more is to work with us to expand the pie. Fighting us would waste the resources we both already have, and they'd still be stuck with the relatively limited amounts available now. It's far more cost-effective to invent better technology and collect more resources.

Humans value empathy because, as a species, we learned long ago the advantages of working together rather than against each other; empathy is the best way of overcoming our animal tendency toward selfish individualism and promoting a functional society. AIs lack that law-of-the-jungle heritage (evolved AI algorithms might be an exception), so there's no reason to assume they can't also see the obvious benefits of trade and cooperation.

    --
    Why would anyone engrave Elbereth?