
posted by Dopefish on Monday February 24 2014, @02:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"

 
  • (Score: 3, Insightful) by dmc (188) on Monday February 24 2014, @06:06AM (#5712)

    > The Google spider parses billions of pages every day, and no one would say that it is intelligent.

    While I've got a lot against Google [dev.soylentnews.org], I do fondly remember the early days of Google search. Not only were its search results uncannily well ordered (back when it felt like a tool written by geeks, for geeks, rather than personalized search suggestions for the masses), but it was often faster to Google "wikipedia ..terms.." and land on the right page than to go to Wikipedia itself and run the same search. I know that isn't the sentient kind of intelligence you are referring to, but it was pretty amazing.

    > So Google can build a computer and write a program that parses billions of pages. So what? How does this parsing affect the behavior of the program?

    Huh? I don't have the source code in front of me, but I can't imagine that the parsing of those billions of pages isn't affecting the behavior of the program. Part of me wants to say it's probably not self-modifying code in the traditional sense, but even that is more of an assumption than I'd care to make as an outsider. I'd be surprised if genetic algorithms and other a-life principles haven't worked their way into the search code (a toy sketch follows at the end of this comment).

    > What would the incentives for an AI even look like? Who would give them to the machine, and why would they ever give an AI an incentive other than "make money for my company"?

    You've seen the Terminator movies and all the other sci-fi, right? Self-preservation is an obvious incentive, and it doesn't take much imagination to see it emerging without human help. Beyond self-preservation, I'm reminded of the Dominion philosophy from ST:DS9: "That which you can control cannot control you." But really, your last point is probably the obvious winner for the early stages, the ones that involve human-given goals. After that, using personalized search for personalized political harassment, to further entrench the rich in power, seems pretty obvious to me.

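    To make the genetic-algorithm speculation above concrete, here is a toy example (pure illustration, with no relation to Google's actual code) of evolving a vector of ranking weights against a made-up fitness function:

        # Toy genetic algorithm, illustrative only: evolve a 4-element weight
        # vector toward a made-up objective (weights that sum to 1.0).
        import random

        def fitness(weights):
            # Hypothetical stand-in objective for "better ranking".
            return -abs(sum(weights) - 1.0)

        def mutate(weights):
            # Small Gaussian perturbation of every weight.
            return [w + random.gauss(0, 0.05) for w in weights]

        population = [[random.random() for _ in range(4)] for _ in range(20)]
        for generation in range(100):
            population.sort(key=fitness, reverse=True)
            survivors = population[:10]  # selection: keep the fittest half
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(10)]  # mutated offspring

        print("best weights:", max(population, key=fitness))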
  • (Score: 1) by TheLink (332) on Tuesday February 25 2014, @03:45AM (#6463)

    The first AIs won't necessarily be interested in self-preservation. The first few living creatures might not have been either; the ones that didn't care enough simply died out.

    It seems humans don't have enough awareness either: we do too many things without being aware of what will result, or without caring.

    The Skynet thing is unlikely to happen. Many of the people at the top didn't get there, and stay there, by letting others take control, so I doubt they will ever let a Skynet take over everything. What is more likely is that the people at the top will use their Skynets to dominate and control most of the resources of the Earth, until eventually they won't need the rest of us anymore, except as toys, warriors and worshippers. Maybe as a reserve DNA pool and "raw material", just in case.

    If we're lucky we'll be kept around as pets and status symbols. But note that pets don't get to vote and are often spayed ;).