
posted by Dopefish on Monday February 24 2014, @02:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1) by WillR (2012) on Monday February 24 2014, @12:51PM (#5951)

    "The AI we have now learns much much faster than that. Ray pointed out many years ago that the switching speed of a transistor was already 1000 times faster than the switching speed of a neuron, plus there's parallelisation to exploit."

    And yet here we sit, a predicted 20-30 years away from conscious software. Same as in the 1990s, and the 80s, and the 70s.

    It's like the problem is just not amenable to being solved by throwing bigger storage and faster neural nets at it, or something.