
posted by Dopefish on Monday February 24 2014, @02:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1) by drgibbon (74) on Monday February 24 2014, @10:06PM (#6328) Journal

    Haha wow, Rudy Rucker, the guy who wrote Saucer Wisdom [rudyrucker.com]? Fantastic book! Had no idea he taught AI. In any case, I'd certainly disagree with Mr Rucker. While it's an appealing concept on the surface, I just don't think it holds much weight (no denial required ;). The mystery of consciousness is that a conscious being's only proof of it is his/her/its own experience. Consciousness is evidenced in the first place by its own experiential content; nothing else. This is the divide between subjective and objective. The "consciousness defining thing" (the subjective experiential content) is not accessible to others, thus we can't properly prove it in others (apart from its self-evident nature in ourselves; followed by inference for humans, animals, etc).

    If you ascribe consciousness to a bot, then you surely must ascribe it to trees and plants. They seem to know where they are; they move towards the sun and so on (some catch flies, etc.). And should we then say that they are rudimentary consciousnesses and lack intelligence? Based on what? That they are slow? Confined in space? Have no brain? Perhaps we simply lack the means to communicate with them (they may be sources of wisdom for all we know). An expert meditator might be doing absolutely nothing, sitting completely still, and having a mystical experience. Is he in a lower state of consciousness because he's not actively carrying out "intelligent tasks"? I like Rudy Rucker, but I think his position on consciousness (based on what you've said) is somewhat facile. The mistake is that you simply throw away the core meaning of consciousness. I write a program, it has sensors for where it is in the room, and hey cool, it's conscious! You solve the problem by avoiding the difficulties of the thing.

    The question is not "does this thing have models of itself and react in its environment?"; the question is "is this thing subjectively experiencing itself in reality?" IMO, they are just not the same. To equate the two certainly makes the problem of consciousness a lot easier, but unfortunately it does this by rendering the question (and therefore the answer) essentially meaningless.

    --
    Certified Soylent Fresh!
  • (Score: 1) by melikamp (1886) on Monday February 24 2014, @11:06PM (#6356)

    Haha wow, Rudy Rucker, the guy who wrote Saucer Wisdom?

    The very same :) He was teaching computer science at San José[1] State till 2004, and, even though I am an avid science fiction fan, I did not find out about his writing until years later. Great class though.

    As for our disagreement, I hear what you are saying. But give it a few years, and you may find yourself returning to this simple idea :P

    [1] So, what's up with UTF support?

    • (Score: 1) by drgibbon (74) on Tuesday February 25 2014, @12:28AM (#6388) Journal

      "As for our disagreement, I hear what you are saying. But give it a few years, and you may find yourself returning to this simple idea :P"

      Well, I think that simple idea grossly misrepresents the terrain and provides a pseudo-solution that does more harm than good, but hey who knows? :P

      Possibly only Rucker's aliens could zip through time and tell us ;)

      --
      Certified Soylent Fresh!
      • (Score: 2) by mhajicek (51) on Tuesday February 25 2014, @12:56AM (#6404)

        I think it's a matter of semantics.