
posted by Dopefish on Monday February 24 2014, @02:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
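
For readers wondering what "encoding" that kind of fact might look like in practice, here is a minimal, purely illustrative Python sketch (the class and function names are hypothetical, not anything IBM or Google has described): a sale is stored as an explicit event structure, and the ownership transfer is derived from it rather than pattern-matched from the sentence.

    # Hypothetical sketch: represent "John sold his red Volvo to Mary"
    # as a structured event and derive the facts a human reader infers.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SaleEvent:
        seller: str
        buyer: str
        item: str

    def implications(event):
        # Facts implied by any sale: ownership and payment change hands.
        return [
            f"{event.seller} no longer owns {event.item}",
            f"{event.buyer} now owns {event.item}",
            f"{event.buyer} paid {event.seller} for {event.item}",
        ]

    event = SaleEvent(seller="John", buyer="Mary", item="the red Volvo")
    for fact in implications(event):
        print(fact)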

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1) by drgibbon (74) on Wednesday February 26 2014, @03:14AM (#7170) Journal

    Understood. But again, it does not follow that a model of an entity that has consciousness will necessarily have consciousness itself. I agree that people and animals build what could be called cognitive models of the environment and of themselves, and it's very interesting, but it does not mean that we should equate the phenomena of conscious experience with these models. Although having a sense of self may be brought to mind when one thinks about consciousness, there are states of consciousness where the self does not even exist! So yes, self-awareness (in the usual sense of "my mind", "my body", "my life", and so on) can be divorced from conscious experience. What I was really getting at there was your statement:

    "One could make the argument that consciousness ("self-awareness" at least) is largely a sufficiently detailed model of oneself".

    I would say that the fundamental aspects of consciousness are not captured by this definition. If we confuse the things that seem to rely on consciousness (e.g. self-awareness, intelligence, and so on) with the phenomena of consciousness itself (i.e. the capacity to subjectively experience reality), we run into problems. Not only does the word consciousness cease to have any precise or useful meaning, but we are led down the (IMO) garden path of attributing consciousness to anything that can mimic these models.

    I think that the development of children and so on is a fascinating area, but studies in that direction would seem to be more properly called cognitive/developmental, rather than of consciousness per se.

    --
    Certified Soylent Fresh!
  • (Score: 1) by Namarrgon (1134) on Thursday February 27 2014, @05:29AM (#7861)

    "the phenomena of consciousness itself (i.e. the capacity to subjectively experience reality)"

    Not really the definition I had in mind - but that's part of the problem; nobody really knows.

    I would have said that in order to subjectively experience anything, one had to be aware of oneself first, and be aware of the effect that experience has on oneself - which to me implies a self-model.

    But I'll happily concede my opinion is no better than any other, and we won't really know anything much for sure until we try. Which I guess was the original point; it's an approach worth trying, and we'll see how it turns out. At the least, we'll learn something.

    --
    Why would anyone engrave Elbereth?