

posted by LaminatorX on Saturday March 01 2014, @08:30AM   Printer-friendly
from the Call-me-once-you've-quantified-'love' dept.

AnonTechie writes:

"Can a Computer Fall in Love if It Doesn't Have a Body? Much has been written about Spike Jonze's Her, the Oscar-nominated tale of love between man and operating system. It's an allegory about relationships in a digital age, a Rorschach test for technology. It's also premised on a particular vision of artificial intelligence as capable of experiencing love.

Poetic license aside, is that really possible?"

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Informative) by bucc5062 on Saturday March 01 2014, @08:47AM

    by bucc5062 (699) on Saturday March 01 2014, @08:47AM (#9081)

    Now really guys, I thought we talked about this before. I'm going to go with, "Are they paying attention?" And since that question's NOT a headline, I will answer yes.

    Betteridge's Law applied...No.

    Now, it's off to the corner, Mr/Ms Editor, for not only did you present a story in a manner more commensurate with ... well ... you know, but it is really a dumb one as well. A rather enjoyable discussion was held on the topic of AI and when that will happen, and Love, as I understand it, will need to come after, not before, sentience.

    No...No...and one more time...No.

    --
    The more things change, the more they look the same
  • (Score: 1) by HiThere on Saturday March 01 2014, @01:02PM

    by HiThere (866) on Saturday March 01 2014, @01:02PM (#9161)

    I've *got* to disagree. The emotional structure of the program needs to be in place BEFORE full sentience appears. Afterwards it would be immoral to try to coerce the change...and probably lead to a justifiable revolt.

    Of course, this sort of depends on what you are willing to call sentience. A dog is capable of love. So is a cow (though they rarely love people). I'm not sure about a mouse...I haven't gone looking for evidence, but my suspicion is that it is capable too. I believe, with insufficient evidence, that the capability for love is inherent in the basic mammalian brain structure. I suspect that birds, also, are capable of love, but that's less certain (and partially depends on your precise definition).

    In mammals emotions are mediated through chemicals as well as neural firings, but I can see no reason why this should be a requirement. Mirror neuron equivalents, however, may be. Or possibly the structure could be equivalenced at a higher level of abstraction.

    N.B.: Merely having emotions doesn't imply that the emotions will be cognate to those of humans, or of any other animal. But it *is* a requirement. (I suspect that they will generally be quite different unless careful work is done to make them cognate...and that the work will be necessary, because otherwise people won't understand them.)

    --
    Put not your faith in princes.
    • (Score: 1) by SlimmPickens on Saturday March 01 2014, @08:17PM

      by SlimmPickens (1056) on Saturday March 01 2014, @08:17PM (#9289)

      "The emotional structure of the program needs to be in place BEFORE full sentience appears."

      I think there will be a very wide variety of ways to bootstrap a mind, and in thirty years, regardless of what laws exist, people will have enough computing power on their desks for a lot of unethical experiments that no one else need know about.

      "In mammals emotions are mediated through chemicals as well as neural firings, but I can see no reason why this should be a requirement. Mirror neuron equivalents, however, may be. Or possibly the structure could be equivalenced at a higher level of abstraction."

      "Neural firings" being a chemical phenomenon aside, I think that basically everyone in AGI agrees with this, and would even lean on the side of algorithmic equivalents. Demis Hassabis (the guy Google just paid £400m for Deep Mind) can be found on Youtube advocating what he calls something like the "middle road algorithmic approach, guided where possible by neuroscience" and Ben Goertzel essentially does that, even if he hasn't had a great deal of neuroscience to guide him in the many years he's been creating AGI.

      • (Score: 1) by HiThere on Sunday March 02 2014, @08:59PM

        by HiThere (866) on Sunday March 02 2014, @08:59PM (#9846)

        I think you misunderstand my proposal. I'm proposing that neural firings, and even neurons, are the wrong level to model; you need to model what they are doing. Rather like compilers mapping the same code to different processor designs: the assembler-level code may be very different when produced to run on two different CPUs, particularly ones with very different abstractions that they, in turn, reduce to bit manipulations (down at the level where half-adders, etc. work). And, of course, it's even possible not to have the base level implemented in terms of bits. (In the early 1950s there was a computer that worked in base 10 at the lowest level, i.e., storing 10 different voltage levels in the same cell.)

        So you can model things at lots of different levels and achieve approximately the same results, and I suspect that the neuron level is too low a level to choose to model when you're building an AI.
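
        As an illustrative aside, here is a minimal Python sketch of that "same behaviour, different substrate" point; the function names are invented purely for illustration, not taken from the discussion. The same high-level operation (integer addition) is implemented once through the language's own abstraction and once out of half-adder-style bit manipulation, with identical observable results.

        # Illustrative sketch only: two implementations of the same high-level
        # behaviour at very different levels of abstraction.

        def add_high_level(a: int, b: int) -> int:
            # "Symbolic" level: rely on the abstraction the language provides.
            return a + b

        def add_bit_level(a: int, b: int) -> int:
            # "Substrate" level: repeated half-adder logic (XOR for the sum bits,
            # AND plus a shift for the carries) on non-negative integers.
            while b:
                carry = (a & b) << 1
                a = a ^ b
                b = carry
            return a

        # Both are interchangeable at the level a caller cares about, much as one
        # source program compiled for two different CPUs yields very different
        # machine code with the same behaviour.
        for x, y in [(0, 0), (7, 5), (123, 456)]:
            assert add_high_level(x, y) == add_bit_level(x, y)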

        --
        Put not your faith in princes.
    • (Score: 2) by SMI on Sunday March 02 2014, @04:56AM

      by SMI (333) on Sunday March 02 2014, @04:56AM (#9460)

      THIS is why I enjoy SoylentNews and this community so much. I'm inclined to believe that pretty much anything could be presented, and ridiculously good debate will ensue. Thanks guys. :)

  • (Score: 3, Informative) by mattie_p on Saturday March 01 2014, @04:34PM

    by mattie_p (13) on Saturday March 01 2014, @04:34PM (#9218) Journal

    We pay attention. I don't think we're deliberately toying with you. One of the things the editors have worked out is the principle of minimum changes. We can make any changes we want and still have it say that the submitter wrote it. We don't want to do that. We would much rather post something that changes as little as possible about the submission, even if it isn't perfect.

    We're still striking that balance, as you can see. But, for now, I'd rather err on the side of caution and leave the words and flow of thought intact than make changes that abuse our privileges within the system.

    Thanks for reading, and keep up the constructive criticism! ~mattie_p

    • (Score: 2) by SMI on Sunday March 02 2014, @04:58AM

      by SMI (333) on Sunday March 02 2014, @04:58AM (#9462)

      This is very informative, thank you mattie_p. Keep up the good work!