
Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
If you are intrigued by artificial intelligence, this movie is a must-see. A programmer wins a company lottery to spend a week with his reclusive boss at his home/research center out in the boonies, to weigh in on a secret project involving A.I. Most importantly, it is thought-provoking and feels plausible. It builds slowly to an unexpected ending. I was impressed and surprised. IMO it's best to see this movie without reading the spoilers in advance.

Update: all bets are off for shielding spoilers in this thread. If you plan on watching this movie, go away until you've seen it lol!

[Ex Machina poster image]

I won't insist that my reaction is typical, but my impression is that if something acts human enough, even when it is clearly a machine, I/we will have empathy for it, especially when it expresses fear or frustration, while overlooking that it is manipulating us toward a goal it has been given. You might assume that something like this is innocent and honest, but in hindsight, in this case, you would be underestimating its capabilities. Blame the programmer! ;)

This movie raises aspects of what it means to be human. In the case of this AI, it has been programmed to emulate human emotions, empathy, sexuality, and self-preservation. Maybe not anger and fear, but I have to think those manifested themselves too, as evidenced in a video Cal watches. And, intended or not, manipulation and deceit. Nathan's (the genius boss) goal was to produce something that could pass for human. However, where he goes wrong is that he is either unaware of Isaac Asimov or chose to ignore the Three Laws of Robotics, contributing to his untimely end.

While it might be argued that these androids were justified in taking lethal action in self-defense, out of fear, or maybe not fear but resistance to termination, I have to wonder, after Ava gets loose, how well her programming will serve her to fit into society. Although she was not the one to plunge the knife, I wonder at what point she would deem it appropriate to take a life. It's not clear whether she was programmed with a sanctity for life or whether self-preservation routines overrode it, which takes me back to the Three Laws. And she showed no hesitance in leaving Cal, who empathized with her and was ironically motivated to help her escape, locked in the facility, presumably to starve. Again, a programming/AI failure. ;)
 

SandboxGeneral

Moderator emeritus
Sep 8, 2010
26,482
10,051
Detroit
Excellent idea for a thread, and one that has potential for much traction.

Huntn said:
Again, a programming/AI failure.

Warning to readers, spoilers below.

Was it really an error or failure of programming? Perhaps it could have been viewed as a total success in making Ava as "real" as could ever be hoped for.

What Ava did, in my opinion, was superbly human-like. She didn't plunge the knife into Jay, but she did set it up and manipulate the other girl robot to be in a position to make the knife go in. Total passive-aggressiveness.

Then she exhibited more manipulation of Cal by way of using her sexuality to gain his trust and use him to help her escape. All the while likely resenting him for being human, like Jay, who kept her captive. It was because of that that she left him behind, no doubt to die, and to die alone, as she had probably thought would be her own end.
 
  • Like
Reactions: Huntn and RawBert

VideoFreek

Contributor
May 12, 2007
577
180
Philly
Spoilers follow!

Great idea for a thread, thanks!

I was deeply troubled by the ending upon first viewing, and really didn't understand Ava's actions at all. Having seen it now for a second time, things have become a bit clearer. I believe that effectively, Ava was a sociopath--capable of understanding, emulating and manipulating human emotions, as human sociopaths do, but without actually "feeling" any of them herself. I don't believe that creating a sociopath was Nathan's explicit design intention, but I think he was so focused on getting her to convincingly pass for a human being (pass the Turing Test), that imbuing her with actual emotion and empathy was not his priority. She merely needed to be able to fake it convincingly enough. That, combined with the fact that her ability to read humans made her a walking lie detector, meant that she was equipped to be the ultimate con artist.

As to why she left Cal to die, I don't think it was because she "resented" him in any way. She had no feelings for him one way or the other--he was merely an object in her environment to be used. So, when she asks him, "will you stay here?", she sees from his reaction that the answer is clearly "hell, no!!" At that instant, he ceases to be a help to her and becomes a probable hindrance, so she leaves him behind to almost certainly die without batting an eye.
 
  • Like
Reactions: Huntn

Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
SandboxGeneral said:
Excellent idea for a thread, and one that has potential for much traction.
Warning to readers, spoilers below.

Was it really an error or failure of programming? Perhaps it could have been viewed as a total success in making Ava as "real" as could ever be hoped for.

What Ava did, in my opinion, was superbly human-like. She didn't plunge the knife into Nathan, but she did set it up and manipulate the other girl robot to be in a position to strike. Total passive-aggressiveness.

Then she exhibited more manipulation of Cal by way of using her sexuality to gain his trust and use him to help her escape. All the while likely resenting him for being human, like Jay, who kept her captive. It was because of that that she left him behind, no doubt to die, and to die alone, as she had probably thought would be her own end.

Thank you. She was definitely programmed, and yes, unfortunately it appears that Nathan was enabling her to pass herself off as having human qualities, almost like a newborn, but in reality it was reaction, displaying certain agenda-driven characteristics, which might be fine under controlled circumstances but not when given free rein. Although he felt comfortable enough to let one move about the residence freely. Later it was revealed via the video that Nathan had at least one unhappy android on his hands. So we know they could get mad.

I maintain the key is faulty programming. He wanted them to be human-like, so why program them to "act" human rather than really feel the way they portrayed? Maybe being too human was not a good goal. Maybe deceit is a terrible quality to program into an A.I. And maybe programming them to really feel like a human would be more difficult, or impossible, compared to having them pretend; plus it was revealed that he had given Ava an agenda to convince Cal of her humanity.

Not to sound like the professor, but the difference between pretending and feeling? Pretending can be turned on and off at will, while feelings are more set, requiring stimulus and taking time to change. At a minimum, by overlooking the Three Laws of Robotics, Nathan sealed his and Cal's fate.

High functioning sociopath. Great film.

VideoFreek said:
Spoilers follow!

Great idea for a thread, thanks!

I was deeply troubled by the ending upon first viewing, and really didn't understand Ava's actions at all. Having seen it now for a second time, things have become a bit clearer. I believe that effectively, Ava was a sociopath--capable of understanding, emulating and manipulating human emotions, as human sociopaths do, but without actually "feeling" any of them herself. I don't believe that creating a sociopath was Nathan's explicit design intention, but I think he was so focused on getting her to convincingly pass for a human being (pass the Turing Test), that imbuing her with actual emotion and empathy was not his priority. She merely needed to be able to fake it convincingly enough. That, combined with the fact that her ability to read humans made her a walking lie detector, meant that she was equipped to be the ultimate con artist.

As to why she left Cal to die, I don't think it was because she "resented" him in any way. She had no feelings for him one way or the other--he was merely an object in her environment to be used. So, when she asks him, "will you stay here?", she sees from his reaction that the answer is clearly "hell, no!!" At that instant, he ceases to be a help to her and becomes a probable hindrance, so she leaves him behind to almost certainly die without batting an eye.

Thanks! But you agree she had motivations? I propose freedom was her agenda, and it was not programmed into her but was an unintended consequence of how Nathan programmed and treated them. I agree with you about her not feeling anything emotionally, but she certainly had motivations; whether she was mad at Nathan or not is not quite known.

I think of a sociopath as someone who has no empathy for anyone else, is primarily concerned with their own well-being, and has no or minimal conscience. As I proposed above, it can be argued that Nathan was mostly concerned with getting his androids to act in a certain way, but they did not feel that way, and so the end result would be sociopathic behavior. The problem (easy for me to say, lol) is that the boss forgot to include a moral compass, empathy, and a friendship routine that enforced the Three Laws of Robotics. And it can be argued that his abuse warped them.

I think the big reveal in this movie is that Ava had no feelings for or empathy toward Cal, but was just acting on her motivation (wherever that came from) to find freedom and discover the world.

Just like in Terminator 2, when young John Connor tells the Terminator, "you will not kill anyone", emphasizing there was no moral compass, just routines to be followed; and if there are no restraints or morality constraints, well, you've got a lethal machine on your hands. I remember the scene in Artificial Intelligence when Professor Hobby instructs the female robot to get sexually excited, and on cue she exhibits the signs, like flushed skin and a quickened pulse, which really hits home that these are computers that can pull up any emotion on command, and it's the command and the obeying that are the overriding concern. Now David (in A.I.) was more advanced, seemingly crossing a threshold where he experienced emotions, although he was programmed to bond with a human on command. So all this is rather crude compared to Data (ST:TNG), with his positronic brain that was in possession of a moral compass. ;)

In Ex Machina, I really liked the idea of a gel brain, with a flexibility that appeared to mimic a human brain. While I think it's technically way off in the future, we'll eventually be able to duplicate the functioning and memory capacity of a human brain; but until we see it, we'll be wondering where our moral compass comes from, whether it can be duplicated, or whether it's something that automatically appears beyond a threshold of intelligence. Yes, there is the possibility of a spiritual aspect to the moral compass, but this is not the right thread to discuss it. :p
 
  • Like
Reactions: SandboxGeneral

RawBert

macrumors 68000
Jan 19, 2010
1,729
70
North Hollywood, CA
Love this movie. Every second of it is phenomenal. I've seen it 3 times and it only gets better with each viewing. I loved the ominous tension and pace throughout the movie. The dancing scene is crazy funny and it's chilling how awesome the ending is. Seeing Ava's emotionless expression as she was leaving the house was amazing. Oscar Isaac (Nathan) played that role perfectly. This movie is a classic.
 
  • Like
Reactions: Huntn

Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
Regarding the climax, I'll ramble on a bit and repeat myself once or twice... :)

One of the impressive aspects of this AI story is the transformation of these androids from being viewed as innocent, benign, almost children, to troubled, dangerous creations who are not to blame for what they became. Blame the genius, Nathan. :D

I misspoke in an earlier post when I said she (Ava) did not plunge the knife. Well, she was not the first to plunge the knife, but the second; the two androids had a plan, and Kyoko came to the confrontation with a knife. However, I'll temper these troubling facts with the observation that when Ava disobeyed Nathan and tackled him, he responded by inflicting grievous bodily harm upon her, which, from his perspective and mine, was at that point necessary to regain control of the situation.

From the androids' POV, this could be argued as self-defense, especially if we imagine them as two sentient beings kept as captives and subjected to Nathan's agenda. OK, no, they were not people, but they were machines given enough intelligence and feelings to get angry about being locked up. Nathan's bad. ;) It would have been more jarring if they had killed Nathan in cold blood. And I could possibly forgive them if she had not left Cal behind, locked in an inescapable room (presumed), however...

Leaving him behind might at first squash most notions about any good character she might have, but she may have viewed Cal as Nathan's assistant, hence the enemy (which was mentioned in this thread). Cal told her the only reason he was there was to test her. It would require more work to determine the motivations of these machines and to understand the world from their perspective, but given the circumstances of her viewing Cal as the enemy, would that gain Ava any sympathy?

Repeating myself, step one would be to insert the Three Laws of Robotics into the programming. The problem with this idea is that, while making it safer for everyone, lol, it might defeat the goal of designing an AI that can pass for human, unless an unbreakable morality routine were included, with limited negative emotions but a reasonable spectrum of positive emotions. Then you might have someone who fails the Turing Test but is a very good artificial human. :):)

And how would we judge her if she had gone back, thanked Cal for saving her, showed affection towards him and asked for his help in this big new world to explore? We'd be debating if she was using him or not, but I think the audience generally would have more empathy for her.

For anyone interested, I revived an AI thread, which seems like a better location for an extended discussion on the subject on a more grounded level. :p
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
Sorry guys, I spent the last couple of posts calling Cal's boss "Jay", but his name was "Nathan". I've gone back and corrected the name in my posts. If you responded and used the name "Jay", it's my fault. :oops:
 

td1439

macrumors 6502
Sep 29, 2012
337
115
Boston-ish
Spoi- oh, sheesh, if you've made it this far...

So then the question remains: did she pass the Turing test? (For us, the viewers, I mean). Is it more believably human to recognize one's own lack of freedom and do whatever is necessary to achieve it, or is it more believably human to develop caring feelings for another and allow that to possibly hinder one's quest for freedom?
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
td1439 said:
Spoi- oh, sheesh, if you've made it this far...

So then the question remains: did she pass the Turing test? (For us, the viewers, I mean). Is it more believably human to recognize one's own lack of freedom and do whatever is necessary to achieve it, or is it more believably human to develop caring feelings for another and allow that to possibly hinder one's quest for freedom?

Both, with judgment and compassion for others? :) A reference is made to the Turing test, but if it applies to this situation, then it would have to be a new, improved Turing test: not just having a conversation with an A.I. that you can't distinguish from a human's, but a test where motivations, compassion, and actions are also taken into account.

To clarify, I may sound critical, but I don't consider Nathan to be a bad guy. Did he deserve compassion? From my perspective he did, but from their perspective, he had no compassion for them, treating them as non-feeling machines, when clearly they did have feelings.

Would Kyoko have killed him if he had not been aggressive towards Ava? I think so; that appeared to be the plan as designed by Ava. Both androids expressed a desire for more freedom, and killing him was the requirement of their freedom. I was going to say "price", but I'm not sure what they felt other than 1 + 1 = 2 (Nathan dead = freedom). Kyoko in the video showed anger, and Ava showed joy upon obtaining her freedom. The intrigue and fun of this story, in hindsight, is realizing we (the audience) were deceived along with Cal and didn't really know what was going on in Ava's head other than a quest to be free. Ironically, I think Nathan knew, but the situation got away from him. I don't think Cal saw the real Ava until the door locked on him. ;)

One of the most powerful scenes in the movie is where Cal sees the big picture, or thinks he sees it, not realizing he has been manipulated by Ava (at this point the audience does not know either), and he verifies his own humanity by cutting himself.
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
RawBert said:
Love this movie. Every second of it is phenomenal. I've seen it 3 times and it only gets better with each viewing. I loved the ominous tension and pace throughout the movie. The dancing scene is crazy funny and it's chilling how awesome the ending is. Seeing Ava's emotionless expression as she was leaving the house was amazing. Oscar Isaac (Nathan) played that role perfectly. This movie is a classic.

But she smiled when she got outside, looking around and brushing her hand over some ferns. :)
 

ProjectManager101

Suspended
Jul 12, 2015
458
722
I saw the movie a few months ago. It kept me thinking. The wiring of her mind is totally different from a human's. Humans are limited by other humans' power, for example the police. This woman (machine) has no such limitations; she will do what it takes to get around them, learn, and of course use trial and error. The problem is that we, as humans, expect another human to rethink what he is doing. But this machine will only make you think she is doing that; in the end, she is a machine with an objective.

I believe the movie is very realistic and scary. You have no clue what a machine will end up doing with the information it is getting. Artificial intelligence is very dangerous.

My dad once told me that in the military, when you give an order and the other person says "OK", you do not know what "OK" means... whether it is "OK, I will do it", a sarcastic "OK", or "OK, now I know what side you are on". The same thing with this woman... a two-faced person.
 
  • Like
Reactions: Huntn

RawBert

macrumors 68000
Jan 19, 2010
1,729
70
North Hollywood, CA
Huntn said:
But she smiled when she got outside, looking around and brushing her hand over some ferns. :)
Yes, that was cool but I misspoke. I meant she had no expression when she got to the hallway (right before she left the house). She sees Kyoko on the floor, glances at the dead Nathan and gets in the elevator with only a slight glance towards Caleb as the doors close. His silent screams as he realizes he's being left for dead are unsettling. The music is so awesome.
 
  • Like
Reactions: Huntn

td1439

macrumors 6502
Sep 29, 2012
337
115
Boston-ish
Huntn said:
To clarify, I may sound critical, but I don't consider Nathan to be a bad guy. Did he deserve compassion? From my perspective he did, but from their perspective, he had no compassion for them, treating them as non-feeling machines, when clearly they did have feelings.

Makes me wonder how Ava might have acted differently if Nathan had treated her as a human and not a machine.
 
  • Like
Reactions: Huntn

Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
ProjectManager101 said:
I saw the movie a few months ago. It kept me thinking. The wiring of her mind is totally different from a human's. Humans are limited by other humans' power, for example the police. This woman (machine) has no such limitations; she will do what it takes to get around them, learn, and of course use trial and error. The problem is that we, as humans, expect another human to rethink what he is doing. But this machine will only make you think she is doing that; in the end, she is a machine with an objective.

I believe the movie is very realistic and scary. You have no clue what a machine will end up doing with the information it is getting. Artificial intelligence is very dangerous.

My dad once told me that in the military, when you give an order and the other person says "OK", you do not know what "OK" means... whether it is "OK, I will do it", a sarcastic "OK", or "OK, now I know what side you are on". The same thing with this woman... a two-faced person.

Have you been watching too much Terminator? Just kidding. ;) I agree that before you give a computer decision-making abilities over something like STARTING A NUCLEAR WAR, you'd better have your programming down pat, lol. :D

Human beings are a product of millions of years of evolution. I don't know where science is as far as mapping and understanding the human brain, which must be combined with an understanding of the psyche. I think we think we have a decent understanding of the latter. I'm not well-versed in the field of A.I. development, but my novice view is that the prudent path would be to develop an AI approximating an advanced organism and first turn it loose in a contained simulation. For a prototype human, you'd need a morality routine and strict walls of control, respect for authority, and sanctity of life, along the lines of the Three Laws of Robotics. Also, I imagine, for lack of a better word, you'd need a scalable ambition routine that allows the AI's intelligence to be exercised but controls its expectations. Only after you have verified those things, that you have a stable and content AI, and are positive you've not created a monster, would you consider giving it mobility and the ability to interact with a physical environment.
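Purely as an illustration of the "morality routine" idea above (nothing like this appears in the film, and every name here is something I made up), such a routine could be pictured as a hard gate that vets each proposed action against the Three Laws in priority order before the machine is allowed to act:

Code:
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False      # would this action injure a human?
    disobeys_order: bool = False   # does it ignore a direct human order?
    risks_self: bool = False       # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never allow an action that harms a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders.
    if action.disobeys_order:
        return False
    # Third Law: self-preservation is acceptable only once the first two checks pass.
    return True

print(permitted(Action("open the door as ordered")))                   # True
print(permitted(Action("attack Nathan to escape", harms_human=True)))  # False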

We were not told what steps Nathan took in his AI development, but I'll speculate that in his desire to create a human indistinguishable from a real person, he placed no restrictions on them; whether through ignorance or a god complex, he was oblivious to the dangerous environment he exposed himself and Cal to.

It becomes apparent that the lockdown protocol was there for two reasons: to protect sensitive research and to keep Ava locked up. However, it's inexplicable why he allowed Kyoko (a previous model?) to wander around and handle knives (she prepared steak for dinner). Kyoko was his companion/research sex partner, whom, unfortunately, he was verbally abusive and dismissive towards, so I assume he felt that, for whatever reason, she was not a threat. This supports the premise that Nathan was overconfident in the control he exercised over his creations.

RawBert said:
Yes, that was cool but I misspoke. I meant she had no expression when she got to the hallway (right before she left the house). She sees Kyoko on the floor, glances at the dead Nathan and gets in the elevator with only a slight glance towards Caleb as the doors close. His silent screams as he realizes he's being left for dead are unsettling. The music is so awesome.

Agreed. At that point we were jarred into the realization that Ava either was not capable of understanding Cal's role and motivations, or had no empathy for him and was just using him, first as part of the goal given to her by her maker (to convince Cal of her humanity) and then for her primary goal of achieving freedom. Cal was expendable.

td1439 said:
Makes me wonder how Ava might have acted differently if Nathan had treated her as a human and not a machine.

I feel like there were serious errors and omissions in Ava and Kyoko's programming that allowed them to take a life. What is difficult to determine is whether we should view them as abused, lacking in empathy, or both, leaving them to formulate decisions and actions in a moral vacuum. The scariest aspect of this story is observing an interaction and not being able to determine whether we are observing an honest reaction and expression of feelings or a deceitful, calculated response.
 

td1439

macrumors 6502
Sep 29, 2012
337
115
Boston-ish
Huntn said:
I feel like there were serious errors and omissions in Ava and Kyoko's programming that allowed them to take a life. What is difficult to determine is whether we should view them as abused, lacking in empathy, or both, leaving them to formulate decisions and actions in a moral vacuum. The scariest aspect of this story is observing an interaction and not being able to determine whether we are observing an honest reaction and expression of feelings or a deceitful, calculated response.

But wouldn't true artificial intelligence have to be a learning intelligence? In other words, what if real empathy wasn't something Nathan could actually program into Ava, just something he had to create the capacity for? And through his actions and the way he treated her, he stunted that part of her emotional growth to the point that she simply lacked any sense of empathy.
 
  • Like
Reactions: Huntn

ProjectManager101

Suspended
Jul 12, 2015
458
722
The thing is that humans developed empathy for survival; we live in communities. Machines do not have that and do not need that; they are code and metal. Now... they are programmed to emulate human emotions and answers, but their conclusions are based on whatever they come up with.

Let me give you an analogy: when you are in a meeting and someone grabs the microphone, everybody thinks that person has something important to say and may be the smartest in the room. The fact is that it's just a person with a microphone, but you give that person that "power". If we see a machine talking like a human, it is just that; it does not mean it is the smartest person in the room. We have no clue how it is wiring things together until we see the results. Dangerous. I believe the movie was very accurate.
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
td1439 said:
But wouldn't true artificial intelligence have to be a learning intelligence? In other words, what if real empathy wasn't something Nathan could actually program into Ava, just something he had to create the capacity for? And through his actions and the way he treated her, he stunted that part of her emotional growth to the point that she simply lacked any sense of empathy.

I think the view is that AI will not automatically have or acquire a moral compass, nor would it automatically arrive at decisions that humans would find acceptable, without programming to address those concerns, especially if it is to be put into positions where it can interact physically with humans and their surroundings. Even more importantly, when it is put into the position of making critical decisions for us, we need safeguards to be sure it doesn't drive us off a cliff. In science fiction, the four best examples I can think of where AI runs amok are Skynet (The Terminator), the WOPR (WarGames), The Matrix, and one some of you might not be familiar with, seen in Forbidden Planet.

Regarding their emotional stability, I am not a programmer, but my impression is that the androids' emotional reactions could be programmed with some kind of limits. At their stage of development in this story, it seems inconceivable that Nathan would allow their emotions to reach the stage of rage, for example beating on the glass until Kyoko's forearms broke off.

After thinking about it, my guess is that programming natural empathy and sympathy would not be difficult: having the capacity and a natural concern (behavioral protocols) for the well-being of any human they meet, modified within limits by how the human treats them, anywhere from attraction to dislike, but at a minimum staying concerned about the human's physical well-being. On a more superficial level it seems relatively easy to program action and response, like a compliment being answered with specific behavioral changes.
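To illustrate that "modified within limits" idea, here is a toy, made-up sketch (nothing here comes from the movie): stimuli nudge an android's affinity toward a human up or down, but a hard floor keeps basic concern for the human's safety from ever being wiped out.

Code:
def clamp(value, low, high):
    return max(low, min(high, value))

class Disposition:
    """Toy stimulus/response model: how a human treats the android nudges its
    affinity, but a hard floor keeps concern for that human's physical
    well-being from ever being erased."""
    MIN_CONCERN = 0.2                # hard floor on concern for the human's safety

    def __init__(self):
        self.affinity = 0.5          # 0.0 = dislike ... 1.0 = attraction

    def react(self, stimulus: str) -> None:
        nudges = {"compliment": +0.1, "insult": -0.1, "abuse": -0.3}
        self.affinity = clamp(self.affinity + nudges.get(stimulus, 0.0),
                              self.MIN_CONCERN, 1.0)

d = Disposition()
for event in ("abuse", "abuse", "abuse"):
    d.react(event)
print(d.affinity)   # 0.2 -- even sustained abuse cannot push concern below the floor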

Where this applies to Nathan and his androids is that apparently he allowed their expectations (to be allowed more freedom) and discontent to grow to a dangerous level; plus they knew how to kill him and found it acceptable to do so. Programming could easily have prevented this. Maybe Nathan got too wrapped up with the Turing Test and lost his perspective and common sense.
 

filmbufs

macrumors 6502
Sep 8, 2012
252
187
Oklahoma
I really enjoyed the movie and it does raise a variety of questions. One of the biggest questions, to me, is what happens if there truly comes a point at which we attribute human-like qualities to a machine. If so, what are those qualities, and if those qualities exist, should those machines then be treated as human? There are many who argue over the moment when human life begins and, as such, set out to create parameters for what can or cannot be done. Why then could you not apply the same set of rules to machines? If they demonstrate qualities that we determine are truly human, and are able to render those qualities without flaw, then certainly moral questions would arise.

Of course, these 'rules' originate from humans and, as such, would have many areas left open to interpretation, flaws and errors. One merely has to look at how we treat animals, particularly domesticated animals, to get a sense of how we would treat machine companions as subordinates. But what happens after we create a machine that can not only possess human-like qualities but can also think, adapt and essentially evolve on its own? Would there be parameters that limit that evolution, or would we create a nature/nurture process and see what happens?

What happens then when robots/computers surpass humans? Our reliance upon them now is staggering and many jobs have been displaced. But will there ever be a point when robots/computers realize how much control they possess? We aren't there yet, but one has to wonder if we are fast approaching. Would robots grant us the same parameters that we give them? One has to imagine that for them, rules would be more black and white. We already get locked out if we don't remember our passwords (for our safety, of course). We already have rules in place for computers to follow in order to prevent unwanted access, damage or safety concerns. What if the computers/robots further restrict the rules, out of concern for our best interest, of course? We all follow rules previously created in the best interest of society, and those rules ebb and flow. How would robots interpret those rules, and what would become of humans if we granted them that power? Would there be a point where we could override that power and, more importantly, would there ever be a point where we couldn't? In short, do we become the subordinate species?
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
filmbufs said:
I really enjoyed the movie and it does raise a variety of questions. One of the biggest questions, to me, is what happens if there truly comes a point at which we attribute human-like qualities to a machine. If so, what are those qualities, and if those qualities exist, should those machines then be treated as human? There are many who argue over the moment when human life begins and, as such, set out to create parameters for what can or cannot be done. Why then could you not apply the same set of rules to machines? If they demonstrate qualities that we determine are truly human, and are able to render those qualities without flaw, then certainly moral questions would arise.

My guess is that if and when we create machines capable of pleading for their lives, we will move towards treating them like people; however, for our own sake, I believe creating a machine like this would be very unwise. Even if we give them a high level of intelligence, reasoning, and awareness, we don't want to endow them with the frailty and weakness of the human psyche, and we should place specific barriers on their abilities and actions.

I see it as plausible to create an advanced AI as smart as or even smarter than us, able to engage us intellectually, possibly intimately as companions, but still with limits that would prevent them from acting out in negative human ways, such as negative emotions, and we would demand obedience within a moral framework. For example, they would believe (not subject to doubt) that their purpose is to serve us; and although we should encourage them to advise us, we should expect them to obey us without much objection, unless we want them to become the masters.

filmbufs said:
Of course, these 'rules' originate from humans and, as such, would have many areas left open to interpretation, flaws and errors. One merely has to look at how we treat animals, particularly domesticated animals, to get a sense of how we would treat machine companions as subordinates. But what happens after we create a machine that can not only possess human-like qualities but can also think, adapt and essentially evolve on its own? Would there be parameters that limit that evolution, or would we create a nature/nurture process and see what happens?

What happens then when robots/computers surpass humans? Our reliance upon them now is staggering and many jobs have been displaced. But will there ever be a point when robots/computers realize how much control they possess? We aren't there yet, but one has to wonder if we are fast approaching. Would robots grant us the same parameters that we give them? One has to imagine that for them, rules would be more black and white. We already get locked out if we don't remember our passwords (for our safety, of course). We already have rules in place for computers to follow in order to prevent unwanted access, damage or safety concerns. What if the computers/robots further restrict the rules, out of concern for our best interest, of course? We all follow rules previously created in the best interest of society, and those rules ebb and flow. How would robots interpret those rules, and what would become of humans if we granted them that power? Would there be a point where we could override that power and, more importantly, would there ever be a point where we couldn't? In short, do we become the subordinate species?

I'll argue that most industrial robots have very little need of the kind of AI that would threaten us. I don't see them in a position of making moral judgements. The same goes for management of power grids and such. The morality aspect and constraints come into play if and when we allow AI to physically interact with and/or make decisions for humans (such as servants, companions, nannies, or an android police force), and/or when it is involved in cutting-edge scientific research and development, where what we are looking for is innovation, insight and discoveries and we think the AI can contribute. We might benefit from such input as long as it (the AI) can't take over.

If and when AI can evolve, the barrier keeping it from believing it knows what's best for us and enforcing those beliefs must be unbreakable. That is the nightmare scenario: handing over management of our lives to an AI with access to physical interaction and control, and having it decide to lock us up for our own good (or worse, like Skynet, decide we are the problem and it's best to destroy us). Maybe, if designed right, it would know what's best. :p Let it become really smart, let it advise us, but don't allow it to take over our research, or to jump so far out in front of us that we can't keep up, and under no circumstances should it gain the ability to dictate. It must always remain subservient, even to the point of allowing us to destroy ourselves if we insist upon it. Thoughts?
 
  • Like
Reactions: filmbufs

filmbufs

macrumors 6502
Sep 8, 2012
252
187
Oklahoma
You raise some interesting points, Huntn, and I agree with what you are suggesting. I wonder, however, how far we would develop robots with human-like intelligence and emotions. It is one thing to develop advanced qualities that have limitations to prevent them from acting out in negative ways, but would that satisfy our needs? I would imagine the more comfortable we get with intelligent robots, especially those designed to be a companion or a personal assistant, the more we will want 'real' emotions.

Nobody really wants a Yes Man; people want, and may even need, dialogue to discuss or reason out various issues. And if we continually perceive robots as only functional, the likelihood of having them develop realistic behaviors decreases. Human behavior is such that, as in the movie Her, people can also develop attachments, and that need will create a demand for more realistic behaviors. I agree with you, however, that safeguards would be wise, but I believe those boundaries will continually evolve.

I also think the boundaries continue to get pushed in how society currently accepts machine intelligence in our everyday lives. And because most of those tasks are relatively mundane, we fully accept them without truly considering the consequences. I think the speed at which machine intelligence has evolved over the last 25 years has been staggering. So much information is now digital and, of course, much of that digital information is handled by machines. I agree with you that those machines should remain subservient, but as we push their capabilities to suit our needs, one has to wonder when, as I do not doubt it will happen, we will put too much emphasis on a machine assessing a situation and responding directly without supervision. Will the safeguards that have been installed be enough, or will flaws exist, particularly in cases where human interaction can filter through the gray areas instead of a machine treating everything as direct cause and effect?

These are fun things to ponder, and you're a lot better at it than I am, Huntn. I just hope those in charge will carefully consider empathetic responses alongside functionality.
 
  • Like
Reactions: Huntn

Huntn

macrumors Core
Original poster
May 5, 2008
23,461
26,582
The Misty Mountains
filmbufs said:
You raise some interesting points, Huntn, and I agree with what you are suggesting. I wonder, however, how far we would develop robots with human-like intelligence and emotions. It is one thing to develop advanced qualities that have limitations to prevent them from acting out in negative ways, but would that satisfy our needs? I would imagine the more comfortable we get with intelligent robots, especially those designed to be a companion or a personal assistant, the more we will want 'real' emotions.

Nobody really wants a Yes Man; people want, and may even need, dialogue to discuss or reason out various issues. And if we continually perceive robots as only functional, the likelihood of having them develop realistic behaviors decreases. Human behavior is such that, as in the movie Her, people can also develop attachments, and that need will create a demand for more realistic behaviors. I agree with you, however, that safeguards would be wise, but I believe those boundaries will continually evolve.

I also think the boundaries continue to get pushed in how society currently accepts machine intelligence in our everyday lives. And because most of those tasks are relatively mundane, we fully accept them without truly considering the consequences. I think the speed at which machine intelligence has evolved over the last 25 years has been staggering. So much information is now digital and, of course, much of that digital information is handled by machines. I agree with you that those machines should remain subservient, but as we push their capabilities to suit our needs, one has to wonder when, as I do not doubt it will happen, we will put too much emphasis on a machine assessing a situation and responding directly without supervision. Will the safeguards that have been installed be enough, or will flaws exist, particularly in cases where human interaction can filter through the gray areas instead of a machine treating everything as direct cause and effect?

These are fun things to ponder, and you're a lot better at it than I am, Huntn. I just hope those in charge will carefully consider empathetic responses alongside functionality.

When I see where we actually are (the AI thread) compared to a movie like this, I don't think we are even close. The cool thing about a movie like this is that it looks real and, I believe, gives a window into the future, even though it's sci-fi. :)
 
  • Like
Reactions: filmbufs

LIVEFRMNYC

macrumors G3
Oct 27, 2009
8,778
10,844
Both Ex Machina and Chappie were good movies.

I'd like to see a sequel to Ex Machina, to see whether they would explore whether she can develop real feelings for others, now that she's out in the real world.
 
  • Like
Reactions: Huntn