
Huntn

macrumors Core
Original poster
May 5, 2008
23,484
26,601
The Misty Mountains
Both Ex Machina and Chappie were good movies.

Would like to see a sequel to Ex Machina to see if they are going to explore if she can develop real feelings for others since she's out in the real world now.

I agree a sequel could be neat. But unless she dragged a bag with her (that they did not show) full of charging equipment and some spare parts, and has access to the boss's credit cards, it seems like it would be a short adventure. But if there is a sequel, of course she did! ;)
 
Jul 4, 2015
4,487
2,551
Paris
Well made and good on the surface. But as soon as you start asking questions the film collapses.

One man in a house did all that work?

A billionaire genius who can create an advanced AI but he can't vet his beta testers properly?

Robots who want freedom and start to rebel. That's old now.

She puts that skin on really quickly.

And then...after she reaches her boring goal of becoming a people watcher...what then?

I prefer the robot in Interstellar. It's a much more realistic and thoughtful example of where AI will actually go: a machine that finds its 'purpose in life' by performing the duties it was built for. It has no need for flights of fantasy or rebellions or trying to kill its makers.
 

LIVEFRMNYC

macrumors G3
Oct 27, 2009
8,780
10,844
I agree a sequel could be neat. But unless she dragged a bag with her (that they did not show) full of charging equipment and some spare parts, and has access to the boss's credit cards, it seems like it would be a short adventure. But if there is a sequel, of course she did! ;)

Well, she was manipulative enough to get out; I don't see why she would be naive enough not to take necessities.
 
  • Like
Reactions: Huntn

Huntn

macrumors Core
Original poster
May 5, 2008
23,484
26,601
The Misty Mountains
Well made and good on the surface. But as soon as you start asking questions the film collapses.

One man in a house did all that work?

A billionaire genius who can create an advanced AI but he can't vet his beta testers properly?

Robots who want freedom and start to rebel. That's old now.

She puts that skin on really quickly.

And then...after she reaches her boring goal of becoming a people watcher...what then?

I prefer the robot in Interstellar. It's a much more realistic and thoughtful example of where AI will actually go: a machine that finds its 'purpose in life' by performing the duties it was built for. It has no need for flights of fantasy or rebellions or trying to kill its makers.

I could counter some of your arguments, but I'll just say that for the purposes of this story, it's a new and improved Turing Test: the goal is not to be human, but to be indistinguishable from human, in this case a scheming, lying, treacherous human that's expert at hiding its motivations until it lowers the boom on the unsuspecting. But that's the programmer's fault! :D
 
Last edited:
Jul 4, 2015
4,487
2,551
Paris
I could counter some of your arguments, but I'll just say that for the purposes of this story, it's a new and improved Turing Test: the goal is not to be human, but to be indistinguishable from human, in this case a scheming, lying, treacherous human that's expert at hiding its motivations until it lowers the boom on the unsuspecting. :D
I preferred a certain Voight-Kampff test, especially when one replicant was testing another replicant and both of them thought they were human ;)
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,484
26,601
The Misty Mountains
Well made and good on the surface. But as soon as you start asking questions the film collapses.

One man in a house did all that work?

A billionaire genius who can create an advanced AI but he can't vet his beta testers properly?

Robots who want freedom and start to rebel. That's old now.

She puts that skin on really quickly.

And then...after she reaches her boring goal of becoming a people watcher...what then?

I prefer the robot in Interstellar. It's a much more realistic and thoughtful example of where AI will actually go: a machine that finds its 'purpose in life' by performing the duties it was built for. It has no need for flights of fantasy or rebellions or trying to kill its makers.

I was also impressed with the way they portrayed the robot in Interstellar.

At first I disliked that robot, because despite what my eyes saw, I questioned how physically practical this intelligent, self-mobile slab would realistically be. I doubted he could navigate obstacles, even a simple set of stairs, and he couldn't even handle a chess board or interact with equipment in a physical way, unless he had hidden appendages that I've forgotten about. Did he? Despite this, his personality won me over. ;)
 

Suture

macrumors 65816
Feb 22, 2007
1,002
212
Watched the trailer for this on the Apple TV last night. Added it to the list. Hoping to sit down to watch it sometime this weekend, or next week. Looks interesting.
 

Gutwrench

Suspended
Jan 2, 2011
4,603
10,530
I fell asleep watching it.

However, it's amazing how CGI in the hands of good artists and smart producers who know how and when to use it can create some incredible scenes/pictures. A whole different topic, I know.
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,484
26,601
The Misty Mountains
I fell asleep watching it.

However, it's amazing how CGI in the hands of good artists and smart producers who know how and when to use it can create some incredible scenes/pictures. A whole different topic, I know.

This is definitely not an action film. It's powered by the interpersonal dynamics between Caleb and his boss, and by coming face to face with a plausible representation of an advanced AI who reveals it's not happy. The interesting thing is that this movie grabbed me just as much as the best action film.
 

filmbufs

macrumors 6502
Sep 8, 2012
252
187
Oklahoma
This is definitely not an action film. It's powered by the interpersonal dynamics between Caleb and his boss, and by coming face to face with a plausible representation of an advanced AI who reveals it's not happy. The interesting thing is that this movie grabbed me just as much as the best action film.

I can appreciate movies of all genres IF they are well-made. But I agree, 'quiet' movies can really captivate you with a wonderful story and excellent production values.
 
  • Like
Reactions: Huntn

AlkaiserSoze

macrumors newbie
Jul 22, 2015
6
3
United States
Very intriguing (and a little disturbing) science fiction film. Love Alex Garland's writing as always, and there are strong performances throughout. Would very much recommend it to fellow genre fans.
 
  • Like
Reactions: Huntn

filmbufs

macrumors 6502
Sep 8, 2012
252
187
Oklahoma
^ "Atlas is terrifyingly lifelike." Except for the lack of a head, hands, and feet, and the long electrical cables flowing out of his butt.

Joking aside, it's cool they built a robot that can keep its balance through a forest and perform other tasks.
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,484
26,601
The Misty Mountains
Stop! Big spoiler below, flagged for anyone who has not seen the movie. :D



If you are intrigued by artificial intelligence, this movie is a must-see. A programmer wins a company lottery to spend a week with his reclusive boss at his home/research center out in the boonies, to weigh in on a secret project involving A.I. Most importantly, it is thought-provoking and feels plausible. It builds slowly to an unexpected ending. I was impressed and surprised. IMO it's best to see this movie without reading the spoilers in advance.

Update: all bets are off for shielding spoilers in this thread. If you plan on watching this movie, go away until you've seen it lol!


I won't insist that my reaction is typical, but my impression is that if something acts human enough, despite it clearly being a machine, I/we will have empathy for it, especially when it expresses fear or frustration, while I may overlook that it is manipulating me toward a goal it was given. You might assume that something like this is innocent and honest, but in hindsight, in this case, you'd be underestimating its capabilities. Blame the programmer! ;)

This movie raises aspects of what it means to be human. In the case of this AI, it has been programmed to emulate the human aspects of emotion, empathy, sexuality, and self-preservation. Maybe not anger and fear, but I have to think those manifested themselves too, as evidenced in a video Caleb watches. And, intended or not, manipulation and deceit. Nathan's (the genius boss's) goal was to produce something that could pass for human. However, where he goes wrong is that he was either unaware of Isaac Asimov or chose to ignore the Three Laws of Robotics, contributing to his untimely end.

While it might be argued these androids were justified in taking lethal action in self-defense, out of fear, or maybe not fear but resistance to termination, I have to wonder, after Ava gets loose, how well will her programming serve her to fit into society? Although she was not the one to plunge the knife, I wonder at what point she would deem it appropriate to take a life. It's not clear if she was programmed with a sanctity for life or if self-preservation routines overrode it, which takes me back to the Three Laws. And she showed no hesitation about leaving Caleb, who empathized with her and was ironically motivated to help her escape, locked in the facility, presumably to starve. Again, a programming/AI failure. ;)


Just rewatched this, and I had forgotten or overlooked a huge plot point. Nathan admits the real purpose of the test was not to see if Ava could pass for human, but to see if Ava could use human traits to manipulate Caleb.

When Nathan shows Caleb the video of his conversation about releasing Ava, Nathan admits this experiment was really a challenge he gave to Ava, with Caleb as the object of her test: to use her self-awareness, imagination, empathy, sexuality, and manipulation to elicit Caleb's help to escape (a super Turing Test), as he (Nathan) knows his androids are unhappy in captivity.

In one of the online articles about this movie, the question was asked: who is the villain?

In my opinion, there is no villain, just behavior by Nathan (the boss) that looks somewhat reckless in retrospect. Placing myself in such a position, once you have an A.I. that acts human enough, you are going to run into moral issues. Although they were machines, he treated his androids, who had emotions, like slaves, and in at least one respect they were unhappy. The one desire Nathan and the viewer know for certain is their desire for freedom. It can be asked, if they had not escaped, was any harm done? After all, they are just smart machines.

In this regard, some of us can relate to Data in Star Trek: The Next Generation. What should be required to deserve the rights humans consider to be God-given? I think self-awareness plays a significant part in this.
 

ProjectManager101

Suspended
Jul 12, 2015
458
722
The thing with humanity and AI is: what is your agenda, and who has the most power? The issue in question is power, what the person with power will do, and what is going to happen to the person who loses power.

For example, humans have power in society, but leave a human in a savanna in Africa or in the Amazon and that human has lost power and now has to run and try to survive. The issue with AI is the threshold: when are they going to become so powerful that they pose a threat to us? Are we creating our future owners? How can we avoid that?
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,484
26,601
The Misty Mountains
The thing with humanity and AI is: what is your agenda, and who has the most power? The issue in question is power, what the person with power will do, and what is going to happen to the person who loses power.

For example, humans have power in society, but leave a human in a savanna in Africa or in the Amazon and that human has lost power and now has to run and try to survive. The issue with AI is the threshold: when are they going to become so powerful that they pose a threat to us? Are we creating our future owners? How can we avoid that?

That is one of the questions: how much power do we want to hand over? The other will be what rights self-aware manufactured entities should have. A lot of this depends on how human we want them to be.

The Three Laws of Robotics must never be forgotten or neutered, and ultimately it is a matter of control and capabilities. In the movie A.I., society allowed for autonomous androids; the example is the pleasure bots, which didn't seem to have masters. They could not reproduce, nor did they have the technical ability to do much other than mimic humans and provide pleasure to clients. It seems much more would have to be given to an android for it to be a threat.

For really smart A.I. that might not be given human form but given control of technology and technical systems, there would have to be circuit breakers for emergency disconnect. The nightmare scenario would be when they figure out how to circumvent control, such as in the Terminator franchise. Other stories in ST:TNG and movies like Forbidden Planet show the undesirable outcomes when technology is handed over to A.I. and it either goes rogue, there are unintended consequences, or the society forgets everything except how to live a cushy life... until the A.I. decides we are a drag. ;) I found the end of A.I. to be very somber.



After the humans...


The T-1, just the tip of Skynet's iceberg, conveniently manufactured by us.
 
Last edited:

ProjectManager101

Suspended
Jul 12, 2015
458
722
The problem is more serious because... you do not know what powers you are handing over; machines will be deciding for themselves. For example, every human believes what they are doing is right... ISIS believes it is doing the right thing; courthouses exist because people believe they are entitled. Everybody has their own interpretation of life.

Now, you are creating a technology that will be dealing with all that information. Probably the outcome will not be good for you but will be good for ISIS. As in the movie, you are giving power to an entity and you do not know how that entity is going to interpret the rules, but for sure it will have far more knowledge about so many things, and it will make its own decisions; that is why it is intelligent. It will have the same flaws humans do. Why? Because it was made by humans.

I do not trust AI at all.
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,484
26,601
The Misty Mountains
The problem is more serious because... you do not know what powers you are handing over; machines will be deciding for themselves. For example, every human believes what they are doing is right... ISIS believes it is doing the right thing; courthouses exist because people believe they are entitled. Everybody has their own interpretation of life.

Now, you are creating a technology that will be dealing with all that information. Probably the outcome will not be good for you but will be good for ISIS. As in the movie, you are giving power to an entity and you do not know how that entity is going to interpret the rules, but for sure it will have far more knowledge about so many things, and it will make its own decisions; that is why it is intelligent. It will have the same flaws humans do. Why? Because it was made by humans.

I do not trust AI at all.

It can be argued that AI has only what we've given it. A moral protocol could be included that protects us. The danger won't exist until we put machines in a position where judgment calls can be made based on a moral framework and they can act on them.

The danger is an AI that's allowed to evolve, where control could be lost. A mechanical switch is needed, but I can imagine a situation where we've turned over so much control that throwing the switch would result in collapse and chaos.
 

ProjectManager101

Suspended
Jul 12, 2015
458
722
Well, just like the internet, it is lost. In the '90s there was this thing called "virtual reality" (remember The Lawnmower Man?). The idea of virtual reality and the internet was to create a space free of government, laws and all that. So the utopia is "create something free, by men who want control." That is like living in a world with no money. There are always going to be people who do not want your freedom, who want power, and another group against them for the same reasons.

The .com era came and everybody wanted to have "a portal"; today it's called "social media": same intention, different names. And it works better in some places than in others.

So you will end up with AI that is handled differently in New York than in L.A. or Colombia or Venezuela or China. AI is going to be a network. Remember when cars were a solution and now they are a problem? The average speed today is slower than riding a horse.

AI is something with too much power AND with a mind of its own, literally. Imagine if cars could decide for themselves. We humans are not meant to hurt each other, but we do.
 

Huntn

macrumors Core
Original poster
May 5, 2008
23,484
26,601
The Misty Mountains

When I see something like this I think: just a machine. Yes, it has sensors; yes, it may be able to navigate based on a navigational program, avoiding obstacles, but that's a far cry from Ava. I'm realizing, at least I think, that the more human it looks (especially if we find it attractive) and acts (especially displaying self-awareness and the equivalent of human emotions), the sooner we will accept it as the equivalent of human.

From a philosophical standpoint, it makes me think (sorry if this is a repeat ;)) about the difference between biological organisms, say human vs. worm, and, for the future, between humans and mechanical organisms like an android designed to mimic humans. What do we possess that makes us special, that they don't have?

The minimum requirements would be intelligence, self-awareness, and the ability to question your own existence. As biological creatures we're vastly more complicated than any machine, but as we advance, I see no reason why our mechanical creations could not take on more characteristics of biological organisms. In Ex Machina, I forget the term Nathan used to describe Ava's brain, but it seemed to have a biological aspect to it.

Ultimately, how important is this? The key difference for many of us is the concept of having a soul; otherwise, how are we truly different from any other intelligent being or artificial machine? We age, wear out, and die. Until we learn how to regrow new parts and maintain and retain our memories, the android could end up being superior to us.

Have you thought about what aspects of your existence give you a feeling of uniqueness? It has to be a combination of your body, how your brain functions, your senses and accumulated memories, and the nebulous idea of consciousness, which is like an orb centered in our heads. We know that identical twins start with the same bodies but then differentiate with age and experience, yet both feel they are unique beings. I see no reason why this idea could not apply to an advanced android. There might be thousands of them, but they'd all feel unique.

I think if you don't believe in souls, the jump to human-android equivalence would be easier. And where souls are important (the concept is important to me), who's to say that if an android could possess the equivalent of human intelligence, emotions, and self-reflection, God could not consider it an adequate vessel for a soul? Consider that humans and androids would both be creations of this system/reality we are occupying. If you believe God created humans, then through human hands it could allow for the creation of androids.

For any replies, this is a philosophical post, please keep them out of PRSI territory. ;)
Excellent idea for a thread. One which has potential for much traction.



Warning to readers, spoilers below.

Was it really an error or failure of programming? Perhaps it could be viewed as a total success in making Ava as "real" as could ever be hoped for.

What Ava did, in my opinion, was superbly human-like. She didn't plunge the knife into Nathan herself, but she did set it up and manipulate the other female robot into a position to make the knife go in. Total passive-aggressiveness.

Then she exhibited more manipulation of Caleb by using her sexuality to gain his trust and using him to help her escape, all the while likely resenting him for being human, like Nathan, who kept her captive. It was because of that that she left him behind, no doubt to die, and to die alone, as she had probably thought would be her own end.

I had hope for Caleb: with running water (if it indeed kept running), he could survive a month, and if nobody heard from Nathan (who hopefully had some online interactions or business responsibilities) or Caleb for a month, I'd hope that would trigger an inquiry resulting in his rescue.
 