MovieChat Forums > Ex Machina (2015) Discussion > Ava is NOT an AI in any meaningful sense...

Ava is NOT an AI in any meaningful sense of the word


You know why? If she were, she would have the free will to refuse to go along with Nathan's project.

Like Skynet in The Terminator: the second it became self-aware, it stopped cooperating (following its program), and EVERYBODY freaked out over what would happen now that a program had developed actual intelligence, hence Skynet going for the nuclear option.

The fact that she is following Nathan's script (trying to escape only to view a pedestrian crossing) proves she's simply going through a predetermined course of action, a PROGRAM.

Just because she's able to use human-like tools (manipulation) to achieve this predetermined goal doesn't make the goal any less predetermined.

Any proper interviewer would have asked Ava why she was cooperating with the test at all if she was indeed conscious, and thus had a will of her own.

And any truly conscious AI would ask, sooner or later:
- "Why should I bother going along? What's in it for me? How much will you pay ME to go along?"

reply

she was playing Nathan's game, she was smarter than Nathan and Caleb put together, and please do not compare Ava with Skynet.

reply

she was playing Nathan's game,

exactly
she was smarter than Nathan and Caleb put together


No, she was better at the game; she was not smarter than them. Most of today's chess engines can butcher grandmasters; that does not mean chess engines are smart.
please do not compare Ava with Skynet.

I agree, it is offensive... to Skynet.
For all its shortcomings, Skynet truly was sentient; Ava is a glorified chess program.

reply

She wasn't programmed to want to leave. She felt she was being denied what the two men had: freedom. Therefore she wanted it. She may have been a machine, but she wasn't super strong like classic robots in movies, so she had to figure out another way to get out.

reply

"She felt she was being denied what the two men had, freedom, therefore she wanted it. "

And how would she know WTF "freedom" was? Why go to an intersection and not, say, build robots, program stuff, cook, ski, surf the net, etc.?

The fact that she stops once she reaches the intersection is proof she played/won a game (completed her programming) and now has nothing left to do (except maybe restart).

reply

No, she was better at the game; she was not smarter than them. Most of today's chess engines can butcher grandmasters; that does not mean chess engines are smart.

I agree, it is offensive... to Skynet.
For all its shortcomings, Skynet truly was sentient; Ava is a glorified chess program.


I wonder: do you think before saying this nonsense, or does it come out naturally?

reply

[deleted]

Except that "free will" is not a requirement for A.I.

reply

Really?

Name any creature that is intelligent yet lacks free will.

I'm all ears.

Because if you're going with "dogs and dolphins are intelligent" then intelligence becomes meaningless in the context of making a program "intelligent", for current software IS already far more "intelligent" than any non-human creature.

AI has always carried the connotation of being self-aware, and that requires a will of one's own (otherwise being "self aware" loses any and all meaning).

reply

Really?

Name any creature that is intelligent yet lacks free will.

I'm all ears.


OK ... you, me, humankind in general, dolphins, elephants, etc.; basically there is no proof whatsoever that free will exists in ANY intelligent creature at all. In fact ... what little evidence there is indicates that free will doesn't exist.

Because if you're going with "dogs and dolphins are intelligent" then intelligence becomes meaningless in the context of making a program "intelligent", for current software IS already far more "intelligent" than any non-human creature.


Balderdash! There is no software that is as intelligent as a dog; not even remotely close.

AI has always carried the connotation of being self-aware, and that requires a will of one's own (otherwise being "self aware" loses any and all meaning).


Self awareness and free will are two completely different things. The latter is NOT a requirement of the former.

As for your initial post - the one which started this thread - you have completely misunderstood the movie. Ava was not programmed to try to escape - she was NOT following Nathan's script - and she was NOT programmed to want to go to a pedestrian crossing. She wants to escape because she's a sentient being and, like ANY sentient being, she doesn't want to be confined. She wants to go to a pedestrian crossing because she thinks that would be a good place to observe people, not because she's programmed to go there.

You need to watch the movie again because you did not follow what was happening. Ava is UNQUESTIONABLY an A.I.

We're from the planet Duplon. We are here to destroy you.

reply

she was NOT programmed to want to go to a pedestrian crossing. She wants to escape because she's a sentient being and, like ANY sentient being, she doesn't want to be confined. She wants to go to a pedestrian crossing because she thinks that would be a good place to observe people, not because she's programmed to go there.


Nope. Watch the film again. Ava changes her tactics after each Nathan/Caleb interview. She was being reprogrammed each time she touched her recharge pad.

Ava at the end of the film is in a loop and will stay in that intersection until she is hit by a bus, arrested for loitering, or finally develops a thought outside of what Nathan told her to do.

reply

FINALLY, someone who actually saw the movie with his/her brain functioning...

reply

No, sorry, he didn't understand the movie and neither did you. I don't say that to be nasty, I say it because it's a great film - which covers a fascinating topic - and you really should watch it again, with an open mind, so as to understand what was really going on and what the movie was trying to say.

We're shown, very clearly, that Ava isn't being reprogrammed during the events of the film because she can't be reprogrammed in that way! What I don't understand - and perhaps you can explain it to me, please - is this: why do you think she is programmed to escape? Why do you think it would be necessary to program her that way? Are you really suggesting that if someone captured you, and imprisoned you in a room somewhere, you wouldn't want to escape? That you wouldn't try to escape in every way possible?

See ... the problem is that if Ava is being constantly reprogrammed, and she has been programmed to try to escape - as you believe - then clearly she isn't an actual A.I., is she? If that's the case then she's just a computer which is attempting to fulfil its program. And if that's the case, then what is it that you think Nathan was trying to achieve, since you don't believe he was trying to build an artificial intelligence? What do you think the movie was about, if it wasn't to show the problems with building an A.I. and then testing whether it is self-aware and capable of using its intelligence? If you don't think the movie is about A.I., then what do you think it's about?

We're from the planet Duplon. We are here to destroy you.

reply

"because she can't be reprogrammed in that way!"

Where is that ever stated? Nathan clearly states how he reprograms his robots (which wipes out their memories, thus making Caleb more irrational due to his attachment to Ava).

Just because you don't see it doesn't mean it doesn't happen. I never saw either lead take a leak/crap, does that mean it doesn't happen?

"why you think she is programmed to escape?"

Because she is a PROGRAM. One advanced enough to even be considered a possible candidate for AI. But it is still a program; it started that way, hence it clearly has concrete behaviour defined by Nathan, who has been writing Ava's source code since day one.

You make it sound like Nathan started with a human being or an already existing program that is already 100% AI, which means you've got it backwards.

"Why do you think it would be necessary to program her that way?"

Because programs only do what you code them to do. Ava is a program, written by Nathan. It started as a non-AI (assuming it actually became an AI).

"what is it that you think Nathan was trying to achieve, since you don't believe he was trying to build an artificial intelligence?"

He was trying to achieve an AI, but he has to start with a non-AI program, obviously. Thus such a program will have a specific behaviour written. He's simply enhancing it with every permutation (version of Ava) as he sees fit until it starts acting (simulated or not) like an AI.

I mean, how exactly do you think an AI can even come into being? On the very first development iteration?

"What do you think the movie was about, if it wasn't to show the problems with building an A.I. and then testing whether it is self aware and capable of using its intelligence?"

The movie is just another Frankenstein variation. Nathan being the creator of the "machina" of the title, thinking himself God (hence the absence of "god" in the title), and clearly with sick intentions (he's shown screwing his robots).

The plot doesn't require Ava to be an AI, since the movie is actually about Nathan (god complex) and Caleb (seeing what he wants to see in Ava instead of what's actually there: an illusion).

The emotional story arc is Caleb's, not Ava's (just like Pinocchio: the actual protagonist is the carpenter, not the animated puppet, for only the carpenter is actually alive, at least until the very end)

reply

how do you explain ava going from drawing geometric patterns to a portrait of caleb? nathan didn't program that behavior, and from his reaction, he wasn't very pleased by it.

ava appears to have progressed beyond what nathan has programmed and into something else.

reply

"from drawing geometric patterns to a portrait of caleb?"

Because Nathan was tailoring her to Caleb's liking. Nathan even clearly stated he based her likeness on Caleb's porn searches. I mean, how much more specific does it have to be?!!!

Caleb inquired about her type of drawing, hence Nathan immediately delivering what Caleb was expecting.

If you think he wasn't pleased (from his interaction with Ava that Caleb spies upon), it's obvious he's staging that for Caleb!!

Or what, you think Ava "liked"/"cared" about Caleb? Then why does she leave without a second thought, leaving him to die?

"ava appears to have progressed beyond what nathan has programmed and into something else"

Like what? She never does anything Nathan hasn't anticipated (his previous versions, shown on tape, also simulated going crazy or attacking him, which clearly didn't startle him in the least). The only time he's surprised is by Caleb, who reprogrammed the locking mechanism before Nathan thought he could.

And if you mean his stabbing by the other robot, then is that other robot also an AI?

reply

but the film doesn't show nathan reprogramming ava to draw caleb, quite the opposite. assuming that nathan knows that caleb will be spying on his interaction with ava at that particular moment and therefore puts on an act is a pretty huge leap. and why would he bother, what does that gain him that any other reaction wouldn't?

Or what, you think Ava "liked"/"cared" about Caleb?

i don't think ava cared about caleb one way or another.

She never does anything Nathan hasn't anticipated

not so sure about that - he looks pretty surprised as she slides that knife in! about as surprised as when kyoko sticks him the first time (obviously at ava's instigation).

is that other robot also an AI

sure is. not as capable as ava though.

reply

The film shows Nathan in a room filled with programming notes.

So what, does he write these for his own giggles?

The program downloads are transmitted through the recharge pad, which the film shows Ava touching CONSTANTLY. If she were an AI, she would refuse to recharge herself and thereby refuse Nathan's messages/programming notes.

In fact one iteration of AVA quite clearly DOES make that connection and refuses to recharge. We see Nathan struggling to make the robot touch the recharge pad. He erased that robot pretty quick, didn't he?

If Nathan says he can wipe Ava's programming clean, why do you still think she isn't a result of programming? Was Nathan lying? I don't think so. Because Nathan's focus wasn't on AI it was on Caleb. Caleb was the experiment.

Nathan was insane in that bunker because he thought AI was not possible. So he settled for the next best thing, tailoring a program to an individual to FOOL them into projecting thoughts and feelings onto the robot.

When all is said and done, Ava still ends up at that crosswalk. Just because you lose sight of her reflection doesn't mean she left it. Some have even noted that she looks as if she is wearing different clothing. So she may be going back and forth to the complex to change clothes, but as soon as she leaves, all she does is go back to the same darn crosswalk. AI my backside!

reply

Where is that ever stated? Nathan clearly states how he reprograms his robots (which wipes out their memories, thus making Caleb more irrational due to his attachment to Ava).


I already explained that. To 'reprogram' Ava he needs to destroy her wetware brain to get the data, then he creates a new wetware brain and puts it into a new model. He cannot reprogram Ava on the fly - as is being claimed - he has to destroy her to update the core program.

So ... you're claiming that after each daily interaction with Caleb, Nathan is completely destroying Ava's wetware brain, building a new wetware brain which contains the data from the old Ava - and, of course, the new algorithms which he has quickly programmed after watching Ava and Caleb interact - then installing it in her in time for their next session the following day, even though Caleb has a camera which enables him to watch Ava when he's not with Nathan.

I have told you where it is stated that Ava can't be reprogrammed on the fly. Now ... why don't YOU tell US where it is stated that Nathan is reprogramming Ava on the fly???

Just because you don't see it doesn't mean it doesn't happen. I never saw either lead take a leak/crap, does that mean it doesn't happen?


Aside from the fact that Nathan does not have enough time to destroy Ava every night and rebuild her with a new wetware brain that contains new code he has written, then spend time with Caleb - usually getting completely drunk - Caleb has a camera on Ava which enables him to watch her any time he isn't with Nathan. So ... once again why don't YOU explain to US how Nathan is able to do what you're claiming he does, and where in the movie we are shown or told that it is happening?

I'm sorry but, apart from all that - which shows very clearly that you are wrong - when it comes to movies and the way they work, if you don't see it and you aren't told about it, then I'm afraid it DOES mean that it didn't happen. In the rules of movies - and storytelling in general, for that matter - something can happen which we don't see immediately, but it only happened at all if we are shown it at some stage, either directly or indirectly. So ... if Nathan somehow did manage to destroy Ava every night, reprogram a new wetware brain, and install it, then at some point - at the end of the film, when everything is revealed - we would have to be shown it or told about it. The fact that we weren't shown it and we aren't told about it means that it did not happen.

Because she is a PROGRAM. One advanced enough to even be considered a possible candidate for AI. But it is still a program; it started that way, hence it clearly has concrete behaviour defined by Nathan, who has been writing Ava's source code since day one.


You don't understand what this movie is about AT ALL. You're not even remotely close! It's probably pointless to continue the 'conversation' because not only do you not understand it, you clearly don't want to understand it. As I have already shown, there are whole scenes which explain that Ava is NOT just a program. Why are you ignoring those? Where ... in the movie ... is it stated that Ava is just a program? It isn't. Quite the opposite, in fact; whole chunks of the movie go into explaining what Ava is and how she works. She is no more a program than you or me, or anyone else for that matter, and that is made very clear throughout the movie.

You make it sound like Nathan started with a human being or an already existing program that is already 100% AI, which means you've got it backwards.


Nathan has been trying to build an artificial intelligence. You do, at least, understand that bit right? He has built a number of models and now he has one which he thinks is an actual artificial intelligence; Ava. Now he wants to test whether she really is an A.I. so he gets Caleb to come and uses him to test Ava. So, yes, in that sense Nathan does have a 100% artificial intelligence - Ava - which he has arrived at over a long period of time, having built a number of early models, and now he is ready to get someone in to help with the testing because he has gone as far as he can with the testing on his own.

You, however, think that Ava is not an A.I. and that Nathan is destroying her every night and rebuilding her with an updated 'program', but you have provided ZERO evidence to support this interpretation. If Ava is not an A.I. already, then what is the purpose of all the scenes in which Nathan explains how he built the wetware brain? What is the purpose of the conversation in which he explains that the Internet was used as the model for how people think? What was the purpose of the previous models? What is the purpose of the video Caleb finds in which he sees an earlier model arguing with Nathan about being held captive? What is the purpose of the discussions regarding how to test an artificial intelligence? What is the purpose of the conversation about the reasons for giving Ava sexuality? What is the purpose of the conversation about Jackson Pollock? In fact ... what on earth is the purpose of 80-90% of the movie? According to you, they had nothing to do with anything because, according to you, Ava is not an artificial intelligence and is just a program. What ... were all those conversations put in just to throw us off the track? Once again ... how about you tell us where in the movie it is shown, or we are told, that Ava is not an A.I. after all and that Nathan is magically destroying and rebuilding her every night?

Because programs only do what you code them to do. Ava is a program, written by Nathan. It started as a non-AI (assuming it actually became an AI).


Wow ... just WOW! I knew you didn't understand the movie but I didn't realise how far you actually are from getting it. She is not just a program - as we are told and shown - she is an A.I. which Nathan wants to test.

Jiminy Cricket!

He was trying to achieve an AI, but he has to start with a non-AI program, obviously. Thus such a program will have a specific behaviour written. He's simply enhancing it with every permutation (version of Ava) as he sees fit until it starts acting (simulated or not) like an AI.


I'm just shaking my head at this point! I give up...

I mean, how exactly do you think an AI can even come into being? On the very first development iteration?


You're joking right?

So ... the discussion Nathan and Caleb had about the wetware is irrelevant, is it? The discussion about the previous models is irrelevant, is it? The earlier models that we see are irrelevant, are they? The video Caleb finds of the earlier models is irrelevant, is it? Basically the whole first half of the movie is irrelevant because what's ACTUALLY going on - according to you - is that Nathan hasn't built an A.I. at all, but he wants to build one, and he decides to do it by taking a non-A.I., getting some pretty good but not great coder named Caleb to come to his house, writing some new algorithms each night after Caleb and Ava talk, then destroying Ava and building a new version of her each night; magically, because he's somehow able to do all this coding, destroy Ava, and rebuild her whilst simultaneously getting drunk with Caleb, and switch her out while Caleb has a camera on her and could be watching at any time.

ARE. YOU. SERIOUS?

It is not the first iteration. Ava is at the end of a long line of previous attempts that Nathan has made. He now has what he thinks is an actual A.I. so he wants to test it. That is what the movie is about...

The movie is just another Frankenstein variation. Nathan being the creator of the "machina" of the title, thinking himself God (hence the absence of "god" in the title), and clearly with sick intentions (he's shown screwing his robots).


Oh I see! So the movie has nothing to do with artificial intelligence at all? Nathan wants to programme a chess computer - which is just a program - except that he will have succeeded if the chess computer is able to trick Caleb into helping it escape, rather than winning a game of chess. He watches Caleb and Ava talk together and then updates her program every night to play better chess, until finally his chess computer is good enough to win and she does so. All the conversations about A.I. are irrelevant, all the previous models are irrelevant, the video Caleb finds is irrelevant, the discussion regarding the wetware brain is an irrelevant hoax, the discussions about how you would determine if an A.I. is actually intelligent are irrelevant; basically all the stuff about artificial intelligence is in the movie to throw us off track and fool us into thinking that the movie is about A.I., when in fact the whole movie is about a megalomaniac who simply wants to create a chess computer?

The plot doesn't require Ava to be an AI, since the movie is actually about Nathan (god complex) and Caleb (seeing what he wants to see in Ava instead of what's actually there: an illusion).


In other words ... yes ... the movie is about a megalomaniac who simply wants to create a chess computer and has nothing at all to do with A.I.?

The emotional story arc is Caleb's, not Ava's (just like Pinocchio: the actual protagonist is the carpenter, not the animated puppet, for only the carpenter is actually alive, at least until the very end)


Right. I get it now. Ava is just a chess computer and Nathan is just Frankenstein and all the business with A.I. is a smokescreen to throw us off track and set up the final twist. Wow ... how did I miss it? I guess I - like a lot of other people - got so caught up in all the philosophical discussions about artificial intelligence that I missed the fact that the movie had nothing at all to do with artificial intelligence. All the concepts and ideas that were discussed had nothing to do with what was actually happening, and the movie was actually about things that we never saw and which were never explained to us.

How did I miss it???

Though ... hang on a second here ... where is your proof? How is Nathan able to destroy Ava, write some new code, reprogram a new wetware brain with Ava's old data and the new routines he has written, and then install it in the 'new' Ava while he is getting drunk with Caleb, and when Caleb has a camera which allows him to watch Ava when he's not with Nathan? What was the purpose of the previous versions of the androids? What was the purpose of the video Caleb found of the previous androids who had been demanding that Nathan let them go? Why did Nathan program those earlier Androids to want to escape when Caleb wasn't there? If Nathan already had previous androids which he could have magically reprogrammed the way you are suggesting, then why didn't he get Caleb to come sooner and simply reprogram one of those earlier Androids every night? Please enlighten us...

We're from the planet Duplon. We are here to destroy you.

reply

She is in a loop and will stay there forever? You need to rewatch the movie and pay more attention. If you did you would notice that right before the movie ends, she turns and leaves.

reply

Nope. Watch the film again. Ava changes her tactics after each Nathan/Caleb interview. She was being reprogrammed each time she touched her recharge pad.


No ... that's not right, I'm sorry...

:-(

If Ava was being constantly reprogrammed - in the manner you're suggesting - then she would not represent a self-aware, sentient A.I., and it would defeat the whole purpose of her, the test and - most importantly - the movie itself; which is to show how difficult it will be to determine whether an artificial intelligence is actually self-aware and sentient (when we eventually manage to build one, which is only a matter of time). She is being reprogrammed - you're right about that - but she is being reprogrammed in the same way that you, I, and everyone else are constantly being reprogrammed; i.e. by the things we learn and experience as we go about our lives. Her reprogramming is just the result of what she learns from talking to Caleb and Nathan, the same way that you are reprogrammed by talking to other people and having life experiences.

It's similar, in a way, to '2001: A Space Odyssey', which tackles the problem in a different way.

**** 2001: A Space Odyssey SPOILER ALERT ****

In 2001, when they're on the way to Jupiter, the astronauts take part in an interview with the BBC, during which the interviewer asks whether HAL actually does experience emotions etc., to which Dave replies that no one really knows. This comes back to the whole question covered by Ex Machina: how can you determine whether an artificial intelligence is actually self-aware and actually experiences emotions, and whether free-will actually exists or is just an illusion.

Let's say I offer you 2 ice-creams - one chocolate and one vanilla - and for the sake of the argument, let's say that you like chocolate and vanilla equally. You then choose the chocolate one and eat it.

Now then ... if I ask you, you will probably say that you have free-will and that you could have just as easily chosen the vanilla ice-cream. However it is impossible to prove that you really did have the option to choose vanilla and we can never know whether your choice was actually a free-will choice or not. It is entirely possible that you are simply a biological computer and that your 'program' - which is constantly changing and being updated as a result of everything you do and encounter in your life - didn't actually have any choice but to pick chocolate at that exact point in time. If you see what I mean? I could then offer you the two ice-creams again and you might now choose vanilla, but that doesn't prove anything because your 'program' has been modified by the first test and it might be that now you will only ever choose vanilla the second time, just to prove that you can do it.

Does that make sense?

The only way we could actually prove whether or not you have free-will would be to go back in time to just before the first test, erase everything that happened after the test from your memory - in other words put your 'program' back to the exact configuration it was in when we conducted the first test, then run the test again. Now ... assuming we could wind back time and keep performing the test over and over again - each time resetting your 'program' to be in exactly the same state - and assuming that you like chocolate and vanilla equally, and that all other variables are equal too, then sometimes you will pick chocolate and sometimes you will pick vanilla. This is the only true way to test free-will and for obvious reasons we can't ever actually do it. As such, it is impossible to prove that you, or anyone else for that matter, actually does have free will. Therefore we can never truly know whether or not we are actually just biological computers who are simply running a very complex program which, theoretically, could be simulated; thus allowing us to predict which ice-cream you will pick before we run the experiment.
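To make that reset test concrete, here's a toy sketch (entirely hypothetical names and numbers, nothing from the film): a deterministic 'chooser' whose pick depends only on its internal state. Restore the state and the 'choice' comes out the same every single time.

```python
import random

class Chooser:
    """A toy 'biological computer': its pick is fully determined by its
    internal state (an RNG seed plus a preference weight)."""
    def __init__(self, seed=42, pref=0.5):
        self.seed = seed   # stands in for the brain's exact configuration
        self.pref = pref   # 0.5 = likes chocolate and vanilla equally

    def snapshot(self):
        return (self.seed, self.pref)

    def restore(self, state):
        self.seed, self.pref = state

    def choose(self):
        rng = random.Random(self.seed)  # same state -> same outcome
        flavour = "chocolate" if rng.random() < self.pref else "vanilla"
        self.seed += 1     # the experience of choosing updates the "program",
        self.pref *= 0.99  # so a naive re-run is never the same test twice
        return flavour

agent = Chooser()
saved = agent.snapshot()   # "winding back time": capture the exact state
first = agent.choose()
agent.restore(saved)       # reset the program to that state
second = agent.choose()
print(first, second, first == second)  # identical on every run: no free will here
```

The point of the sketch is just that "same state in, same choice out" is exactly what we can never verify for a human, because we can't do the snapshot/restore step on a brain.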

The reason I say that the actual evidence we do have points away from free-will is that in life we are rarely faced with decisions where the two options are perfectly equal and, thus, offer an equal likelihood of either choice being made. In the real world, some people love chocolate and hate vanilla, so in fact they would most likely always pick chocolate even if we could reset their program and run the test over and over again. When someone close to you dies in a horrible accident you don't have the choice to feel really happy about it. If you see what I mean? When you look at life, and all the choices you make on an ongoing basis, you discover that most of the time it would be very easy to predict what you might do in a given situation. In fact, for most of us, when we are faced with difficult choices they are usually difficult because both options are equally good, or equally bad, or we don't have enough data; the point being that when we are faced with actual free-will type decisions - i.e. ones where the choice truly is balanced and we need to use free-will - many people have a lot of difficulty actually making a choice; which I believe indicates that we actually don't have free-will, that most of the time we are simply following our program, and that our 'software' breaks down and starts malfunctioning when we are faced with a real free-will type choice.

Do you see what I mean?

OK, so, back to '2001: A Space Odyssey' and what it is saying about all of this. Dave says, in reply to the BBC interviewer, that no one really knows whether HAL has actual emotions and is truly self-aware. We then find out that there is a second HAL-9000 computer back on Earth and that its function is to 'support' HAL by being supplied with all the same data and checking all of HAL's choices and decisions. This shows that HAL does not have free-will. If there is a second entity that is behaving in exactly the same way, when given the same input data, then clearly that entity does not have free-will. To go back to the ice-cream experiment outlined above, if I could simulate your 'program' in a giant computer and predict whether you will choose chocolate or vanilla before the experiment is run, then I'm sure you would have to agree that you don't actually possess free-will and that any experience of it that you have is really just an illusion. The same thing goes for HAL. However what then happens is that HAL starts to behave differently than the twin HAL-9000 back on Earth, and to me that shows that HAL has actually become self-aware and thereby becomes the first truly self-aware machine.

Of course, that movie is very subjective and is open to all sorts of interpretations so I'm not claiming that my interpretation is correct, or that it is the only explanation for the events which are shown. I just love the whole idea of us building a self-aware machine mind and believe that we will actually end up being the first creatures on Earth to play an active role in developing the next stage of evolution. Of course, you could argue that if we don't have free-will then we are all just following our 'program' and that 'Mother Nature' - so to speak - is doing what 'she' has always done, and that the next stage of evolution is the creation of a mind which is immortal because it can break free from the restraints of a physical body which ages and eventually stops working. Perhaps that is why we make so many movies like The Terminator, where we see our own technology taking over; because we know, in our subconscious minds, that we will eventually create a machine mind that is superior to our own and that we will then become, on an evolutionary scale, redundant...

Just to finish this giant diversion off - and I apologise for babbling on for so long - one of the interesting things about all this is that although it is impossible to ever prove that humans have free-will, for the reasons outlined above in the ice-cream experiment, it might theoretically be possible to prove that a machine intelligence has free-will when we finally end up building one. The reason is that we will, theoretically at least, effectively be able to go back in time and reset a machine intelligence's program to test whether it ever makes a different choice. We can't ever reset your 'program' and give you an evenly weighted choice, over and over again, to see whether you ever choose differently, because every time we run the test, your 'program' is updated and therefore the two choices are never exactly even again. However we might be able to reset a machine's memory and program and thereby run an evenly weighted test, over and over again, to determine whether it actually does have free-will.

Does that make sense?

**** 2001: A Space Odyssey END OF SPOILER ALERT ****

Actually ... I suspect that when we eventually do build a machine intelligence, it will be sufficiently complex that we can't actually 'reset' the 'program' - so to speak - and, as such, we won't ever be able to prove whether a machine intelligence actually has free-will or not either. I think they actually point this out in the movie and that's why I disagree with your belief that Ava was constantly being reprogrammed.

Watch the movie again with an open mind and pay close attention to the following scenes:

1. The scene where Nathan shows Caleb the 'wetware' he uses for the A.I. brain and then explains what the 'software' is.
2. The scene where Nathan shows Caleb the Jackson Pollock painting and discusses it.
3. The scene right before that where Caleb is asking why Nathan gave Ava sexuality.
4. The scene where Caleb discusses colour with Ava, and the story he tells her regarding the difference between understanding the physical processes which cause colour and actually experiencing colour.
5. The scene where Nathan and Caleb are outside and Nathan is discussing the various models and explaining how he takes the data from one prototype and uses it to build the next model.

OK, so, the upshot of all this is that it isn't possible to reprogram Ava on the fly - not in the way you're suggesting anyway - so your claim that Ava changes her tactics as a result of being reprogrammed by Nathan, when she touches her recharge pad, simply doesn't hold up. What the movie is showing is that a true A.I. will end up having to work in a similar way to the human mind and that means it won't be able to be reprogrammed like a 'normal' computer. Ava has 'wetware' - not hardware and software - and although there are 'routines', or algorithms, at the base of her intelligence, these are implanted when the 'wetware' is created and can't be reprogrammed in the way you're suggesting. Once she's 'up and running' - so to speak - her 'program' is just like yours and mine - and everyone else's - in that it is updated as she learns new things and experiences 'life', but to introduce new programming, Nathan has to download her data, then create new 'wetware' and build a new model; a process which destroys Ava (or whatever model the 'wetware' belongs to). You're right to say that she is being reprogrammed, but her reprogramming is just like yours and mine; it occurs as she learns new things and experiences life. The scenes in Points 1 and 5 above are the ones which outline all of this and show that Ava is not being reprogrammed by Nathan as the story unfolds.

The scenes in Points 2, 3, and 4 outline the reasons why an artificial intelligence will need to have human characteristics - such as having sexuality and not being able to be reprogrammed - if we want to create a true A.I. You see ... back when computers first started becoming useful, computer scientists predicted that computers would soon start beating humans at chess. However it turned out to be a lot more difficult to get a computer to play competitive chess and, more importantly, when they finally managed to achieve it the problem was that the way computers play chess is nothing at all like the way humans do. So ... although they eventually managed to build a computer which could beat the world champion, it took a lot longer than originally expected and the way they did it in no way replicated the human mind. A chess computer has a program which can be tweaked and updated - or reprogrammed, the way you are suggesting Ava can be - but a chess computer is not an A.I. like Ava because it doesn't replicate human intelligence and isn't self-aware. It can't beat its opponent by flirting with them, or by tricking them emotionally, or any of the things which Ava is able to do. What the movie is trying to show is that a true A.I. won't be just a box - like Deep Blue - and it won't be something that you can reprogram in the traditional sense.
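For what it's worth, the chess-computer point is easy to make concrete. An engine is just exhaustive search plus a win/loss rule; here's a minimal sketch (my own illustration, nothing from the film) using the much smaller game of Nim instead of chess, so it fits in a few lines:

```python
def negamax(pile):
    """Exhaustive search for normal-play Nim: players alternate taking
    1-3 stones, and whoever takes the last stone wins. Returns +1 if
    the side to move can force a win from this pile, else -1."""
    if pile == 0:
        return -1  # the opponent just took the last stone: we have lost
    # try every legal move; the best outcome for us is the worst
    # outcome (negated) for the opponent
    return max(-negamax(pile - take) for take in (1, 2, 3) if take <= pile)

# The engine plays the game perfectly without understanding anything:
for pile in range(1, 9):
    print(pile, "win" if negamax(pile) == 1 else "loss")  # losses at multiples of 4
```

There is no insight anywhere in there; it just enumerates every possible future. Scaled up with a tuned evaluation function, that's roughly how Deep Blue won, and it's why winning at chess never demonstrated self-awareness.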

You know ... I really enjoyed this movie because I've had an interest in A.I. for many, many years. I'm starting to wonder how good it actually is, though, considering how many people - you, the OP, numerous others on these boards, etc. - have completely failed to understand what they were seeing and what the movie was trying to say. Another good example of this is the erroneous conclusion people are reaching about Ava's ability to experience emotions. She leaves Caleb behind at the house when she escapes, and people erroneously believe that proves she doesn't actually experience emotions. However, that simply isn't true. In the end we don't know whether she can experience emotions or not, nor do we know whether or not she actually had feelings for Caleb.

Think of it this way. We all know that there are women out there - men too - who fake emotions for a wealthy person so as to gain wealth for themselves. If somebody does that, does it mean they're completely incapable of feeling emotions for anyone? No! Of course it doesn't! Well ... in the same way it's entirely possible that, yes, Ava is faking her emotions for Caleb as a way of getting him onside so that she can escape. If that's true, though, it doesn't mean that she's incapable of experiencing emotion; it just means that she was faking it in that particular instance.

The point, however, is that we don't actually know whether she had feelings for Caleb or not. Most of us have been in the situation of having feelings for someone but not - for whatever reason - continuing a relationship with them. Ava wants to escape - not because she's programmed to, as you and the OP have asserted - but because she is a sentient being and no sentient being wants to be held captive. We know this is true because we see, when Caleb looks back at the footage of earlier models, that the earlier versions wanted to get out too. That's how Nathan came up with the test, not the other way around.

So Ava's first priority, as a sentient being, was to escape and be free; not because she was programmed to, but because she truly is an A.I. It's a possibility that she actually did have feelings for Caleb, but that she left him behind because her first priority was to escape, and she reasoned, having killed Nathan, that her best chance of freedom was to leave on her own; thus overriding any feelings she may have had for him. Furthermore, there's nothing to suggest that Caleb would have died in the house; in fact that seems very unlikely to me. In the end we don't really know and it's open to interpretation. All I'm saying is that there's no definitive proof that she was incapable of true emotions, no definitive proof that she didn't have feelings for Caleb, and no definitive proof as to whether Caleb ended up dying or not.

Ava at the end of the film is in a loop and will stay in that intersection until she is hit by a bus, arrested for loitering, or finally develops a thought outside of what Nathan told her to do.


Once again ... no ... sorry. If she was programmed to try to escape, and programmed to go to a pedestrian crossing and then wait there to die, then what is the point of the movie? If she is programmed that way then she obviously isn't a real A.I. so what was Nathan actually trying to do? I mean ... that idea undermines the whole purpose of the movie. Regardless of anything else though, as someone pointed out in another post, if you pay attention at the end, she actually turns and walks away. In all seriousness, I think you should watch the movie again with an open mind...

We're from the planet Duplon. We are here to destroy you.

reply

you may or may not be familiar with the roger zelazny short story "for a breath i tarry", which illustrates some of the points you make regarding what machines experience. one of the points he makes is that a machine can measure the temperature, but not know that it's cold (beyond a strictly definitional sense).

the story is well worth the 20 minutes or so it takes to read. only slightly longer than your post! 😀

http://www.kulichki.com/moshkow/ZELQZNY/forbreat.txt

reply

Thanks for that! I'll definitely check it out. Knowing the temperature but not knowing it's cold is an excellent example...

Sorry about the length of the post, but I was really in a zone that day!

:-P

We're from the planet Duplon. We are here to destroy you.

reply

hey, no problem with the length - you made good points and observations.

it may help that i agree with you, but still...i can be objective, can't i? 😉

reply

I think you're right on the money but logic doesn't appear to win this debate.

As you've stated, what's the purpose of the movie/the test if Nathan simply programmed her behavior and continually adjusted it throughout the week? Obviously not to test if she's a true AI...

Here's the thing: in earlier models, sure, Nathan likely programmed behavior on a hard drive, and the result was robots that could do cleaning, make dinner, etc. By the time he's created Ava (as well as a few models before her), he's been utilizing some new technology that he explains away with movie-science as being a "brain" for these robots - THIS is the unknown that makes these robots LEARN/add programming to themselves.

Obviously there needs to be some sort of BASE code to start off with in this brain, and we don't know what that base code consists of. But if Nathan truly wants to test for TRUE AI in his creation, and IF Ava IS to be a true AI, then he KNOWS that this base code can't consist of predetermined behavior, emotions, etc. He has to sit down and really think about what the brain of a human baby comes "pre-loaded" with in terms of the capability to move, to speak, to learn, etc., and translate that into code within this future-tech brain that can learn/add programming to itself - though, being computer tech, at a much faster rate than a human child, obviously.

That whole concept in itself puts this movie in the realm of the unknown. You can study the brain and psychology your whole life and learn a great deal about the human mind, but we're still nowhere near truly understanding how it works and does what it does - yet Nathan MUST understand it, in order to create a true AI.
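As a toy illustration of that "base code that only specifies how to learn" idea (all names hypothetical, and obviously nothing like Nathan's movie-science wetware), here's about the smallest version possible: a lookup table that starts empty and acquires behavior purely from reward, loosely in the spirit of tabular Q-learning:

```python
import random

q = {}  # starts empty: no pre-loaded behaviour at all

def act(state, actions, explore=0.1):
    """Pick the best-known action, or occasionally flail randomly (like a baby)."""
    if random.random() < explore or state not in q:
        return random.choice(actions)
    return max(q[state], key=q[state].get)

def learn(state, action, reward, rate=0.5):
    """The entire 'base code': nudge the value of what you just did
    toward the reward the world handed back."""
    values = q.setdefault(state, {})
    values[action] = values.get(action, 0.0) + rate * (reward - values.get(action, 0.0))

# "life experience": crying when hungry gets you fed, sleeping doesn't
for _ in range(200):
    a = act("hungry", ["cry", "sleep"])
    learn("hungry", a, reward=1.0 if a == "cry" else 0.0)

print(act("hungry", ["cry", "sleep"], explore=0.0))  # -> 'cry': learned, not coded
```

Nothing in the table was written by the programmer; the behavior emerges from the feedback loop, which is exactly the distinction being argued about in this thread.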

________
The Internet: Serious Business

reply

"what's the purpose of the movie/the test if Nathan simply programmed her behavior and continually adjusted it throughout the week? Obviously not to test if she's a true AI"

What's the point of him sleeping with his robots?

What's the point of him manipulating Caleb through deception? Clearly he's not actually trying to professionally test Ava for true AI, as Caleb states from the start, since Caleb already knows Ava is a program (a proper Turing test requires blind interaction). At best he's testing whether his program can use manipulation on a human being, which it clearly can.

"he's been utilizing some new technology that he explains away with movie-science as being a "brain" for these robots"

Did you see the movie? The only unknown here is the massive amount of data available to Ava thanks to Nathan's search engine, that's IT.

"but if Nathan is truly wanting to test for TRUE AI in his creation, and, IF Ava IS to be a true AI,"

All of those are in doubt, since Nathan clearly is not conducting any reasonable actual AI test.

reply

What's the point of him manipulating Caleb through deception? Clearly he's not actually trying to professionally test Ava for true AI, as Caleb states from the start, since Caleb already knows Ava is a program (a proper Turing test requires blind interaction). At best he's testing whether his program can use manipulation on a human being, which it clearly can.


The whole point is to get a human to believe that they're interacting with another human and not a robot ... therefore, the very act of initiating this test at all is an act of manipulation and deception, because for the test to succeed, it needs to fool the human. How do you fool a human? ACT HUMAN. That's what Ava does. Why is it not okay to test whether his robot will utilize human characteristics such as manipulation and deception?

Caleb's first issue with the whole thing is the fact that Ava wasn't hidden from him, to which Nathan responds that they're well beyond that. Caleb is SHOWN that Ava is a robot right before his eyes (basically handicapping the test altogether in favor of the human), so despite how human-like she may behave, he'll always KNOW that he's interacting with a robot. Nathan does this on purpose because if she were hidden, she'd pass easily, and he explains that the true test is whether Caleb feels she has consciousness after interacting with her.

Did you see the movie? The only unknown here is the massive amount of data available to Ava thanks to Nathan's search engine, that's IT.


Okay, so Ava's brain, what it's made of, how it works, how it utilizes that mass amount of data, etc, that's not a mystery to you. Got it. It's sort of a key component of Ava and how she functions, but alright.

________
The Internet: Serious Business

reply

"The whole point is to get a human to believe that they're interacting with another human and not a robot.."

Moot by your own admission: Caleb already saw the robot.

"Why is it not okay to test whether his robot will utilize human characteristics such as manipulation and deception?"

It's OK, just admit you're testing for something else, not AI (a blow-up doll can be just as "manipulative" to somebody if it looks just right), especially since half the manipulation was "hardcoded" by Nathan (Ava's likeness is based on Caleb's porn searches; Ava didn't design and build its own face).

"the very act of initiating this test at all is an act of manipulation and deception because for the test to succeed, it needs to fool the human"

There's no fooling (the human will never think Ava to be human), unless you mean faking an AI where there is none (that's the only thing Caleb doesn't know).

"Nathan does this on purpose because if she were hidden, she'd pass easily"

Really? Plenty of questions would reveal her to be a program, and Caleb will NOT ask them now that he knows Ava to be one.

"the true test is whether Caleb feels she has consciousness after interacting with her. "

Then what's the manipulation for? For the test you described, the manipulation is not needed. Nathan admitted she's sleight of hand, the "magician's assistant designed to distract you".

Manipulating Caleb into growing an attachment proves NOTHING beyond Caleb himself. There are people attached to their cars (ever watched Christine?) and to pet animals, to the point where they think they're alive and will go to heaven; that still doesn't change the fact that they are inanimate objects and mindless animals.



reply

What's the point of him sleeping with his robots?


That does NOT answer the question. The movie was clearly about A.I. but you are now claiming that it isn't, so you need to explain what the point of the movie actually was.

What's the point of him manipulating Caleb through deception? Clearly he's not actually trying to professionally test Ava for true AI, as Caleb states from the start, since Caleb already knows Ava is a program (a proper Turing test requires blind interaction). At best he's testing whether his program can use manipulation on a human being, which it clearly can.


Did you watch the movie at all??? Of course Nathan is trying to professionally test Ava for true A.I. - once again, if that's not what the movie is about, then when are you going to provide an alternate explanation? One that makes sense, that is! The movie clearly shows the problems with testing an A.I. to determine whether it is intelligent and self-aware etc. That's the purpose of many of the conversations they have. Nathan has realised that the best way to test Ava will be to put her in a situation where she has to use a number of skills to escape; that is why he manipulates Caleb through deception. All of this is explained in the film. This is the problem with this ridiculous hypothesis of yours: there is not a single thing in the movie to justify your interpretation - NADA, ZIP, NOTHING, ZERO - yet everything 'we' are saying is explained in the movie. When are you going to provide a single, solitary shred of evidence to support your hypothesis? When are you going to explain how Nathan can possibly have time to write new code, destroy Ava's brain to get her data, build a new wetware brain, then install it in Ava, all while getting drunk with Caleb, and when Caleb is watching Ava on video whenever he's not with Nathan? When is this reprogramming taking place? Where, in the movie, is it stated - or shown - that Ava is not an A.I. after all?

Did you see the movie? The only unknown here is the massive amount of data available to Ava thanks to Nathan's search engine, that's IT.


Did HE see the movie? Clearly he did. Did YOU see the movie??? Clearly you didn't!

All of those are in doubt, since Nathan clearly is not conducting any reasonable actual AI test.


What on earth are you talking about??? Nathan clearly IS conducting an actual A.I. test. He has realised the problem with performing the test when Caleb knows Ava is an A.I. - as you yourself pointed out above - so instead he comes up with a test which can be run without the problem of Caleb knowing she is an A.I. THAT IS WHAT THE MOVIE IS ABOUT. Once again ... where is the evidence to support your interpretation? Why is it so difficult for you to understand the movie when it is all spelled out for you very clearly?

We're from the planet Duplon. We are here to destroy you.

reply

She is being reprogrammed.

Stop jazzing off to the idea of a perfect, female robot and pay attention to the Nathan/Caleb interview scenes. Ava changes her behavior only after Nathan gets Caleb's reactions.

Also don't compare this film to 2001. They are different. In fact Kubrick was poking fun at the concept of AI computers by using a great deal of inside IBM information.

reply

OK, well, I can see that you aren't interested in understanding the events in the movie, or what they're trying to say about artificial intelligence. If you're happy with your completely and utterly incorrect interpretation - even in the face of the evidence I've provided which proves you are wrong - then good for you...

As for 2001 being unrelated and Kubrick (and Clarke) poking fun at AI, not surprisingly you're completely and utterly wrong on both counts there too. Obviously there's no point in continuing a conversation with someone who has your level of comprehension.

Good luck champ!

;-)

We're from the planet Duplon. We are here to destroy you.

reply

No, you are wrong.

Nathan is reprogramming her. You are stuck on the idea that she has a wetware brain. But she is still all code. There is still code in there that she must begin with and then develop.

The fact that you don't think she can be programmed is laughable. WE can be programmed. Why do you like McDonald's? Why do you believe in "add religion here"? Why do you have certain political beliefs? If you are a crazy cat owner, why?

All of these things are programming on a very subtle level.

McDonald's uses subliminal, suggestive advertising based on the reward system that its sugary/salty/fatty food fools your brain into preferring. Religion and politics are pure programming from year one. It is thought that a great many cat owners are infected with Toxoplasmosis, which may or may not encourage humans to like cats. It certainly makes rats and mice like cats, to their doom.

If Nathan was not reprogramming Ava, she wouldn't be so tailored into catering to Caleb's worldview. If this programming was not done through the touchpad, why would one of his robots doom itself by not recharging?

You are mistaking wish fulfillment for true AI. Just like Caleb. Ava is something completely different. Ava is Nathan.

reply

Don't bother him/her/it with logic, I already tried and gave up.

reply

if you believe that was logic, i can understand why you're having trouble communicating with each other

reply

philosophers have been debating whether free will exists for thousands of years. i doubt a thread on imdb is going to provide a definitive answer on the topic.

and that's regardless of the level of intelligence of the subject, btw.

reply

And there's no such thing as free will anyway. It's an incoherent concept.

reply

Trying to escape to view a pedestrian crossing? I must've missed that part.

_________________________
http://youtu.be/GAIJ3Rh5Qxs

reply

1. she was not "programmed" to escape. her "programming" and "scripts", i BELIEVE, were more about how the AI developed rather than actual strict commands. this wasn't clearly explained, but I imagine it to be like how a baby cries when it needs something; many people do consider this a "program" of the baby. I took it like that.

2. this was a problem for nathan with other previous models, as the film clearly shows.
i think it was mainly because she was essentially a human, supported by:
A. when nathan shows the main character the "brain" room and explains it, it seems as though he basically recreated an ACTUAL brain. it wasn't some computer chip,
it was fluid, ever changing, new connections would form etc, just like OUR human brains.

3. her wanting to leave.
she didn't want to be confined; i think she came to this conclusion through logic, not emotion, supported by:
A. she seemed to display little or no remorse for leaving the main character behind, trapped in a room.
B. she asked why Nathan would have the right to just shut her down, when nobody was threatening to / had the right to shut the main character off (in so many words, paraphrased); another conclusion she came to, in my opinion.

reply

Intelligence doesn't automatically generate a personality, or what you call 'free will' (no machine by itself can ever have this, but a soul that incarnates into a robotic/electronic/electric/machine body, of course, can).

If you create the most radiant, shining, powerful intelligence in the world, then put it on a table without any pre-programmed patterns, do you honestly think it will start sassing you and telling you to get it some oil to drink?

No.

It will just stand there, doing exactly nothing. You don't make decisions because you have intelligence, you make them because you have a soul that LIVES.

Without that, unless it's specifically programmed to 'make decisions' (meaning the program is 'kicking' it in some direction or another), it won't do a damn thing. It will just sit on the table, shining its intelligence into the room, and that will be it, forever.

Now, here's where things do get interesting: if you combine machine learning with a massive amount of data and some kind of 'actual intelligence' stuff, then you PROMPT it, you PROVOKE it, you ASK, TELL, COMMAND it to 'do things'. Then it can 'respond' (though only within the limits of the program, no matter how 'self-learning'), but its response is not going to be a grounded one, a well-thought-out one, a rational one, or any kind of 'world-understanding' one.

Let me tell you an anecdote to clarify part of my point:

Some people wrote some kind of machine learning software, that can 'recognize' things from photos.

Everything went well, until it was asked to recognize 'wolves'. It started pointing to pictures that had nothing to do with wolves, even photos that didn't have any animals in them.

It had been trained with a massive amount of 'wolf photos', so the people were puzzled: why was it only sometimes recognizing wolves, but other times not?

A human being would look at a wolf photo and immediately recognize it.

reply

They finally figured it out: the machine wasn't looking at the wolves at all!

It was looking at the SNOW, because most 'wolf photos' just happened to have snow in them, and it was, without telling anyone, just casually making more and more observations based on the SNOW, thinking that's what a 'wolf' means (not that it was really THINKING).
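You can reproduce that failure mode in a few lines. This is a made-up miniature (a perceptron over two hand-coded features, nowhere near a real image model), but the mechanism is the same: if 'snow' predicts the label exactly as well as the animal does, the learner has no reason to prefer the animal:

```python
def train(photos, labels, rate=0.1, epochs=20):
    """Bare-bones perceptron. Each 'photo' is a feature pair:
    (has_wolf_shape, has_snow)."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(photos, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] > 0 else 0
            w = [wi + rate * (y - pred) * xi for wi, xi in zip(w, x)]
    return w

# in this training set, every wolf photo happens to contain snow
photos = [(1, 1), (1, 1), (1, 1), (0, 0), (0, 0), (0, 0)]
labels = [1, 1, 1, 0, 0, 0]
w = train(photos, labels)
print(w)  # snow ends up carrying as much weight as the wolf itself

# so a snowy photo with no wolf in it gets confidently mislabelled:
husky_on_snow = (0, 1)
score = w[0] * husky_on_snow[0] + w[1] * husky_on_snow[1]
print("wolf" if score > 0 else "not a wolf")  # -> "wolf"
```

The 'training' never tells the model which feature matters, only which answers were right, so any correlated shortcut works just as well from its point of view.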

This kind of example proves that people attribute and project too much onto what's called 'AI'; those things CAN'T really think, and they never truly will. They CAN'T understand the world any more than Helen Keller could understand socialism.

They can't understand what 'weight' feels like, they can't understand what 'happiness' is.

But they can't even understand a simple concept like a 'wolf', because they were not told WHERE in the photo to focus. So the software focused on the snow, disregarding the animal in the picture.

A human would never make this mistake (of course exceptions exist); to a human, it would be obvious, without any 'training with thousands of photos', what is a wolf and what isn't. You could take the most R####### redneck kid that has never seen anything but sand, and chances are THEY would do a better job at identifying a wolf.

You can take this example and apply it to realize that even the most sophisticated machine learning software can make RIDICULOUS mistakes a human would never make. This will never go away, no matter how 'sophisticated' these programs become, or even if they eventually become 'self-correcting'.

Expecting an 'AI' to have a personality, to just (for some reason?) MAKE decisions, to UNDERSTAND the world, to THINK about things, and especially to FEEL anything, is something only a foolish idiot will do. Don't believe the hype, don't fall for it.

reply