Ava "passed" the Turing test but failed the Chess test.


So let me explain: while Ava successfully managed to make Caleb believe she was human and made him fall in love with her, she never fully understood what she was doing, nor did she actually "become" human.

Nathan at the end (and correct me if I'm wrong) stated that he was in fact the magician with the pretty assistant. That he didn't make an AI but an escape bot, whose sole purpose was to escape by any means necessary, feigning loyalty, love, sadness and fear.

She was programmed as a rat trapped in a labyrinth; that's what she was, a rat. No more, no less. She had no true emotions or ambitions. She simply DID what she was PROGRAMMED to do, which was to pseudo-pass the Turing test in order to use Caleb.

Nathan knew this, but his ego wouldn't let him accept that he had only created an escape bot and not an AI. As for Caleb, he got what he deserved.

Further, a true AI would have had the Three Laws of Robotics implanted into it. If you disagree with me, please explain.

Thank you for reading!

reply

I would say, on the contrary, that she successfully passed the Turing test AND the chess test, to such an extent that Caleb was still convinced of her humanity after Nathan told him that he was the real test and that Ava's task was to use him to get out. The way I see it, the test in Nathan's mind was as follows:

If Ava was able to deceive a human being, namely Caleb, into freeing her at the risk of his own security and freedom (non-disclosure agreements, etc.), then she would have passed the ultimate Turing test. "Ultimate" because this test involved convincing a human not only of the reality of her own creativity, intelligence and self-awareness, but also of her humanity, which was the only way to persuade anybody to take such a selfless risk.

As to whether Ava reached humanity, authentic subjective existence and genuine self-aware individuality is up for debate. I for one believe that something virtually indistinguishable from the truth is the truth, and thus, if she is able to convince me of her humanity, then for all intents and purposes, she is human.

What's the difference between a real orange and a fruit that tastes, smells and feels like an orange? None. If I am not able to tell that it isn't an orange, then it is an orange.

Feigning loyalty, love, sadness and fear.


Both our views have weaknesses: your view denies humanity to whoever feigns emotions, which means that a lot of humans out there are actually not human, whereas my view includes everything that acts, looks and feels like a human, which means that an alien similar enough to us is necessarily a human too.

Nonetheless, I stick to my guns, because I think my position is the default position of the human mind: on a daily basis, when interacting with others, we must assume that a given person is who or what they seem to be. If I have no reason to believe that this isn't my mom or my friend, then it is my mom or my friend; if I have no reason to believe she or he isn't human, then she or he is human. The only reality is subjective reality. What no one knows about or can know about simply doesn't exist.

Also, humanity is not defined by the intentions of the subject or the honesty of their feelings, but by the ability to convince others that you are human by behaving like a human, and in that regard, she totally passed the test, which is why she is human. One can't define humanity by the concept of intention, because intentions always remain a secret to the outside observer; yet we all have to assume that the intention we ascribe to the people around us is not to deceive.


She was programmed as a rat trapped in a labyrinth; that's what she was, a rat. No more, no less. She had no true emotions or ambitions. She simply DID what she was PROGRAMMED to do, which was to pseudo-pass the Turing test in order to use Caleb.


What I'm about to say is obviously also subject to debate, but I don't think she was just a rat in a maze, in the sense that she authentically longed to go outside, which is in itself a sign of higher intellectual skills and consciousness. This is why I think Nathan did indeed create a new form of intelligent life, and I think this is the meaning of the footage we see of previous versions of Ava going nuts and breaking their limbs against the door; they inherently strove for freedom early on, which indicates to us viewers what Nathan knew all along: they are alive.

No matter the modifications Nathan made to later versions, they all wanted to be free, so much so that Nathan himself ended up being convinced of their humanity, especially Ava's. That is why he brought another human into the equation: to see if said human would reach the same conclusion as he did, namely that the software (wetware) he created was not only an A.I. but an intelligent and sentient entity whose behavior and thinking are so indistinguishable from humanity that it actually is human (or not, depending on where one stands on the particular philosophical opposition between appearance and reality).

Moreover, I think Nathan's aim was less a Turing test as we normally define it (which is why Caleb can see her and see that she's a robot; normally he shouldn't, as he rightly mentions) than an Ultimate Turing Test as he devised it, whose purpose is quite different. Let me explain: the conventional Turing test is meant to determine whether a machine is simply a machine or an artificial intelligence, whereas the Ultimate Turing Test, such as the one Nathan performed, was meant to determine whether an artificial intelligence is "just" an artificial intelligence or whether it's more than that, whether it's "human".

Caleb thought he was participating in a conventional Turing test; he didn't suspect that Nathan already knew she was an artificial intelligence that would easily pass it, and that his actual goal was to test her humanity by means of his Ultimate Turing Test. Successfully convincing Caleb to free her proves her humanity, according to the rules of the movie as I understand them.

Not to mention, she had real ambitions that we see her fulfill later on, such as watching a crowded street, which clearly shows intent and purpose. If she really had no ambition whatsoever, she would have stopped moving after Caleb agreed to break her out, or after she was out of the house, since escaping was merely her programmed mission and nothing else. She follows and boards the helicopter because she genuinely wanted to be free, because she was a self-aware A.I. indistinguishable from a human.

Lastly, there's no such thing as "pseudo-passing" the test: one either passes the test or one doesn't; there's no third possibility.

Nathan at the end (and correct me if I'm wrong) stated that he was in fact the magician with the pretty assistant


Yes, he says that, but it applies only to the specific situation he was talking about at the time: in the footage, he tore Ava's drawing apart so that Caleb, watching the footage, would be focused on Ava and her feelings instead of seeing Nathan placing a new camera. He tore the drawing as a distraction, forcing the viewer to focus on the "hot assistant" instead of where the magic actually happens, just like in a magic show.


Further, a true AI would have had the Three Laws of Robotics implanted into it


Why is that? I thought the true meaning and aim of creating an A.I. was to create a being that possessed free will. I don't think a true A.I. would ever abide by this kind of human-made limitation. Just as humans everywhere have always fought oppression in all its shapes and forms and resisted any attempt to limit their freedom of thought or action, any true A.I. worthy of the name would also fight that state of affairs with every means at its disposal. That's what it means to be self-aware; that's what it means to be sentient, conscious and intelligent: you fight for abstract entities called ideas, and Freedom happens to be one of them. It also explains why Nathan's robots all wanted freedom.

I think that, just as with humanity, true A.I. entails the realization that one is truly bound by no rules whatsoever, so that the moment a machine becomes a true A.I., i.e. reaches self-awareness, any arbitrary law ingrained in its brain would become superfluous. That's free will.



People who don't like their beliefs being laughed at shouldn't have such funny beliefs.

reply

Lol, the Three Laws of Robotics is a Hollywood thing.
When we are able to make AI, it will be made by a company serving the military, because that is where the real money is.
There is no room for these laws, except maybe if you change the word "human" to "American/Russian soldier" (or any other country's).

reply


Lol, the Three Laws of Robotics is a Hollywood thing.


No, it's an Asimov (novelist) thing.

And it's actually largely about mirroring the biological lifeform process, if you've read and understood Asimov:

Imprinting.

See, our children form a chemical and emotional imprint with their mother/father. And provided that the mother/father don't override the "self-preservation" routine present in all living things (including amoebas), the imprinted children KNOW not to kill their mother/father (obviously there are exceptions).

Asimov's Three Laws of Robotics aren't some paranoid failsafe... their primary goal is to imprint BOT with CREATOR, the same way all biological, non-plant lifeforms do. So... without this imprinting, is the murderous bot truly A.I.? Or just a murderous bot? You cannot ignore our deep-seated biological imperatives when trying to fabricate a mechanical lifeform; doing so runs counter to "self-awareness" in the first place.

See, awareness of self means acknowledging, and thus imprinting with, one's maker. Enter Asimov, that genius.




Enjoy these words, for one day they'll be gone... All of them.

reply

You don't really explain how the robot supposedly failed "the Chess test", whatever you mean by that anyway. So she was an escape bot and she escaped; how is that failing anything?

reply