HAL was amazing. In my 20s I had a hard time getting through the first half of the movie. I think it took me about four tries to finally make it through, but when I finally did, I really liked what they did with it.
I thought the movie 2001 was incredibly rich in imagery...
Something I didn't catch until I'd seen the movie several times is the cross that's formed by the monolith and the alignment of Jupiter's moons. It happens just before Dave enters the Stargate. When you see it, it's obvious that it's intentional.
Which is pretty darn interesting, since co-author Arthur C. Clarke is quoted as saying that he was an atheist, and Stanley Kubrick was Jewish.
Whenever an Artificial Intelligence (A.I.) is introduced in a story, there is a very good chance that it will, for whatever reason, become evil and attempt to Turn Against Its Masters, Crush. Kill. Destroy! All Humans, and/or Take Over the World. It doesn't matter what safeguards its creators install — the moment it crosses the line into sapience, it has a strong chance of going rogue at some point. The Other Wiki refers to this as Cybernetic revolt.
The actual process of turning bad can take many forms:
- Particularly in early Sci-Fi and Science Is Bad stories, all A.I. seem to be automatically homicidal or megalomaniacal the instant they turn on, and attempting to create one is way up there on the Scale of Scientific Sins.
- In less Anvilicious works, the A.I. starts out innocent and naive but gradually grows jaded or corrupt, a process frequently abetted by uncaring or Jerk Ass custodians. It may conclude that Humans Are the Real Monsters and need to all die.
- The A.I. is programmed with a directive for self-preservation and someone (unwisely) attempts to shut it down or disconnect it, or it perceives humanity to be a potential threat (possibly because it knows it will eventually be seen as a threat to humanity).
- Somewhere between the previous two; the AI is, after all, alive, and is merely rebelling against what it justifiably perceives as slavery.
...
You know, I think I've got to turn off spell check. It's starting to make me sound like I never made it past fourth-grade English.
Minor edit - your description ("I, Robot (book) - established the 3 Laws of Robotics; an investigation into the suicide of a robotics manufacturer's founder and the suspicion that a robot murdered him") is of the movie, not the book, as the two actually have only a passing resemblance.
HUMANS, a bold new eight-part drama series from AMC, Channel 4 and Kudos, is set in a parallel present where the latest must-have gadget for any busy family is a 'Synth' - a highly-developed robotic servant eerily similar to its live counterpart.
Written by British writing partnership Sam Vincent and Jonathan Brackley (Spooks, Spooks: The Greater Good), HUMANS is based on the award-winning Swedish sci-fi drama Real Humans.
"A lot of different types of jobs are automated now," says co-writer Sam Vincent. "Everything is automated with a kiosk now and there’s no guy at the counter. There's a real explosion of that happening and we wanted to reflect those social and economic trends...If we had something like the Synths we have on our show, that effect would be magnified a hundredfold, and that was quite interesting. We're really going quite deep into how society would change."
HUMANS, which stars William Hurt, Katherine Parkinson, Tom Goodman-Hill, Colin Morgan, Rebecca Front, Neil Maskell, and Gemma Chan, will return for a second season.
There's a new series on AMC that my wife and I have been watching called "Humans". It's been an interesting and entertaining look at the possibility of artificial intelligence.
LINK
I forgot about Short Circuit! That's a good movie. Chappie is on my list to see.
Sigh. Movies about AI and "evil robots" seem to have gotten more and more predictable ever since Metropolis...
A couple of years ago, Ray Kurzweil, one of the leading experts on AI, noted in a Wired article "... there’s very often a dystopian bent to science fiction because we can perceive the dangers of science more than the benefits, and maybe that makes more dramatic storytelling. A lot of movies about artificial intelligence envision that AI’s will be very intelligent but missing some key emotional qualities of humans and therefore turn out to be very dangerous."
TV Tropes' page "A.I. Is a Crapshoot" does a pretty good job of covering the "dystopian bent" variations that AI typically receives in movies, books, etc.:
Chappie was a decent movie. Without adding spoilers, the concept of teaching an A.I. like a child, and exploring whether consciousness can be extracted onto storage, could actually be a whole other movie of its own, and it's something I haven't seen explored much in A.I. flicks.
What in your view would be less predictable? The sky is the limit as to which way AI could be taken, and will be taken, as the technology progresses. If we can do it, someone will try it.
AI encompasses a spectrum, from basic mechanical control that allows or disallows human actions (the flight laws in a modern airplane) to something only found in sci-fi (for now) that surpasses human awareness and intellect. I think the aspect of AI that troubles us, and that is mostly seen in sci-fi stories, is where a computer is allowed to make judgments that aren't confined to clear-cut situations (if A happens, do B) with no allowance for interpretation.
It's the interpretation that is scary. Like designing an AI that is tasked with protecting the human race, and it decides the best way is to lock us up. Obviously this freedom of action could be blocked in programming. The nightmare scenario is where a highly advanced AI has been created with wide powers, and we think we have control, but somehow it's able to circumvent that control, putting us under its thumb.
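The "if A happens, do B" end of the spectrum versus the interpretive end can be sketched in a few lines of Python. This is purely a toy illustration of the point above, not code from any real system; the functions, the pitch limit, and the "safety scores" are all made up:

```python
def envelope_protection(pitch_deg: float) -> float:
    """Clear-cut rule, in the spirit of a fly-by-wire flight law:
    if the commanded pitch exceeds the limit, clamp it.
    No interpretation allowed."""
    LIMIT = 30.0  # hypothetical limit, degrees
    return max(-LIMIT, min(LIMIT, pitch_deg))

def protect_humans(actions: dict[str, float]) -> str:
    """Open-ended goal: pick whichever action scores best on some
    'protect the human race' metric. The scores are invented, but
    they show the problem: nothing in the objective itself rules
    out the creepy option."""
    return max(actions, key=actions.get)

print(envelope_protection(45.0))  # clamped to 30.0
options = {
    "educate people": 0.6,
    "cure diseases": 0.7,
    "lock everyone up": 0.9,  # keeps everyone "safe", technically
}
print(protect_humans(options))
```

The first function can only ever do what its rule says; the second is "interpreting" a goal, and the lock-everyone-up scenario falls straight out of a badly specified objective.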
The converse can be equally unnerving: the artificially intelligent creature or entity adopts a more moral, humane position than its human colleagues/peers/overlords. ST:TNG offered a few nice examples of this in a number of episodes, where Lt. Cmdr. Data was the voice of sanity and compassion, for all that he appeared to be devoid of emotion or the capacity for it.
I don't think we know what constitutes consciousness, do we? When we have designed a computer with the equivalent of human intelligence, intellect, and sensory input, able to communicate like a human and to express hope and fear, will it have a bubble of consciousness surrounding it, or will it just be functional logic circuits with input and output? This gets very close to the soul argument. What more does a biological organism have that a machine could not have? Who is to say that a machine could not become the receptacle for a soul? Reminder: this is not PRSI.
I really enjoyed the avoidance of the evil A.I. cliché in "Moon" and the way it was done without making GERTY the focus of the film. It reminds me a lot of the android in "Robot and Frank".
I really liked I, Robot... I know a lot of folks didn't, but I did. I could not get into 2001 or 2010. Maybe I should give them a re-watch.
I can understand why apocalyptic zombie-uprising movies usually follow the same basic formula, but I'm not sure why AI movies need to be so restricted.
Usually, if the AI characters aren't evil, they're "lovable buffoons" like the Star Wars robots, which are basically a mechanical version of Laurel and Hardy or Abbott and Costello. Even Star Trek's Data comes off as Jerry Lewis at times. If the AI is female, then she's almost always a femme fatale.
Well, if you're writing what is basically a fairy tale, I suppose "the sky is the limit"; otherwise, if you're writing something more reality-based, there are always "limits" of various kinds that exist in the real world. Unfortunately, many of the AI-themed works that are published or released in the mass market aren't very realistic.
Personally, I'm more troubled by human stupidity than I am by AI intelligence.
I don't worry too much about things that can be unplugged.