Okay, so I don't know if this is more of a philosophical question or a practical one. But, having an AI background (in the actual sense, mind you; not cyborgs and such), topics relating to "AI" (as seen in movies and on TV) tend to come up every now and then.
One issue is the notion of a "thinking" computer. More specifically, since it seems to be the typical sci-fi goal, an android.
Suppose one were to construct an android and teach it English, and shapes and sounds, and all that fancy-schmancy stuff.
And further suppose that the creator were to declare that he hadn't merely created a device capable of emulating human interaction, but one which was actually, truly, and legitimately intelligent. That it was conscious, self-aware, and had intelligence to rival (or surpass) a human's.
What would it take to convince you that he was telling the truth? That it really was intelligent?
The classic theorized test of "intelligence" is the Turing test. You have the computer/android/dealie communicate with a human via a "teletype" (you can tell how long ago the idea was suggested; today, the equivalent would be a chat program).
If, solely through text, the human can tell it's a computer, then it isn't intelligent. If the human can't tell, then it is. Obviously, that has all sorts of flaws. (What if the human's just really dumb? What if the conversation never strays beyond puppies? Etc.)
However, this is a different slant on it. In this case, I'm not interested in what would be a good general criterion (yes, I know "criteria" is actually the plural). I'm interested in what would convince you.
So, what would it take? How could an android convince you that he/she/it was actually intelligent?
One word: learning. It would have to be able to modify its code in response to circumstance, and the modification would have to lead its development in a positive direction. I guess the only way to test for that is observing it learn.
Spookspon, if I pay you 10 dollars, can I become your god? Like, I was thinking of making a religion based on me or puppets, and I need a start-up group.
Nice topic Earl. I'm pretty sure robots would have true consciousness long before humans were convinced of it.
I agree that learning is a good criterion. If it could learn a foreign language, a specialized sport, cultural etiquette, and you could watch it make mistakes and gradually improve, it would be a good sign. I think things like forgetting, misinterpreting, and making unusual associations between things would be good signs of consciousness too. We usually think of robots as superior in every way to humans, but I think truly dynamic thinking systems would periodically make boneheaded mistakes just like humans, due to misguided attempts at quick thinking amidst massive amounts of old and recent memories.
I would also want to know how it works on the backend. For instance, if language was pre-programmed into it, I would be suspicious of its true level of consciousness. Language is much more than associating words with images; it's interconnecting experiences and appropriate reactions with the desire to communicate these to others. I would think consciousness would be much more real if it had to learn language through experience rather than having a dictionary dumped into its memory.
I also think consciousness is much more than a brain state; it's more of a brain state combined with the specific body configuration of that being, associating ideas with precise muscle memories. That's why I don't think you could just copy a brain and put it on a computer. The robot wouldn't know how to interpret the data, because living organisms aren't exactly plug-n'-play. So anyway, this is even more reason I think it would require that he learn to walk/run/jump and otherwise move on his own to build his own experience, which would be far more capable of true sentience than something that was pre-programmed to move this way or that.
Technically, they could reproduce if they were intelligent, i.e. they could build more of themselves at an ever-increasing rate.
God knows how I could determine if it were intelligent. Maybe trick it? Command it to destroy itself and see if it defects and shows a will to live. Ask it to create art, or what it considers to be art. Maybe ask it to do something impossible, like fly off the top of a building, and watch the response? It would need to show the ability to learn, but then it could just be tricking me, so how would I know?
All in all, I'm going for the art. If the droid could, in front of my eyes, create a unique piece of art, then I think I would be convinced.
@darkbluemullet, what do you define as art? You could program a computer to randomly do tasks that affect the canvas in different, unique ways.
@xXincognitoXx, you could program a chess AI that acted like it had that: you could make a "sad" function so that if it's winning, it gives you a chance. You could write another program to cheat, but then another one, called "ethics", to prevent it. That still doesn't mean it is intelligent.
@tman101 Would it, though? If someone was careful in their programming and design, that could be prevented.
@Darkroot, that's just the thing. I wouldn't give it a canvas and say "paint something"; I would simply ask it to create art, without instructions, tools, etc., and see what it comes up with. A programme could well be built in to create a painting or sculpture or something, but if I were to keep asking it to create more and more, and it was different and varied every time, then I think I would start to become convinced.
I think so, incognito.
Also, I think it would convince me if it made human mistakes. I mean even little mistakes, like forgetting something that isn't very important (going to buy some bread and milk and forgetting the milk), or saying something you didn't want to say, or saying something wrong. For example: saying a word like "tooth" when you mean "food", or saying something you were thinking about but didn't want to say out loud.
When an android can make some of those little mistakes, I shall be convinced.
No, I didn't just ask a question and then drop off the planet. To be honest, I didn't want to meddle and interfere with people's answers. :D
I think everyone with an opinion has expressed it by now, so it's time to reply. (Incidentally, this is something like the fifth time I've tried to have this conversation, but easily the best response I've ever gotten!)
Jay, that's a good idea. Something like humour had never occurred to me.
Darkroot, it's interesting that you brought up learning. I've actually written software that modifies its own code before (look up "genetic programming" for a really interesting read), but it's typically task-oriented. "Unsupervised learning" and metaprogramming don't usually go together, but that could indeed be a very interesting benchmark.
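To give a flavour of what I mean by genetic programming, here's a bare-bones toy sketch (all the function names and the toy target f(x) = x² + x are just my own illustration, nothing like a real GP system): it evolves a little expression tree by random mutation and selection until the tree matches the target.

```python
import random

# Toy genetic-programming sketch: evolve an arithmetic expression
# (as a nested tuple tree) to approximate the target f(x) = x*x + x.
OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def random_tree(depth=2):
    # A leaf is either the variable 'x' or a small integer constant.
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(-2, 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Sum of squared errors against the target over sample points
    # (lower is better; 0 means an exact match on these points).
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # Replace a random subtree with a fresh random one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(generations=200, pop_size=50):
    random.seed(1)  # deterministic for the sake of the example
    pop = [random_tree(3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:
            break
        # Keep the best half, refill with mutated copies of survivors.
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return pop[0]

best = evolve()
```

Real GP systems add crossover between trees, type constraints, bloat control, and so on; the point of the sketch is just that "programs" can be bred against a fitness measure rather than written.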
Steve, you bring up a topic that's actually immensely important.
"making unusual associations between things would be good signs of consciousness"
One of the most fundamental elements of human consciousness is context: the ability to form connections between different concepts. If a machine could really achieve such a thing, that would be staggering.
As for your suggestion that consciousness requires a body state... heh. I knew you'd be good at this. That's precisely the reason I said "android" instead of "computer". Some philosophers believe that intelligence is impossible without a body.
darkbluemullet, you bring up an interesting point: art. It actually reminds me of a scene in I, Robot (I know, it was a book too; no, I never read it). Smith's character asks Sonny if he can write a symphony, to which Sonny replies, "No. Can you?" Always loved that part.
That said, there's actually been a decent amount of research done trying to get computers to create 'art' (both visual and musical). The idea is to take existing works of art, isolate elements/colours/textures/sounds/etc. that are pleasing to humans, and then to try to 'learn' how to produce new works that people like. Some works turn out better than others.
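As a grossly simplified toy illustration of that "learn from existing works" idea (the tiny corpus and note names here are made up by me, and real systems model far richer features than note order): a first-order Markov chain learns which note tends to follow which in existing melodies, then samples a new melody in a similar style.

```python
import random

# Toy 'computer art' sketch: learn note-to-note transitions from a tiny
# hand-made corpus of melodies, then generate a new melody from them.
corpus = ["C D E C", "C D E F G", "E F G E C", "G F E D C"]

def train(melodies):
    # Count which note follows which across all training melodies.
    table = {}
    for melody in melodies:
        notes = melody.split()
        for prev, nxt in zip(notes, notes[1:]):
            table.setdefault(prev, []).append(nxt)
    return table

def generate(table, start="C", length=8, seed=0):
    # Walk the transition table, picking successors at random
    # (duplicates in the lists act as frequency weights).
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = table.get(melody[-1])
        if not options:
            break
        melody.append(rng.choice(options))
    return " ".join(melody)

model = train(corpus)
new_melody = generate(model)
```

The output stays "in style" only because every transition it makes was seen in the corpus, which is exactly why the question of whether such a thing counts as creating art is interesting.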
Something indirectly related to whether or not computers would take over, or get around 'firewalls', is motive. Another possible gauge of intelligence isn't whether it can be made to solve problems, but whether it would have the sense of 'initiative' to do so on its own.
As for my own opinion... well, that's part of why I waited to comment again in this thread. I know too much about computers and AI to believe it's really ever possible. At the very least, I don't think I could ever believe it unless I knew precisely how that "intelligence" was achieved. And then... well... that'd beg the question of whether or not true intelligence was ever possible at all. There's a school of thought that consciousness and free will can't exist if you can fully understand and quantify them.
However, just for ha-ha's... I think a good (potential) sign of intelligence would be if it were able to realize that, in a pinch, a pair of pliers could also act as a hammer. Or that a cardboard box can also be a 'table' if that's all you have around to set your drink on.
Earl, I subscribe to the compatibilist determinism view, so I think quantifying consciousness doesn't preclude sentience of the kind that humans have. That being said, I'm pretty sure human intelligence will never be fully quantified, since this mind-body-state thing is in constant flux and extremely dynamic, and probably depends on so many particles being at precise 3-dimensional locations that I'm not sure how you would read that in. I think I heard somewhere that it would take a bunch of Empire State Buildings full of hard drives to hold the information about all the particles in a single human's body. Sure, we can improve our storage capabilities, but how do you scan every single particle's 3D location without disturbing the thing being analyzed?

Perhaps we don't need every particle, and a lot of it can be approximated to a good-enough result, but ultimately consciousness is a very personal experience anyway. I can't even quantify another human's consciousness, or even guarantee that another human IS conscious (let alone robot consciousness). You can quantify the color red as the length of a wave, but I can (probably) never know how it is that you experience that color visually. Perhaps you could stream consciousness into another person's mind and get a good idea, but even then, wouldn't it end up filtering through your own understanding of the color red, and not necessarily the original person's? I digress.
Also, in regard to motive: you would definitely want the robot you dumped millions of dollars into not to just step off the edge of a building or jump into a pool of water (unless it's a swimmer), so you would want to make sure it was capable of basic self-preservation. If it was truly mobile, able to roam free terrain, and capable of learning how to better preserve itself in dynamic conditions, then that's exactly the kind of motive any living thing has to stay at the top of the food chain. However, there's no particular reason that agile, learning robots would need to desire the preservation of other robots, unless these robots were designed (or learned) to build other robots and had a strong inclination to maintain their creations or something. So you might have a robot killing people, but not necessarily banding together with other robots.
Also, I think it would be a bad idea to create more consciousness in a world that already has more than we can responsibly account for. It would create so many ethical dilemmas that our heads would explode. :D
This may have already been said, since I only read the first page of posts, but it would have to be able to learn all the time, like humans are learning something new every day. It would be near impossible to enter everything about everything into an android, since even we know barely anything about the universe around us.