As I said yesterday, I have been asked some odd questions recently, and this one, "Will robots take over the world?", is one of them. First, let's examine the problem by backing up in time a bit... (I'm going to date myself.) I remember a time when we regular people didn't have personal computers; hell, most of us didn't even have access to them at a college. I was one of the lucky ones who had access to the university computer back in 1980. One of the classes I was taking, general Statistics, had a computer, though I'm not sure it was theirs per se. But it was in the same building and we were given access. The point is... during this time, when computers were new to ordinary folk, there was a huge outcry of fear that these mechanical things would one day take over the world. I suppose in one way they have; in the technologically advanced countries of the world, computers are part of everyday life. What would we do without them now? Most of our children couldn't even add a few numbers together without a computer, and the vast majority have no idea what cursive writing is or how to read it. (I've actually witnessed this phenomenon; the child thought it was a different language.)
But this kind of takeover is not what the populace was concerned with. They were thinking that the computers would decide we were a lesser life form and kill us off. Granted, that is what we as humans do. But why would a computer 'think' like that? Computers do not think. They run a program. We program our personal computers to think like us, but they do not think independently. It is our moral values and our actual thought processes that cause us to act the way that we do. A computer has no moral values.
The question is, can a computer be taught moral values? I don't think so. A computer deals in absolutes and in logical probabilities. It can be taught to predict an individual's probable reaction to a given event, but only if it is given all of the parameters. Those parameters must include a significant number of instances when the individual reacted to a similar or identical situation. It's like a shot on a pool table: if you strike the ball here, with the stick at this angle and with this amount of pressure behind the impact, the ball will go in this direction with this amount of force and drop into the pocket. An individual, if presented with the same circumstances, will react to a given situation in exactly the same way every time. But that holds for only one individual and only one situation. The computer can only make a prediction based on past performance. It cannot factor in morality or changes of heart. Therefore it cannot learn morality. It can only learn predictable patterns and base its own reactions on those parameters.
A computer is a lot like a sociopath. It has no emotions. It has no moral compass with which to guide it. It can only imitate an acceptable reaction. It feels nothing. Its decisions are based entirely upon what it takes to achieve its goal.
So, is it possible for a computer to become self-aware? Can it become a new life form and in that manner learn some kind of morality? Perhaps before we examine that idea we should define what it is to be alive. Life is defined as being self-aware, having the ability to reproduce or to replicate (the way a single cell replicates), and consuming energy or matter to survive. Other requirements have slowly been dropped as we have discovered more and more life forms. Does a computer possess these abilities? Well, we know that machines can build machines and that they consume electricity to survive. All that is left is for them to know that they exist. Would that make them alive?
I think the more important question is, if they somehow did become self-aware, why would they want to hurt us? The simple answer: because we programmed them to. To put it simply, a machine cannot tell the difference between a member of the military and a civilian. They can recognize children and can be programmed to recognize uniforms, but uniforms are easily changed. Such a command, even from our point of view, would be stupid. I believe we are having just this debate right now in Washington concerning drone strikes. The planes simply drop a bomb on command, and it kills everyone in the area. There is no moral question. The plane doesn't weigh the consequences. It just does what it is programmed to do.
So, would there be computers on our side, the side of the humans, if it came to such a war? Would home computers and benign operating systems come to our rescue? Personally, I think the whole idea is stupid. Computers already rule most of the world. The computer in our alarm clock tells us when to wake up. The computer in our coffee machine decides what we will drink and at what temperature it will be. The computers in our cars and homes monitor our environments and adjust them. Computers run our lives by enhancing them. As I said earlier, what would the next generation do without them? Would they be as smart as they seem? Computers make us appear smarter, faster, and better. In truth, how many of us still know how to spell or how to form a proper sentence without the help of our writing programs?
The robot is inevitable. Soon it will be the one mowing our lawns and vacuuming our homes. We will welcome these services because they give us more time to relax after our hard day at work pounding on keyboards and sending information worldwide... all by computer. Will these metal servants want to rule the world? In a sense, they already would. They would be programmed to serve us, and that is exactly what they would do. If anything, man will become so dependent on his servants that he will become unable to do for himself. This has happened before; the difference was that the servants were human. Humans were the servants before, and humans will always strive to be free to serve themselves. Robots would not have that desire, nor would they desire to rule any more than they already would.
Will robots one day take over the world? Yes, it's inevitable. Will they kill all the humans when they do? Why would they? Humans are why they exist. Will humans continue to use machines and computers to kill each other? Yes, of course they will. Not everyone has advanced far enough in their education to eliminate this aspect of human nature. Also, not everyone is good. We have already addressed humanity's two warring factions. The Bible is clear that two factions exist and that they will be at war with each other until Jesus manages to build up a big enough army to wipe out the bad guy. Clearly, we have a ways to go.
I'm not going to let the robot thing bother me. I would rather like it if the vacuum ran itself, so long as it didn't run over the cat. Anyway, that was the question posed to me recently. Do you have one for me that you would like me to answer? Let me know.
Did you all notice that I added those pics to the pictures and interests page? I had some difficulty at first, but I've fixed it. I also added a guest sign-in page; it will take your comments. Finally, if you live in the area, please join us at the ELI Group fundraiser and help us raise enough money to buy poor little Elijah a headstone.
Live well and be good to each other.