Unmanned future: do humans fit in?

Christiaan Patterson









Some of us have at one point contemplated the notion of artificial intelligence in absolute control of the military, the stock market or basically everything else. The time has come to start weighing those imaginative ideas against reality, since the near future looks ready to turn Hollywood movies into absolute truth.

The January 2011 issue of “Popular Science” discusses the notion of artificial intelligence being given control with humans acting as supervisors.
Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology, claims that a robot would be a more ethical and humane choice to send into a battle zone than a person. When it comes to emotions such as fear, anger or the desire for revenge, robots are indeed a good choice. But if a robot is programmed to kill all enemies, or all human life that moves within its scanning range, then we are all doomed.

The article states the U.S. already utilizes more than 20,000 robots and unmanned vehicles alongside troops. These computers have been making decisions without the input of human beings since around 1988. Of course technology was not as sophisticated back then as it is now.

The problem that should concern us is not just giving machines greater intelligence but whether humans can maintain complete control. Giving a robot or computer control over every aspect of the military is one thing; being able to take that control back is quite another.

The reason for this concern is embedded in Moore’s Law, the observation that computing power doubles roughly every two years, which explains the evolution of such high-end technological products in just a few decades. With that in mind, Air Force scientists worry that robots could self-evolve at a faster rate than mankind’s technology, making it very possible to have what “Popular Science” calls a “Terminator scenario.”

Famed scientist and author Dr. Isaac Asimov proposed one possible solution. His Three Laws of Robotics underpin the story told in the 2004 movie “I, Robot” starring Will Smith. In summary: a robot may not harm a human or, through inaction, allow a human to come to harm; a robot must obey all orders given by a human unless they conflict with the first law; and a robot must protect its own existence, as long as doing so does not conflict with the first or second law.

Hollywood movies sometimes prove to have scenarios rooted in real life. Attempting to reason with a piece of hardware that consists of wires and processors and harbors no human capacities such as emotion or, most importantly, compassion, would be the same as commanding your toaster not to burn the toast. It never works.

These scientists seem to be under the illusion that these machines can be made completely safe. Machines, robots and computers are products of human hands, and therefore possess every fault people do.