
An Unmanned Future: Where Do Humans Fit In?

Most of us have at some point entertained the notion of artificial intelligence holding absolute control over the military, the stock market or basically everything else. The time has come to start treating those imaginings as more than fiction, since the near future looks poised to turn Hollywood movies into reality.

This month’s issue of Popular Science discusses the prospect of computers and robots being given control through AI, with humans relegated to the role of “supervisors.”

According to the article, more than 20,000 robots and unmanned vehicles already operate alongside U.S. troops in current war zones, by both air and land. Such machines have been making some decisions without human input since around 1988, though the technology was far less sophisticated then than it is now.

Ronald Arkin, who directs the Mobile Robot Laboratory at Georgia Tech, claims that a robot would be a more ethical and humane choice to send into a battle zone than a living human being. When it comes to revenge, emotions such as fear or anger, or the temptation to mutilate the enemy, robots are indeed a good choice.

However, if a robot is programmed to kill every enemy, or every living thing, that moves within its scanning range, then we are all doomed. Its targets could include civilian men, women and children, and even animals, because a robot cannot be compassionate, whereas a human, I hope, still possesses the capacity to feel.

Another problem that concerns personnel across the field, as well as myself, is not just giving machines greater intelligence but maintaining complete control over them. Giving a robot or computer control over every aspect of the military is one thing; being able to take that control back is quite another.

The reason for this concern is embedded in Moore’s Law, which describes the exponential growth of transistors on the processors inside electronics: the count roughly doubles about every two years, hence the evolution of such high-end technological products in just a few decades. With that in mind, scientists with the Air Force worry that robots could self-evolve faster than mankind’s ability to keep up, making what Popular Science calls a “Terminator Scenario” very possible.
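To make that growth rate concrete, here is a minimal sketch of the doubling curve Moore’s Law describes. The starting count and time span are illustrative assumptions, not figures from the Popular Science article.

```python
# Rough sketch of Moore's Law: transistor counts roughly double every two years.
# The starting_count below is an arbitrary illustrative figure.
def transistors(years_elapsed, starting_count=1_000_000, doubling_period=2):
    """Estimate transistor count after a given number of years of doubling."""
    return starting_count * 2 ** (years_elapsed / doubling_period)

# From 1988 to roughly the time this article appeared (about 22 years), the
# curve implies around a 2,000-fold increase in raw capacity.
print(round(transistors(22) / transistors(0)))  # ~2048
```

That is the kind of runaway curve the Air Force scientists are worried about: each generation of hardware arrives before anyone has fully reckoned with the last.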

Another questionable proposal for utilizing robots while keeping them answerable to humans is to issue them three laws. Writer Isaac Asimov created the Three Laws of Robotics, which also appear in the movie I, Robot starring Will Smith. In summary: a robot may not harm a human, or through inaction allow a human to come to harm; a robot must obey all orders given by a human unless they conflict with the first law; and a robot must protect its own existence, as long as doing so does not conflict with the first or second law.
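On paper the three laws form a tidy priority list, something like the purely illustrative sketch below. The Action class and its fields are hypothetical; no real autonomous weapon exposes anything so clean, which is precisely the problem the next paragraph raises.

```python
# Illustrative only: Asimov's Three Laws written as an ordered priority check.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would carrying this out injure a person?
    ordered_by_human: bool   # was this commanded by a human operator?
    endangers_robot: bool    # would this action destroy the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, subject to the laws above.
    return not action.endangers_robot
```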

Hollywood movies sometimes prove to have scenarios rooted in real-life situations. Attempting to impose such rules on a piece of hardware that consists of wires and processors and harbors no human capabilities, such as emotion and, most importantly, compassion, would be the same as commanding your toaster not to burn the toast. It never works.

These scientists seem to be under the illusion that such machines can be made completely safe. Machines, robots, computers and the like are products made by human hands, and therefore carry every fault and flaw that a person does.

It is foolish to believe that imperfection can create a perfect product, and that is what they need to realize before continuing down this potentially dangerous, maybe even deadly, path. There is nothing wrong with inventing machines that aid certain aspects of society alongside humans, but I strongly believe that relying solely on technology to make decisions that humans should be making is one of the greatest mistakes humankind could make.
