Mike Treder writes, "But what if, instead, the recursively improving computer brains of robot warriors allow them to become enlightened and to see the horror of warfare for what it is -- to recognize the ridiculousness of building more and better (and more costly) machines only to command them to destroy each other? What if they gave a robot war and nobody came?"
In contrast, Jordan Pollack writes, "Most people's expectations of robots are driven by fantasy [...] We have to master either software engineering or self-organization before our most intelligent designers can dare play in the same league as Mother Nature [...] In case you missed them, today's most popular robots are ATMs and computer printers."
They both make good points, but Pollack's is much more in line with reality. Robots inspire such flights of fancy that the history of the concept is inseparable from the history of people jumping to absurd conclusions. For example, South Korea is officially drafting a code of ethics to protect robots from human abuse. Its government has jumped right past the question of whether robots can even be abused and is making sure they are protected just in case. This is a curious thing for a country's government to throw its resources and prestige behind, considering there are plenty of people in its own country, a certain nearby country, and all over the world who are being abused by other people right now. It suggests either 1) that the absurdity of what politicians do in South Korea matches the absurdity of what politicians do in America, or 2) that they are more worried about the hypothetical feelings of robots than the actual feelings of humans.
This sort of thing has been going on in one form or another for a long time. Right here in the States, non-professionals are arguing over whether robots should be held responsible for their actions, while the military itself is dragging its feet over even defining the requirements; and if the requirements are not defined, no progress can be made.
People have always worried that "the military" will jump feet first into autonomous killing machines, laughing maniacally the entire time and perhaps dying ironically when their monstrous creations turn on them. The reality is that military professionals take their jobs very seriously and are extremely hesitant to give a machine the authority to do anything at all. Anyone who has had to deal with the results of lowest-bid government contracting is reluctant to trust anything built by a government contractor; giving it a gun and orders to "Git 'em!" is the last thing any professional would do.
Additionally, why would the military ever embrace autonomous soldiers? If they ever worked correctly, they would simply take the soldiers' jobs. The only incentives the military has for adopting robots are to gain capabilities it did not have before (such as reconnaissance) and to protect its own soldiers' lives while increasing the danger to the enemy's. And what do we see? The thousands of robots in use right now are gathering information and defusing bombs. That is the reality of the situation. As communication links improve, you may see teleoperated fighting robots, but I do not think the military will ever embrace autonomous soldiers.