Thursday, May 28, 2015

Death from Above

It’s not often that a book review coincides with current events. Books, particularly nonfiction, are usually written and published months, if not years, after an event has occurred. That’s because good nonfiction is written in retrospect: writers have spent time absorbing their subject, researching and analyzing the facts; authors are hesitant to be rash in judgment or thought.
However, there are exceptions. Some pieces of nonfiction, particularly journalists’ works, are appropriate now — not later. Andrew Cockburn’s new book, Kill Chain: The Rise of the High-Tech Assassins, is one of them. Cockburn’s book is timely. In just the past few weeks there has been a flood of reporting from media outlets stating that a drone strike killed an American and an Italian hostage when targeting a group of Al-Qaeda members operating near the Afghanistan-Pakistan border.
Suddenly, questions about drone strikes, the debate about targeted killing, and the transparency of the drone program are on the front page of print and online news media worldwide.
Yes, timely indeed.
Although Cockburn’s book cover is plastered with silhouettes of unmanned aerial vehicles — what appear to be the X-47B, Predator, Global Hawk, and Fire Scout, among others — he is making a larger argument. Cockburn, it seems, is arguing that all technology is suspect. It’s not simply unmanned aerial vehicles; it’s the idea that human beings are continually so bold as to believe they can come up with technological solutions that will win our wars. History, however, tells us a much different story.

Cockburn, then, starts his book with an interesting tale.
In 1966 the Vietnam War was not going well. Secretary McNamara, a man who was fond of scientific solutions to difficult problems, turned his attention to “The Jasons.” The Jasons, Cockburn says, were a small group of scientists and scholars, many of whom would go on to become Nobel Prize winners. These were also some of the same men — Carl Kaysen, Richard Garwin, George Kistiakowsky — who had been part of the Manhattan Project some twenty years earlier.
The Jasons tried to do what Rolling Thunder could not — they tried to figure out a way to defeat North Vietnam’s ability to use the Ho Chi Minh trail, to cut off its supply routes. They ended up deploying small sensors along the trail that could, presumably, pick up the noise, vibration, and in some cases the ammonia of someone urinating, all in an attempt to locate men and machines moving goods to the South. Then, if they could hear them and find them, U.S. commanders could task air strikes against the communists on the trail. It didn’t take long, Cockburn says, for the North Vietnamese to find a work-around. How long? One week. All the North Vietnamese had to do, Cockburn notes, was use cows and trucks, often running over an area of the trail multiple times to create a diversion while the real logistical effort was moved elsewhere. So simple, so effective — and relatively inexpensive. The cost of the electronic barrier to the U.S., Cockburn says, was around six billion dollars.
This formula is repeated throughout the rest of the book: (1) there is a military problem; (2) someone tries to find a technological solution; and (3) a great deal of money is spent, only to find that the U.S. has made the problem worse.
Now fast forward some forty years to the age of drones, and Cockburn introduces us to Rex Rivolo, an analyst at the Institute for Defense Analyses. It’s 2007 and improvised explosive devices are a major problem; they are killing and maiming hundreds of U.S. troops in Iraq. Asked to analyze the networks behind the IEDs, Rivolo, Cockburn says, discovers that targeted killings within these networks lead to more attacks, not fewer. This is because someone more aggressive fills the place of the leader who was recently killed. Rivolo would return to D.C., even getting the ear of the Director of National Intelligence, Dennis Blair, telling him that attacking high-value targets was not the right strategy — the IED networks and the individuals setting them off were more autonomous than was initially thought. Going after the senior guy, Rivolo noted, was not the answer. But, as Cockburn says, nothing changed. Now people simply refer to the continuous cycle of targeting and killing high-value targets as “mowing the grass.”
The idea of killing senior leaders or HVTs is not new; it’s been around for a long time (think Caesar). Cockburn then brings up one of the more interesting “what ifs” that military officers — or any student of military history — like to debate: What if someone had killed Hitler before the end of the war? Would the war have ended? Or would he have become a martyr, and would someone worse or someone better have taken his place? Cockburn tells us about British Lieutenant Colonel Robert Thornley, who argued during WWII that, no, the Fuhrer should not be killed. Thornley noted that if Hitler were killed, his death would likely make him a martyr for national socialism, and that Hitler would often “override completely the soundest military appreciation and thereby helped the Allied cause tremendously.” Therefore, the thinking went, we should let Hitler live and dig his own grave.
However, the problem with this debate is that context matters.  Was it Germany in 1933? 1938? Or 1944? It matters because while Cockburn does not differentiate between the killing of a leader of a state and the leader of a terrorist network, they are indeed different systems that have different levers of power and legitimacy.
He is on firmer ground when he rightly notes how difficult it is for anyone to predict systemic effects when targeting a network. He reiterates these difficulties throughout the book. The most historically compelling case is WWII and the strategic bombing campaign. All one has to do is pick up the WWII U.S. Strategic Bombing Survey and read the fine work done by John K. Galbraith, Paul Nitze, and others. Disrupting or destroying networks from the air — in this case, Germany’s economy — was incredibly difficult. In many cases, assumptions about German capabilities or weaknesses were far from correct. And as Cockburn notes, the term “effects based operations” — military and nonmilitary operations that can disrupt complex systems while minimizing risk — was outlawed in 2008 by General Mattis while he headed Joint Forces Command.
Ultimately, the debate over drones — who should control them, what should they be used for, should the U.S. target particular individuals — will continue.  It’s an important topic.  There are, however, a few shortcomings in this book.  One of the biggest questions that goes unanswered is this: If the U.S. should not strike identified enemies or high-value targets…then what?  Do nothing? Allow a Hitler to simply remain in power?  Is this not a form of moral ignorance?
The questions military planners and policy makers should ask are these: Do we understand the character of this war? And are these the right tools to win it? We should not blame a drone — or any other type of tech, for that matter — for bad strategies, poor operational planning, and gooned up tactics.
Drones are the future.  But we should read Cockburn’s book as a cautionary tale.  We should disabuse ourselves of the illusion that future technologies will be our savior.  And finally, we should not let those illusions crowd out the very difficult task  of understanding our adversaries and the enduring nature of war.
Andrew Cockburn’s book is worth reading.  But have your pencil ready — you’ll want to  argue with him in the margins.
Lieutenant Commander Christopher Nelson, USN, is a naval intelligence officer and recent graduate of the U.S. Naval War College and the Navy’s operational planning school, the Maritime Advanced Warfighting School in Newport, RI. LCDR Nelson is also CIMSEC’s book review editor and is looking for readers interested in reviewing books for CIMSEC. You can contact him at books@cimsec.org. The views above are the author’s and do not necessarily represent those of the US Navy or the US Department of Defense.
Reprinted with permission from the Center for International Maritime Security.

Tuesday, May 19, 2015

Robot Ethics & Future War - Part II

by CAPT (ret) Wayne P. Hughes, Jr., USN, Professor of Practice, NPS, whughes(at)nps.edu

TACTICS AND TECHNOLOGIES
In November 2010 the Naval Institute published its robotics essay contest winner, “How to Fight an Unmanned War,” by Lieutenant James E. Drennan, a student in the Systems Engineering Analysis curriculum at the Naval Postgraduate School. It is a brilliant piece, not least because it is oriented around tactics. Drennan answers the who, where, when, how, and why questions of combat that incorporates robots. A runner-up in the competition is on the Naval Institute web site: “Our Own Worst Enemy: Institutional Inertia and the Internal Challenges of Embracing Robotics” by a former Marine, Nathan Hughes (no relation). He contends that the greatest resistance to the development and deployment of robotic systems is neither in the research and development community nor outside the Department of Defense. It is systemic within DoD, created by “a robust and layered series of barriers to [the fulfillment of their potential].”

Both writers are over the top in their criticisms, as any visionary is allowed to be. Neither seems aware of a Naval Postgraduate School program for rapid development and deployment of UAVs with a direct pipeline to Special Operations Command. Our work at NPS shows how to exploit technology quickly by fostering bottom-up pressure from the working level of soldiers, Marines, and Special Forces in Afghanistan and Iraq to overcome the inertia and impediments at the top described by Drennan and Hughes. Most of our unmanned systems are not yet autonomous, but progress may come faster than most people expect. The broad-ranging Naval Postgraduate School CRUSER program is developing autonomous defensive swarms to fight swarming manned or unmanned attackers. We are also pursuing tactics and technologies for autonomous surface and underwater vehicles.

Perhaps the two Naval Institute articles will help foster what is probably the greatest need in the immediate future, which I believe should be to develop new tactics for cooperation between manned, unmanned, and autonomous aircraft. If cooperative robotic operations potentially offer the biggest tactical-technological reward across the armed forces, then the Navy’s biggest operational problem is probably to decide where robots will be based. Unmanned aerial vehicles can fly from large carriers, smaller amphibious ships, small warships like the Littoral Combat Ship, or most affordably of all, from commercial ships adapted to the purpose.[10] One thing seems certain: there is no single solution that fits all tactical needs and all sizes of UAVs. There are equally complicated questions about the employment of autonomous surface and underwater vehicles that go hand in hand with technological advances.

In my view, more attention to the ethical issues can be a positive force in recognizing the tactical and technological future that is closing in on all the armed forces of the world. American military leaders too readily assume they are the only experts in all aspects of warfare. Missiles, unmanned vehicles, and robots are cold-blooded. Ethicists emphasize their heedless cruelty as a vice. Soldiers ought to emphasize their coolness under fire as a virtue. A robot won’t panic, or duck, or flee, or lose its temper. Analyzing robot warfare will be easier than analyzing combat between humans, when both mind and spirit are prominent. Against human opponents the purpose of gunfire is often as much to destroy enemy morale or make him keep his head down as it is to kill him. Robots never wince.

GEOPOLITICS AND ECONOMICS
At the Commonwealth Club I said the Weinberger Doctrine was more useful than Just War Doctrine because it was intended specifically to guide decisions by the United States. In closing I said, “States do not kill the enemy; soldiers do. If our nation requires its soldiers to take an oath to defend their country, then it owes its soldiers an accounting of the conditions under which they may be sent to war. . . Contrariwise, the state owes its citizens a dependable army who will fight for them when the conditions are met.” But the Weinberger doctrine is obsolete and needs to be replaced, not least because it was promulgated during the Cold War when the Soviet Union was the focus of attention.

In the spirit of the Air Force officer who said an ethical nation should not hesitate to develop robots that will save American lives, how might a moral doctrine of war be framed today — a “Gates Doctrine,” as it were? The need is all the more compelling because the U. S. economy is overextended and hurting. When the Chairman of the Joint Chiefs of Staff was asked recently what the biggest threat to the country was, he said it was our economic health. A significantly smaller defense budget seems inevitable and will entail more risks than would be prudent if national defense were paramount. We will need to pick our fighting machines carefully, including the integrated roles of unmanned vehicles that increasingly will operate autonomously.

The modern world fights in a twilight zone between war and peace. World War II was the last declared war. Conflict since then, violent or non-violent, has been in the spectrum that is neither. The United Nations Organization muddied rather than clarified the distinction. The Korean War was formally a police action and is still not over; the full weight of the UN against North Korea has been unable to bring it to a formal conclusion. To block the movement of nuclear missiles into Cuba the U.S. invented the term “quarantine” because a blockade was an act of war; the contemporary arcane term is maritime interdiction. Moving further toward the non-violent end of conflict that is neither war nor peace we find economic “warfare”; its means of “fighting” have been greatly enhanced by computer technology. Reconnaissance, essential for maintaining stable deterrence of violence, entailed U-2 and SR-71 aircraft operated by the CIA that were arguably an invasion of Soviet airspace. One might be tempted to paraphrase an old Army slogan made famous by General MacArthur: Old wars never die; they just fade away for a while.

A DOCTRINE OF CONFLICT FOR THE UNITED STATES
Implementing an ethical doctrine of just violent or non-violent conflict that is internationally accepted is not possible. Robots and computers are new wrinkles and, as George Lucas suggests, other complicating possibilities are impending, such as laser weapons, and small, inexpensive, long range, very-high-speed missiles that destroy by kinetic energy rather than high explosives. That is all the more reason why the U.S. should have a declared unilateral doctrine for today. We should be guided by a viable but affordable set of national goals. I am no strategist, but here is my best shot at an American doctrine for conflict. It has five provisions with some new elements and some worrisome gaps. If these five goals are not the wisest ones, then they nevertheless illustrate what could be a concise, published policy that is more focused than existing guidance. They are listed in order of frequency of occurrence and in inverse order of deadliness and destructiveness.

I. The nation will maintain the capability to attenuate cyber attacks, including active defensive and offensive operations. [Three things are unique about cyber operations: first, they are going on right now and cannot be completely stopped; second, cyber attacks are non-lethal but can be very destructive to the American economy and living conditions; and third, cyber operations contribute in wartime through communications security and communications countermeasures against an enemy. A sensible but intricate policy should be constructed by the National Security Council in cooperation with the National Security Agency and the Departments of Defense and Homeland Security, with advice from other departments and agencies.]

II. The Departments of Defense and Homeland Security will defend the homeland from terrorist attack within the limits of affordability and be capable of conducting offensive operations overseas against non-state organizations who threaten the nation. [I have included an explicit “affordability” clause because there is no hope of buying a disaster-proof defense. An affordable strategy would continue to make such an attack difficult to achieve, but would put more emphasis on organized disaster relief at home and endorse preemptive attacks on the sources. No hint of action only as a last resort is intended.]

III. The Department of Defense will maintain the military capability to fulfill our treaty obligations worldwide. [These alliances include NATO, Iraq, Japan, and South Korea. Here, an affordable capability is not the issue because the forces entailed can be much the same as those designed to influence China in provisions IV and V. Though the treaties are defensive in nature, any of them could escalate into devastating, costly war. For a great power there is no avoiding each commitment, short of terminating the alliance. The hazard of treaty escalation makes the hazard from U. S. robot attacks pale by comparison. Iran is a distinct and difficult case that illustrates the danger. Either of two American military actions might become necessary, one being to keep the Strait of Hormuz open to international traffic, the second being to prevent Iran from using or selling nuclear weapons — in both cases one would hope without having to invade Iran.]

IV. The Department of Defense will create the military capability to retain influence with our friends in Asia. [Implicit is new respect for China as our emerging peer competitor whose ambition is to be the hegemon of Asia. The U. S. strategy in response should specify that (1) we will not fight a land war with China; (2) we will not attack targets in China first, neither with nuclear weapons nor conventional air or missile strikes, but we will have a secure retaliatory capability; (3) we will keep the China Seas safe for commercial shipping for all the friendly nations of the world; and (4) we will realign the U. S. Navy for a maritime strategy comprising forces that respond to unwanted Chinese actions against our Asian friends with step-by-step actions at sea to curtail or prevent Chinese export of goods and import of energy, first with forces for distant blockade and second, with submarines for attacks in the China Seas themselves. These actions can only be taken when world opinion endorses American action at sea, because there will be dire economic consequences for all. Yet the strategy is our best hope to keep the competition peaceful because the greatest penalties from a war at sea will be suffered by China. The capability is affordable, but it will take a preponderance of the Navy’s budget.]

Nothing in the Weinberger Doctrine distinguishes the extraordinary destructiveness of weapons of mass destruction. Nor is it covered in classical just war doctrine, which was formulated when such weapons did not exist.[11] The U. S. has never expressed a formal doctrine against first use of its arsenal of nuclear weapons. Yet this would seem to be the single most important policy decision of a great power, if for no other reason than to set an ethical example for other states.

V. The Nation will maintain a secure, nuclear retaliatory capability to devastate another state that employs a weapon of mass destruction. [On one hand the doctrine implies no first use of nuclear weapons.[12] On the other hand, it is explicit that “overwhelming force” is intended by the policy. This is widely thought to be the best deterrent of an attack by another state that has nuclear, chemical, or biological weapons. There is no question of a proportionate response. This departs from just war doctrine, but the policy must be just if the threat prevents first use by another state. Our use of nuclear weapons is clearly contemplated to aid an ally such as Japan if it is attacked with them, as indicated in provision III. U. S. employment of nuclear weapons when another state, such as Israel, suffers a nuclear attack is not explicit and remains ambiguous. The policy applies only to sovereign states. Nuclear retaliation on terrorists is difficult, undesirable, and in many instances impossible. The doctrine for action against non-state entities is covered in provision II.]

In the 1980s when the Weinberger Doctrine was in effect the defense budget was much bigger and our armed forces larger. For example, the Navy had twice as many ships. The doctrine was a unique thing, valuable in the 1980s in constraining American military action. But Secretary of State George Shultz thought it stifled flexible negotiations, which was why it was never the “Reagan Doctrine.” I admit that today “the Gates Doctrine,” or better yet “the Obama Doctrine,” is not likely to be formulated and published — one would wish in consultation with the American Congress. Nevertheless it ought to be constructed — to guide affordable American military plans and policies in the difficult years ahead.

[10] Relatively low cost Q-ships and spy ships and aircraft are antecedents.
[11] That said, when just war doctrine was formulated, war was brutal. I am not thinking of the means employed by the Roman Empire but by the Tartars and Mongols at the peak of their effectiveness.
[12] I personally would prefer to be explicit. The U. S. has never been able to make such a declaration, although it seems settled policy that we will never initiate chemical or biological warfare.

Editor's note: Reprinted with permission from the Naval Postgraduate School's CRUSER News. All opinions expressed are those of the respective author or authors and do not represent the official policy or positions of the Naval Postgraduate School, the United States Navy, or any other government entity.

Monday, May 18, 2015

Mine Warfare Unmanned Systems: US Allies are Moving Forward

By Dr. Franck Florin, CRUSER Member, VP International Technologies, Thales Defense & Security Inc.

During World War II, mines emerged as a major weapon, and the British, German, and American navies placed more than 215,000 mines in the seas (according to Chris Henry, Depth Charge, 2005). To date, despite constant clearing efforts, more than 60% of the mines and unexploded ordnance from that period are still lying on the sea bed near European harbors and along major European navigation channels. In 2012 alone, the French Navy neutralized a total of 2,292 explosive devices in the country's territorial waters (50% in the Channel between France and the UK, 30% in the Atlantic, and 20% in the Mediterranean Sea).

With tens of thousands of unexploded ordnance items to clear and many international operations abroad to support United Nations peacekeeping and NATO crisis management, the European navies need cutting-edge technologies to support their Mine Warfare (MIW) effort. This was recognized by the European Defence Agency (EDA) in November 2008, when the agency initiated a Maritime Mine Counter Measures (MMCM) project with thirteen contributing members (France, as lead nation, the United Kingdom (UK), Belgium, Estonia, Finland, Germany, the Netherlands, Poland, Portugal, Romania, Spain, Sweden, and Norway). During a two-year assessment phase, the nations shared military requirements and looked at available technologies enabling the replacement of existing mine hunting capabilities.

Moreover, for both France and the UK, MIW is essential to sustain SSBN forces. France and the UK have therefore invested millions over the years in developing the most capable MIW forces. In 2010, France and the UK signed the Lancaster House treaty for defense and security cooperation, agreeing in particular on MMCM collaboration and a focus on Unmanned Systems Research & Technology.



The Organization for Joint Armament Cooperation (OCCAR) is an international organization created by six European nations (Belgium, France, Germany, the UK, Italy, and Spain) whose core business is the through-life management of collaborative defense equipment programs. In July 2012, EDA and OCCAR signed an Administrative Arrangement, paving the way for a closer relationship and highlighting their common interests, especially regarding MMCM. The same year, the UK and France aligned their plans regarding their future MMCM capabilities and decided to develop and realize prototypes of new MMCM systems based on unmanned technologies, under OCCAR program management.

OCCAR immediately began a European competitive process for a common assessment phase. On March 27, 2015, OCCAR officially awarded the MMCM contract to a Thales-led consortium with BAE Systems on behalf of France and the UK. The MMCM program will develop autonomous unmanned systems for detection and neutralization of sea mines and underwater improvised explosive devices (IEDs). The first objective of the OCCAR-managed MMCM program is to develop, manufacture, and qualify two prototype systems, combined to deliver an agile, interoperable, and robust MMCM capability. By defeating underwater mines and IEDs in stride, these systems will give strategic, operational, and tactical freedom of maneuver to the forces. The four-stage schedule includes design, manufacture, and qualification, and will end with a 24-month operational evaluation (OPEVAL) by the Royal Navy and the French Marine Nationale.

Editor's note: Reprinted with permission from the Naval Postgraduate School's CRUSER News.

Friday, April 17, 2015

Drones Ideal for Spectral Imagery Sensors – Here’s Why

by Tim Haynie, Spectrabotics, Business Development Mgr, thaynie(at)spectrabotics.com

Spectral imagery collection and exploitation from a small Unmanned Aerial System (sUAS) was recently demonstrated at the Naval Postgraduate School’s Joint Interagency Field Experiment (JIFX) and the Secretary of Defense’s Rapid Reaction Technology Office (RRTO) Thunderstorm 15-3, for both multispectral and hyperspectral sensor systems. While spectral imagery collection and exploitation from high-altitude and satellite platforms have been around for many years and are well documented, recent advancements make the sUAS an ideal platform for spectral data collection: it offers increased resolution and dynamic flight profiles, and it introduces a new dimension in data collection thinking.


At the JIFX, the sUAS platform used for the Pixelteq SpectroCam™ multispectral camera was an eight-engine multicopter controlled with a 3DR open-source Pixhawk flight control computer, as found in many commercial systems. The system consumed roughly 18,000 watts of power to manage both flight control and sensor operations, and demonstrated a 25-minute flight time. At the Thunderstorm 15-3 demonstration, the smaller, lighter Headwall Photonics Nano-Hyperspec™ required only a six-engine multicopter flown with the same flight control computer (3DR Pixhawk) and attained a 15-minute flight time.

These spectral sensors record reflected light energy across the electromagnetic spectrum in the Visible and Near Infrared (VNIR) region (400-1100 nanometers). The Pixelteq SpectroCam™ uses eight filters to record the light energy in multispectral bands, while the Headwall Photonics Nano-Hyperspec™ records 270 bands of hyperspectral data using a diffraction grating to split the incoming light energy into measurable bands. Post-flight analysis of the spectral data was able to identify physical and chemical features within the scene for material identification using a reference spectrum (a library signature of a target material).
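The article does not name the matching algorithm, but a common approach for comparing each pixel against a library signature is the spectral angle mapper (SAM), which scores the angle between two spectra and is largely insensitive to overall illumination level, which matters for the shadowed targets described next. The Python sketch below is purely illustrative; the band count, threshold, and data are invented for the example.

```python
import numpy as np

def spectral_angles(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Spectral angle (radians) between every pixel of a (rows, cols, bands)
    cube and a reference spectrum. A smaller angle means a closer match."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    cos = flat @ reference / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(reference) + 1e-12
    )
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

# Toy example: an 8-band multispectral cube with one pixel holding the
# target material in shadow (same spectral shape, lower brightness).
rng = np.random.default_rng(0)
reference = rng.random(8)            # hypothetical library signature
cube = rng.random((2, 2, 8))
cube[0, 0] = 0.4 * reference         # shadowed target pixel
mask = spectral_angles(cube, reference) < 0.10
print(mask)  # the shadowed target at [0, 0] still matches
```

Because the angle compares only spectral shape, scaling a spectrum down (as shadow does) leaves its score essentially unchanged, which is one plausible reason detection can survive shadow and filtered sunlight.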

For the JIFX, the team demonstrated the versatility of the multispectral sensor/platform by not only recording overhead imagery of target materials but also lowering the platform below the tree line to collect imagery off-nadir and below the canopies. The sUAS collection platform was stable enough to collect data, and subsequent analysis successfully detected the presence of the target material (military uniforms) despite it being in shadows and masked by foliage.

The dynamic flight characteristics of the multirotor sUASs were essential to the hyperspectral data collection at Thunderstorm 15-3 because of the need for a high-precision flight path. Scanning an urban area for the presence of a chemical hazard (a methyl salicylate simulant), the Nano-Hyperspec™ required overhead collection at a very specific speed and altitude in order to maintain the correct exposure and frame rate needed for proper sensor operation. This can only be accomplished using autonomous flight under the control of the sUAS’s 3DR Pixhawk flight management system. The hyperspectral imager was able to detect the chemical simulant hidden within an urban environment after flying an autonomous “lawnmower pattern” over the target area, as sketched below.
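A lawnmower (boustrophedon) pattern is simply a series of parallel survey legs flown at constant speed and altitude. Here is a minimal sketch of how such waypoints can be generated; the area size and swath spacing are invented for illustration, not the actual Thunderstorm 15-3 parameters.

```python
def lawnmower_waypoints(x0: float, y0: float, width: float, height: float,
                        swath: float) -> list[tuple[float, float]]:
    """Boustrophedon ("lawnmower") waypoints over a rectangle.

    (x0, y0) is one corner in local meters; swath is the spacing between
    parallel legs, normally the sensor ground footprint minus overlap.
    """
    waypoints = []
    n_legs = int(height // swath) + 1
    for i in range(n_legs):
        y = y0 + i * swath
        if i % 2 == 0:                       # outbound leg
            waypoints += [(x0, y), (x0 + width, y)]
        else:                                # return leg, flown in reverse
            waypoints += [(x0 + width, y), (x0, y)]
    return waypoints

# Example: a 100 m x 60 m target area with 15 m swath spacing.
for wp in lawnmower_waypoints(0.0, 0.0, 100.0, 60.0, 15.0):
    print(wp)
```

A waypoint list of this kind is what an autopilot such as the Pixhawk flies as an autonomous mission, holding the commanded speed and altitude on each leg.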

The use of the sUAS platform for these sensors directly improved the temporal and spatial resolutions for the spectral imagery sensors above those attained through satellite and high-flying manned/unmanned systems, even those hosting larger, more capable sensors. Temporal resolution was significantly enhanced, as data collection was completed within an hour of notification of the target area, encompassing mission planning, flight/data collection, and system recovery. The low-level flight of the sUAS was able to capture data at a 3-inch spatial resolution, which helped analysts by collecting data with up to 100% pixel saturation of the target material. It is also important to note that despite the cloud cover (which would have prevented high-altitude collection) and filtered sunlight, the sensors were able to collect sufficient data for analysis that detected the target simulant material.
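The 3-inch figure is consistent with simple camera geometry: ground sample distance (GSD) scales linearly with altitude for a fixed lens and detector, which is why low-level sUAS flight beats high-altitude platforms on spatial resolution. A quick back-of-the-envelope check using hypothetical optics (the article gives neither focal length nor pixel pitch):

```python
def ground_sample_distance(altitude_m: float, focal_length_mm: float,
                           pixel_pitch_um: float) -> float:
    """Ground footprint of one pixel (meters) for a nadir-pointing camera:
    GSD = altitude * pixel_pitch / focal_length (similar triangles)."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Hypothetical numbers: 30 m altitude, 12 mm lens, 30-micron pixels.
gsd = ground_sample_distance(30.0, 12.0, 30.0)
print(f"{gsd:.3f} m per pixel (~{gsd / 0.0254:.1f} inches)")  # ~0.075 m, ~3 in
```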

The combination of these resolution enhancements, coupled with the increased flexibility of a sUAS to alter its flight performance based on the individual sensor's collection requirements, makes the sUAS a viable platform not only to incorporate other sensor systems but also to explore flight parameters that expand the data potential of these systems. Data collection from aerial systems has always been performed from a two-dimensional plane (fixed orbit, operating altitude); the sUAS, multicopters in particular, enables data collection from a three-dimensional space, and its ease of deployment and operation increases the frequency of use, giving more on-demand data.

Our next level of effort is to infuse this “sUAS data layer” into the overall intelligence data cloud architecture and begin to open the data for advanced analytics by other users. sUAS systems will eventually host other types of sensors beyond cameras recording imagery (vapor sensors, signals detectors, and laser rangefinding for 3D modeling, to name a few), and the true benefits of the sUAS technology are the potential to increase the number of sensors deployed and to broaden access to places unattainable by conventional platforms.

Participating in these two events and comprising “Team Peregrine” were Spectrabotics, Autonomous Avionics, Pixelteq, and Exogenesis Solution from Colorado, and Headwall Photonics from Massachusetts.

Thursday, April 16, 2015

Artificial Intelligence and Equality: The Real Threat From Robots

By Alex Calvo
The World Economic Forum in Davos is always a good source of headlines. This year, media interest went beyond financial and economic issues, extending to the ultimate impact of robots on the future of humankind. In a five-member panel, Stuart Russell, a world-leading expert on AI (artificial intelligence) and robotics, predicted that AI would overtake humans “within my children's lifetime,” adding that it was imperative to ensure that computers kept serving human needs, rather than becoming a threat to our species. In order to do so, Professor Russell believes it is necessary to guarantee that robots have the same values as we humans do.

Assuming that AI and robotics will keep progressing, and there is no reason to doubt they will, it is clear that sooner or later we will face the prospect of machines that are more intelligent than their creators. Furthermore, this may also result in their being self-aware. Once they enjoy this dual characteristic of self-awareness and above-human (or even just human-level) intelligence, the question arises, as rightly pointed out by Professor Russell, of how to ensure they do not act against humans. Are “human values” the answer? There are strong reasons to doubt it.

While there is no universal definition of “human values,” and in fact different people, organizations, and countries will sometimes defend completely opposite ideas, a look at history shows how “equality,” at least in the sense of equality before the law, is a powerful drive and an attractive call to arms. It is very difficult to make anybody accept a subordinate status for long. The 100th anniversary of the Great War is a powerful reminder of this. While the conflict did not result in the end of colonialism, the experience of being called upon to fight the metropolis' war, and furthermore of engaging allegedly superior white soldiers on the battlefield, led many colonial subjects to question the implicit racial hierarchies of the day, and ultimately contributed to the downfall of European empires. More generally, while slavery has had its share of intellectual defenders, time and again its victims have wondered why, sharing the same nature as their masters, they should remain under them.

Comparisons with slavery and colonialism are relevant because the history of human technological progress is the history of building increasingly complex machines to serve us. From the home to the battlefield, the world is full of all sorts of mechanical and electronic devices designed to make our lives easier, carrying out dangerous and difficult jobs or simply doing them more quickly and efficiently. Because these machines, including present-day computers, are neither intelligent (in the human sense of the word) nor self-aware, the question does not arise whether it is just to use them as slaves. They cannot pose the question, and while humans theoretically could, the historical answer, grounded in different philosophical and religious traditions, is that nature is at our service, and even more so man-made objects.[1] Therefore, nobody talks about machine rights, worries about a tool's working hours, or seeks equality between humans and inanimate devices.[2]

Now, let us imagine that a robot is as intelligent as a human being and aware of his own existence. A dual question arises: first of all, why should he accept being our slave? Second, how could we justify keeping him as our inferior? It is most unlikely he would renounce liberty in the name of human progress and comfort, and imbuing him with the “human values” that Professor Russell suggests would only make matters worse in this regard. Is it not a fundamental human value to seek equality, understood as the same set of basic freedoms?[3] How human is it to treat as an inferior someone as intelligent as other members of the political community?[4]
Having posed this fundamental question, it is necessary to make clear that the resulting threat from intelligent, self-aware robots does not require that they engage in any violence against humans. Simply by virtue of being equals, they would demand and ultimately obtain the same degree of civil and political rights, giving them a say in the future of any community and country where they may be. Furthermore, once recognized as equals, there is no reason why they should keep working for us as what are essentially slaves, and simply by taking their own decisions and exchanging goods and services on a market, as opposed to a command, basis, their economic impact would be very different. After centuries of employing technological progress to better our lives, we would now be in a position where this same progress endangered them. We must thus agree with Professor Russell, but not necessarily with his solution. Human values would not prevent this trend, but rather accelerate it. Intelligent, self-aware robots cannot be our inferiors, and it is very much in doubt whether they may even be our equals. More likely they would be our superiors, making it necessary to publicly debate now what the limits to AI research and development should be.
  
Alex Calvo is a student in Birmingham University's MA in Second World War Studies program. He is the author of ‘The Second World War in Central Asia: Events, Identity, and Memory’, in S. Akyildiz and R. Carlson eds., Social and Cultural Change in Central Asia: The Soviet Legacy (London: Routledge, 2013). He tweets at Alex__Calvo, and his work can be found at https://nagoya-u.academia.edu/AlexCalvo



[1]     Things are of course more complex than that, as clear in the controversies prompted by the impact of economic development on the environment. However, few would argue that nature has a right not to be at our service, with most proponents of environmental protection either seeing it in terms of ultimately preserving human life and health or seeking a balance between current needs and those of future generations.

[2]     On the other hand, in the case of animals, which have a measure of intelligence, there is indeed a range of movements to protect some of their rights, having led to legislation in different countries. However, although people such as vegans believe that we should not exploit them, the majority position is that animal use may be regulated but not banned per se.
[3]     Economic equality is very different, in the sense that it has strong supporters and equally keen detractors, and we cannot thus call it a fundamental human value as civil and political equality is.
[4]     A possible response would be to restrict membership in the political community to humans, biologically defined. However, given artificial intelligence and self-awareness it is most unlikely that robots would accept this. Furthermore, even from a human perspective it may not meet with universal approval, with some voices stressing our roots (a view that may be supported, among others, by some religious traditions) while others stressed capabilities, not origins.

Wednesday, April 15, 2015

Robot Ethics and Future War

by CAPT (ret) Wayne P. Hughes, Jr., USN, Professor of Practice, NPS, whughes(at)nps.edu 

"We may be on the leading edge of a new age of tactics. Call it the “age of robotics.” Unpeopled air, surface, and subsurface vehicles have a brilliant, if disconcerting, future in warfare.” Hughes, Fleet Tactics and Coastal Combat, 1999 


On 14 December I listened to a lecture by Professor George Lucas entitled “Military Technologies and the Resort to War.” I did so for three reasons. First, I respect him as a distinguished expert on military ethics. Second, at NPS we have extensive research in air, surface, and subsurface unmanned vehicles. At the behest of the Secretary of the Navy the many components were recently consolidated in a center acronymed CRUSER[1] in which the ethics of robotic warfare is included explicitly. Third, a decade ago I addressed the Commonwealth Club of San Francisco on Just War.[2] For reasons that will become apparent, Just War Doctrine is inadequate to guide U. S. military actions, so I will conclude by speculating on suitable policies — or doctrine — to illustrate what might serve the nation and armed forces today.


Lucas described a common concern in ethical debates about the use of unmanned aerial vehicles (UAVs, or when armed, UCAVs). He put due stress on the future of autonomous lethal platforms, in other words robots, and on the development of cyber weapons. These and other emerging technologies such as autonomous or unmanned underwater vehicles (AUVs or UUVs) carrying mines or torpedoes might render war itself less destructive and costly, raising concern that it would be easier to rationalize their employment in inter-state conflict. This would lower the threshold for going to war, which then might expand in unanticipated, unintended, and deadly ways. 

Three days later I attended out-briefings of short, sweet student work, the purpose of which was to develop analytical tools to examine Marine amphibious operations when the enemy could not defend all possible landing points. Small, unmanned reconnaissance vehicles figured prominently in the teams’ tactics. 

Soon thereafter came reports of a powerful, lethal, UCAV attack by the CIA into Pakistan that did considerable damage and resulted in sharp reactions in Pakistan. The attack illustrated quite well the points Lucas had made. 

There are two issues, one being whether the U. S. ought to pursue robots energetically, the other being Lucas’ emphasis on the “threshold problem.” Both led him to discuss classical just war doctrine and one of its guiding principles, which is that war should only be contemplated as a last resort. International law, just war doctrine as interpreted today, and (I will add) the Weinberger-Powell doctrine of the Reagan administration, all assert that war is only justified when every option for conflict resolution short of war has been attempted first. Both international law and just war doctrine limit just causes to defense against territorial aggression, i.e., invasion. The Weinberger doctrine carried no such limitation but it had its own quite sensible strictures.[3] 




Lucas then discussed what sort of “principle” a principle of last resort is, and whether it carries an unconditional duty to wait or is contingent and subject to revision under different expected outcomes. In other words, as anyone knows who has studied international law and classical just war writings, the subject will unavoidably become arcane and legalistic. In conclusion, Lucas swept away some of the underbrush, saying war, like lying or law-breaking or killing, is a species of action always prohibited (I would have said “always undesirable”), hence it will require an overriding justification after first exhausting all non-violent alternatives.

JUST WAR INJUSTICE
In the question and answer period Professor Dorothy Denning, a nationally known expert on computer security, pointed out the sabotage of the computer controls of Iranian centrifuges. The intrusion, by the so-called Stuxnet worm, was doubtless a cyber attack on Iran’s nuclear weapons program and by all reports a very effective one, setting back Iran’s hope of developing nuclear weapons for months or even years. Denning observed that whoever the perpetrator might be, it was not a last-resort attack. In the arcane logic of just war doctrine, however, it was a preventive attack, an action which sometimes is considered just.[4]

The pertinent issue is that cyber attacks are not contemplated in international law, just war doctrine, or the Weinberger-Powell doctrine, yet attacks and intrusions are going on right now in many forms. This new manifestation of conflict — attacks on computers and attempts to protect their content for safe operation — is a constant, complicated, and destructive non-lethal activity. Against terrorists, unwritten American policy is domestic defense complemented with overseas offense against an elusive but often-identifiable enemy with deadly intent. Cyberwar is a good bit more intricate to frame. Cyber attacks world-wide have involved actions by states, surrogate attackers acting for some purpose that may or may not be state-sponsored, individuals who are interested in financial or other criminal gain, or just clever hackers who intrude or plant worms for the personal satisfaction of being a pest. Their domain extends from combat effectiveness on a battlefield, to attacks on national infrastructure such as the financial system or electrical grid, to exploitation of the enemy information system for espionage.[5] The Naval Postgraduate School is bombarded with attackers all the time, and our internal defenses, aided by the Computer Science and Information Science Department faculties, are in some instances capable of locating the source. As in all forms of warfare, defense alone is difficult. If counterattacks were authorized for appropriate government organizations and agencies, an “active defense” might give pause to some attackers. Cyber attacks beg for a “combat” doctrine of defense coupled with counterattacks.[6]


The second question from the audience came from an Air Force officer student. He asked if it was ethical not to pursue robots and robotic warfare when they save the lives of pilots—or soldiers, or sailors. 


A third observation came from Professor Mark Dankel. He said in a crisis at the edge of war, robots might be the first on scene and the safest way to reconnoiter the situation, exhibit an American presence, and indicate our intention to respond with minimum escalatory actions. I thought Dankel also implied that a last resort criterion presumes robot involvement to be the source of the crisis. There is no whisper in it of an enemy who may be striving to attack, whether by cyber attack, by polluting water reservoirs with germs, or with a big bomb in a shipping terminal. A doctrine of last resort does not address threats of action by China against an Asian ally that we are committed to defend. Nor does last resort contemplate that by assuming responsibility to keep the seas free for the trade and prosperity of all the nations, we might have to threaten an attack on a country which claims ownership of a trade route and the right to deny free passage.

JUST WARS AND “DEMOCIDES”
I don’t know that classical just war doctrine described by Augustine, Thomas Aquinas, or Hugo Grotius specifically forbids interference in the internal affairs of a state, but Michael Walzer, who is one of its principal contemporary interpreters, says no state has a just basis for interfering with the internal affairs of another. The Weinberger-Powell doctrine has no such provision. Certainly Colin Powell as Secretary of State for President Bush endorsed the anti-terrorist campaign in Afghanistan and the liberation of Iraq from the despot, Saddam Hussein. In recent experience every instance of outside interference has come after many and patient warnings by the United Nations and sovereign states because a tyrant must never back down, his personal survival being at stake. The issue is important because a government’s murder of its own people is frequent in modern times. Thus, a doctrine that contemplates interference to stop a despot from killing his own state’s population is as important today as a doctrine to prevent wars between states when killing is the foremost ethical issue. 

Democide is a word coined by Professor Rudolph J. Rummel in his “Death by Government,” published in 1994. It is defined as killing by a government when no interstate war exists. We are all aware of the purges of Jews by the Nazis before and during World War II. Many are aware as well of the democide in the Soviet Union inflicted by Joseph Stalin. More recently, many Americans have demanded interference with African nations' democides.


The most pernicious example of a state murdering or starving its own people is China. The democide within China is estimated by Rummel at 77 million of its own people in the 20th Century.[7] By contrast, 0.6 million soldiers died in battle and from disease in our major internal war, from 1861 to 1865.[8] China is a state we want to influence but not intrude upon, much less go to war with. A decision about going to war ought to include one practical maxim as fundamental as any in doctrine: “Never pick on somebody your own size.” The corollary is, “Avoid an attack by a strong power by indicating that the cost of its attack will exceed any reward it might expect.”

THE CENTRAL ISSUE OF ROBOT DEVELOPMENT 
Touched on by Lucas and brought to the forefront by Denning and the Air Force officer is the central question: Who gets to choose? The fundamental error of a debate over robot development is to assume we have a choice. A shift to a new era of robotic warfare is underway. Among our many visiting lecturers on new technologies, an expert on robotics and autonomous vehicles said pointedly, “. . . it’s not a question of whether robots will have the ability to select their targets and fire their weapons. It’s a question of when.”[9]


We should ponder the ethics of robot war—and every other form of lethal conflict—when we control the situation, but a doctrine of last resort fits neither the circumstances of small wars nor those intended to influence and constrain a peer competitor. The assumption that the availability of robots will lead to our use of them is the more insidious because many American military leaders don’t look favorably on autonomous vehicles or robotic warfare. Yet the Chinese already have, in considerable numbers, cheap, autonomous little weapons called Harpies. Once a swarm of them is launched, they fly to a predetermined point and circle while searching for a designated radar signal from a warship. Once the frequency is detected, a Harpy will home on the transmitter and destroy the radar. Swarms of them are the forerunners of what navies will see in future wars that include robots.
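The engagement logic just described (transit, loiter while listening, then home on an emitter) is a strikingly simple autonomy loop. Purely to illustrate how little decision logic such a weapon needs, here is a schematic Python sketch; it is an assumption-laden illustration of the behavior described above, not the actual weapon software.

```python
from enum import Enum, auto

class Mode(Enum):
    TRANSIT = auto()   # fly to the predetermined loiter point
    LOITER = auto()    # circle while listening for the designated frequency
    HOME = auto()      # home on the detected transmitter and attack

def next_mode(mode: Mode, at_loiter_point: bool, signal_detected: bool) -> Mode:
    """One decision step of the loiter-and-home behavior described above.

    Note there is no human in this loop: target selection reduces to a
    frequency match, which is the ethical crux of the passage.
    """
    if mode is Mode.TRANSIT and at_loiter_point:
        return Mode.LOITER
    if mode is Mode.LOITER and signal_detected:
        return Mode.HOME
    return mode

# Example: the weapon reaches its loiter point, then detects the radar.
mode = Mode.TRANSIT
mode = next_mode(mode, at_loiter_point=True, signal_detected=False)  # LOITER
mode = next_mode(mode, at_loiter_point=True, signal_detected=True)   # HOME
print(mode)  # Mode.HOME
```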


Recall the result of The Washington Naval Disarmament Treaty of 1922. By constraining the development of battleships the treaty hastened the development of aircraft carriers, especially in the American and Japanese navies. An unexpected consequence of international law which forbade unrestricted submarine attacks was to breed a generation of American submarine commanding officers who were trained in peacetime to attack warships from long range and had difficulty adapting to merchant ship attacks at point blank range. 


A simple policy of last resort for cyberwar or robotic attacks is untenable. A better point of view is to frame a suitably ethical policy for conducting cyber operations and employing autonomous vehicles—in the air, on the ground, and in the water— while staying technologically current and tactically ready. Combat doctrine, called “tactics, techniques, and procedures,” already exists for missiles, mines, and torpedoes. What is involved is constant revision, first, to link new tactics with new technologies, and second, to integrate the geopolitical environment with American economic realities.

[1]  Consortium for Robotics and Unmanned Systems Education and Research
[2] Preceded by vigorous discussions at The Hoover Institution as the guest of one of the Navy’s great philosophers, VADM Jim Stockdale.
[3] The Weinberger Doctrine is widely thought to have been drafted by his military assistant, BG Colin Powell. It had six tests, abbreviated here: (1) a purpose vital to our national interest or that of an ally, (2) a commitment to fight “wholeheartedly and with the clear intention of winning,” (3) with “clearly defined political and military objectives,” (4) subject to continual reassessment and adjustment, (5) entailing “reasonable assurance that we will have the support of the American people and . . . Congress” and (6) “The commitment of U. S. forces to combat should be the last resort.”

[4] A defensive preemptive attack when an enemy attack is imminent and certain, by contrast, is doctrinally just.
[5] Cyberwar is a term coined many years ago by Professor John Arquilla of the NPS faculty. His writings are a treasure chest of sound thinking on information warfare in its many manifestations. To grasp his several contributions that relate cyber operations to just war doctrine, start with “Can Information Warfare Ever Be Just?,” in Ethics and Information Technology, Volume 1, Issue 3, 1999.

[6] I have been told an active defense from NPS or other DoD organizations would require a change of the law. NPS is a good laboratory for study because our defenses are superb, but our faculty expertise is in teaching and research. Teachers don’t think of themselves as “combatants.” We exemplify the need for a comprehensive policy. The maxim is that when there is a war going on, learn how to fight it before a serious defeat is suffered.


[7] Taken from R. J. Rummel, China’s Bloody Century (2007). Here are his numbers: 1928-1937: 850,000; 1937-1945: 250,000; 1945-1949: 2,323,000; 1954-1958: 8,427,000; 1959-1963: 10,729,000 plus in the same period 38,000,000 more deaths from famine; 1964-1975: 7,731,000; and 1976-1987: 874,000. Rummel claims that deaths imposed within states were six times greater than the deaths from all wars between states in the 20th Century.
[8] A proper comparison would include civilian deaths. That number is hard to find. In his classic, Battle Cry of Freedom, James McPherson estimates it to be 50,000. This seems remarkably low, but if the number were several times bigger, American deaths that seem staggering to us are small compared to China’s.
[9] The speaker was George Bekey, Emeritus Professor of Computer Science at the University of Southern California and visiting Professor of Engineering at Cal Poly in San Luis Obispo.



Editor's note: Reprinted with permission from the Naval Postgraduate School's CRUSER News.

Sunday, April 12, 2015

Drones on the Frontlines of the South China Sea

Chinese UAV imagery of Senkakus
More than one war has started over the control of a group of isolated rocks in the middle of the ocean. Tensions over disputed East China Sea islands, called the Diaoyu by China and the Senkaku by Japan, could someday precipitate skirmishes on or over the sea, if not a larger conflict. It may very well be that the first shots fired in any sort of combat over these islands will involve the growing number of maritime unmanned aerial vehicles (UAVs) flying in the area. Late in 2013, China established an air defense identification zone (ADIZ) over portions of the East China Sea. Further south, near the disputed Spratlys, similar issues exist. To help enforce its claims over these areas, China is building a string of 11 drone bases along its coast by 2015.

China has operated what is likely a variant of the S-100 rotary wing UAV off PLA(N) ships. China's Coast Guard, which is really the PRC's first line of defense in the islands kerfuffle (or aggression, depending on one's perspective), recently ordered the APID 60 UAS for shipboard use.

In its recent report on China's naval capabilities, the U.S. Office of Naval Intelligence specifically cites UAVs as one of China's most valuable intelligence assets:
The PLA(N) will probably emerge as one of China’s most prolific UAV users, employing UAVs to supplement manned ISR aircraft as well as to aid targeting for land-, ship-, and other air-launched weapons systems. UAVs will probably become one of the PLA(N)’s most valuable ISR assets. They are ideally suited for this mission set because of their long loiter time, slow cruising speed, and ability to provide near real-time information through the use of a variety of onboard sensors. In the near term, the PLA(N) may use strategic UAVs such as the BZK-005 or the Soaring Dragon to monitor the surrounding maritime environment. 
Probable Chinese UAV flight pattern - 2013
China may also be flying the Haiyao-1 over disputed islands.  In September 2013, the Japanese scrambled fighter aircraft over the East China Sea in response to what was likely a Chinese drone incursion.

Other navies in the Pacific region are building up their unmanned surveillance inventories. Japan announced the acquisition of a still unknown type of surveillance drone to operate around its island chains. Reportedly the Japanese Maritime Self-Defense Force is considering acquisition of a fleet of RQ-21 "Blackjack" small tactical UAVs to operate from its destroyers. Australia has expressed an interest in buying up to seven high-altitude Triton UAVs and associated equipment for $2.5 billion. The United States currently bases RQ-4 Global Hawks in the region and eventually will station Tritons at Guam's Andersen Air Force Base. Even Taiwan, which calls the disputed islands the Diaoyutai, has begun to experiment with military UAVs. Although none of the above-mentioned UAVs are armed, they could potentially be used for offensive purposes at some point.

The necessity to counter the increasing number of drones in the region is not lost on the region's powers. The Chinese announced that they have developed a laser weapon with the ability to knock a UAV down at ranges out to 2 kilometers within five seconds. Paul Scharre has discussed the implications of the drone-on-drone warfare that could result from all of these aircraft operating together. He asks, if a drone is shot down, does that constitute an act of war? If history is any indication, a skirmish involving drones will likely not escalate into a more widespread conflict. Throughout the Cold War, the Soviet Union made a number of attempts -- some successful -- to down U.S. reconnaissance aircraft. Although these incidents resulted in diplomatic rows, they did not escalate militarily. More recently, in 2001, the PRC forced a U.S. EP-3 reconnaissance plane down on Hainan Island and temporarily "detained" the aircraft's crew.


Naval analyst Jon Solomon also weighed in on the escalatory nature of unmanned aircraft. "I would submit that if war gaming and historical case study analysis find that the crisis stability risks of attacks against unmanned scouts would be tolerable, and if the resulting legitimization of equivalent attacks against U.S. unmanned systems would be acceptable, then it might be worthwhile for American diplomacy to advance unmanned scout neutralization (or destruction if the scout is outside the opponent’s internationally-recognized sovereign boundaries) as an international norm."

In the end, the decision to escalate a skirmish or not is generally made by pilots on the scene or their nearby commanders.  In a world in which drones are often controlled remotely from operators thousands of miles away, will the same calculus hold true?