Saturday, September 26, 2020

Some thoughts on the law of unintended consequences (2020)

The story may be apocryphal. Se non è vero, è ben trovato – if it is not true, it is well conceived. The British colonial government in India was concerned about venomous cobras. It therefore decided to offer locals a bounty for every cobra they caught. Initially, the number of cobras fell. But then business-minded locals began to breed cobras in order to continue collecting rewards. As soon as the British government found out what was going on, it discontinued the policy. With cobras no longer commanding a reward, the locals released them, and the cobra problem ended up worse than before. The policy intervention inadvertently modified incentives and behaviour, leading to a perverse outcome. Unintended consequences need not be perverse. They can also be adverse, imposing costs unrelated to the original objective or intended outcome. And they can be beneficial (e.g. the invisible hand).

Robert Merton (1936) termed this phenomenon the “law of unintended consequences” (or, more precisely, “The Unanticipated Consequences of Purposive Social Action”). Other examples include a seatbelt mandate that leads to increased traffic-related deaths: drivers feel safer, leading them to drive more recklessly (so-called risk compensation). The law of unintended consequences often pops up in cases of quantification-based management (Muller 2018). What is measured and what is meant to be targeted are rarely fully congruent. Worse, the focus on one (quantifiable) target may lead to behaviour being modified to meet the quantitative rather than the intended target. This is also known as Goodhart’s law: when a measure becomes a target, it ceases to be a good measure.

Merton attributed the failure to anticipate unintended consequences to a variety of causes: (1) ignorance, (2) error of analysis, (3) immediate interests overriding long-term interests, (4) basic values prohibiting action that would prevent adverse longer-term outcomes and (5) self-defeating prophecies (where the prediction of a problem prompts action that prevents the problem from occurring). More broadly, unintended consequences may be unavoidable in the context of complex systems, where outcomes cannot be controlled (“nature”). Unintended consequences may also be due to errors of analysis. A failure to evaluate counterfactuals, to take into account the targeted party’s perceptions and interests and/or to anticipate second- and third-round effects can lead to unintended, albeit in principle avoidable, consequences (“epistemic”). Relatedly, unintended consequences may also be due to psychological biases, including self-deception and stupidity (“psychology”). Again, correcting for such biases should in principle help avoid unintended consequences or at least make them predictable.


Normal accident theory, for instance, postulates that in complex systems characterised by non-linearity and tight coupling, accidents and mistakes are unavoidable (Perrow 1999). This view has not gone unchallenged. That said, complex systems do exhibit behaviour that is difficult, even impossible, to predict with any degree of probability. While not a magic fix, a sensitivity to initial conditions and an acknowledgement of the existence of non-linearities may make decision-makers more aware of the potential pitfalls of certain courses of action. By limiting over-confidence, it may also help limit psychological sources of mistakes. If the system is truly complex, however, even all of this will fail to help decision-makers avoid unintended consequences.

World War I may be attributed to complexity, non-linearities and tight coupling. The action of one individual (a Serbian nationalist assassinating the Austro-Hungarian crown prince and his wife) triggered a string of events (tightly coupled actions) that ultimately led to the death of 20 million people. Relatedly, the so-called “security dilemma” may be thought of in systemic, though perhaps not complexity, terms. Here the consequences of action are unintended, but relatively easy to anticipate. If country A increases its military power, opposing country B will do so, too, in turn leading country A to increase its military power further.

“Blowback” is an example of maybe ignorance, maybe immediate interests overriding longer-term considerations causing unintended and perhaps unanticipated consequences. The US decision to support fundamentalist militants fighting the Soviet occupation of Afghanistan reflected a desire to weaken the USSR. However, when the occupation ended, the militants set their sights on destabilising US-allied governments in the region (Johnson). Maybe policymakers did not anticipate such a turn of events. Maybe they favoured short-term expediency over longer-term and admittedly somewhat uncertain adverse consequences.

The increase in China-US tensions seems to have led to greater animosity and a greater sense of vulnerability on both sides (Jaeger 2020). This much, while perhaps unintended, could have been anticipated. But Sino-US tensions have also arguably spilled over into Sino-Indian relations and led to a significant increase in bilateral frictions. While perhaps inevitable, this is leading to a (tighter) coupling between Sino-US and Sino-Indian relations, thereby giving rise to the geo-political construct of the Indo-Pacific. One can only speculate why Beijing is willing to adopt a more assertive stance vis-à-vis India (if this is indeed what is happening), given that an unintended, but easy-to-anticipate, consequence is a closer US-Indian strategic partnership.

US support for China’s international economic integration led to the emergence of China as a strategic competitor. While China’s economic development was perhaps anticipated, its speed and impact almost certainly were not. China’s emergence as a geopolitical rival was neither intended nor anticipated. The anticipation (or hope?) that China might become a responsible stakeholder may be interpreted as stupidity, as an error of analysis or as an example of immediate interests overriding longer-term strategic considerations. The latter is perhaps more applicable to Nixon’s opening to China than to the post-Cold War US policy of encouraging China’s integration into the global economy.

The Truman administration considered launching a pre-emptive nuclear war against the USSR. An unintended, but foreseeable, consequence of not launching such an attack was the emergence of the USSR as a nuclear power. This is a clear-cut example of basic values prohibiting a policy and thereby leading to an unintended, but anticipated and, from the US point of view, adverse outcome – at least in material and power terms.

Or take the Munich Agreement. Prime Minister Chamberlain’s decision to cede the Sudeten areas to Germany in 1938 averted a military conflict. Only detailed historical research can reveal why Britain decided not to stand its ground. The decision is often chalked up to stupidity. After all, it improved Germany’s geo-strategic position tremendously. Then again, the decision did buy Britain time to ramp up its armament production and – what would prove absolutely vital – strengthen its air force and air defence system (Kennedy 1992). Maybe the Munich Agreement is an example of an unintended consequence, given Chamberlain’s belief that it would bring “peace for our time”. But it may also have been a deliberate, expedient decision to avoid war in the short term, made in full recognition that it would strengthen Germany considerably. In that case, the decision brought about consequences that were unintended but (largely) anticipated.

Unintended and unanticipated consequences can often be attributed to cognitive biases. The US neither anticipated nor intended its limited military intervention in Vietnam to transmute into a full-scale war, let alone one it would lose. The US had just defeated Nazi Germany and Imperial Japan; it was not a war-weary and economically weak France. The US may have felt it had no choice but to intervene given its belief in the domino theory. This, however, would only explain why the US got involved, not why it failed to anticipate the unintended consequence of defeat. The possibility of defeat was simply never seriously evaluated.

Pearl Harbor was the unintended and unanticipated consequence of a more hawkish US policy towards Japan. Tightening US economic sanctions and moving the Pacific Fleet to Pearl Harbor were not intended to provoke a Japanese attack on the US naval base. (Washington may have sought to provoke Japan into attacking US forces in the Philippines.) Here the unintended consequences were due neither to short-term expediency nor to self-deception, but may instead be attributed to cognitive biases and/or an inability to put oneself in the other party’s shoes. Ultimately, cognitive and group biases led to the US failure to anticipate that Japan, instead of being deterred by hawkish US policies, would respond by launching a pre-emptive strike against the US.

None of this is meant to suggest scholars and practitioners are invariably incapable of dealing with the “law of unintended consequences”. Think, for example, of the reluctance of Northern European EU member-states to agree to permanent financial transfers during the euro crisis a decade ago. The Northerners did – correctly, one may argue – anticipate moral hazard and its potentially negative longer-term consequences for euro area stability. Or maybe the domestic politics of some Northern European states simply did not allow them to agree to permanent financial transfers. Again, this is for historians to figure out.

It is extremely important to think about “purposive social action” (aka policies and policy decisions) in systemic terms and in terms of the law of unintended consequences. This comes more naturally to ecologists, biologists and engineers, and maybe to certain stripes of sociologists. In spite of concepts like systems and bureaucracies featuring in prominent International Relations theories, the concept of unintended consequences is somewhat under-appreciated, under-studied and, importantly, under-taught. Students of international politics and foreign policy practitioners would do well to familiarise themselves with complexity theory, unintended consequences and cognitive biases. This should allow scholars to provide better explanations and policymakers to make better decisions – or at least decisions whose consequences are less unintended and/or better anticipated.

Thursday, September 10, 2020

Dis/advantages of strategic dis/ambiguity (2020)

What is strategic ambiguity? Here are a few examples. The United States maintains a policy of strategic ambiguity with respect to Taiwan (Haass & Sacks 2020). Israel maintains a policy of strategic ambiguity with respect to its nuclear weapons status. Strategic ambiguity is a policy whereby a state leaves purposefully vague how it might respond to another state’s behaviour. This vagueness creates uncertainty in the mind of the party the policy is directed at: the targeted state is left purposefully uncertain about the other state’s policy.

Ambiguity makes it more difficult, perhaps impossible, to calculate risks by creating Knightian uncertainty. A policy of strategic ambiguity may leave ambiguous not just the probability but also the type of policy response, making it even harder for the other party to predict. Usually, a policy of strategic ambiguity aims to have a deterrent effect by making it difficult, perhaps impossible, to calculate the expected costs and benefits of an action. In the case of Taiwan, leaving open whether the US would come to the defence of Taiwan in the event of a Chinese attack is meant to deter a Chinese attack on the island. In the case of Israel, leaving open whether it possesses nuclear weapons is meant to deter an attack by its opponents.
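The deterrent logic can be illustrated with a toy expected-value sketch (a stylised illustration only; all payoff figures and probabilities below are invented, not drawn from any actual assessment). With an explicit commitment, the would-be attacker can plug a known retaliation probability into an expected-payoff calculation; under Knightian uncertainty there is no probability to plug in, and a cautious attacker may fall back on the worst case over the range of plausible responses:

```python
# Toy model of deterrence under ambiguity (all numbers are illustrative).

GAIN = 10.0   # attacker's payoff if the attack goes unanswered
LOSS = -50.0  # attacker's payoff if the defender retaliates

def expected_payoff(p_retaliation: float) -> float:
    """Expected payoff of attacking when the retaliation probability is known."""
    return p_retaliation * LOSS + (1 - p_retaliation) * GAIN

# Explicit commitment: the attacker can compute a point estimate.
# With p = 0.1 the attack still looks attractive in expectation.
print(expected_payoff(0.1))   # 0.1*-50 + 0.9*10 = 4.0 > 0 -> attack

# Strategic ambiguity: no probability is knowable (Knightian uncertainty).
# A cautious (maximin) attacker evaluates the worst case over a range of
# plausible retaliation probabilities and compares it to not attacking (0).
plausible = [0.05, 0.2, 0.5, 0.9]
worst_case = min(expected_payoff(p) for p in plausible)
print(worst_case)             # -44.0 < 0 -> a cautious attacker is deterred
```

The sketch makes the mechanism explicit: ambiguity deters not by raising the known probability of retaliation, but by removing the probability the attacker needs in order to calculate at all.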


But why would ambiguity be preferable to an ironclad commitment to pursue a certain course of action? For example, why is US ambiguity preferable to an outright guarantee to intervene on behalf of Taiwan in case of a Chinese attack? And what does Israel gain by leaving its nuclear status ambiguous? After all, the US has made less ambiguous defence commitments to other countries in Asia. And both Pakistan and India have been forthcoming about their status as nuclear powers. 

Explaining why a policy of strategic ambiguity comes about is different from evaluating its strategic benefits. Ambiguity is often necessary; without it, there may be no agreement in the first place (Iklé 1964). The Sino-US rapprochement would not have been possible had it not been for the ambiguous stance towards Taiwan. Had Washington maintained an ironclad Taiwan security guarantee, no agreement would have been reached. An ambiguous formulation was necessary to make the so-called third communiqué possible. Sometimes ambiguity may also be advantageous in terms of avoiding international criticism. This may have contributed to Israel’s decision to maintain a policy of strategic ambiguity. A state can then avoid the costs of dis-ambiguity while reaping most of its benefits. Israel’s opponents are unlikely to risk all-out war even if there is only a slight chance of Israel retaliating with nuclear weapons. (In fairness, Israel’s policy is called ambiguous, but it is its official policy rather than its de facto nuclear status that is ambiguous.)

What are the strategic benefits of strategic ambiguity relative to dis-ambiguity? Let’s start with some game theory. Threats and promises aim to alter the other party’s expectations of the issuing party’s future actions. Promises commit one to reward the other party, and threats commit one to punish it, in the event of pre-specified actions. In order for promises and threats to be effective, they need to be credible. There are different mechanisms to enhance the credibility of one’s commitments (e.g. the doomsday machine in Dr. Strangelove). This is particularly useful when a threat does not appear credible on its own. For example, the US threat to retaliate against the USSR in case of a Soviet conventional attack on Western Europe was thought to lack credibility. After all, the USSR could legitimately doubt that the US would be willing to provoke a devastating attack on its own territory in an attempt to defend its European allies by attacking the USSR. Thomas Schelling famously introduced the notion of a “threat that leaves something to chance” (Schelling 1960). Such a threat is credibility-enhancing. Similarly, the doomsday machine increases the credibility of nuclear deterrence. An unambiguous, irreversible commitment to pursue a course of action conditional on the action of the other party is precisely what enhances credibility. This has its obvious problems.
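Schelling’s point can be sketched with a toy threshold calculation (all numbers invented for illustration). Even if deliberate retaliation is not credible, an autonomous escalation risk q that neither side fully controls can deter, once it pushes the attacker’s expected payoff below that of doing nothing (normalised to zero here):

```python
# Toy sketch of a "threat that leaves something to chance" (illustrative numbers).

GAIN = 10.0   # attacker's payoff if nothing goes wrong
LOSS = -50.0  # attacker's payoff if uncontrolled escalation occurs

def attack_payoff(q: float) -> float:
    """Expected payoff of attacking when escalation occurs with chance q."""
    return q * LOSS + (1 - q) * GAIN

# Deterrence condition: attack_payoff(q) < 0  <=>  q > GAIN / (GAIN - LOSS)
threshold = GAIN / (GAIN - LOSS)
print(threshold)            # ~0.167: even a one-in-six escalation risk deters

print(attack_payoff(0.1))   # 4.0 -> below the threshold, not yet deterred
print(attack_payoff(0.2))   # -2.0 -> above the threshold, deterred
```

The design point is that q need not be a credible, chosen act of retaliation; it is a residual risk inherent in the situation, which is why it can deter even when a deliberate threat would be doubted.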

Dis-ambiguity may create a stronger deterrent effect than ambiguity, provided the relevant commitments are credible. But it also forces the party pursuing a policy of dis-ambiguity to make good on its threat, lest it lose credibility. If the stakes as well as the risks of misperception and miscalculation are high, this is a policy a state may not want to adopt. This is of course one of Schelling’s major insights: credibility can be increased by reducing flexibility.

A corollary is that ambiguity makes it more difficult for an opponent to undermine the ambiguous party’s credibility. At the same time, it affords the opponent an opportunity to adopt salami (or cabbage) tactics that make it more challenging for the deterring party to respond. If the lines are not explicit, the other party may probe. The probed party has less of a credibility problem thanks to its lack of explicit commitments. But if it maintains a general rather than a specific commitment, it risks a weakening of its position if it fails to push back. Commitments, credibility and flexibility are closely inter-linked.

By limiting the role of credibility, ambiguity provides flexibility at the cost of reduced deterrence. Nonetheless, leaving one’s opponent guessing can be a sufficiently powerful tool to put in place a sufficient level of deterrence, even if explicit deterrence is more powerful. The problem is that a policy of ambiguity may be perceived as an unwillingness to enter into commitments and therefore as strategic weakness. Much depends on the degree to which the opponent is risk-averse. If very risk-averse, ambiguity will probably work. If adventurous, ambiguity may help bring about the very conflict that was meant to be avoided. This is where psychology prevails over rationality. Sherlock and Mycroft Holmes understood this difference.
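The role of risk attitude can be made concrete in the same stylised fashion (payoffs and the range of plausible retaliation probabilities are invented for illustration): under ambiguity, an opponent who evaluates the worst case (maximin) is deterred, while an adventurous opponent who evaluates the best case (maximax) is not:

```python
# Toy illustration: risk attitude determines whether ambiguity deters.
# All numbers are invented; not attacking yields a payoff of 0.

GAIN, LOSS = 10.0, -50.0             # attacker payoff: success vs retaliation
plausible_p = [0.05, 0.2, 0.5, 0.9]  # plausible retaliation probabilities

def payoff(p: float) -> float:
    return p * LOSS + (1 - p) * GAIN

worst = min(payoff(p) for p in plausible_p)  # risk-averse (maximin) evaluation
best = max(payoff(p) for p in plausible_p)   # adventurous (maximax) evaluation

# The maximin opponent sees -44.0 (< 0) and is deterred; the maximax
# opponent sees 7.0 (> 0) and attacks: ambiguity fails to deter.
print(worst, best)
```

The same ambiguous policy thus produces opposite behaviour depending solely on how the opponent resolves the uncertainty, which is the sense in which psychology prevails over rationality.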

The questions of ambiguity, credibility and flexibility are relevant in the context of the emerging Sino-US antagonism. China and the US look like two freight trains inexorably moving towards each other (Allison 2017). Both sides have laid down some red lines (e.g. China vis-à-vis Taiwan independence; the US vis-à-vis its commitment to come to the defence of Taiwan). With regard to the more immediate friction in the East and South China Sea, however, the US seems to have avoided (or failed at) laying down red lines. Both China and the US have taken to ignoring or at best protesting the other’s actions (e.g. island building vs freedom-of-navigation operations). Neither side has drawn an (explicit) red line, and both continue to pursue a policy of ambiguity. While this may help avoid confrontation, it also allows for a further deterioration of the situation, in the sense that both sides will continue to push their policies absent unambiguous and credible red lines laid down by the other side. Ambiguity offers both sides a degree of tactical flexibility. Yet it also fails to stabilise the situation by drawing unambiguous red lines that might lead the driver to bring the freight train to a stop rather than steer it towards collision.

This is a highly reductionist account of emerging Sino-US competition. It nonetheless offers insights worth exploring further. After all, was it not the existence of – for the most part – red lines that contributed to the relative stability of Cold War superpower competition? Food for thought.