Thursday, January 30, 2020

Consilience, extraneous models & international politics (2020)

E.O. Wilson (1998) popularized the idea of consilience, that is, the unity of science. Originally proposed by William Whewell, consilience seeks to unify knowledge “by the linking of facts and fact-based theories across disciplines to create a common groundwork of explanation”. Wilson believes that the unity of knowledge will be achieved through the application of the methods used in the natural sciences to the social sciences. Consilience is envisioned to overcome the traditional Diltheyan separation of Natur- and Geisteswissenschaft. Wilson’s own theory of eusociality and his intellectual ventures into sociobiology are examples of this unificatory approach to human knowledge.

It would be unfair to say that the social sciences primarily provide idiosyncratic, historically oriented accounts of unique events, while the natural sciences provide nomothetic explanations of phenomena that are in principle replicable. History, the Geisteswissenschaft par excellence, tends to favour an interpretive approach to, and idiosyncratic accounts of, past events. It often focuses on the subjective beliefs and psychology of great men and it often adopts a narrative approach relying on tropes (Carr 1961; Collingwood 1946; White 1987). This in itself does not make the writing of history an illegitimate intellectual pursuit (Evans 1997). History as a discipline is of course quite varied in terms of its methods and analytical foci. History can be and is written from a macro, a comparative and even a quantitative perspective (e.g. the École des Annales). Macro history not only focuses on large, impersonal structures; it is also closer to the idea of consilience, as it draws on insights, facts, models and theories from other scientific disciplines. The account offered by Jared Diamond (1997) of how and why the West conquered the world is a case in point. Applying models and theories used in one field in another field of scientific enquiry can be intellectually and epistemically fruitful.

But what exactly are models and theories? The term model is often used very loosely and is seldom defined with much precision (Achinstein 1965). First, model can refer to a metaphor. The metaphor can be of a mechanical or biological nature. Eisenhower’s domino theory would be a case in point. The relationship between the entities posited in the model (dominoes) and the real-life entities it seeks to represent (states) is very loose, as is the posited mechanism (dominoes falling). Second, model can refer to an analogy. Analogies, compared to the vaguer model-as-metaphor, postulate correspondences in need of empirical support. States in the international system behave like firms in a market. States, like firms, strive to survive competition. The correspondence between the posited entity and the real-life entity, including the underlying processes, is tighter than in the case of a metaphor. Third, model can also refer to a carefully constructed system that is based on a small set of parsimonious and simple premises. The model may or may not represent the relevant process, but it postulates an empirically verifiable or falsifiable relationship between different variables (e.g. money supply and inflation). Observable implications can be derived from the model. And fourth, model can refer to a theory. A theory can be thought of as a model of a higher degree of generality. It also has a deductive structure. But empirical assumptions need to be added in order to arrive at a specific interpretation and to derive empirical implications. Game theory, for example, is such a general theory. Specifying the empirical assumptions about players’ payoff structures determines what kind of game is being played. In other words, the link between the elements contained in a metaphor or analogy and the world is far looser, far less specific and more difficult to operationalize than in the case of a model or a theory.
This lack of specificity makes it easier to abuse metaphors.

A metaphor is best thought of as a half-way house between a model and a simple image. The domino theory is a metaphor of a ‘mechanical’ process, but it is also an image. By comparison, ‘iron curtain’ or ‘cold war’ is an image that represents no mechanism or process and postulates no relationship. The image ‘cold war’ does suggest non-military conflict, but this is as far as the image goes. Models-as-metaphors and models-as-analogies are typically richer and more specific in terms of their implications. This lack of specificity explains why especially images, but also metaphors and analogies, are often (ab)used to ‘sell’ policies. Political marketing and propaganda, after all, seek to appeal to simplicity, make use of repetition and exploit well-known cognitive biases as well as the affective meaning of words and of the images words evoke (e.g. ‘axis of evil’, as in the WWII axis powers and metaphysical evil). In particular, the lack of specificity makes it more difficult to ‘falsify’ images, not least because they often operate at an affective rather than analytical level. Framing is an important dimension of political mass communication/ propaganda (Wehling 2016).

What do models do? What should they do? Ideally, models describe, explain or interpret, and predict. Description requires the selection of important or relevant analytic and/ or explanatory aspects of the phenomena under investigation. Explanation requires a causal account, whether in counterfactual or probabilistic terms or in terms of a mechanism or a process, while interpretation typically seeks to understand a phenomenon by identifying the subjective reasons, meanings and/ or beliefs that account for it (aka thick description). Last but not least, models – or at least explanatory as opposed to interpretive models – should allow one to forecast future events, at least in principle. Generally speaking, explanatory and interpretive models value different epistemic goals differently (e.g. in-depth understanding vs accurate prediction) and they often ask different questions: why did statesman X decide to attack country Y under circumstances Z, versus why do states decide to attack other states in general or given circumstances Z?




Consilience suggests that it can be insightful to apply ‘models’ that have proven successful in one scientific field in another. Analogies, metaphors, models and theories from one field can generate interesting insights and new ways of approaching and explaining an issue in an unrelated field. That this can be politically controversial is illustrated by E.O. Wilson’s experience after he proposed to apply insights derived from studying ants to human societies. Somebody poured water over him at a conference. 

Game theory has been profitably applied in the realm of international politics as well as in a host of other fields such as evolutionary biology and economics. Game theory is concerned with the logic of interactive, rational behaviour and decision-making. The prisoner’s dilemma, for example, suggests that states will find it difficult to avoid arms races because they are worried about defection. As a matter of fact, the security dilemma, the most important assumption of Realist IR theory, is conceptually based on the logic characterizing the prisoner’s dilemma. Similarly, the game of chicken expounds the logic of state behaviour in a situation like the Cuban missile crisis. More generally, game theory has a great deal of interesting things to say about conflict and cooperation in international politics (Schelling 1966; Kaplan 1991). In the economic sphere, game theory offers similar insights into the logic that prevents the provision of public goods (Olson 1965) and causes environmental degradation and the over-exploitation of non-renewable resources (Hardin 1968), as well as an account of strategic stability in the nuclear realm (Schelling 1966). Game theory also provides an interesting account of how cooperation may emerge in the first place, including in the international state system (Axelrod 1989).
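The arms-race logic of the prisoner’s dilemma can be sketched in a few lines of Python. The payoff numbers below are purely illustrative; only their ordering matters:

```python
# Two states choose to ARM or DISARM. Payoffs are illustrative ordinal
# values: mutual disarmament beats mutual armament, but each state is
# tempted to arm while the other disarms.
ARM, DISARM = "arm", "disarm"

# payoffs[(my_move, their_move)] -> my payoff
payoffs = {
    (DISARM, DISARM): 3,  # mutual restraint
    (ARM, DISARM): 4,     # unilateral advantage (temptation)
    (DISARM, ARM): 1,     # exploited (sucker's payoff)
    (ARM, ARM): 2,        # arms race
}

def best_response(their_move):
    """Return the move that maximizes my payoff given the opponent's move."""
    return max((ARM, DISARM), key=lambda my: payoffs[(my, their_move)])

# Arming is the best response whatever the other state does: the dominant
# strategy drives both sides into the arms race outcome (2, 2), even
# though mutual restraint (3, 3) would leave both better off.
assert best_response(DISARM) == ARM
assert best_response(ARM) == ARM
```

The security dilemma follows the same structure: each state’s individually rational move produces a collectively worse outcome.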

Cognitive psychology may help explain seemingly irrational foreign policy decisions (Jaeger 2020). Research into cognitive biases has found that individuals do not always act in accordance with what standard rationality prescribes (Kahneman 2011). This has led to the behavioural revolution in economics and finance. The sunk cost effect may explain why armed conflicts last longer than they rationally should: the side (or sides) that have incurred costs irrationally assign greater importance to the losses already incurred (retrospective) than to the prospective losses and gains on which a rational actor would focus. Naturally, if armed conflict is viewed as just one iteration of a sequential game, and reputational gains (demonstrating a willingness to incur losses) strengthen a state’s future power and bargaining position, then it may make sense to incur additional prospective losses on top of retrospective ones, even in an unwinnable war.

Combining game theory and cognitive biases may help explain otherwise puzzling foreign policy decisions. Experiments have demonstrated that people do not act rationally in ultimatum and dictator games, suggesting that altruistic and cooperative behaviour is to a certain degree hardwired. It would clearly be going too far to suggest that Germany’s failure to exploit its military advantage at Dunkirk in 1940 was due to some residual altruism or a worry about being seen as uncooperative. (Interestingly, historians continue to debate what exactly led to the decision to halt the final attack on the British Expeditionary Force.) However, cognitive psychology rather than cool calculation may on occasion help explain the failure of states to act in accordance with the standards of rationality. Marrying game theory and cognitive psychology can be a rich source of new hypotheses and potential explanations. Once again, failing to impose a harsh peace treaty in order to eliminate future military threats for good may be due to rational calculation or it may be due to cognitive biases (Vienna 1815 vs Versailles 1919).

The theory of evolution posits variation, inheritance, differential survival/ reproduction and adaptation as its fundamental features. States may not reproduce in a strict sense, but they certainly face selective and competitive pressures. Being a state without armed forces may have worked for Costa Rica in the Central American context. It did not work for Poland in the European state system of the late 18th century, or for the smaller German states in the late 19th century. Maybe the microeconomic analogy/ model would suffice to capture the competition/ survival aspects of international politics (Waltz 1979). However, neo-realism has little to say about variation and adaptation. Admittedly, adaptation in the case of states in the international system is probably better seen as Lamarckian rather than Darwinian. However, it is clear that domestic reform that enhances the international power of a state can be interpreted as an adaptation. Prussia’s relatively greater economic efficiency and investment in its military capabilities were an adaptation of sorts that allowed Prussia to survive and later thrive in the European state system. The Prussian reforms following the defeat against Napoleon were similarly important. Absent these reforms, Prussia, as an initially small and resource-poor state, might have been absorbed by Austria. 18th-century Poland failed to adapt and disappeared from the European map for more than a century (Jaeger 2019). Granted, the theory of Darwinian evolution is more complex and does not translate one-to-one to international politics. But the interplay of domestic change and international pressure offers a richer and less reductionist account of international politics than Waltz’s more parsimonious approach.

Geopolitics – combining physical and sometimes human geography and international politics – fell somewhat out of favour following WWII, mainly because it was associated with the militarily defeated and morally bankrupt losers of the war. Nonetheless, geography has always been indisputably relevant to international politics, and neither Haushofer nor Ratzel nor Lebensraum rhetoric has changed this fact. Often seen as a classic left-wing trope, resource competition remains as relevant as ever in international politics, as it does within states (Diamond 2012). To the extent that geography shapes and filters the international pressures a state faces, it can also help explain the evolution of domestic institutions and culture. Prussia may well have been “an army with a state” (Voltaire), and in part this is explained by Prussia’s precarious geographic position. The reason why neither the US nor the UK ever saw the emergence of an overbearing military bureaucracy and/ or a militaristic culture is to a significant extent due to their ability to rely on a navy rather than a mass army to ensure their territorial integrity – the military-industrial complex notwithstanding.

Path dependency is a model that is widely used in historical sociology. It posits that the scope of change is limited due to institutional and/ or cognitive factors (Pierson 2011; Mahoney & Rueschemeyer 2003). The underlying assumption is similar to the notion of mutation in evolutionary biology. Mutation does take place, and while it may not always be gradual, successful mutations tend to be limited in terms of magnitude. Similarly, economic development is usually considered to be path-dependent. Economies do not, and cannot, go from being very poor to hyper-developed in a decade. Economies move up a technological ladder, even if some do so remarkably quickly (e.g. Korea). Their development path is constrained. Similarly, a state’s history, culture and domestic-institutional characteristics tend to limit the paths available to it. Historical institutionalism and path dependency do allow for the possibility of radical change, but this is the exception rather than the rule.

The concept of the balance-of-power has long been central to the study of international politics. The meaning of the concept varies significantly, even more so than the meaning of model (Little 2007). In its simplest form, the metaphor (or analogy?) implies that an equilibrium is necessary to ensure the stability of the international state system. Stability does not imply the absence of war; it refers instead to the avoidance of a situation in which the emergence of a hegemon turns a competitive state system into a hierarchical one, often in the form of a formal (or informal) empire. In order to maintain an equilibrium, adjustments need to be made constantly. The state system will tend towards stability due to the self-interested behaviour of states. It is largely self-regulating, invoking concepts from cybernetics (Wiener 2018). The equilibrium concept is also widely used in classical economics, where exogenous shocks lead to changes in supply and demand, often via the price mechanism.

Less of a model and more of a method, Bayesian statistics was used successfully during WWII to crack the German Enigma code, accurately estimate German tank production (compared to intelligence estimates) and predict the location of German submarines in the Atlantic. If it is true that Stalin refused to believe that the USSR was about to be invaded in the spring of 1941, then a decision-maker with a good grasp of Bayesian probability ought to have changed his mind over the course of 1941, as each new piece of intelligence required updating the prior.
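The logic of such updating is simple enough to sketch. In the hypothetical calculation below, the prior and the likelihoods are invented for illustration; the point is only that repeated updating overwhelms even a sceptical prior:

```python
# Sequential Bayesian updating of the hypothesis "Germany will invade".
# The prior and likelihoods are hypothetical numbers, chosen only to
# illustrate how evidence accumulates.
def update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """One application of Bayes' rule: returns P(H | E)."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

belief = 0.05  # a sceptical prior: invasion thought very unlikely
# Assume each report (troop build-ups, defector warnings, overflights)
# is four times as likely if an invasion is coming than if it is not.
for _ in range(5):
    belief = update(belief, p_evidence_given_h=0.8, p_evidence_given_not_h=0.2)

print(round(belief, 3))  # 0.982 — five such reports overwhelm the scepticism
```

Each report multiplies the odds of the hypothesis by the likelihood ratio (here 0.8/0.2 = 4), which is why even a strong prior against invasion collapses quickly.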

Systems theory and especially the theory of complex systems can be applied to international politics in very interesting ways. Complex systems are characterized, among other things, by (1) non-linearity, (2) adaptation and (3) feedback loops. Non-linearities as well as tight coupling are characteristics of complex systems. Small mistakes or seemingly insignificant actions can rapidly cascade through the system due to a lack of redundancy and lead to catastrophic outcomes (Perrow 1984). The alliance system on the eve of WWI can be regarded as such an interconnected system with insufficient redundancy. What appeared to be a minor tussle between Austria-Hungary and Serbia (the butterfly) and a limited commitment (Germany’s blank cheque of unlimited support for Austria-Hungary) led to WWI, the death of millions of soldiers and civilians (Spanish flu), the fall of monarchies, the break-up of empires and so on. The notion of unintended consequences is not unique to complex systems, but it happens to be one of their key features. Although it is difficult to establish in retrospect with much certainty, it is quite possible that most major decision-makers wanted to avoid war. Admittedly, historians continue to disagree about who wanted what and when (Fischer 1967; Clark 2012).
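A toy threshold cascade illustrates the point. The alliance network below is stylized (both the ties and the mobilization rule are gross simplifications, not a historical model):

```python
# A minimal cascade sketch: the 1914 alliance network as a graph in which
# a state mobilizes as soon as any ally it is tightly coupled to has
# mobilized. The network and the trigger are stylized for illustration.
alliances = {
    "Austria-Hungary": ["Germany"],
    "Serbia": ["Russia"],
    "Russia": ["France", "Serbia"],
    "France": ["Russia", "Britain"],
    "Germany": ["Austria-Hungary"],
    "Britain": ["France"],
}

def cascade(initial):
    """Spread mobilization along alliance ties until no new state joins."""
    mobilized = set(initial)
    frontier = list(initial)
    while frontier:
        state = frontier.pop()
        for ally in alliances[state]:
            if ally not in mobilized:
                mobilized.add(ally)
                frontier.append(ally)
    return mobilized

# A local dispute between two states pulls in the entire system.
print(sorted(cascade(["Austria-Hungary", "Serbia"])))
```

In a tightly coupled network like this, there is no stable intermediate state: once the local tussle starts, every state in the graph ends up mobilized.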

Long story short: international politics is not just a question of the level of analysis (Singer 1960; Waltz 1959). It is not just a question of whether the international system determines state behaviour, whether individuals do, or whether it is individuals acting within the constraints imposed by the international system. Other models can be fruitfully brought to bear on various aspects of international politics. Intuition pumps (Dennett 2014), tools for thinking (Nisbett 2015) and many-model thinking (Page 2018) offer a variety of ‘models’, frameworks and tools to analyse, interpret and/ or explain international politics. From a pragmatic point of view, models extraneous to International Relations offer new ways of tackling interesting questions in international politics. The extent to which this offers better explanations or provides better predictions is an empirical issue. Ideally, what starts as an intuition (Dennett) is transformed into a metaphor or analogy and then into a logically cohesive model or an even more general theory. Consilience, even if it is never fully realized, is an epistemically exciting and worthwhile endeavour.

Friday, January 24, 2020

Cognitive biases and foreign policy decisions (2020)

In his classic The Essence of Decision (1971), Graham Allison proposed three models to explain foreign policy decisions: the rational actor model, the governmental politics model and the organizational process model. Allison used the last two models to explain foreign policy behaviour that deviated from the predictions of the rational actor model during the Cuban missile crisis. The rational actor model is, well, a model, and as such its assumptions do not necessarily have to be realistic. Its heuristic usefulness, one may argue, lies in the accuracy of its predictions (Friedman 1953). That said, microeconomics has incorporated more realistic assumptions into its models in the past few decades, including notions such as bounded rationality, satisficing, information-gathering costs, cognitive biases and so on. Not only has this made the models look more realistic, but it has arguably also helped improve their predictive accuracy. So has modelling micro behaviour in systemic and interactive terms (Schelling 1960; Schelling 1978; Waltz 1979).

Daniel Kahneman’s Thinking Fast and Slow (2011) has popularised the importance of cognitive biases. The focus on cognitive biases has led to the behavioural revolution in economics and finance. Cognitive biases have had less of an impact on foreign policy analysis (Mintz & DeRouen 2010). This is a little surprising given Allison’s insight that a state’s foreign policy actions often deviate substantially from the predictions of the rational actor model. As mentioned, Allison attributed these deviations to political and bureaucratic factors rather than systematic cognitive error. This does not mean that foreign policy decisions can be solely explained in terms of individual or social psychology. In fact, psychological explanations are often highly idiosyncratic (e.g. the Oedipal Bush Jr went to war against Iraq because Saddam Hussein had tried to kill Bush Sr.) and Karl Popper would certainly have dismissed such explanations as un-scientific. Governmental politics and organizational process models tell us a great deal about how the choices of ultimate decision-makers are constrained or influenced, and they suggest how what looks like a foreign policy decision may be no decision at all but instead just the actions of lower-level bureaucracies. Foreign policy decisions are rarely simply cognitive exercises. Nonetheless, the question to what extent empirically well-established cognitive biases influence foreign policy decisions is not asked often enough.

Granted, it can be difficult to establish an unambiguous empirical link between a cognitive bias and an actual decision, including foreign policy decisions. Whether cognitive biases were actually operative in a particular situation can only be established with the help of a detailed historical analysis. And even detailed historical analyses may fail to unambiguously establish that it was a cognitive bias “who did it”. After all, the reasoning and thought processes of decision-makers are often not known, and sometimes decision-makers themselves are not fully aware of the reasons (or biases) that made them take certain decisions. Here is JFK: “The essence of ultimate decision remains impenetrable to the observer – often, indeed, the decider himself” (quoted in Allison 1971). It is nonetheless worthwhile to explore to what extent cognitive biases can in principle account for foreign policy decisions, even if a water-tight empirical link is difficult to establish in practice.

The sunk cost fallacy refers to a situation where an actor continues a course of action because of the costs already incurred, even though it is not meeting his or her expectations. This may explain why armed conflicts last so long even when their military and political outcome can be predicted quite accurately. Why does the losing side in an armed conflict often fail to cut its losses early on rather than continue fighting to the end? Why did the US keep on fighting in Vietnam, even though it knew it was not going to win? Why did Germany fight for another three and a half years after Stalingrad? A staggering two-thirds of all German military personnel losses occurred in 1944-45. The sunk cost fallacy can account for what on the face of it looks like irrational behaviour. Decision-makers attempt to recover the costs incurred by an action, even though these retrospective costs should be, economically speaking, irrelevant to any decision today. The focus should be on prospective costs (and benefits). However, cutting one’s losses is a difficult decision, not just psychologically but also politically. (This is why investment banks’ trading desks put in place stop-loss triggers to prevent traders from desperately seeking to recover their losses.) Having expended human and material resources in a war while failing to gain any tangible advantage makes it difficult to “cut one’s losses”. Agreeing to re-establish the status quo ante became next to impossible as WWI dragged on, even though the military stalemate in 1915 or 1916 would have strongly counselled it. Cognitive biases offer a possible psychological explanation.
Of course, other explanations may also help account for why armed conflicts last longer than they rationally should, including the impact of a peace settlement on political leaders’ domestic standing, concern about a state’s international reputation in terms of its willingness to incur costs to defend allies (Kissinger 2003), or excessively optimistic assessments of the military situation and the prospects of winning the conflict by bureaucratic interests such as the military (McMaster 1998). The sunk cost effect may also explain why Japan, and especially the Japanese army, rejected US demands to withdraw from mainland China in 1940-41. The reluctance to give in to the US demands contained in the Hull note ultimately led Japan to attack Pearl Harbor in pursuit of a high-risk strategy that ended in total defeat (Utley 2005).
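The economic logic of ignoring sunk costs can be made explicit with a toy calculation (all numbers below are hypothetical, chosen only to illustrate the decision rule):

```python
# A sketch of why sunk costs should not matter: a toy choice between
# continuing a war and settling. All numbers are hypothetical.
sunk = 500          # losses already incurred (irrecoverable)
p_win = 0.2         # estimated probability of winning if fighting continues
gain_if_win = 300   # prospective gain from victory
loss_if_lose = 400  # further prospective losses from defeat
settle_cost = 50    # cost of settling now

# Expected values of the two prospective options:
continue_ev = p_win * gain_if_win - (1 - p_win) * loss_if_lose  # -260.0
settle_ev = -settle_cost                                        # -50

# The rational comparison ignores `sunk` entirely: settling dominates,
# however large the losses already incurred happen to be.
best = "settle" if settle_ev > continue_ev else "continue"
print(best, continue_ev, settle_ev)
```

The sunk cost fallacy amounts to letting the variable `sunk` leak into the comparison: the larger the losses already incurred, the stronger the felt pull to continue, even though the expected values are unchanged.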

The way issues are framed is important in terms of the willingness of actors to take a risk-seeking or risk-avoiding course of action. The framing effect may have influenced Japan’s decision to attack Pearl Harbor. Prospect theory suggests that an actor faced with a choice framed as a loss will tend to be more risk-seeking, and vice versa. Gains are valued less than losses. Framing US demands as a prospective loss and putting less value on the potential gains of improved relations with the US, Japan opted for a high-risk strategy. The Japanese leadership, including Yamamoto and Tojo, was acutely aware of just how high the risks were, but it seemed the only course of action that might allow Japan to avoid losses. Again, other explanatory approaches may help explain Japan’s decision. For instance, the Japanese military was wedded to a doctrine of ‘decisive battle’ à la Clausewitz and Mahan. It may have made strategic sense not to be seen as giving in to US pressure, for fear that showing weakness might lead the US to make further demands that would diminish Japan’s position further. But the framing effect may well have informed the decision to opt for an extremely high-risk strategy that ultimately led to disaster.
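Prospect theory’s value function makes the risk-seeking-in-losses logic concrete. The sketch below uses Kahneman and Tversky’s published parameter estimates (alpha = 0.88, lambda = 2.25), but the concession/war payoffs are hypothetical:

```python
# Prospect theory's value function, with Kahneman & Tversky's published
# parameter estimates (alpha = 0.88, lambda = 2.25). The concession/war
# payoffs below are hypothetical and purely illustrative.
def value(x, alpha=0.88, lam=2.25):
    """Losses loom larger than gains (lam > 1); the loss curve is convex."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# A certain concession (sure loss of 125) versus going to war: a gamble
# with a 50% chance of no loss and a 50% chance of losing 250.
# Both options have the same expected value of -125.
sure_loss = value(-125)
gamble = 0.5 * value(0) + 0.5 * value(-250)

# The convex loss curve makes the gamble subjectively less painful than
# the sure loss: framed as a choice between losses, actors turn risk-seeking.
print(round(sure_loss, 1), round(gamble, 1))  # -157.6 -145.0
```

With identical expected values, a standard expected-utility actor would be indifferent; the prospect-theoretic actor prefers the gamble, which is the pattern the Pearl Harbor decision is said to illustrate.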



The recency effect and the availability heuristic lead decision-makers to overestimate the relevance and salience of recent experience. People often take mental shortcuts and fail to evaluate in what relevant ways the problem they are dealing with differs from their recent experience or a recent event. This may help explain Britain’s catastrophic decision to invade Suez in 1956. Anthony Eden had resigned from the cabinet in protest over Chamberlain’s appeasement policy before WWII. Following a horrific war that might have been avoided had the Western powers not pursued appeasement, it is perhaps not surprising that Eden came to see Nasser as another Hitler who needed to be confronted right away. While research suggests that heuristics can be quite useful and surprisingly efficient in terms of making decisions (Gigerenzer 1999), the complex and interactive nature of foreign policy decision-making suggests that analogies are decidedly less useful in international affairs. At a minimum, a systematic evaluation of the relevant similarities and differences is crucial to evaluate to what extent analogies or heuristics can offer useful guidance (May & Neustadt 1986). Often analogies are used to ‘sell’ foreign policy to a domestic or international audience. Similarly, Truman used the Nazi Germany analogy to justify US intervention in Korea in 1950. The analogy may or may not have been used primarily to rally domestic support. As a guide to action, the Nazi Germany analogy was flawed in important respects: in the case of Korea, it did not take into consideration the possibility of Soviet or Chinese counter-intervention and it overestimated the degree to which North Korea represented a threat to the balance-of-power in East Asia. In the case of Suez, it failed to take into account the Eisenhower administration’s opposition to the UK-French-Israeli intervention.
But it is easy to see how Eden and perhaps Truman came to see Hitlers everywhere given the recency bias.

The overconfidence effect occurs when a decision-maker’s subjective confidence in his or her ability is greater than his or her actual ability or performance. This bias may have led Germany astray in its decision to attack the USSR in 1941. Leaving aside strategic calculus and ideological motivation, the German military’s confidence that it could defeat the USSR in a matter of a few months was misplaced. The conflict lasted four years and ended with Germany’s defeat, a defeat largely attributable to the Red Army rather than the Western allies. Napoleon’s defeat at the hands of Russia should have counselled caution. Perhaps it did. And perhaps Germany had good reason to believe that the USSR would be defeated quickly. The Soviet army had lost a large number of senior officers during the Stalinist purges and had performed poorly in the 1939-40 Winter War against Finland. However, it is not difficult to see why the German military suffered from overconfidence. Superior Blitzkrieg tactics had helped it defeat Poland, Norway, Denmark, Belgium, Holland, France and Greece, not to mention the British in Crete. The standard account of Germany’s defeat often invokes strategic miscalculation. At the tactical-operational level, however, it was faulty intelligence, particularly about the number of Soviet divisions and the USSR’s ability to replenish them, that made a German victory next to impossible. Germany may also have overestimated its own capabilities due to the overconfidence effect. Napoleon’s decision to invade Russia in 1812 may similarly have been affected by the overconfidence bias. Both the grande armée and the Wehrmacht had won victory after victory – often against the odds. The overconfidence effect made it more difficult to properly assess the risks of invading Russia. To the extent that underestimating an opponent’s capabilities is the flipside of overestimating one’s own, both France and Germany fell afoul of the overconfidence effect.

The superiority illusion, elevated to a social-psychological level, may explain why Germany underestimated the USSR and the US underestimated Japan in military terms. Nazi discourse of racial superiority and Slavic inferiority led German military intelligence to underestimate Soviet strength. Similarly, widespread US racist views of the Japanese, whose alleged short-sightedness supposedly made it impossible for Japan to have first-rate fighter pilots, led the US military to underestimate Japan’s capabilities (Dower 1986). Not many analysts believed that Japan would be able to pull off a carrier attack on Pearl Harbor, and in part this can be attributed to such racial stereotypes. It is difficult to establish to what extent key decision-makers themselves subscribed to such views and to what extent they regarded them as useful tools to rally political support for their policies. The superiority illusion may lead decision-makers astray not just when it comes to dealing with adversaries. The bias may also help explain why the US was confident that it could win the Vietnam war where France had failed. (Admittedly, the US was substantially more powerful than France in the 1950s.) Similarly, the US in 2001 must have felt immensely superior to the USSR of 1980, and this led US policymakers to believe that they could succeed in Afghanistan where the USSR had failed a little more than a decade earlier.

The importance of groupthink in foreign policy decision-making was established by the classic work of Irving Janis (1972) on the failure of the Bay of Pigs invasion and need not be recounted in detail. The catastrophic decision to go ahead with the CIA-sponsored Bay of Pigs invasion can be attributed to groupthink. Not only were the key decision-makers a homogeneous crowd; young and inexperienced administration officials also deferred to experienced Eisenhower administration holdovers and failed to question key details of the operational plan. Groups can be highly effective in terms of producing high-quality decisions provided certain conditions are met, including diversity, independence, decentralization, aggregation and trust (Surowiecki 2004). Luckily, JFK learnt his lesson and avoided groupthink during the Cuban missile crisis.




In the case of confirmation bias and cognitive dissonance, a decision-maker disregards evidence that tends to disconfirm the original hypothesis. Many historical accounts depict Stalin as refusing to believe that a German attack would take place in 1941, all the evidence to the contrary notwithstanding. Had Germany defeated the USSR in late 1941, this might have gone down as the most egregious and far-reaching example of cognitive dissonance in history. Or it might not, for others have argued that the USSR, including Stalin, was fully aware that an attack was imminent. After all, three million soldiers – even along a 2,900 km long border that runs from the Black Sea to the Barents Sea – are difficult to hide. On this account, the USSR did not want to give Germany any pretext for an invasion, in order to maintain the moral high ground (Weber 2019). During armed conflicts, military leaders often find it difficult to overcome cognitive dissonance if the incoming information suggests that the conflict is unwinnable. This may apply to Hitler in 1945 as much as to Westmoreland in 1968.

Attributing another agent’s behavior to their ‘nature’ (whatever that may be), but one’s own behavior to situational factors, is called the fundamental attribution error. While others launch wars of choice because this is simply their nature (the Iraqi invasion of Kuwait in 1990), one is forced to wage war due to situational factors (the US invasion of Iraq in 2003). (From the Iraqi point of view, it would no doubt be the other way around.) Given the existence of the Schlieffen plan and the prospect of a two-front war, the German general staff would have been reluctant to call WWI a war of choice following Russia’s partial mobilisation, while others of course believed that this is precisely what it was. More generally, Prussia’s and later Germany’s aggressive foreign policy backed up by military might was attributed to culture or nature rather than geographic and geopolitical position. A small, vulnerable, resource-constrained state (Prussia), or a larger but still vulnerable and resource-constrained one (Imperial Germany), faced with actual or perceived ‘encirclement’ will tend to rely on a strong military and pre-emptive military strategies (Jaeger 2019). If it does not, it may risk ending up like Poland at the end of the 18th century. Similarly, George Kennan’s famous article in Foreign Affairs (1947) attributed Soviet behavior to Russia’s aggressive nature rather than the USSR’s sense of vulnerability following two massive invasions (Napoleon, Hitler) and a Western intervention in the Russian civil war in support of the Whites following WWI. The question of what determines behaviour, agency or structure, is an old one and interesting answers have been proposed (Giddens 1984). Psychologically, overemphasizing dispositional and underemphasizing situational factors may lead to adverse outcomes that could have been avoided. Maybe Japan would have pursued a less aggressive foreign policy in the 1930s had it been possible to address its security concerns. If one views Japan as hopelessly militaristic and aggressive, such a potential solution is precluded right from the start.

Cognitive biases may not offer a complete explanation of catastrophic foreign policy decisions. Nonetheless, foreign policy makers would be well-advised to be aware of the nefarious effects that cognitive biases (and analogies) may have on their decisions. They should use analogies and heuristics cautiously and avoid applying them one-for-one. They should put in place a process and structure that helps mitigate cognitive biases and avoid groupthink (as JFK did during the Cuban missile crisis). They should view their decisions in an interactive and systemic context and be aware of possible unintended consequences. And they should try to control the lower-level bureaucracy and monitor its actions in terms of the signals those actions might send to the other party. An awareness of cognitive biases, historical decision-making mistakes and potential traps may help improve the quality of decisions. It is to be hoped that the increased popularity and awareness of cognitive biases will help inform the decisions taken by foreign policymakers.

Thursday, January 2, 2020

European banking union - a marathon, not a sprint (2020)

The euro area was originally set up as a ‘decentralized’ regime in which countries would share a common currency but remain responsible for their own public finances and banking systems. While fiscal and debt limits were put in place (Maastricht criteria, Stability and Growth Pact), they turned out to be difficult to enforce. Financial ‘decentralisation’ was legally enshrined in the famous no-bail-out clause. However, with the onset of the euro area debt crisis, the creditors came to realise that they could not ignore economic and financial instability in other member-countries, if only because instability in even a small economy risked setting off a euro-area-wide financial crisis. The debtor countries, faced with a choice between a disorderly default and euro exit, also came to see the wisdom of making the euro area more resilient. While both debtors and creditors agree that further institutional reform is necessary to prevent future crises, they have been at loggerheads over the details of the various reform proposals and, more specifically, over the distribution of the potential economic and financial costs and risks attached to them.

Institutional reforms in the wake of the euro area crisis focused on stricter government fiscal and debt limits, the establishment of a bail-out mechanism and the setting-up of a bank resolution regime to limit the risks of future systemic financial crises. Weaknesses in the institutional armour remain: limits on national fiscal policies are difficult to enforce; the financial rescue fund is too small to deal with a sovereign crisis in a larger euro area member-state; the lack of a common deposit insurance scheme leaves the euro area unable to address the risk of euro-area-wide banking sector instability; and so on.

Three basic risk-sharing regimes can be distinguished. First, conditional lending allows for financial resources to be made available in exchange for macroeconomic adjustment and policy reform. The lenders will get repaid in full, unless the borrower defaults or requires debt relief. Second, an insurance scheme requires all member-states (or banks) to pay into a fund and the fund then distributes financial resources based on a pre-determined rule or criterion. In this regime, it is possible for some countries (or banks) to emerge as consistent net contributors and others as consistent net recipients of funds. Limiting access to the fund would limit the financial liabilities of the net contributors, of course, but unless some sort of ex-post conditionality is attached to the use of fund resources, there is indeed a risk of the scheme turning into what Germans call a ‘Transferunion’. Third, there are debt mutualisation and financial guarantee schemes, where member-states jointly and/or severally guarantee other member-states’ liabilities or the liabilities of their banking sectors. Of course, guarantees can be limited size-wise. However, all other things equal, the risk of moral hazard is greatest in this scheme.

In all three risk-sharing schemes, the size of financial liabilities can be limited ex-ante. The problem with limiting the available financial resources is that it might make a crisis response less credible. In other words, creditors’ concerns about limiting their financial liabilities due to moral hazard limit the ultimate effectiveness of all the respective schemes. Financially virtuous countries prefer ex-ante control of economic and banking sector policies in order to rein in risk-taking and moral hazard and to limit the size of future contingent liabilities. By contrast, financially less virtuous countries will prefer regimes that offer financial support that is less limited size-wise and comprises greater risk-sharing, including transfers. While both creditors and debtors have similar objectives, that is, the prevention of future financial crises, they not surprisingly disagree over the distribution of the financial risks and economic costs required to realise this objective. Similarly, the political conflict over the present banking union reform is about the potential distribution of risks in pursuit of preventing or ring-fencing future crises.

On the sovereign risk side, the euro area system today provides for a combination of constraints on national fiscal policy and public finances, on the one hand, and conditional lending to prevent or deal with sovereign crises, on the other. By comparison, national banking sectors continue to be backstopped by their governments and by governments’ access to the bail-out fund, as well as by the single (common) resolution mechanism and fund in case banks need to be wound down. Institutionally, a euro-area-wide deposit insurance scheme is the piece of the institutional architecture still missing to make the euro area more resilient in the face of national sovereign or banking crises. A common deposit insurance mechanism (the second approach) may lead to debt mutualisation through the proverbial backdoor. So does the single resolution fund, of course. If a country is prone to banking crises or banking failures, it will tend to receive net financial resource transfers, even if the transfer is funded by banks (and especially banks located in creditor countries) rather than taxpayers. Given the size of the potential liabilities involved in guaranteeing deposits, the question becomes: who backstops the backstop? Until the deposit insurance fund is large enough, it will necessarily have to be supported by governments in order to serve as a credible instrument of crisis prevention.

Severing the so-called sovereign-bank nexus by way of a deposit insurance scheme would be helpful in terms of reducing the risks of runs on national banking systems, which subsequently may cause a sovereign crisis. It would also help reduce the risk of a sovereign crisis causing a banking crisis. A common deposit insurance scheme would also make it easier to allow sovereign defaults, easier but not easy! The direct ‘balance sheet’ effect of a sovereign default would be more manageable and the spill-over effect between member-states’ banking systems would be less severe, even though financial market volatility, declining asset valuations and economic weakness would negatively affect banks. But they would be less likely to have to deal with a destabilising run on their deposits on top of all this. Most importantly, with deposit insurance in place, a liquidity-driven banking sector crisis would become less likely, and the scheme would go some way towards mitigating the negative consequences of a sovereign default on the national banking system – and therefore on the rest of the euro area.

In designing financial risk-sharing regimes, creditors seek to minimise potential liabilities while maximising crisis-fighting capabilities. In order for the creditors to sign up to a scheme that effectively guarantees other member-states’ banking sector deposit liabilities, they will demand greater ex-ante control of financial risk-taking (Single Supervisory Mechanism) and the reduction of banking sector risks (‘legacy assets’) in order to limit future contingent liabilities. Following the establishment of the single supervisory mechanism and the fiscal compact, creditors now insist that banks value their holdings of government debt on a risk-adjusted basis. But this would effectively force banks in countries with low credit ratings to raise large amounts of capital. From the creditors’ point of view, this would help limit their potential future liabilities in terms of deposit guarantees. It would also help limit moral hazard, that is, governments’ ability to pursue irresponsible fiscal policies by leaning on domestic banks to buy their debt. Limiting the ability of banks to absorb government debt is therefore to be welcomed from a creditor perspective, as it may help impose market discipline on debtor governments. From a debtor perspective, a banking sector backstop is welcome, but eliminating what is effectively the lender-of-last-resort function of the national banking system is unacceptable, as it would increase the risk of a banking and/or sovereign crisis. This issue is at the centre of the disagreement between creditors and debtor countries in their negotiations about a common deposit insurance scheme.

Creditors have an interest in making the euro area more robust and in ‘completing’ banking union through a common deposit insurance scheme. So do debtor countries, if only to avoid national financial crises. What creditors and debtors disagree on is who is to shoulder what share of the potential financial (and economic) risks. Creditors will want to minimise their financial risks by transferring as much risk as possible onto debtor countries, while retaining the ability to safeguard systemic stability. Debtors fear that such a move would be self-defeating in terms of limiting the degree of necessary risk-sharing during a crisis. The trick is to find a solution that safeguards systemic financial stability while remaining politically acceptable to both creditors and debtors. It is clear that finding an agreement will be a lengthy process in spite of the recent proposals put forward by German Finance Minister Scholz. It requires technical agreement on the level of resources available to counter systemic crises and it requires agreement on the distribution of financial and economic costs and risks between creditors and debtors. Therefore, the strengthening of the European banking union will take time and a deposit insurance scheme will be phased in over many years (like the Single Resolution Fund).

It will also take a long time to complete banking union because neither creditors nor debtors are under significant pressure at the moment to sign up to what they perceive as potentially costly reforms. The need to advance banking union deepening is generally recognised, but the absence of instability does not make it politically urgent. During the next crisis, firefighting will be the primary concern, not how to make the building more fire-proof. One more reason why completing banking union will be a marathon, not a sprint. 

Sino-US technology & security competition (2020)

While US trade policy may in part be driven by domestic-political considerations, the broader, strategic goals of US foreign economic policy have begun to emerge more clearly. US policies are aimed at both slowing down China’s economic rise and preventing it from becoming a global leader in the key technologies of the future. The economic rents attaching to technological leadership may or may not offset some of the incontrovertible welfare losses stemming from present US trade and investment policies. Economic welfare is not Washington’s primary objective, however. Losses to US welfare are considered an acceptable price to pay in the quest to maintain global technological leadership in view of intensifying US-China security competition (Jaeger 2019). As Stephen Krasner put it a long time ago: high politics beats low politics. National security beats economic welfare. The US is increasingly concerned about relative rather than absolute gains (or losses) as far as its bilateral economic relationship with China is concerned. This is unlikely to change as long as China’s rise continues and Beijing continues to act more assertively on the world stage.

Technological ‘dis-integration’ (or decoupling) will be very difficult to avoid. Washington’s ‘weaponization of interdependence’ (including technology) will lead Beijing to seek to reduce its dependence on the US by accelerating – to the extent possible – domestic innovation and technological upgrading (e.g. semi-conductors) and by diversifying its technology supply chains. Beijing will be doubling down on the very state-supported innovation policies that Washington finds unacceptable. The US will seek to limit China’s access to advanced technology as well as the means to innovate and produce advanced technology through targeted investment restrictions, tighter US export controls and the more forceful application of extra-territorial legislation (e.g. Huawei). It will also increase the diplomatic pressure on other countries not to adopt Chinese technologies (e.g. Huawei 5G) and not to supply China with advanced technology (e.g. Taiwanese semi-conductors). This is bound to lead to the emergence of a more segmented international technology regime and will affect trade, investment, finance, data flows, even the flow of students and researchers.

Economically, technological leadership often involves significant economic rents, but more importantly it may give countries a decisive technological-military advantage (e.g. US nuclear supremacy following WWII). Concerns about the revolutionary, transformative impact of AI, biotechnology, robotics and quantum computing have emerged as the main driver underpinning US policy towards China. These technologies are now seen as ‘dual use’ and Washington is intent on preventing China from acquiring them first. A bi- or even tri-furcated international technology regime is set to emerge. This is a battle – at least in the eyes of the US administration – for the commanding heights of the 21st century economy, as VP Pence put it (Hudson Institute 2018).

Both Washington and Beijing are increasingly seeing their relationship as a zero-sum game in the technological and military realm. Perception is reality. But is this an intellectually and historically defensible perspective? The rise and fall of great powers and the security competition it typically engenders is a fairly well-established historical fact (Thucydides; Gilpin 1981; Kennedy 1987; Allison 2017) with only few exceptions (Schake 2017). But is the concern about technological breakthroughs and military advantage equally well-justified? Technological breakthroughs have given economies and states an important comparative advantage on occasion; but it is also true that peer competitors have typically managed to acquire advanced technologies pretty quickly, if not through trade, investment or licensing, then through reverse-engineering or outright theft (e.g. US nuclear technology). Successful commercialisation is a different matter, but the successful integration of new technology into existing military capabilities, while not without challenges, is generally achieved fairly quickly. The time it takes to establish parity is certainly of concern to military planners and political leaders, but historically technological lead-economies and states have rarely managed to maintain an unchallenged leadership position for long – at least not relative to their closest competitor and especially not if the peer competitor decided to put substantial resources behind the quest for technological parity (e.g. Soviet and Chinese nuclear and missile technology). Such an observation is unlikely to put strategists and military planners much at ease, however. The fact remains that Washington is concerned about China establishing a leadership position in key future technologies, especially in areas of important military application.

As far as technology, innovation and technology diffusion are concerned, there are several broader considerations that need to be mentioned in order to assess the implications of technological decoupling. First, there is the aggregate economic welfare argument. Technological advances typically raise economic productivity. Their diffusion is desirable. However, it is also important to ensure that those investing in research and development are rewarded for their risk-taking. Hence the existence of national and international intellectual property rights regimes. Otherwise desirable technological innovation may not take place at all. This issue is very much at the heart of US concerns about Chinese policies, including IPR protection and cyber-theft. Second, there is the national security argument, mentioned above. Technological breakthroughs, even if the technologies can ultimately be replicated, can confer a significant short-term advantage to a state in military terms. In extremis, a short-term technical advantage can translate into a strategic victory (or defeat). National security considerations militate against technology diffusion, but they obviously have an economic cost. Third, there is the relative economic advantage argument. If one country makes a breakthrough, it may be able to establish an unassailable monopoly or lead position (‘winner-takes-all’). This may happen because the new technology is characterised by increasing returns to scale or because competitors find it difficult-to-impossible to replicate the technology. It may also lead to a broader enhancement of national economic competitiveness, though national competitiveness is a controversial concept (Krugman 1994). Related productivity gains are a real prospect, however. Historically, states or at least potential peer competitors have been able to replicate new technology fairly quickly. States that are lagging technologically can also deny foreign companies the ability to exploit their competitive advantages by shutting them out of the domestic market until national efforts to replicate the new technology prove successful (e.g. US tech vs Chinese tech firms in the Chinese market). This is not meant to be a definitive conclusion, but rather food for thought for those who believe technological leadership is unassailable.

From a global economic point of view, technological dis-integration will be welfare-decreasing. As in trade, US technology policies may have been interpreted as seeking to create an improved international regime that not only generates a ‘fairer’ distribution of benefits (through IPR protection), but also global aggregate economic welfare gains. In reality, it is clear by now that the US has shifted towards a techno-nationalist stance due to what it considers – rightly or wrongly – the national competitiveness and national security implications of technological competition with China (second and third argument). If technology were all about generating greater aggregate economic welfare (non-zero-sum game), contentious issues could in principle be addressed by establishing a level-playing-field. At a minimum, this would require convergence towards a less state-interventionist, more market-based national economic and regulatory technology regime in China that guarantees IPR and prevents forced technology transfer. However, this is a distant prospect. Washington is quite simply lacking sufficient trust in Beijing and Beijing is not going to shift towards an open, market-based, level-playing-field type of regime as long as it remains confident that its present political economy is better suited to beat the US in the technology race. It will be even less inclined to shift away from government control against a backdrop of intensifying strategic competition. It is quite possible that China will at some point improve its IPR protection regime. But as long as it is catching up technologically and has little by way of indigenous technology worth protecting, the incentives are heavily stacked in favour of the status quo (Huang & Smith 2019; Lardy 2018). After all, it affords Chinese companies IPR protection in US and European courts, while US and European companies find it hard to have their rights enforced in China. 
It is also questionable whether any type of domestic reform would lead to less competition in the military-technological realm, where espionage is, of course, an unavoidable practice.

Assuming China does not fundamentally change its state-backed technology policies and domestic innovation regime, who is going to win the technological race? Some analysts predict a ‘second great divergence’ after the ‘great divergence’ following the Industrial Revolution. The present technological revolution, they say, will lead to a quantum leap in productivity, natural monopolies and large economic rents. It will also entrench the winner’s unassailable technological leadership position. Whether technology can or cannot successfully be replicated in a technical sense is difficult to predict. The argument most often cited is that China, for example, has access to significantly more data due to its demographics, significant use of digital technology, limited restrictions on data privacy and supportive government policies. AI technology is highly reliant on large amounts of data. The establishment of AI supremacy will once and for all prevent other competitors from challenging the lead innovator. Maybe. On the flipside, history has shown that states have proven quite adept at replicating, reverse-engineering or, if need be, stealing new technology. The burden of proof is on ‘supremacists’ arguing in favour of ‘divergence’.

Are US concerns about Chinese government support giving Chinese technological development a strategic advantage justified? Intuitively and historically, a sufficiently wealthy and advanced state should be able to successfully deal with what may be called, à la Rumsfeld, a technological known “unknown” (the technology already exists, you just need to figure out how to produce it yourself) by throwing sufficient resources at the problem (e.g. China, the USSR and nuclear and missile technology, even poorly-resourced North Korea’s nuclear programme today). States can and often do successfully design incentives to solve a pre-determined problem, especially in the military realm. But such incentives are typically not designed to find ‘new’ technologies (e.g. teflon, the internet). This is probably where a market-based approach is superior, on average and in general, and where an organically evolved technological eco-system is more innovative than government-supported technology policies. It turns out that government subsidies and supportive government policies seem to have given China an edge when it comes to commercialisation and the application of certain new technologies (Lee 2018). The US, however, seems to retain the edge in terms of more advanced technological eco-systems, superior basic research and the number and quality of top-end researchers and research outfits (Financial Times 2019). This seems to favour the US in the technology race, but it is difficult to make predictions with a reasonable degree of confidence. Once again, food for thought.

Technological leadership is primarily about military or economic advantage, including indirect advantages. It is about being able to set the international standards and building and perhaps controlling the global tech infrastructure as well as the technology used in other countries. Not only does this carry with it potential security implications (e.g. 5G), it may also help the technological leader lock in economic advantages. The US government, mainly through the vice president, says that Washington does not want to see the emergence of economic blocs and diverging standards, which would translate into lower efficiency. True, but from a security point of view, the country setting the standards and building the infrastructure on the back of its technological lead will tend to be more secure. After all, would the US accept Chinese standards and Chinese-built infrastructure in order to preserve a unified global technology regime? Hardly. In VP Pence’s own words, it is a fight over who will control the commanding heights of the 21st century economy. Beijing very likely shares the US vice-president’s view.

The Sino-US relationship will continue its secular shift from cooperation to competition and possibly (non-military) conflict (Jaeger 2019). Technological de-coupling will be difficult to avoid. The US wants to prevent China from acquiring advanced technology and becoming a technological leader, while China is eager to reduce its dependence on US technology and innovation. To the extent that the US weaponizes (asymmetric) interdependence, China will intensify its efforts to diversify and reduce its dependence. Weaponisation, especially against the backdrop of increased security competition, makes the dis-integration (or bi-furcation) of the global technology regime all but certain. We have already seen a tightening of US export controls, restrictions on US technology transfers and a stricter US (inward) investment regime. US-China competition is also increasingly going to affect third countries in terms of trade, investment, capital and technology transfer, including Taiwanese semi-conductor producers, Vietnamese exports and FDI and financial openness.

Technological innovation will be a major area of competition between China and the US. US tech companies and entrepreneurs (Gates, Schmidt) oppose ‘dis-integration’/de-coupling, but the increasing domestic political headwinds in the US make it unlikely that their view will be heeded. US Congress has been shifting towards a more adversarial stance vis-à-vis China for quite some time now. The White House, historically keen to bat off anti-China legislation, is now pursuing a more conflict-oriented China policy. And US business, increasingly frustrated by competition from Chinese companies and by market access, technology transfer and IPR problems in China, is far less pro-China than it used to be; it is in fact supportive of many of the economic objectives of US foreign policy, if not necessarily its means.

Trade, investment and the cross-border flow of people will be affected by increasing Sino-US techno-competition, including in third countries. Agreeing on a largely market-based, reciprocal and open regulatory regime that creates a level-playing field might appear to offer a way to avoid tech-competition. However, the lack of US trust in Chinese policies, the necessary changes to China’s political economy (and judicial system) this would require and the reluctance of China to embrace such large-scale change in light of increasing geo-strategic competition make such a possible solution highly unlikely. And even if such an agreement could be struck, it is not clear that it would prove a lasting solution as soon as either side begins to lose the technological race. Technology will therefore remain at the heart of US-China competition and it will affect a wide range of bilateral (and multilateral) economic, financial and investment policies for a long time to come.