“The Sources of Soviet Conduct” by George Kennan is one of the most famous articles about international politics ever written. Kennan attributed Soviet aggressiveness in international affairs to Russian nationalism and neurosis, but he also saw it as fundamentally disconnected from the Russian people. (Read: it is not the people who are our enemy, but their rulers.) This is a perfect example of what Kenneth Waltz would later call a ‘second image’ account: it is the nature of the domestic political regime that accounts for, say, a state’s propensity to engage in armed conflict. Waltz (1959), of course, concluded that it was the third image (the self-help international system) that accounted for inter-state war, rather than the second (the state) or the first (man). In International Relations theory, this has come to be known as the level-of-analysis problem. It is also an issue quite familiar to psychologists.
The so-called fundamental attribution error describes the tendency of agents to attribute other people’s behaviour to their nature or internal qualities, while attributing their own behaviour to external constraints. In Kennan’s case, the malign disposition was attributed to the Communist regime, not the Russian people. By contrast, the US had sought post-war international cooperation but was forced into superpower competition by external circumstances (Soviet expansionism), or so the established Western narrative went. Psychological experiments have shown that people attribute an adversary’s (malicious) behaviour to their (evil) nature, but rationalise their own (often identical) behaviour as being forced upon them by circumstances (including others’ behaviour). To paraphrase Talleyrand: this attitude is worse than a moral misjudgement; it is a mistake. Predicting, as well as influencing, another party’s behaviour requires one to understand what drives it.
Misunderstanding what drives an adversary has been a frequent source of US foreign policy mistakes. North Vietnam did not give in to US deterrence by punishment after the US began bombing it, much to the surprise of US decision-makers. Vietnam lost 5-10% of its population during the war, while the US lost 50,000 soldiers. (As my professor Fred Halliday once pointed out: there is a moving memorial commemorating the 50,000 American lives lost, but there is no memorial commemorating the two million or so Vietnamese lives lost, at least not in DC.) Similarly, FDR’s decision to tighten economic sanctions and move the Pacific Fleet to Pearl Harbor not only failed to deter Japan from further expansion in Asia; it actually prompted the attack on the US. Of course, foreign policy makers, like ordinary people, sometimes miscalculate because they fail to understand the other party’s calculus. But how do we, and how can we, be sure that we understand the other party’s calculus? Both LBJ and FDR misunderstood the calculus of North Vietnam and Imperial Japan, respectively.
Intentionality figures very prominently in understanding, explaining and predicting others’ behaviour. Economists proposed the concept of revealed preferences: what a person does is necessarily a reflection of his or her interests or utility function, and observing a person’s behaviour over time reveals that person’s preferences and intentions. But this is too simplistic. After all, going to the gym may not be my ultimate goal; it may just be a means to staying fit or looking good. In other words, the ultimate goal or preference, and therefore the intention, can never be known with certainty. Maybe it can be guessed from what others do or say. However, especially in antagonistic contexts, the other party may engage in an action that is meant to mislead about her real and/or stated preferences, goals and intentions. For actions to be epistemically useful, they need to be interpreted, and an observer needs to ascribe meaning to them. This creates a further problem. Sometimes (often?), the meaning of an action differs across contexts, observers or cultures. Interpretation is observer- and context-dependent. Once you have formed a view about another person’s character (nature) or intentions or preferences, whether based on empirical evidence or prejudice, that person’s actions will likely be interpreted accordingly. Cognitive dissonance, another cognitive bias, makes it psychologically difficult to be judicious, in addition to the epistemic problem of ‘knowing’ what drives or motivates the other person’s behaviour.
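The logic of revealed preference, and one reason it is too simplistic, can be sketched in a few lines of code. The sketch below is purely illustrative: the choice data, function names and the ‘consistency’ test (an acyclicity check on the inferred relation) are my own assumptions for the sake of the example, not a method drawn from the revealed-preference literature itself.

```python
# Toy sketch of revealed preference: from observed choices we infer a
# preference relation, then check whether it is internally consistent.
# All names and data here are hypothetical, for illustration only.

def revealed_preferences(choices):
    """Each observation is (chosen, available_set): choosing x when y
    was also available is taken to reveal that x is preferred to y."""
    prefs = set()
    for chosen, available in choices:
        for alt in available:
            if alt != chosen:
                prefs.add((chosen, alt))
    return prefs

def is_consistent(prefs):
    """A weak rationality test: the revealed relation must contain no
    cycle (e.g. x preferred to y AND y preferred to x)."""
    graph = {}
    for a, b in prefs:
        graph.setdefault(a, set()).add(b)

    def has_cycle(node, stack, done):
        # Depth-first search over the preference graph.
        if node in stack:
            return True
        if node in done:
            return False
        stack.add(node)
        for nxt in graph.get(node, ()):
            if has_cycle(nxt, stack, done):
                return True
        stack.remove(node)
        done.add(node)
        return False

    done = set()
    return not any(has_cycle(n, set(), done) for n in graph)

# Consistent observer: always picks 'gym' over the alternatives.
consistent = [("gym", {"gym", "cinema"}), ("gym", {"gym", "pub"})]
# Inconsistent observer: choices reveal gym > pub AND pub > gym.
inconsistent = [("gym", {"gym", "pub"}), ("pub", {"gym", "pub"})]

print(is_consistent(revealed_preferences(consistent)))    # True
print(is_consistent(revealed_preferences(inconsistent)))  # False
```

Even in the well-behaved case, note what the data cannot tell us: whether ‘gym’ is an ultimate goal or merely a means to staying fit, and whether the observer might be choosing strategically in order to mislead.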
This does not fatally weaken the presumption of purposeful, intentional, goal-directed behaviour. It does suggest, though, that the assumption is to be made judiciously and thoughtfully. Graham Allison (1971) made this point famously in his study of the Cuban Missile Crisis. The rational-actor model assumes rationality and intentionality. This raises the “when you have a hammer, everything looks like a nail” problem: what if the other party does not act rationally? Allison famously proposed the “organizational process” and “governmental politics” models as alternatives to the rational-actor model. These models help account for certain Soviet actions during the Cuban Missile Crisis that the rational-actor model failed to explain or rationalise. The point, of course, is this: not all observed behaviour is purposeful and intentional. Upon learning of the death of the Turkish ambassador at the Congress of Vienna, Metternich is supposed to have said: “I wonder what he meant by that”. Anthropomorphising the state by presuming rationality and intentionality often works, but on occasion it fails badly, whether as an explanation or as a predictive tool. It is difficult enough to establish what the rationale or intention behind an action is. But sometimes there simply is no intention or rationale at all.
Human cognition seems to be hard-wired to attribute, or at least to look for, intentionality, rationality and causality. Intentionality then comes to be treated as causality. Humans have a tendency to generate more or less coherent narratives where there are often really only disjointed facts. If nothing seems to make sense, they invoke a hidden hand (aka a conspiracy). They even do this where alternative accounts are perfectly coherent. Humans seek to make sense of things, and where there is no sense, they tend to impose it (Brown et al. 2014). Humans also look for, and believe they have found, causality where there is only randomness (Kahneman & Tversky 1973). From an evolutionary point of view, this is easy to understand. Reducing complex patterns to simple ones and complex phenomena to simplistic narratives often offers ‘good-enough’ guidelines. The resulting heuristics can even be more efficient from a cost-benefit point of view (e.g. a shadow in the deep grass of the savannah). It does not matter how accurate simplified models or heuristics are (Gigerenzer 2007; Kahneman 2013). What matters is that they are not detrimental to evolutionary success. Intentionality is one model we use frequently and, on average, successfully. Epistemically, it is nonetheless a difficult, even problematic concept.
First, intentionality is a tricky concept, philosophically speaking. Daniel Dennett (1987) put it well: “The intentional stance is the strategy of interpreting the behaviour of an entity (…) by treating it as if it were a rational agent who governed its ‘choice’ of ‘action’ by a consideration of its ‘beliefs’ and ‘desires’”. Note that this is a stance, a way of looking at and understanding behaviour. This is precisely why humans see intentionality where there occasionally is none. They attribute meaning and intentionality to random events. They impose patterns where there are none. They presume intentionality even where there isn’t any. The intentional stance attributes beliefs and desires to agents and assumes that their actions reflect these beliefs and desires. From the actions, so the argument goes, one can infer the intentions based on beliefs and desires.
Second, another agent’s intentions can never be established with certainty, and this assumes that the actor has intentions in the first place. This is problematic at several levels. Motives and desires are easy to come by (to assume, presume, project). Why did she read a book? Because she was curious, even if the real reason was to impress her dinner party guests. Curiosity is a plausible motive. But then curiosity can in principle explain any action. So explaining actions in terms of motives is, or can be, problematic. We may simply ascribe the wrong intention to another person’s actions. Moreover, it may be well-nigh impossible to find out what the actual reason or intention behind reading the book was. And maybe there was no (conscious) intention behind it at all. After all, we don’t have access to another person’s so-called qualia. Last but not least, even if curiosity was what led her to read the book, does this really constitute an explanation in an epistemological sense? Does the answer to a why-question ever constitute an explanation? As Jeffrey Kasser points out, is saying that instinct led the turtle back to the sea after hatching really an explanation? Is this not like Molière’s quip about why opium makes you sleepy: because of its sleep-inducing qualities?
Third, to the extent that intentionality is tied to the concept of rationality, things become more complicated, conceptually and empirically. Rationality is about means and ends. It typically involves assumptions about well-behaved (transitive) and stable preferences. It often assumes maximisation as opposed to satisficing. It often disregards issues such as information costs, bounded rationality, misperception and strategic interaction (where concealing what one wants is often desirable). Actors may also modify their goals over time for all kinds of reasons. In short, the standard assumptions of rationality are extremely restrictive and rarely encountered in the real world. Worse, outside narrow, controlled experiments, it may be impossible to tell which change in the underlying assumptions accounts for an observed change in behaviour. While this sort of scepticism affects all models, it does raise the question of how epistemically useful the intentionality model (or presumption) is.
Fourth, analysts are biased in attributing action to dispositional or contextual factors depending on who performs the action (the fundamental attribution error). Often this bias is compounded by the in-group vs out-group bias. Analytically, it is difficult to see why others should be motivated by qualitatively different factors than ourselves. Evolution may be to blame (Greene 2014).

Fifth, whose intention is it anyway in the case of complex or collective social entities? As Allison suggested, state behaviour is often produced by organisational or bureaucratic factors, and this casts doubt on the rationality assumption. Not being privy to internal deliberations, all we (or JFK) could observe was Soviet behaviour. And cognitive dissonance often allows analysts to rationalise behaviour as rational even when it is not, and/or is not intended to be (sic!).
In short: analysts tend to attribute beliefs, desires, intentionality and rationality to state behaviour. This is generally a good place to start. It is also an easy place to start. But ontological, epistemic and pragmatic problems counsel caution, as well as the judicious use of the rationality presumption (or model). To what extent intentionality provides a good explanation, or indeed any explanation, depends in part on what constitutes an explanation (Jaeger 2020). If predictive accuracy is the standard, the model may make the right prediction for the wrong reason. If the cause (intention) cannot be observed or otherwise ascertained, a whole range of intentions (causes) may be invoked to explain (rationalise?) behaviour. The fact is that sometimes there is no intention, conscious or unconscious, yet analysts happily attribute the behaviour to an intention and hence, indirectly, to underlying beliefs and desires. This is what Dennett calls the ‘intentional stance’. The intentional stance is psychologically ubiquitous, but epistemically shaky. Recognising this may represent progress. We have come a long way. After all, not too long ago, we used to attribute eclipses and floods to intentionality.