Thursday, October 4, 2012

Who Said What in Last Night's Debate?

Last night's Presidential debate in Denver seems to have ended with a media-declared Romney victory. BuzzFeed's Ben Smith called it 42 minutes in, Chris Matthews was livid, and Big Bird is fearing for his job.

I listened to the debate mostly audio-only and watched my Twitter feed more than the two candidates themselves. I got the sense that Obama performed a bit better than the immediate internet consensus suggested, though I generally agree that Romney came off as more confident and on-message. It would be a mistake to read too much into the content of the debate (as Intrade has) and I agree with John Sides that Romney's win won't do much to move the polls.

What did interest me post-debate was what divisions between Obama and Romney could be seen in the types of words they used in their speeches. Governor Romney seemed particularly focused on the economy (as the fundamentals would suggest) and the President seemed generally aloof and unfocused on any one particular issue or line of attack, apart from a somewhat extended discussion of Medicare late in the debate. More generally, what did the debate reveal about partisan divisions in rhetoric?

I made a plot of all of the important words used in the debate (taken from ABC News' transcript) and computed a value for each one based on how likely it was to appear in Governor Romney's speech versus President Obama's. For more details on the method I used, see the bottom of the post. Words on the left tended to be used more by the President and words on the right are more common in Romney's speech. I've also re-sized each word based on how often it appeared across both candidates' speeches - larger words are generally more frequent.


It's a bit hard to make out many of the words since a large number of irrelevant or only incidentally partisan words cluster around the middle. I re-did the same plot two more times, first including only the words that were used more than five times overall and again with words that were used more than ten times (the most common words in the debate).


What can we take away from this?

Intuitively, the results seem to reflect clear divisions in rhetoric between the two parties, though the distinction is surprisingly muted compared to the much more heated rhetoric of the campaign. Governor Romney tended to focus more on economic issues (as expected), while President Obama focused on issues that he generally "owns" (health care and Medicaid in particular). Some partisan divides in rhetoric are evident - tax policy, for example: words like "Wall Street," "loophole," "profit," and "corporation" are more frequent in President Obama's speech.

Moreover, the data seem to confirm the general takeaway that Governor Romney was smoother and more focused in his message than President Obama. Most of the most frequent words cluster either around the center or on the right. Governor Romney's rhetoric was also markedly more generic than President Obama's, which reflects his newfound shift towards the center.

Substantively, the candidates spent the debate discussing the same issues and largely on the same terms. Most words, and particularly the most common ones, cluster around the center, meaning that both candidates are roughly equally likely to use them in their speeches. Apart from tax policy, neither candidate is looking to reframe an issue in a particularly unique way. Rather, each discusses the same issue (like the deficit), using very similar frames. This is just as expected - the candidates are not looking to differentiate themselves ideologically.

But despite the overarching similarities in the two candidates' rhetoric, it's interesting to note the subtle partisanship in some of the candidates' word choices. As expected, both candidates spent a lot of time talking about the "middle class," but the phrase "middle class" is almost exclusively an "Obama"-word. Governor Romney prefers to use "middle income" rather than "middle class," perhaps to avoid the more "leftist" connotations of the term "class."

Likewise, we see a rather marked difference when the two candidates discuss education - President Obama tends to focus on college while Governor Romney talks about the K-12 system.

It's also very clear that President Obama is a huge fan of using the word "folks" rather than "people."

Ultimately, though, what's really interesting is not the words that were used, but the words that were not used. The word "women" was entirely absent from the debate (giving a new meaning to the "just for men" meme that started trending on Twitter). "Immigration" was also nowhere to be found. Trade received a passing mention from President Obama and the closest this debate got to "wonky" was a rather vague discussion of health care reform where neither candidate seemed entirely comfortable. For those wanting an actual debate over issues (i.e. people who have already decided who they are voting for), this, like every debate, was lacking.

Given both the role of the debates in the election process and the overall incentives facing both Obama and Romney at this point in the campaign, this is all entirely expected - campaigns are meant precisely for people who don't pay attention to campaigns.

Thoughts?

---------------------
Note on methods

I used a relatively straightforward technique to generate partisan scores for each word in the debate. After splitting the debate transcript into three separate documents for Obama, Romney and Lehrer, I removed all punctuation and capitalization from the texts along with any uninformative "stop" words (the, a, an, as, etc.) from each. I then applied a "stemming" algorithm to consolidate similar words into a single root ("regulate", "regulating", "regulation" all reduce to "regul").
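For the curious, here is a minimal sketch of this preprocessing step in R (a reconstruction, not my exact script; the tm and SnowballC packages and the per-speaker file names are assumptions):

```r
# Lowercase, strip punctuation, drop stop words, and stem each transcript.
library(tm)        # text-mining utilities: removePunctuation(), stopwords()
library(SnowballC) # Porter-style stemmer: wordStem()

clean_words <- function(path) {
  text  <- tolower(paste(readLines(path), collapse = " "))
  text  <- removePunctuation(text)
  words <- unlist(strsplit(text, "\\s+"))
  words <- words[words != "" & !(words %in% stopwords("en"))]
  wordStem(words, language = "english")  # "regulation" -> "regul"
}

obama  <- clean_words("obama.txt")   # hypothetical per-speaker files
romney <- clean_words("romney.txt")
```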

I counted the number of times each word occurred in each speech, adding 0.5 to all of the zeroes (to prevent division by zero in the next step). That is, if a word appeared 5 times in one candidate's speeches and zero times in another, I treated that 0 as 0.5 words.

I then normalized the data by dividing each word count by the total number of words used by the candidate. This gives the relative frequency of each word/word stem. I converted the frequencies to odds (frequency/(1-frequency)) and for each word in the data, divided Romney's odds by Obama's odds. This generated an odds ratio, with ratios greater than 1 representing words that were more likely to be used by Romney and ratios less than 1 representing more "Obama"-oriented words. Finally, I logged the odds ratio to get a linearized variable that I plot on the x-axis. Words taking positive values (on the right) are more likely to appear in Governor Romney's speech and words taking negative values (on the left) are more likely to appear in President Obama's speech. Words clustering around 0 are equally likely to appear in both candidates' rhetoric.
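Continuing the sketch above, the scoring step might look like this (the 0.5 floor, the odds formula, and the log transform follow the description in the preceding paragraphs):

```r
# Count each stem per speaker, floor zeroes at 0.5, convert counts to
# frequencies, frequencies to odds, and take the log of the odds ratio.
vocab <- union(obama, romney)
n_o <- table(factor(obama,  levels = vocab))
n_r <- table(factor(romney, levels = vocab))
n_o[n_o == 0] <- 0.5  # avoid division by zero
n_r[n_r == 0] <- 0.5

f_o <- n_o / length(obama)           # relative frequencies
f_r <- n_r / length(romney)
odds_o <- f_o / (1 - f_o)
odds_r <- f_r / (1 - f_r)
score  <- log(odds_r / odds_o)       # > 0: Romney-leaning; < 0: Obama-leaning
size   <- log(as.numeric(n_o + n_r)) # drives each word's plotted size
```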

Each word is re-sized by the log of its overall count in the dataset, and colored red-to-blue based on the x-axis variable.

A better (though more complex) way to visualize this sort of data was developed by Monroe, Colaresi and Quinn (2008).

Tuesday, June 26, 2012

The Fundamental Uncertainty of Science

While I have not had much time for the mini data-gathering/research projects that I usually try to post on this blog, I found the recent flurry over Professor Jacqueline Stevens' New York Times editorial "Political Scientists are Lousy Forecasters" (and the follow-up on her blog) worth commenting on a bit more.

The political science blogosphere has since responded in full force (and snark). I agree entirely with the already-stated criticisms and will try not to repeat them too much here. The editorial is at best a highly flawed and under-researched critique of quantitative political science and at worst a rather cynical endorsement of de-funding all NSF political science programs on the grounds that the NSF tends to fund studies using a methodological paradigm that Professor Stevens does not favor. I'll err on the side of the former.

But one quote from the piece did irk me quite a bit:
...the government — disproportionately — supports research that is amenable to statistical analyses and models even though everyone knows the clean equations mask messy realities that contrived data sets and assumptions don’t, and can’t, capture. (emphasis mine)
This statement is contradictory on its face. The entire point of statistical analysis is that we are uncertain about the world. That's why statisticians use confidence levels and significance tests. The existence of randomness does not make all attempts at analyzing data meaningless; it just means that there is always some inconclusiveness to the findings that scientists make. We speak of degrees of certainty. Those who use statistical methods to analyze data are pretty clear that none of their conclusions are capital-T truths, and the best political science tends to refrain from any absolute statements. Indeed, this is one reason why a gap tends to exist between the political science and policymaking communities. Those who enact policy want exact and determinate guidance, while political scientists are cautious about making such absolute and declarative statements. It is depressing to see these sorts of caricatures of quantitative methods being used to denounce the entire field. Simply put, just because physics is very quantitative and appears to describe very clean and determinate relationships does not mean that all uses of math in social science result in only simple, exact and absolutely certain conclusions.

But putting aside that highly inaccurate picture, Professor Stevens' definition of what constitutes scientific knowledge is remarkably limiting. Prof. Stevens is staking out a very extreme position by implying that the existence of randomness - "messy realities" as she calls it - makes all attempts at quantification meaningless. She argues for a very radical version of Popper's philosophy of science, positing that any theory should be considered falsified if it is contradicted by a single counter-example. It's unfortunate that Prof. Stevens glosses over the extensive philosophical debate that has followed in the eight-or-so decades after Popper, but this is inevitable given the space of a typical NYT column. Nevertheless, it is very disappointing that the OpEd gives the impression that Popperian falsificationism is the gold standard of scientific method and the philosophy of science, when in fact the scientific community has moved far beyond such a strict standard for what constitutes knowledge. While I won't go into a full dissection of Popper, Kuhn, Lakatos, Bayesian probability theory, and so on, it suffices to say that Stevens' reading of Popper would discount not only political science, but most modern sciences. Accounting for and dealing with randomness is at the heart of what so many scientists in all disciplines do.

By rejecting the idea that probabilistic hypotheses could be considered "scientific," Professor Stevens is perpetuating another caricature - one of science as a bastion of certitude. It's a depiction that resonates well with the popular image of science, but it is far from the truth. I'm reminded of a quote by Irish comedian Dara O Briain:
"Science knows it doesn't know everything, otherwise, it would stop."
All science is fundamentally about uncertainty and ignorance. Knowledge is always partial and incomplete. There was actually an interesting interview with neuroscientist Stuart Firestein on NPR's Science Friday on this topic a few weeks back, where he offered this valuable quote:
...the answers that count - not that answers and facts aren't important in science, of course - but the ones that we want, the ones that we care about the most, are the ones that create newer and better questions because it's really the questions that it's about.
Ultimately, I would argue that probabilistic hypotheses in the social sciences still have scientific value. Events tend to have multiple causes and endogeneity is an ever-present problem. This does not automatically make systematic, scientific, and quantitative inquiry into social phenomena a futile endeavor. Making perfect prediction the standard for what is "science" would dramatically constrain the sphere of scientific research. (See Jay Ulfelder's post for more on predictions.) Climate scientists constantly debate the internal mechanics of their models of global warming - some predict faster rates, some slower. Does this mean that the underlying relationships described by those models (such as between CO2 concentration and temperature) should be ignored because the research is too "unsettled"? While deniers of climate change would argue yes, the answer here is a definite no.


Or take an example from the recent blog debates about the value of election forecasting models. Just because Douglas Hibbs' "Bread and Peace" model (among other Presidential election models) does not perfectly predict President Obama's vote percentage in November does not mean that we can learn nothing from it. One of the most valuable contributions of this literature is showing that systemic factors like the economy are significantly more relevant to the final outcome than the day-to-day "horserace" of political pundits.


What should be said, then, about Prof. Stevens' concluding suggestion that NSF funds be allocated by lottery rather than by a rigorous screening process? Such an argument could only be justified if there were no objective means to distinguish what is and is not "scientific" research. If the criterion for what passes for real political science is simply the consensus of one group of elites, then from the standpoint of "knowledge," there is no difference between peer review and random allocation. This, in fact, is the argument that Thomas Kuhn, Popper's philosophical adversary, made about all science. But while Kuhn's criticism of a truly "objective" science was a useful corrective to 20th-century scientific hubris, it goes too far in this case, justifying an anything-goes attitude towards scientific knowledge that is all too dangerous. Penn State literature professor Michael Bérubé wrote a rather interesting article on this exact topic as applied to science at-large, noting the worrying congruence between the highly subjectivist approach to "science studies" adopted by some in leftist academia and the anti-science rhetoric of the far-right.
But now the climate-change deniers and the young-Earth creationists are coming after the natural scientists, just as I predicted–and they’re using some of the very arguments developed by an academic left that thought it was speaking only to people of like mind. Some standard left arguments, combined with the left-populist distrust of “experts” and “professionals” and assorted high-and-mighty muckety-mucks who think they’re the boss of us, were fashioned by the right into a powerful device for delegitimating scientific research. For example, when Andrew Ross asked in Strange Weather, “How can metaphysical life theories and explanations taken seriously by millions be ignored or excluded by a small group of powerful people called ‘scientists’?,” everyone was supposed to understand that he was referring to alternative medicine, and that his critique of “scientists” was meant to bring power to the people. The countercultural account of “metaphysical life theories” that gives people a sense of dignity in the face of scientific authority sounds good–until one substitutes “astrology” or “homeopathy” or “creationism” (all of which are certainly taken seriously by millions) in its place.  
The right’s attacks on climate science, mobilizing a public distrust of scientific expertise, eventually led science-studies theorist Bruno Latour to write in Critical Inquiry:
Entire Ph.D. programs are still running to make sure that good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth…while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives. Was I wrong to participate in the invention of this field known as science studies? Is it enough to say that we did not really mean what we meant? Why does it burn my tongue to say that global warming is a fact whether you like it or not?  
Why can’t I simply say that the argument is closed for good? Why, indeed? Why not say, definitively, that anthropogenic climate change is real, that vaccines do not cause autism, that the Earth revolves around the Sun, and that Adam and Eve did not ride dinosaurs to church?
In the end, Bérubé calls for some sort of commensurability between the humanities and the sciences, and I think this kind of coming together is actually becoming the norm in political science academia, particularly as political theorists and quantitative political scientists still tend to fall under the same departmental umbrella:
So these days, when I talk to my scientist friends, I offer them a deal. I say: I’ll admit that you were right about the potential for science studies to go horribly wrong and give fuel to deeply ignorant and/or reactionary people. And in return, you’ll admit that I was right about the culture wars, and right that the natural sciences would not be held harmless from the right-wing noise machine. And if you’ll go further, and acknowledge that some circumspect, well-informed critiques of actually existing science have merit (such as the criticism that the postwar medicalization of pregnancy and childbirth had some ill effects), I’ll go further too, and acknowledge that many humanists’ critiques of science and reason are neither circumspect nor well-informed. Then perhaps we can get down to the business of how to develop safe, sustainable energy and other social practices that will keep the planet habitable.

Tuesday, June 5, 2012

Is There a Role for International Institutions in Regulating "Cyberweapons"?

David Sanger's extensive New York Times piece about the United States and Israel's covert cyberwarfare operations on Iran's nuclear facilities is the first article I've seen that explicitly confirms the two countries' involvement in Stuxnet's development. But this revelation isn't particularly surprising. Given the virus' complexity and purpose, the list of possible developers was rather short. Rather, what I found most interesting was this section towards the end:
But the good luck did not last. In the summer of 2010, shortly after a new variant of the worm had been sent into Natanz, it became clear that the worm, which was never supposed to leave the Natanz machines, had broken free, like a zoo animal that found the keys to the cage. It fell to Mr. Panetta and two other crucial players in Olympic Games — General Cartwright, the vice chairman of the Joint Chiefs of Staff, and Michael J. Morell, the deputy director of the C.I.A. — to break the news to Mr. Obama and Mr. Biden. 
 An error in the code, they said, had led it to spread to an engineer’s computer when it was hooked up to the centrifuges. When the engineer left Natanz and connected the computer to the Internet, the American- and Israeli-made bug failed to recognize that its environment had changed. It began replicating itself all around the world. Suddenly, the code was exposed, though its intent would not be clear, at least to ordinary computer users.  
...  
 The question facing Mr. Obama was whether the rest of Olympic Games was in jeopardy, now that a variant of the bug was replicating itself “in the wild,” where computer security experts can dissect it and figure out its purpose.  
 “I don’t think we have enough information,” Mr. Obama told the group that day, according to the officials. But in the meantime, he ordered that the cyberattacks continue. They were his best hope of disrupting the Iranian nuclear program unless economic sanctions began to bite harder and reduced Iran’s oil revenues. 
Within a week, another version of the bug brought down just under 1,000 centrifuges. Olympic Games was still on.
The excerpt highlights one of the unique and troubling aspects of "cyberweapons" - their use against adversaries permits their proliferation. Despite all of the effort at keeping Stuxnet both hidden and narrowly tailored, the virus escaped into the wild and its code is open to analysis by pretty much anyone. While competent coding can make it difficult to reverse engineer and re-deploy the virus against other targets without a significant investment of time and resources, it's still a distinct possibility. Cyberweapons create externalities - side-effects that don't directly affect the militaries using them, but can have spill-over consequences on other sectors of society. For example, a SCADA worm like Stuxnet which targets industrial control systems could theoretically be re-targeted at civilian infrastructure like power or manufacturing plants.

Certainly most governments using cyberwarfare will want to limit these externalities, since they do create an indirect threat (such as non-state actor attacks on critical infrastructure). This is evidenced by the fact that the U.S. and Israel not only tried to design Stuxnet and its ilk to be difficult to detect, but also to have very tailored aims. The virus was designed to act only on the specific centrifuge control configurations used by Iran, thereby somewhat limiting the initial damage of a leak (imagine what would have happened had Stuxnet deployed its "payload" on every computer system it landed on). Nevertheless, these externalities exist so long as governments with the capacity to do so continue to use cyber espionage and attacks. The logic of collective action suggests that governments are also unlikely to unilaterally refrain altogether from utilizing these technologies, since a blanket ban would be both impossible and entirely unverifiable due to the dual-use nature of the weapons.

This got me thinking a bit about what sorts of institutions could help mitigate some of the consequences of "leaks". Proposals for an international cyberweapons convention have been thrown around, but most have been very vague and poorly defined. Kaspersky Lab founder Eugene Kaspersky recently suggested a treaty along the lines of the Biological Weapons Convention or the Nuclear Non-proliferation Treaty (the Russian government has also floated similar proposals). However, an outright ban on "cyberweapons" would be highly unlikely and generally impractical. As I mentioned, verifying compliance would be substantially more difficult than it has been for either the BWC or the NPT. Given that both have been violated by a number of states party to them via clandestine programs, a "cyberweapons" ban would be toothless, even if it only banned particular types of attacks (such as those on SCADA systems). Moreover, states find cyber-capabilities significantly more versatile and useful than either biological or nuclear weapons. The category of "cyberweapon" is broad enough to cover everything from highly developed viral sabotage (Stuxnet) to simple distributed denial of service (DDoS) attacks, and these sorts of technologies are useful not only to militaries, but also to intelligence services. Finally, the dual-use nature of information technology and its globalization make locking in a "cyberwarfare oligopoly" à la the nuclear monopoly of the NPT near-impossible. The "haves" cannot credibly promise disarmament to the "have-nots" and the "have-nots" face significantly lower barriers to developing basic cyber-espionage or warfare capabilities.

Monday, April 16, 2012

Plans for Next Fall and Blog Updates

I've been rather slow at updating the blog recently, mostly because I have been shuttling back and forth between graduate school visits. It's been somewhat exhausting, but the entire process has been a remarkable journey. Thanks to all of the profs, admins, current graduate students and fellow members of my cohort for making these visits so phenomenal.

So, the decision deadline has passed, and this fall I will be joining the PhD program in Government at Harvard University.

While the end of the grad school application process should free up more of my time, it also coincides rather well with finals season, so updates will continue to be rather sporadic (maybe I can learn to write shorter posts :) ). I'm looking forward to having some time in May to write a bit more frequently!

Tuesday, April 3, 2012

Not feeling the 'physics envy'

Kevin Clarke and David Primo have an op-ed in the New York Times that critiques the dominance of what they call "hypothetico-deductivism" in political science - the idea that in order to study politics "scientifically," one must follow a specific method:
This might seem like a worthy aspiration. Many social scientists contend that science has a method, and if you want to be scientific, you should adopt it. The method requires you to devise a theoretical model, deduce a testable hypothesis from the model and then test the hypothesis against the world. If the hypothesis is confirmed, the theoretical model holds; if the hypothesis is not confirmed, the theoretical model does not hold. If your discipline does not operate by this method — known as hypothetico-deductivism — then in the minds of many, it’s not scientific.
Such reasoning dominates the social sciences today. Over the last decade, the National Science Foundation has spent many millions of dollars supporting an initiative called Empirical Implications of Theoretical Models, which espouses the importance of hypothetico-deductivism in political science research. For a time, The American Journal of Political Science explicitly refused to review theoretical models that weren’t tested. In some of our own published work, we have invoked the language of model testing, yielding to the pressure of this way of thinking.

The NYT piece is a summary of the argument that they make more extensively in their book A Model Discipline. I have yet to read the book, so I can't speak to any differences/nuances developed there that don't necessarily come out in the op-ed. As far as I can tell, Primo and Clarke's main point is that political science has become too much of a methodological monoculture and that we should not be opposed to engaging in theoretical work that is not necessarily empirically testable or empirical work that doesn't aim to "test" a pre-determined theory.

I entirely agree with the call for diversity in methods - the question should guide the choice of tool and not the other way around. There's plenty of great theoretical work that is impossible to test systematically, but nevertheless useful. Moreover, empirical research that isn't guided by any particular theory can still generate interesting questions and find surprising relationships between variables. The rise of "big data" makes the search for these sorts of correlations even more relevant since there is so much information that has yet to even be examined by political scientists.

But I don't get the sense that the next generation of political scientists is necessarily being taught that "hypothetico-deductivism" is the way to do research. Sure, maybe journals have underlying biases, but I don't think that bias is unique to methods - top-tier journals have a reputation to protect and will by necessity be risk-averse in publishing anything "new." As far as training goes, from talking to professors during graduate student visits, I certainly did not get the sense that all theoretical models must have empirically testable implications and that all empirical research should be backed up by theoretical models (however haphazard). I actually brought up the topic of empirical testing in a few of my conversations with some of the formal theorists and got more or less the same response as what Primo and Clarke seem to be arguing. Maybe I'm underinformed since I'm only an incoming graduate student, but if there's significant pressure towards "hypothetico-deductivism," I'm definitely not picking up on it.

The last thing that slightly irked me about the article, and this is likely more the fault of the New York Times op-ed board trying to make an editorial on the philosophy of science as applied to poli-sci appealing to the general public, is the subtle invocation of the old trope that "social science" isn't a "hard" science like physics or chemistry and social scientists shouldn't bother trying to emulate the "real" sciences. Often this tends to be accompanied by lazy bromides about how human behavior is "inherently unpredictable" and that it's impossible to predict the really important events in political history. However, this doesn't seem to be what Primo and Clarke are arguing at all (which is why I'm puzzled by their use of the phrase 'physics envy' - as Erik Voeten pointed out, even physicists don't subscribe to hypothetico-deductivism as the only way to do physics). Rather than dismiss rigor in political science, they're calling for more of it - for more creative and insightful methodological approaches to poli-sci questions. In this sense, I think they're presenting an argument similar to the one Dan Nexon made last week on the "overprofessionalization" of academia. Certainly Primo and Clarke are not calling for total method-free "thinking" about politics - that's what political analysis and the NYT/WaPo op-ed pages are for.

If there's a square peg and round hole problem, it's definitely not between social science and "science."

Thursday, March 22, 2012

Electoral Fraud and the Russian Presidential Election - Part 2

In the previous post, I examined some of the more basic graphical indicators of electoral fraud in the Russian presidential election.

How else can election data be analyzed for evidence of fraud? One of the most common approaches is to study the distribution of a particular digit in the results. The Guardian posted a brief article that evaluated the first digits of electoral returns using Benford's Law, which posits that numbers arising from certain natural processes will have leading digits that are distributed logarithmically (1 is more common as a leading digit than 9). While the election results for Putin don't appear to conform to Benford's law, it is unlikely that the law is a relevant metric of voter fraud. Since precinct or region-level returns do not encompass enough orders of magnitude, the method is a poor indicator. However, Walter Mebane has done extensive work applying Benford's law to the distributions of the second digit in electoral data and this method may be more fruitful in detecting fraud in Russia.
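To give a flavor of what the second-digit version involves, here is a short R sketch (an illustration of the idea, not Mebane's own code; `votes` is a hypothetical vector of precinct-level vote counts):

```r
# Expected second-digit frequencies under Benford's law: for digit d,
# P(d) = sum over first digits d1 = 1..9 of log10(1 + 1 / (10 * d1 + d)).
benford2 <- sapply(0:9, function(d) sum(log10(1 + 1 / (10 * (1:9) + d))))

second_digit <- function(x) (x %/% 10^(floor(log10(x)) - 1)) %% 10
obs <- table(factor(second_digit(votes[votes >= 10]), levels = 0:9))
chisq.test(obs, p = benford2)  # compare observed digit counts to Benford
```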

Conversely, one can examine the last digits of the election returns. Bernd Beber and Alexandra Scacco used such an approach to reveal likely electoral malfeasance in Nigeria and in the 2009 Iranian presidential election. They posit that, in a "clean" election, the final digit of the raw vote or turnout counts at the precinct level should be uniformly distributed. Since a single vote is inconsequential in deciding an electoral outcome, the last digit is essentially an error term (the full, more complicated proof is in the first link above). However, if electoral results are tampered with and the result sheets are filled in arbitrarily, the distribution of last digits may deviate from uniformity. This is because humans tend to be terrible at generating truly random sequences of numbers. Beber and Scacco cite a number of studies that suggest cognitive biases toward smaller over larger numbers, avoidance of repetitive sequences (like 333), and preference for adjacent numbers. Comparing the results of the Swedish parliamentary elections to electoral data from Nigeria's Plateau state, they find strong uniformity in the former and significant deviations in the latter.

I applied Beber and Scacco's method to election returns from Russia, looking specifically at the last-digit distribution of the reported numbers of registered voters at the precinct and district levels. If election officials are not outright fabricating candidates' vote totals, but votes are instead being inflated via ballot-stuffing, then by necessity, registered voter counts would still need to be altered slightly in order to accommodate these "artificial" ballots. In order to avoid impossible and embarrassing reports of greater than 100% turnout, some fudging of the numbers might be needed.

A quick aside on terminology/method. The Russian election commission reports results aggregated at three levels - the republic/province level (equivalent to states), the "sub-republic" level (essentially, city/county subdivisions in each province) and the precinct level (with data from each local polling center or uchastkovaya izbiratel'naya komissia (UIK)). I use the "sub-republic" data for the Russia-wide test and precinct-level data in testing individual provinces. I also exclude any figure with fewer than three digits to ensure that the last digit is sufficiently "irrelevant".
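A bare-bones version of the test itself might look like the following in R (Bernd Beber's published code, which I used for the actual analysis, does considerably more; `registered` is a hypothetical vector of registered-voter counts):

```r
# Last-digit test: keep counts of three or more digits, take the final
# digit, and test the digit distribution against a uniform benchmark.
registered <- registered[registered >= 100]
last <- registered %% 10
obs  <- table(factor(last, levels = 0:9))
chisq.test(obs, p = rep(0.1, 10))

# Pointwise 95% bounds for each digit's proportion, as in the graphs below:
n <- length(last)
bounds <- 0.1 + c(-1, 1) * 1.96 * sqrt(0.1 * 0.9 / n)
```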

First, at the national level, there is some evidence that the last digits for registered voter counts do not follow a uniform distribution. The graph below shows that the data contain significantly more 2s than expected (outside the 95% confidence bound). Additionally, a chi-squared test returns a p-value of .029, suggesting statistically significant (at alpha = .05) deviation from uniformity.



Is there variation across regions? Anecdotal evidence suggests so. The most egregious reports of fraud tend to come from "peripheral" regions, particularly Chechnya and Dagestan, which consistently report absurd levels of support for Putin/Medvedev and United Russia. Reports from Moscow and St. Petersburg (the centers of the protest movement) tend to be more subdued. Indeed, Moscow City was the only region where Putin obtained merely a plurality of the vote rather than a majority.

Conducting the last-digit test on registered voting data from each polling-place in these four regions seems to confirm that fraud levels vary significantly within Russia. The graphs below suggest that neither Moscow nor St. Petersburg show any significant deviation from uniformity. Chi-squared tests for both regions are also not statistically significant.


Chechnya, where Putin received 99% of the total vote, and Dagestan, where Putin's numbers were slightly lower (93%), tell a different story. Both show dramatic deviation from uniformity with a tendency to emphasize lower numbers, particularly zero and five. Chi-squared tests for both are also significant at the 1% level.



Obviously this is very cursory analysis, but it does suggest that the last-digit method is a pretty good tool for finding hints of fraud in raw election returns. Any thoughts?

Thanks to Bernd Beber for making available the R code for running the last-digit tests and generating the graphs.

Saturday, March 10, 2012

Electoral Fraud and the Russian Presidential Election - Part 1

To no one's surprise, Russian President Prime Minister President-elect Vladimir Putin won last week's election with a sizable (reported) 63.6% of the vote.

As with pretty much any Russian election over the past decade, evidence of electoral fraud has begun to surface. Reports of "carousel voting" (paid voters being shuttled by bus to vote at multiple polling stations), ballot stuffing, and an impossible figure of "107% turnout" in a Chechen precinct all suggest some degree of manipulation. Was this fraud systematic or idiosyncratic? In the wake of the 2011 Duma elections, many Russian bloggers used statistical analysis techniques to uncover strange patterns in the reported results for United Russia. Scott Gehlbach posted an English summary of some of these findings. Do the same observations hold for the presidential election?

I gathered the precinct-level data reported by the Russian Election Commission and looked at the three main "problematic" distributions. The first is vote shares across precincts:



The distribution is certainly less skewed than it was for United Russia, which can be attributed to the fact that Putin's genuine popularity, relative to his party's, decreased the need for much falsification. Nevertheless, one again sees the distribution suspiciously widen at the right end, and the existence of a significant number of precincts where essentially everyone voted for Putin is likewise odd, particularly given the anecdotes from regions like Chechnya. The non-normality of this distribution is not necessarily conclusive evidence of fraud, though it does illustrate a heavily skewed and non-competitive electoral system. More odd are the spikes in the precinct counts at what appear to be round numbers and simple fractions in the 60 to 80 percent range. Gehlbach notes a similar phenomenon in the results for United Russia, though it is certainly less pronounced here.

What of the distribution of turnout*? Gehlbach argues that this should be roughly normal "to the extent that voters are making idiosyncratic decisions about whether to vote rather than do something else." Yet again, one sees an upward-sloping tail on the right end and a huge spike at 100.



Grouping the turnout data into smaller intervals, one sees some spikiness at crucial benchmarks, though it is less pronounced compared to the results from December:


Finally, turnout and percentage vote for Putin are highly correlated:



As with United Russia's voting percentage, Putin's results are strongly associated with turnout. As a number of people have pointed out, this does not necessarily indicate fraud. For example, a strong GOTV campaign may mean that a candidate tends to get more votes as turnout increases. Yet as Gehlbach noted: "the magnitude of the relationship in Russia is such that United Russia is scooping up essentially all of the marginal votes over a certain level." In the case of Putin's results, the relationship is not as strong (again, owing to the fact that Putin's actual level of popularity is still relatively high). Nevertheless, the correlation at the upper levels of turnout is such that it's difficult to conclude that manipulation was insignificant.

Again, the patterns in the electoral results are suggestive of some fraud, but the degree appears to be lower than in December. This may be partly due to the decreased necessity of boosting Putin's results. Indeed, the curious decision to install web-cams at all polling places may be evidence that the Kremlin, knowing that the incumbent would win handily, wanted to keep overt reports of fraud to a minimum. As Josh Tucker commented, if the election was meant as a signal to the public that Putin remains popular, compromising footage of ballot stuffing and ham-handed manipulation would weaken the message. So while there is a disincentive to commit visible fraud, there is still a logic behind committing less clearly observable fraud (Andrew Little wrote a good post recently on this point). Thus the statistical evidence, combined with anecdotal reports from observers, strongly suggests that systematic cheating, while much less blatant than in the Duma elections, likely occurred.

Part 2 of this post will apply some more advanced statistical techniques to examine the variation in fraud levels across the different regions.


*I compute turnout as (Number of Valid Votes + Number of Invalid Votes)/(Number of Registered Voters). The Russian electoral commission site does not give a clear percentage figure of turnout.
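In code, the computation is simply (column names hypothetical):

```r
# Turnout in percent, from the raw counts reported by the commission:
turnout <- 100 * (valid_votes + invalid_votes) / registered_voters
```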

Monday, February 27, 2012

Could the ICC Help a Political Solution in Syria?

The Center for a New American Security recently published a policy brief by Marc Lynch on what non-military actions the United States could take with the aim of defending the Syrian opposition and pushing for the resolution of the Syrian conflict. The entire paper is well worth a read, but I was particularly struck by Lynch's last proposal to leverage the threat of ICC prosecution to push key figures in the Syrian government towards cooperating on a political transition:
The time has come to demand a clear choice from Syrian regime officials. They should be clearly warned that their names are about to be referred to the ICC on charges of war crimes. It should be made clear that failure to participate in the political transition process will lead to an institutionalized legal straightjacket that would make it impossible for them to return to the international community. This should be feasible, even without Security Council agreement. Top regime officials should be left with no doubt that the window is rapidly closing on their ability to defect from the regime and avoid international prosecution.

To date, Syrian officials have not been referred to the ICC, in order to keep alive the prospect of a negotiated transition. Asad must have an exit strategy, by this thinking, or else he will fight to the death. However, Asad has shown no signs of being willing to take a political deal, and in any case, his crimes are now so extensive that he cannot have a place in the new Syrian political order. He should be forced to make a clear choice: He can step down and agree to a political transition now, and still have an opportunity for exile, or he can face international justice and permanent isolation. He should also be forced to make this choice quickly. Beyond Asad himself, the threatened indictments should be targeted to incentivize for those not named to rapidly abandon Asad and his inner circle in order to maintain their own viable political future.
I think this is a fair point and matches up with a typical model of the decision-making logic that Bashar al-Assad is facing right now. Hypothetically, he can choose to accept or reject an offer of mediation and exile. Exile is certainly worth a lot less to Assad than simply suppressing the opposition, but it is probably worth more than a lifetime at The Hague were he to "lose" a civil war. Absent ICC prosecution, Assad could simply take his chances and fight, keeping open the exile option once defeat becomes inevitable. But with the threat of ICC charges, an easy flight to exile becomes more difficult as fewer countries are suitable "safe havens." Moreover, a negotiated transition minimizes the risk that Assad will face a Gaddafi-like end if the opposition were to gain the upper hand. Threatening ICC prosecution provides a clear incentive to negotiate instead of continuing to fight the opposition by eliminating (or at least complicating) Assad's "out" (exile) if fighting fails. Below is a quick diagram of the two possible "games" between Assad and the opposition (one with the ICC and one without). Assad can either choose to reject or accept the ultimatum. If he rejects, he either "wins" or "loses" in a confrontation with the opposition.



In the first case, assuming equal probability of winning and losing, Assad chooses to reject negotiation. In the second case, with the same assumption, Assad accepts.

However, there are some flaws with this basic model. Namely, it ignores the commitment effect that following through on the ICC referral would create. By eliminating the option to flee, it essentially commits Assad to fighting the opposition to the end if he chooses to reject the offer. Observing this "forced" commitment, the opposition is faced with a choice - either back down or continue to challenge Assad, knowing that his regime has been backed into a corner. If the opposition is sufficiently risk-averse and worried about losing in a protracted conflict, they would likely reduce their protest activity (preferring to concede now rather than fight a futile struggle).

You may notice that I'm again referencing James Hollyer and Peter Rosendorff's work on why dictators sign the Convention Against Torture (also known as the "badass theory of torture"). I think it applies rather well in this case. Hollyer and Rosendorff argue that dictators ratify the CAT in order to credibly signal that they will fight any challenge that could remove them from their position. Since they engage in torture, and since the CAT carries universal jurisdiction, ensuring that they will likely be prosecuted if they try to seek exile abroad, dictators give up the option of backing down and leaving power. Dictators show their willingness to repress at all costs, and, as a result, face fewer challenges from domestic opposition groups. ICC prosecution against Assad, if actually credible, could have a similar effect. The model below shows an expanded version of the previous game:



Here, I've added two more decisions interposed between Assad's acceptance or rejection of the ultimatum and the final outcome of the Assad/opposition conflict. After Assad rejects the ultimatum, the opposition has to choose between continuing to challenge the government (fight) and conceding (a rather simplified choice). If the opposition fights, Assad independently has the choice of whether to also fight or resign/flee. Essentially, this is an entry/deterrence game added onto the previous model. Assad would prefer to win by concession rather than by fighting. The opposition is likewise assumed to prefer conceding to losing a massive fight. Also, assume for now that the probabilities of winning and losing are equal.

What happens when the threat of ICC prosecution is brought in? It essentially forecloses "fleeing" as an option for Assad. As long as there is some risk of winning a challenge, Assad will fight. As a result, the opposition cannot count on getting Assad to back down if challenged (its preferred option) and instead must risk a fight. If the expected value of Assad losing is less than the value of backing down, the opposition will likely reduce its pressure on the regime, which in turn gives Assad a strong incentive to reject.

In the first model, given a 50-50 chance of winning, Assad will reject and flee. There is no scenario where he accepts, since there is a non-zero chance that the opposition could back down and still a good chance that he could win a challenge. It's also assumed that Assad would prefer to leave on his own terms rather than those negotiated for him.

In the second model, Assad could accept negotiation, but only if he has a high chance of losing were he to fight the opposition. Assuming that victory and defeat are equally likely (and that both actors have similar assessments of the likelihood of winning versus losing), the opposition in this model will back down (utility of 2 vs. an expected value of 1.5 from fighting). If, given the somewhat arbitrary utilities I've created, the probability that the opposition "wins" a "fight" is greater than 67%, Assad will take the offer.
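To make the arithmetic behind that threshold explicit, here is a toy check in R using utilities consistent with the numbers above (the win/lose payoffs of 3 and 0 are my assumptions, chosen to match the expected value of 1.5 at even odds):

```r
p <- 0.5                         # opposition's probability of winning a fight
ev_fight <- p * 3 + (1 - p) * 0  # expected value of fighting = 1.5
ev_fight > 2                     # FALSE: backing down (utility 2) is better
# The opposition fights only when 3 * p > 2, i.e. p > 2/3 (about 67%) -
# the point at which Assad prefers to accept the negotiated exit.
```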

Obviously these models are highly simplified and assume perfect knowledge of the other actors' capabilities/strategy. The utilities themselves, while somewhat reflective of the priorities of each actor, are arbitrary and likely poorly weighted. Nevertheless, the models do highlight an important caveat to Lynch's argument for threatening Assad's exit strategy. If the Syrian government thinks that it could still successfully repress the opposition were it to reject a negotiated settlement, then permanently closing off exit routes for government officials would limit options for further negotiation by pushing Assad towards a "survive-or-else" strategy. At best, this weakens the credibility of such a threat, since the United States may not follow through for fear of foreclosing future negotiations. At worst, it ensures prolonged bloodshed and violence if Assad chooses to reject negotiation.

I think Lynch is right that threatening to cut off the Syrian regime's "golden parachute" can be an effective way of forcing a settlement, but it is a tactic to be used wisely and at a time when Assad would be most amenable to accepting. Moreover, it's unlikely that ICC prosecution will be a significant enough threat, since there are plenty of non-ratifiers that could offer Assad and his coterie safe haven (for example, Walter Russell Mead suggested Russia's Black Sea coast as the destination for a possible "getaway" vacation).

But most importantly, such a policy must complement other non-military efforts to tip the balance of domestic and international opinion against Assad. The other strategies that Lynch outlines are crucial to making it less likely that, if faced with an ultimatum, Assad will choose to accept the costs of rejection and opt to fight the opposition to the bitter end.

Monday, February 13, 2012

Tweets vs. Likes? An Analysis of the Monkey Cage

A while back, Joshua Tucker issued a challenge on The Monkey Cage:
Here at The Monkey Cage we allow people to “Tweet” posts to their Twitter followers, and “Like” posts to their Facebook friends. Lately I’ve noticed that some posts get more tweets than likes, some get more likes than tweets, and others get roughly the same amount. Anyone have any idea why?
Challenge accepted.

I was actually surprised to find that this question has already been looked at by other data science bloggers. A quick Google search for "Tweets vs. Likes" led me to Edwin Chen's blog, where he posed the exact same question as Joshua did:
It always strikes me as curious that some posts get a lot of love on Twitter, while others get many more shares on Facebook:

What accounts for this difference? Some of it is surely site-dependent: maybe one blogger has a Facebook page but not a Twitter account, while another has these roles reversed. But even on sites maintained by a single author, tweet-to-likes ratios can vary widely from post to post.
He analyzes the data from a few tech-related blogs, comparing the tweet-to-like ratio for each post to various post attributes and finds that:
tl;dr Twitter is still for the techies: articles where the number of tweets greatly outnumber FB likes tend to revolve around software companies and programming. Facebook, on the other hand, appeals to everyone else: yeah, to the masses, and to non-software technical folks in general as well.
This nerd/normal divide corresponds surprisingly well to Joshua's initial set of hypotheses.
Humor vs. wonkishness hypothesis: The funnier a post, the more likely it is to go on Facebook; the wonkier the post, the more likely it is to get tweeted.

The graphics hypothesis: The more graphics, the more likely it is to go to Facebook. The more text, the more likely it is to be tweeted.

The source of visitors hypothesis: Visitors outside academia are more likely to post to Facebook; academics who read blogs are more likely to tweet.
Is this really the case? To obtain the actual data, I wrote a quick screen-scraping script and went through all posts from this February back to about May of last year. Before that point, no likes or tweets appear to be recorded for most of the posts. In total, I scraped around 860 posts, 492 of which had both tweets and likes.

I use a modified version of Edwin Chen's tweet-to-like ratio as the dependent variable. In order to avoid dividing by zero since many posts have only tweets and no likes, I add 1 to the quantity of both tweets and likes for a given post. I then take the base-10 log of the modified tweet/like ratio to linearize the dependent variable for regression analysis. For brevity, let's call this measure the "tweet rating" - positive values indicate more tweets than likes while negative values indicate more likes than tweets.
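Concretely (with `posts` as a hypothetical data frame of the scraped posts):

```r
# "Tweet rating": base-10 log of the smoothed tweet-to-like ratio.
posts$tweet_rating <- log10((posts$tweets + 1) / (posts$likes + 1))
```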

Since the third of Joshua's hypotheses is untestable with the data that I could obtain, I'll focus on the first two. Graphics and length are directly measurable. I use a dummy variable indicating whether or not a post includes a graphic (i.e. img tags) and another indicating whether a post has an embedded video. For length, I use only a basic word count measure. Since this may not capture the "complexity" of a post well, I also include the Flesch-Kincaid grade level (a rather rough measure, but the best quantitative one that I could come up with quickly).

Wonkiness vs. Humor is a bit harder to capture. While it would be interesting to do a full analysis of each post to determine the sentiment (using something like Sentiwordnet and a natural language processor), I simply don't have the time. As a proxy, I use post categories. A lot of the categories are rather neutral but a few stand out as relevant in the nerd/normal framework. Frivolity and especially the Ted McCagg Cartoons are definitely more humor-oriented. Conversely, I found the "Data," "Academia," "Methodology," and "IT and Politics" categories more "wonky" than the rest. Each is coded as a 0-1 dummy variable.
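Putting the pieces together, the regression I have in mind looks something like the sketch below (variable names are placeholders for the measures just described):

```r
# OLS of the tweet rating on the graphics, length, and category dummies.
fit <- lm(tweet_rating ~ has_image + has_video + log(word_count) + fk_grade +
            frivolity + mccagg_cartoons + data_cat + academia +
            methodology + it_politics,
          data = posts)
summary(fit)
```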

Wednesday, February 8, 2012

Russian Politics Part 2: The UNSC Veto

In the wake of the failure of the UN Security Council resolution condemning the Syrian government, analysts have offered up a variety of reasons behind the Russian and Chinese decisions to veto. Erik Voeten posted a succinct summary two days ago and followed up yesterday with an extended rebuttal to the Libya "precedent" made by a number of writers and bloggers. I agree that perceived NATO overreach in Libya is a very weak story for why Russia and China vetoed. The Russian government's suggestion that it was "duped" by the text of the UNSC's resolution on Libya is highly specious given its generally realpolitik foreign policy. Erik elaborates on this further in the post. I would add that Russia's behavior with regards to arms control - the fact that it has consistently sought binding over non-binding nuclear reductions agreements (START vs. SORT) - also shows that it certainly does not attach much importance to unenforceable declarations.

The rest of the blogosphere has more or less finished dissecting both countries' general motivations for opposing both the resolution and any substantive action against Assad. However, I am more interested in the possible domestic dimension behind the veto, an explanation that has been thrown around but not developed much. I know less about Chinese internal politics, so I'll focus on Russia.

There have been a few explanations offered for why the Russian government had some sort of domestic political interest in vetoing the resolution and all are general variations on a theme: Putin and his cohort are facing increasing pressures on their political survival and the Syria veto sends a signal that benefits them vis-a-vis opposition forces.

Walter Russell Mead suggests that the veto is meant to boost Putin's reputation as a hardliner, look good domestically, and co-opt the increasingly agitated ultra-nationalists:
First point: domestic politics. Putin is running for reelection, and although the clueless MSM (the ones who thought the Egyptian revolution was all about liberals and tweeting) instinctively sees the issue as a contest between Putin and liberals, the opposition that worries him is on the right. They are ultra-nationalists and fascists steeped in crazy-think conspiracy theories and full of fear and hate.
I mentioned two weeks ago that Putin is likely to continue ratcheting up his confrontational rhetoric in the run-up to the Presidential elections (with the caveat that I was also skeptical as to how effective it would be). Certainly there's a logic behind the "tough talk," but I think Mead is conflating the rhetoric with the policy here. Putin doesn't need the veto in order to bash the West. As in the Libya case, Russia could have abstained while continuing to voice its objections to interventionism. Any reputational costs from failing to stand up to a "NATO intervention" would have been negligible, as it's clear that the likelihood that Western powers will independently use force against Syria is very low (compared to Libya). Certainly the veto helps the spin, but it is in no way essential given the Kremlin's vast media resources.

More importantly though, Mead also overstates the extent to which those nationalists dissatisfied with Putin care about the government's foreign policy. As Igor Torbakov noted in a post on EurasiaNet, the nationalists that would be attracted to such shows of strength are already squarely in the government's camp. The ones that pose a threat to Putin are the cultural nationalists who are already staunch anti-statists.
Nationalism in Russia has undergone a dramatic shift lately, one that Putin, apparently, has been slow to catch on to. Two competing strains of nationalism have always existed in the country – one that can be described as imperial, or statist nationalism, the other ethno-cultural. The first worshipped the state, its power and international prestige; the second glorified the nation, its culture and faith. Throughout Russian history, statists have tended to hold a pragmatic view of nationalism, seeing it mostly as an instrument to strengthen state institutions and bolster the authority of the ruling class. As such, statists have traditionally favored territorial expansion, followed by efforts to assimilate minority groups. 
Radical ethnic nationalists, on the other hand, see no place for non-Russians in the state. This strain of nationalism, naturally, has caused particular problems for imperialists, whether they have been Russian tsars, Soviet commissars or Putinists advocating “managed democracy” and relying on energy policy to expand their influence in the near abroad.
In recent years, economic hardship has boosted the popularity of ethnic nationalism at the expense of the imperial variety. This trend is underscored by the growing popularity of the slogan “Russia for the Russians.” Putin, who clearly aligns himself with the imperial school, has been reluctant to acknowledge this trend. Instead, he has tended to oversimplify the rise of ethnic nationalists, casting them as trouble-makers whose ideas could encourage the disintegration of the Russian Federation.
In essence, the Syria veto would only "boost" Putin's popularity among those Russians who already accept his tough foreign policy credentials and who are, therefore, already likely supporters. When it comes to getting the nationalist vote, Putin has either already done enough (in the case of the "statist" nationalists) or there is nothing he can do (in the case of the ethno-cultural nationalists).

So I'm unconvinced by the popularity argument, since it seems to explain only the rhetoric and not the policy. But what if the goal is not to gain popularity, but simply to show strength? Facing an increasingly vocal protest movement, Putin may be seeking to dissuade additional protesters by signalling his staunch commitment to staying in power, whatever the costs. Assisting a leader who is actively repressing protesters may be a veiled threat to the Russian opposition that similar actions will be taken by the government if their protests go too far. Michael Weiss and Julia Pettengill made this argument last week in Foreign Policy:
These demonstrations, coupled with the general weariness at the decline of living standards and increasing state corruption, have raised the possibility that Putin may not secure a majority in the first round of voting, a contingency he has acknowledged as possible -- though it would no doubt be politically disastrous for him and his ruling United Russia party. As a consequence, Putin is attempting to shore up his reputation as an unyielding strongman abroad to detract from the increasing perception of weakness at home.

Putin has not had a significant foreign policy standoff since the 2008 Russian-Georgian War, which was billed as an effort to reclaim Russia's "near abroad" from creeping Western and NATO influence. He opposed, but did not veto, the Security Council's authorization of a NATO-imposed no-fly zone in Libya last year. He now appears to be compensating for that acquiescence by backing a friendly tyrant and showing a wobbly electorate that Russia won't be pushed around by American and European democracy-promoters.
On its face this makes sense, and it may indeed be part of the Putin government's reasoning behind vetoing the UN resolution and backing Assad. However, it too is a poor explanation, for the simple reason that supporting a foreign government does very little to make the Kremlin's own threats against protesters more credible. At best it's just more costless talk - it doesn't actually make it more difficult for Putin to concede to protest demands.

James Hollyer and Peter Rosendorff's research on why autocracies that practice torture ratify the Convention Against Torture (CAT) is particularly relevant here. Their theory is that dictators want to convince domestic opposition movements that they will not resign from power without a fight, thereby decreasing the expected benefit of protesting and lessening the number of protests. The "badass" theory of torture (to use James Vreeland's phrase) suggests that autocrats use foreign policy tools to constrain themselves in order to make their threats more credible. They can certainly talk tough, but protesters have no reason to believe them, so leaders must make an actual commitment. This is where the CAT comes in. By signing the CAT and then proceeding to torture, leaders signal that they have no easy escape route once faced with protests. They can't resign and flee to an Italian villa, because the CAT's principle of universal jurisdiction ensures that they will face prosecution upon losing power. The goal is to show protesters that there is no conceivable way they can win short of a massive, protracted fight, thus deterring protests from developing in the first place.
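The commitment logic is easier to see with toy numbers. Below is a minimal sketch - the payoffs and probabilities are my own illustrative assumptions, not figures from Hollyer and Rosendorff - showing how raising the leader's cost of exit flips the protesters' calculus:

```python
# Illustrative sketch of the commitment logic. All payoffs are made-up
# numbers chosen only to demonstrate the mechanism.

def leader_fights(exit_cost, fight_cost):
    """A leader facing protests fights iff conceding (exiting) is costlier."""
    return exit_cost > fight_cost

def protest_ev(p_win_if_fight, fights, win=1.0, repression=-0.8):
    """Protesters' expected value of protesting (staying quiet pays 0)."""
    if not fights:
        return win  # leader concedes, protest succeeds outright
    return p_win_if_fight * win + (1 - p_win_if_fight) * repression

FIGHT_COST = 0.5
P_WIN = 0.2  # protesters rarely win a protracted fight

for signed_cat, exit_cost in [(False, 0.3), (True, 2.0)]:
    # Ratifying the CAT and then torturing removes the "Italian villa"
    # option: universal jurisdiction makes losing power very costly.
    fights = leader_fights(exit_cost, FIGHT_COST)
    ev = protest_ev(P_WIN, fights)
    print(f"CAT signed={signed_cat}: leader fights={fights}, protest EV={ev:+.2f}")
```

With the exit option open, the leader concedes and protesting pays off; once the CAT closes that option, the leader fights, the expected value of protesting turns negative, and protesters prefer to stay quiet.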

So does Putin look more like a "badass" after the veto? Not really. Maybe supporting Assad creates some reputational costs for backing down, but that's true of all of the aggressive rhetoric being issued from the Kremlin. Putin is certainly committed to supporting Assad, especially given the extensive economic and political ties between Russia and Syria, but as far as Russian protesters are concerned, this does not increase the actual likelihood that Putin will crack down on them. The government is no more committed to repression now than it would be had it not backed Assad. There is no new credible threat.

This is not to discount the potential for costless threats to be meaningful. Backing Assad certainly makes Putin look like a tougher leader, even if there's no substantive reason to believe that he is. Polling done in December by the Levada Center suggests that the message may be working: 43% of respondents think that the government will do everything in its power to avoid a recount of the Duma elections, while only 17% believe the government will accede to the protesters' demands. Moreover, 43% of respondents think that protesters should back down if the government turns to tougher measures to crush the protests, while only 16% think that the protests should continue. It's also important to note the large number of respondents who are "unsure" - 40 and 41 percent in the two polls, respectively. While it's unclear how these individuals would answer if pressed further, the early polling does show that a significant number of Russians think the government is likely to increase its repressive measures against protesters and that protesters should then back down - precisely the deterrent effect the government wants. However, it's obviously impossible to determine whether Syria factored into the public's reasoning, and given how little attention Russians paid to Libya, it may be safe to say its influence was rather small.

Ultimately, I am skeptical that domestic politics played a key role in the decision to back Syria. Or, more specifically, there may be domestic incentives to support Assad, but increasing Putin's popularity and showing strength to the opposition are relatively minor ones. The real "domestic politics" explanation for Russia's backing of the Syrian government likely has more to do with the commercial interests of those with strong ties to the Kremlin. Syria is one of the last remaining dedicated clients of the Russian arms export industry. Moreover, Russian companies have concluded an extensive series of oil and gas contracts with Syrian state energy enterprises. For example, Stroytransgaz, which recently renegotiated its contract with the Syrian Gas Company, is owned by Gennady Timchenko, one of the key "new oligarchs" connected to Putin. Vetoing the UNSC resolution may simply be good backroom politics, but I'm doubtful that Putin's Syria strategy will have much effect on the ongoing opposition protests.

Wednesday, February 1, 2012

Cheap Talk, Real Deterrence?

Dustin Tingley and Barbara Walter have an interesting article in the latest issue of the Journal of Conflict Resolution on using controlled experiments to analyze the role of "cheap talk" in conflict deterrence situations. Here's the abstract:
What effect does cheap talk have on behavior in an entry-deterrence game? We shed light on this question using incentivized laboratory experiments of the strategic interaction between defenders and potential entrants. Our results suggest that cheap talk can have a substantial impact on the behavior of both the target and the speaker. By sending costless threats to potential entrants, defenders are able to deter opponents in early periods of play. Moreover, after issuing threats, defenders become more eager to fight. We offer a number of different explanations for this behavior. These results bring fresh evidence about the potential importance of costless verbal communication to the field of international relations.
The model itself posits a "defender" who is confronted by a series of potential "entrants," each of whom must choose whether or not to challenge the defender. The defender then decides whether to fight the entrant or concede. The balance of incentives for fighting vs. acquiescing is determined by whether the defender is a "strong" or a "weak" type: "strong" defenders prefer to fight a challenge, while "weak" ones benefit more from accepting it. The type is randomly assigned and unknown to the entrants. The defender's actions reveal the type to entrants in successive rounds (entrants know how the defender reacted in previous rounds when other entrants chose to challenge), but in early rounds the type is largely unknown. A weak defender may therefore have an incentive to fight early in order to signal a "strong" type to future entrants and deter their challenges.

Tingley and Walter add to the game the possibility of "cheap talk" - private communication between the defender and entrant. Before the entrant makes a decision, the defender can send a costless message stating that it will either fight or not fight if challenged. Both versions of the game (talk vs. no talk) were played out by a group of test subjects - conveniently available undergraduates.
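To make the setup concrete, here is a stripped-down simulation of the game. Everything in it - the round count, the prior on types, and the behavioral rules for deterred entrants and bluffing defenders - is my own simplifying assumption meant to mirror the structure described above, not the design or observed strategies of the actual experiments:

```python
import random

# Toy entry-deterrence game with cheap talk, in the spirit of
# Tingley & Walter. All parameters and behavioral rules here are
# illustrative assumptions, not the experimental design.

ROUNDS, P_STRONG, TRIALS = 6, 0.5, 10_000

def run_trial():
    strong = random.random() < P_STRONG
    history = []                     # public record of defender reactions
    entries = fights = 0
    for t in range(ROUNDS):
        threat = True                # costless, so every type sends one
        # Behavioral entrant: with a clean record, an early threat deters
        # some of the time (the experimental finding); once a concession
        # is on record, threats are ignored.
        clean_record = "conceded" not in history
        if clean_record and threat and t < 2 and random.random() < 0.4:
            continue                 # entrant deterred, no challenge
        entries += 1
        if strong or t < 2:          # weak types bluff-fight early to
            history.append("fought") # build a "strong" reputation
            fights += 1
        else:
            history.append("conceded")
    return entries, fights

random.seed(1)
results = [run_trial() for _ in range(TRIALS)]
print("avg challenges per game:", sum(e for e, _ in results) / TRIALS)
print("avg fights per game:    ", sum(f for _, f in results) / TRIALS)
```

Running the same simulation with `threat = False` (the no-talk treatment) removes the early-round deterrence branch, which is exactly the comparison the experiments are after.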

In theory, communication should not alter the game: preferences remain static, and since all types of defenders have an incentive to bluff, talk by itself reveals nothing about whether the defender is strong or weak. Nevertheless, Tingley and Walter find that talk has a deterrent effect in early rounds. That is, when entrants lack reliable information about the defender's type, they appear to be dissuaded by costless threats alone. The effect disappears in later rounds as entrants accumulate information about the defender. Surprisingly, the results also suggest that talk has a slight deterrent effect even after a defender has revealed weakness by declining to fight a challenge. Even more interesting is the finding that, in early rounds, weak defenders who had issued a threat and had not previously backed down were more likely to follow through with the threat if challenged - unexpected, given that failing to follow through imposes no costs.

The authors suggest a signalling explanation. In the game, participants are unsure whether their opponents understand how to play. Cheap talk can be a way of confirming that a player does indeed know how the game works, thereby "deterring" opponents from exploiting information asymmetries. Early threats are expected, since both sides know the defender wants to signal strength and will likely follow through in early rounds to deter future challengers. A defender who does not threaten is signalling less competence and therefore inviting a possible challenge. This explains both why entrants pay attention to threats (they reveal something about the defender's game-playing ability) and why defenders follow through (both actions are associated with competence, since more capable weak defenders understand that fighting early and taking a loss can pay off as a signal to future challengers). Essentially, it's not that cheap talk itself deters, but rather that its absence may suggest a defender who does not fully understand the game.
Given that a threat is costless, a defender who threatens early in the game is playing exactly as one would expect him or her to play. Likewise, a player who does not issue a threat may be indicating that he or she does not fully understand the game. Thus, sending a threat or not sending a threat signals to the challenger something about the sophistication of the defender. (1010).
The model provides an explanation for the prevalence of "cheap talk" among international actors. Costless threats are assumed to exist and are thus conspicuous when absent. Even in inter-state disputes, where actors are likely to be well-informed about each other (via intelligence gathering), cheap threats remain ubiquitous. For example, Iran's threat to shut down the Strait of Hormuz in response to sanctions is expected even if the action itself would be irrational. Although following through would be too costly, signalling the option shows that the Iranian leadership is playing the deterrence game rationally - that is, it will try to show strength for as long as possible.

However, I think the Tingley and Walter model is most applicable in situations where there is a true "low-information environment." Protest movements against autocrats are one such example. Why do dictators like to "talk tough"? The model suggests that they must do so in order to avoid showing incompetence. Let's assume that autocratic leaders want to deter protesters from protesting. Protesters, in turn, must decide whether to challenge the regime or stay quiet, and this choice depends on how intensely they think the leader will fight them. "Strong" type leaders will choose to fight while weak ones will "acquiesce." However, the information asymmetry in such authoritarian states is highly pronounced - protesters find it difficult to determine whether their leader is a strong or a weak type (because of media control, lack of political networks, etc.) and discover this only through the government's reaction to protests. Likewise, both sides are unsure about how well the other will play the "game" - there is a potential difference in expertise. Therefore, leaders may send signals of toughness simply because they are expected to do everything they can to say "I will fight" in a costless fashion. Failing to do so reveals an imperfect player (a weak leader who doesn't even try to hide it) and suggests the possibility that the leader may back down when faced with a protest.

The first related example that came to mind was the post-election protests in Russia. In a blog post at the Monkey Cage, Andrew Little hinted at a similar logic behind the persistence of election fraud. If everyone knows the results are falsified, why bother falsifying? He argues that because fraud is expected, a lack of fraud can be a signal of weakness to protesters. Likewise, the bluster coming from the Kremlin (which I discussed last week) is certainly just talk, but it is also expected - it shows that the Russian government knows it must appear to be a "strong" type rather than a "weak" one.

At the more extreme end, the model might explain the persistence of the North Korean personality cult and the speed with which elites tried to build up the image of Kim Jong Un. If leaders want to deter challengers and show strength, then even if no one believes the propaganda, it still has a purpose: it confirms that the leadership will continue acting in a manner that projects a strong type even if the regime is itself weak. It also might explain the increasing absurdity of North Korean propaganda efforts. If Kim Jong Il's personality cult was at a "10," then Kim Jong Un must crank it up to "11" or risk revealing weakness.

Certainly all of these cases are imperfect applications of the model. Indeed, in the real world it is difficult to isolate the effects of "costless" talk from other phenomena that may be unobservable. Talk in the aforementioned scenarios is also more public than private, and therefore not entirely costless. Despite these and other differences between the model and its applications, I think Tingley and Walter make an interesting point in providing a rational basis for ostensibly meaningless actions. Even if no one believes cheap talk, everyone expects it to exist simply because it carries no consequences. Refraining from cheap talk therefore suggests an actor that is not playing the game as efficiently as it could be - a particularly meaningful revelation.

I was also intrigued by the method Tingley and Walter used, in light of Brad Smith's recent post on the dearth of controlled experiments in IR research. My sense is that although it is difficult to jump directly from undergrads to states, lab experiments can be a good first step in testing a new and possibly counterintuitive model. Here an experimental approach was effective because it is almost impossible to isolate the effects of communication in an observational study. Finding the "ideal" case of entirely private and entirely costless talk in the "wild" is a difficult task, especially when there is no clear starting point. The experimental approach is nowhere near conclusive, but it does help define the initial parameters for an observational study. To use the Rumsfeldian theory of knowledge, there are a lot of "unknown unknowns" in IR - things we cannot even begin to test observationally because we do not know where to start. Lab experiments can help convert those into "known unknowns": we can isolate an effect in a controlled study, so we "know" it might exist, even if we cannot yet find a good example in real-world observation. From there it is a matter of finding cases that roughly fit the ideal type and varying some of the initial constraints (public vs. private talk, zero cost vs. near-zero cost) to figure out whether the model remains applicable. Truly controlled experiments are certainly fun and awesome, but my sense is that they are only the initial step toward more rigorous real-world testing.

Tuesday, January 24, 2012

The Domestic Crisis Politics of Russian Foreign Policy Rhetoric

NYT reports that the Russian media's reaction to the arrival of Michael McFaul, President Obama's new ambassador, has been remarkably virulent.
In the annals of American diplomacy, few honeymoons have been shorter than the one granted to Michael A. McFaul, who arrived in Russia on Jan. 14 as the new American ambassador.
It was toward the end of his second full day on the job when a commentator on state-controlled Channel 1 suggested during a prime-time newscast that Mr. McFaul was sent to Moscow to foment revolution. A columnist for the newspaper Izvestia chimed in the next day, saying his appointment marked a return to the 18th century, when “an ambassador’s participation in intrigues and court conspiracies was ordinary business.”
This is only the most recent of the vocal attacks on U.S. "interventionism" that have been coming from the Kremlin over the past year, and particularly in the months leading up to the Duma and Presidential elections. PM Putin's suggestion that Secretary of State Hillary Clinton was responsible for the December protests, President Medvedev's threat to move Iskander missiles to Kaliningrad in response to US missile defense plans, and the Kremlin's persistent criticism of perceived NATO overreach in the Libya operation all illustrate a pattern of increased verbal hostility towards Washington.

The official and semi-official rhetoric appears to be outpacing both reality and actual Russian policy. Certainly calling Ambassador McFaul "not a Russia expert" is stretching the bounds of language itself. But despite suggestions that the Kremlin's increasingly harsh tone signals the end of the "reset," the incentives for cooperation remain strong in key areas. Indeed, the Libya case reveals that the Russian government is perfectly capable of both rhetorically opposing and substantively accepting U.S. action. Undersecretary of State for Arms Control Ellen Tauscher is likewise sanguine about Russian missile defense rhetoric:
In a November speech, Russian President Dmitry Medvedev suggested talks had broken down and he threatened several retaliatory measures, including Russia's potential withdrawal from the New START nuclear reductions agreement. 
Tauscher responded that these statements were part of the Russian campaign season and that progress would speed up once the March Presidential elections in Russia had subsided. She also acknowledged that the Russians are demanding a legally binding document from the Obama administration promising U.S. missile defenses in Europe will not impact Russia's strategic deterrent, which Tauscher said they will never get.
So the recent surge in confrontational rhetoric appears to be primarily a reaction to domestic political developments. It is reasonable to expect that the Kremlin, facing flagging popularity and an increasingly vocal public opposition, will try to leverage the traditional bogeymen of NATO and, more broadly, Western interventionism as a means of consolidating support. However, I am skeptical that this will actually be effective.

What accounts for the rise and fall of public support for the Russian leadership over the past 10 years? The traditional "story" behind Vladimir Putin's surging popularity over the 2000s is that it was fueled by strong economic growth on the basis of high oil prices. This graph shows the correlation between Putin's monthly approval ratings (as gathered by the Levada Center) and the spot price for Brent oil lagged by 2 months (obtained from the Energy Information Administration) for the years 2000 to 2010.


Indeed, higher oil prices, which serve as a good proxy for Russian economic performance overall given how crucial the oil sector is to the economy, tend to be associated with higher approval levels (r = .5531). However, when you add 2011, the relationship begins to break down. The graph below adds in the ratings for 2011 - highlighted in red.


Despite the recovery in oil prices after the 2008 crash, Putin's approval rating has plunged below 70% (levels not seen since the months after the Ukrainian Orange Revolution). The link between oil and popularity has become much weaker (r = 0.2576).
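For the record, the r values here are plain Pearson correlations between the monthly approval series and the two-month-lagged oil series. A minimal pandas sketch, assuming a CSV of monthly observations with hypothetical column names:

```python
import pandas as pd

# Sketch of the lagged-correlation calculation. File and column names
# ("putin_oil.csv", "approval", "brent") are hypothetical stand-ins for
# the Levada approval series and the EIA Brent spot price.
df = pd.read_csv("putin_oil.csv", parse_dates=["month"]).set_index("month")
df["brent_lag2"] = df["brent"].shift(2)  # oil price leads approval by 2 months

pre = df.loc["2000":"2010"]
full = df.loc["2000":"2011"]
print("2000-2010 r:", pre["approval"].corr(pre["brent_lag2"]))
print("incl. 2011 r:", full["approval"].corr(full["brent_lag2"]))
```

The same calculation with a U.S. disapproval column in place of the lagged oil series gives the correlation discussed below.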

Why is this so? Consider a third factor - Russian attitudes towards the United States - a somewhat general proxy for pro-western/anti-western sentiment. The next graph shows the monthly U.S. disapproval rating (as provided by Levada Center) against Putin's monthly approval rating:


There does appear to be a relatively strong relationship between negative opinions of the U.S. and positive opinions of Putin (r = 0.5306). Moreover, the data from 2011 actually fit the trend - recent polls have indicated that Russians have a much more positive attitude toward the United States than in previous years. Does this mean that Putin's popularity stems more from his government's ability to frame the "West" as a threat and generate a "rally 'round the flag" effect than from Russia's economic growth? Probably not. The correlation may simply indicate that low levels of popularity make the public less willing to "buy" the government's foreign policy rhetoric (i.e. the relationship runs the other way). It may also suggest a third variable that correlates with both - something like "trust in government." If trust is relatively low now (as the protests may suggest), then boisterous foreign policy rhetoric is less likely to be taken seriously. Indeed, the decline in the popularity of television and the rise of the internet may be cutting into the credibility of the Kremlin's traditional anti-U.S. messaging strategy.

It may be that the two narratives behind Putin's popularity, the economic growth story and the "enemies abroad" story, are intertwined. The ability of the Putin/Medvedev government to benefit politically from economic growth rests on whether or not the public accepts the linkage between the economy and the government's actions - that is, that the government deserves credit for the improvement in living standards. This only happens when the public generally sees the government's messaging as credible. Public opinion of the United States may therefore be a proxy measure of how much the public believes the Kremlin's narrative generally, particularly since the growth and foreign policy messages are often mixed (Putin has tended to link Russia's economic resurgence to its "sovereign democracy" and its "regained" influence and independence on the international stage). In this case, there may be an interaction between the two variables - high levels of growth (oil prices) translate into higher levels of support for Putin if U.S. disapproval is also high.

The table below gives the results of a series of linear regressions with approval rating as the dependent variable:

Independent Variable              (1) no 2011    (2) incl. 2011   (3) incl. 2011   (4) incl. 2011
Lagged DV                         0.6064***      0.7399***        0.7346***        0.7240***
                                  (8.83)         (12.55)          (9.40)           (9.35)
Brent Oil Price (2-mo. lag)       0.0519***      0.0142           0.0024           -0.0678
                                  (3.51)         (1.25)           (0.16)           (-1.55)
U.S. Disapproval                                                  0.1297**         -0.0080
                                                                  (2.35)           (-0.08)
Interaction (Oil x Disapproval)                                                    0.0020*
                                                                                   (1.71)
Constant                          27.3272***     18.8144***       15.2262***       20.7430***
                                  (5.59)         (4.32)           (2.85)           (3.35)
Adjusted R^2                      0.5680         0.5639           0.6640           0.6727
Observations                      126            137              76               76


t-values in parentheses. * = significant at the 90% level, ** = 95% level, *** = 99% level

I included a lagged dependent variable in each regression to account for autocorrelation. The first three regressions generally confirm the argument made with the graphs above: approval correlates with both oil prices and U.S. disapproval over 2000-2010, but U.S. disapproval is the better predictor once 2011 is included. Regression four is the interesting one, however. The interaction effect is positive and significant at the 90% level, giving some limited support to the hypothesis above.
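For anyone who wants to poke at the specification, column four can be reproduced along these lines with statsmodels. The file and column names are hypothetical stand-ins; this is a sketch of the setup rather than my actual script:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Column-4 specification: lagged DV, 2-month-lagged Brent price, U.S.
# disapproval, and the oil x disapproval interaction. Column names are
# hypothetical.
df = pd.read_csv("putin_monthly.csv", parse_dates=["month"]).set_index("month")
df["approval_lag1"] = df["approval"].shift(1)  # lagged DV for autocorrelation
df["brent_lag2"] = df["brent"].shift(2)

model = smf.ols(
    "approval ~ approval_lag1 + brent_lag2 + us_disapproval"
    " + brent_lag2:us_disapproval",
    data=df.dropna(subset=["approval", "approval_lag1",
                           "brent_lag2", "us_disapproval"]),
).fit()
print(model.summary())  # compare coefficients and t-values to the table
```

Dropping the disapproval terms recovers columns one and two, depending on whether the 2011 rows are included.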

Of course the "story time" part of this analysis is getting far ahead of the data - 90% significance is a relatively low bar and the accuracy of many of the proxies and assumptions that I'm using is questionable. Oil prices may not be the best measure of economic performance (per capita income is likely better, though I was unable to find monthly data). Moreover, the size of the sample is tiny and plagued with missing data. Nevertheless, this is a blog post and the initial results do suggest some interesting speculation/avenues for further research.

While I expect that the Russian government will continue its rhetoric about "U.S. interventionism," I highly doubt that it will have any significant effect on Russian citizens' attitudes toward either the United States or the Putin/Medvedev government. It is difficult to make any meaningful predictions about the future of the opposition protests or the survival of the Putin/Medvedev tandem after the presidential elections. However, the data do suggest that the government is in trouble - it can no longer rely on a steady stream of oil income to assure public support. In fact, that "support" was hollow to begin with, and confrontational showmanship is unlikely to bring it back. Stephen Holmes' recent piece in the London Review of Books summarizes this sentiment quite succinctly:
Some of the time, at least, rulers become fleetingly popular because they are believed to wield power. From the predictable tendency of opportunistic citizens to flock obsequiously to the power-wielders of the day it follows that an incumbent who seems to be losing power may see his poll-tested ‘popularity’ vanish overnight. 
This is the nightmare now faced by Putin’s team. Keen to avoid any appearance of weakness, they are well aware that public support can be artificially inflated by the illusion of power. They have long depended on theatrical displays which, however easy to stage, gave spectators an outsize sense of what the government could achieve...Can an internally warring, socially detached and rapacious oligarchy hold onto power with only a minimum use of violence now that such electoral fakery seems to have outlived its usefulness?
Edit 1/25: Fixed the title on the third graph