Forecasting Genocide

Preventing genocide is one of the greatest challenges facing the international community.[1] Aside from the suffering and grief inflicted upon generations of people and the catastrophic social, economic and political dislocations that follow, this ‘crime of crimes’ has the potential to destabilize entire regions for decades (Bosco, 2005). The shockwaves of Rwanda’s genocide are still felt in the eastern parts of the Democratic Republic of Congo nearly 20 years later, for example. Considerable resources are now devoted to the task of preventing genocide. In 2004 the United Nations established the Office of the Special Advisor on the Prevention of Genocide, whose purpose is to ‘raise awareness of the causes and dynamics of genocide, to alert relevant actors where there is a risk of genocide, and to advocate and mobilize for appropriate action’ (UN, 2012). At the 2005 World Summit governments pledged that where states were ‘manifestly failing’ to protect their populations from ‘war crimes, genocide, ethnic cleansing and crimes against humanity’ the international community could step in and protect those populations itself (UN, 2012). The ‘responsibility to protect’ (R2P) project, designed to move the concept of state sovereignty away from an absolute right of non-intervention towards a moral charge of shielding the welfare of domestic populations, is now embedded in international law (Evans, 2008). Just this year, the United States government has stated that ‘preventing mass atrocities and genocide is a core national security interest and a core moral responsibility of the United States,’ and that ‘President Obama has made the prevention of atrocities a key focus of this Administration’s foreign policy’ (Auschwitz Institute, 2012). Numerous scholars and non-government organisations have similarly made preventing genocide their primary focus (Albright and Cohen, 2008; Genocide Watch, 2012).

Since it is primarily (although not exclusively) governments that perpetrate mass killing, the burden of prevention and mitigation will fall heavily upon the international community in the short and medium term. Aspirations expressed through the UN and the R2P project have, however, yet to translate into effective and decisive action to arrest mass killings when they begin. A lack of political will, uncertainty in military planning, and ambiguity over the extent and coordination of genocidal acts have hampered these efforts (Totten and Parsons, 2009). Former President Bill Clinton described his ‘greatest regret’ as not acting to stop the Rwandan slaughter and suggested that the U.S. would have responded given more time and a better understanding of the situation (Henriksen, 2007: 85), although it is likely that fears of a casualty-averse domestic public influenced this decision as well. Reluctance to openly challenge the sovereignty of the government of Sudan resulted in an anemic response to the genocide in Darfur (De Waal, 2007), and it appears that the divergent interests of UN Security Council members have stonewalled efforts to address the present violence in Syria. It is not the case, however, that international intervention cannot halt mass atrocities. NATO’s military intervention in Kosovo in 1999 and the UN-sanctioned ‘no fly zone’ in Libya (2011) are both cases in point.

Forecasting Genocide

While the purpose of dedicated offices such as that of the Special Advisor on the Prevention of Genocide might be to act ‘as a mechanism of early warning’, we have, until recently, had few reliable tools to assess the extent to which some countries are ‘at risk’ of genocide and others are not. An accurate and reliable ‘early warning’ or forecasting tool would assist in the prevention of genocide in a number of ways. It would allow international organisations and advocacy groups to focus their limited resources on the most dangerous cases. Evidence to rally international support for intervention, and to guide interventions to the most vulnerable populations, could be at hand more quickly. Indeed, if evidence of intent can be established early, political deadlocks may become tractable as states avoid being perceived as obstructing efforts to halt mass killing in the face of commanding evidence. Such evidence may also make it easier for state leaders to pitch the case for intervention to potentially skeptical publics. Planning for humanitarian interventions, including numerous contingencies, can be made far in advance, thereby reducing some of the uncertainty associated with deployments in foreign lands. Most importantly, such forecasts could form part of an early warning system that enables concerned external actors to focus their efforts on preventing the outbreak of mass atrocities in the first place.

Genocides can erupt with such speed and ferocity, and perpetrators have such strong incentives to conceal their plans, that developing predictive models of genocide might seem like a fruitless enterprise. Jeffrey Herbst, for example, notes that ‘although Rwanda’s previous history was itself bloody, no-one predicted the genocide. Indeed, even the Tutsi – presumably the group with the greatest interest and the most information – were taken by surprise at the slaughter that engulfed them’ (Herbst, 2001: 124). It might also be argued that the causes of individual genocides are culturally and historically specific and that searching for systematic causes will yield little in the way of reliable results.

Yet, in the past decade or so, scholars have achieved some success in identifying background conditions that make genocide more or less likely. This effort has been greatly assisted by modern communications technology that allows scholars to publish, share, and easily integrate data on many variables into their studies. The Political Instability Task Force, for example, houses data on a large number of variables relevant to the study of civil war, political instability and genocide. These improvements have substantially decreased the costs of data collection and have allowed scholars to focus their efforts on interrogating the determinants of mass killing. Most have worked from the assumption that, as Helen Fein (1994: 4) puts it: ‘genocide is preventable because it is usually a rational act; that is, the perpetrators calculate the likelihood of success, given their values and objectives’. Furthermore, it is plausible to expect that the systematic and planned nature of genocide means that systematically occurring, and observable, preconditions also exist (Heldt, 2012: 2). A recurring finding is that genocide does not erupt from stable or harmonious political settings but from common forms of political instability such as civil war and democratic reversals (Krain, 1997; Melander, 2009; Valentino, Huth, and Balch-Lindsay, 2004; Colaresi and Carey, 2008; Harff, 2003). Take, again, the example of Rwanda. The Rwandan Patriotic Front (RPF) launched an armed campaign from Uganda in 1990 to depose the Hutu-dominated government. Despite a spluttering peace process, the civil war was ongoing in 1994, when the genocide began. Indeed, the government of Rwanda used the military threat posed by the RPF to inflame inter-ethnic division and consolidate its hold on power (Lemarchand, 2009: 47). Other findings from the literature suggest that autocracies are more likely to abuse their populations than democracies and that leaders become ‘habituated’ to using mass killing to address domestic instability (Rummel, 1995; Fein, 1993; Harff, 2003).[2] Exclusionary ideologies (such as Marxism-Leninism and ethnic elitism), poverty and trade openness have also been found to significantly affect the chances of genocide. In 2003, Barbara Harff gathered these insights and published a seminal study in the American Political Science Review that was able to predict, with a statistical model fit to data from 1955-2001, 74% of genocide onsets correctly, whilst also classifying 73% of non-genocides correctly (Harff, 2003: 66).
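
To make the logic of such models concrete, the sketch below fits a logistic regression to simulated country-year data and reports the share of onsets and non-onsets classified correctly, the kind of accuracy figures Harff (2003) reports. Everything in it is a hypothetical stand-in: the data are randomly generated and the variable names and coefficients only loosely echo the literature's findings, not Harff's actual measures or results.

```python
# Minimal sketch: a logistic-regression classifier of genocide onset on
# simulated country-year data. All variables and coefficients are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000  # hypothetical country-years

instability = rng.binomial(1, 0.10, n)   # ongoing civil war or upheaval
autocracy = rng.binomial(1, 0.40, n)     # autocratic regime
exclusionary = rng.binomial(1, 0.15, n)  # exclusionary elite ideology
trade_open = rng.normal(0.0, 1.0, n)     # standardised trade openness

# Simulate rare onsets whose risk rises with instability, autocracy and
# exclusionary ideology, and falls with trade openness.
logit = (-6.0 + 2.5 * instability + 1.0 * autocracy
         + 1.5 * exclusionary - 0.5 * trade_open)
onset = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([instability, autocracy, exclusionary, trade_open])
model = LogisticRegression().fit(X, onset)

# Classify with a deliberately low threshold, as is common for rare events.
flagged = model.predict_proba(X)[:, 1] > 0.05
print(f"onsets classified correctly: {flagged[onset == 1].mean():.0%}")
print(f"non-onsets classified correctly: {(~flagged)[onset == 0].mean():.0%}")
```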

Forecasting genocide remains a challenge for a number of reasons, however. First, compared to other social science phenomena such as elections or labor strikes, and even when compared to phenomena in international relations such as alliances and war, genocides are ‘rare events’ (King and Zeng, 2001). By predicting that there will be no genocide in any country in any given year since the end of World War II, we will typically get over 99% of cases correct. Of course, the enormous human and material costs of genocide mean that we care much more about getting the remaining 1% right (something that goes for most ‘rare events’ in international relations). Forecasts of genocide therefore produce many false positives (cases where genocide was predicted but did not occur) in the pursuit of obtaining true positives. Genocide Watch, for example, placed 39 countries ‘at risk of genocide, politicide or mass atrocity’ in 2012 (Genocide Watch, 2012: 4-5). The rarity of genocide also means that forecasts can only estimate the effects of causal variables from a small number of observations on the dependent variable and always run the risk of making predictions in situations for which we have no historical parallel (King and Zeng, 2006). It remains possible that, as new cases of genocide occur, our estimates of how some variables affect the likelihood of genocide may change substantially, along with our forecasts. To an extent, this is simply a problematic characteristic of the data, but, as is discussed below, there are some methods that help to overcome some of these issues.
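
The arithmetic behind the rare-events problem is easy to reproduce. The toy calculation below, using hypothetical counts chosen only to mimic the rarity of onsets, shows why raw accuracy is a misleading yardstick and why a useful forecaster must accept false positives in order to catch true positives.

```python
# Toy illustration of the 'rare events' problem, with hypothetical counts:
# a model that always predicts 'no genocide' is over 99% accurate, yet it
# catches none of the cases we actually care about.
n_country_years = 8000  # hypothetical post-war country-year observations
n_onsets = 40           # hypothetical genocide onsets (0.5% of cases)

never_accuracy = (n_country_years - n_onsets) / n_country_years
print(f"accuracy of always predicting no onset: {never_accuracy:.1%}")  # 99.5%
print(f"onsets caught by that 'model': 0 of {n_onsets}")

# A model that flags 400 country-years 'at risk' and thereby catches 30 of
# the 40 onsets is less 'accurate' overall, but far more useful in practice.
flagged, caught = 400, 30
false_positives = flagged - caught
correct = (n_country_years - n_onsets - false_positives) + caught
print(f"accuracy of the flagging model: {correct / n_country_years:.1%}, "
      f"onsets caught: {caught} of {n_onsets}")
```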

A second challenge relates to modelling the connection between political instability and genocide. This is not straightforward because, while it might appear that instability ‘causes’ mass killing and we can restrict a sample to cases of ongoing instability (as is prevalent in the literature), it might be the case that elites plot genocide or politicide before the onset of instability and use that instability as a pretext for perpetrating genocide. That is, some elites may place their societies at a higher risk of instability with a pre-existing strategy of mass killing that may help them to maintain or gain power. According to Hoare (2010), for example, Slobodan Milosevic was resolved on a strategy of ‘ethnic cleansing’ and mass killing in Croatia and Bosnia-Herzegovina by early 1990 to effect the most advantageous result possible for Serbia as Yugoslavia disintegrated over the period 1991-95. The military strategy of expelling armed resistance from parts of Bosnia claimed by the Republika Srpska militias was practically indistinguishable from the genocidal strategy of expelling the Bosnian Muslim and Croatian populations from their homes and villages. As analysts we are unlikely to be able to distinguish those cases where elites use atrocity as a ‘last resort’ to manage instability from those where atrocity was planned before political instability, and restricting our sample to cases of instability may introduce selection bias. Furthermore, doing so is not especially useful for policy-relevant forecasting, as we can only produce forecasts for states that are already experiencing political instability. Moreover, there are a number of cases where mass atrocities commenced not long after the beginning of civil war or democratic reversals (as was the case in Sudan in 1956 and 1983, or Burundi in 1988). If part of our aim is to create an ‘early warning’ system, then a lead time of months, or even weeks, is not early enough.

Third, real-world forecasting requires that, even where we have a model capable of producing reliable forecasts, the relevant data must be available for policy-makers to ‘plug in’ to the model (Ulfelder, 2012). The types of variables that can be included in forecasts are therefore restricted to those sources that are continuously updated or broadly predictable to policy-makers. In the past, this has presented a major problem because we often have more data on the triggers and warning signs of past genocides than we do for cases where genocide may be presently unfolding. As historians, political scientists and national and international organisations document how a past genocide unfolded, we gather valuable information on potential precursors such as the recruitment of paramilitaries, the movements of military or paramilitary forces, the use of hate speech, or smaller-scale killings or ‘ethnic cleansing’ that may precede a major escalation. Obtaining this data in the present, for forecasting purposes, is typically much more difficult. Even for the past, however, missing data remains a problem. Much of the historical data upon which forecasting models base their predictions is patchy, being available only for certain periods of time, or for certain countries, or both. Missing data has, until recently, usually meant abandoning many observations from the analysis (including some observations of genocide and mass killing), but some recent statistical procedures have mitigated this problem to an extent (Honaker, King and Blackwell, 2009).
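
The procedure cited here, AMELIA II, is an R program for multiple imputation. The sketch below uses scikit-learn's IterativeImputer in Python purely as a stand-in to illustrate the same idea: rather than discarding incomplete country-years, generate several plausibly completed datasets and analyse each. The data are randomly generated placeholders, not any real country-year panel.

```python
# Minimal sketch of multiple imputation for patchy country-year data,
# assuming a stand-in (IterativeImputer) for Amelia-style imputation.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))           # hypothetical predictor matrix
X[rng.random(X.shape) < 0.2] = np.nan   # delete ~20% of values at random

# Listwise deletion would discard roughly half the observations here.
complete = int((~np.isnan(X)).all(axis=1).sum())
print(f"rows with no missing values: {complete} of {len(X)}")

# Multiple imputation: draw m completed datasets, analyse each, then combine
# the estimates (the combination step is omitted in this sketch).
completed = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
    for m in range(5)
]
print(f"created {len(completed)} completed datasets "
      f"of shape {completed[0].shape}")
```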

Finally, statistical modelling of the onset of genocide and mass killing typically assumes that explanatory variables influence the likelihood of genocide in a roughly linear way (or via some pre-specified mathematical transformation). So, the assumption might be that an increase in GDP per capita from $500 to $1,000 has the same effect on the probability of genocide as an increase in GDP per capita from $30,000 to $30,500. Furthermore, we assume that the effect of one variable does not depend upon the value of another variable in the model (the relationship between independent variables is additive, not multiplicative) and that we can add the effects of separate explanatory variables together to obtain a complete picture. Colaresi and Carey (2008), however, have found that increases in the number of people under arms as a proportion of the total population (the ‘human defense burden’) increase the chances of genocide, but only in states with few constraints on their decision-making autonomy. Put another way, the extent to which the ‘human defense burden’ influences the likelihood of genocide depends upon the value of a measure of ‘executive constraints’. While linearity and additivity may be unrealistic assumptions in some cases, the difficulty is that there are many variables that may predict genocide and little theory to guide decisions about functional form or about which variables’ effects depend upon the values of other variables (Ulfelder, 2012: 2).
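
One standard way to relax the additivity assumption is to specify an interaction term by hand. The sketch below does this for a conditional relationship in the spirit of Colaresi and Carey (2008), letting the effect of the ‘human defense burden’ vary with executive constraints. The data are simulated and the coefficients made up; the variable names are illustrative only, not the measures used in their study.

```python
# Minimal sketch: a logit with a multiplicative interaction term, so the
# effect of one predictor depends on the value of another. Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
defense_burden = rng.normal(0, 1, n)    # soldiers per capita, standardised
exec_constraints = rng.normal(0, 1, n)  # constraints on the executive

# Simulated outcome: the burden raises risk mainly where constraints are low.
true_logit = (-4.0 + 0.8 * defense_burden - 0.5 * exec_constraints
              - 0.9 * defense_burden * exec_constraints)
onset = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([
    defense_burden,
    exec_constraints,
    defense_burden * exec_constraints,  # the interaction term
]))
fit = sm.Logit(onset, X).fit(disp=0)
# A negative interaction coefficient: the burden matters less as
# executive constraints grow.
print(fit.params)
```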

Future Directions

Existing forecasting tools can be improved, and there are a number of scholars now working on this problem (Ulfelder, 2012; Hazlett, 2011; Goldsmith et al, 2012). One way to increase the policy-relevance and lead time of genocide forecasts would be to utilise a statistical model that incorporates both the probability of political instability and the probability of genocide into a single estimate, or alternatively, one that factors the covariates of instability into a single model of genocide onset.[3] We could then estimate the risk of genocide onset for all countries in a given year, not just for states already experiencing instability, and enable the publication of annual rosters of ‘at risk’ states. Longer-term forecasts of, say, 2-5 years would be even more useful, providing policy-makers with a wider temporal window in which to concentrate their prevention efforts. Longer-term forecasts also reduce the demand for the most up-to-date data (at present, for example, to produce forecasts for 2013 we would require data on the independent variables in 2011 to train a statistical model that produces coefficients for 2012, capable of forecasting into 2013). Wrapping any statistical model in easy-to-operate software that provides a window onto those states at highest risk, and the variables that make those places dangerous, would also be of considerable practical use to policy-makers.
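
The lead-time logic in the parenthesis above can be made explicit in code. The sketch below trains on predictors lagged by two years, so that forecasts for year t need only data observed through t-2. The tiny panel and its column names are invented for illustration; they are not an actual dataset.

```python
# Minimal sketch of building lead time into a forecast: regress onsets in
# year t on predictor values measured at t-2. All data are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

panel = pd.DataFrame({
    "country": ["A"] * 6 + ["B"] * 6,
    "year": list(range(2006, 2012)) * 2,
    "instability": [0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0],
    "gdp_pc_k": [0.50, 0.52, 0.48, 0.47, 0.50, 0.53,   # GDP p.c., $000s
                 0.90, 0.88, 0.86, 0.90, 0.95, 0.99],
    "onset": [0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0],
})

lead = 2  # years of lead time
lagged = panel.sort_values(["country", "year"]).copy()
lagged[["instability", "gdp_pc_k"]] = (
    lagged.groupby("country")[["instability", "gdp_pc_k"]].shift(lead)
)
lagged = lagged.dropna()  # the first `lead` years per country have no lags

model = LogisticRegression().fit(lagged[["instability", "gdp_pc_k"]],
                                 lagged["onset"])
# To forecast year t + 2, feed in predictor values observed in year t.
```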

Second, incorporating variables with greater variation over time than are typically included in statistical models of genocide should increase the accuracy of year-on-year forecasts and reduce the number of false positives. These might include: election periods and impending election periods, political assassinations, ‘hate speech’ and the rapid recruitment of paramilitaries. Between 1992 and 1994, for example, it is estimated that the Rwandan government recruited 50,000 paramilitary troops (such as the Interahamwe) that were heavily involved in the killing (Lemarchand, 2009: 408). Government-sponsored paramilitaries also perpetrated mass killing in Guatemala and Darfur.

Third, technology may be collapsing the gulf between what we can observe about the past and what we can observe in the present. The Satellite Sentinel Project, for example, uses satellite imagery to monitor military buildups and deployments and documents the destruction of villages (in collaboration with people on the ground) in the Sudan.[4] In concert with an effective tool to identify the world’s most at-risk states and populations, initiatives such as the Satellite Sentinel Project may be able to identify those triggers and warning signs that occur close to a genocidal event. Furthermore, widespread internet connectivity and the use of smartphones might mean that some of these ‘early warning’ signals can be transmitted to the international community faster and from more remote parts of the globe than was previously possible.

Finally, methods of modelling statistical relationships that do not assume linearity or additivity may increase the accuracy of our forecasts if there are important non-linear and multiplicative ways in which explanatory variables influence the probability of genocide. Methods from ‘machine learning’, for example, are one avenue of future research (Ulfelder, 2012; Hainmueller and Hazlett, 2012).[5]
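
As a hedged illustration of this direction, the sketch below fits a random forest, one non-parametric method among several, to simulated data containing a threshold interaction that a purely linear, additive model would miss. It is not a reconstruction of any published forecasting model; all data, settings and the evaluation metric are hypothetical choices for the example.

```python
# Minimal sketch: a random forest picks up non-linear, interactive structure
# without that structure being specified in advance. Simulated data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 3000
X = rng.normal(size=(n, 5))  # hypothetical predictors

# Simulated risk with a threshold interaction: risk jumps only when the
# first predictor is high AND the second is low.
risk = 0.02 + 0.25 * ((X[:, 0] > 1.0) & (X[:, 1] < 0.0))
y = rng.binomial(1, risk)

forest = RandomForestClassifier(n_estimators=300,
                                class_weight="balanced",  # rare positives
                                random_state=0)
scores = cross_val_score(forest, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```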

In sum, and ideally, these improvements would allow scholars to produce a small but reliable list of ‘at risk’ states with the most contemporary data, thereby enabling policy-makers to focus their preventative efforts, contingency planning and political advocacy on these most dangerous of cases. While it might seem optimistic to think that a tool for forecasting genocide can break down the political calculations that so often seem to obstruct effective responses to genocide, it is surely a hope worth chasing.

Charles Butcher is a Postdoctoral Research Assistant in the Department of Government and International Relations at the University of Sydney. He is currently part of a team of researchers from the University of Sydney and the University of New South Wales working on the forecasting of genocide.


Bibliography

Auschwitz Institute for Peace and Reconciliation (2012) Auschwitz Institute Praises New Atrocities Prevention Board. Press Release April 23. Available from http://www.auschwitzinstitute.org/obama_atrocities_prevention_board.html

Albright, Madeleine K. and William S. Cohen (2008) Preventing Genocide: A Blueprint for U.S. Policymakers. Washington D.C.: The United States Holocaust Memorial Museum.

Beck, Nathaniel, Gary King and Langche Zeng (2000) Improving Quantitative Studies of International Conflict. American Political Science Review 94(1): 21-35.

Bosco, David (2005) Crime of Crimes: Does It Have to be Genocide for the World to Act? The Washington Post, March 6: B01.

Colaresi, Michael and Sabine C. Carey (2008) To Kill or to Protect: Security Forces, Domestic Institutions, and Genocide. Journal of Conflict Resolution 52: 39-67.

De Waal, Alex (2007) Darfur and the Failure of the Responsibility to Protect. International Affairs 83(6): 1039-1054.

Evans, Gareth (2008) The Responsibility to Protect: Ending Mass Atrocity Crimes Once and For All. Washington D.C.: The Brookings Institution.

Fein, Helen (1993) Genocide: A Sociological Perspective. London, U.K.: Sage Publications.

Fein, Helen (1994) Patrons, Prevention and Punishment of Genocide: Observations on Bosnia and Rwanda. In: Helen Fein (ed.) The Prevention of Genocide: Rwanda and Yugoslavia Reconsidered: A Working Paper of the Institute for the Study of Genocide. New York, N.Y.: The Institute for the Study of Genocide.

Genocide Watch (2012) Countries at Risk Profiles 2012. [online] Available from: http://www.genocidewatch.org/alerts/countriesatrisk2012.html

Goldsmith, Benjamin E., Charles Robert Butcher, Dimitri Semenovich and Arcot Sowmya (2012) Forecasting the Onset of Genocide and Politicide: Annual Out-of-Sample Forecasts on a Global Dataset, 1988-2003. Working Paper Series. Available at SSRN: http://ssrn.com/abstract=2027396 or http://dx.doi.org/10.2139/ssrn.2027396

Goldstone, Jack A., Robert H. Bates, David L. Epstein, Ted Robert Gurr, Michael B. Lustik, Monty G. Marshall, Jay Ulfelder and Mark Woodward (2010) A Global Forecasting Model of Political Instability. American Journal of Political Science 54(1): 190-208.

Hainmueller, Jens and Chad Hazlett (2012) Kernel Regularized Least Squares: Moving Beyond Linearity and Additivity without Sacrificing Interpretability. Working Paper Series. [online] Available at: http://polmeth.wustl.edu/mediaDetail.php?docId=131

Harff, Barbara (2003) No Lessons Learned from the Holocaust? Assessing Risks of Genocide and Political Mass Murder since 1955. American Political Science Review 97(1): 57-73.

Hazlett, Chad (2011) New Lessons Learned? Improving Genocide and Politicide Forecasting. [online] Available from: http://www.ushmm.org/genocide/analysis/details/2011-10-05/Chad%20Hazlett%20Early%20Warning%20Final%20Long%20Paper.pdf

Heckman, James J. (1979) Sample Selection Bias as a Specification Error. Econometrica 47: 153-61.

Heldt, Birger (2012) Mass Atrocities Early Warning Systems: Data Gathering, Data Verification and Other Challenges. Working Paper Series. Available from: http://ssrn.com/abstract=2028534

Henriksen, Dag (2007) NATO’s Gamble: Combining Diplomacy and Air Power in the Kosovo Crisis. Annapolis: Naval Institute Press.

Herbst, Jeffrey (2001) The Unanswered Question: Attempting to Explain the Rwandan Genocide. Foreign Affairs 80(3): 123-126.

Hoare, Marko Attila (2010) Genocide in the Former Yugoslavia Before and After Communism. Europe-Asia Studies 62(7): 1193-1214.

Honaker, James, Gary King and Matthew Blackwell (2009) AMELIA II: A Program for Missing Data. [online] Available from: http://j.mp/k4t8Ej

King, Gary and Langche Zeng (2001) Improving Forecasts of State Failure. World Politics 53(4): 623-658.

King, Gary and Langche Zeng (2006) The Dangers of Extreme Counterfactuals. Political Analysis 14(2): 131-159.

Krain, Matthew (1997) State-Sponsored Mass-Murder: The Onset and Severity of Genocides and Politicides. Journal of Conflict Resolution 41(3): 331-360.

Lemarchand, Rene (2009) The 1994 Rwanda Genocide. In: Samuel Totten and William S. Parsons (eds.) Century of Genocide: Critical Essays and Eyewitness Accounts. New York, N.Y.: Routledge.

Marshall, Monty G., Ted Robert Gurr and Barbara Harff (2010) PITF – State Failure Problem Set: Internal Wars and Failures of Governance, 1955-2009. Dataset and Coding Guidelines. (http://www.systemicpeace.org/inscr/inscr.htm).

Melander, Erik (2009) Selected To Go Where Murderers Lurk? The Preventive Effect of Peacekeeping on Mass Killings of Civilians. Conflict Management and Peace Science 26: 389-406.

Rummel, Rudolph J. (1995) Democracy, Power, Genocide and Mass Murder. Journal of Conflict Resolution 39(1): 3-26.

Totten, Samuel and William S. Parsons (2009) Introduction. In: Samuel Totten and William S. Parsons (eds.) Century of Genocide: Critical Essays and Eyewitness Accounts. New York, N.Y.: Routledge.

Ulfelder, Jay (2012) ‘Forecasting Onsets of Mass Killing’. Paper prepared for presentation at the annual Northeast Political Methodology Meeting, New York University, May 4th.

United Nations (2012) Office of the Special Advisor on the Prevention of Genocide, (http://www.un.org/en/preventgenocide/adviser/index.shtml).

Valentino, Benjamin, Paul Huth and Dylan Balch-Lindsay (2004) ‘Draining the sea’: Mass killing and guerrilla warfare. International Organization 58(2): 375-407.

Wayman, Frank W. and Atsushi Tago (2010) Explaining the Onset of Mass Killing, 1949-87. Journal of Peace Research 47(1): 3-13.

Wood, Reed M. (2010) Rebel Capability and Strategic Violence Against Civilians. Journal of Peace Research 47(5): 601-614.


[1] This paper draws heavily from a larger working paper. To view the full version please see: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2027396. In the present paper I use ‘genocide’ and ‘mass killing’ interchangeably, but take the specific definition of genocide and politicide from the Political Instability Task Force. See Marshall, Gurr, and Harff, 2010: 14. It is important to note that the definition and operationalization of genocide and mass killing vary across research projects. In general, mass killing refers to a broader category of atrocity than genocide, which usually requires the presence of intent to eradicate a particular ethnic or political group. Thanks must go to the Australian Responsibility to Protect Fund via the Asia-Pacific Centre for the Responsibility to Protect, University of Queensland, for financially supporting this research and the larger project to forecast genocide.

[2] The effect of regime type appears to be sensitive to the choice of dataset. See Wayman and Tago, 2010.

[3] For a recent effort at forecasting political instability, see Goldstone et al., 2010.

[5] To the best of my knowledge, Ulfelder (2012) and Goldsmith et al (2012) have experimented with non-parametric techniques to estimate some of the non-linear and conditional causal pathways leading to genocide.
