In recent years, the world has experienced a substantial rise in cybercrime across many countries, particularly as a result of the digitalisation of work driven by the lockdowns implemented in 2020 (Riley, 2021). Technological progress will make online criminality more sophisticated, and therefore even more dangerous and harder to defend against. A multidisciplinary approach is thus needed to fight this phenomenon, drawing on a variety of techniques from the social and computer sciences. This essay will focus on computational propaganda, and more precisely on the use of bots on social media. It will first define what computational propaganda is, highlighting its main features from different perspectives. It will then examine the challenges faced when countering online propaganda. Lastly, it will critically analyse and evaluate possible responses and solutions to this issue.
Understanding computational propaganda
Computational propaganda can be described as an “emergent form of political manipulation that occurs over the Internet” (Woolley and Howard, 2018, p. 3). It is carried out particularly on social media, but also on blogs, forums and other websites that involve participation and discussion. This type of propaganda is often executed through data mining and algorithmic bots, which are usually created and controlled by advanced technologies such as AI and machine learning. By exploiting these tools, computational propaganda can pollute the information environment and rapidly spread false news around the internet (Woolley and Howard, 2018).
Data mining is used to personalize adverts, while automated bots promote a certain point of view or perspective and disrupt the communication and campaigns of the opposition (Howard, Woolley and Calo, 2018). Political adverts are therefore tailored accordingly, and the information is spread to a greater number of people. Bots shape discussions and share multiple posts on social media in order to spread false or partisan information and support a particular party or group, as well as to promote hate campaigns (Woolley and Howard, 2018). In this way, computational propaganda can influence the outcome of democratic processes, such as elections and referenda. A critical factor to consider is that data mining and bots are, respectively, performed and created by humans (Howard, Woolley and Calo, 2018). Hence, computational propaganda could be carried out by activists or political actors who exploit technological advancements to promote their objectives or endorse their candidates. This is usually done on platforms that engage the public in discussion and decision-making, such as social media and blogs. One may argue that bots merely facilitate the spread of information and could thus be considered benign. However, there are many cases in which bots are used with malicious intent, such as spreading false information or derailing opposition campaigns.
In fact, these tools have often been used in electoral campaigns to manipulate and influence voters’ opinions, threatening both the online and offline life of communities. For instance, the outcomes of the 2016 UK referendum and the 2016 US elections were allegedly affected by influence campaigns carried out mainly on social media: almost a third of tweets about the UK referendum and a fifth of those about the US elections were shared by bots (Schneier, 2020).
These figures reveal how important the role of bots is in computational propaganda, and the extent to which this strategy can affect political systems and undermine the credibility of media institutions. They also show how foreign governments are able to influence the outcome of democratic practices in other countries by engaging in ‘information warfare’ campaigns. For this particular act, in 2018 thirteen Russian nationals and three companies were charged with interfering in the US political system, including the 2016 presidential elections (United States Department of Justice, 2018).
Moreover, recent research found that 81 countries are engaging in computational propaganda, 57 of which use automated bots on social networks (Bradshaw, Bailey and Howard, 2020). This is crucial because it shows that this form of manipulation is on the rise, as is the use of social media as a source of information: a recent poll found that almost half of American citizens rely on such platforms to get news (Shearer and Mitchell, 2021). Therefore, the more popular social networks become, the more people are affected by computational propaganda and the easier it is to influence public opinion.
Social media applications were conceived as platforms where freedom and democracy would prevail, but in recent years many concerns have been raised about the increasing presence of accounts, mostly fake, sharing false news (Bradshaw and Howard, 2019). This can have repercussions for both the platforms and conventional media, which have seen a decline in public trust. Moreover, a study of the terrorist groups ISIS and Al Qaeda found that these organisations also spread propaganda in cyberspace through social networks such as Facebook and Twitter (Choi, Lee and Cadigan, 2018). Attention must therefore be focused on these platforms and on whether their systems for detecting and controlling inappropriate or illegal content are effective enough to prevent the spread of false or dangerous information.
Furthermore, computational propaganda is expanding into other fields. Many bots have been used to spread misinformation and disinformation about healthcare, for instance by running anti-vaccine campaigns (Broniatowski et al., 2018). A very recent example is the large amount of false news shared during the Covid-19 pandemic through automated tools such as bots (Khanday, Khan and Rabani, 2021). The dangerous aspect of this type of propaganda is that it erodes public consensus about the benefits of vaccines and other medications, making people more likely to believe in quick, simplistic solutions instead of scientific research based on empirical evidence.
From a sociological perspective, computational propaganda and the manipulation of social media contribute to the creation of echo chambers: environments in which people encounter only information that reinforces their existing point of view (Woolley and Howard, 2018). For instance, social media algorithms adjust the content that users see and thus create filter bubbles (Barberá, 2020). As a result, individuals become isolated and mainly encounter users with similar opinions. This form of ‘enclave deliberation’ further strengthens users’ perspectives, since they meet little opposition (Barberá, 2020). It also favours the growth of partisan stances that leave no room for challenge or compromise.
Echo chambers can thus pollute public discussion by turning it into a homogeneous space where opposing opinions are rejected. This may polarize political discourse and allow extremist stances and conspiracy theories to emerge (Barberá, 2020). As individuals participate in online discussions solely with like-minded people, they are able to filter out all content that challenges their position on social or political topics. The absence of counter-information therefore pushes their ideas through a process of polarization. This is a significant matter, because exposure to opposing views is necessary in a democratic environment in order to develop a clear and balanced understanding of relevant issues.
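To make this mechanism concrete, the short simulation below sketches how a feed that only shows like-minded content can push opinions towards the extremes. It is a purely illustrative toy model: the parameter values, the opinion scale and the reinforcement rule are assumptions chosen for demonstration, not a description of any real platform’s algorithm.

```python
# Illustrative simulation of 'enclave deliberation': users whose feeds only contain
# like-minded voices drift towards more extreme positions over time.
# All parameter values are arbitrary assumptions chosen for illustration.
import random

NUM_USERS = 200          # size of the simulated community
SIMILARITY_WINDOW = 0.2  # a feed only shows users whose opinion is within this distance
PULL = 0.1               # how strongly each interaction shifts an opinion
STEPS = 2000

# Opinions are real numbers in [-1, 1]; negative and positive represent opposing camps.
opinions = [random.uniform(-1, 1) for _ in range(NUM_USERS)]

for _ in range(STEPS):
    reader = random.randrange(NUM_USERS)
    author = random.randrange(NUM_USERS)
    gap = opinions[author] - opinions[reader]
    # The 'filter bubble': content from dissimilar users is never shown,
    # so only confirming voices can influence the reader.
    if abs(gap) < SIMILARITY_WINDOW:
        opinions[reader] += PULL * gap
        # Reinforcement: agreement nudges the reader slightly further from the centre.
        opinions[reader] += PULL * (1 if opinions[reader] > 0 else -1) * 0.05
        opinions[reader] = max(-1.0, min(1.0, opinions[reader]))

polarised = sum(1 for o in opinions if abs(o) > 0.8)
print(f"Users holding near-extreme positions: {polarised}/{NUM_USERS}")
```

Running the sketch typically shows a growing share of users clustered near the extremes, which is the intuition behind the polarization dynamics described above rather than a measurement of them.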
Another consequence of computational propaganda is the spread of misinformation and disinformation, which can intensify socio-cultural differences and reduce public trust in conventional media and democratic institutions (Lavorgna, 2020). Traditional media would thus lose legitimacy, and the public could turn towards alternative sources of information, such as social networks (Bennett and Livingston, 2018). In turn, this can dissuade media organizations from investing money and time in meticulous and factual reporting. This happens especially in developing countries, where media institutions are not well established and reach only a small percentage of the population (Guess and Lyons, 2020). Apart from creating social divisions, in these contexts the spread of disinformation and misinformation may also increase violence among the population and contribute to the spread of weaponized online propaganda.
All the factors analysed above contribute to an environment in which impartial and clear evidence counts for less than the public’s emotional responses and sentiments. This is usually described as ‘post-truth’ politics, in which the line between facts and subjective feelings is blurred (Block, 2019). As a result, political actors can make false information seem true in the eyes of a public whose decisions are driven by instincts and emotions rather than empirical evidence.
Challenges and responses
Over the years, computational propaganda and the use of bots on social media have become widespread tools for influencing public opinion. Addressing this challenge is becoming increasingly difficult, as there is a shortage of legal frameworks and of awareness of these modern techniques of manipulation (Lavorgna, 2020). This is partly due to the inadequacy of institutions and the lack of precise knowledge about these automated tools: for instance, it is still not clear whether bot traffic is always harmful, and in what circumstances it may not be (Woolley, 2020). As for the role of social networks, it has proved complicated to develop and implement effective policies on the responsibility of these platforms, partly because of the inaccessibility of their data (Bennett and Livingston, 2018).
Another critical element worth analysing is the problem of attribution, namely the difficulty of identifying both the instigators and the perpetrators of such actions. This is mainly due to advances in AI and machine learning techniques, which grant bots the ability to adapt to different environments (Woolley, 2020). In addition, computational enhancement provides anonymity and automation, allowing offenders to hide their identity and bots to perform repetitive tasks at a much faster rate than human actors (Woolley and Howard, 2018).
Despite the slow legal development around this issue, it could be argued that a multidisciplinary approach would facilitate the regulation and control of computational propaganda. Beyond a purely technological modus operandi, this challenge should be addressed by adopting techniques from different fields of study. Improving detection technologies can help counter the problem, but sociological and legal approaches would further support this process of mitigation.
Machine learning and AI may be improved and used to combat computational propaganda, as human actors alone cannot deal with this matter. These technologies could be used to detect and regulate malicious uses of bots and practices of online propaganda (Woolley, 2020). In addition, high-powered software such as data intelligence platforms will help individuals gather and analyse information found on the web, and will assist companies and professionals such as journalists and researchers in better understanding and fighting disinformation (Woolley, 2020).
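As an illustration of the kind of technique such detection tools build on, the sketch below trains a very small supervised text classifier to score posts for propaganda-like language. The handful of example posts and their labels are invented for demonstration, and the TF-IDF plus logistic regression pipeline is only one simple baseline; real systems are trained on large annotated corpora and draw on behavioural signals as well as text.

```python
# Minimal sketch of a supervised classifier for flagging propaganda-like posts.
# The tiny labelled dataset below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "BREAKING: candidate X caught in secret plot, share before they delete this!!!",
    "Everyone is saying the election was rigged, retweet to spread the truth!!!",
    "Had a great walk by the river this morning, the weather was lovely.",
    "Interesting panel on local transport policy at the town hall yesterday.",
]
labels = [1, 1, 0, 0]  # 1 = propaganda-like, 0 = organic (toy labels)

# TF-IDF features plus logistic regression: a common, simple baseline for text classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "Share this NOW: the mainstream media is hiding the real results!!!"
probability = model.predict_proba([new_post])[0][1]
print(f"Estimated probability of propaganda-like content: {probability:.2f}")
```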
Electoral campaigns should also be secured with digital tools, so as to provide political actors and voters with an efficient means of detecting and responding to misinformation and disinformation (Schia and Gjesvik, 2020). One example is fact-checking applications that can verify the veracity of information shared over the internet. This would also improve public trust in the democratic process, as well as in media institutions.
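A minimal sketch of the matching step inside such a fact-checking tool is shown below: a claim encountered online is compared against a small store of claims that have already been verified. The stored claims and verdicts are invented placeholders; a real application would draw on the databases maintained by professional fact-checking organisations.

```python
# Minimal sketch of claim matching for a fact-checking tool.
# The verified claims and verdicts below are invented placeholders.
import difflib

verified_claims = {
    "the new vaccine alters human dna": "False",
    "turnout in the last election was the highest in two decades": "True",
    "candidate x was born outside the country": "False",
}

def check_claim(claim: str) -> str:
    """Return the stored verdict for the closest matching verified claim, if any."""
    matches = difflib.get_close_matches(claim.lower(), verified_claims.keys(), n=1, cutoff=0.6)
    if matches:
        return f"Closest verified claim: '{matches[0]}' -> verdict: {verified_claims[matches[0]]}"
    return "No sufficiently similar verified claim found; needs human review."

print(check_claim("The new vaccine alters your DNA"))
```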
In addition to technological solutions, social policies are also needed. Raising awareness about computational propaganda and the automation of bots allows individuals to better understand the world of social networks (Schia and Gjesvik, 2020). Further collaboration and cooperation within government to promote and improve this process would be beneficial, enabling citizens to recognise facts and counter-arguments. Critical thinking can then be promoted through awareness campaigns and political education schemes, in order to build trust and enhance the public’s ability to spot false news and find alternative solutions (Schia and Gjesvik, 2020).
These strategies may achieve some immediate success, but they cannot be the only solution to this vast problem. It is also crucial to identify who is behind online propaganda operations and, at the same time, to understand who the targets of these campaigns are. In fact, social platforms suffer from a lack of transparency that prevents the impact of computational propaganda on society from being measured precisely (Schia and Gjesvik, 2020). There is therefore a need for regulations and policies that directly address the role of social networks in the spread of disinformation. Social media companies must take more responsibility for the impact of data mining and automation on society and politics (Woolley, 2020).
Furthermore, bots could be regulated by revisiting election laws and communication policies, which in many states are obsolete and do not take into account that new forms of technology are able to influence opinions and polarize discussions (Howard, Woolley and Calo, 2018). Transnational corporations need to cooperate, and international legal frameworks need to keep up with the ever-evolving cyberspace in order to regulate the use of these automated tools. Moreover, better legislation on the contributions and expenditure of political parties would help to investigate who may be driving these activities.
Conclusion
Computational propaganda deploys automated bots on social media to influence users and induce them to support a specific political agenda. This practice can create public consensus where previously there was little or none, while drastically altering public opinion. The drivers of such campaigns usually have political aims, such as influencing the outcome of elections or referenda. Computational propaganda thus has a severe impact on the democratic process, as it weakens institutions and traditional media outlets. According to recent cases and statistics, the phenomenon is on the rise and is expanding into other fields, such as terrorist propaganda and healthcare disinformation. Although computational propaganda is not technically illegal, it can be described as a form of political deviance that undermines democratic principles. It also has social repercussions, such as the creation of echo chambers that make online public discussions homogeneous, and the polarization of political communication.
Countering computational propaganda presents several challenges. There is a lack of legislation aimed at this issue, as it makes use of ever-evolving technologies. Indeed, advanced technologies allow the instigators and perpetrators of online propaganda to remain anonymous and hidden. It has therefore been very complicated to implement appropriate policies to combat this form of manipulation. An approach that combines techniques from multiple fields of study is thus needed, so as to address every implication of computational propaganda.
On a technological level, AI and machine learning can be harnessed by governments to counter bot-driven propaganda on social platforms. Sophisticated tools such as fact checkers should also be used to hinder the spread of disinformation and improve public trust in media institutions. From a sociological perspective, raising awareness and fostering critical thinking through education and cooperation help make the public more informed and better prepared for such events. Finally, an international legal framework regulating the automation of bots and the role of social networks should be implemented, so as to avoid negative impacts on society and politics.
References
Barberá, P. (2020) ‘Social Media, Echo Chambers, and Political Polarization’, in Persily, N. and Tucker, J. A. (eds.) Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge: Cambridge University Press, pp. 34-55.
Bennett, W. and Livingston, S. (2018) ‘The disinformation order: Disruptive communication and the decline of democratic institutions’, European Journal of Communication, 33(2), pp. 122-139.
Block, D. (2019) Post-Truth and Political Discourse. Cham: Palgrave Pivot, Palgrave Macmillan.
Bradshaw, S. and Howard, P. (2019) The Global Disinformation Order – 2019 Global Inventory of Organised Social Media Manipulation. Computational Propaganda Research Project, University of Oxford. Available at: https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf
Bradshaw, S., Bailey, H. and Howard, P. (2020) Industrialized Disinformation – 2020 Global Inventory of Organized Social Media Manipulation. Computational Propaganda Research Project, University of Oxford. Available at: https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/127/2021/01/CyberTroop-Report-2020-v.2.pdf
Broniatowski, D., Jamison, A., Qi, S., AlKulaib, L., Chen, T., Benton, A., Quinn, S. and Dredze, M. (2018) ‘Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate’, American Journal of Public Health, 108(10), pp. 1378-1384.
Choi, K., Lee, C. S. and Cadigan, R. (2018) ‘Spreading Propaganda in Cyberspace: Comparing Cyber-Resource Usage of Al Qaeda and ISIS’, International Journal of Cybersecurity Intelligence and Cybercrime, 1(1), pp. 21-39.
Guess, A. M. and Lyons, B. A. (2020) ‘Misinformation, Disinformation, and Online Propaganda’, in Persily, N. and Tucker, J. A. (eds.) Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge: Cambridge University Press, pp. 10-33.
Howard, P. N., Woolley, S. and Calo, R. (2018) ‘Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration’, Journal of Information Technology and Politics, 15(2), pp. 81-93.
Khanday, A., Khan, Q. and Rabani, S. (2021) ‘Identifying propaganda from online social networks during COVID-19 using machine learning techniques’, International Journal of Information Technology, 13(1), pp. 115-122.
Lavorgna, A. (2020) Cybercrimes: Critical Issues in a Global Context. London: Macmillan International Higher Education, Red Globe Press.
Riley, T. (2021) ‘The Cybersecurity 202: Cybercrime skyrocketed as workplaces went virtual in 2020, new report finds’, The Washington Post, 22 February. Available at: https://www.washingtonpost.com/politics/2021/02/22/cybersecurity-202-cybercrime-skyrocketed-workplaces-went-virtual-2020/
Schia, N. N. and Gjesvik, L. (2020) ‘Hacking democracy: managing influence campaigns and disinformation in the digital age’, Journal of Cyber Policy, 5(3), pp. 413-428.
Schneier, B. (2020) ‘Bots Are Destroying Political Discourse As We Know It’. The Atlantic, 7 January. Available at: https://www.theatlantic.com/technology/archive/2020/01/future-politics-bots-drowning-out-humans/604489/
Shearer, E. and Mitchell, A. (2021) News Use Across Social Media Platforms 2020. Pew Research Center, January 2021. Available at: https://www.journalism.org/2021/01/12/news-use-across-social-media-platforms-in-2020/
United States Department of Justice (2018) Grand Jury Indicts Thirteen Russian Individuals and Three Russian Companies for Scheme to Interfere in the United States Political System [Press release]. 16 February. Available at: https://www.justice.gov/opa/pr/grand-jury-indicts-thirteen-russian-individuals-and-three-russian-companies-scheme-interfere
Woolley, S. C. and Howard, P. N. (2018) Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. New York: Oxford University Press.
Woolley, S. C. (2020) ‘Bots and Computational Propaganda: Automation for Communication and Control’, in Persily, N. and Tucker, J. A. (eds.) Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge: Cambridge University Press, pp. 89-110.