The Age of AI: And Our Human Future
By Henry Kissinger, Eric Schmidt and Daniel Huttenlocher
Little, Brown and Company, 2021
AI will indeed become ubiquitous. Debates over whether Artificial Intelligence (AI) is good or bad are no longer useful; the time has come to discuss how to shape AI development to meet a nation's requirements (Kalluri, 2020). Geopolitical debates once anchored in nuclear weapons have now shifted to AI. In an era of such unprecedented change, an analytical account of the socio-political impacts of AI is required. The book The Age of AI: And Our Human Future allows the general public to understand the impact of AI on society and global politics. Interestingly, it builds on the conclusions put forward in The Third Wave (Toffler, 1981), Future Politics (Susskind, 2018), and Superintelligence (Bostrom, 2014), and it reiterates the argument that AI can be politically disruptive. Validating this argument requires expertise in both foreign affairs and technological development, and the authors of The Age of AI meet these qualifications.
Henry Kissinger has published extensively on American foreign policy, the Vietnam fiasco, and the rise of China. With this book, he takes up the next geopolitical shift, one anchored in digital technologies. Lacking the required expertise in technology, he joins Eric Schmidt, well known for his long executive stint at Google, and Daniel Huttenlocher of the Massachusetts Institute of Technology. The book introduces AI disruption by elaborating on its use in medical science, pharmaceutical research, social compositions, liberal values, and the defense industry. The authors, coming from government, industry, and academia, do not subscribe to the idea that AI can supplant human intelligence. The excerpt below provides a peek into their conceptual position.
Turing’s and McCarthy’s assessments of AI have become benchmarks ever since, shifting our focus in defining intelligence to performance (intelligent-seeming behaviour) rather than the term’s deeper philosophical, cognitive, or neuroscientific dimensions (p.56).
Understanding the Societal Impact of AI
There are two broad approaches to understanding the interactions between technology and society: social constructivism and technological determinism. The former espouses that the development of technology is an outcome of cultural and societal interactions; the latter considers technology to be neutral to societal interactions. The book observes that AI was first built to aid business, and that business later discovered innovative applications of AI (p.110). This mutual influence of AI and business shows that the technology is initially constructed with a purpose, and that after its construction it brings new use cases beyond that intended purpose. This accords with performativity analysis (Konrad et al., 2017), which holds that studying the mutual interactions of AI and society provides a better understanding of social change. The performativity approach is similar to Giddens’s structuration theory, which explains the interplay between social institutions and the situated interactions of individuals and groups (Whittington, 2010). Though the book makes no direct mention of these approaches, its description of AI appears to share their background.
The societal impact of AI is described through a popular-culture narrative. The book reiterates the argument that AI divides countries into ‘haves’ and ‘have-nots’, an inference readers can readily observe for themselves. All AI products are made by humans and funded by political or business entities; they are developed to perform a set of operations more effectively than humans by training on particular datasets, which means AI can be skewed towards political bias. Importantly, the development of AI requires high computing power, vast amounts of data, skilled computer scientists, and huge capital investment. Alongside this descriptive account of unequal social development, the book raises the question of who controls the data if big corporations withhold digital resources (p.91).
The book also espouses the idea that AI leads individuals to form a tacit relationship with machines (p.106); hence, AI is building anthropomorphic societies. For geopolitical analysis, it is important to adopt a morphogenetic approach, similar to Daniel Deudney’s new materialism (Deudney, 2018), to understand machine-human interactions in analyzing political decisions. Such an approach treats the interaction between humans and machines as creating new nodes of agency. Another approach that admits new agencies into the study of international politics is Latour’s Actor-Network Theory (Latour, 2005). Like new materialism, it proposes to understand society through the interactions between materials and humans. Such theoretical approaches, read alongside this book, would help readers critically understand the impact of AI on geopolitics.
Geopolitical Implications
Each major technologically advanced country needs to understand that it is on the threshold of strategic transformation as consequential as the advent of nuclear weapons – but with effects that will be more diverse, diffuse, and unpredictable (p.172).
Societal change brought by AI is not limited to the borders of a country. Almost all major technological platforms are developed by technologically advanced countries like the US and China and used worldwide. Here, the client states are those that import AI technologies, and the host states are the US and China. This arrangement provides the US and China with huge amounts of data, whose analysis will be used to the strategic advantage of Washington and Beijing (p.96). The client–host relationship means that dominant countries will have a first-mover advantage in gaining and sustaining influence over global governance. These arguments, coming from authors who are stalwarts of American foreign policy, technology, and academia, are a significant input for geopolitical analysis. The authors propose that the new geopolitics driven by AI will answer the following questions:
- What margin of superiority will be required by competing states?
- At what point does superiority cease to be meaningful in terms of performance?
- What degree of inferiority would remain meaningful in a crisis in which each side uses its capabilities to the fullest? (p.132).
As a neo-realist, Kissinger points out that international relations is engulfed in a paradox (p.173): states have to maximize their power for security while at the same time limiting the arms race to maintain international peace. This paradox is generally understood in terms of nuclear weapons and the ways states react to them. Nuclear arms are used as deterrence, while arms control treaties discourage the use of nuclear weaponry; there is a never-ending tension between retaining nuclear weapons for sustained security and abandoning them for human security. Kissinger proposes that cyber conflict and AI have compounded the security dilemmas created by strategic nuclear weapons (p.150). Like the dichotomous uses of nuclear weapons, AI technology can be used both for effective governance and for invasive surveillance, because the process of extracting data and performing predictive analysis is common to both use cases. When AI systems are exported, there is a chance that the data stored in their servers will be used for the political advantage of the host country. It can be used to hijack societal behavior and popular opinion, and to manipulate knowledge systems. Thus, AI enhances the security dilemma induced by weapons of mass destruction.
Regulation Dilemma
When an AI algorithm is deployed, even its developers do not know how the neural network identifies correlations between the input variables. AI systems are concerned only with the input and the desired output; the success of AI is judged not by the algorithm it uses but by its output (p.107). If regulation is based on outcomes, many hidden factors are lost, such as the variety of data the platform acquires from users, the inferences it can draw from unknown correlations, and the predictive models it builds. These issues could be resolved if regulation were applied to the design of AI algorithms; however, the technology is controlled by a few political and business elites. Taking stock of this scenario, the book asks, “Will attempts to regulate network platforms … produce more just societies? Or will they lead to more powerful and intrusive governments?” (p.113). It is not just political control that is uncertain, but also social progress. Owing to increasing cases of fake news and associated violence, a rise in rigid mob thinking is visible worldwide. Such incidents will only increase when the information available on the internet is biased or search results are politically influenced. In a situation where the public is ever more dependent on AI for information and decision making, and AI is left unregulated, the authors’ question of whether the spread of disinformation and the efforts to combat it will be entirely entrusted to AI becomes one to ponder (p.114).
Governance of AI
Previously, all social and governance decisions were based on human rationale and judgment, whether made by individuals in the conduct of their social lives or by the state regarding the socio-economic and socio-political conduct of society. With AI, it is certain that predictive models and their data interpretations will share in human agency. Humans might no longer take decisions independently of machines; AI systems with their predictive algorithms would play a greater role in human decision making. The anthropomorphic society, where human capabilities are considered inferior to those of machines, where AI drives scientific inquiry, and where limiting screen time becomes a major parenting concern, might be a reality in the future.
Conclusion
This book provides a clear account of the impact of AI on global politics. It covers societal disturbances, inequalities, and the rising security dilemma among states, using relevant examples. The book advises adopting a co-productionist approach to analyzing AI geopolitics, one that combines social constructivism and technological determinism. As geopolitics is the core focus of the argument, the work describes how technologically superior countries like the US and China are leveraging AI to spread their global influence. Alongside this argument, the authors leave readers with key questions that force them to ponder approaches to AI regulation. The book further assesses that the emerging geopolitical dynamics resemble the arms-race paradox that centered on nuclear weapons in the post-World War II period.
Finally, the socio-technical interactions accelerated by AI and the internet are not the result of planned corporate objectives; societies do not even realize the changes they are undergoing. To resolve such uncertainty, the book concludes that countries should adopt knowledge sharing under the triple helix model (Etzkowitz & Leydesdorff, 1995), emphasising the collaborative effort of government, academia, and industry in studying the issue and providing solutions. For those concerned with AI ethics, the book briefly discusses that issue as well, ending with a set of questions on AI ethics, the regulation of AI and AI-based platforms, security frameworks, and diplomacy.
References
Bijker, W. E. (1995). Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change. Cambridge, MA: MIT Press.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Deudney, D. (2018). “Turbo Change: Accelerating Technological Disruption, Planetary Geopolitics, and Architectonic Metaphors.” International Studies Review, 20 (2), pp.223-231.
Etzkowitz, H., & Leydesdorff, L. (1995). “The Triple Helix — University-Industry-Government Relations: A Laboratory for Knowledge Based Economic Development.” EASST Review.
Kalluri, P. (2020). “Don’t ask if artificial intelligence is good or fair, ask how it shifts power.” Nature, 583, 169.
Kissinger, H. A., Schmidt, E., & Huttenlocher, D. (2021). The Age of AI: And Our Human Future. London: John Murray.
Konrad, K., van Lente, H., Groves, C., & Selin, C. (2017). “Performing and Governing the Future in Science and Technology” in Felt, U., Fouché, R., Miller, C. A. and L. Smith-Doerr (eds.), The Handbook of Science and Technology Studies, MIT Press, pp. 465-495.
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.
Polcumpally, A. T. (2021). “Artificial intelligence and global power structure: understanding through Luhmann’s systems theory.” AI & Society.
Susskind, J. (2018). Future Politics. Oxford University Press.
Toffler, A. (1981). The Third Wave. Macmillan India.
Whittington, R. (2010). “Giddens, Structuration Theory and Strategy as Practice” in Golsorkhi, D., Rouleau, L., Seidl, D. and E. Vaara (eds.), Cambridge Handbook of Strategy and Practice, pp.109-126.