When you imagine a meeting of world leaders – after the obligatory handshakes, press photos, and small talk – you may picture an expertly composed agenda of talking points based on the goals of the meeting, painstakingly researched and compiled by a team of aides, preparing the leaders for any direction the conversation might take.
Now imagine a future where that agenda and those talking points were not created by humans but generated by a computer – one that had gathered not only all of the opposing leader's publicly available information, but also data on all of their previous policy decisions, conversations with other leaders, and even body language during those conversations. Effectively, we're dealing with an artificial intelligence aide that, instead of trying to guess what the other party might say, can now predict with a high degree of certainty what the other party will say.
This is the future painted in the article "Algorithmic Foreign Policy," published in Scientific American, which explores how artificial intelligence (AI) algorithms are being used to provide foreign policy suggestions to the Chinese government. Armed with vast amounts of data from an increasingly digitized world, these algorithms could one day become so advanced that they accurately predict major geopolitical events, or the actions of policymakers. There is evidence that China is actively pursuing some of these technologies; worryingly, there is little evidence that governments elsewhere are following suit.
While current lawmakers outside of China may scoff at the idea of computers making policy decisions, this will not be a surprise to the programmers and scientists currently working on advanced machine learning algorithms. The problem is, the vast majority of these programmers aren’t working for the government; they’re working for large tech companies like Google, Facebook, and Amazon. You’ve probably already encountered advanced AI algorithms from these companies, like when Amazon seems to know precisely what you want to buy when you land on their website. Facebook even inadvertently spurred a widely-believed conspiracy theory that they tap into users’ smartphone mics to monitor private conversations – all because their algorithms can serve targeted advertisements with frightening accuracy.
The divide between government and tech is, at least in the US, best encapsulated by Mark Zuckerberg's testimony before Congress about the Cambridge Analytica scandal uncovered in 2018. During his testimony, many of the senators in attendance, whose average age was 62, seemed to be asking questions to understand what Facebook was and how its advertising algorithms worked, rather than what decisions led to the leaking of 87 million users' private data to a third-party company. It's clear, then, that these lawmakers would struggle to grasp how an algorithm could possibly make accurate policy predictions from a policymaker's facial expressions. (READ: EXCLUSIVE: Interview with Cambridge Analytica whistle-blower Christopher Wylie)
Some might say that this technology could never replace the human nuance in political discourse and negotiation, or the human understanding of emotion and intent. But at the end of the day, as any big tech company knows, data rules above all. One common pitfall in understanding the buzzword "big data" is to assume that it just means a large volume of data – take, for example, your Netflix viewing history. It's safe to assume that a human could make good content suggestions for you after reviewing that dataset.
But what defines big data – and puts it beyond human review – is the depth of those large volumes of data. Netflix's algorithms aren't based on your viewing history alone; they capture the time, location, and device with which you watched the content, the number of times you paused, rewound, or fast-forwarded, what you searched for to find the content, and countless other metrics that no one outside of a Netflix data scientist could tell you about. Faced with this vast amount of data across millions of users, you start to see the role algorithms play in crunching it, and the reason companies like Netflix can serve predictions with sometimes unnerving precision. (READ: Cambridge Analytica’s parent firm claims it won 2010 election for PH president)
Imagine, now, the amount of data that can be gleaned from a single policymaker – not just the history of his or her policy decisions, but parties involved, reactions, facial expressions, choice of words, time, location, affiliations – and the algorithmic predictions begin to seem feasible.
We know that the Chinese government is already developing machine learning algorithms for foreign policy. The greatest bottleneck to other countries applying this technology is their policymakers' reluctance to adopt it, because they simply don't understand how these algorithms work. To understand the potential of algorithms, you must first experience them firsthand.
So, politicians of the world, if you’re still skeptical about AI-powered foreign policy, just take a look at how the tech products you use every day are predicting your next move – but instead of anticipating your likelihood of enjoying a movie, the algorithms are calculating the likelihood that a policy decision you make will have a successful outcome. – Rappler.com
Alessandra Laurel Lopez is a second-year graduate student at Columbia University’s School of International and Public Affairs, studying International Security Policy. She previously worked for UNICEF’s Public Partnerships Division in New York, and served as a graduate consultant for the National Geospatial Intelligence Agency. She is passionate about the intersection of tech and public policy.