The geopolitical implications of artificial intelligence (AI) and its most prominent example to date – ChatGPT – remain deeply uncertain. It will surely roil the technology industry and change our daily lives, but it is less clear whether it will augur a shift in geopolitical power or create policy dilemmas for the European Union. These are hard questions, perhaps too hard for a human being to answer. So we asked ChatGPT itself (its answers have been edited, like the writing of our human authors, for length):
Q: What are the geopolitical consequences of ChatGPT?
A: OpenAI’s GPT-3 language model, including ChatGPT, does not have geopolitical consequences as it is an artificial intelligence language model … Political consequences are the effects or results of political actions or events and are typically a result of human decisions and actions, not artificial intelligence models.
Q: But can ChatGPT reinforce US dominance in technology at the cost of China?
A: OpenAI’s GPT-3 language model, including ChatGPT, can be seen as reinforcing US dominance in technology, as OpenAI is a US-based company. However, it is important to note that the development of AI and its applications is a global effort, with many countries, including China, contributing significantly to the field. The use of GPT-3 or any other AI technology by individual companies or countries can impact their competitiveness, but it is unlikely to have a significant impact on global geopolitical dynamics.
Q: Can ChatGPT be used for military purposes?
A: ChatGPT … could potentially be used for military purposes, such as in the generation of military reports, translations, or communications. However, it is important to note that the use of AI in military applications is highly regulated and subject to ethical and legal considerations, … It is ultimately up to governments, military organisations, and other stakeholders to determine the appropriate use of AI technology in military contexts.
Q: Can ChatGPT be used to produce and disseminate disinformation?
A: Yes, ChatGPT, like any AI language model, could potentially be used to produce and disseminate disinformation. … If the training data includes disinformation or false information, the model may produce similar false information in its generated text. Moreover, the ability of AI models to generate coherent and convincing text can be used to spread disinformation at scale and with speed, making it more challenging to counter such efforts. … It is important for organisations and individuals to take steps to prevent the spread of disinformation, such as verifying the source of information and using multiple sources to confirm its accuracy.
ChatGPT’s answers are quite unsatisfying to humans. It seems anxious to downplay its own importance and to emphasise that humans and human institutions remain the critical actors in geopolitics. Either ChatGPT is not yet sufficiently developed to replace geopolitical analysts (phew!) or it has something to hide (uh-oh!). Either way, we humans should still consider for ourselves whether a new disruptive technology is emerging, equivalent to the PC, the internet, or mobile devices.
The discussion about ChatGPT’s consequences has only just begun. The debate so far has focused on market consequences and, in particular, on whether it will allow Microsoft to challenge Google’s previously unassailable internet search monopoly in an advertising market valued at $696 billion (roughly as much as Saudi Arabia’s GDP). We know that ChatGPT will not be alone in the market, but we cannot yet say whether we are witnessing a transition at the top of big tech. Should the baton pass from one US company to another, the geopolitical consequences would be modest: it would merely confirm US tech dominance, under a new logo.
But Chinese companies are not mere spectators in this struggle, and if one of them becomes dominant in the field, the geopolitical effects would be profound. Baidu, the Chinese search-engine giant, has announced that it will launch an AI chatbot offering services similar to OpenAI’s ChatGPT in March 2023. Language models are set to become another battlefield in the technological competition between the United States and China over the development and mastery of AI-based technologies.
In this competition, the EU will face familiar dilemmas. In the past, US companies “moved fast and broke things”, to use the expression coined by Facebook founder Mark Zuckerberg, while the EU viewed these new technologies through regulatory eyes. We may witness something similar here, as the EU’s heavy regulatory hand is likely to bring it into conflict with language models in various ways. For example, data protection experts such as Alexander Hanff, a member of the European Data Protection Board’s support pool of experts, have warned that scraping the internet for data to train an AI language system is unlawful. Others have questioned the capacity of these systems to remove traces of individuals’ personal information, as required by Article 17 of the GDPR, or to comply with intellectual property rights.
The generation and dissemination of disinformation raises a similar concern. AI could make disinformation campaigns more widespread and realistic than ever. ChatGPT’s owners insist that they bear no responsibility if its answers contain false information or mislead people into making poor decisions. Yet this is a real possibility that has already been tested, with worrying results: false, incomplete, or misleading answers, known as “hallucinations”, may affect some 15-20 per cent of ChatGPT’s responses.
But given the impact that disinformation has already had in fuelling polarisation in our democracies and eroding the legitimacy of elections and electoral results, extremists and authoritarian regimes are likely to exploit these tools to pollute public spaces with toxic content and further undermine citizens’ trust in democracy. ChatGPT could also be used as a tool of political influence: automating lobbying, generating letters to legislators, or manufacturing large fake “grassroots” political campaigns.
Governments are unlikely to stand by and allow this to happen, regardless of what ChatGPT says about its own role. Since 2018, the EU has worked hard to build a capacity to fight disinformation. Now, in the context of Russia’s invasion of Ukraine, it is stepping up its efforts to counter foreign information manipulation and interference (FIMI) and to detect and expose Russia’s spread of misleading narratives in the so-called global south. Here again, language models and AI-based tools could be used to counter disinformation, but malicious actors are likely to exploit them faster and more efficiently.
ChatGPT is also likely to have a big impact on cybersecurity, another issue with huge market and geopolitical implications. Language models can be used for good: streamlining code writing, monitoring chats to detect suspicious activity, or helping manage cybersecurity incidents. However, they can also be used to find vulnerabilities in outdated platforms through the questions users ask. And because language-generation systems can impersonate real people very credibly, they can serve criminal purposes, such as generating spam or writing realistic phishing emails that ask people to hand over sensitive information, open malicious links, or install malware on their computers.
All of these effects remain quite speculative. ChatGPT and other language models have already proven that AI is here to stay and that it will affect us in profound ways. It may even affect the geopolitical competition between world powers. But its rather unsophisticated answers also tell us that AI has a long way to go, and the precise contours of its geopolitical effects remain unknowable to both humans and bots. We will keep asking ChatGPT and other AI bots these questions and let you know if their answers grow more revealing.
The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of their individual authors.