While AI tools offer exciting potential for the future of data processing, coding, and information generation, in the hands of authoritarian regimes they have been appropriated to suppress dissent and opposition.
New generative AI tools such as OpenAI’s ChatGPT, however entertaining they are to experiment with, have raised concerns about surveillance, censorship, and disinformation. Public concern about artificial intelligence (AI) in the political sphere tends to focus on its potential to undermine or threaten liberal democracy. AI introduces new ways of monitoring people, producing disinformation at scale, and interfering with democratic representation. In states where maintaining power and control over society is paramount to regime survival, AI algorithms are likely to serve as a means of strengthening autocrats’ grip on the state. Disregard for freedom of information, privacy, and human rights increases the potential for authoritarian leaders to exploit AI tools.
Surveillance is central to how authoritarian regimes retain control over their populations: by monitoring citizens’ activities, they can suppress opposition and dissent before it gains momentum. In countries such as Kenya and the Philippines, the Chinese telecoms provider Huawei is building smart cities: cities furnished with intelligent technology intended to improve operations and standards of living. In the Philippines, Huawei provided Bonifacio Global City with high-definition cameras and surveillance “checkpoints” in various locations to allow for “comprehensive coverage of the city” and “24/7 intelligent security surveillance.”
On the other hand, this type of AI-facilitated surveillance can drive economic growth. A recent study published by the National Bureau of Economic Research found that authoritarian states might have “an inherent and decisive advantage over liberal democracies” in AI innovation. AI algorithms become more accurate and reliable as they process more data, and countries with mass surveillance generate more data for those algorithms to learn from. A positive feedback loop emerges: more surveillance yields more data, more data yields better algorithms, and better algorithms make it easier to track consumer behaviour and preferences, target advertisements, and extend surveillance further. This lends itself to a form of repression: as advanced surveillance technology makes citizens ever easier to monitor, society becomes an extension of the commodities it buys.
According to Freedom House, in 2023 at least sixteen countries used AI “to sow doubt, smear opponents, or influence public debate”. Disinformation itself is not a new phenomenon; what AI changes is its cost and scale. Allie Funk, Research Director for Technology and Democracy at Freedom House, identifies two factors that enable digital repression by governments. The first is the affordability and accessibility of generative AI, which lowers the barrier to entry for states and state-affiliated groups seeking to run disinformation campaigns. Earlier this year, Venezuelan state media outlets used AI avatars, a form of deepfake developed by a company called Synthesia, to spread pro-government propaganda. As AI-generated content becomes normalised, people are likely to become more sceptical of true information as well, fuelling a distrustful and hostile environment for political and social discourse. This phenomenon, known as the “liar’s dividend”, takes hold when false information saturates the Internet, such as during a political crisis or conflict.
The second factor enabling digital repression is the specific and subtle forms of censorship that AI makes possible. In China, an image-generation AI tool developed by the multinational tech firm Baidu excludes images of Tiananmen Square. Automatically detecting and blocking specific content or keywords makes it far easier for governments to restrict access to information that could embolden opposition groups. It can also create a “chilling effect” on free expression, where people self-censor and refrain from expressing themselves out of fear that their content will be flagged and blocked, even if it violates no explicit rules.
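To illustrate how low the technical bar for this kind of filtering is, below is a minimal sketch of a naive keyword-based filter in Python. It is illustrative only: the blocklist and sample posts are hypothetical, and real censorship systems typically combine such lists with machine-learning classifiers operating at far greater scale.

```python
# Illustrative sketch of naive keyword-based content blocking.
# The blocklist and sample posts below are hypothetical examples.
BLOCKED_KEYWORDS = {"tiananmen", "protest"}  # hypothetical blocklist

def is_blocked(post: str) -> bool:
    """Return True if the post contains any blocked keyword."""
    text = post.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

posts = [
    "Photos from Tiananmen Square",
    "What a lovely day in the park",
]
for post in posts:
    status = "BLOCKED" if is_blocked(post) else "allowed"
    print(f"{status}: {post}")
```

Even this trivial filter silently removes one of the two sample posts; the “chilling effect” arises because users cannot know in advance which words will trip such a list.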
Although AI algorithms hold enormous potential in fields ranging from education to space exploration, they also carry serious risks, particularly as instruments for strengthening authoritarian regimes. As truth becomes easier to distort and fiction harder to discern, authoritarian states will continue to find ways of manipulating generative AI to consolidate power and advance self-interested objectives.
Edited by Sahar Rabbani