Artificial intelligence is advancing rapidly, bringing opportunities for society but also profound challenges for individual freedom. AI is a powerful enabler of surveillance technologies such as facial recognition, and many countries are grappling with appropriate rules for their use, weighing security benefits against privacy risks. Authoritarian regimes, however, lack the institutional mechanisms that protect individual privacy—a free and independent press, civil society, an independent judiciary—and the result is the widespread use of AI for surveillance and repression. This dynamic is most acute in China, where the government is pioneering new uses of AI to monitor and control its population. China has already begun to export this technology, along with laws and norms for its illiberal use, to other nations. As AI-enabled surveillance technology spreads around the globe, how it is used poses profound challenges for the future of democracy, liberty, and individual freedom.
The Chinese “big brother”
China is a global AI powerhouse and is breaking new ground in the use of AI tools for surveillance, repression, and social control. The Chinese government is engaged in a massive campaign of repression against ethnic Uighurs in Xinjiang, where over one million people are detained in camps. While many of the Chinese government's tools are low-tech, the government has also begun leveraging data analytics, facial recognition, and predictive policing to monitor Uighurs. Voice recognition, facial recognition, and gait analysis are used to track individuals, including at the checkpoints that dot major areas. Networks of surveillance cameras use algorithms to detect anomalous public behavior, from improperly parked vehicles to running in certain areas. These tools allow the government to monitor citizens' behavior extensively and to control individuals. While ostensibly for "anti-terrorism" purposes, the Chinese government is using these tools, along with low-tech methods such as mass detention and human surveillance, to attempt to systematically destroy an entire culture. Xinjiang also serves as a testbed for AI tools for social control elsewhere in China.
Algorithms have begun to supplement China's massive network of over 200 million surveillance cameras, which are ubiquitous in major cities. A major feature of surveillance technology in China is its use for "social control" or "social governance"—the control of the population by shaping individuals' behavior. Social control goes beyond catching criminals and aims to shape the behavior of citizens in small ways. Electronic billboards name and shame individuals caught jaywalking, displaying their faces to the public like a modern-day version of putting criminals in stocks. The Shenzhen-based company Yuntian Lifei Technology has said that its Skyeye intelligent video surveillance system has already identified 6,000 incidents relating to "social governance," leading to the system being deployed in nearly 80 Chinese cities. In some cases, AI tools are used for social control over seemingly trivial infractions, such as monitoring toilet paper use in public restrooms. State-linked organizations have explored options for even more sophisticated AI tools to surveil citizens, such as tracking individual homes' electricity use. These AI tools enhance an already extensive system of social control, from China's Social Credit System and blacklists to ID-linked QR-code tracking of sensitive items, such as knives purchased in Xinjiang.
AI tools allow China not only to monitor its population more effectively but also to control its information environment. China already heavily censors online information through a "Great Firewall" that blocks large parts of the internet, and it punishes those who speak out against the government, even in blogs or chat discussions. Simple AI tools will expand these efforts by flagging censored content at scale and automating responses, while more advanced AI systems will enable more tailored interventions. A Chinese think tank funded by the Ministry of Industry and Information Technology published a white paper in September 2018 outlining a number of potential uses of AI for "social governance." These include monitoring public opinion online, providing "early warning" of unfolding events, and the ability to "preemptively intervene in and guide public sentiment to avoid mass online public opinion outbreaks." If implemented, these tools would strengthen China's existing internet censorship and allow the state to deploy more nuanced and effective targeted propaganda.
The Chinese government can lean on a robust private sector for AI technology. The Chinese company SenseTime is a global leader in facial recognition and object identification and, at a $4.5 billion valuation, is also the highest-valued AI startup in the world. Half of the top 10 AI startups last year were Chinese. Right behind SenseTime is Toutiao, valued at $3 billion, which uses algorithms not only to curate online content but also to automatically generate news stories—a powerful capability with many potential uses. China is home to some of the top AI firms in the world, including Baidu, Alibaba, and Tencent. In addition to top-tier AI companies, China can harness vast troves of data about its citizens on which to train machine learning systems. China's 800 million internet users generate exabytes of data on Chinese behavior, preferences, and communications. Different legal and cultural institutions in China result in far weaker privacy protections for citizens' data than exist in Western nations, giving companies ample material with which to train algorithms. In 2017, China proclaimed in its "New Generation Artificial Intelligence Development Plan" an intention to become the global leader in AI by 2030. It is on track to make that plan a reality.
The US and Europe: balancing freedom with security
AI-based surveillance tools such as facial recognition and predictive policing are being deployed outside China as well. What differs in China is the lack of institutional mechanisms to check state power and protect citizens' rights. Surveillance technologies have sparked vigorous debates in the United States and Europe about their use. In the U.S., the American Civil Liberties Union (ACLU) has criticized Amazon for alleged flaws in its facial recognition system, Rekognition, and has sued the U.S. government for records about its use of the technology. Congressional lawmakers have expressed concern, and officials in San Francisco and other cities have banned government use of facial recognition. Even some tech companies are sounding the alarm, with Microsoft calling for government regulation and Amazon arguing that the government should specify guidelines for law enforcement use.

Democratic societies have many avenues for checking uncontrolled government power that authoritarian regimes lack. Employees at Google, Microsoft, and Amazon have objected to their companies handing AI technology to the U.S. military or police; in the case of Google, employee pressure caused the company to discontinue a Pentagon AI project. When Apple disagreed with the FBI's order to unlock a terrorist-linked iPhone in 2016, it fought the FBI in court. Independent media, legislatures, courts, civil society, and open public discourse all contribute to the balance democratic societies strike between security and civil liberties. Even if Chinese tech companies objected to government demands to use their tools for surveillance, China lacks effective independent institutions to check those demands.
These tools are helping China build a techno-dystopian surveillance state, one it is beginning to export to others. In 2018, the Chinese company CloudWalk closed a deal to build a mass facial recognition system in Zimbabwe. The system will consist of intelligent surveillance systems at railways, bus stations, and airports, as well as a national facial database. At stake in the deal, which is part of China's Belt and Road Initiative, is more than just money: CloudWalk will also gain access to millions of Zimbabwean faces, helping it improve the accuracy of its facial recognition systems on darker skin tones. In the age of artificial intelligence, data is the real currency of power.
The Zimbabwe deal follows a long-standing pattern of China exporting its digital surveillance technology along with Chinese-style surveillance laws and policies. According to Freedom House, China has held training seminars on cyberspace and information policy in over 30 countries. In Vietnam, Uganda, and Tanzania, restrictive media and cybersecurity laws closely followed Chinese engagement. The technology gives China access to new datasets as well as inroads for spying abroad; the social "software" of laws and policies helps China export its evolving model of digital authoritarianism. Left unchecked, AI-enabled repression poses a profound challenge to freedom around the globe.
Technology is not destiny. Facial recognition and other surveillance technologies will be used in both democratic and authoritarian societies around the globe. The question is how they are used—to what ends, under what laws, and with what degree of transparency and privacy protections. In democratic societies that have well-functioning institutions such as civil society, a free press, and an independent judiciary, the give-and-take between different societal actors can help find the appropriate balance between privacy and security over time. In authoritarian regimes the lack of these protections means that the state can use surveillance technology to tighten its repressive grip, further eroding individual freedoms.
The need for rules to ensure the appropriate use of AI technology
The stakes are high. The spread of AI-enabled surveillance technology risks undermining individual freedom and fostering the rise of a new high-tech illiberalism. Democratic states must work together to counter this trend. To do so they must first lead in technology, for whoever builds the roads sets the rules of the road. Second, democratic states must proactively establish norms for the appropriate use of technology. Europe is already leading the way on data privacy with the General Data Protection Regulation (GDPR), which has set a standard for other nations. Third, democratic states must actively work to export their technology, laws, and policies for its use, helping to set global norms. Human rights must be a foundational principle of how AI and surveillance technologies are used. Fourth, individuals and organizations from democratic states must not be silent in the face of abuse, whether it is China's repression of Uighurs in Xinjiang or other human rights abuses. Principles for AI governance have spread rapidly around the globe, with scores of countries, companies, and organizations espousing principles for ethical use. This is an encouraging sign, but these principles will only be meaningful if actors live up to them and expose abuses where they exist. Principles must be more than empty words—they must translate into action. Finally, academic researchers, companies, and democratic governments must terminate partnerships with those engaged in human rights abuses. Decisions made in the coming years will help shape the balance between freedom and authoritarianism for decades to come. These actions can help push back against high-tech illiberalism and protect individual freedom and human rights.
The author: Paul Scharre
Paul Scharre is a senior fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is the award-winning author of Army of None: Autonomous Weapons and the Future of War.