The Great Firewall: A Closer Look at Digital Censorship in China

The Chinese government’s Internet censorship is among the most restrictive in the world. China’s decision to block access to the Bloomberg News website and Google’s decision to shut down its Chinese search site in response to Chinese censorship requirements have drawn criticism and prompted increased funding for programs that fight Internet censorship. Despite the importance of the issue for U.S.-China relations, very little is known about what exactly the Chinese government censors, beyond the general assumption that its primary objective is to silence criticism of the state.

New research by Gary King, Jennifer Pan, and Margaret Roberts, presented in their article “How Censorship in China Allows Government Criticism but Silences Collective Expression,” sheds light on how Chinese censorship operates. Contrary to expectations, King, Pan, and Roberts demonstrate that the Chinese government’s choice of which online content to censor is based not on criticism of the state but rather on the content’s potential for inducing collective action. The only exceptions were posts containing pornography or criticism of China’s Internet censorship, which were almost universally censored.

Previous literature on the Chinese government’s online censorship program has broadly supported the “state critique theory,” which holds that the Chinese government censors digital content that criticizes the state, its leaders, and its policies. This literature relied primarily on government statistics, public opinion surveys, interviews, and measures of the government’s visible actions.

In contrast, King, Pan, and Roberts collected and analyzed over eleven million posts from nearly 1,400 social media websites, using a stratified random sampling design organized hierarchically by political sensitivity. Because Chinese censorship is largely manual, the authors’ highly automated collection process allowed them to capture posts before censors could remove them. Posts were then revisited frequently to determine whether they had been censored, and the content of censored posts was compared with that of posts left online.
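As a rough illustration of this revisit-and-compare approach (a minimal sketch only, not the authors’ pipeline; the URL, deletion markers, and helper names below are hypothetical), a crawler might record a post’s text when it first appears and later flag posts that have vanished or been replaced by a deletion notice:

```python
import requests

# Hypothetical illustration of the revisit-and-compare idea: capture a post's
# text when it is first observed, then revisit it to see whether it has been
# removed. This is not the authors' code; the URL and markers are invented.

DELETION_MARKERS = ["此帖已被删除", "post has been deleted"]  # assumed notice text


def snapshot(url: str) -> dict:
    """Fetch a post and record its URL, HTTP status, and raw text."""
    resp = requests.get(url, timeout=10)
    return {"url": url, "status": resp.status_code, "text": resp.text}


def appears_censored(revisit: dict) -> bool:
    """Heuristic: the post is gone (404) or now shows a deletion notice."""
    if revisit["status"] == 404:
        return True
    return any(marker in revisit["text"] for marker in DELETION_MARKERS)


# Usage sketch (hypothetical URL): snapshot a post as soon as it appears,
# revisit it later, and keep the original text of posts judged censored so
# that censored and surviving posts can be compared downstream.
# first = snapshot("https://example.cn/post/123")
# later = snapshot("https://example.cn/post/123")
# if appears_censored(later):
#     archive_for_comparison(first)  # hypothetical helper
```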

The study finds that censors focus on removing posts with collective action potential, whether those posts are critical or supportive of the government. One illustrative example is the censorship surrounding iodized salt after the 2011 Japan earthquake, when online rumors that the iodine in salt would protect against radiation exposure were heavily censored. The authors suggest that although these posts were neither political nor critical of the state, they were threatening because of their potential to spark localized collective organization.

While these findings cast doubt on the “state critique theory,” they are not conclusive. Significantly, the study examines only content that is manually removed after it is posted. This excludes content blocked by China’s automated filtering systems: the “Great Firewall,” which blocks access to entire websites, and “keyword blocking,” which prevents a user from posting content containing banned words or phrases.
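For intuition, keyword blocking amounts to a pre-publication filter that rejects any submission containing a term on a banned list. A minimal sketch, assuming a purely illustrative blocklist rather than any real one:

```python
# Minimal sketch of keyword blocking: refuse to publish a post if it contains
# any term on a banned list. The blocklist here is purely illustrative.

BANNED_TERMS = {"example_banned_phrase", "another_banned_term"}


def allow_post(text: str) -> bool:
    """Return True only if the post contains none of the banned terms."""
    lowered = text.lower()
    return not any(term in lowered for term in BANNED_TERMS)


# A platform would call allow_post() at submission time and refuse to publish
# (or silently drop) anything for which it returns False.
print(allow_post("an innocuous post"))               # True
print(allow_post("contains example_banned_phrase"))  # False
```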

Nonetheless, the paper significantly adds to our understanding of how Chinese censorship operates. It also takes the first steps toward predicting major political moves by monitoring changes in censorship. In three instances – the arrest of Ai Weiwei, the 2011 treaty with Vietnam over disputes in the South China Sea, and the Bo Xilai incident – the authors found that dramatic changes in censorship patterns occurred days before the events took place. These preliminary predictive findings suggest that a deeper understanding of Chinese censorship tactics could give U.S. policymakers insight into the Chinese government’s priorities and strategy.
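To make the idea of such monitoring concrete, one crude approach (an illustration only, not the authors’ method) is to track the daily share of posts that get censored and flag days when that share jumps far above its recent average:

```python
# Illustrative sketch: flag days on which the share of censored posts rises
# well above its trailing average, as a crude monitor for "dramatic changes
# in censorship patterns." The data below are made up.

from statistics import mean, stdev


def flag_censorship_spikes(daily_rates: list[float], window: int = 14,
                           threshold: float = 3.0) -> list[int]:
    """Return indices of days whose censorship rate exceeds the trailing
    mean by more than `threshold` standard deviations."""
    spikes = []
    for i in range(window, len(daily_rates)):
        history = daily_rates[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and daily_rates[i] > mu + threshold * sigma:
            spikes.append(i)
    return spikes


# Example with invented data: a quiet baseline followed by a sudden jump.
rates = [0.12, 0.11, 0.13, 0.12, 0.10, 0.12, 0.11,
         0.13, 0.12, 0.11, 0.12, 0.13, 0.11, 0.12, 0.35]
print(flag_censorship_spikes(rates))  # -> [14]
```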

Feature photo: cc/Olga Díez (Caliope)
