MIT economist Martin Beraja is co-author of a new research paper showing that China’s increased investments in AI-driven facial-recognition technology both help the regime repress dissent and may drive the technology forward, a mutually reinforcing condition the paper’s authors call an “AI-Tocracy.” Credits: Image: Jose-Luis Olivares/MIT with figures from iStock

MIT economist Martin Beraja shows how an 'AI-tocracy' emerges in China

Many scholars, analysts, and other observers have suggested that resistance to innovation is an Achilles’ heel of authoritarian regimes. Such governments can fail to keep up with technological changes that help their opponents; they may also, by stifling rights, inhibit innovative economic activity and weaken the long-term condition of the country. 

But a new study co-led by an MIT professor suggests something quite different. In China, the research finds, the government has increasingly deployed AI-driven facial-recognition technology to suppress dissent; has been successful at limiting protest; and in the process, has spurred the development of better AI-based facial-recognition tools and other forms of software.

“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI, subsequently, by local government units such as municipal police departments,” says MIT economist Martin Beraja, who is co-author of a new paper detailing the findings. 

What follows, as the paper notes, is that “AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.”

The scholars call this state of affairs an “AI-tocracy,” describing the connected cycle in which increased deployment of AI-driven technology quells dissent while also boosting the country’s innovation capacity.

The open-access paper, also called “AI-tocracy,” appears in the August issue of the Quarterly Journal of Economics. An abstract of the uncorrected proof was first posted online in March. The co-authors are Beraja, who is the Pentti Kouri Career Development Associate Professor of Economics at MIT; Andrew Kao, a doctoral candidate in economics at Harvard University; David Yang, a professor of economics at Harvard; and Noam Yuchtman, a professor of management at the London School of Economics. 

To conduct the study, the scholars drew on multiple kinds of evidence spanning much of the last decade. To catalog instances of political unrest in China, they used data from the Global Database of Events, Language, and Tone (GDELT) Project, which records news feeds globally. The team turned up 9,267 incidents of unrest between 2014 and 2020. 

The researchers then examined records of almost 3 million procurement contracts issued by the Chinese government between 2013 and 2019, from a database maintained by China’s Ministry of Finance. They found that local governments’ procurement of facial-recognition AI services and complementary public security tools — high-resolution video cameras — jumped significantly in the quarter following an episode of public unrest in that area.

Given that Chinese government officials were responding to public dissent activities by ramping up facial-recognition technology, the researchers then examined a follow-up question: Did this approach work to suppress dissent?

The scholars believe that it did, although as they note in the paper, they “cannot directly estimate the effect” of the technology on political unrest. As one way of getting at the question, they studied the relationship between weather and political unrest in different areas of China, since certain weather conditions are conducive to political unrest. In prefectures that had already invested heavily in facial-recognition technology, those weather conditions were less conducive to unrest than in prefectures that had not made the same investments. 

In so doing, the researchers also accounted for issues such as whether greater relative wealth levels in some areas might have produced larger investments in AI-driven technologies regardless of protest patterns. The scholars still reached the same conclusion: facial-recognition technology was being deployed in response to past protests, and then reduced further protest levels. 

“It suggests that the technology is effective in chilling unrest,” Beraja says. 

Finally, the research team studied the effects of increased AI demand on China’s technology sector and found the government’s greater use of facial-recognition tools appears to be driving the country’s tech sector forward. For instance, firms that are granted procurement contracts for facial-recognition technologies subsequently produce about 49 percent more software products in the two years after gaining the government contract than they had beforehand. 

“We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.

Such data — from China’s Ministry of Industry and Information Technology — also indicates that AI-driven tools are not necessarily “crowding out” other kinds of high-tech innovation.

Adding it all up, the case of China indicates how autocratic governments can potentially reach a near-equilibrium state in which their political power is enhanced, rather than upended, when they harness technological advances.

“In this age of AI, when the technologies not only generate growth but are also technologies of repression, they can be very useful” to authoritarian regimes, Beraja says. 

The finding also bears on larger questions about forms of government and economic growth. A significant body of scholarly research shows that rights-granting democratic institutions do generate greater economic growth over time, in part by creating better conditions for technological innovation. Beraja notes that the current study does not contradict those earlier findings, but in examining the effects of AI in use, it does identify one avenue through which authoritarian governments can generate more growth than they otherwise would have. 

“This may lead to cases where more autocratic institutions develop side by side with growth,” Beraja adds. 

Other experts in the societal applications of AI say the paper makes a valuable contribution to the field. 

“This is an excellent and important paper that improves our understanding of the interaction between technology, economic success, and political power,” says Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management at the University of Toronto. “The paper documents a positive feedback loop between the use of AI facial-recognition technology to monitor and suppress local unrest in China and the development and training of AI models. This paper is pioneering research in AI and political economy. As AI diffuses, I expect this research area to grow in importance.”

For their part, the scholars are continuing to work on related aspects of this issue. One forthcoming paper of theirs examines the extent to which China is exporting advanced facial-recognition technologies around the world — highlighting a mechanism through which government repression could grow globally.

 

DKFZ deploys StorageMAP to manage 27 petabytes of unstructured data

Tobias Reber, Working Group Leader for Central Servers at the German Cancer Research Center (DKFZ), highlights the value of Datadobi's StorageMAP software in addressing the center's data management challenges. He notes the ease of deployment and the platform's ability to provide a comprehensive view of their data landscape.
 

uOttawa-built models put the age of the universe at 26.7 billion years

Our universe could be twice as old as current estimates, according to a new study that challenges the dominant cosmological model and sheds new light on the so-called “impossible early galaxy problem.”

“Our newly-devised model stretches the galaxy formation time by several billion years, making the universe 26.7 billion years old, and not 13.7 as previously estimated,” says author Rajendra Gupta, adjunct professor of physics in the Faculty of Science at the University of Ottawa.

For years, astronomers and physicists have calculated the age of our universe by measuring the time elapsed since the Big Bang, inferred from the redshift of light coming from distant galaxies, and by studying the oldest stars. In 2021, thanks to new techniques and advances in technology, the age of our universe was estimated at 13.797 billion years using the Lambda-CDM concordance model.
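The scale of that standard estimate can be recovered from a textbook back-of-envelope calculation: the Hubble time, 1/H0, sets the characteristic age of an expanding universe (the precise Lambda-CDM figure of 13.797 billion years includes corrections for how the expansion rate has changed over time, which this sketch omits):

```python
# Back-of-envelope Hubble time: 1 / H0, converted to years.
H0 = 67.7                  # km/s/Mpc, roughly the Planck 2018 value
km_per_Mpc = 3.0857e19     # kilometres in one megaparsec
seconds_per_year = 3.156e7

hubble_time_yr = km_per_Mpc / H0 / seconds_per_year
print(f"{hubble_time_yr / 1e9:.1f} billion years")  # → 14.4 billion years
```

The result lands within about 5 percent of the full Lambda-CDM age, which is why any model that doubles the inferred age, as Gupta's does, must reinterpret the redshift data itself rather than merely adjust these constants.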

However, many scientists have been puzzled by the existence of stars, like the Methuselah star, that appear to be older than the estimated age of our universe, and by the James Webb Space Telescope's discovery of early galaxies in an advanced state of evolution. These galaxies, existing a mere 300 million years after the Big Bang, appear to have a level of maturity and mass typically associated with billions of years of cosmic evolution. Furthermore, they’re surprisingly small in size, adding another layer of mystery to the equation.

Zwicky’s tired light theory proposes that the redshift of light from distant galaxies is due to the gradual loss of energy by photons over vast cosmic distances. However, it was seen to conflict with observations. Yet Gupta found that “by allowing this theory to coexist with the expanding universe, it becomes possible to reinterpret the redshift as a hybrid phenomenon, rather than purely due to expansion.”

In addition to Zwicky’s tired light theory, Gupta introduces the idea of evolving “coupling constants,” as hypothesized by Paul Dirac. Coupling constants are fundamental physical constants that govern the interactions between particles. According to Dirac, these constants might have varied over time. By allowing them to evolve, the timeframe for forming early galaxies observed by the Webb telescope at high redshifts can be extended from a few hundred million years to several billion years. This provides a more feasible explanation for the advanced level of development and mass observed in these ancient galaxies.

Moreover, Gupta suggests that the traditional interpretation of the “cosmological constant,” which represents dark energy responsible for the universe's accelerating expansion, needs revision. Instead, he proposes a constant that accounts for the evolution of the coupling constants. This modification in the cosmological model helps address the puzzle of small galaxy sizes observed in the early universe, allowing for more accurate observations.

Volcano erupting near El Paso, La Palma, Spain  Credit: Andreas Weibel via Getty Images

Cambridge's simulations show the impacts of volcanic eruptions on climate are underestimated

Researchers have found that the cooling effect that volcanic eruptions have on Earth's surface temperature is likely underestimated by a factor of two, and potentially as much as a factor of four, in common climate projections.

While this effect is far from enough to offset the effects of global temperature rise caused by human activity, the researchers, led by the University of Cambridge, say that small-magnitude eruptions are responsible for as much as half of all the sulfur gases emitted into the upper atmosphere by volcanoes. 

The results suggest that improving the representation of volcanic eruptions of all magnitudes will in turn make climate projections more robust.

Where and when a volcano erupts is not something that humans can control, but volcanoes do play an important role in the global climate system. When volcanoes erupt, they can spew sulfur gases into the upper atmosphere, which form tiny particles called aerosols that reflect sunlight into space. For very large eruptions, such as Mount Pinatubo in 1991, the volume of volcanic aerosols is so large that it single-handedly causes global temperatures to drop.

However, these large eruptions only happen a handful of times per century, while small-magnitude eruptions happen every year or two.  

“Compared with the greenhouse gases emitted by human activity, the effect that volcanoes have on the global climate is relatively minor, but it’s important that we include them in climate models, to accurately assess temperature changes in the future,” said May Chim, a Ph.D. candidate in the Yusuf Hamied Department of Chemistry.

Standard climate projections, such as those underpinning the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report, assume that explosive volcanic activity over 2015–2100 will be at the same level as in the 1850–2014 period, and overlook the effects of small-magnitude eruptions.

“These projections mostly rely on ice cores to estimate how volcanoes might affect the climate, but smaller eruptions are too small to be detected in ice-core records,” said Chim. “We wanted to make a better use of satellite data to fill the gap and account for eruptions of all magnitudes.”

Using the latest ice-core and satellite records, Chim and her colleagues from the University of Exeter, the German Aerospace Center (DLR), the Ludwig-Maximilians University of Munich, Durham University, and the UK Met Office, generated 1000 different scenarios of future volcanic activity. They selected scenarios representing lower, median, and high levels of volcanic activity, and then performed climate simulations using the UK Earth System Model.

Their simulations show that the impacts of volcanic eruptions on climate, including global surface temperature, sea level, and sea ice extent, are underestimated because current climate projections largely underestimate the plausible future level of volcanic activity.

For the median future scenario, they found that the effect of volcanoes on the atmosphere, known as volcanic forcing, is being underestimated in climate projections by as much as 50%, due in large part to the effect of small-magnitude eruptions.

“We found that not only is volcanic forcing being underestimated, but small-magnitude eruptions are actually responsible for as much as half of the volcanic forcing,” said Chim. “These small-magnitude eruptions may not have a measurable effect individually, but collectively, their effect is significant.

“I was surprised to see just how important these small-magnitude eruptions are – we knew they had an effect, but we didn’t know it was so large.”

Although the cooling effect of volcanoes is being underestimated in climate projections, the researchers stress that it does not compare with human-generated carbon emissions.

“Volcanic aerosols in the upper atmosphere typically stay in the atmosphere for a year or two, whereas carbon dioxide stays in the atmosphere for much, much longer,” said Chim. “Even if we had a period of extraordinarily high volcanic activity, our simulations show that it wouldn’t be enough to stop global warming. It’s like a passing cloud on a hot, sunny day: the cooling effect is only temporary.”
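The "passing cloud" point follows from the short residence time of stratospheric aerosol. As a purely illustrative sketch — treating removal as a simple exponential decay with an assumed e-folding time of one year, at the short end of the "year or two" Chim quotes — the fraction of an aerosol pulse still aloft falls off quickly:

```python
import math

TAU_YEARS = 1.0  # assumed e-folding residence time (illustrative)

def aerosol_fraction_remaining(t_years: float) -> float:
    # Fraction of an initial aerosol pulse still in the stratosphere
    # after t_years, under simple exponential removal.
    return math.exp(-t_years / TAU_YEARS)

for t in (1, 2, 5):
    print(t, round(aerosol_fraction_remaining(t), 3))
```

Under this assumption, under 14 percent of the pulse remains after two years and well under 1 percent after five, whereas a CO2 perturbation persists for centuries — hence the asymmetry Chim describes.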

The researchers say that fully accounting for the effect of volcanoes can help make climate projections more robust. They are now using their simulations to investigate whether future volcanic activity could threaten the recovery of the Antarctic ozone hole, and in turn, maintain a relatively high level of harmful ultraviolet radiation at the Earth’s surface.

The research was supported in part by the Croucher Foundation and The Cambridge Commonwealth, European & International Trust, the European Union, and the Natural Environment Research Council (NERC), part of UK Research and Innovation (UKRI).

Artificial intelligence brain  Credit: Andriy Onufriyenko via Getty Images

Cambridge builds new type of memory that could greatly reduce energy use, improve performance

Researchers have developed a new design for computer memory that could both greatly improve performance and reduce the energy demands of internet and communications technologies, which are predicted to consume nearly a third of global electricity within the next ten years. 

The researchers, led by the University of Cambridge, developed a device that processes data in a similar way to the synapses in the human brain. The devices are based on hafnium oxide, a material already used in the semiconductor industry, and on tiny self-assembled barriers, which can be raised or lowered to allow electrons to pass.

This method of changing the electrical resistance in computer memory devices, and allowing information processing and memory to exist in the same place, could lead to the development of computer memory devices with far greater density, higher performance, and lower energy consumption.

Our data-hungry world has led to a ballooning of energy demands, making it ever more difficult to reduce carbon emissions. Within the next few years, artificial intelligence, internet usage, algorithms, and other data-driven technologies are expected to consume more than 30% of global electricity.  

“To a large extent, this explosion in energy demands is due to shortcomings of current computer memory technologies,” said Dr Markus Hellenbrand, from Cambridge’s Department of Materials Science and Metallurgy. “In conventional computing, there’s memory on one side and processing on the other, and data is shuffled back and forth between the two, which takes both energy and time.”

One potential solution to the problem of inefficient computer memory is a new type of technology known as resistive switching memory. Conventional memory devices are capable of two states: one or zero. A functioning resistive switching memory device, however, would be capable of a continuous range of states – computer memory devices based on this principle would be capable of far greater density and speed.

“A typical USB stick based on the continuous range would be able to hold between ten and 100 times more information, for example,” said Hellenbrand.
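One way to see where gains of that order can come from is bits per cell: a cell that can be reliably set to 2^k distinct resistance levels stores k bits, versus 1 bit for a conventional binary cell. The sketch below illustrates only this bits-per-cell contribution — reaching the 100-times end of Hellenbrand's range would additionally require gains in device density, which this arithmetic does not cover:

```python
import math

def bits_per_cell(levels: int) -> float:
    # A memory cell distinguishing `levels` resistance states
    # stores log2(levels) bits of information.
    return math.log2(levels)

for levels in (2, 4, 16, 1024):
    print(levels, "levels ->", bits_per_cell(levels), "bits")
```

A cell resolving 1,024 levels stores 10 bits — ten times a binary cell — which matches the lower end of the quoted range.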

Hellenbrand and his colleagues developed a prototype device based on hafnium oxide, an insulating material that is already used in the semiconductor industry. The issue with using this material for resistive switching memory applications is known as the uniformity problem. At the atomic level, hafnium oxide has no structure, with the hafnium and oxygen atoms randomly mixed, making it challenging to use for memory applications.

However, the researchers found that by adding barium to thin films of hafnium oxide, some unusual structures started to form, perpendicular to the hafnium oxide plane, in the composite material.

These vertical barium-rich ‘bridges’ are highly structured, and allow electrons to pass through, while the surrounding hafnium oxide remains unstructured. At the point where these bridges meet the device contacts, an energy barrier is created, which electrons can cross. The researchers were able to control the height of this barrier, which in turn changes the electrical resistance of the composite material.

“This allows multiple states to exist in the material, unlike conventional memory which has only two states,” said Hellenbrand.

Unlike other composite materials, which require expensive high-temperature manufacturing methods, these hafnium oxide composites self-assemble at low temperatures. The composite material showed high levels of performance and uniformity, making it highly promising for next-generation memory applications.

A patent on the technology has been filed by Cambridge Enterprise, the University’s commercialization arm.

“What’s really exciting about these materials is they can work like a synapse in the brain: they can store and process information in the same place, like our brains can, making them highly promising for the rapidly growing AI and machine learning fields,” said Hellenbrand.

The researchers are now working with industry to carry out larger feasibility studies on the materials, to understand more clearly how the high-performance structures form. Since hafnium oxide is a material already used in the semiconductor industry, the researchers say it would not be difficult to integrate it into existing manufacturing processes.

The research was supported in part by the U.S. National Science Foundation and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).