QuTech takes important step in quantum supercomputing with error correction

Researchers at QuTech, a collaboration between TU Delft, the oldest and largest Dutch public technical university, and TNO, the Netherlands Organisation for Applied Scientific Research, have reached a milestone in quantum error correction. They have integrated high-fidelity operations on encoded quantum data with a scalable scheme for repeated data stabilization.

Image: artistic rendering of a seven-transmon superconducting quantum processor similar to the one used in this work. Credit: DiCarlo Lab and Marieke de Lorijn.

Physical quantum bits, or qubits, are vulnerable to errors. These errors arise from various sources, including quantum decoherence, crosstalk, and imperfect calibration. Fortunately, the theory of quantum error correction establishes that it is possible to compute while simultaneously protecting quantum data from such errors.

“Two capabilities will distinguish an error-corrected quantum computer from present-day noisy intermediate-scale quantum (NISQ) processors”, says Prof Leonardo DiCarlo of QuTech. “First, it will process quantum information encoded in logical qubits rather than in physical qubits (each logical qubit consisting of many physical qubits). Second, it will use quantum parity checks interleaved with computation steps to identify and correct errors occurring in the physical qubits, safeguarding the encoded information as it is being processed.”

According to theory, the logical error rate can be exponentially suppressed provided that the incidence of physical errors is below a threshold and the circuits for logical operations and stabilization are fault-tolerant.
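
To make the parity-check idea concrete, here is a minimal classical toy in Python, assuming nothing about the QuTech device itself (which uses superconducting transmons and genuine quantum stabilizer measurements): a distance-3 bit-flip repetition code in which two parity checks locate the most likely single error and a lookup table corrects it.

```python
# Toy illustration of parity-check error correction (not the QuTech
# experiment): a distance-3 bit-flip repetition code, simulated
# classically. Each round, independent bit flips occur with probability
# p; the parity checks on qubits (0,1) and (1,2) flag which neighbors
# disagree, and a lookup table undoes the most likely single error.
import random

def run_round(p: float) -> bool:
    """Return True if the logical bit survives one correction round."""
    errors = [random.random() < p for _ in range(3)]   # physical bit flips
    s01 = errors[0] ^ errors[1]                        # parity check on 0,1
    s12 = errors[1] ^ errors[2]                        # parity check on 1,2
    # Decode: syndrome -> most likely single qubit to flip back.
    correction = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]
    if correction is not None:
        errors[correction] ^= True
    # A logical error remains iff a majority of qubits are still flipped.
    return sum(errors) < 2

for p in (0.01, 0.05, 0.10):
    trials = 100_000
    survived = sum(run_round(p) for _ in range(trials))
    print(f"p = {p:.2f}: logical error rate ≈ {1 - survived / trials:.4f}")
```

For small p the surviving logical error rate scales as roughly 3p², i.e. below the physical rate, which is the essence of why redundancy helps.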

All the operations

The basic idea is thus that if you increase the redundancy, using more and more physical qubits to encode the data, the net error rate goes down. The researchers at TU Delft, together with colleagues at TNO, have now taken a major step toward this goal, realizing a logical qubit consisting of seven physical qubits (superconducting transmons). “We show that we can do all the operations required for computation with the encoded information. This integration of high-fidelity logical operations with a scalable scheme for repeated stabilization is a key step in quantum error correction”, says Prof Barbara Terhal, also of QuTech.

First author and Ph.D. candidate Jorge Marques further explains: “Until now, researchers have encoded and stabilized. We now show that we can compute as well. This is what a fault-tolerant computer must ultimately do: process and protect data from errors all at the same time. We do three types of logical-qubit operations: initializing the logical qubit in any state, transforming it with gates, and measuring it. We show that all operations can be done directly on encoded information. For each type, we observe higher performance for fault-tolerant variants over non-fault-tolerant variants.” Fault-tolerant operations are key to reducing the build-up of physical-qubit errors into logical-qubit errors.

Long term

DiCarlo emphasizes the multidisciplinary nature of the work: “This is a combined effort of experimental physics, theoretical physics from Barbara Terhal’s group, and also electronics developed with TNO and external collaborators. The project is mainly funded by IARPA and Intel Corporation.”

“Our grand goal is to show that as we increase encoding redundancy, the net error rate decreases exponentially”, DiCarlo concludes. “Our current focus is on 17 physical qubits and next up will be 49. All layers of our quantum computer’s architecture were designed to allow this scaling.”
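
A hedged back-of-the-envelope sketch of that scaling claim: for a simple distance-d majority-vote repetition code (threshold 50 percent), the logical error rate falls off exponentially with d once the physical error rate is below threshold. The surface codes targeted at QuTech have a much lower threshold, and the 7, 17 and 49 in the article are physical-qubit totals rather than code distances, but the qualitative behavior is the same.

```python
# Exact logical error rate of a distance-d majority-vote repetition
# code: a logical error occurs when a majority of the d bits flip.
from math import comb

def logical_error_rate(p: float, d: int) -> float:
    """Probability that a majority of d bits flip, each with prob p."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

# With a 5% physical error rate, the logical rate shrinks rapidly
# as the code grows (values purely illustrative).
for d in (3, 7, 17, 49):
    print(f"d = {d:2d}: p_L = {logical_error_rate(0.05, d):.2e}")
```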


NASA supercomputing shows 2021 tied for 6th warmest year in continued trend

Earth’s global average surface temperature in 2021 tied with 2018 as the sixth warmest on record, according to independent analyses done by NASA and the National Oceanic and Atmospheric Administration (NOAA).

Image: 2021 tied for the sixth warmest year in NASA’s record, which stretches back more than a century. Because the record is global, not every place on Earth experienced the sixth warmest year on record; some places had record-high temperatures, and record droughts, floods and fires struck around the globe. Credit: NASA’s Scientific Visualization Studio/Kathryn Mersmann.

Continuing the planet’s long-term warming trend, global temperatures in 2021 were 1.5 degrees Fahrenheit (0.85 degrees Celsius) above the average for NASA’s baseline period, according to scientists at NASA’s Goddard Institute for Space Studies (GISS) in New York. NASA uses the period from 1951-1980 as a baseline to see how global temperature changes over time.

Collectively, the past eight years are the warmest since modern recordkeeping began in 1880. This annual temperature data makes up the global temperature record – which tells scientists the planet is warming. 

According to NASA’s temperature record, Earth in 2021 was about 1.9 degrees Fahrenheit (about 1.1 degrees Celsius) warmer than the late 19th-century average, around the start of the industrial revolution.

“Science leaves no room for doubt: Climate change is the existential threat of our time,” said NASA Administrator Bill Nelson. “Eight of the top 10 warmest years on our planet occurred in the last decade, an indisputable fact that underscores the need for bold action to safeguard the future of our country – and all of humanity. NASA’s scientific research about how Earth is changing and getting warmer will guide communities throughout the world, helping humanity confront climate change and mitigate its devastating effects.”

This warming trend around the globe is due to human activities that have increased emissions of carbon dioxide and other greenhouse gases into the atmosphere. The planet is already seeing the effects of global warming: Arctic sea ice is declining, sea levels are rising, wildfires are becoming more severe and animal migration patterns are shifting. Understanding how the planet is changing – and how rapidly that change occurs – is crucial for humanity to prepare for and adapt to a warmer world. 


Weather stations, ships, and ocean buoys around the globe record the temperature at Earth’s surface throughout the year. These ground-based measurements of surface temperature are validated with satellite data from the Atmospheric Infrared Sounder (AIRS) on NASA’s Aqua satellite. Scientists analyze these measurements using algorithms, run on supercomputers, that handle uncertainties in the data and apply quality control, and calculate the global average surface temperature difference for every year. NASA compares that global mean temperature to its baseline period of 1951-1980. That baseline includes climate patterns and unusually hot or cold years due to other factors, ensuring that it encompasses natural variations in Earth’s temperature.
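
A minimal sketch of the anomaly arithmetic described above, using fabricated single-station data (the real GISTEMP analysis ingests millions of observations and applies far more careful quality control, homogenization and gridding):

```python
import statistics

# Hypothetical annual mean temperatures (°C) for one station, 1951-2021.
years = list(range(1951, 2022))
temps = [14.0 + 0.012 * (y - 1951) for y in years]  # made-up warming trend

# Baseline: the 1951-1980 mean, as in NASA's analysis.
baseline = statistics.mean(t for y, t in zip(years, temps) if y <= 1980)

# Anomaly for each year = departure from the baseline mean.
anomalies = {y: t - baseline for y, t in zip(years, temps)}
print(f"2021 anomaly: {anomalies[2021]:+.2f} °C relative to 1951-1980")
```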

Many factors affect the average temperature in any given year, such as the La Niña and El Niño climate patterns in the tropical Pacific. For example, 2021 was a La Niña year, and NASA scientists estimate that it may have cooled global temperatures by about 0.06 degrees Fahrenheit (0.03 degrees Celsius) relative to what the average would otherwise have been.

A separate, independent analysis by NOAA also concluded that the global surface temperature for 2021 was the sixth-highest since record-keeping began in 1880. NOAA scientists use much of the same raw temperature data in their analysis but apply a different baseline period (1901-2000) and a different methodology.
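
Because anomalies are departures from a baseline mean, switching baseline periods merely shifts every value by a constant, which is one reason the two agencies’ trends agree even when their headline numbers differ slightly. A hypothetical illustration (both baseline means below are made up):

```python
def rebaseline(anomaly: float, from_mean: float, to_mean: float) -> float:
    """Re-express an anomaly against a different baseline period's mean."""
    return anomaly + (from_mean - to_mean)

# If, hypothetically, the 1901-2000 mean were 0.1 °C cooler than the
# 1951-1980 mean, a +0.85 °C anomaly on the NASA-style baseline would
# read +0.95 °C on the NOAA-style one.
print(rebaseline(0.85, from_mean=14.0, to_mean=13.9))  # ≈ 0.95
```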

“The complexity of the various analyses doesn’t matter because the signals are so strong,” said Gavin Schmidt, director of GISS, NASA’s leading center for climate modeling and climate change research. “The trends are all the same because the trends are so large.”

NASA’s full dataset of global surface temperatures for 2021, as well as details of how NASA scientists conducted the analysis, are publicly available from GISS.

RIKEN shows how the free-energy principle explains the brain

The RIKEN Center for Brain Science (CBS) in Japan, along with colleagues, has shown that the free-energy principle can explain how neural networks are optimized for efficiency. The study first shows how the free-energy principle underlies any neural network that minimizes energy cost. Then, as a proof of concept, it shows how an energy-minimizing neural network can solve mazes. This finding will be useful for analyzing impaired brain function in thought disorders as well as for generating optimized neural networks for artificial intelligence.

Image: general view of a solved maze. The maze comprises a discrete state space, wherein white and black cells indicate pathways and walls, respectively; the blue path is the trajectory. Starting from the left, the agent must reach the right edge of the maze within a certain number of steps (time). The maze was solved following the free-energy principle.

Biological optimization is a natural process that makes our bodies and behavior as efficient as possible. A behavioral example can be seen in the transition that cats make from running to galloping. Far from being random, the switch occurs precisely at the speed when the amount of energy it takes to gallop becomes less than it takes to run. In the brain, neural networks are optimized to allow efficient control of behavior and transmission of information, while still maintaining the ability to adapt and reconfigure to changing environments.

As with the simple cost/benefit calculation that can predict the speed at which a cat will begin to gallop, researchers at RIKEN CBS are trying to discover the basic mathematical principles that underlie how neural networks self-optimize. The key is the free-energy principle, which builds on a concept called Bayesian inference: an agent’s model of the world is continually updated by new incoming sensory data as well as by its own past outputs, or decisions. The researchers compared the free-energy principle with well-established rules that govern how the strength of neural connections within a network is altered by changes in sensory input.
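
For readers who want the Bayesian connection spelled out, here is a toy numerical sketch (our own construction, not the paper’s model): for a binary hidden state s with prior p(s) and likelihood p(o|s), the variational free energy F(q) = E_q[ln q(s) − ln p(o, s)] is minimized exactly when the agent’s belief q matches the Bayesian posterior p(s|o).

```python
import numpy as np

prior = np.array([0.7, 0.3])        # p(s): hypothetical prior belief
likelihood = np.array([0.2, 0.9])   # p(o=1 | s) for each hidden state s
joint = likelihood * prior          # p(o=1, s)
posterior = joint / joint.sum()     # exact Bayes: p(s | o=1)

def free_energy(q0: float) -> float:
    """Variational free energy of the belief q = [q0, 1 - q0]."""
    q = np.array([q0, 1 - q0])
    return float(np.sum(q * (np.log(q) - np.log(joint))))

# Minimize F by brute-force grid search over beliefs.
qs = np.linspace(0.001, 0.999, 999)
best = qs[np.argmin([free_energy(q) for q in qs])]
print(f"argmin_q F = {best:.3f}, Bayesian posterior = {posterior[0]:.3f}")
```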

“We were able to demonstrate that standard neural networks, which feature delayed modulation of Hebbian plasticity, perform planning and adaptive behavioral control by taking their previous ‘decisions’ into account,” says first author and Unit Leader Takuya Isomura. “Importantly, they do so the same way that they would when following the free-energy principle.”

Once they had established that neural networks theoretically follow the free-energy principle, the researchers tested the theory using simulations. The neural networks self-organized by changing the strength of their neural connections and associating past decisions with future outcomes. Viewed as agents governed by the free-energy principle, the networks learned the correct route through a maze by trial and error, in a statistically optimal manner.
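
To give a flavor of what “delayed modulation of Hebbian plasticity” can mean, here is a heavily simplified stand-in (our sketch, not the authors’ networks): the agent accumulates Hebbian coactivity traces while moving through a small corridor maze, and only a delayed end-of-episode reward signal modulates how those traces change the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2           # corridor of 5 cells; actions: left, right
GOAL = 4                             # reward when the right edge is reached
W = np.zeros((N_STATES, N_ACTIONS))  # state -> action connection weights

def act(state: int) -> int:
    """Sample an action from a softmax over the outgoing weights."""
    logits = W[state]
    p = np.exp(logits - logits.max())
    return int(rng.choice(N_ACTIONS, p=p / p.sum()))

for episode in range(300):
    state, steps, trace = 0, 0, np.zeros_like(W)
    while state != GOAL and steps < 30:
        a = act(state)
        trace[state, a] += 1.0       # Hebbian pre*post coactivity trace
        state = max(0, state - 1) if a == 0 else min(N_STATES - 1, state + 1)
        steps += 1
    reward = 1.0 if state == GOAL else -1.0  # delayed modulatory signal
    W += 0.1 * reward * trace                # modulated Hebbian update

print("Learned preference for 'right':", np.round(W[:, 1] - W[:, 0], 2))
```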

These findings point toward a set of universal mathematical rules that describe how neural networks self-optimize. As Isomura explains, “Our findings guarantee that an arbitrary neural network can be cast as an agent that obeys the free-energy principle, providing a universal characterization for the brain.” These rules, along with the researchers’ new reverse-engineering technique, can be used to study neural networks for decision-making in people with thought disorders such as schizophrenia and to predict which aspects of their neural networks have been altered.

Another practical use for these universal mathematical rules could be in the field of artificial intelligence, especially those that designers hope will be able to efficiently learn, predict, plan, and make decisions. “Our theory can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks, which will be important for next-generation artificial intelligence,” says Isomura.

JAIST supercomputer shows novel crystal structure for hydrogen under high pressure

Japanese researchers identify a potential crystal phase for hydrogen solidified at extreme pressures using data science and supercomputer simulations 

Elements in the periodic table can take up multiple forms. Carbon, for example, exists as diamond or graphite depending on the environmental conditions at the time of formation. Crystal structures that form in ultra-high-pressure environments are particularly important, as they provide clues to the formation of planets. However, recreating such environments in a laboratory is difficult, and materials scientists often rely on simulation predictions to identify the existence of such structures.

Image: a new crystal structure (atomic arrangement pattern) called the P21/c-8 type, which is predicted to form under very high pressure, such as deep inside the Earth. Credit: Ryo Maezono, JAIST.

In this regard, hydrogen is especially important for analyzing the distribution of matter in the universe and the behavior of giant gas planets. However, the crystal structures of solid hydrogen formed under high pressure remain a matter of contention, owing to the difficulty of conducting experiments with high-pressure hydrogen. Moreover, the structural pattern is governed by a delicate balance of factors, including the electric forces on the electrons and the fluctuations imposed by quantum mechanics; for hydrogen, these fluctuations are particularly large, making predictions of its crystal phases even more difficult.

Recently, in a collaborative study published in Physical Review B, a global team of researchers involving Professor Ryo Maezono and Associate Professor Kenta Hongo from Japan Advanced Institute of Science and Technology tackled this problem using an ingenious combination of supercomputer simulations and data science, revealing various crystal structures for hydrogen at low temperatures near 0 K and high pressures.

“For crystal structures under high pressure, we have been able to generate several candidate patterns using recent data-science methods such as genetic algorithms. But whether these candidates are truly the phases that survive under high pressure can only be determined by high-resolution simulations,” explains Prof. Maezono.

Accordingly, the team searched for the possible structures that can be formed by 2 to 70 hydrogen atoms at high pressures of 400 to 600 gigapascals (GPa), using a technique called “particle swarm optimization” together with density functional theory (DFT) calculations, and estimated the relative stability of the candidates using the first-principles quantum Monte Carlo method with DFT zero-point energy corrections.
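
To give a flavor of the particle-swarm step named above, here is a minimal PSO sketch on a toy two-dimensional energy surface (a stand-in only; the actual study scored candidate hydrogen structures with DFT and quantum Monte Carlo):

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x: np.ndarray) -> np.ndarray:
    """Toy stand-in for a structure's energy; minimum near (3, -2)."""
    return (x[:, 0] - 3.0) ** 2 + (x[:, 1] + 2.0) ** 2 + np.sin(5 * x[:, 0])

n, dim = 30, 2
pos = rng.uniform(-10, 10, (n, dim))        # candidate "structures"
vel = np.zeros((n, dim))
pbest, pbest_e = pos.copy(), energy(pos)    # each particle's personal best
gbest = pbest[pbest_e.argmin()]             # swarm-wide best so far

for _ in range(200):
    # Velocities blend inertia, pull toward personal best, and pull
    # toward the swarm best (standard PSO update).
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    e = energy(pos)
    better = e < pbest_e
    pbest[better], pbest_e[better] = pos[better], e[better]
    gbest = pbest[pbest_e.argmin()]

print("Lowest-energy point found:", np.round(gbest, 3))
```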

The search produced 10 possible crystal structures that had not previously been found in experiments, including nine molecular crystals and one mixed structure, Pbam-8, comprising alternating atomic and molecular crystal layers. However, all 10 structures proved dynamically unstable. To obtain a stable structure, the team relaxed Pbam-8 along its direction of instability, forming a new, dynamically stable structure called P21/c-8. “The new structure is a promising candidate for the solid hydrogen phase realized under high-pressure conditions such as those found deep within the Earth,” says Dr. Hongo.

The new structure was found to be more stable than Cmca-12, a structure previously identified as a valid candidate for the H2-PRE phase, one of the six structural phases identified for solid hydrogen at high pressure (360 to 495 GPa) that is stable near 0 K. The team further validated their result by comparing the infrared spectra of the two structures, which revealed a similar pattern to that typically observed for the H2-PRE phase.

Prof. Maezono explains the significance of the results: “The hydrogen crystal problem is one of the most challenging and intractable problems in materials science. Depending on the type of approximation used, the predictions can vary greatly, and avoiding approximations is a typical challenge. With our result now verified, we can continue our research on other structure-prediction problems, such as those for silicon and magnesium compounds, which have a significant impact on earth and planetary science.”