Brown University physicist discovers a strange metal that could lead to quantum supercomputers, deep insights

Discovery could help scientists to understand “strange metals,” a class of materials that are related to high-temperature superconductors and share fundamental quantum attributes with black holes.

Scientists understand quite well how temperature affects electrical conductance in most everyday metals like copper or silver. But in recent years, researchers have turned their attention to a class of materials that do not seem to follow the traditional electrical rules. Understanding these so-called “strange metals” could provide fundamental insights into the quantum world, and potentially help scientists understand strange phenomena like high-temperature superconductivity. Using a material called yttrium barium copper oxide arrayed with tiny holes, researchers have discovered "strange metal" behavior in a type of system where charge carriers are bosons, something that's never been seen before.

Now, a research team co-led by a Brown University physicist has added a discovery to the strange metal mix. The team found strange metal behavior in a material in which electrical charge is carried not by electrons, but by more “wave-like” entities called Cooper pairs.

While electrons belong to a class of particles called fermions, Cooper pairs act as bosons, which follow very different rules from fermions. This is the first time strange metal behavior has been seen in a bosonic system, and researchers are hopeful that the discovery might help find an explanation for how strange metals work — something that has eluded scientists for decades.

“We have these two fundamentally different types of particles whose behaviors converge around a mystery,” said Jim Valles, a professor of physics at Brown and the study’s corresponding researcher. “What this says is that any theory to explain strange metal behavior can’t be specific to either type of particle. It needs to be more fundamental than that.”

Strange metals

Strange metal behavior was first discovered around 30 years ago in a class of materials called cuprates. These copper-oxide materials are most famous for being high-temperature superconductors, meaning they conduct electricity with zero resistance at temperatures far above those of conventional superconductors. But even above the critical temperature for superconductivity, cuprates behave strangely compared to other metals.

As their temperature increases, cuprates’ resistance increases in a strictly linear fashion. In normal metals, the resistance rises only so far, leveling off at high temperatures in accord with what’s known as Fermi liquid theory. Resistance arises when electrons flowing through a metal scatter off the metal’s vibrating atomic lattice. Fermi liquid theory sets a maximum rate at which such electron scattering can occur. But strange metals don’t follow the Fermi liquid rules, and no one is sure how they work. What scientists do know is that the temperature-resistance relationship in strange metals appears to be governed by two fundamental constants of nature: Boltzmann’s constant, which relates temperature to the energy of random thermal motion, and Planck’s constant, which relates the frequency of a photon (a particle of light) to its energy.
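Concretely, this relationship is often summarized by the so-called “Planckian” scattering rate, a standard expression in the strange-metal literature (quoted here for context, not taken from this study):

```latex
% Planckian dissipation: the scattering time \tau is set only by
% temperature and fundamental constants, giving resistivity linear in T.
\frac{1}{\tau} \sim \frac{k_B T}{\hbar}
\qquad\Longrightarrow\qquad
\rho(T) \propto T
```

Here \(k_B\) is Boltzmann’s constant, \(\hbar\) the reduced Planck constant, and \(T\) the temperature: the hotter the material, the faster the scattering, with no material-specific details entering the rate.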

“To try to understand what’s happening in these strange metals, people have applied mathematical approaches similar to those used to understand black holes,” Valles said. “So there are some very fundamental physics happening in these materials.”

Of bosons and fermions

In recent years, Valles and his colleagues have been studying electrical activity in which the charge carriers are not electrons. In 1956, Nobel laureate Leon Cooper, now a Brown professor emeritus of physics, showed that in conventional superconductors (not the high-temperature kind discovered later), electrons team up to form Cooper pairs, which can glide through an atomic lattice with no resistance. Despite being formed from two electrons, which are fermions, Cooper pairs can act like bosons.

“Fermion and boson systems usually behave very differently,” Valles said. “Unlike individual fermions, bosons are allowed to share the same quantum state, which means they can move collectively like water molecules in the ripples of a wave.”

In 2019, Valles and his colleagues showed that Cooper pair bosons can produce metallic behavior, meaning they can conduct electricity with some amount of resistance. That in itself was a surprising finding, the researchers say, because elements of quantum theory suggested that the phenomenon shouldn’t be possible. For this latest research, the team wanted to see if bosonic Cooper-pair metals were also strange metals.

The team used a cuprate material called yttrium barium copper oxide patterned with tiny holes that induce the Cooper-pair metallic state. They cooled the material to just above its superconducting transition temperature and tracked changes in its conductance. They found that, just as in fermionic strange metals, the Cooper-pair metal’s resistance scales linearly with temperature.

The researchers say this discovery will give theorists something new to chew on as they try to understand strange metal behavior.

“It’s been a challenge for theoreticians to come up with an explanation for what we see in strange metals,” Valles said. “Our work shows that if you’re going to model charge transport in strange metals, that model must apply to both fermions and bosons — even though these types of particles follow fundamentally different rules.”

Ultimately, a theory of strange metals could have massive implications. Strange metal behavior could hold the key to understanding high-temperature superconductivity, which has vast potential for things like lossless power grids and quantum supercomputers. And because strange metal behavior seems to be related to fundamental constants of the universe, understanding their behavior could shed light on basic truths of how the physical world works.

Chinese astronomers reveal a lighter Milky Way based on Gaia EDR3 data

The mass of the Milky Way is a fundamental quantity in modern astrophysics and cosmology that has a direct impact on many astrophysical problems.

With the combination of high-precision data from Gaia EDR3 and a new-generation dynamical modeling method, an international research team has found that the total mass of the Milky Way ranges from 500 to 800 billion solar masses, indicating a lighter Milky Way than many previous measurements suggested.

The study was led by Chinese astronomer WANG Jianling from the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC), in collaboration with Francois Hammer and YANG Yanbin from the Paris Observatory under the framework of the Sino-French collaborative "Tianguan" project.

Previous studies of Galactic dynamics were limited by two factors: either they were based on data sets too small to avoid large uncertainties, or the tracers they used lacked complete information. The latter can be a serious problem, since simple hypotheses about the equilibrium of some distant tracers have been used, introducing unknown systematic errors.

We are now entering a golden era of Galactic archeology, with progress in large-scale spectroscopic surveys and high-precision proper-motion measurements from the Gaia satellite providing a huge amount of high-quality data. These data overcome many of the difficulties mentioned above, in particular by supplying full six-dimensional phase-space information (positions and velocities) for high-precision tracers.

Using these unprecedented data to study how our Milky Way and its halo are structured and how they were assembled is a central task facing astronomers, and dynamical modeling with supercomputers is the central tool for accomplishing it.

The astronomers used a new-generation analytical technique: action-based, distribution-function dynamical modeling. From it they derived the distribution of the Milky Way’s baryonic and dark matter mass, which in turn yields an accurate total mass for the Galaxy.

Thanks to Gaia’s precise proper motions, they derived the most precise kinematic information to date for around 150 Galactic globular clusters. They combined this with the accurate rotation curve of the disk region, also based on Gaia data. The flexible action-based distribution function avoids many of the simplistic assumptions adopted by previous studies, leading to a more realistic distribution function for the tracers and, from it, to the Milky Way’s mass distribution.
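As a rough illustration of how a rotation curve constrains mass, here is a back-of-the-envelope Newtonian estimate (a simple sketch for intuition only, not the team’s action-based modeling; the solar orbital values are assumed round numbers):

```python
# Back-of-the-envelope: mass enclosed within radius r for a circular
# orbital speed v, from Newtonian gravity: M(<r) = v^2 r / G.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m

def enclosed_mass_msun(v_circ_kms: float, r_kpc: float) -> float:
    """Newtonian mass (in solar masses) enclosed within radius r_kpc."""
    v = v_circ_kms * 1e3   # km/s -> m/s
    r = r_kpc * KPC        # kpc -> m
    return v**2 * r / G / M_SUN

# The Sun orbits at roughly 230 km/s about 8 kpc from the Galactic
# center, implying ~1e11 solar masses inside the solar circle. Most of
# the mass (and the dark matter halo) lies farther out, which is why
# distant tracers such as globular clusters are needed for the total.
print(f"{enclosed_mass_msun(230, 8):.2e} solar masses inside 8 kpc")
```

The estimate inside the solar circle is an order of magnitude below the quoted 500–800 billion solar masses, which is exactly the gap the halo tracers are meant to fill.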

N-body simulations and realistic cosmological hydrodynamic simulations were used in this work to quantify any systematics introduced by the passage of the Large Magellanic Cloud, as well as by unrelaxed substructures.

This study has significant implications for cosmological problems and the origin of Milky Way satellites.

Johns Hopkins builds cloud-based platform that opens genomics data to all

Harnessing the power of genomics to find risk factors for major diseases or search for relatives relies on the costly and time-consuming ability to analyze huge numbers of genomes. A team co-led by a Johns Hopkins University computer scientist has leveled the playing field by creating a cloud-based platform that grants genomics researchers easy access to one of the world’s largest genomics databases.

Known as AnVIL (Genomic Data Science Analysis, Visualization, and Informatics Lab-space), the new platform gives any researcher with an Internet connection access to thousands of analysis tools, patient records, and more than 300,000 genomes. The work, a project of the National Human Genome Research Institute (NHGRI), appears today in Cell Genomics.

“AnVIL is inverting the model of genomics data sharing, offering unprecedented new opportunities for science by connecting researchers and datasets in new ways and promising to enable exciting new discoveries,” said project co-leader Michael Schatz, Bloomberg Distinguished Professor of Computer Science and Biology at Johns Hopkins.

Typically genomic analysis starts with researchers downloading massive amounts of data from centralized warehouses to their own data centers, a process that is not only time-consuming, inefficient, and expensive, but also makes collaborating with researchers at other institutions difficult.

“AnVIL will be transformative for institutions of all sizes, especially smaller institutions that don’t have the resources to build their own data centers. It is our hope that AnVIL levels the playing field so that everyone has equal access to make discoveries,” Schatz said.

Genetic risk factors for ailments such as cancer or cardiovascular disease are often very subtle, requiring researchers to analyze thousands of patients’ genomes to discover new associations. The raw data for a single human genome comprises about 40 GB, so downloading thousands of genomes can take several days to several weeks: a single genome fills about 10 DVDs worth of data, so transferring thousands means moving “tens of thousands of DVDs worth of data,” Schatz said.
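The scale of the problem is easy to sketch from the figures quoted above (the 4.7 GB single-layer DVD capacity and the 1 Gbit/s link speed are assumptions for illustration):

```python
# Rough data-volume arithmetic for raw whole-genome sequencing data.
GENOME_GB = 40    # raw data per genome, per the figure quoted above
DVD_GB = 4.7      # single-layer DVD capacity (assumed)

dvds_per_genome = GENOME_GB / DVD_GB       # ~8.5, i.e. "about 10 DVDs"
total_tb = 10_000 * GENOME_GB / 1000       # 10,000 genomes -> 400 TB

# Transfer time at a sustained 1 Gbit/s (assumed): ~37 days, consistent
# with the "several days to several weeks" scale described above.
download_days = total_tb * 1e12 * 8 / 1e9 / 86400

print(f"{dvds_per_genome:.1f} DVDs per genome; "
      f"{total_tb:.0f} TB and ~{download_days:.0f} days for 10,000 genomes")
```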

In addition, many studies require integrating data collected at multiple institutions, which means each institution must download its copy while ensuring that patient-data security is maintained. This challenge is expected to become even greater in the future, as researchers embark on ever-larger studies requiring the analysis of hundreds of thousands to millions of genomes at once.

“Connecting to AnVIL remotely eliminates the need for these massive downloads and saves on the overhead,” Schatz says. “Instead of painfully moving data to researchers, we allow researchers to effortlessly move to the data in the cloud. It also makes sharing datasets much easier so that data can be connected in new ways to find new associations, and it simplifies a lot of computing issues, like providing strong encryption and privacy for patient datasets.”

AnVIL also provides researchers with several major analysis tools, including Galaxy, developed in part at Johns Hopkins, along with other popular tools such as R/Bioconductor, Jupyter notebooks, WDLs, Gen3, and Dockstore to support both interactive analysis and large-scale batch computing. Collectively, these tools allow researchers to tackle even the largest studies without having to build out their computing environments.

Researchers from all over the world currently use the platform to study a variety of genetic diseases, including autism spectrum disorders, cardiovascular disease, and epilepsy. Schatz’s team, part of the Telomere-to-Telomere Consortium, used it to reanalyze thousands of human genomes with the new reference genome to discover more than 1 million new variants.

Already, the AnVIL team has collected petabytes of data from several of the largest NHGRI projects, including hundreds of thousands of genomes from the Genotype-Tissue Expression (GTEx), Centers for Mendelian Genetics (CMG), and Centers for Common Disease Genomics (CCDG) projects, with plans to host many more projects soon.

SAIT demos the world's first MRAM-based in-memory computing

Samsung Electronics has announced its demonstration of the world's first in-memory computing based on MRAM (Magnetoresistive Random Access Memory). The research showcases Samsung’s leadership in memory technology and its effort to merge memory and system semiconductors for next-generation artificial intelligence (AI) chips.

The research was led by Samsung Advanced Institute of Technology (SAIT) in close collaboration with Samsung Electronics Foundry Business and Semiconductor R&D Center. First author Dr. Seungchul Jung, Staff Researcher at SAIT, spearheaded the work together with co-corresponding authors Dr. Donhee Ham, Fellow of SAIT and Professor at Harvard University, and Dr. Sang Joon Kim, Vice President of Technology at SAIT.

In the standard computer architecture, data is stored in memory chips and data computing is executed in separate processor chips. In contrast, in-memory computing is a new computing paradigm that seeks to perform both data storage and data computing in a memory network. Since this scheme can process a large amount of data stored within the memory network itself without having to move the data, and also because the data processing in the memory network is executed in a highly parallel manner, power consumption is substantially reduced. In-memory computing has thus emerged as one of the promising technologies to realize next-generation low-power AI semiconductor chips.

For this reason, research on in-memory computing has been intensely pursued worldwide. Non-volatile memories, in particular RRAM (Resistive Random Access Memory) and PRAM (Phase-change Random Access Memory), have been actively used for demonstrating in-memory computing. By contrast, it has so far been difficult to use MRAM ─ another type of non-volatile memory ─ for in-memory computing despite MRAM’s merits such as operation speed, endurance, and large-scale production. This difficulty stems from the low resistance of MRAM, due to which MRAM cannot enjoy the power reduction advantage when used in the standard in-memory computing architecture.

The Samsung Electronics researchers have solved this issue through an architectural innovation. Specifically, they developed an MRAM array chip that demonstrates in-memory computing by replacing the standard current-sum architecture with a new ‘resistance-sum’ architecture, which addresses the problem of the small resistances of individual MRAM devices.
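This toy numerical sketch (idealized unit values, binary weights and inputs; not Samsung’s actual circuit, whose details are in the paper) illustrates why a sum over series resistances can encode the same multiply-accumulate result as the conventional sum over parallel currents:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.integers(0, 2, size=8)   # binary weights stored in 8 cells
inputs  = rng.integers(0, 2, size=8)   # binary input activations

# Current-sum (conventional analog MAC): cells sit in parallel, each
# contributing a current I_i = V_i * G_i; the currents add on a shared
# line, so the total current equals the dot product of inputs and weights.
G_ON, G_OFF = 1.0, 0.0                 # idealized conductances
conductances = np.where(weights == 1, G_ON, G_OFF)
current = np.sum(inputs * conductances)

# Resistance-sum (the idea behind the new architecture): cells sit in
# series, and each activated cell adds its resistance; the summed
# resistance encodes the same dot product, which works even when each
# individual MRAM resistance is small.
R_UNIT = 1.0                           # idealized per-cell resistance
resistances = np.where(weights == 1, R_UNIT, 0.0)
total_resistance = np.sum(inputs * resistances)

# Both read-outs recover the same multiply-accumulate result.
assert current == total_resistance == np.dot(inputs, weights)
```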

Samsung’s research team then tested the performance of this MRAM in-memory computing chip by running AI computing tasks on it. The chip achieved 98% accuracy in classifying handwritten digits and 93% accuracy in detecting faces in scenes.

By ushering MRAM ─ a memory that has already reached commercial-scale embedded production in system semiconductor fabrication ─ into the realm of in-memory computing, this work expands the frontier of next-generation low-power AI chip technologies.

The researchers have also suggested that, beyond in-memory computing, the new MRAM chip could serve as a platform to “download” biological neuronal networks, in line with the neuromorphic electronics vision that Samsung’s researchers recently put forward in a perspective paper.

"In-memory computing draws similarity to the brain in the sense that in the brain, computing also occurs within the network of biological memories, or synapses, the points where neurons touch one another,” said Dr. Seungchul Jung, the first author of the paper. “While the computing performed by our MRAM network, for now, has a different purpose from the computing performed by the brain, such solid-state memory network may in the future be used as a platform to mimic the brain by modeling the brain’s synapse connectivity."

As highlighted in this work, by building on its leading memory technology and merging it with system semiconductor technology, Samsung plans to continue to expand its leadership in next-generation supercomputing and AI semiconductors.

Japanese cosmologists demo their new model for the upcoming LiteBIRD mission

The upcoming satellite experiment LiteBIRD is expected to probe the physics of the very early Universe if the primordial inflation happened at high energies. But now, new research shows it can also test inflationary scenarios operating at lower energies.

An artist's conception of how gravitational waves distort the shape of space and time in the universe (Credit: Kavli IPMU).

Cosmologists believe that in its very early stages, the Universe underwent a very rapid expansion called “cosmic inflation”. A success story of this hypothesis is that even the simplest inflationary models can accurately predict the inhomogeneous distribution of matter in the Universe, which traces back to quantum vacuum fluctuations. During inflation, these vacuum fluctuations were stretched to astronomical scales, becoming the source of all the structure in the Universe, including the Cosmic Microwave Background anisotropies, the distribution of dark matter, and galaxies.

The same mechanism also produced gravitational waves. These propagating ripples of space and time are important for understanding the physics of the inflationary epoch. Detecting them is generally thought to determine the energy at which inflation took place. That energy is also linked to how much the inflaton field (the energy source of inflation) can change during inflation, a relation referred to as the “Lyth bound”.
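The Lyth bound is commonly quoted in a form like the following (a standard textbook relation, reproduced here for context; \(r\) is the tensor-to-scalar ratio, \(\Delta N\) the number of e-folds of inflation, and \(M_{\rm Pl}\) the reduced Planck mass):

```latex
% Lyth bound: a detectable tensor-to-scalar ratio r implies a large
% excursion of the inflaton field \phi over \Delta N e-folds, while the
% inflationary energy scale V^{1/4} grows with r.
\frac{\Delta\phi}{M_{\rm Pl}} \gtrsim \left(\frac{r}{8}\right)^{1/2} \Delta N,
\qquad
V^{1/4} \propto r^{1/4}
```

This is why, in the standard vacuum-fluctuation picture, a detectable signal implies a high inflationary energy scale.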

The primordial gravitational waves generated from vacuum fluctuations are extremely weak and very difficult to detect, but the Japanese-led LiteBIRD mission might be able to find them through polarization measurements of the Cosmic Microwave Background. Because of this, there is growing interest in understanding primordial gravitational waves theoretically, so that any potential detection by LiteBIRD can be interpreted. LiteBIRD is expected to detect primordial gravitational waves if inflation happened at sufficiently high energies.

Inflationary models constructed in the framework of quantum gravity often predict a very low energy scale for inflation, and so would seem untestable by LiteBIRD. However, a new study by researchers, including at the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), has shown otherwise. The researchers argue that such scenarios of fundamental importance can be tested by LiteBIRD if they are accompanied by additional fields that source gravitational waves.

The researchers suggest an idea that is logically very different from the usual approach.

"Within our framework, in addition to the gravitational waves originating from vacuum fluctuations, a large amount of gravitational waves can be sourced by the quantum vacuum fluctuations of additional fields during inflation. Because of this, we were able to produce an observable amount of gravitational waves even if inflation takes place at lower energies.

“The quantum fluctuations of scalar fields during inflation are typically small, and such induced gravitational waves are not relevant in standard inflationary scenarios. However, if the fluctuations of the additional fields are enhanced, they can source a significant amount of gravitational waves,” said paper author and Kavli IPMU Project Researcher Valeri Vardanyan.

Other researchers have been working on related ideas, but so far no successful mechanism based on scalar fields alone had been found.

“The main problem is that when you generate gravitational waves from enhanced fluctuations of additional fields, you also simultaneously generate extra curvature fluctuations, which would make the Universe appear more clumpy than it is in reality. We elegantly decoupled the generation of the two types of fluctuations, and solved this problem,” said Vardanyan.

In their work, the researchers proposed a proof-of-concept based on two scalar fields operating during inflation.

"Imagine a car with two engines, corresponding to the two fields of our model. One of the engines is connected to the wheels of the car, while the other one is not. The first one is responsible for moving the car, and, when on a muddy road, for generating all the traces on the road. These represent the seeds of structure in the Universe. The second engine is only producing sound. This represents the gravitational waves, and does not contribute to the movement of the car, or the generation of traces on the road,” said Vardanyan.

The team quantitatively demonstrated that their mechanism works and calculated the predictions of their model for the upcoming LiteBIRD mission.

In the accompanying figure, the green line is the lowest signal LiteBIRD can still observe, so any observable signal should lie above that line. The red and black lines are the team’s predictions for two different parameter choices in their model, showing that detection is possible. In contrast, more standard inflationary models operating at the same energy as the team’s mechanism predict the lower gray (dashed) line, which is below LiteBIRD’s sensitivity limit. (Credit: Cai et al.)