VLTI uses machine learning to help find stars moving around the Milky Way’s supermassive black hole

The European Southern Observatory’s Very Large Telescope Interferometer (ESO’s VLTI) has obtained the deepest and sharpest images to date of the region around the supermassive black hole at the center of our galaxy. The new images zoom in 20 times more than was possible before the VLTI and have helped astronomers find a never-before-seen star close to the black hole. By tracking the orbits of stars at the center of our Milky Way, the team has made the most precise measurement yet of the black hole’s mass.

These annotated images, obtained with the GRAVITY instrument on ESO’s Very Large Telescope Interferometer (VLTI) between March and July 2021, show stars orbiting very close to Sgr A*, the supermassive black hole at the heart of the Milky Way. One of these stars, named S29, was observed as it was making its closest approach to the black hole at 13 billion kilometres, just 90 times the distance between the Sun and Earth. Another star, named S300, was detected for the first time in the new VLTI observations. To obtain the new images, the astronomers used a machine-learning technique called Information Field Theory: they made a model of how the real sources may look, simulated how GRAVITY would see them, and compared this simulation with GRAVITY observations. This allowed them to find and track stars around Sagittarius A* with unparalleled depth and accuracy. Credit: ESO/GRAVITY collaboration

“We want to learn more about the black hole at the center of the Milky Way, Sagittarius A*: How massive is it exactly? Does it rotate? Do stars around it behave exactly as we expect from Einstein’s general theory of relativity? The best way to answer these questions is to follow stars on orbits close to the supermassive black hole. And here we demonstrate that we can do that to a higher precision than ever before,” explains Reinhard Genzel, a director at the Max Planck Institute for Extraterrestrial Physics (MPE) in Garching, Germany, who was awarded the 2020 Nobel Prize in Physics for his research on Sagittarius A*. Genzel and his team’s latest results, which expand on their three-decade-long study of stars orbiting the Milky Way’s supermassive black hole, are published today in two papers in Astronomy & Astrophysics.

On a quest to find even more stars close to the black hole, the team, known as the GRAVITY collaboration, developed a new analysis technique that has allowed them to obtain the deepest and sharpest images yet of our Galactic Centre. “The VLTI gives us this incredible spatial resolution and with the new images, we reach deeper than ever before. We are stunned by their amount of detail, and by the action and number of stars they reveal around the black hole,” explains Julia Stadler, a researcher at the Max Planck Institute for Astrophysics in Garching who led the team’s imaging efforts during her time at MPE. Remarkably, they found a star, called S300, which had not been seen previously, showing how powerful this method is when it comes to spotting very faint objects close to Sagittarius A*.

With their latest observations, conducted between March and July 2021, the team focused on making precise measurements of stars as they approached the black hole. This includes the record-holder star S29, which made its nearest approach to the black hole in late May 2021. It passed it at a distance of just 13 billion kilometers, about 90 times the Sun-Earth distance, at the stunning speed of 8740 kilometers per second. No other star has ever been observed to pass that close to, or travel that fast around, the black hole. 


The team’s measurements and images were made possible thanks to GRAVITY, a unique instrument that the collaboration developed for ESO’s VLTI, located in Chile. GRAVITY combines the light of all four 8.2-meter telescopes of ESO’s Very Large Telescope (VLT) using a technique called interferometry. This technique is complex, “but in the end you arrive at images 20 times sharper than those from the individual VLT telescopes alone, revealing the secrets of the Galactic Centre,” says Frank Eisenhauer from MPE, principal investigator of GRAVITY.
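As a rough check on the “20 times sharper” figure: an instrument’s angular resolution scales as wavelength divided by aperture (or, for an interferometer, by baseline). The back-of-the-envelope sketch below assumes GRAVITY’s near-infrared K-band wavelength of about 2.2 microns and a maximum VLTI baseline of roughly 130 m; these numbers are assumptions for illustration, not figures from this article, and the result comes out on the same order as the quoted factor.

```python
# Back-of-the-envelope diffraction-limit comparison (assumed values:
# K-band wavelength ~2.2 microns, 8.2 m single telescopes, ~130 m
# maximum VLTI baseline; none of these numbers come from the article).
import math

WAVELENGTH_M = 2.2e-6                       # assumed GRAVITY observing wavelength
RAD_TO_MAS = math.degrees(1) * 3600 * 1000  # radians -> milliarcseconds

def diffraction_limit_mas(aperture_m: float) -> float:
    """Angular resolution ~ wavelength / aperture, in milliarcseconds."""
    return WAVELENGTH_M / aperture_m * RAD_TO_MAS

single = diffraction_limit_mas(8.2)    # one 8.2 m VLT telescope: ~55 mas
combined = diffraction_limit_mas(130)  # 130 m interferometric baseline: ~3.5 mas
print(f"single telescope: {single:.1f} mas")
print(f"130 m baseline:   {combined:.1f} mas (~{single / combined:.0f}x sharper)")
```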

“Following stars on close orbits around Sagittarius A* allows us to precisely probe the gravitational field around the closest massive black hole to Earth, to test General Relativity, and to determine the properties of the black hole,” explains Genzel. The new observations, combined with the team’s previous data, confirm that the stars follow paths exactly as predicted by General Relativity for objects moving around a black hole of mass 4.30 million times that of the Sun. This is the most precise estimate of the mass of the Milky Way’s central black hole to date. The researchers also managed to fine-tune the distance to Sagittarius A*, finding it to be 27 000 light-years away.

To obtain the new images, the astronomers used a machine-learning technique, called Information Field Theory. They made a model of how the real sources may look, simulated how GRAVITY would see them, and compared this simulation with GRAVITY observations. This allowed them to find and track stars around Sagittarius A* with unparalleled depth and accuracy. In addition to the GRAVITY observations, the team also used data from NACO and SINFONI, two former VLT instruments, as well as measurements from the Keck Observatory and NOIRLab’s Gemini Observatory in the US.
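In outline, that model-simulate-compare loop is a forward-modelling fit. The toy below is only a schematic stand-in for the collaboration’s actual Information Field Theory pipeline: it invents two point sources, simulates idealized interferometric visibilities, and recovers the source parameters by comparing simulation with synthetic “observations.” All the concrete numbers, the noise model, and the optimizer choice are assumptions for illustration.

```python
# Minimal sketch of forward-model fitting for interferometry (a toy,
# not the GRAVITY collaboration's pipeline): propose point sources,
# simulate complex visibilities, and fit by maximizing a likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical (u, v) baseline coordinates sampled by the interferometer,
# in units of 1/milliarcsecond.
uv = rng.uniform(-1.0, 1.0, size=(50, 2))

def simulate_visibilities(params, uv):
    """Complex visibilities of two point sources (positions in mas, fluxes)."""
    x1, y1, f1, x2, y2, f2 = params
    phase1 = -2j * np.pi * (uv[:, 0] * x1 + uv[:, 1] * y1)
    phase2 = -2j * np.pi * (uv[:, 0] * x2 + uv[:, 1] * y2)
    return f1 * np.exp(phase1) + f2 * np.exp(phase2)

# Synthetic "observations": two sources plus Gaussian noise.
truth = np.array([0.1, -0.2, 1.0, -0.3, 0.25, 0.4])
observed = simulate_visibilities(truth, uv)
observed = observed + 0.01 * (rng.normal(size=50) + 1j * rng.normal(size=50))

def neg_log_likelihood(params):
    """Gaussian noise => chi-square misfit between data and simulation."""
    resid = observed - simulate_visibilities(params, uv)
    return np.sum(np.abs(resid) ** 2)

# Fit the source model starting from a rough initial guess.
guess = np.array([0.0, 0.0, 0.8, -0.2, 0.2, 0.3])
fit = minimize(neg_log_likelihood, guess, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-12})
print("recovered parameters:", np.round(fit.x, 3))
```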

GRAVITY will be updated later this decade to GRAVITY+, which will also be installed on ESO’s VLTI and will push the sensitivity further to reveal fainter stars even closer to the black hole. The team aims to eventually find stars so close that their orbits would feel the gravitational effects caused by the black hole’s rotation. ESO’s upcoming Extremely Large Telescope (ELT), under construction in the Chilean Atacama Desert, will further allow the team to measure the velocity of these stars with very high precision. “With GRAVITY+’s and the ELT’s powers combined, we will be able to find out how fast the black hole spins,” says Eisenhauer. “Nobody has been able to do that so far.”

Stanford researchers show why heat may make weather less predictable

A Stanford University study shows chaos reigns earlier in midlatitude weather models as temperatures rise. The result? Climate change could be shifting the limits of weather predictability and pushing reliable 10-day forecasts out of reach.

A new Stanford University study shows rising temperatures may intensify the unpredictability of the weather in Earth’s mid-latitudes. The limit of reliable temperature, wind, and rainfall forecasts falls by about a day when the atmosphere warms by even a few degrees Celsius.

“Our results show the state of the climate, in general, has implications for how many days out you can say something that’s accurate about the weather,” said atmospheric scientist Aditi Sheshadri, lead author of the study published Nov. 29 in Geophysical Research Letters. “Cooler climates seem to be inherently more predictable.”

Widespread changes in weather patterns and increased frequency and severity of extreme weather events are well-documented consequences of global climate change. These departures from old norms can bring storms, droughts, heatwaves, and wildfire conditions beyond what infrastructure has been designed to withstand or what people have come to expect.

Yet numerical weather models are still generally able to predict day-to-day weather 3 to 10 days out more reliably than they could in decades past, thanks to faster computers, better models of physical atmospheric processes, and more precise measurements.

The new research, based on computer simulations of a simplified Earth system and a comprehensive global climate model, suggests the window for accurate forecasts in the midlatitudes is several hours shorter with every degree (Celsius) of warming. This could translate to less time to prepare and mobilize for big storms in balmy winters than in frigid ones.

For precipitation, predictability falls by about a day with every 3 C rise in temperature. The effect is more muted for wind and temperature, with one day of predictability lost with each 5 C increase in temperature.
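Read literally, those rates imply a simple linear rule of thumb. The toy calculation below just applies the quoted numbers (it is an illustration of the arithmetic, not the study’s model):

```python
# Toy reading of the reported rates: ~1 day of precipitation predictability
# lost per 3 C of warming, and ~1 day per 5 C for wind and temperature.
def days_of_predictability_lost(warming_c: float, c_per_day: float) -> float:
    """Linear rule of thumb: days lost = warming / (degrees per lost day)."""
    return warming_c / c_per_day

for warming in (1.1, 2.0, 3.0):
    print(f"{warming} C warming: "
          f"precipitation -{days_of_predictability_lost(warming, 3):.2f} d, "
          f"wind/temperature -{days_of_predictability_lost(warming, 5):.2f} d")
```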

While global average temperatures have increased by 1.1 C (2 F) since the late 1800s, not all places are warming at the same rate. Some U.S. cities have seen average annual temperatures rise by well over 2 C since 1970. Seasonal variations can be even more extreme.

Further analysis will be needed to assess whether winter weather is inherently more predictable than summer weather, Sheshadri said, but the new results strongly indicate a shorter time horizon for reliable weather predictions in places that warm beyond their historical norms.

Butterfly effect

The research comes as the U.S. government prepares to spend $80 million on supercomputing equipment for developing weather and climate models as part of the bipartisan infrastructure law enacted in November.

But the problem of predicting specific weather beyond 10 or possibly 15 days in the future with perfect accuracy isn’t one that can be solved with more computing power or better models. The chaotic nature of Earth’s atmosphere imposes insurmountable limits on forecasting.

This is the crux of meteorologist Edward Lorenz’s discoveries related to the “butterfly effect” in the 1960s. Lorenz found that minuscule differences in initial conditions – like the wind perturbations from a butterfly flapping its wings – produce dramatically different results in models of Earth’s weather system.

For each measure of barometric pressure, temperature, wind speed, and the like that might be included in numerical weather models, uncertainty is impossible to avoid. These imperfections propagate through the model over time, so as you look further into the future, the gap between predictions made from seemingly identical initial conditions grows. At a point, the results lose all resemblance to one another and are indistinguishable from predictions based on realistic but random starting conditions. The supercomputer model at this juncture is said to “lose memory” of its initial conditions.
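This loss of memory is easy to reproduce with the three-variable system Lorenz himself studied. The sketch below is a standard textbook demonstration, not the study’s model: it integrates two copies of the Lorenz-63 equations whose starting points differ by one part in a billion and watches their separation grow until the trajectories are effectively unrelated.

```python
# Toy demonstration of sensitive dependence on initial conditions using
# the classic Lorenz-63 system (a textbook example, not the study's model).
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 40, 4001)
a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9)
b = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0 + 1e-9], t_eval=t_eval, rtol=1e-9)

# Separation grows roughly exponentially, then saturates once the two
# trajectories have "forgotten" that they started almost identically.
sep = np.linalg.norm(a.y - b.y, axis=0)
for t in (0, 10, 20, 30, 40):
    print(f"t={t:5.1f}  separation={sep[t * 100]:.3e}")
```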

There is value in unpacking the effects of atmospheric chaos. Meteorologists have long sought to identify the intrinsic limit of weather predictability, in part to find ways to improve models of Earth’s climate and atmosphere. The United Nations’ World Meteorological Organization has estimated the socioeconomic benefits of weather prediction amount to at least $160 billion per year.

“We’re working to understand what sets this finite limit of predictability, and also how it might change in different climates, so people can be prepared for these changes,” said Sheshadri, who is an assistant professor of Earth system science at Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth).

For Earth’s middle latitudes, where most Americans live, the new research suggests errors propagate through weather models faster as temperatures rise, and there don’t appear to be any temperature thresholds where the trend shifts. According to the authors, this appears to be linked to the growth of storms known as eddies in the troposphere, the layer of atmosphere closest to Earth. Past research has shown that when air at the planet’s surface is warmer, changes in the vertical arrangement of heat and cold in the atmosphere fuel faster eddy growth.

“When the eddies grow quicker, the models seem to lose track of initial conditions very quickly. And that means that the window of prediction narrows,” Sheshadri said.

Japanese team uses ATERUI II to show stellar 'ashfall' could help distant planets grow

The world’s first 3D simulation to simultaneously consider dust motion and growth in a disk around a young star has shown that large dust grains from the central region can be entrained by gas outflows, ejected from the disk, and eventually fall back onto its outer regions, where they may enable planetesimal formation. This process can be likened to volcanic “ashfall,” where ash carried up by gas during an eruption falls back on the area around the volcano. These results help to explain observed dust structures around young protostars.

The dust particles swept up by the bipolar outflow from the center of the protoplanetary disk are piled up on the outer edge of the disk.

Observations by ALMA (Atacama Large Millimeter/submillimeter Array) have revealed gaps in protoplanetary disks of gas and dust around young stars. The gravitational effects of planets are thought to be one of the reasons for the formation of these rings. However, some rings are seen even further out than the position of Neptune in the Solar System. At these distances, dust, a vital component of planet formation, should be scarce. Furthermore, the dust is expected to move in towards the central region of the disk as it grows. So how planets can form in the outer regions has been a mystery.

A research team led by Yusuke Tsukamoto at Kagoshima University used ATERUI II, the world’s most powerful supercomputer dedicated to astronomy calculations at the National Astronomical Observatory of Japan, to perform the world’s first 3D simulation of dust motion and growth in a protoplanetary disk. The team found that large dust particles grown in the central region can be carried out perpendicular to the disk by streams of gas, called bipolar outflow, erupting out from the disk. This dust then drifts out from the outflow and gravity pulls it back down to the outer part of the disk. Tsukamoto comments, “Living in Kagoshima, in the shadow of the active volcano Mt. Sakurajima, I naturally thought of volcanic ashfall when I saw the simulation results.”
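The geometry is easy to caricature in code. The sketch below is a volcanic-ashfall-style cartoon with uniform gravity and a hand-drawn plume; it is emphatically not the team’s 3D simulation (which solved the full dust-gas dynamics on ATERUI II), and every number in it is an assumption. A grain dragged upward inside a gas plume exits it, falls back ballistically, and lands farther out than where it started.

```python
# Volcanic-ashfall cartoon of dust entrainment and fallback (illustrative
# only; uniform gravity, hand-drawn plume, arbitrary code units).
import numpy as np

G_DOWN = np.array([0.0, -1.0])  # uniform downward gravity

def gas_velocity(x, y):
    """Assumed plume: an upward, slightly outward-tilted wind near the axis."""
    if abs(x) < 1.0 and y < 5.0:
        return np.array([0.4, 2.0])
    return np.zeros(2)

def simulate(x0, t_stop=0.1, dt=1e-3):
    """Integrate one grain: gas drag inside the plume, plus gravity."""
    pos, vel = np.array([x0, 0.0]), np.zeros(2)
    while True:
        vg = gas_velocity(*pos)
        # Drag pulls the grain toward the gas velocity on timescale t_stop,
        # but only where the plume gas is present.
        acc = G_DOWN + (vg - vel) / t_stop if vg.any() else G_DOWN
        vel = vel + acc * dt
        pos = pos + vel * dt
        if pos[1] < 0.0 and vel[1] < 0.0:  # fell back to the "disk" plane
            return pos[0]

print("grain launched at x=0.2 lands at x=%.2f" % simulate(0.2))
```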

The simulation shows that this “stellar ashfall” can enrich the outer region of the protoplanetary disk with large dust grains and facilitate planetesimal formation, which may eventually lead to planet formation.

Washington researchers build artificial intelligence that can create better lightning forecasts

Lightning is one of the most destructive forces of nature, as in 2020 when it sparked the massive California Lightning Complex fires, but it remains hard to predict. A new study led by the University of Washington shows that machine learning — computer algorithms that improve themselves without direct programming by humans — can be used to improve lightning forecasts.

Better lightning forecasts could help to prepare for potential wildfires, improve safety warnings for lightning and create more accurate long-range climate models.

“The best subjects for machine learning are things that we don’t fully understand. And what is something in the atmospheric sciences field that remains poorly understood? Lightning,” said Daehyun Kim, a UW associate professor of atmospheric sciences. “To our knowledge, our work is the first to demonstrate that machine learning algorithms can work for lightning.”

The new approach combines weather forecasts with a machine learning equation based on analyses of past lightning events. The hybrid method, presented Dec. 13 at the American Geophysical Union’s fall meeting, can forecast lightning over the southeastern U.S. two days earlier than the leading existing technique.

“This demonstrates that forecasts of severe weather systems, such as thunderstorms, can be improved by using methods based on machine learning,” said Wei-Yi Cheng, who did the work for his UW doctorate in atmospheric sciences. “It encourages the exploration of machine learning methods for other types of severe weather forecasts, such as tornadoes or hailstorms.”

A comparison of the performance of the new, AI-supported method and the existing method for U.S. lightning forecasts. The AI-supported method was able to accurately forecast lightning on average two days earlier in places like the Southeast, where lightning is common. Because the method was trained on the entire U.S., it did less well in places where lightning is less common. Credit: Daehyun Kim/University of Washington; map by Rebecca Gourley/University of Washington

Researchers trained the system with lightning data from 2010 to 2016, letting the supercomputer discover relationships between weather variables and lightning strokes. Then they tested the technique on weather from 2017 to 2019, comparing the AI-supported technique and an existing physics-based method, using actual lightning observations to evaluate both.
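In skeleton form, that workflow is a chronological train/test split: fit on the earlier years, evaluate on the later ones against real observations. The sketch below uses synthetic data and a generic off-the-shelf classifier; the study’s actual features, labels, resolution, and model are not described here, so everything concrete is an assumption.

```python
# Hedged sketch of a chronological train/test setup for lightning
# prediction (synthetic data and a generic classifier, not the study's).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-ins for gridded weather variables (e.g., precipitation,
# instability) and lightning-occurrence labels.
n_train, n_test, n_features = 5000, 2000, 6
X_train = rng.normal(size=(n_train, n_features))  # "2010-2016" samples
X_test = rng.normal(size=(n_test, n_features))    # "2017-2019" samples
y_train = (X_train[:, 0] + X_train[:, 1] + rng.normal(size=n_train)) > 1
y_test = (X_test[:, 0] + X_test[:, 1] + rng.normal(size=n_test)) > 1

# Train only on the earlier period.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate on the held-out later period, mirroring the comparison
# against actual lightning observations.
scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC on held-out years:", round(roc_auc_score(y_test, scores), 3))
```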

The new method was able to forecast lightning with the same skill about two days earlier than the leading technique in places, like the southeastern U.S., that get a lot of lightning. Because the method was trained on the entire U.S., its performance wasn’t as accurate for places where lightning is less common.

The approach used for comparison was a recently developed technique to forecast lightning based on the amount of precipitation and the ascent speed of storm clouds. That method has projected more lightning with climate change and a continued increase in lightning over the Arctic.
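That baseline amounts to a one-line function. The stand-in below is schematic (variable names and units are assumed) for the precipitation-times-ascent proxy the quote that follows refers to:

```python
# Schematic stand-in for the physics-based baseline described above:
# lightning proxy = precipitation amount times cloud ascent speed.
def proxy_lightning_rate(precip_rate: float, ascent_speed: float) -> float:
    """More rain and stronger updrafts jointly imply more lightning."""
    return precip_rate * ascent_speed
```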

“The existing method just multiplies two variables. That comes from a human’s idea, it’s simple. But it’s not necessarily the best way to use these two variables to predict lightning,” Kim said.

Observed (left) and machine-learning-predicted lightning flash density (right) over the continental U.S. on June 18, 2017. A neural network model was used for the machine learning prediction. Credit: Daehyun Kim/University of Washington; map by Rebecca Gourley/University of Washington

The machine learning was trained on lightning observations from the World Wide Lightning Location Network, a collaborative based at the UW that has tracked global lightning since 2008.

“Machine learning requires a lot of data — that’s one of the necessary conditions for a machine-learning algorithm to do some valuable things,” Kim said. “Five years ago, this would not have been possible because we did not have enough data, even from WWLLN.”

Commercial networks of instruments to monitor lightning now exist in the U.S., and newer geostationary satellites can monitor one area continuously from space, supplying the precise lightning data to make more machine learning possible.

“The key factors are the amount and the quality of the data, which are exactly what WWLLN can provide us,” Cheng said. “As machine learning techniques advance, having an accurate and reliable lightning observation dataset will be increasingly important.”

The researchers hope to improve their method using more data sources, more weather variables, and more sophisticated techniques. They would like to improve predictions of particular situations like dry lightning, or lightning without rainfall, since these are especially dangerous for wildfires.

Researchers believe their method could also be applied to longer-range projections. Longer-range trends are important partly because lightning affects air chemistry, so predicting lightning leads to better climate models.

“In atmospheric sciences, as in other sciences, some people are still skeptical about the use of machine learning algorithms — because as scientists, we don’t trust something we don’t understand,” Kim said. “I was one of the skeptics, but after seeing the results in this and other studies, I am convinced.”

Surfing spin waves brings us one step closer to spin superfluidity

Spin waves, a change in electron spin that propagates through a material, could fundamentally change how devices store and carry information. These waves, also known as magnons, don’t scatter or couple with other particles. Under the right conditions, they can even act as a superfluid, moving through a material with zero energy loss.

But the very properties that make them so powerful also make them nearly impossible to measure.  In a previous study, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) demonstrated the ability to both excite and detect spin waves in a two-dimensional graphene magnet, but they couldn’t measure any of the wave’s specific properties.

Now, SEAS researchers have demonstrated a new way to measure the quintessential properties of spin waves in graphene.

“In previous experiments, we only knew that we could generate spin waves, but we didn’t know anything about their properties in a quantitative way,” said Amir Yacoby, Professor of Physics and Applied Physics at SEAS and senior author of the paper. “With this new work, we can determine all these quantitative numbers, including the energy and number of spin waves, their chemical potential, and temperature. This is an extremely important tool that we can use to explore new ways of generating magnons and get closer to achieving spin superfluidity.”

A charge sensor measures the energy cost of electrons surfing on the spin wave (green wavy lines). (Credit: Yacoby Lab/Harvard SEAS)

Measuring the properties of a spin wave is like measuring the properties of a tidal wave if the water itself were undetectable. If you couldn’t see water, how could you measure the speed, height, or number of tidal waves? One way would be to introduce something into the system that you can measure, like a surfer. The speed of the tidal wave could be detected by measuring the speed of the surfer.

In this case, Yacoby and his team used an electron surfer.

The researchers began with a quantum Hall ferromagnet. Quantum Hall ferromagnets are magnets made from 2D materials, in this case, graphene, where all the electron spins are in the same direction.  If an electron with a different spin is introduced into this system, it will use energy to try to flip the spins of its neighbors.

But the research team found that when they injected an electron with a different spin into the system and then generated spin waves, the energy the electron needed to flip its neighbors went down.

“It’s striking that somehow the electrons that we’re putting into the system are sensitive to the presence of spin waves,” said Andrew T. Pierce, a graduate student at SEAS and co-first author of the study. “It’s almost as if these electrons are grabbing onto the wave and using it to help flip the spins of their neighbors.”

“Spin waves don’t like to interact with anything, but by using electrons and this energy cost as a proxy to probe the properties of a spin wave, we can determine the chemical potential, which, combined with knowing the temperature and a few other properties, gives us a full description of the magnon,” said Yonglong Xie, a postdoctoral fellow at SEAS and co-first author of the study. “This is critical to knowing whether the wave is approaching the limit where it achieves superfluidity.”

The research could also provide a general approach to studying other hard-to-measure exotic systems, such as the recently discovered moiré materials, which are expected to support a variety of waves like the spin wave studied in this work.