VLTI uses machine learning to help find stars moving around the Milky Way’s supermassive black hole

The European Southern Observatory’s Very Large Telescope Interferometer (ESO’s VLTI) has obtained the deepest and sharpest images to date of the region around the supermassive black hole at the center of our galaxy. The new images zoom in 20 times more than what was possible before the VLTI and have helped astronomers find a never-before-seen star close to the black hole. By tracking the orbits of stars at the center of our Milky Way, the team has made the most precise measurement yet of the black hole’s mass.

These annotated images, obtained with the GRAVITY instrument on ESO’s VLTI between March and July 2021, show stars orbiting very close to Sgr A*, the supermassive black hole at the heart of the Milky Way. One of these stars, named S29, was observed making its closest approach to the black hole at 13 billion kilometres, just 90 times the distance between the Sun and Earth. Another star, named S300, was detected for the first time in these observations. Credit: ESO/GRAVITY collaboration

“We want to learn more about the black hole at the center of the Milky Way, Sagittarius A*: How massive is it exactly? Does it rotate? Do stars around it behave exactly as we expect from Einstein’s general theory of relativity? The best way to answer these questions is to follow stars on orbits close to the supermassive black hole. And here we demonstrate that we can do that to a higher precision than ever before,” explains Reinhard Genzel, a director at the Max Planck Institute for Extraterrestrial Physics (MPE) in Garching, Germany, who was awarded the Nobel Prize in Physics in 2020 for his research on Sagittarius A*. Genzel and his team’s latest results, which expand on their three-decade-long study of stars orbiting the Milky Way’s supermassive black hole, are published today in two papers in Astronomy & Astrophysics.

On a quest to find even more stars close to the black hole, the team, known as the GRAVITY collaboration, developed a new analysis technique that has allowed them to obtain the deepest and sharpest images yet of our Galactic Centre. “The VLTI gives us this incredible spatial resolution and with the new images, we reach deeper than ever before. We are stunned by their amount of detail, and by the action and number of stars they reveal around the black hole,” explains Julia Stadler, a researcher at the Max Planck Institute for Astrophysics in Garching who led the team’s imaging efforts during her time at MPE. Remarkably, they found a star, called S300, which had not been seen previously, showing how powerful this method is when it comes to spotting very faint objects close to Sagittarius A*.

With their latest observations, conducted between March and July 2021, the team focused on making precise measurements of stars as they approached the black hole. This includes the record-holder star S29, which made its nearest approach to the black hole in late May 2021. It passed it at a distance of just 13 billion kilometers, about 90 times the Sun-Earth distance, at the stunning speed of 8740 kilometers per second. No other star has ever been observed to pass that close to, or travel that fast around, the black hole. 
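As a quick sanity check, the quoted closest-approach figures can be put into more familiar units. The conversion constants below are standard values, not from the article:

```python
# Standard astronomical constants (assumed, not from the article)
AU_KM = 1.496e8          # one astronomical unit in kilometres
C_KM_S = 299_792.458     # speed of light in km/s

peri_km = 13e9           # S29's closest approach, as quoted
speed_km_s = 8740.0      # S29's speed at closest approach, as quoted

print(f"closest approach: {peri_km / AU_KM:.0f} Sun-Earth distances")
print(f"speed: {speed_km_s / C_KM_S * 100:.1f}% of the speed of light")
```

The 13 billion kilometres works out to roughly 87 au, consistent with the article’s “about 90 times the Sun-Earth distance,” and 8740 km/s is close to 3% of the speed of light.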


The team’s measurements and images were made possible thanks to GRAVITY, a unique instrument that the collaboration developed for ESO’s VLTI, located in Chile. GRAVITY combines the light of all four 8.2-meter telescopes of ESO’s Very Large Telescope (VLT) using a technique called interferometry. This technique is complex, “but in the end you arrive at images 20 times sharper than those from the individual VLT telescopes alone, revealing the secrets of the Galactic Centre,” says Frank Eisenhauer from MPE, principal investigator of GRAVITY.
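The sharpness gain from interferometry follows from the diffraction limit θ ≈ λ/B: the resolution is set by the separation between telescopes (the baseline) rather than the diameter of a single mirror. A rough sketch, assuming GRAVITY’s near-infrared K band (~2.2 µm) and a longest VLTI baseline of roughly 130 m (both values are assumptions for illustration, not stated in the article):

```python
import math

def angular_resolution_mas(wavelength_m, aperture_m):
    # Diffraction limit theta ~ lambda / D, converted from radians to milliarcseconds
    theta_rad = wavelength_m / aperture_m
    return theta_rad * (180.0 / math.pi) * 3600.0 * 1000.0

K_BAND = 2.2e-6        # assumed near-infrared K-band wavelength, in metres
SINGLE_UT = 8.2        # diameter of one VLT Unit Telescope, in metres
MAX_BASELINE = 130.0   # approximate longest VLTI baseline, in metres (assumed)

single = angular_resolution_mas(K_BAND, SINGLE_UT)
combined = angular_resolution_mas(K_BAND, MAX_BASELINE)
print(f"single 8.2 m telescope: {single:.1f} mas")
print(f"130 m interferometric baseline: {combined:.1f} mas")
print(f"improvement: ~{single / combined:.0f}x")
```

With these assumed numbers the gain is about 16x; the exact quoted factor of 20 depends on the wavelength and baseline geometry actually used.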

“Following stars on close orbits around Sagittarius A* allows us to precisely probe the gravitational field around the closest massive black hole to Earth, to test General Relativity, and to determine the properties of the black hole,” explains Genzel. The new observations, combined with the team’s previous data, confirm that the stars follow paths exactly as predicted by General Relativity for objects moving around a black hole of mass 4.30 million times that of the Sun. This is the most precise estimate of the mass of the Milky Way’s central black hole to date. The researchers also managed to fine-tune the distance to Sagittarius A*, finding it to be 27 000 light-years away.

To obtain the new images, the astronomers used a machine-learning technique, called Information Field Theory. They made a model of how the real sources may look, simulated how GRAVITY would see them, and compared this simulation with GRAVITY observations. This allowed them to find and track stars around Sagittarius A* with unparalleled depth and accuracy. In addition to the GRAVITY observations, the team also used data from NACO and SINFONI, two former VLT instruments, as well as measurements from the Keck Observatory and NOIRLab’s Gemini Observatory in the US.
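The workflow described here — model the sources, simulate how the instrument would see them, compare with the data — is a forward-modeling loop. The toy sketch below illustrates the idea in one dimension, with a Gaussian blur standing in for the real instrument response; it is a minimal illustration of the general approach, not the actual Information Field Theory pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "sky": a single point source at an unknown position.
grid = np.linspace(-1.0, 1.0, 201)
true_pos = 0.31

def simulate(pos, width=0.08):
    """Forward model: how the instrument would see a source at `pos`.
    A Gaussian blur stands in for the real instrument response."""
    return np.exp(-0.5 * ((grid - pos) / width) ** 2)

# The "observation": the true sky seen through the instrument, plus noise.
observed = simulate(true_pos) + rng.normal(0.0, 0.05, grid.size)

# Simulate candidate skies and keep the one that best matches the data.
candidates = np.linspace(-1.0, 1.0, 401)
mismatch = [np.sum((simulate(p) - observed) ** 2) for p in candidates]
best = candidates[int(np.argmin(mismatch))]
print(f"recovered position: {best:+.3f}   (truth: {true_pos:+.3f})")
```

Even with noise, comparing simulated observations against the real one pins down the source position well; the real analysis does this with a far richer source model and the full GRAVITY instrument response.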

GRAVITY will be upgraded later this decade to GRAVITY+, which will also be installed on ESO’s VLTI and will push the sensitivity further to reveal fainter stars even closer to the black hole. The team aims to eventually find stars so close that their orbits would feel the gravitational effects caused by the black hole’s rotation. ESO’s upcoming Extremely Large Telescope (ELT), under construction in the Chilean Atacama Desert, will further allow the team to measure the velocity of these stars with very high precision. “With GRAVITY+’s and the ELT’s powers combined, we will be able to find out how fast the black hole spins,” says Eisenhauer. “Nobody has been able to do that so far.”

Stanford researchers show why heat may make weather less predictable

A Stanford University study shows chaos reigns earlier in midlatitude weather models as temperatures rise. The result? Climate change could be shifting the limits of weather predictability and pushing reliable 10-day forecasts out of reach. (Image credit: Pexels and Getty Images Signature via Canva)

A new Stanford University study shows rising temperatures may intensify the unpredictability of the weather in Earth’s mid-latitudes. The limit of reliable temperature, wind, and rainfall forecasts falls by about a day when the atmosphere warms by even a few degrees Celsius.

“Our results show the state of the climate, in general, has implications for how many days out you can say something that’s accurate about the weather,” said atmospheric scientist Aditi Sheshadri, lead author of the study published Nov. 29 in Geophysical Research Letters. “Cooler climates seem to be inherently more predictable.”

Widespread changes in weather patterns and increased frequency and severity of extreme weather events are well-documented consequences of global climate change. These departures from old norms can bring storms, droughts, heatwaves, and wildfire conditions beyond what infrastructure has been designed to withstand or what people have come to expect.

Yet numerical weather models are still generally able to predict day-to-day weather 3 to 10 days out more reliably than they could in decades past, thanks to faster computers, better models of physical atmospheric processes, and more precise measurements.

The new research, based on computer simulations of a simplified Earth system and a comprehensive global climate model, suggests the window for accurate forecasts in the midlatitudes is several hours shorter with every degree (Celsius) of warming. This could translate to less time to prepare and mobilize for big storms in balmy winters than in frigid ones.

For precipitation, predictability falls by about a day with every 3 C rise in temperature. The effect is more muted for wind and temperature, with one day of predictability lost with each 5 C increase in temperature.
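Read as linear scalings, the quoted rates give a simple rule of thumb. The linearity and the helper function below are an illustration of the arithmetic, not part of the study itself:

```python
# Rates quoted in the article: ~1 day of predictability lost per 3 C of
# warming for precipitation, and per 5 C for wind and temperature.
RATES_DAYS_PER_C = {"precipitation": 1 / 3, "wind": 1 / 5, "temperature": 1 / 5}

def predictability_loss_days(warming_c, variable):
    """Forecast lead time lost, assuming the scalings stay linear."""
    return warming_c * RATES_DAYS_PER_C[variable]

for var in ("precipitation", "temperature"):
    hours = predictability_loss_days(2.0, var) * 24
    print(f"+2 C warming: ~{hours:.0f} hours of {var} predictability lost")
```

At +2 C, for example, this works out to roughly 16 hours of lost lead time for precipitation and about 10 hours for temperature and wind.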

While global average temperatures have increased by 1.1 C (2 F) since the late 1800s, not all places are warming at the same rate. Some U.S. cities have seen average annual temperatures rise by well over 2 C since 1970. Seasonal variations can be even more extreme.

Further analysis will be needed to assess whether winter weather is inherently more predictable than summer weather, Sheshadri said, but the new results strongly indicate a shorter time horizon for reliable weather predictions in places that warm beyond their historical norms.

Butterfly effect

The research comes as the U.S. government prepares to spend $80 million on supercomputing equipment for developing weather and climate models as part of the bipartisan infrastructure law enacted in November.

But the problem of predicting specific weather beyond 10 or possibly 15 days in the future with perfect accuracy isn’t one that can be solved with more computing power or better models. The chaotic nature of Earth’s atmosphere imposes insurmountable limits on forecasting.

This is the crux of meteorologist Edward Lorenz’s discoveries related to the “butterfly effect” in the 1960s. Lorenz found that minuscule differences in initial conditions – like the wind perturbations from a butterfly flapping its wings – produce dramatically different results in models of Earth’s weather system.

For each measure of barometric pressure, temperature, wind speed, and the like that might be included in numerical weather models, some uncertainty is impossible to avoid. These imperfections propagate through the model over time, so as you look further into the future, the gap between predictions made from seemingly identical initial conditions grows. At some point, the results lose all resemblance to one another and are indistinguishable from predictions based on realistic but random starting conditions. The model at this juncture is said to “lose memory” of its initial conditions.
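This sensitivity is easy to reproduce with Lorenz’s own 1963 toy model of convection: two simulations that start a billionth apart in one variable end up on completely different trajectories.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system by one 4th-order Runge-Kutta step."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # a butterfly-wing-sized nudge

for step in range(1, 3001):          # integrate both runs to t = 30
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}   separation = {np.linalg.norm(a - b):.2e}")
```

The separation grows roughly exponentially until the two runs are as far apart as two unrelated states of the system — the model has “lost memory” of which of the two starting points it came from.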

There is value in unpacking the effects of atmospheric chaos. Meteorologists have long sought to identify the intrinsic limit of weather predictability, in part to find ways to improve models of Earth’s climate and atmosphere. The United Nations’ World Meteorological Organization has estimated the socioeconomic benefits of weather prediction amount to at least $160 billion per year.

“We’re working to understand what sets this finite limit of predictability, and also how it might change in different climates, so people can be prepared for these changes,” said Sheshadri, who is an assistant professor of Earth system science at Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth).

For Earth’s middle latitudes, where most Americans live, the new research suggests errors propagate through weather models faster as temperatures rise, and there don’t appear to be any temperature thresholds where the trend shifts. According to the authors, this appears to be linked to the growth of storms known as eddies in the troposphere, the layer of atmosphere closest to Earth. Past research has shown that when air at the planet’s surface is warmer, changes in the vertical arrangement of heat and cold in the atmosphere fuel faster eddy growth.

“When the eddies grow quicker, the models seem to lose track of initial conditions very quickly. And that means that the window of prediction narrows,” Sheshadri said.

Japanese team uses ATERUI II to show stellar 'ashfall' could help distant planets grow

The world’s first 3D simulation to simultaneously consider dust motion and growth in a disk around a young star has shown that large dust from the central region can be entrained and then ejected by gas outflows, eventually falling back onto the outer regions of the disk, where it may enable planetesimal formation. This process can be likened to volcanic “ashfall,” where ash carried up by gas during an eruption falls back on the area around the volcano. These results help to explain observed dust structures around young protostars. The dust particles swept up by the bipolar outflow from the center of the protoplanetary disk are piled up on the outer edge of the disk.

Observations by ALMA (Atacama Large Millimeter/submillimeter Array) have revealed gaps in protoplanetary disks of gas and dust around young stars. The gravitational effects of planets are thought to be one of the reasons for the formation of these rings. However, some rings are seen even further out than the position of Neptune in the Solar System. At these distances, dust, a vital component of planet formation, should be scarce. Furthermore, the dust is expected to move in towards the central region of the disk as it grows. So how planets can form in the outer regions has been a mystery.

A research team led by Yusuke Tsukamoto at Kagoshima University used ATERUI II, the world’s most powerful supercomputer dedicated to astronomy calculations at the National Astronomical Observatory of Japan, to perform the world’s first 3D simulation of dust motion and growth in a protoplanetary disk. The team found that large dust particles grown in the central region can be carried out perpendicular to the disk by streams of gas, called bipolar outflow, erupting out from the disk. This dust then drifts out from the outflow and gravity pulls it back down to the outer part of the disk. Tsukamoto comments, “Living in Kagoshima, in the shadow of the active volcano Mt. Sakurajima, I naturally thought of volcanic ashfall when I saw the simulation results.”

The simulation shows that this “stellar ashfall” can enrich the outer region of the protoplanetary disk with large dust, facilitating planetesimal formation and, eventually, planet formation.