Syracuse physicist Coughlin builds a model that maps a star’s surprising orbit around a supermassive black hole

Hundreds of millions of light-years away in a distant galaxy, a star orbiting a supermassive black hole is being violently ripped apart by the black hole’s immense gravitational pull. As the star is shredded, its remnants are transformed into a stream of debris that rains back down onto the black hole to form a very hot, very bright swirl of material called an accretion disk. This phenomenon – where a star is destroyed by a supermassive black hole and fuels a luminous accretion flare – is known as a tidal disruption event (TDE), and it is predicted that TDEs occur roughly once every 10,000 to 100,000 years in a given galaxy.

This illustration depicts a star (in the foreground) experiencing spaghettification as it’s sucked in by a supermassive black hole (in the background) during a tidal disruption event. Credit: ESO/M. Kornmesser

With luminosities exceeding those of entire galaxies (i.e., billions of times brighter than our Sun) for brief periods (months to years), these accretion events enable astrophysicists to study supermassive black holes (SMBHs) from cosmological distances, providing a window into the central regions of otherwise-quiescent, or dormant, galaxies. By probing these “strong-gravity” events, where Einstein’s general theory of relativity is critical for determining how matter behaves, TDEs yield information about one of the most extreme environments in the universe: the event horizon – the point of no return – of a black hole.

TDEs are usually “once-and-done” because the extreme gravitational field of the SMBH destroys the star, meaning that the SMBH fades back into darkness following the accretion flare. In some instances, however, the high-density core of the star can survive the gravitational interaction with the SMBH, allowing it to orbit the black hole more than once. Researchers call this a repeating partial TDE.

A team of physicists, including lead author Thomas Wevers, Fellow of the European Southern Observatory, and co-authors Eric Coughlin, assistant professor of physics at Syracuse University, and Dheeraj R. “DJ” Pasham, a research scientist at MIT’s Kavli Institute for Astrophysics and Space Research, have proposed a model for a repeating partial TDE. Their findings, published in The Astrophysical Journal Letters, describe the capture of the star by an SMBH, the stripping of the material each time the star comes close to the black hole, and the delay between when the material is stripped and when it feeds the black hole again. The team’s work is the first to develop and use a detailed model of a repeating partial TDE to explain the observations, make predictions about the orbital properties of a star in a distant galaxy, and understand the partial tidal disruption process.

On a Collision Course with a Black Hole

The team is studying a TDE known as AT2018fyk (“AT” stands for Astrophysical Transient). The star was captured by the SMBH through an exchange process known as “Hills capture,” in which the star was originally part of a binary system (two stars that orbit one another under their mutual gravitational attraction) that was ripped apart by the gravitational field of the black hole. The other, non-captured star was ejected from the center of the galaxy at a speed of roughly 1,000 km/s, becoming what is known as a hypervelocity star.

Once bound to the SMBH, the star powering the emission from AT2018fyk has been repeatedly stripped of its outer envelope each time it passes through its point of closest approach with the black hole. The stripped outer layers of the star form the bright accretion disk, which researchers can study using X-ray and ultraviolet/optical telescopes that observe light from distant galaxies.

According to Wevers, having the opportunity to study a repeating partial TDE gives unprecedented insight into the existence of supermassive black holes and the orbital dynamics of stars in the centers of galaxies.

“Until now, the assumption has been that when we see the aftermath of a close encounter between a star and a supermassive black hole, the outcome will be fatal for the star, that is, the star is destroyed,” he says. “But contrary to all other TDEs we know of, when we pointed our telescopes to the same location again several years later, we found that it had re-brightened. This led us to propose that rather than being fatal, part of the star survived the initial encounter and returned to the same location to be stripped of material once more, explaining the re-brightening phase.”

Living to Die Another Day

First detected in 2018, AT2018fyk was initially perceived as an ordinary TDE. For approximately 600 days the source stayed bright in the X-ray, but then abruptly went dark and was undetectable – a result of the stellar remnant core returning to the black hole, explains MIT physicist Dheeraj R. Pasham.

“When the core returns to the black hole it essentially steals all the gas away from the black hole via gravity and as a result, there is no matter to accrete and hence the system goes dark,” Pasham says.

It wasn’t immediately clear what caused the precipitous decline in the luminosity of AT2018fyk, because TDEs normally decay smoothly and gradually – not abruptly – in their emission. But around 600 days after the drop, the source was again found to be X-ray bright. This led the researchers to propose that the star had survived its first close encounter with the SMBH and was in orbit about the black hole.

Using supercomputer modeling, the team found that the orbital period of the star about the black hole is roughly 1,200 days, and that it takes approximately 600 days for the material shed from the star to return to the black hole and start accreting. Their model also constrained the size of the captured star, which they believe was about the size of the Sun. As for the original binary, the team believes the two stars were extremely close to one another before being ripped apart by the black hole, likely orbiting each other every few days.
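
To get a feel for the size of such an orbit, Kepler’s third law converts the quoted 1,200-day period into a rough orbital scale. The sketch below is our own back-of-the-envelope arithmetic, not a calculation from the paper, and the black-hole mass is an assumed round number (the article does not quote one):

```python
import math

# Back-of-the-envelope orbit size from Kepler's third law.
# The SMBH mass below is an assumption for illustration only.
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30           # solar mass, kg
M_bh = 5e7 * M_sun         # assumed black-hole mass
P = 1200 * 86400.0         # 1,200-day orbital period, in seconds

# a^3 = G * M_bh * P^2 / (4 * pi^2)
a = (G * M_bh * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"semi-major axis ~ {a:.2e} m ~ {a / 1.496e11:.0f} au")
```

For these assumed numbers the orbit’s semi-major axis works out to roughly 800 astronomical units – tiny by galactic standards, but enormous compared with the black hole itself.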

So how could a star survive its brush with death? It all comes down to a matter of proximity and trajectory. If the star collided head-on with the black hole and passed the event horizon – the threshold where the speed needed to escape the black hole surpasses the speed of light – the star would be consumed by the black hole. If the star passed very close to the black hole and crossed the so-called "tidal radius" – where the tidal force of the hole is stronger than the gravitational force that keeps the star together – it would be destroyed. In the model they have proposed, the star's orbit reaches a point of closest approach that is just outside of the tidal radius, but doesn't cross it completely: some of the material at the stellar surface is stripped by the black hole, but the material at its center remains intact.
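
The standard estimate of the tidal radius is r_t ≈ R_*(M_BH/M_*)^(1/3), where R_* and M_* are the star’s radius and mass. As an illustration (the black-hole mass is again our own assumption, not a figure from the paper), the snippet below compares this radius with the Schwarzschild radius for a Sun-like star:

```python
# Compare the tidal radius with the event horizon for a Sun-like star.
# The SMBH mass is an assumed value for illustration only.
G, c = 6.674e-11, 2.998e8        # SI units
M_sun, R_sun = 1.989e30, 6.957e8
M_bh = 5e7 * M_sun

r_tidal = R_sun * (M_bh / M_sun) ** (1.0 / 3.0)  # r_t ~ R_* (M_bh / M_*)^(1/3)
r_horizon = 2 * G * M_bh / c**2                  # Schwarzschild radius
print(f"tidal radius   ~ {r_tidal:.2e} m")
print(f"event horizon  ~ {r_horizon:.2e} m")
print(f"ratio r_t/r_s  ~ {r_tidal / r_horizon:.1f}")
```

For these numbers the tidal radius sits only a factor of a couple outside the horizon, which is why a grazing orbit can strip a star’s envelope without either destroying the star or swallowing it whole.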

A Repeat Performance?

How, or if, the process of the star orbiting the SMBH can occur over many repeated passages is a theoretical question that the team plans to investigate with future simulations. Syracuse physicist Eric Coughlin explains that they estimate between 1 and 10% of the star’s mass is lost each time it passes the black hole, with the large range due to uncertainty in modeling the emission from the TDE.

“If the mass loss is only at the 1% level, then we expect the star to survive for many more encounters, whereas if it is closer to 10%, the star may have already been destroyed,” notes Coughlin.
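
A quick compounding calculation illustrates why that range matters. The snippet below is purely illustrative arithmetic – the true survival threshold depends on the star’s internal structure, not just its total mass – counting how many passages it would take to lose half the star at each rate:

```python
# Compound mass loss per pericenter passage (illustrative only).
for loss_per_orbit in (0.01, 0.10):
    mass, orbits = 1.0, 0
    while mass > 0.5:              # count orbits until half the mass is gone
        mass *= 1.0 - loss_per_orbit
        orbits += 1
    print(f"{loss_per_orbit:.0%} per passage -> ~{orbits} orbits to lose half the star")
```

At 1% per passage the star could endure dozens of encounters; at 10% it would be whittled down within a handful.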

The team will keep their eyes on the sky in the coming years to test their predictions. Based on their model, they forecast that the source will abruptly disappear around March 2023 and brighten again when the freshly stripped material accretes onto the black hole in 2025.

The Future of TDE Research

The team says their study offers a new way forward for tracking and following up on sources that have been detected in the past. The work also suggests a new paradigm for the origin of repeating flares from the centers of external galaxies.

“In the future, it is likely that more systems will be checked for late-time flares, especially now that this project puts forth a theoretical picture of the capture of the star through a dynamical exchange process and the ensuing repeated partial tidal disruption,” says Coughlin. “We’re hopeful this model can be used to infer the properties of distant supermassive black holes and gain an understanding of their ‘demographics’ – that is, the number of black holes within a given mass range – which is otherwise difficult to achieve directly.”

The team says the model also makes several testable predictions about the tidal disruption process, and with more observations of systems like AT2018fyk, it should give insight into the physics of partial tidal disruption events and the extreme environments around supermassive black holes.

“This study outlines a methodology to potentially predict the next snack times of supermassive black holes in external galaxies,” says Pasham. “If you think about it, it is pretty remarkable that we on Earth can align our telescopes to black holes millions of light years away to understand how they feed and grow.”

Schematic illustration of the study. Credit: Xiaodan Guan

Chinese researchers improve methods that parameterize snow in climate models

Seasonal snow is sensitive to climate change and is often taken as an indicator of local climate change. Against the background of global warming, annual snow cover in the Northern Hemisphere is following an overall decreasing trend. Since snow plays an important role in the water cycle and has significant effects on atmospheric circulation, it is important to be able to simulate it well in climate models. However, because snow processes operate on scales smaller than a model grid cell, they must be approximated (or “parameterized”), and many studies have attempted to improve the schemes that perform this parameterization of snow in climate models.

In the mid-to-high latitudes of the Northern Hemisphere, the impacts of snow cover dynamics on vegetation differ widely among boreal biomes. Researchers from Lanzhou University, China, selected open shrubland, mixed forest, and evergreen needleleaf forest in the mid-to-high latitudes (45°–70°N), which are highly sensitive to snow changes. Then, over selected areas of North America, Europe, Central Asia, and East Asia, they characterized the distinct relationships between snow cover and snow depth for these typical vegetation cover types and explored how well model parameterizations reproduce snow accumulation processes. The results have recently been published in Atmospheric and Oceanic Science Letters.

According to the findings of this study, the relationships between snow cover and snow depth differ among these three typical land cover types, and the relationships not only capture the characteristic changes in the processes of snow accumulation and snowmelt but can also be used in models for predicting snow accumulation.
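
In land-surface schemes, such relationships are typically encoded as a snow-cover-fraction (SCF) curve that rises with snow depth and saturates, with parameters that can differ by vegetation type. The sketch below illustrates the general idea only: the tanh shape and the land-cover-dependent depth scales are our own placeholder choices, not functional forms or values taken from the paper.

```python
import numpy as np

def snow_cover_fraction(depth_m: np.ndarray, depth_scale_m: float) -> np.ndarray:
    """Illustrative SCF curve: rises with snow depth, saturates toward 1."""
    return np.tanh(depth_m / depth_scale_m)

# Hypothetical depth scales per land cover type (placeholder numbers):
# taller, denser canopies need deeper snow before the cell looks snow-covered.
depth_scales = {
    "open shrubland": 0.05,
    "mixed forest": 0.15,
    "evergreen needleleaf forest": 0.25,
}

depths = np.array([0.02, 0.10, 0.50])  # snow depths in meters
for land_cover, scale in depth_scales.items():
    scf = snow_cover_fraction(depths, scale)
    print(f"{land_cover:<28} SCF at {depths} m -> {np.round(scf, 2)}")
```

A model that applies a single curve to all land cover types would miss exactly the differences the study highlights.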

“However, partly because the influence of different land cover types is not fully considered, it was found that state-of-the-art climate models are unable to reproduce the relationships between snow cover and snow depth in both historical and future simulations, which will affect our understanding of the ecological impacts of snowmelt in spring,” says Prof. Xiaodan Guan, the first and corresponding author of the paper.

Therefore, it is important to improve simulation results by considering the interaction between land cover types and snow processes in snow parameterization schemes.

Lars Schäfer, Thomas Happe and Ulf-Peter Apfel (from left) collaborated on the current study. © RUB, Marquard

German researchers use MD simulations to learn how to protect biocatalysts from oxygen

A genetic modification can make hydrogen-producing enzymes more stable.

Certain enzymes from bacteria and algae can produce molecular hydrogen from protons and electrons – an energy carrier on which many hopes are riding. All they need for this purpose is light energy. The major obstacle to their use is that they are destroyed by contact with oxygen. An interdisciplinary research team from the RESOLV cluster of excellence at Ruhr University Bochum, Germany, has succeeded in genetically modifying a hydrogen-producing enzyme so that it is protected from oxygen. The researchers, headed by Professor Thomas Happe (head of the Photobiotechnology group), Professor Lars Schäfer, and Professor Ulf-Peter Apfel, report their findings in the journal ACS Catalysis.

For the energy transition to succeed, we require environmentally friendly energy carriers. Hydrogen could be one such carrier if it could be produced on a large scale in a carbon-neutral way. Researchers are therefore turning to hydrogen-producing enzymes that occur naturally in certain algae and bacteria. “Due to their high conversion rates, they serve as a biological blueprint for the design of future hydrogen catalysts,” explains lead author Andreas Rutz. But their unique active site, known as the H-cluster, degrades on contact with oxygen. “This is the greatest hurdle in hydrogen research,” says Rutz.

Oxygen resistance increases considerably

The recently discovered [FeFe] hydrogenase called CbA5H is the only known enzyme of its class that can protect itself from oxygen by a molecular protection mechanism. However, a fraction of the hydrogenase is also destroyed in the process. To remedy this problem, the researchers specifically exchanged a building block of the enzyme. This genetic modification meant they could significantly increase the oxygen resistance of the hydrogenase.

The teams used site-directed mutagenesis in combination with electrochemistry, infrared spectroscopy, and molecular dynamics simulations to better understand the kinetics of the transformation at the atomic level. “We intend to use our findings to understand how local modifications of protein structure can significantly influence protein dynamics and how they can effectively control the reactivity of inorganic centers,” explain Lars Schäfer and Ulf-Peter Apfel.
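
For readers curious what such a molecular dynamics setup can look like in practice, here is a minimal sketch using the open-source OpenMM package. The article does not say which software or settings the team used; the input file name is hypothetical, the structure is assumed to already include hydrogens, and the [FeFe] H-cluster cofactor would need custom force-field parameters that this protein-only sketch omits.

```python
# Minimal protein MD sketch with OpenMM (illustrative settings, not the study's).
from openmm import LangevinMiddleIntegrator
from openmm.app import PDBFile, ForceField, Simulation, NoCutoff, HBonds, DCDReporter
from openmm.unit import kelvin, picosecond, picoseconds

pdb = PDBFile("cba5h_variant.pdb")       # hypothetical mutant structure
ff = ForceField("amber14-all.xml")       # standard protein force field
system = ff.createSystem(pdb.topology, nonbondedMethod=NoCutoff, constraints=HBonds)

# Langevin dynamics at 300 K with a 2 fs time step.
integrator = LangevinMiddleIntegrator(300 * kelvin, 1 / picosecond, 0.002 * picoseconds)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()                     # relax clashes before dynamics
sim.reporters.append(DCDReporter("trajectory.dcd", 1000))  # save frames for analysis
sim.step(500_000)                        # 1 ns of dynamics
```

Comparing trajectories of the wild-type and mutant proteins then comes down to analyzing, for example, the flexibility of the residues that gate access to the active site.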

Chinese researchers create fluidic memristors with diverse neuromorphic functions

Neuromorphic devices have attracted increasing attention because of their potential applications in neuromorphic computing, intelligent sensing, brain-machine interfaces, and neuroprosthetics. However, most neuromorphic functions realized so far are based on mimicking electric pulses with solid-state devices. Mimicking the functions of chemical synapses – mainly neurotransmitter-related functions – remains a challenge in this research area.

In a study published in Science, the research group led by Prof. YU Ping and Prof. MAO Lanqun from the Institute of Chemistry of the Chinese Academy of Sciences developed a polyelectrolyte-confined fluidic memristor (PFM) that can emulate diverse electric pulses with ultralow energy consumption. Moreover, benefiting from the fluidic nature of the PFM, chemical-regulated electric pulses and chemical-electric signal transduction could also be emulated.

The researchers first fabricated the polyelectrolyte-confined fluidic channel by surface-initiated atom transfer radical polymerization. By systematically studying its current-voltage relationship, they found that the fabricated fluidic channel exhibits the defining behavior of a memristor – hence the name PFM. The ion memory originates from the relatively slow diffusion of anions into and out of the polyelectrolyte brushes.

The PFM could emulate short-term plasticity (STP) patterns, including paired-pulse facilitation and paired-pulse depression. These functions operate at voltages and energy consumption as low as those of biological systems, suggesting potential applications in bioinspired sensorimotor implementation, intelligent sensing, and neuroprosthetics.
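
Paired-pulse facilitation falls out of any device whose conductance relaxes slowly toward a voltage-dependent state. The toy model below is our own illustration, not the paper’s model; a single relaxation time stands in for the slow anion diffusion in the polyelectrolyte brush. When the second pulse arrives before the conductance has decayed, it drives a larger current than the first (depression would need an additional, depleting state variable).

```python
import numpy as np

def paired_pulse_ratio(gap_s: float, tau=0.5, dt=1e-4, g_rest=1e-9, g_on=1e-8,
                       v_pulse=0.5, width_s=0.05) -> float:
    """Toy first-order memristor: conductance g relaxes toward a
    voltage-dependent target with time constant tau (the 'ion memory')."""
    t = np.arange(0.0, 2 * width_s + gap_s + 5 * tau, dt)
    v = np.zeros_like(t)
    v[t < width_s] = v_pulse                              # pulse 1
    t2 = width_s + gap_s
    v[(t >= t2) & (t < t2 + width_s)] = v_pulse           # pulse 2
    g, current = g_rest, np.zeros_like(t)
    for k, vk in enumerate(v):
        target = g_on if vk != 0.0 else g_rest            # voltage-gated target
        g += (target - g) * dt / tau                      # slow relaxation
        current[k] = g * vk
    return current[t >= t2].max() / current[t < t2].max()

# Short gaps give ratios > 1 (facilitation); long gaps relax back toward 1.
for gap in (0.05, 0.2, 1.0, 5.0):
    print(f"gap = {gap:4.2f} s -> paired-pulse ratio = {paired_pulse_ratio(gap):.2f}")
```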

The PFM could also emulate chemically regulated STP. Based on the interaction between polyelectrolytes and counterions, the retention time could be tuned by changing the electrolyte. More importantly, in a physiological electrolyte (i.e., phosphate-buffered saline, pH 7.4), the PFM could emulate the regulation of memory by adenosine triphosphate (ATP), demonstrating the possibility of regulating synaptic plasticity with a neurotransmitter.

Furthermore, based on the interaction between polyelectrolytes and counterions, chemical-electric signal transduction was accomplished with the PFM – a crucial step toward the fabrication of artificial chemical synapses.

Because it structurally emulates ion channels, the PFM is versatile and interfaces easily with biological systems, paving the way to neuromorphic devices with advanced functions built through rich chemical design. This study provides a new way to interface chemistry with neuromorphic devices.

(l-r) Prof. Sarel Fleishman, Rosalie Lipsh-Sokolik and Dr. Olga Khersonsky

Israeli scientists develop machine-learning model for generating enzymes with unprecedented efficiency

Enzymes have the potential to transform the chemical industry by providing green alternatives to a slew of processes. These proteins act as biological catalysts, and with the help of molecular engineering, they can make naturally occurring reactions shift into turbo mode. Tailor-made enzymes could, for example, lead to nonpolluting drug manufacture; they could also safely break down pollutants, sewage, and agricultural waste, and then turn them into biofuel or animal feed.

A new Weizmann Institute of Science study, published today in Science, brings this vision closer to reality. In their report, the researchers, headed by Prof. Sarel Fleishman of the Biomolecular Sciences Department, unveil a computational method for designing thousands of different active enzymes with unprecedented efficiency by assembling them from engineered modular building blocks. 


Biochemists typically design new enzymes by randomly tweaking the DNA of naturally existing ones and screening the resultant variants for a desired activity, a process that can be extremely time-consuming. Fleishman’s team came up with the idea of generating large numbers of vastly diverse enzymes by breaking down natural ones into constituent fragments that can then be altered and recombined in various ways.

The inspiration for this new approach came from within: our immune system, which is capable of making billions of different antibodies – proteins that in principle can counter any harmful microorganism – just from the bits dictated by a relatively small number of genes. “Antibodies are the only family of proteins in nature known to be generated in a modular way,” Fleishman explains. “Their huge diversity is achieved by recombining preexisting genetic fragments, similar to how a new kind of electronic device is assembled from preexisting transistors and processing units.”

Could enzymes be generated, like antibodies, from lab-designed modular fragments that combine into larger structures?

Rosalie Lipsh-Sokolik, a Ph.D. student who led the study in Fleishman’s lab, started experimenting with a family of several dozen enzymes that break down xylan, a common component of plant cell walls. “If we manage to boost the activity of these enzymes, they might be used for breaking down plant compounds such as xylan and cellulose into sugars, which in turn can help generate biofuels,” Lipsh-Sokolik says. “Instead of disposing of agricultural waste, for example, we should be able to turn it into an energy source.”

Lipsh-Sokolik developed an algorithm that combines physics-based protein design calculations with a new machine-learning model. The algorithm broke down each of the different variants of xylan-breaking enzyme sequences into several fragments and then introduced dozens of mutations into those pieces – all in ways that maximized the potential compatibility of the different bits. It then assembled the fragments into different combinations and selected a million sequences encoding enzymes that were deemed likely to be stable.
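
In outline, the pipeline resembles the sketch below. Everything in it is a schematic stand-in: the fragment names are invented, and the two scoring functions are placeholders for the physics-based design calculations and machine-learning model described in the study.

```python
import itertools
import random

# Hypothetical fragment libraries from splitting enzyme sequences at
# structurally conserved points (names and sizes are illustrative).
fragment_libraries = {
    "segment_A": ["A1", "A2", "A3"],
    "segment_B": ["B1", "B2"],
    "segment_C": ["C1", "C2", "C3"],
}

def design_mutations(fragment: str) -> str:
    """Placeholder for design calculations that mutate a fragment
    to maximize compatibility with its partner fragments."""
    return fragment + "*"

def predicted_stable(design: tuple) -> bool:
    """Placeholder for the stability filter (here: a coin flip)."""
    return random.random() > 0.5

# Mutate each fragment, enumerate all combinations, keep predicted-stable designs.
mutated = {seg: [design_mutations(f) for f in frags]
           for seg, frags in fragment_libraries.items()}
designs = list(itertools.product(*mutated.values()))
selected = [d for d in designs if predicted_stable(d)]
print(f"kept {len(selected)} of {len(designs)} assembled designs")
```

The real method’s power comes from the quality of those two scoring steps, which the placeholders here deliberately gloss over.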

The next step for Lipsh-Sokolik and colleagues was to synthesize a million actual enzymes from these computational designs and test them in the lab. To their surprise, 3,000 were confirmed to be active. “The first time we looked at the experimental results, we were amazed,” Fleishman says. “The 0.3 percent success rate is not high, but the sheer number of different active enzymes we got was staggering. In typical protein design and engineering studies, you see maybe a dozen active enzymes.”

Armed with an extensive repertoire of enzymes, the researchers then asked a key question that interests protein researchers: What molecular features distinguish active enzymes from inactive ones?

Using machine learning tools, Lipsh-Sokolik examined about a hundred features that characterize enzymes and used the ten most promising ones to create an activity predictor. When she incorporated this activity predictor into her algorithm and repeated the design experiment with the xylan-breaking enzymes, this second-generation repertoire had as many as 9,000 enzymes that broke down xylan and another 3,000 that could break down cellulose, adding up to a total of 12,000 active enzymes. This was a tenfold increase in success rate over the initial experiment, and an unparalleled feat in the history of protein design: The team managed, in a single experiment, to design more potentially active enzymes than standard methods could produce in a decade.
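
Conceptually, that feedback loop looks like the sketch below: train a classifier on computed features of the first-generation designs, keep the most informative features, and use the resulting predictor to filter the next round. The data are random placeholders, and the model choice (a random forest) is our own assumption rather than the paper’s method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 100))              # ~100 computed features per design
y = (X[:, :10].sum(axis=1) > 0).astype(int)   # placeholder active/inactive labels

# Fit a classifier, then keep the ten most informative features.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top10 = np.argsort(clf.feature_importances_)[-10:]

# Retrain on the reduced feature set; this is the "activity predictor"
# that would screen the next generation of designs before synthesis.
clf_small = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, top10], y)
print(f"cross-validated accuracy: {cross_val_score(clf_small, X[:, top10], y, cv=5).mean():.2f}")
```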

Not only that, these thousands of active variants were exceptionally diverse in terms of both sequence and structure, which suggests that they may perform a wide variety of new functions.

“When you can create enzymes with such high levels of activity using a completely automated method that you now know is also incredibly reliable, that is really good news,” Lipsh-Sokolik says. Fleishman says the new Weizmann method, which the scientists call CADENZ – short for Combinatorial Assembly and Design of Enzymes – can, theoretically, be applied to any family of proteins. His team is already exploring its applications to the generation of new, improved antibodies or the creation of variants of the fluorescent proteins widely used as labels in biology.

“One of my goals is to change the way people engineer enzymes, antibodies, and other proteins,” Fleishman says. “Protein engineering is becoming a central part of the economy and public health: Industrial enzymes are proteins; antibodies and vaccines are also proteins. We need to be able to optimize them and to generate new ones in a robust and reliable way.”

The study’s participants included Dr. Olga Khersonsky and Shlomo-Yakir Hoch of the Weizmann Institute of Science’s Biomolecular Sciences Department; Drs. Sybrin P. Schröder and Casper de Boer and Prof. Hermen S. Overkleeft of Leiden University, the Netherlands; and Prof. Gideon J. Davies of the University of York, UK.