Supercomputers simulate the ‘impossible’ black hole merger, but do they really explain it?

When gravitational-wave detectors picked up the signal known as GW231123, astronomers were stunned. Two black holes, far too massive and spinning far too fast, merged in a way that standard stellar evolution shouldn’t allow. One scientist summed up the global reaction: "Those black holes should not exist."
 
Now, researchers supported by the Simons Foundation believe they have an answer. Their new paper in The Astrophysical Journal Letters proposes that these massive black holes were born not from earlier mergers but directly from the collapse of enormous, rapidly rotating stars: no supernova, no explosion, just a straight plunge into darkness. To test this idea, they turned to supercomputing. But does the model truly explain the mystery, or are we simply forcing the data to fit a convenient narrative?

The simulations

The research team employed state-of-the-art, end-to-end general-relativistic magnetohydrodynamic (GRMHD) simulations, among the most advanced black-hole-formation simulations currently available. The models ran on U.S. Department of Energy supercomputers, including clusters at Argonne and NERSC.
 
Unlike previous models that averaged out messy physics or assumed idealized collapse, these simulations:
  • Track a massive star from the late burning stage → collapse → black hole birth.
  • Include magnetic fields, rotation, and the feedback between jets and accretion disks.
  • Model how much mass falls into the black hole and how much is blasted away.

The key claim:
A 250-solar-mass helium star collapses into a black hole of ~40 solar masses, then accretes or blows off the rest depending on how magnetic fields choke or feed the flow. With a moderate magnetic field, the simulation produced a final black hole of ~100 solar masses with high spin, strikingly similar to GW231123.

 
In other words: rotation + magnetic fields + gravitational collapse = black holes that shouldn’t exist, suddenly… exist.

The “mass gap” problem

There’s a no-man’s-land of black hole masses, from ~70 to ~140 solar masses, called the pair-instability mass gap. In current theory, stars in this mass band are torn apart by pair-instability explosions so violent that no black hole is left behind. So how did GW231123 contain two of them?
 
The paper posits that rapid rotation quells the explosive instability, directing mass toward the black hole. Magnetic fields then regulate accretion and ejection: too weak, and the black hole overgrows; too strong, and it expels its own fuel. Only a "Goldilocks" magnetic field produces the massive, high-spin black holes observed.
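To make the "Goldilocks" logic concrete, here is a deliberately oversimplified toy model (ours, not the paper’s): the hole starts from the ~40-solar-mass seed described above, and the field strength sets how much of the remaining envelope is ejected rather than swallowed. Every constant and functional form below is an invented assumption.

```python
# Toy illustration of the "Goldilocks" magnetic-field idea (NOT the paper's
# GRMHD model). Assumption: stronger fields eject a larger fraction of the
# infalling envelope; the scaling law and constants are invented.

M_SEED = 40.0       # black-hole mass from direct collapse (solar masses)
M_ENVELOPE = 210.0  # stellar envelope left to accrete (solar masses)

def final_mass(b_field):
    """Final BH mass for a dimensionless field strength b_field >= 0."""
    ejected_fraction = min(1.0, 0.25 * b_field**2)  # invented scaling
    return M_SEED + (1.0 - ejected_fraction) * M_ENVELOPE

for b in [0.5, 1.0, 1.7, 2.0, 3.0]:
    print(f"B = {b:3.1f}  ->  final mass ~ {final_mass(b):6.1f} Msun")

# Weak fields let the hole overgrow toward the full ~250 Msun; strong fields
# choke it back to the ~40 Msun seed; a middling value lands near the
# ~100 Msun inferred for GW231123.
```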
 
The work is a tour de force of supercomputing and theoretical astrophysics. But skepticism is warranted.
  1. Too many knobs to turn. Rotation rate, magnetic field strength, metallicity: tweak any one of these and you get a different outcome. A perfect match may say more about parameter tuning than about nature.
  2. Assumptions stacked on assumptions. The model assumes these hyper-massive stars existed, paired in binaries, both rotating rapidly and with finely tuned magnetic fields. We have hints that such stars might exist, not proof.
  3. The simulation freezes spacetime. The GRMHD code evolves matter and magnetic fields on a fixed background; it does not dynamically evolve the black hole itself, whose mass and spin are adjusted afterward in post-processing (a toy sketch of that bookkeeping appears after this list). That means the most spectacular claim, reproducing the final mass and spin, comes partially from inference, not direct simulation.
  4. Explanations arrive after the observation. The merger was first called “impossible”; the theory arrived later. That’s classic scientific back-filling: plausible, but unproven.
Supercomputers are powerful, but they can turn into wish-fulfillment engines if we’re not careful.
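To make point 3 concrete, here is a toy sketch of fixed-spacetime post-processing: the black hole’s growth is reconstructed after the fact by integrating a recorded accretion flux rather than by evolving the hole in the metric. The flux series and numbers are invented for illustration.

```python
# Toy post-processing sketch (illustrative only; not the paper's pipeline).
# A fixed-metric GRMHD run records the mass flux through the horizon; the
# hole's growth is then reconstructed afterwards by integrating that flux.
import numpy as np

t = np.linspace(0.0, 100.0, 1001)      # time samples (seconds, invented)
mdot = 1.6 * np.exp(-t / 40.0)         # assumed accretion rate (Msun/s)

dt = t[1] - t[0]
mass = 40.0 + np.cumsum(mdot) * dt     # integrate dM/dt from the 40 Msun seed
print(f"inferred final mass ~ {mass[-1]:.0f} Msun")  # ~99 with these numbers
# Spin would be reconstructed the same way from an angular-momentum flux.
```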

What we can say with confidence

  • The event is real.
  • The black holes are too massive and spin too fast for standard models.
  • Supercomputing is the only way we can model the collapse with full magnetic, relativistic detail.
Whether this simulation reflects what nature actually does, or simply what our models are capable of doing, remains open.

The bottom line

A breakthrough? Possibly. A closed case? Not even close.
 
Supercomputers do not provide definitive truths; instead, they offer potential explanations. The validation of these possibilities rests on future observational data and the discovery of analogous black hole mergers.

Supercomputers unveil a new frontier: Could there be different types of black holes?

At the intersection of theory and extreme cosmic reality, physicists at Goethe University Frankfurt, in collaboration with international colleagues, have used cutting-edge supercomputing simulations to explore a profound question: Could there be more than one type of black hole? Their findings push the boundaries of astrophysics and suggest the "perfect black hole" might not exist.

A Shadow That Speaks

Black holes are often depicted as dark monsters swallowing light. But what is actually observed are not the black holes themselves, but the glowing matter swirling around them and the “shadow” the black hole casts against that luminous backdrop.
 
The research team led by Luciano Rezzolla (Goethe University) and collaborators from the Tsung‑Dao Lee Institute in Shanghai developed a method to simulate how black-hole shadows would differ if black holes obeyed different theories of gravity (not just Einstein’s).
 
Using vast supercomputing resources, they performed general-relativistic magnetohydrodynamic (GRMHD) and radiative transfer simulations of accretion flows around black holes that deviate from the standard Kerr solution, the mathematical description of rotating black holes in general relativity. 
 
By comparing synthetic images from these simulations, the team quantified how shadow images diverge when gravity is modified. They found that future imaging missions capable of percent-level fidelity (differences at the 2%–5% level) could discriminate between Einstein’s black holes and exotic alternatives.
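As a toy illustration of what a percent-level image mismatch means, here is a sketch using a simple normalized difference between two pixel arrays; the metric definition and images are our assumptions, not the paper’s pipeline.

```python
# Minimal sketch of a normalized mismatch between two synthetic black-hole
# images. The specific definition (normalized L2 difference) is an
# illustrative assumption, not necessarily the metric used in the paper.
import numpy as np

def mismatch(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Normalized L2 mismatch; 0 means identical images."""
    diff = np.linalg.norm(img_a - img_b)
    return 2.0 * diff / (np.linalg.norm(img_a) + np.linalg.norm(img_b))

rng = np.random.default_rng(0)
kerr = rng.random((256, 256))           # stand-in for a Kerr synthetic image
exotic = kerr + 0.02 * rng.standard_normal((256, 256))  # small perturbation

print(f"mismatch = {mismatch(kerr, exotic):.3f}")  # a few percent, i.e. the
# regime where the study says modified gravity could be told apart from Kerr
```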

Why Supercomputing Matters

Simulating black holes demands extreme high-performance computing. Researchers used clusters like TDLI-Astro and Siyuan Mark-I at Shanghai Jiao Tong University to run GRMHD and radiative-transfer models.
 
These models must account for plasma physics, magnetic fields, relativistic spacetime curvature, and light propagation in three dimensions, across numerous time steps and parameter variations.
 
Supercomputers are essential for this research, which sits at the intersection of astrophysics and computational science: they transform black holes from philosophical concepts into quantifiable objects, serving as our analytical instruments.

What This Could Mean for Einstein

For over a century, Einstein’s general relativity has been the standard theory of gravity. Within this framework, a rotating black hole has one precisely defined form: the Kerr metric. However, this new method poses a question: what if real black holes deviate from the Kerr model?
 
What if gravity behaves differently in the strong field near the event horizon? This research proposes observables derived from shadow shapes and intensities that could enable future telescopes to test these alternative theories. Simply put, high-resolution images of black holes could reveal whether Einstein's theory holds true under extreme conditions or if new physics is hidden in their shadows.
 
The research indicates that with image-comparison metrics at the 2%–5% mismatch level, missions can place meaningful observational constraints on deviations from the Kerr metric.

The Inspirational Takeaway

Imagine this: we are confronting humanity’s oldest questions. What is gravity, really? Are black holes monolithic or varied? Does Einstein’s masterpiece hold in the universe’s darkest corners? And we are answering them with supercomputers and telescopes. The cosmic realm becomes computational. This work by Goethe University Frankfurt and international partners suggests that the next decade in astrophysics could be a golden era, one that either verifies or revolutionizes our understanding of gravity. The universe offers us a handshake, and we are building the device to grasp it.

Looking Ahead

  • Upcoming telescope networks and space-based interferometers will be vital. This research sets the criteria for what such missions need to deliver: extremely high image fidelity of black hole shadows.
  • Continued advances in supercomputing will allow even more detailed simulations (including spins, magnetic fields, exotic metrics) to deepen the catalog of “what variations look like.”
  • From a philosophical vantage, if deviations from Kerr are ever found, we could be witnessing a paradigm shift, a rewriting of gravity itself.
In conclusion, the combination of supercomputers and cosmic imagery is transforming black holes into experimental laboratories. Researchers at Goethe University Frankfurt have developed a framework to determine whether black holes are uniform or varied and whether Einstein's theory remains valid.
 

Japanese researchers use MD simulations to understand RNA folding

In a quietly riveting development, researchers at the Tokyo University of Science (TUS) have harnessed molecular dynamics simulations to unravel how RNA molecules fold. A new paper from Associate Professor Tadashi Ando’s team reports that they successfully simulated the folding of a broad library of RNA stem-loops with unprecedented accuracy.

Why This Matters

RNA isn’t just a messenger of genetic code; it folds into complex 3-D shapes (secondary and tertiary structures) that determine its function in cells. Understanding this folding is key to designing RNA-based therapies. However, computationally modeling the process is extremely challenging: it requires tracking every atom, bond, and solvent molecule over long timescales. This is where supercomputing comes in.
 
The team conducted large-scale molecular dynamics (MD) simulations, starting with completely unfolded RNA stem-loops (10–36 nucleotides). They employed two advanced computational components: the DESRES-RNA atomistic force field (refined for high-accuracy RNA modeling) and the GB-neck2 implicit solvent model, which treats the surrounding solvent as a continuous medium, accelerating the simulations.
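To give a flavor of what such a run looks like in practice, here is a minimal implicit-solvent MD sketch using OpenMM, whose GBn2 option implements the GB-neck2 model. The input file names are hypothetical, and the parameters inside them stand in for DESRES-RNA (which OpenMM does not ship); this is a sketch of the general technique, not the team’s actual setup.

```python
# Minimal implicit-solvent MD sketch in OpenMM. Assumptions: 'rna.prmtop' /
# 'rna.inpcrd' are hypothetical Amber-format inputs for an unfolded stem-loop,
# standing in for the DESRES-RNA force field. GBn2 is OpenMM's GB-neck2.
from openmm import app, unit, LangevinMiddleIntegrator

prmtop = app.AmberPrmtopFile("rna.prmtop")
inpcrd = app.AmberInpcrdFile("rna.inpcrd")

system = prmtop.createSystem(
    nonbondedMethod=app.NoCutoff,   # no periodic water box in implicit solvent
    implicitSolvent=app.GBn2,       # GB-neck2 generalized Born model
    constraints=app.HBonds,
)

integrator = LangevinMiddleIntegrator(
    300 * unit.kelvin, 1.0 / unit.picosecond, 2.0 * unit.femtoseconds
)

sim = app.Simulation(prmtop.topology, system, integrator)
sim.context.setPositions(inpcrd.positions)
sim.minimizeEnergy()
sim.step(500_000)  # 1 ns here; real folding campaigns run far longer
```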
 
Results: Out of 26 RNA molecules, 23 folded into their expected shapes. For simpler stem-loops (18 total), they achieved a root mean square deviation (RMSD) of < 2 Å for the stems and < 5 Å for the full molecule, closely matching experimental structures. Even some complex motifs with bulges and internal loops (5 of 8) folded correctly, revealing distinctive folding pathways.
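For context, the RMSD figures above measure the average atomic displacement between simulated and experimental structures. A minimal sketch of the calculation, with invented coordinates and assuming the two structures are already superimposed (real pipelines align first, e.g., with the Kabsch algorithm):

```python
# Root-mean-square deviation between two (N x 3) coordinate sets in angstroms.
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.sum((coords_a - coords_b) ** 2, axis=1))))

ref = np.random.rand(120, 3) * 20.0                # hypothetical reference
sim = ref + np.random.normal(0.0, 1.0, ref.shape)  # ~1.7 A expected deviation

print(f"RMSD = {rmsd(sim, ref):.2f} A")
```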
 
While the article doesn't explicitly state this, research of this kind demands massively parallel computing, large memory footprints, and high-throughput sampling of molecular trajectories. The use of an implicit solvent model (GB-neck2) helped make the problem tractable, though it remained computationally intensive. Given Japan's rich supercomputing history and high-end compute centers, Ando's team effectively brought this level of computing to bear on a biomolecular-folding challenge.
 
This research establishes a reliable foundation for studying large-scale RNA conformational changes, a previously challenging area. Furthermore, it opens avenues for RNA-based drug design; accurate RNA folding simulations allow us to design molecules that target or mimic this folding.
 
Finally, it indicates a paradigm shift in supercomputing application, moving beyond raw power to employ smart methods, like force fields and solvent models, to optimize computational efficiency while maintaining accuracy.
 
Loop regions (parts of the RNA structure with internal loops or bulges) still showed lower accuracy (≈ 4 Å RMSD), indicating the models aren’t perfect yet. Implicit solvent models (GB-neck2) simplify the environment and accelerate simulations but might miss certain effects, such as how divalent cations (e.g., Mg²⁺) influence RNA structure. For supercomputing-scale applications, modeling even larger RNAs or including explicit solvent models will require significantly increased memory, compute time, and algorithmic complexity.

The Big Picture: Supercomputing → Biology → Therapies

The study used a combination of the DESRES-RNA atomistic force field and the GB-neck2 implicit solvent model to simulate 26 RNA stem-loops (10–36 nucleotides) from an unfolded state. They achieved folding success in 23 of 26 structures, with strong accuracy for many of them. The researchers explicitly note that the implicit solvent model (GB-neck2) is a compute-speed optimization: fewer explicit water molecules mean fewer total particles and, thus, less compute time.
 
Given the scale of the problem (simulating 26 RNA molecules with atomistic models from an unfolded state, even with an implicit solvent), here's a reasoned estimate: if each RNA simulation ran for tens to hundreds of nanoseconds of physical time, then accounting for simulation overhead, it would likely require hundreds to thousands of core-hours per RNA. Running these simulations in parallel on a mid-sized cluster (e.g., 100–1,000 cores), the total wall time could be anywhere from several days to a couple of weeks. While memory requirements per job might be moderate (a few tens of GB), the aggregate use across parallel jobs could easily reach hundreds of GB.
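For readers who like to see the arithmetic, here is the same back-of-envelope estimate spelled out; every input is our assumption, not a figure reported by the team.

```python
# Back-of-envelope version of the estimate above. All numbers are assumptions.
ns_per_rna = 100       # assumed simulated physical time per RNA (ns)
ns_per_day = 32.0      # assumed throughput of one 32-core job (ns/day)
cores_per_job = 32
n_rna = 26

wall_days = ns_per_rna / ns_per_day               # ~3.1 days per RNA
core_hours_each = wall_days * 24 * cores_per_job  # ~2,400 core-hours
total_core_hours = core_hours_each * n_rna        # ~62,400 core-hours

print(f"per RNA : ~{core_hours_each:,.0f} core-hours over {wall_days:.1f} days")
print(f"campaign: ~{total_core_hours:,.0f} core-hours if all 26 run in parallel")
```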
 
This work exemplifies the intersection of advanced computing and biology. The progression is clear: supercomputers, combined with refined algorithms, enable accurate simulations, paving the way for potential new medicines. This pipeline, once largely theoretical, is now entering practical application.

New supercomputing-enabled model offers fresh hope, but climate clock keeps ticking

A research team led by Hefei Institutes of Physical Science in China has unveiled a new deep-learning model that significantly improves the forecasting of roadside air pollutants. The model, called DSTMA-BLSTM (Dynamic Shared and Task-specific Multi-head Attention Bidirectional Long Short-Term Memory), achieved an R² above 0.94 on major pollutants and cut prediction errors by about 30% compared with conventional LSTM models.
 
The core innovation lies in how it decomposes the intertwined effects of traffic behavior, meteorology, and emissions: a shared “attention” layer extracts common temporal patterns across pollutants, while task-specific attention heads isolate the unique dynamics of each pollutant.
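As a rough illustration of that shared-plus-task-specific design, here is a minimal PyTorch sketch. The layer sizes, head counts, and wiring are our assumptions; the published DSTMA-BLSTM architecture surely differs in detail.

```python
# Minimal sketch of a shared + task-specific attention model over a BiLSTM,
# in the spirit of DSTMA-BLSTM. All hyperparameters are illustrative.
import torch
import torch.nn as nn

class SharedTaskAttentionBLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_heads=4, n_pollutants=3):
        super().__init__()
        # Bidirectional LSTM encodes the multivariate time series.
        self.blstm = nn.LSTM(n_features, hidden, batch_first=True,
                             bidirectional=True)
        d = 2 * hidden
        # One shared attention layer extracts temporal patterns common to
        # all pollutants ...
        self.shared_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        # ... and one task-specific attention + output head per pollutant
        # isolates its unique dynamics.
        self.task_attn = nn.ModuleList(
            nn.MultiheadAttention(d, n_heads, batch_first=True)
            for _ in range(n_pollutants))
        self.heads = nn.ModuleList(nn.Linear(d, 1) for _ in range(n_pollutants))

    def forward(self, x):                    # x: (batch, time, n_features)
        h, _ = self.blstm(x)                 # (batch, time, 2*hidden)
        shared, _ = self.shared_attn(h, h, h)
        outputs = []
        for attn, head in zip(self.task_attn, self.heads):
            task, _ = attn(shared, shared, shared)
            outputs.append(head(task[:, -1]))   # predict from last time step
        return torch.cat(outputs, dim=-1)       # (batch, n_pollutants)

model = SharedTaskAttentionBLSTM()
y = model(torch.randn(16, 24, 8))  # 16 samples, 24 time steps, 8 features
print(y.shape)                     # torch.Size([16, 3]), e.g. NO2, PM2.5, O3
```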
 
From a supercomputing and big-data standpoint, this matters: urban air pollution is a high-dimensional, non-linear system, subject to rapid shifts in traffic flows, weather, emission regimes, and chemical transformations. Taming this complexity requires serious computing power (for training these deep models) and real-time model inference that can integrate streaming sensor data, traffic flow telemetry, meteorological forecasts, and emissions inventories.
 
In other words, we are entering an era where supercomputing-class workflows (massive data, advanced AI architectures, real-time inference) are not just for cosmology or physics; they’re now essential for everyday environmental management.

Why the urgency? And why the timing is glaring

A high-accuracy pollutant forecasting system is not confined to the lab. In an era of accelerating climate change, urbanization, and increasing regulatory pressure, the ability to predict pollutant spikes (such as traffic-related NO₂, PM₂.₅, and ozone precursors) has direct implications for public health, energy-use strategies, and climate policy.
 
However, we are at a precarious point. The COP30 climate summit in Belém, Brazil (Nov 10-21, 2025), saw world leaders state clearly that the planet has already exceeded the 1.5 °C threshold above pre-industrial levels, a critical point for habitability. The summit agenda focuses not only on mitigation (reducing emissions) but also on adaptation, resilience, and science-based decision-making.
 
This directly relates to the Hefei team's work: one enabler of adaptation is improved forecasting of environmental hazards (including air quality), made possible by computing power and AI. If cities can anticipate problems sooner, they can respond more quickly.
 
But here’s the catch:
  • Better forecasting is necessary, but not sufficient: You can predict pollutant spikes, but if the infrastructure, policies, or finance to act are missing, forecasting becomes an academic exercise.
  • The compute-intensive nature of such models means only organizations with high-performance infrastructure or dedicated cloud investments can deploy them, raising concerns about inequality across cities and nations.
  • At COP30, despite abundant promises, a significant gap persists. According to policy analysts, current national plans (NDCs) still place the world on a warming trajectory of 2.3-2.8 °C, well exceeding the 1.5 °C target.
  • Brazil’s hosting of COP30 is symbolically powerful; the Amazon region is central to global climate dynamics, yet the infrastructure demands of such a summit (and the larger transition) place additional pressure on ecosystems and resources.

What this means for cities

For any firms working at the intersection of big data, real estate, and predictive systems, here’s the play:
  • Integrate supercomputing-grade forecasting models into urban-scale platforms (e.g., neighborhood-level pollutant alerts, real estate risk dashboards, development-planning tools).
  • Recognize that climate risk is now ambient: air-quality shocks, energy-use surges, and infrastructure strain all feed into property value, tenant demand, and regulatory exposure.
  • Position real-estate intelligence tools to reflect the new era: not just “location, condition, comps” but “real-time environmental intelligence, resilience capacity, compute-enabled forecasts”.
  • Advocate for compute equity: if only select cities can afford real-time supercomputing models, the climate justice gap widens. Platforms that democratize access become strategic.

Bottom line

The Hefei team’s advance is a hopeful sign: supercomputing and AI are proving to be potent levers in environmental forecasting and management. But the larger picture remains sobering: at COP30, the world was warned we are already beyond critical thresholds, and cities face accelerating hazards. The compute muscle is necessary now; it must be matched by policy, infrastructure, equity, and action.
 
If we don’t build the “compute infrastructure for resilience” alongside our climate infrastructure, forecasts risk becoming unused tools in a climate-stressed world. Let’s keep these worlds (supercomputing, urban resilience, and climate policy) tightly coupled.

Antarctica’s cry, and the supercomputer answers: a grim forecast

In research resembling a cosmic warning, scientists at the University of Rhode Island (URI) and collaborators used advanced supercomputing to simulate how the melting Antarctic Ice Sheet will reshape our climate and coastlines over the next two centuries. The results are sobering. Dr. Ambarish Karmalkar, assistant professor in URI’s Department of Geosciences and co-author of the study, helped design and run simulations that integrate the ice sheet, ocean, and atmosphere simultaneously.
 
“Simulating ice-sheet–climate interactions … is challenging but critical,” he says. Supercomputing is proving to be the heavy lifter of climate truths. To gain meaningful insight into complex systems like Antarctica’s ice and the global climate, the team relied on high-end supercomputing resources. In their experiment, they ran interactive models on a supercomputer that allowed the meltwater discharge from Antarctica to dynamically affect oceans and atmosphere, rather than just being included as a simple input.
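To see why two-way coupling matters, consider a deliberately crude toy, entirely invented and nothing like the URI team’s Earth-system model: when melt is a fixed input, the ocean response is locked in; when melt responds to the ocean, the whole trajectory changes.

```python
# Crude toy contrasting one-way forcing with two-way coupling. All equations
# and constants are invented for illustration; this is not the URI model.

def run(years=200, coupled=True):
    temp, melt = 0.0, 1.0        # ocean temperature anomaly, melt rate (arb.)
    for _ in range(years):
        temp += 0.02 - 0.015 * melt   # meltwater offsets background warming
        if coupled:
            melt = max(0.0, melt + 0.004 * temp)  # ocean feeds back on melt
    return temp, melt

for coupled in (False, True):
    temp, melt = run(coupled=coupled)
    label = "coupled" if coupled else "one-way"
    print(f"{label:7s}: temp anomaly {temp:+.2f}, melt rate {melt:.2f}")
# The coupled run's feedback reshapes the whole trajectory instead of
# merely scaling a fixed input.
```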

Why does this matter? 

Previous models, lacking real-time feedback from the ice sheet, painted an overly optimistic picture. However, when we fully couple the ice, ocean, and atmosphere, we uncover hidden risks: uneven sea level rise, particularly in the Pacific, Indian Ocean, and Caribbean; unexpected warming in regions far from Antarctica, such as eastern North America; and complex, counter-intuitive dynamics where meltwater cools the Southern Hemisphere but warms the Northern Hemisphere. In short, the supercomputer didn't just predict global sea level rise; it revealed where, how fast, and how unevenly it will occur.

The Forecast: One to Three Meters by 2200, Unless We Act

Under a “very high emissions” scenario, the melting Antarctic sheet alone could contribute over 3 meters (10 feet) of global sea level rise by the year 2200. Under a more moderate scenario, it’s still ~1 meter (3 feet).
 
Meanwhile, some low-lying islands and coastal regions in the Pacific, Indian, and Caribbean zones could see regional rises of up to 1.5 meters (5 feet) due to gravitational and Earth-deformation effects.
 
These are not distant problems. By 2060, more than one billion people will live in low-elevation coastal zones that are already vulnerable to storms, erosion, and surge.
 
And the ripple effects of major sea-level rise reach inland: migration pressures, infrastructure costs, and economic shifts, all spreading outward in waves.

The Takeaway

The team at URI used cutting-edge supercomputing to reveal a harder truth: melting Antarctica isn't a far-off apocalypse; it's an unfolding structural change with winners, losers, and vast uncertainties. The models show a world where your location and speed of action make a difference. Shrugging won't help. Investing in "knowing the future" through data, modeling, narratives, and tools will.