Can Scientific AI truly solve quantum chemistry’s hardest problems?

Today's press release from Heidelberg University in Germany highlights a notable advance in quantum chemistry: researchers have leveraged “scientific artificial intelligence” to address a longstanding challenge, calculating molecular energies and electron densities without relying on orbitals. This approach, called orbital-free density functional theory (OF-DFT), has often been dismissed as impractical because even small errors in electron density can lead to non-physical outcomes. The university’s new AI-driven model, STRUCTURES25, reportedly overcomes these hurdles by stabilizing the calculations and producing physically meaningful results, even for more complex molecules.
 
The main appeal of this orbital-free method is efficiency: by bypassing the explicit calculation of quantum mechanical wave functions (orbitals), which become computationally expensive as system size grows, chemists could significantly cut computational costs and enable simulations of much larger molecules, a persistent barrier in materials design, drug discovery, and energy research.
 
At its core, the new method trains a neural network to map electron density directly to energy and other quantum properties, using training data from conventional, more expensive quantum chemical calculations. The researchers emphasize that their model was trained not just on optimal solutions, but also on perturbed data around the correct answer, a strategy they argue helps the system avoid getting “lost” in unphysical results during prediction. The result? According to the team, STRUCTURES25 achieves a level of accuracy competitive with established reference methods while scaling more efficiently with molecular size.
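To make the training strategy concrete, here is a deliberately toy sketch of the idea: a small neural network regresses energy from density descriptors, and the training set is augmented with perturbed copies of each reference density. Everything in the snippet, the descriptors, the synthetic target, the network size, and the noise scale, is an illustrative assumption rather than the architecture of the Heidelberg model.

```python
# Illustrative sketch only: a toy regressor from density descriptors to energy,
# trained on both "converged" samples and perturbed copies of them.
# Descriptors, target functional, network size, and noise scale are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_samples, n_features = 256, 32               # e.g. coefficients of a density expansion
rho = torch.rand(n_samples, n_features)        # stand-in "density" descriptors
energy = (rho ** 2).sum(dim=1, keepdim=True)   # synthetic stand-in for reference energies

# Augment with perturbed densities around each reference point, so the model
# also sees (and learns to handle) slightly "wrong" densities.
rho_perturbed = rho + 0.05 * torch.randn_like(rho)
energy_perturbed = (rho_perturbed ** 2).sum(dim=1, keepdim=True)

X = torch.cat([rho, rho_perturbed])
y = torch.cat([energy, energy_perturbed])

model = nn.Sequential(nn.Linear(n_features, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```

The actual model works with physically meaningful density representations and reference energies from expensive quantum chemical calculations; the point of the augmentation step is the same, exposing the network to imperfect densities so it behaves sensibly when it encounters them at prediction time.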
 
The press materials present these findings as a major triumph for scientific artificial intelligence, implicitly suggesting that AI has matured enough to solve central problems of quantum chemistry. Yet a closer look reveals reasons for cautious interpretation, especially for SC Online’s technically informed readership.

Promise vs. Practicality

The underlying scientific goal, constructing a reliable density functional that predicts energy from electron density alone, is grounded in the Hohenberg-Kohn theorems, which mathematically guarantee that such a functional exists. But the theorems do not tell us how to find it, and decades of theoretical work have shown that constructing an exact, universally accurate functional remains elusive.
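For readers who want the formal statement, the Hohenberg-Kohn variational principle can be written in its standard textbook form (atomic units) as

\[
E_0 = \min_{n(\mathbf{r})} \left\{ F[n] + \int v_{\mathrm{ext}}(\mathbf{r})\, n(\mathbf{r})\, \mathrm{d}^3 r \right\},
\qquad
F[n] = T[n] + V_{ee}[n],
\]

where the universal functional \(F[n]\) collects the kinetic and electron-electron contributions. The theorems guarantee that \(F[n]\) exists; they give no constructive recipe for it, which is exactly the gap that both hand-built approximations and learned functionals try to fill.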
 
Most practical quantum chemistry remains anchored in Kohn-Sham density functional theory (KS-DFT), which reintroduces orbitals to approximate the true many-electron problem with usable accuracy while still facing steep computational cost. OF-DFT, in contrast, has always struggled with accuracy because the electron kinetic energy, a dominant contributor to total molecular energy, is not known exactly as a functional of the density alone. Supervised machine learning can fit complex mappings, but it does not change the fact that the underlying physics is approximated rather than derived from first principles.
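The classic orbital-free approximations illustrate why this is hard. The Thomas-Fermi and von Weizsäcker kinetic-energy functionals, shown here in their standard textbook form (atomic units) and not specific to the new work, capture only limiting cases of the true kinetic energy and fall far short of chemical accuracy for real molecules:

\[
T_{\mathrm{TF}}[n] = C_F \int n(\mathbf{r})^{5/3}\,\mathrm{d}^3 r,
\qquad C_F = \tfrac{3}{10}\left(3\pi^2\right)^{2/3},
\qquad
T_{\mathrm{W}}[n] = \frac{1}{8}\int \frac{|\nabla n(\mathbf{r})|^2}{n(\mathbf{r})}\,\mathrm{d}^3 r .
\]

A learned kinetic-energy (or total-energy) functional sits in the same slot; machine learning changes how the approximation is constructed, not the fact that it is one.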
 
Even the most recent advances in the field acknowledge that machine learning can narrow the gap between theory and practice, but they stop short of claiming a definitive solution. For example, a recent paper in the Journal of the American Chemical Society demonstrates that ML-enhanced orbital-free DFT can achieve chemical accuracy for a benchmark dataset when trained with high-quality reference data, a noteworthy achievement on its own; yet the approach depends on that reference data, and its applicability outside the trained molecular classes remains an open question.

The AI Hype Trap

This distinction matters: while AI-driven models like STRUCTURES25 can accelerate and scale certain calculations, they do not replace the fundamental approximations and assumptions of the underlying physics. The model’s success on organic molecules drawn from benchmark sets is a necessary proof of concept, but it is not yet evidence that AI has unlocked a universal remedy for the computational complexity of quantum chemistry. Indeed, even classical machine learning approaches applied to OF-DFT have shown promise in limited domains but struggle with generalization beyond trained chemical spaces.
 
For researchers in computational science and supercomputing, the real takeaway should be this: AI can be a powerful tool when combined with robust physical models and vast computational resources, but it is not a silver bullet that magically eliminates the exponential complexity of the quantum many-body problem. Supercomputers remain indispensable for generating the high-quality reference calculations needed to train and validate these models, and HPC continues to be the arena where theory, data, and computation intersect.

Where Future Challenges Remain

Key questions include:
* How well do AI-trained orbital-free models generalize to systems outside their training data?
* Can such models maintain physical consistency in extreme chemical environments, such as transition metal complexes or excited states?
* Do the computational savings of orbital-free approaches outweigh the costs of generating training data on large supercomputing installations?
 
Until such questions are rigorously addressed, claims of “solving” central problems with AI should be viewed with an appropriately critical lens, not to dismiss progress, but to contextualize it.
 
In summary, the Heidelberg work represents an interesting computational advance built on the interplay between machine learning and quantum chemistry. But rather than signifying a definitive breakthrough, it fits into a broader pattern: AI augments existing methods and enriches the toolkit of computational chemistry, yet still depends on supercomputing and fundamental physics to realize its potential.

Mystery beneath the ice: Supercomputers illuminate the Antarctic gravity anomaly

For years, geophysicists have been baffled by an unusual gravitational “hole” beneath Antarctica’s massive ice sheet. Recent advances in supercomputer modeling are now revealing what lies beneath the frozen landscape and how deep-Earth processes may be influencing the continent’s surface. Research led by the University of Florida demonstrates how sophisticated computational tools are bringing hidden aspects of our planet’s interior to light.
 
This anomaly, a region of unexpectedly weak gravitational pull roughly the size of a small country, was first identified using satellite gravity data. Usually, gravity readings over ice correspond to the total mass of rock and ice below. However, in this region of Antarctica, the gravitational pull was weaker than anticipated, hinting that something unusual lies within the deep crust or upper mantle. The anomaly sits inland from the Ross Ice Shelf, one of Antarctica’s largest floating ice extensions.
 
To investigate the anomaly, a team of geoscientists, led by the U.S. Antarctic Program and collaborating with researchers worldwide, turned to supercomputer-based geophysical models. Their goal was to test whether variations in rock composition, temperature, and structure could reproduce the gravity signal seen at the surface. These models fold a range of data, from seismic imaging gathered in earlier surveys to satellite gravity measurements, together with the physics governing how rocks deform under pressure, into a comprehensive simulation of Earth’s interior beneath Antarctica.
 
Running these simulations is a formidable computational challenge. Researchers must solve the complex equations of continuum mechanics and gravity simultaneously, accounting for thousands of variables that span many orders of magnitude in scale. The only tools capable of handling such a workload are high-performance computing (HPC) systems with extensive parallel processing capabilities. Without supercomputers, exploring thousands of potential configurations of rock density and structure beneath Antarctica would be all but impossible.
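At the heart of such a workflow is a forward problem: given a candidate model of subsurface densities, predict the gravity field an instrument would measure at the surface, then compare with observation and adjust. The sketch below shows the simplest possible version, summing point-mass contributions from a gridded density-contrast model; the grid, depths, and contrast values are assumptions for illustration, not the study's actual model or code.

```python
# Minimal sketch (not the study's code): forward-model surface gravity from a
# gridded density-contrast model by summing point-mass contributions.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Hypothetical 3-D grid of density contrasts (kg/m^3) in a 100 km cube at depth.
nx = ny = nz = 20
dx = 5_000.0                              # cell size in metres
contrast = -30.0 * np.ones((nx, ny, nz))  # uniformly lighter rock, for illustration
cell_mass = contrast * dx**3              # mass deficit per cell

# Cell-centre coordinates; the body's top sits 20 km below the surface.
x = (np.arange(nx) + 0.5) * dx
z = 20_000.0 + (np.arange(nz) + 0.5) * dx
X, Y, Z = np.meshgrid(x, x, z, indexing="ij")

def gz_at(px, py):
    """Vertical gravity perturbation (m/s^2) at a surface point (px, py, 0)."""
    r2 = (X - px) ** 2 + (Y - py) ** 2 + Z ** 2
    return np.sum(G * cell_mass * Z / r2 ** 1.5)

# Predicted anomaly directly above the body's centre, in mGal (1 mGal = 1e-5 m/s^2).
print(gz_at(x.mean(), x.mean()) / 1e-5)
```

Production models replace point masses with finite-element or prism-based solvers and repeat this comparison for thousands of candidate configurations, which is where parallel HPC resources become essential.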
 
The results suggest that the gravity hole may be explained by a combination of lighter-than-expected rock compositions and localized thermal anomalies in the upper mantle. In particular, regions where rocks are warmer, and therefore less dense, can create a measurable reduction in gravitational acceleration. These warmer zones may arise from ancient mantle processes, remnants of tectonic activity that predate Antarctica’s present ice cover.
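A back-of-envelope calculation shows why even modest density differences matter. Using the textbook infinite-slab (Bouguer) approximation, with numbers chosen purely for illustration rather than taken from the study, a 10 km thick layer of rock just 30 kg/m³ lighter than its surroundings produces

\[
\Delta g \approx 2\pi G\,\Delta\rho\,h
\approx 2\pi \times 6.674\times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}} \times (-30\ \mathrm{kg/m^3}) \times 10^{4}\ \mathrm{m}
\approx -1.3\times 10^{-4}\ \mathrm{m/s^2} \approx -13\ \mathrm{mGal},
\]

comfortably within the sensitivity of modern satellite and airborne gravimetry.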
 
Lead author Dr. Matthew Schmidt describes the finding as “a fascinating clue to Antarctica’s deep past.” Rather than pointing to a void or missing mass beneath the ice, the gravity anomaly appears to reflect variations in the physical properties of deep rocks, information that can only be teased out through computational modeling anchored in robust physics and constrained by observational data.
 
For computational geoscientists, this work exemplifies the transformative role of supercomputing in Earth science. Supercomputers allow researchers to experiment with a wide range of theoretical models, fine-tuning parameters until the simulations align with real-world measurements. In the case of the Antarctic gravity hole, this meant iterating through many plausible combinations of rock types, temperature distributions, and structural configurations, an effort that would be impractical on conventional computing hardware.
 
The implications extend beyond one anomaly. Understanding gravitational variations beneath Antarctica has significance for models of ice sheet stability and long-term sea level change, because subtle differences in the Earth’s internal structure can influence how ice flows and how the land beneath it responds. As climate change accelerates ice loss in polar regions, accurate models of both ice dynamics and the solid Earth are essential for forecasting future impacts.
 
Supercomputing has become the bridge between observation and understanding in such contexts, enabling scientists to visualize what cannot be seen and test hypotheses that would otherwise remain speculative. By integrating diverse datasets and the laws of physics into unified simulations, researchers are now able to explore what lies beneath remote and inaccessible places like Antarctica.
 
In a broader sense, the Antarctic gravity hole reminds us that the Earth still holds deep mysteries, and that supercomputers are among the most powerful instruments available for unlocking them. As computational capabilities continue to grow, so too will our ability to decode the planet’s hidden signals and better understand the forces that shape the world beneath our feet.
Flooding impacts in Worcester, VT (2024). Photo by AOT.

NextGen Water Resources Modeling Framework: Integrating hydrologic science, data systems

As torrential storms drive rivers to overflow, the importance of precise flood forecasting has never been greater. With climate extremes becoming more severe, scientists increasingly rely on advanced computing, and especially supercomputing, to expand the frontiers of water prediction. A recent partnership between the National Weather Service’s Office of Water Prediction (OWP) and the University of Vermont (UVM) has resulted in a potentially game-changing advancement in forecasting technology, grounded in supercomputing and next-generation modeling.
 
At the heart of this effort is the newly published NextGen Water Resources Modeling Framework. This framework isn’t just another hydrologic model; it is a flexible, model-agnostic platform designed for the modern era of computing. It enables researchers to run diverse hydrologic and hydraulic models under a common architecture, whether on a laptop, in the cloud, or on a high-performance supercomputer.
 
What makes NextGen intriguing for the supercomputing community is its ambition to fuse massive geospatial datasets, physical process models, and performance-oriented compute resources. Traditional flood forecasting systems have often been constrained by rigid, single-model architectures that struggle to scale across regions or use the full capacity of parallel computing systems. The NextGen framework sidesteps these limits by allowing heterogeneous models, written in languages such as C, Fortran, and Python, to execute concurrently in a unified environment, leveraging standards like the Basic Model Interface for data exchange and configuration.
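To give a flavor of what such a contract looks like in practice, the sketch below wraps a toy linear-reservoir runoff model in a handful of BMI-style methods (initialize, update, get_value, set_value, finalize). It loosely follows the spirit of the Basic Model Interface rather than reproducing NextGen's actual API, and the model itself is a placeholder chosen for brevity.

```python
# Simplified illustration, loosely following Basic Model Interface conventions;
# not NextGen's actual API. The toy linear-reservoir model is an assumption.
class LinearReservoirBmi:
    """A toy rainfall-runoff model exposed through BMI-style methods."""

    def initialize(self, k: float = 0.1, dt: float = 1.0) -> None:
        self.k, self.dt, self.time = k, dt, 0.0
        self.storage = 0.0     # mm of water stored in the catchment
        self.rainfall = 0.0    # mm per time step, set by the framework
        self.runoff = 0.0      # mm per time step, read by the framework

    def set_value(self, name: str, value: float) -> None:
        setattr(self, name, value)

    def get_value(self, name: str) -> float:
        return getattr(self, name)

    def update(self) -> None:
        # Linear reservoir: outflow is proportional to storage.
        self.storage += self.rainfall
        self.runoff = self.k * self.storage
        self.storage -= self.runoff
        self.time += self.dt

    def finalize(self) -> None:
        pass


# A framework-like driver could couple many such models in one loop:
model = LinearReservoirBmi()
model.initialize()
for rain in [5.0, 0.0, 12.0, 3.0]:          # forcing, mm per step
    model.set_value("rainfall", rain)
    model.update()
    print(model.get_value("runoff"))
```

Because every component answers the same small set of calls, a driver can step heterogeneous models forward in lockstep and exchange their state, regardless of whether the underlying code is written in C, Fortran, or Python.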
 
Supercomputers excel at distributing the solution of complex equations across millions of computing cores. Flood forecasting requires simulating sophisticated, multi-dimensional physical processes, from rainfall infiltration and snowmelt runoff to river routing, across vast spatial domains. By opening doors to distributed execution and modular coupling of models, NextGen lays the groundwork for future implementations that could harness supercomputers to deliver real-time, high-resolution forecasts at continental scales.
 
In their institutional announcement, UVM researchers highlighted how the framework addresses long-standing challenges in hydrologic prediction, particularly the need to simulate water’s movement through a landscape that varies wildly in terrain, soil, vegetation, and climate. With computing at its core, NextGen handles a wide variety of models and data inputs through standardized interfaces and outputs, enabling researchers and forecasters to run experiments that were once computationally prohibitive.
 
For computational scientists, the framework’s support for high-performance environments isn’t just about raw speed; it’s about collaboration across disciplines. The ability to prototype a new flood-inundation algorithm in Python one day, and then scale it to run across thousands of nodes on a supercomputer the next, opens doors for innovative research pipelines that blur the line between development and deployment.
 
Looking ahead, the NextGen framework promises to influence not just national operational models, such as the forthcoming version of the National Water Model, but also fundamental research in hydrology and Earth system simulation. When paired with advances in machine learning, GPU-accelerated computing, and real-time data assimilation, this modular foundation could spur a new generation of forecasting applications that bring supercomputing power directly to the urgent task of flood prediction.
 
Every hour of reliable flood warning can mean lives saved and billions of dollars in damages avoided. The integration of supercomputing and hydrologic science is no longer a technological novelty; it is an urgent need. As NextGen takes the lead, the flood forecasting field stands poised for a paradigm shift, fueled by high-performance computing once exclusive to fields like physics and cosmology.

How big can a planet be? Supercomputing unlocks the secrets of giant worlds

Planetary science is undergoing a remarkable transformation as astronomers revisit a core cosmic mystery: What are the true limits on how large a planet can grow? By combining the latest astronomical observations with the extraordinary capabilities of supercomputers, researchers are discovering that the boundary between massive planets and failed stars is less distinct than previously believed. This work highlights how crucial computational power has become in unraveling the complexities of the universe.

Driving this scientific revolution is the HR 8799 star system, situated roughly 133 light-years from Earth in the constellation Pegasus. Here, four gigantic gas planets, each five to ten times the mass of Jupiter, are challenging traditional models of planet formation that are based on our own solar system.

From JWST’s Spectra to Computational Insights

The groundbreaking observations came from the James Webb Space Telescope (JWST), humanity's most powerful space observatory. JWST’s advanced spectrographs captured faint light from these distant giants, around 10,000 times fainter than their star, and revealed the spectral fingerprints of molecules previously hidden from view. Among these was hydrogen sulfide (H₂S), a refractory molecule that is a tell-tale marker of solid materials in the early planetary disk.
 
Identifying sulfur and other heavy elements in these far-off worlds was only possible thanks to supercomputing-driven atmospheric models and spectral extraction techniques. Researchers had to push simulations far beyond traditional grids, iteratively refining the physics and chemistry encoded in their models to match the rich JWST data. These computational efforts let scientists separate the faint planetary signals from the overwhelming glare of the host star, and decode what the spectral lines say about formation paths.
 
What they found is remarkable: the HR 8799 giants appear to have formed via core accretion, a process where planets grow gradually by accumulating solids into a dense core before capturing surrounding gas. This is the same fundamental mechanism thought to have shaped Jupiter and Saturn, but on a much grander scale and at far greater distances from their star.

Uniform Enrichment: A Shared Planetary Heritage

In companion work, scientists reported that these massive exoplanets are uniformly enriched in heavy elements relative to their star, across both volatile elements (like carbon and oxygen) and refractory species such as sulfur. This uniformity strongly points to efficient solid accretion during planet formation and suggests that the ingredients of planet-building are similar across a wide range of environments, even for giants many times Jupiter’s mass.
 
Crucially, interpreting this complex chemistry wouldn’t be possible without high-performance computing. Supercomputers are used to:
  • Simulate protoplanetary disk conditions, exploring how cores form and accrete material over millions of years.
  • Generate atmospheric models that predict how molecules absorb and emit light under varying temperatures and pressures.
  • Fit these models to real spectral data from JWST, using optimization techniques only feasible at scale.
These tasks require petaflops of processing power and terabytes of memory, and they leverage algorithms developed by astrophysicists and computational scientists alike.
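The last of those tasks, fitting models to data, is conceptually simple even though it becomes expensive at scale. The toy example below fits a single absorption-line depth to synthetic "observed" data by minimizing a chi-squared statistic; the wavelengths, line position, and noise level are illustrative assumptions, not values from the HR 8799 analysis.

```python
# Toy illustration (not the actual retrieval code): fit a model spectrum to
# noisy "observed" data by chi-squared minimization over one parameter,
# here the depth of a single absorption feature.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
wavelength = np.linspace(1.0, 2.0, 200)        # microns, arbitrary range

def model_spectrum(depth):
    """Flat continuum with one Gaussian absorption line of the given depth."""
    return 1.0 - depth * np.exp(-0.5 * ((wavelength - 1.5) / 0.02) ** 2)

# Synthetic "observation": true depth 0.3 plus Gaussian noise.
sigma = 0.02
observed = model_spectrum(0.3) + rng.normal(0.0, sigma, wavelength.size)

def chi2(depth):
    return np.sum(((observed - model_spectrum(depth)) / sigma) ** 2)

best = minimize_scalar(chi2, bounds=(0.0, 1.0), method="bounded")
print(f"best-fit line depth: {best.x:.3f}")
```

Real atmospheric retrievals do the same thing over dozens of correlated parameters and enormous grids of precomputed model spectra, typically with Bayesian samplers, which is why they lean on HPC resources.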

Beyond Our Solar System and Beyond Traditional Limits

Why does this matter for supercomputing? Answering today's big questions about planets, whether they are Earth-like, Neptune-like, or giants towering over Jupiter, depends on the ability to compute the physics of formation and evolution under conditions we cannot recreate in the lab.
 
Where once planetary formation theories were built around our solar system’s modest giants, the HR 8799 results push us to ask even bolder questions: Can planets reach 15, 20, or even 30 times Jupiter's mass while still forming like planets, rather than stars? And, if so, what does that mean for how we define planets versus brown dwarfs?
 
With supercomputing as our engine, astronomers are not just cataloging distant worlds; they are rewriting the science of how those worlds came to be. As more data from JWST and future observatories pour in, this fusion of observation, theory, and computation promises to transform our understanding of planetary systems across the galaxy.
 
In that sense, the answer to "how big can a planet be?" isn’t just about mass, it’s about the growing scale of human curiosity and the computational tools we build to answer it.

Cracking the code of spider silk: Supercomputers reveal nature's molecular secrets

Spider silk is renowned as one of nature's most extraordinary materials, being both lightweight and exceptionally strong. It surpasses Kevlar in toughness and is stronger than steel when compared by weight. For years, scientists could only speculate about how this protein-based fiber achieved such a unique blend of strength and flexibility. Recently, however, researchers from King's College London and San Diego State University have revealed the molecular secret behind spider silk's remarkable properties. By combining advanced computational modeling with laboratory experiments, they have shown how supercomputers are transforming our understanding of materials science.
 
The study identifies how specific chemical interactions between the amino acids arginine and tyrosine drive the transformation of spider silk proteins from a dense liquid into solid, high-performance fibers. These interactions serve as molecular "stickers," triggering protein clustering in the earliest moments of silk formation and continuing to influence the fiber as its complex nanostructure develops.
 
Understanding this process at the molecular level would have been nearly impossible without computational tools. The researchers used molecular dynamics simulations, structural predictions from tools like AlphaFold3, and other high-performance modeling techniques to explore how vast numbers of atoms interact over time as the silk proteins assemble. These calculations involve solving complex physics equations for millions of interacting particles, a task that demands supercomputing resources capable of parallel processing at scale.
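For readers unfamiliar with what a molecular dynamics step actually computes, the minimal sketch below integrates a few Lennard-Jones particles with the velocity Verlet scheme. It is a toy in reduced units, not the force fields or system sizes used in the silk study, but the per-step pattern, computing all pairwise forces and then advancing positions and velocities, is exactly the workload that parallel supercomputers scale up to millions of atoms.

```python
# Minimal molecular-dynamics sketch: a few Lennard-Jones particles integrated
# with velocity Verlet. Toy reduced units and sizes, purely for illustration;
# production silk simulations use specialized force fields and far larger systems.
import numpy as np

dt, steps = 1e-3, 1000
spacing = 2.0 ** (1.0 / 6.0)                  # Lennard-Jones equilibrium distance
grid = np.arange(3) * spacing
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T.astype(float)
n = len(pos)                                  # 27 particles on a small cubic lattice
vel = np.zeros_like(pos)

def forces(pos):
    """Pairwise Lennard-Jones forces with epsilon = sigma = 1 (reduced units)."""
    f = np.zeros_like(pos)
    for i in range(n):
        d = pos[i] - pos                      # vectors from every particle j to i
        r2 = np.sum(d * d, axis=1)
        r2[i] = np.inf                        # exclude self-interaction
        inv6 = 1.0 / r2 ** 3
        mag = 24.0 * (2.0 * inv6 ** 2 - inv6) / r2
        f[i] = np.sum(mag[:, None] * d, axis=0)
    return f

f = forces(pos)
for _ in range(steps):                        # velocity Verlet integration
    vel += 0.5 * dt * f
    pos += dt * vel
    f = forces(pos)
    vel += 0.5 * dt * f
```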
 
Professor Chris Lorenz, lead author and expert in computational materials science, explains that the study reveals atom-by-atom mechanisms previously hidden from view. "This study provides an atomistic-level explanation of how disordered proteins assemble into highly ordered, high-performance structures," he said, highlighting the power of computational modeling to connect molecular behavior directly to macroscopic material performance.
 
Indeed, spider silk’s performance has puzzled scientists for decades precisely because its constituent proteins begin as a concentrated liquid, often referred to as “silk dope,” before being spun into fibers that combine elasticity and toughness in ways few man-made materials approach. The key insight from the new study is that arginine–tyrosine interactions create clustering behavior during the liquid-to-solid transition, guiding the assembly of nanoscale structures that underpin silk’s exceptional mechanical properties.
 
Such detailed mechanistic insight isn’t merely academic. By uncovering the design principles that nature uses to build spider silk, researchers now have a blueprint for engineering next-generation sustainable materials, from lightweight protective gear and aircraft components to biodegradable medical implants and soft robotics. These applications are only imaginable because computational models allow scientists to test hypotheses in silico before moving to costly and time-consuming experiments.
 
The implications extend beyond materials science. Gregory Holland, co-author from SDSU, noted that the mechanisms observed in silk protein assembly mirror molecular processes seen in other biological systems, including those involved in human health and disease. “What surprised us was how sophisticated the chemistry turned out to be,” he said, suggesting that insights from silk may inform studies of protein phase separation in conditions such as Alzheimer’s disease.
 
For the supercomputing community, this research exemplifies how advanced modeling and simulation are transforming our ability to decode complex biological materials. Supercomputers enable scientists to explore how and why nature optimizes performance at the molecular level, and to translate those insights into engineered solutions that could be more sustainable, resilient, and energy-efficient than current technologies.
 
As computational power continues to grow, researchers anticipate that even more intricate biological materials will yield their secrets to simulation-based science. For now, the decoding of spider silk’s molecular stickers offers a striking example of how supercomputing not only accelerates discovery but also inspires new directions in engineering and materials design.