Supercomputing drives materials breakthrough for green computing: 3D graphene-like electronic behavior unlocks new low-energy electronics

Marking a major advance in sustainable computing, researchers at the University of Liverpool have developed a groundbreaking three-dimensional material that mirrors the remarkable electronic properties of two-dimensional graphene, while offering the durability needed for practical use.
 
Detailed in the journal Matter, the innovation could enable greener, more energy-efficient electronics, and it underscores the essential role of supercomputing in discovering and designing new materials. The development could reshape the landscape of both high-performance and low-power computing.
 
Graphene is a single layer of carbon atoms organized in a honeycomb pattern. This material has fascinated scientists and engineers due to its exceptional electrical, thermal, and mechanical characteristics. Electrons in graphene act like massless Dirac fermions, which allows for extremely fast electron movement with minimal energy loss. Despite these impressive qualities, applying graphene's unique properties to practical, large-scale devices has faced persistent obstacles: its ultra-thin structure is fragile, hard to incorporate into bulk technologies, and expensive to manufacture at scale.
 
The new study addresses this by demonstrating that hafnium tin (HfSn₂), a fully three-dimensional crystal, can mimic graphene’s fast, two-dimensional electron flow. In the HfSn₂ structure, honeycomb layers are arranged in a special chiral stacking pattern that preserves graphene’s signature electronic behavior, namely high electron mobility with low energy dissipation, even though the material is fully 3D. This behavior is associated with Weyl points in the material’s band structure: points where conduction and valence bands touch, allowing electrons to move with minimal resistance.
 
These insights emerged from a combination of theoretical modeling, crystallographic simulations, and experimental characterization, and could not have been realized without high-performance computational tools. Supercomputers enable researchers to explore how atomic arrangement, chemical bonding, and quantum mechanical effects interplay across multiple length scales, from electrons to crystals, and to identify Weyl electronic states and transport properties that are inaccessible to simpler computational methods.
 
In particular, density functional theory (DFT) and related ab initio simulation frameworks, which are inherently computationally intensive, were crucial in predicting how electrons behave within the 3D honeycomb lattice and how different stacking arrangements influence transport. These simulations, typically run on supercomputing clusters equipped with optimized parallel solvers and high memory bandwidth, allow researchers to map out electronic band structures and isolate topological features such as Weyl points with high precision. Without this scale of computation, evaluating the energetic and structural feasibility of such new materials would be prohibitively slow and far less reliable.
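The band-touching physics at stake can be illustrated with a minimal tight-binding model of a single honeycomb layer. This is a standard textbook construction, not the study's DFT workflow; the hopping value and lattice constant below are conventional illustrative choices. The two bands meet at the K point, the 2D analogue of the band-touching Weyl points discussed above:

```python
import numpy as np

t = 2.7  # nearest-neighbour hopping amplitude in eV (a typical graphene value)
a = 1.0  # lattice constant (arbitrary units)

# Vectors from one sublattice site to its three nearest neighbours
deltas = np.array([
    [0.0, a / np.sqrt(3.0)],
    [a / 2.0, -a / (2.0 * np.sqrt(3.0))],
    [-a / 2.0, -a / (2.0 * np.sqrt(3.0))],
])

def bands(k):
    """Two band energies E(k) = -t|f(k)| and +t|f(k)| of the honeycomb model."""
    f = np.sum(np.exp(1j * (deltas @ k)))
    return -t * abs(f), t * abs(f)

# At the K point the two bands touch (a Dirac point)
K = np.array([4.0 * np.pi / (3.0 * a), 0.0])
lower, upper = bands(K)
print(f"gap at K:     {upper - lower:.6f} eV")  # ~0: bands touch

lower_g, upper_g = bands(np.array([0.0, 0.0]))
print(f"gap at Gamma: {upper_g - lower_g:.3f} eV")  # 6t: gapped away from K
```

At K the phase factors 1 + e^{2πi/3} + e^{-2πi/3} sum to zero, so the gap closes there while remaining large elsewhere in the Brillouin zone.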
 
The ability to use supercomputer-driven simulations to screen candidate materials accelerates the discovery process dramatically. Instead of relying solely on costly and time-consuming experimental synthesis of countless samples, researchers can now refine materials candidates through in silico modeling, identifying promising structures that combine desired electronic properties with robustness and environmental resilience.
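The screening workflow amounts to filtering simulated candidates against target properties. A toy sketch, with made-up formulas, property values, and thresholds chosen purely for demonstration:

```python
# Hypothetical candidate materials with properties a DFT screen might predict;
# all entries and thresholds here are invented for illustration.
candidates = [
    {"formula": "A2B",  "band_gap_eV": 0.0, "formation_eV_per_atom": -0.4},
    {"formula": "AB3",  "band_gap_eV": 1.2, "formation_eV_per_atom": 0.1},
    {"formula": "A3B2", "band_gap_eV": 0.0, "formation_eV_per_atom": -0.2},
]

# Keep gapless (semimetallic) candidates that are also thermodynamically
# stable (negative formation energy per atom)
hits = [c["formula"] for c in candidates
        if c["band_gap_eV"] == 0.0 and c["formation_eV_per_atom"] < 0.0]
print(hits)  # -> ['A2B', 'A3B2']
```

In practice each property lookup is itself an expensive ab initio calculation, which is why the filtering step is worth running at supercomputer scale.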
 
Why does this matter for the green computing agenda? Modern computing systems, from mobile devices to data centers, consume vast amounts of energy. Next-generation logic and spintronic devices (which exploit electron spin as well as charge) require materials that combine low-energy electronic transport with stability under operational conditions. A 3D material that mimics graphene’s electron transport while being easier to integrate into conventional device architectures could lead to significantly lower energy consumption in future information processing and memory technologies, directly addressing sustainability challenges in both artificial intelligence and high-performance computing sectors.
 
Moreover, supercomputing plays a central role beyond discovery; it enables multiscale modeling that connects atomic-scale electronic behavior with device-level performance predictions. By integrating quantum mechanical simulations with larger-scale finite-element and mesoscopic models, researchers can assess how new materials will behave under real operational loads, including temperature variation, stress, and electron-phonon interactions, before ever fabricating a prototype.
 
The discovery of HfSn₂ highlights a compelling convergence of materials science, quantum physics, and high-performance computing. Together, these disciplines are enabling new approaches to energy-efficient electronics. As researchers increasingly rely on supercomputing resources to navigate complex materials landscapes, the pace of breakthroughs aimed at reducing the environmental footprint of computing is expected to accelerate, pointing toward a more sustainable and environmentally responsible digital infrastructure.

Supercomputing’s next frontier: NVIDIA, CoreWeave unite to build the AI factories of tomorrow

In a defining moment for the high-performance computing (HPC) and artificial intelligence (AI) landscape, NVIDIA and CoreWeave have announced an expanded collaboration to accelerate the construction of massive AI factories, purpose-built data centers optimized for large-scale AI workloads. This partnership marks a significant leap forward for the supercomputing community, combining cutting-edge hardware, software innovation, and strategic infrastructure expansion to meet the growing demand for AI compute resources.
 
At the heart of the announcement is a $2 billion investment by NVIDIA in CoreWeave’s Class A common stock, underscoring NVIDIA’s confidence in CoreWeave’s strategy and setting the stage for an ambitious build-out of more than 5 gigawatts of AI-optimized compute capacity by 2030. These facilities, often referred to as AI factories, are expected to become the backbone of next-generation AI research, training, and deployment, offering unprecedented access to accelerated computing for enterprises, startups, and scientific institutions alike.
 
This deepening partnership goes beyond financial backing. Under the expanded agreement, CoreWeave will adopt NVIDIA CPU and storage platforms and deploy multiple generations of NVIDIA accelerated computing architectures across its cloud infrastructure, including future innovations such as the Rubin AI platform, Vera CPUs, and advanced BlueField storage systems. CoreWeave’s purpose-built software stacks, such as CoreWeave Mission Control and its reference architectures, will be jointly tested and validated to ensure seamless performance at scale.
 
For the supercomputing community, this represents more than a business transaction; it heralds the maturation of an ecosystem where dense GPU clusters, optimized interconnects, and advanced orchestration software come together to deliver supercomputing-class performance for AI workloads. These AI factories will support ultra-large neural network training, complex simulations, and inference tasks that push the limits of parallel processing and memory bandwidth, work that would be inconceivable without HPC-grade infrastructure underpinning the operations.
 
CoreWeave’s CEO, Michael Intrator, encapsulated this vision succinctly in his “The Year AI Gets to Work” blog post: the era of AI is no longer about possibility but about operating at global scale, powering real-world impact across industries and scientific fields. In his reflection, Intrator emphasized that AI has crossed a crucial threshold: the challenge has shifted from “what’s possible” to “how do we deliver it everywhere it’s needed?” This “working” phase requires infrastructure that can keep pace with relentless innovation, and that is exactly what the expanded collaboration with NVIDIA seeks to enable.
 
What makes this partnership especially noteworthy for HPC practitioners is the tight integration of evolving hardware platforms with cloud-native supercomputing architectures. CoreWeave has been among the first cloud providers to deploy NVIDIA’s advanced GPU platforms, such as the GB200 NVL72 systems, at scale, demonstrating that purpose-built AI infrastructure can rival traditional supercomputer installations in both performance and flexibility. These deployments exemplify how the modern supercomputing stack is increasingly GPU-centric, designed to support massive parallel workloads with efficiency and resilience.
 
Moreover, the collaboration underscores a broader industry trend: the convergence of HPC and AI infrastructure, where the traditional boundaries between scientific computing, enterprise AI, and cloud-native services continue to blur. The AI factories envisioned by NVIDIA and CoreWeave will serve not only core AI model training and inference but also data-intensive simulation tasks, real-time reasoning engines, and agentic AI, all workloads that demand HPC-level compute, networking, and orchestration.
 
For the supercomputing community, this development is inspirational on multiple fronts. It validates the central role of accelerated computing architectures in driving the next wave of AI and scientific discovery. It illustrates how deep collaboration between hardware innovators and infrastructure builders can unlock new levels of performance and accessibility. And it signals that the age of supercomputers is expanding from traditional national-lab behemoths into a distributed ecosystem of cloud-native AI super-infrastructure that anyone with visionary applications can tap into.
 
As we enter the AI era, our ability to construct, expand, and make these AI factories widely accessible will shape the years to come. The NVIDIA and CoreWeave partnership stands as a model for realizing these remarkable opportunities.

Supercomputing advances the quest to resolve the Hubble tension in cosmology

In a significant step toward solving a longstanding puzzle in cosmology, a team led by Simon Fraser University is leveraging supercomputing power to investigate the Hubble tension, a paradox at the core of modern astrophysics that questions our grasp of the universe’s expansion. Their latest findings merge creative theoretical perspectives with sophisticated numerical simulations, suggesting that primordial magnetic fields may be crucial in reconciling conflicting measurements of the cosmic expansion rate. Importantly, these advances were only possible thanks to state-of-the-art supercomputing infrastructure.
 
The Hubble tension refers to the persistent discrepancy between two independent methods of measuring the rate of expansion of the universe. Local measurements using Type Ia supernovae and other distance indicators yield a higher value for the Hubble constant (H₀) than estimates derived from the cosmic microwave background, the afterglow of the Big Bang, as observed by missions such as Planck. This mismatch has challenged the standard cosmological model (ΛCDM) and inspired a plethora of hypotheses that require rigorous theoretical and numerical assessment.
 
In the new study, the research team proposes that primordial magnetic fields, tiny magnetic fields present in the early universe, could have subtly altered the physics of recombination, the epoch when electrons and protons first combined to form neutral atoms. This alteration affects the interpretation of the cosmic microwave background and, consequently, inferences about the Hubble constant. If confirmed, the existence and influence of such fields would not merely ease the tension between different measurements; they could also illuminate the origin of cosmic magnetism observed throughout galaxies and intergalactic space.
 
However elegant the theory, testing it against the wealth of cosmological data requires formidable computational effort. Over the past three years, the international collaboration, including SFU’s Levon Pogosian, Karsten Jedamzik, Tom Abel, and Yacine Ali-Haimoud, has utilized SFU’s Cedar supercomputer and its successor, Fir, to run large-scale simulations of recombination processes under various magnetic field scenarios. These simulations incorporate the physics of the early universe at high resolution and are used to generate predicted observational signatures that can be directly compared against data from the Hubble Space Telescope, Planck, and ground-based observatories.
 
Supercomputing plays an indispensable role in this endeavor. The complex dynamics of recombination and its imprint on cosmological observables involve solving coupled systems of equations that govern plasma physics, radiative transfer, and statistical inference. By breaking down these calculations into parallel tasks, HPC systems such as Cedar and Fir allow researchers to execute large parameter sweeps and statistical fits that would otherwise take prohibitively long on conventional machines. The result is a computational feedback loop in which simulations refine theoretical models, which in turn guide the next generation of simulations.
 
According to Pogosian, “We wouldn’t have been able to carry out our research without the supercomputer. It was crucial for our tests and calculations.” The ability to process vast datasets in parallel not only saves time but dramatically expands the scope of inquiry, enabling tests of subtle physical effects in regimes where analytical approximations fail.
 
The simulations have yielded encouraging outcomes: the primordial magnetic field hypothesis “survives the most detailed and realistic tests available today,” and the work provides clear targets for future observational campaigns. In the coming years, next-generation observatories and more refined simulations will be key to determining whether these ancient magnetic fields indeed influenced the evolution of the early universe.
 
For the supercomputing community, this research embodies the inspirational synergy between numerical simulation and fundamental physics. Here, HPC is not a mere amplifier of computational throughput; it is an enabler of discovery, allowing scientists to probe phenomena at the intersection of theory and observation. As cosmologists continue to confront deep questions about the universe’s origin, composition, and fate, supercomputers like Cedar and Fir stand at the forefront of a new era in astrophysical research.

Supercomputers power evolutionary insight: From house sparrows to conservation strategies

Amidst growing concerns over biodiversity loss and environmental change, scientists are employing advanced computational methods to reveal the genetic and evolutionary factors that contribute to species' resilience. At the Norwegian University of Science and Technology (NTNU), researchers are at the forefront of this movement, utilizing decades of ecological data on house sparrows in northern Norway and harnessing the powerful computational capabilities of NTNU’s flagship supercomputer, IDUN. These efforts not only enhance our knowledge of evolutionary dynamics in wild populations but also provide robust quantitative tools that could inform conservation strategies for a diverse range of species.
 
House sparrows (Passer domesticus), though ubiquitous across much of the world, present a compelling model for studying evolution in fragmented, wild populations. Along the coast of Helgeland, archipelagos of small islands have been the site of continuous biological monitoring for over three decades. Biologists have meticulously recorded the life histories, from birth to death, of tens of thousands of individual sparrows, amassing an unparalleled dataset of genetic, morphological, and ecological measurements.
 
In a recent study published in Evolution, NTNU researchers applied a sophisticated statistical method known as genomic prediction (GP) to this extensive dataset, aiming to assess the accuracy of predicting genetic traits across distinct wild populations. Although widely used in agriculture and breeding programs, genomic prediction has rarely been applied within the context of wild populations due to the complexity and scale of the data.
 
Where observational fieldwork leaves off, supercomputing fills the gap. Kenneth Aase, a Ph.D. research fellow at NTNU’s Department of Mathematical Sciences, emphasizes that testing model assumptions and running high-dimensional simulations requires computational resources capable of handling large datasets and complex statistical models. For the most challenging computations in his analyses, Aase turns to IDUN, NTNU’s powerful HPC system, enabling large-scale simulations and hypothesis testing that would be infeasible on standard computing platforms.
 
Supercomputers such as IDUN provide not only raw processing power but also the ability to manage multifactorial models involving hundreds of thousands to millions of genetic markers, environmental variables, and phenotypic traits. This capability enables researchers to simulate the interaction of genetic variation and environmental pressures over time, a crucial step in understanding evolutionary trajectories in fluctuating habitats.
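A minimal sketch of genomic prediction, here as ridge regression on simulated marker data; the sizes, seed, and effect model are invented for illustration, and the NTNU analyses use far richer statistical models and real genotypes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_markers = 400, 100, 2000  # invented sizes for illustration

# Simulated genotypes (0/1/2 allele counts) and a sparse set of true effects
X = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
beta = np.zeros(n_markers)
causal = rng.choice(n_markers, size=50, replace=False)
beta[causal] = rng.normal(0.0, 0.5, size=50)
y = X @ beta + rng.normal(0.0, 1.0, size=n_train + n_test)  # genetics + noise

X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train:], y[n_train:]

# Ridge estimate of marker effects: beta_hat = (X'X + lambda*I)^(-1) X'y
lam = 100.0
beta_hat = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_markers),
    X_train.T @ y_train,
)

# Predictive accuracy: correlation of predicted and observed phenotypes
r = np.corrcoef(X_test @ beta_hat, y_test)[0, 1]
print(f"held-out prediction accuracy (r): {r:.2f}")
```

The linear solve scales cubically with the number of markers, which is one reason real analyses with hundreds of thousands to millions of markers call for HPC resources.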
 
The insights emerging from this work extend far beyond the sparrow populations themselves. By evaluating how genomic prediction performs across separated island populations, the researchers revealed limitations and opportunities in applying such models to wild species with distinct genetic backgrounds. These findings inform not only evolutionary biology but also conservation strategies for species facing rapid environmental change.
 
Crucially, the computational framework developed and tested with IDUN simulations lays the groundwork for broader applications. The GPWILD project, funded by a European Research Council grant, aims to generalize these methods to other species, including Svalbard reindeer, Scottish deer, and arctic foxes, each with unique evolutionary dynamics and conservation challenges.
 
As climate change and habitat loss continue to exert pressure on wild populations globally, quantitative tools that couple genomic data with supercomputing-enabled modeling become indispensable. They allow scientists to evaluate adaptive potential, predict responses to environmental stressors, and identify populations at greatest risk of decline, all through simulation frameworks that capture the complex interplay of genetics and ecology.
 
For SC Online readers, the NTNU house sparrow initiative highlights a key insight: supercomputers now play a pivotal role beyond physical sciences and artificial intelligence, serving as powerful catalysts in evolutionary biology and conservation research. By merging decades of detailed ecological data with high-performance computing simulations and advanced statistical models, scientists are forging innovative approaches to better understand and safeguard the natural world amid rapid global change.

Supercomputing accelerates breakthroughs in diabetes drug discovery

Showcasing the transformative impact of high-performance computing on biomedical research, scientists at The Herbert Wertheim UF Scripps Institute for Biomedical Innovation & Technology have leveraged the HiPerGator supercomputer to fast-track the discovery of new treatments for Type 2 diabetes. By employing advanced computational simulations, their research is overcoming some of the toughest challenges in drug design, reducing development timelines, and significantly improving predictive accuracy at the earliest stages.
 
Type 2 diabetes affects tens of millions of people worldwide and is characterized by the body’s reduced sensitivity to insulin, a hormone essential for glucose metabolism. Current treatment options, while effective for some patients, carry limitations and significant side effects, particularly for individuals with chronic kidney disease. Molecular biologist Patrick Griffin, Ph.D., and his team set out to design compounds that improve insulin sensitivity by modulating a complex cellular protein known as PPAR gamma, a “master regulator” of fat cell and insulin metabolism that has long eluded safe, effective therapeutic targeting.
 
Crucially, the team integrated multiple technologies in their workflow, combining biochemical assays, structural analyses, and high-fidelity molecular simulations performed on HiPerGator, one of academia’s most powerful supercomputers. These simulations allowed researchers to model the dynamic motion and flexibility of PPAR gamma when bound to potential therapeutic compounds, yielding insights that would be exceedingly difficult to obtain through laboratory experiments alone.
 
Molecular dynamics simulations are indispensable tools in modern drug discovery. For this project, a single 100-nanosecond simulation run on HiPerGator required approximately six hours, and with 26 candidate compounds and three replicates for each, the total compute time approached 20 days of continuous processing. This illustrates not only the computational intensity of structure-based drug design but also the indispensable role of HPC in making such calculations feasible.
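The quoted compute budget follows from simple arithmetic on the figures in the article:

```python
hours_per_run = 6                   # ~6 hours per 100 ns trajectory on HiPerGator
runs = 26 * 3                       # 26 candidate compounds x 3 replicates = 78 runs
total_hours = runs * hours_per_run  # 468 hours of continuous processing
print(f"{total_hours} hours = {total_hours / 24:.1f} days")  # -> 468 hours = 19.5 days
```

The runs are independent, so on a large GPU cluster they can execute concurrently, collapsing nearly 20 days of sequential compute into a much shorter wall-clock time.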
 
Without access to a high-performance infrastructure like HiPerGator, such simulations could take months or longer on conventional computing systems, a pace that stands at odds with the urgency of unmet medical needs. HiPerGator’s vast array of CPU and GPU resources provides the parallel processing capabilities necessary to execute numerous complex simulations concurrently, enabling researchers to explore multiple molecular interactions and conformations in a compressed timeframe.
 
Beyond accelerating individual simulation runs, supercomputing enables scientists to adopt iterative, data-driven design strategies. By rapidly simulating how different chemical modifications influence protein dynamics, researchers can refine their hypotheses and prioritize the most promising compounds for subsequent experimental validation. This creates a computational feedback loop that bridges theory and laboratory work, ultimately streamlining the early phases of drug development.
 
The implications of this work extend well beyond diabetes. The framework established by Griffin’s team, integrating structural characterization with HPC-driven simulations and biological testing, provides a transferable blueprint for other drug discovery challenges, particularly those involving “difficult” signaling proteins with complex, multifaceted roles in human physiology.
 
As supercomputing resources such as HiPerGator evolve, with increased core counts and architectures tailored for scientific modeling and artificial intelligence, their impact on biomedical innovation is set to expand dramatically. For diseases that have long defied conventional treatments, advanced computational power now opens a new frontier, enabling researchers to test hypotheses in silico with speed and precision previously unimaginable.
 
For SC Online readers, this story underscores a clear reality: supercomputers are no longer just tools of physics, climate, or astrophysics research; they have become indispensable engines of discovery in biology and medicine. By enabling detailed simulations that inform experimental science, HPC platforms like HiPerGator are helping transform the pace and promise of drug discovery for diseases that affect millions worldwide.