At SC25, Phison pushes AI storage to Gen5 speeds, brings AI agents to everyday laptops

SuperComputing 2025 (SC25) delivered no shortage of big swings this week, but Phison presented a rare, cohesive vision that extends from the densest enterprise racks to the laptops in classrooms and corporate offices. At booth 4532, the storage leader debuted two new PCIe Gen5 enterprise SSDs, the Pascari X201 and Pascari D201, and ran a live demo of AI agents on an integrated-GPU laptop using its aiDAPTIV+ technology. The message was clear: AI acceleration shouldn't be restricted to high-end GPUs or data center budgets.

PCIe Gen5 Muscle for AI and Cloud

Phison’s new Pascari X201 and D201 drives push Gen5 performance to the edge of the envelope:
  • Up to 14.5 GB/s read, 12 GB/s write
  • Up to 3.3M / 1.05M random read/write IOPS
  • Configurations up to 30.72 TB (X201) and 15.36 TB (D201)
The X201 targets high-intensity applications, including AI training nodes, analytics engines, financial modeling, and HPC workloads. The D201 is designed for hyperscalers and cloud builders who need high density with predictable QoS, particularly for object storage and large-scale database clusters. Both represent the steady march toward AI-first storage design: low latency, deterministic operations, and the throughput needed to saturate GPU clusters.
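
The headline numbers invite a quick sanity check. Below is a back-of-envelope sketch (my own arithmetic, using only the quoted specs and the decimal units SSD vendors use) of how long one full sequential pass over each drive would take at peak read speed:

```python
# Back-of-envelope check using only the quoted specs: how long would one
# full sequential pass over each drive take at peak read bandwidth?
def full_read_seconds(capacity_tb: float, read_gbps: float) -> float:
    """Seconds to stream the whole drive once at the quoted sequential rate."""
    capacity_gb = capacity_tb * 1000  # TB -> GB, decimal units as in SSD specs
    return capacity_gb / read_gbps

x201 = full_read_seconds(30.72, 14.5)   # ~2119 s, roughly 35 minutes
d201 = full_read_seconds(15.36, 14.5)   # ~1059 s, roughly 18 minutes
print(f"X201 full pass: {x201 / 60:.1f} min, D201 full pass: {d201 / 60:.1f} min")
```

At these rates a single drive can restream its entire capacity to a GPU cluster in well under an hour, which is the kind of math behind the "saturate GPU clusters" claim.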

AI Agents on iGPUs, 25× Faster Than Before

The unexpected star of Phison’s booth was a consumer-class laptop demo. With aiDAPTIV+, the system turned an integrated GPU, normally the weak link in AI workflows, into a surprisingly capable AI agent platform.
 
Phison says the tech delivers:
  • Up to 25× faster AI agent performance
  • In one real-world demo, GenAI inference on YouTube video content, a drop in latency from 73 seconds to ~4 seconds
This is significant beyond mere convenience. Universities, IT departments, and early-stage businesses can now conduct meaningful AI experiments using their existing hardware. For students and corporate employees, this indicates a move toward AI agents becoming as commonplace as web browsers or office software.
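
The two figures Phison quotes are consistent with each other: the demo's 73-to-4-second drop works out to roughly an 18× speedup, comfortably inside the "up to 25×" ceiling. A trivial sketch of that arithmetic:

```python
# Sanity-checking Phison's quoted numbers: the 73 s -> ~4 s demo implies
# roughly an 18x speedup, within the "up to 25x" best-case claim.
def speedup(baseline_s: float, accelerated_s: float) -> float:
    """Speedup factor: how many times faster the accelerated run is."""
    return baseline_s / accelerated_s

demo = speedup(73, 4)
print(f"Demo speedup: {demo:.1f}x")  # ~18x
```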

Scaling Toward Extreme Capacity

Phison reminded SC25 attendees that the capacity race is not slowing. The company's Pascari D205V, a 122.88TB E3.L behemoth already shipping to selected OEMs, continues to set the ceiling for PCIe Gen5. Phison confirmed a roadmap path to 245TB, a number that would have sounded like science fiction just a few cycles ago.

Industry Voices at SC25

Michael Wu, GM and President of Phison US, framed the announcement in the larger arc of AI adoption: “Every sector is somewhere on the AI journey… Storage is vital at every stage.”

Why SC25 Cares

SC25 is increasingly the place where the AI stack (compute, networking, storage, and software) gets pressure-tested. Phison’s lineup shows a company positioning itself not just as a NAND supplier but as a critical backbone for AI at every tier:
  • Client: AI agents on iGPUs
  • Enterprise: X201 for training and HPC
  • Cloud/hyperscale: D201 and the ultra-dense D205V series
With shipments of the X201 and D201 headed to enterprise customers by year-end and iGPU systems with aiDAPTIV+ coming in early 2026, the company is clearly betting on a future where AI workloads blur across devices and form factors.

Availability

  • Pascari X201 / D201: Shipping to select enterprise customers and OEMs by end of 2025
  • aiDAPTIV+ iGPU systems: OEM rollouts in early 2026
  • More details at phison.com
Phison didn't just bring new hardware to SC25; they presented a clear vision: AI infrastructure should be fast, scalable, power-efficient, and accessible to everyone, from hyperscale operators to students with a laptop. The future of AI won't be confined to one place, and Phison seems determined to connect it all.

MSI unveils next-gen AI, data center platforms at SC25

 
MSI stepped into the SuperComputing 2025 spotlight this week with a full slate of next-generation server and AI systems, signaling a major escalation in the company’s push into high-performance computing, hyperscale infrastructure, and enterprise AI.
 
At Booth #205, MSI debuted its ORv3 rack solution and a refreshed portfolio of DC-MHS–based compute platforms built in collaboration with AMD, Intel, and NVIDIA. The message was clear: the next era of data centers will be denser, more energy-efficient, and more modular, and MSI plans to be one of the vendors powering that shift.
 
Danny Hsu, General Manager of Enterprise Platform Solutions, framed it plainly: MSI wants to give operators scalable infrastructure that can move as fast as AI models evolve. “Our goal is to deliver scalable, energy-efficient infrastructure that empowers customers to accelerate AI development and next-generation computing with performance, reliability, and flexibility at scale,” Hsu said.

Rack-Scale Ambition: The ORv3 Platform

The star of MSI’s showcase was its ORv3 21-inch, 44OU rack, a fully validated, integrated design built specifically for hyperscale cloud builders. Outfitted with sixteen CD281-S4051-X2 2OU DC-MHS servers, the rack features centralized 48V power, front-facing I/O, and a streamlined thermal design that maximizes CPU, memory, and storage density in every square inch.
 
Each node leverages AMD’s EPYC 9005 processors in a single-socket layout. Per-node, operators get 12 DDR5 DIMM slots and 12 E3.S PCIe 5.0 NVMe bays, providing ample capacity for AI pipelines, large-scale analytics, and bandwidth-intensive cloud workloads.
 
High-Density Compute for the Modern Data Center

MSI also expanded its DC-MHS Core Compute lineup, offering both AMD and Intel variants with TDP envelopes up to 500W. Available in 2U 4-node and 2U 2-node configurations, these systems target high-density environments where rack efficiency is king.
 
On the AMD EPYC side, MSI highlighted two platforms (CD270-S4051-X4 and X2), while Intel Xeon 6 versions (CD270-S3061-X4 and CD270-S3071-X2) bring expanded DDR5 memory and PCIe 5.0 storage options. All share a standardized modular architecture designed to simplify deployment, upgrades, and serviceability.
 
The enterprise-focused “CX” series broadened that theme with higher memory ceilings, extensive PCIe lanes, and configurations optimized for cloud, virtualization, and storage providers. Dual-socket Xeon 6 versions deliver up to 32 DIMM slots in 1U and 2U footprints, a density profile aimed at operators balancing compute with I/O-heavy workloads.

AI Systems Powered by NVIDIA Hopper and Blackwell

With AI dominating both the SC25 conversation and data center budgets, MSI backed up its hardware story with new NVIDIA-powered AI systems. These include MGX-based servers, DGX-class AI stations, and workstation-scale development nodes.
 
The flagship CG481-S6053 and CG480-S5063 4U servers support up to eight dual-width GPUs (up to 600W each), paired with either AMD EPYC 9005 CPUs or Intel Xeon 6 processors. These are built for heavyweight tasks: large language model training, deep learning acceleration, and NVIDIA Omniverse workloads.
 
A compact 2U option, the CG290-S3063, delivers four 600W GPUs in a single-socket Xeon 6 system, aimed at edge-inference clusters and smaller research deployments.
 
To bring AI development directly to the desktop, MSI introduced the AI Station CT60-S8060, a workstation built around NVIDIA’s GB300 Grace Blackwell Ultra Superchip, offering up to 784GB of unified memory. Its pitch: DGX-scale power without the data center footprint.

Why It Matters

SC25 is the annual pulse check for supercomputing, a place where vendors unveil real hardware, not vaporware. MSI’s move signals an intensifying competition among server manufacturers to meet surging AI demand while tackling the constraints everyone feels: power, heat, density, and time-to-deploy.
 
Their approach leans into modularity. DC-MHS standardization, ORv3 rack integration, and MGX compatibility allow operators to build AI-ready data centers faster and adapt them as GPUs evolve.

The broader takeaway is that data centers are shifting from “build once and upgrade later” to “assemble, scale, swap, repeat.” MSI’s portfolio pushes that philosophy from edge to hyperscale.
 
More details, demo videos, and supporting technical resources are available directly from MSI following the SC25 exhibition.

Characteristics of the graphene/In2Se3 heterostructure transport device that shows the spin chirality switch. Credit: Martin Gmitra from the Slovak Academy of Sciences and Marcin Kurpas from the University of Silesia in Katowice.

Supercomputing sheds light on electrically controlling spin currents in graphene

In a European collaboration blending quantum materials science and high-performance computing, researchers have discovered how ferroelectric switching can modulate spin currents in a graphene-based heterostructure, a revelation made possible by supercomputers.

From Charge to Spin: A New Spintronics Platform

The study, "Ferroelectric switching control of spin current in graphene proximitized by In₂Se₃," published in Materials Futures, explores a heterostructure of graphene, a two-dimensional conductor, stacked atop a ferroelectric monolayer of In₂Se₃. The team found that switching the polarization of the In₂Se₃ layer reverses the sign of the charge-to-spin conversion coefficient in the graphene layer, effectively flipping the chirality (spin orientation pattern) of the generated spin current. In one configuration (17.5° twist angle between layers), an unconventional "radial Rashba field" emerged for one polarization direction, a rare phenomenon in planar heterostructures.

Supercomputing: The Hidden Engine

This project would have been impossible without extensive computing power. The researchers combined first-principles calculations (density-functional theory) with tight-binding modelling to capture electronic structure, spin-orbit coupling, ferroelectric polarization effects, and interface proximity influences.
 
Such simulations involve large Hamiltonian matrices, fine k-space sampling, spin-texture mapping, and multiple twist-angle geometries, tasks that scale poorly without parallel, high-performance systems. By leveraging supercomputing clusters, the team was able to:
  • Evaluate both polarization states of the ferroelectric layer;
  • Model two twist angles (0° and 17.5°) to identify emergent fields;
  • Extract charge-to-spin conversion coefficients and the Rashba phase directly from computational data.
These capabilities underline how HPC is no longer just for weather and astrophysics; now it’s central to designing tomorrow’s spintronic devices.
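
For readers curious what "flipping the chirality" means concretely, here is a minimal toy model (my own illustration, not the paper's Hamiltonian): a 2×2 Rashba term whose in-plane spin texture winds around the momentum k, with the sign of the coupling constant standing in for the ferroelectric polarization. Reversing that sign reverses the spin direction at every k:

```python
import numpy as np

# Toy Rashba model: H(k) = alpha * (sigma_x * k_y - sigma_y * k_x).
# The lower band's in-plane spin winds tangentially around k; flipping
# the sign of alpha (our stand-in for reversing the ferroelectric
# polarization) flips the winding chirality.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def spin_of_lower_band(kx, ky, alpha):
    """In-plane spin expectation (<sx>, <sy>) of the lower Rashba band at k."""
    H = alpha * (sx * ky - sy * kx)
    _, vecs = np.linalg.eigh(H)   # eigenvalues ascending: column 0 = lower band
    v = vecs[:, 0]
    return (v.conj() @ sx @ v).real, (v.conj() @ sy @ v).real

# At k along +x the lower-band spin points along +/- y, and reversing
# alpha reverses it:
s_plus = spin_of_lower_band(1.0, 0.0, +0.5)
s_minus = spin_of_lower_band(1.0, 0.0, -0.5)
print(s_plus, s_minus)  # opposite y components
```

The actual study works with first-principles band structures and a radial (not only tangential) Rashba field, but the sign-flip logic is the same: one knob, the polarization, reverses the whole spin texture.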

Why It Matters

Modern electronics are approaching the limits of charge-based logic. Spintronics, using the electron’s spin rather than its charge, promises faster, lower-power, non-volatile devices. The challenge: controllably steering spin currents without bulky magnetic fields.
 
By showing that ferroelectric polarization can electrically flip spin current direction (and spin texture) in graphene, the study opens a pathway to magnet-free, ultra-efficient spin logic devices. In short, you apply a voltage, you flip a spin current, no magnetic coil needed.

A Timely Breakthrough for the HPC World

With the SC25 supercomputing conference opening next week in St. Louis, the research underscores a widening frontier: supercomputers aren’t just solving equations, they’re beginning to decode nature’s design language.
 
Although the study is not confirmed as an official SC25 presentation, its ideas are likely to circulate in hallway conversations, workshops, and poster sessions, where the fusion of physics, simulation, and computing continues to accelerate innovation.

Looking Ahead

While this work is theoretical (computational), the authors propose that the predicted effects "can be experimentally detected" under realistic conditions. The next step involves device fabrication, nanoscale spin current measurements, and benchmarking against conventional spintronic architectures.
 
The larger picture is HPC-driven material discovery. As supercomputers become more powerful and accessible, the timeline from concept to device may shorten, leading to a shift towards compute-to-create workflows, rather than the current synthesize-then-hope approach.
The Large Helical Device (LHD) and the heavy ion beam probe (HIBP) system.

Supercomputers help scientists decode turbulent plasma behaviors in fusion reactors

A new international study, published in Nuclear Fusion and announced through a press release, offers one of the most detailed looks yet at plasma behavior inside fusion reactors, thanks to modern supercomputers. The research highlights the central role of high-performance computing (HPC) in advancing fusion energy science.

A Breakthrough in Plasma Modeling

Japanese researchers led a team that used state-of-the-art numerical simulations to capture how micro-scale plasma turbulence interacts with large-scale flows inside magnetic confinement systems. These interactions have long puzzled physicists because they contribute to unexpected energy losses, undermining reactor performance.
 
The new simulations reveal coupling mechanisms that had not been directly observed before. By resolving turbulence, particle transport, and fast-ion behavior simultaneously, the researchers were able to build a more complete picture of how fusion plasmas evolve under reactor-relevant conditions.
 
According to the study, these insights may guide improvements in the design and operation of future fusion devices.

Powered by Supercomputers

The research leveraged massive computational resources, including GPU-accelerated clusters and petascale CPU systems, to capture plasma behavior across multiple spatial and temporal scales. Advanced techniques, such as domain decomposition, hybrid MPI/OpenMP parallelization, and fine-mesh refinement, enabled the simulation of sub-millimeter turbulence while still modeling the full evolution of a reactor-scale plasma.
 
The authors emphasize that without supercomputer-level performance, such multiscale modeling would be impossible. In essence, HPC is becoming a “virtual reactor,” allowing scientists to test physics theories and device configurations in silico before real-world experiments.
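
The domain-decomposition idea behind such codes can be illustrated without MPI. The sketch below (a single-process stand-in of my own, not the study's code) splits a 1D field into chunks, exchanges one-cell "halos" between neighboring chunks each step, then applies a diffusion stencil locally; production fusion codes perform the same exchange across thousands of MPI ranks:

```python
import numpy as np

# Single-process illustration of domain decomposition with halo exchange:
# each "rank" owns a contiguous chunk and only needs one boundary cell
# from each neighbor to apply a local diffusion stencil.
def halo_exchange_step(chunks, nu=0.1):
    """One explicit diffusion step over a list of contiguous 1D chunks."""
    new_chunks = []
    for i, u in enumerate(chunks):
        # Halo cells: neighbor's edge value, or our own edge at the domain wall
        left = chunks[i - 1][-1] if i > 0 else u[0]
        right = chunks[i + 1][0] if i < len(chunks) - 1 else u[-1]
        padded = np.concatenate(([left], u, [right]))
        new_chunks.append(u + nu * (padded[:-2] - 2 * u + padded[2:]))
    return new_chunks

field = np.zeros(16)
field[8] = 1.0                     # a spike in the middle of the domain
chunks = np.split(field, 4)        # pretend these live on 4 ranks
for _ in range(10):
    chunks = halo_exchange_step(chunks)
result = np.concatenate(chunks)
print(result.sum())  # zero-flux boundaries: total is conserved
```

The decomposed result is bit-identical to running the same stencil on the undivided array, which is the whole point: the physics doesn't change, only where the work happens.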

SC25 in St. Louis

The timing of this publication is noteworthy, as the global supercomputing community will convene next week in St. Louis for SC25, the premier HPC conference. While the study directly relates to high-performance computing and could be discussed informally at SC25, there's no confirmation that it will be officially presented. Nonetheless, the study's themes of extreme-scale computation, energy modeling, and plasma physics align with key SC25 tracks, making it an ideal meeting point for researchers and vendors.

A Step Toward Fusion’s Future

Fusion holds immense promise for clean, abundant energy, but understanding plasma behavior presents a significant hurdle. This research helps bridge the gap between theory and experiment by providing predictive tools that can improve reactor design and operational strategies.
 
With new supercomputing tech, researchers anticipate the ability to simulate entire fusion devices under reactor conditions, potentially expediting the path to practical fusion energy. 
 
For now, this work serves as a compelling illustration of how supercomputing is transforming one of the world’s most challenging scientific frontiers.
The Cocos Islands in the Indian Ocean between Australia and Sri Lanka.

Continents peeling from below: Supercomputers reveal the hidden hand shaping Earth’s oceans

When continents break apart, the effects are not limited to the surface. A gradual revolution is occurring beneath our feet, detectable only with the world's most advanced supercomputers. Researchers from the UK's University of Southampton and Germany's GFZ Helmholtz Centre for Geosciences have discovered that the Earth's continents are "peeling" from below, triggering volcanic activity across the ocean floor. Their recent study suggests that this deep churning of the planet's mantle may be responsible for many of the volcanic islands scattered across our oceans, including the Indian Ocean's Christmas Island seamounts and the Atlantic's Walvis Ridge.

Peeling Continents, Boiling Oceans

The team's simulations, powered by high-performance computing models, demonstrate that as continents stretch and fracture, their thick roots of ancient rock (the subcontinental lithospheric mantle) are eroded by organized "chains" of convective currents. These instabilities act like conveyor belts, transporting chemically enriched material from deep beneath the continents into the oceanic mantle, where it can later erupt as seafloor volcanoes.
 
Over tens of millions of years, this subterranean process moves vast amounts of continental material outward, enriching the mantle in patterns that match the timing and chemistry of known oceanic volcanic provinces. “It’s as if the continents shed their skin into the sea,” said lead author Dr. Tom Gernon of Southampton. “We’ve uncovered a missing piece of Earth’s deep recycling system.”

Supercomputing the Deep Earth

To capture this invisible movement, the researchers relied on ASPECT, a powerful geodynamic modeling tool that simulates rock behavior under extreme pressures and temperatures. These thermomechanical simulations, run on supercomputers in the UK and Germany, tracked the flow of molten rock and heat through the mantle over spans exceeding 100 million years.
 
Such calculations require enormous computational power, similar to that used in climate modeling or astrophysical simulations, because they solve coupled equations of energy, mass, and momentum at fine resolution within a planet-sized domain. The models revealed that continental "peeling" begins within a few million years of tectonic breakup and peaks approximately 50 million years later, a finding that aligns with isotope data from Indian Ocean volcanoes.
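
Those timescales also pass a simple order-of-magnitude test (my own estimate, using a textbook mantle thermal diffusivity rather than anything from the paper): pure conduction across a ~100 km lithospheric root would take hundreds of millions of years, far slower than the few-tens-of-Myr peeling the simulations report, which is why the advective "conveyor belt" currents do the heavy lifting:

```python
# Order-of-magnitude estimate (illustrative, not from the paper): the
# characteristic thermal diffusion time t ~ L^2 / kappa across a
# lithospheric root, using a typical mantle diffusivity.
KAPPA = 1e-6              # thermal diffusivity, m^2/s (textbook mantle value)
SECONDS_PER_MYR = 3.156e13

def diffusion_timescale_myr(length_m: float) -> float:
    """Diffusion timescale t ~ L^2 / kappa, expressed in millions of years."""
    return length_m ** 2 / KAPPA / SECONDS_PER_MYR

print(f"{diffusion_timescale_myr(100e3):.0f} Myr")  # ~317 Myr for 100 km
```

Conduction alone is an order of magnitude too slow to explain a ~50 Myr peak, consistent with the study's emphasis on convective erosion rather than passive cooling.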
 
These insights wouldn’t have been possible without advances in high-performance computing (HPC). This area has been prominently showcased at recent supercomputing gatherings, such as the COP30-linked climate and Earth system sessions in Brazil. As global attention turns toward planetary resilience, HPC has become a bridge between climate science, energy modeling, and now, deep-Earth geodynamics, allowing researchers to model entire planetary systems in silico.

Rethinking Oceanic Volcanism

Traditionally, scientists attributed oceanic volcanism to deep mantle plumes, columns of hot rock rising from near the Earth’s core. But this new study proposes a more surface-linked mechanism: the long-term “convective erosion” of continental roots. It explains why enriched volcanic rocks often appear along continental margins even billions of years after the continents split apart.
 
This finding also has implications for the global carbon cycle, as the peeling and melting of carbon-rich rocks could regulate the release of greenhouse gases from deep within the Earth. It hints at a feedback loop between the planet’s tectonic heartbeat and its atmospheric chemistry, a process both ancient and ongoing.

The New Frontier Beneath Us

The Southampton team's discovery adds a fascinating layer to our understanding of planetary evolution. Beneath the seemingly stable crust, continents are quietly dissolving from below, feeding a slow planetary respiration that shapes the chemistry of oceans, the formation of islands, and perhaps even the stability of climate over eons.
 
It’s a humbling reminder that the ground beneath us is not still, merely patient. And with the help of supercomputers, we’re finally starting to hear its pulse.