AI rides into the arena: how code is reimagining rodeo

Edge-AI meets spurs, saddles

Palantir Technologies, together with TWG AI and backed by Teton Ridge, is launching a bold experiment that brings real-time artificial intelligence and computer vision into the dusty, data-scarce world of rodeo. This week, they announced a partnership with NVIDIA to deploy “edge AI” systems at live rodeo venues.
 
Instead of streaming raw video to the cloud and waiting, the new system processes footage on-site, using NVIDIA’s Holoscan infrastructure and powerful RTX PRO 6000 Blackwell GPUs, enabling lightning-fast analytics.
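The on-site pattern can be sketched in a few lines: run inference next to the camera and ship only compact summaries upstream. This is purely illustrative, with a stand-in `detect_motion` function in place of a real vision model; it is not the actual Palantir/NVIDIA pipeline.

```python
# Minimal sketch of the edge-AI pattern: analyze frames locally,
# send only small per-ride summaries upstream instead of raw video.
# (Illustrative only; `detect_motion` stands in for a real vision model.)

from statistics import mean

def detect_motion(frame):
    """Stand-in for an on-device vision model: returns a motion score."""
    return sum(abs(p) for p in frame) / len(frame)

def process_ride(frames, threshold=0.5):
    """Run inference on-site; return only a compact summary dict."""
    scores = [detect_motion(f) for f in frames]
    return {
        "frames": len(frames),
        "mean_motion": mean(scores),
        "active_frames": sum(s > threshold for s in scores),
    }

# A "ride" of synthetic frames (each frame: a list of pixel deltas)
ride = [[0.1, 0.2], [0.9, 0.8], [0.7, 0.6]]
print(process_ride(ride))
```

The payoff of the design is in the last line: what leaves the venue is a handful of numbers, not gigabytes of video.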
 
In effect, the rodeo arena becomes a living lab: horses, riders, and bulls, all tracked not just by human judges or spectators, but by silicon and algorithms.
 

From past scores to live feedback

The project isn’t starting from scratch. Teton Ridge and its partners aggregated years of historical data: ride times, animal performance, and rider stats across different rodeo disciplines. Using Palantir’s Foundry and Artificial Intelligence Platform (AIP), the collaborators trained computer-vision models to interpret each ride, detecting motion, evaluating interactions between human and animal athletes, and exposing biomechanical and performance insights invisible to the naked eye.
 
What this means: Instead of relying solely on judges or memory, rodeo organizers and coaches can tap into a data-rich backend that dissects every gallop, pivot, and buck in near real-time.
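As one concrete, hypothetical example of the kind of metric such a backend might expose, a rider's torso lean can be derived from just two tracked keypoints (real pose models track many more joints; the keypoint names here are assumptions for illustration):

```python
# Toy biomechanical metric: rider torso lean angle from two pose keypoints.
# (Hypothetical keypoints; real vision systems track full skeletons.)

import math

def lean_angle(hip, shoulder):
    """Angle of the hip->shoulder segment from vertical, in degrees.
    Positive = leaning forward (+x), negative = leaning back (-x)."""
    dx = shoulder[0] - hip[0]
    dy = shoulder[1] - hip[1]
    return math.degrees(math.atan2(dx, dy))

# Upright rider: shoulder directly above hip -> 0 degrees
print(lean_angle((0.0, 0.0), (0.0, 1.0)))
# Leaning back at 45 degrees
print(lean_angle((0.0, 0.0), (-1.0, 1.0)))
```

Tracked over every frame of a ride, a time series of angles like this is exactly the sort of signal that could reveal posture habits no judge could quantify by eye.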
 
According to reporting in Fast Company, this isn’t just a novelty; it reflects a broader push by Teton Ridge to transform one of America’s oldest sports through AI.

Why AI may change the rodeo game

  • Performance optimization for cowboys and cowgirls
    Algorithms can quantify subtle motion: body posture, reaction time, animal-rider dynamics. Over time, aggregated analytics might highlight training blind spots or ideal riding techniques.
  • Animal-athlete safety & welfare
    Tracking animal behavior and movement could help veterinarians, trainers, and event organizers detect stress or injury risks, giving rodeo a more humane, data-backed side.
  • Enhanced fan experience & broadcasting
    Real-time stats and analytics, delivered as overlays during broadcasts or on arena jumbotrons, bring rodeo into the era of immersive sports viewing. This aligns with broader trends of AI reshaping sports media and fan engagement.
  • Validating tradition through modern measurement
    Rodeo has always thrived on tradition, intuition, and human judgment. Now AI introduces a layer of objective data, a way to measure excellence and performance beyond lore and anecdotes.

Challenges and questions: reality isn't a clean Git push

This isn't Hollywood. Implementing real-time AI in the rodeo world will bump against real constraints:
  • Edge-AI hardware in dusty, unpredictable arenas may face connectivity, maintenance, or latency challenges. Running GPUs under such conditions isn’t trivial.
  • Data fairness and animal welfare: Introducing analytics could shift the spotlight. Will riders, trainers, or animals be pressured into chasing numbers rather than safety or tradition?
  • Cultural pushback: Rodeo is deeply rooted in heritage; adding algorithmic scrutiny might ruffle feathers among purists who believe in gut, instinct, and human judgment over code.

The long view, rodeo 2.0

Those dusty arenas, once reserved for tradition and adrenaline, may soon host another kind of spectacle, data + performance + insight. With Palantir, TWG AI, and NVIDIA building the infrastructure, and Teton Ridge investing in the vision, rodeo could evolve into one of the first “high-tech frontier sports.”
 
Watching a cowboy ride? Soon, you might also see real-time stats on posture, force, animal response, and analytics dashboards powered by edge AI. Maybe one day you’ll even watch a chatbot co-commentate a bull ride.
 
It’s the Wild West meets high tech. And just like that, the future of rodeo looks like code riding sidesaddle with tradition.

Scholars harness supercomputers to peer inside black holes, through code, not telescopes

A team of computational astrophysicists has broken new ground, using the planet's most powerful supercomputers to simulate, in full fidelity, how matter spirals into a black hole and lights up in a blaze of radiation. Their results, published on December 3, 2025, by the Institute for Advanced Study (IAS) and the Flatiron Institute, deliver what may be the most detailed, realistic model yet of "luminous black hole accretion."

From Toy Models to Full-Blown Virtual Realities

For decades, astrophysicists have studied black hole accretion, the process by which gas, dust, and other matter fall into black holes, using simplified models. These toy-model approximations treated radiation as if it were a fluid, glossing over the real physics of how light moves through warped spacetime around a black hole.
 
Thanks to a new computational algorithm coupled with access to exascale-class supercomputers, namely Frontier at Oak Ridge and Aurora at Argonne, the researchers directly solved the full radiation-transport equations under general relativity, without simplifying assumptions.
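For orientation, the textbook flat-space form of the radiation-transport problem reads as follows; the paper's general-relativistic formulation is far more involved, transporting intensity along curved null geodesics rather than straight rays:

```latex
% Flat-space radiative transfer along a ray (schematic textbook form):
\frac{dI_\nu}{ds} = j_\nu - \alpha_\nu I_\nu
% I_nu: specific intensity, j_nu: emissivity, alpha_nu: absorption
% coefficient. In general relativity, the Lorentz-invariant quantity
% I_nu / nu^3 is transported along null geodesics of curved spacetime.
```

This is a sketch for intuition, not the exact system solved in the paper, which also couples the radiation field to magnetized gas dynamics.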
 
Lead author Lizhong Zhang describes it as "observing" black hole behavior not through telescopes, but through the computer, effectively creating a digital observatory of regions impossible to image directly.

What the Simulations Reveal

  • The simulations show that, even in a radiation-dominated, highly turbulent environment, matter forms a dense, thin thermal disk near the black hole, embedded inside a magnetically dominated envelope. The envelope appears to stabilize the system, a surprising sign of structural order emerging from chaos.
  • Around the disk, the model captures winds and sometimes powerful jets: outflows of matter and energy that match what astronomers see in real systems like ultraluminous X-ray sources and X-ray binaries.
  • When the team compared the simulated radiation spectra to real observations, the match was strong. That suggests the simulation is more than theoretical; it may faithfully represent how black holes behave in nature.

Why Supercomputers Were Critical

Modeling a black hole's accretion in full detail is computationally brutal. Gravity warps spacetime (general relativity), matter behaves under magneto-hydrodynamics (MHD), and radiation interacts with gas, all tightly coupled in nonlinear, dynamic ways. Solving that in 3D over time requires billions of calculations per second and software optimized down to the metal.
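A toy two-variable system hints at why the coupling is delicate: gas and radiation exchange energy through a source term, and the exchange must conserve total energy. This sketch is emphatically not the team's scheme (real radiation-MHD codes solve a stiff, nonlinear version implicitly, coupled to MHD and gravity), but it shows the basic structure:

```python
# Toy gas-radiation energy coupling (not the actual GRMHD algorithm):
# gas energy e_gas and radiation energy e_rad relax toward equilibrium
# through a linear exchange term, conserving their sum at every step.

def relax(e_gas, e_rad, kappa=5.0, dt=0.001, steps=5000):
    """Explicitly integrate the exchange; dt must stay small or the
    scheme blows up -- the toy version of the stiffness problem."""
    for _ in range(steps):
        flux = kappa * (e_gas - e_rad)   # energy flowing gas -> radiation
        e_gas -= flux * dt
        e_rad += flux * dt
    return e_gas, e_rad

g, r = relax(10.0, 2.0)
print(g, r)  # both relax toward the mean while g + r stays fixed
```

When the coupling constant is large (as it is near a luminous black hole), explicit steps like these become impractically small, which is one reason such simulations demand both clever algorithms and exascale machines.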
 
The combination of cutting-edge algorithm design (led by co-authors such as Christopher White and Patrick Mullen) with the brute force of exascale machines allowed the team to finally do this computation, the kind of problem that would have been intractable a decade ago.

What's Next: Cosmic Simulations Go Big

This is just the first in a series of papers. The team plans to apply their model to a wider range of black hole systems, from stellar-mass holes (a few times the mass of the Sun) to the supermassive giants that lurk at the centers of galaxies.
 
If successful, this work could reshape our understanding of how black holes grow and affect their surroundings, from the jets they shoot out to the winds they drive, and how they light up in X-rays and other wavelengths.

Big Picture: When Code Becomes Our Telescope

We're living in an era where code + supercomputing = cosmic telescope. With enough computational power and smart algorithms, researchers can simulate regions of the universe that not even our most advanced telescopes can resolve. The result is a kind of synthetic observation, a digital microscope turned on the universe's darkest objects.
 
It's a perspective shift: rather than just watching the universe, we're now capable of recreating pieces of it in silico, exploring how extreme gravity, magnetism, and radiation dance together around black holes.
 
The cosmic circus is no longer only for telescopes; now, supercomputers get front-row seats.

Seeing the unseeable: how AI and supercomputers provide a clearer view of black holes

The world gasped when the first image of a black hole, M87* at the heart of the galaxy Messier 87, was released in 2019. The hazy ring with a dark core confirmed what Einstein predicted decades prior: black holes cast "shadows" where no light escapes.
 
However, for scientists at the Perimeter Institute for Theoretical Physics (PI) in Canada, this image was merely the starting point. Now, thanks to supercomputing and the rise of artificial intelligence (AI), researchers are uncovering layers of cosmic fog, enabling them not only to see black holes but also to understand their dynamics with unprecedented precision.

From fuzzy ring to data-rich portraits

The tool doing much of this heavy lifting is a machine-learning model developed by PI researcher Avery Broderick and his team. Their system, called ALINet, can generate billions of candidate images, a thousand times faster than traditional methods, enabling scientists to compare real observational data against thousands of theoretical black hole models in a matter of hours.
 
Traditionally, interpreting data from the Event Horizon Telescope (EHT) meant painstakingly reconstructing images, then matching them by hand to models of how black hole plasma behaves. That process could take weeks, even on powerful hardware. Now, with ALINet, what once took a month can be achieved in a day, using a fraction of the computational cores.
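The underlying idea of scoring observations against a bank of candidate models can be sketched with a simple chi-square loop. This is a toy stand-in: ALINet itself is a neural network that generates and compares images, not a brute-force scan, and the model names below are invented for illustration.

```python
# Toy model-bank comparison: score observed data against candidate
# model predictions and keep the best fit. (Illustrative only; the
# real pipeline compares full images via a learned model.)

def chi_square(observed, model, sigma=1.0):
    """Sum of squared residuals, scaled by an assumed uncertainty."""
    return sum((o - m) ** 2 for o, m in zip(observed, model)) / sigma**2

def best_model(observed, model_bank):
    """Return (name, score) of the candidate minimizing chi-square."""
    return min(
        ((name, chi_square(observed, m)) for name, m in model_bank.items()),
        key=lambda pair: pair[1],
    )

observed = [1.0, 2.1, 2.9]
bank = {
    "slow_spin": [1.0, 2.0, 3.0],   # hypothetical model predictions
    "fast_spin": [0.5, 1.5, 2.5],
}
print(best_model(observed, bank))
```

The speedup ALINet delivers comes from replacing the expensive "generate each candidate" step with a fast neural generator, so loops like this can sweep billions of candidates instead of thousands.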

Denoising the cosmos, even through the galactic haze

The challenge isn’t just speed. The center of our own galaxy, home to Sagittarius A*, the supermassive black hole at the Milky Way’s heart, lies behind a dense curtain of interstellar gas, dust and turbulent plasma. That material distorts radio waves, blurring and scattering the signals that astronomers receive.
 
Broderick’s team has now trained neural networks to perform “de-scattering,” essentially deblurring cosmic interference and letting scientists peer through the galactic veil. Early results published in 2025 show this can almost completely reverse the scattering at the EHT’s operational wavelength, offering a much clearer view of Sgr A*.
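A grossly simplified analogue: if the distortion were a known, deterministic convolution, it could be inverted exactly, sample by sample. Interstellar scattering is stochastic, which is why trained networks are needed in practice, but the toy version conveys the goal:

```python
# Toy "de-scattering": distort a signal with a known one-tap echo,
# then invert the distortion recursively. (A vastly simplified analogue;
# real interstellar scattering is random and needs learned models.)

def blur(signal, a=0.5):
    """Distort: each sample picks up a fraction of its predecessor."""
    out, prev = [], 0.0
    for x in signal:
        out.append(x + a * prev)
        prev = x
    return out

def deblur(blurred, a=0.5):
    """Invert the known distortion, recovering samples one by one."""
    out, prev = [], 0.0
    for y in blurred:
        x = y - a * prev
        out.append(x)
        prev = x
    return out

clean = [1.0, 0.0, 2.0, 1.0]
print(deblur(blur(clean)) == clean)  # True: known distortions invert cleanly
```

The hard part of the real problem is precisely that the "blur" is not known in advance; the neural network's job is to learn an approximate inverse from simulated scattering.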

Supercomputing + AI: a combo that changes the game

This isn’t just about pretty pictures. Supermassive black holes, M87*, Sagittarius A*, and many more, are extreme gravitational laboratories. Understanding their behavior helps physicists probe deep questions: how matter behaves under extreme gravity, how space–time warps, how quantum effects might play out in the most intense conditions in the cosmos.
 
In fields beyond imaging, AI + high-performance computing (HPC) is already making waves. Teams have used distributed AI models running on supercomputers to detect gravitational wave signals from colliding black holes, and do so far faster than older methods. The success of such efforts shows that combining AI with raw compute scale isn’t just clever, it’s essential for the next frontier of astrophysics.

Why this matters, and why now

With tools like ALINet, astronomers can now treat black hole observations as data-rich investigations rather than fuzzy guesses. Instead of asking "Does this look like a ring?", scientists can now ask, "What spin, mass, and plasma configuration best matches the data?" They can also get answers rapidly, enabling more frequent updates as new observations come in.
 
For humanity, this means black holes, once relegated to science fiction and unreachable math, are becoming real, measurable entities. AI and supercomputers are turning the unknown into the known.
 
As Broderick puts it, this is "enabling technology," transforming a month-long computational slog into a swift, repeatable analysis. The cosmos just got sharper.

Big numbers, big bets: Dell scales up HPC for the AI era

Dell Technologies (Dell) posted strong third-quarter results for fiscal 2026, with $27.0 billion in revenue, up 11% year-over-year, and diluted EPS of $2.28. Its Infrastructure Solutions Group (ISG) was the standout: servers and networking delivered $10.1 billion in revenue, up 37% YoY, with overall ISG revenue hitting $14.1 billion, up 24%.
 
Dell says this growth stems from surging demand for AI servers, with $12.3 billion in new AI-server orders during the quarter alone, and a year-to-date pipeline of about $30 billion, mixed across enterprise, sovereign-cloud, and large-scale "neocloud" customers.
 
In plain terms: Dell is investing heavily in high-performance computing infrastructure. This includes building large HPC clusters, deploying custom AI servers, and providing flexible scaling options for global enterprises and sovereign cloud buyers. Their ability to provide complete HPC solutions, including compute, networking, support, and storage, makes them a key partner for organizations needing powerful, scalable computing resources, from research institutions to cloud providers.

The GPU King: Nvidia’s Q3 Rocket Fuel for HPC Infrastructure

NVIDIA delivered blow-out third-quarter results: $57.0 billion in revenue, a 62% increase over last year. Data center revenue alone hit a record $51.2 billion, up 66% YoY.
 
Nvidia executives highlighted that demand for its latest GPU architecture, NVIDIA Blackwell, remains red-hot and that cloud GPUs are “sold out.” The firm sees this demand driven by exploding workloads in training and inference for generative AI, large-language models, HPC, and emerging “agentic” AI. 
 
On margins and profitability, Nvidia remains a beast: non-GAAP gross margin of around 73.6%, with operating income and EPS both rising sharply.
 
Bottom line: Nvidia is arguably the single most influential driver of high-performance AI and HPC compute capacity today. Its GPUs, systems, and software stack (e.g., CUDA) have become the backbone for data centers, research labs, and cloud providers racing to build next-gen AI infrastructure.

Dell vs. Nvidia: Two Sides of the HPC Coin

Business Model
  • Dell: servers, networking gear, storage, services, and full-stack HPC and AI infrastructure.
  • Nvidia: GPU accelerators (and full systems), the compute “engines” behind AI/HPC workloads.

Q3 FY26 Results (Scale)
  • Dell: $27B revenue; servers and networking revenue up 37% YoY; strong cash flow and a $30B+ pipeline of AI-server orders.
  • Nvidia: $57B revenue; $51.2B data-center revenue; GPU demand “off the charts” at high margins.

Value Prop in HPC
  • Dell: custom, turnkey compute, networking, and support, ideal for enterprises, sovereign clouds, and large HPC deployments.
  • Nvidia: massive compute density and efficiency, the horsepower enabling cutting-edge AI training/inference and HPC workloads.

Strategic Strength
  • Dell: engineering and integration, combining compute, infrastructure, global support, and customization.
  • Nvidia: tech leadership, GPU performance, software ecosystem, scale, and brand dominance in AI/HPC.

Best-Fit Use Cases
  • Dell: organizations that want turnkey HPC clusters, enterprise AI deployments, or regulated/sovereign environments.
  • Nvidia: entities needing raw GPU compute for AI training, large-scale inference, simulation, and scientific computing where maximum performance matters.
 
In other words: Dell builds the highway; Nvidia builds the engines that run fastest on it.

Why This Matters and What’s Next

With both firms posting record results, the HPC and AI-infrastructure space is clearly firing on all cylinders. For enterprises and institutions in any region, this means two things:
  • Access to enterprise-grade HPC infrastructure is becoming easier and more affordable. Institutions needing heavy compute (data analysis, big data, simulation, AI modeling) can now tap into turnkey server/GPU clusters from Dell, powered by Nvidia GPUs.
  • AI and HPC scale are accelerating. Given Nvidia’s GPU dominance and Dell’s global delivery + support capabilities, the barrier to entry for building powerful AI-powered compute environments is dropping. We might soon see more data-heavy, compute-intensive startups or public-sector deployments outside traditional tech hubs.

Looking ahead, if current order backlogs, demand for AI servers, and GPU supply hold, we could be on the brink of a new wave of HPC deployments across research, modeling, enterprise AI, climate modeling, healthcare genomics, and other data-heavy fields.
 
This quarter's numbers from Dell and Nvidia aren't just financial wins; they signal that high-performance computing is shifting from niche to mainstream. As someone involved in software and big data, I see this as a signal worth paying attention to.

SC25 pushes network frontiers as Pegatron unveils modular server ambitions

In St. Louis, the high-performance computing world thrives on pushing limits, and this year’s SC25 conference delivered another leap forward, both on the show floor and across the wires of the legendary SCinet network.
 
Pegatron, a global leader in electronics manufacturing, showcased its next-generation server roadmap, emphasizing the company’s vision for modular, power-efficient systems engineered for the AI-accelerated era. Its press release highlighted a strategic expansion into advanced rack-scale design, with an emphasis on flexibility, field-replaceable modules, and full-stack energy optimization. But even that technical momentum was matched, if not eclipsed, by the sheer scale of the network beneath attendees’ feet.

SCinet Hits a New Threshold: 13.72 Tbps

SCinet, the volunteer-built engineering marvel that powers every Supercomputing conference, announced its highest throughput ever recorded for SC25: 13.72 terabits per second (Tbps).
 
To put this into perspective, SCinet’s wide-area network (WAN) backbone has grown at a pace few global networks can match:
  • SC25 (St. Louis): 13.72 Tbps
  • SC24 (Atlanta): 8.71 Tbps
  • SC23 (Denver): 6.71 Tbps
  • SC22: 5.01 Tbps
  • SC19: 4.22 Tbps
Every year, SCinet is torn down and rebuilt by an army of volunteer engineers, network architects, and researchers from around the world, who converge to create the fastest temporary network on Earth. Its sole mission: to enable the bleeding-edge demos that define the HPC community.
 
As datasets balloon and GPU clusters grow hungrier by the day, SCinet’s growth isn’t a luxury; it’s a necessity.
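For what it's worth, the figures above imply steep year-over-year growth (note that the SC19-to-SC22 step spans three editions, since no intermediate numbers are listed):

```python
# Growth implied by the SCinet backbone figures above (in Tbps).
bandwidth = {"SC19": 4.22, "SC22": 5.01, "SC23": 6.71, "SC24": 8.71, "SC25": 13.72}

years = list(bandwidth)
for prev, cur in zip(years, years[1:]):
    growth = 100 * (bandwidth[cur] - bandwidth[prev]) / bandwidth[prev]
    print(f"{prev} -> {cur}: +{growth:.0f}%")
```

The SC24-to-SC25 jump alone is roughly 58%, the largest single-year step in the list.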

Pegatron’s Modular Pivot: A Server for the AI Era

In its SC25 release, Pegatron detailed its next-gen server platform built around modularity, thermal efficiency, and rapid deployment, all themes dominating this year’s conference.
 
Key takeaways from Pegatron’s announcement include:
• Modular AI-ready infrastructure
Pegatron outlined blade-style compute modules designed to scale from traditional HPC to dense GPU and accelerator configurations.
• Energy-optimized design
The company emphasized new power-distribution and cooling architectures intended to support the surge of high-wattage AI accelerators without sacrificing stability or serviceability.
• Manufacturing muscle
Leveraging Pegatron’s global supply chain, the company aims to support hyperscalers, enterprise AI builders, and research labs that need rapid, consistent deployment cycles as models grow more compute-intensive.
 
Pegatron’s SC25 presence signals its intent to be more than an OEM; it wants to shape the future of rack-scale AI infrastructure.

Why the Two Stories Intersect

SCinet’s explosive bandwidth growth and Pegatron’s hardware ambitions aren’t isolated trends; they’re parallel responses to the same fundamental shift: AI workloads are becoming the dominant driver of HPC system design.
 
Training runs now require:
  • Uncompressed terabyte-scale dataset transfers
  • Multi-site distributed training
  • Real-time visualization pipelines
  • Exascale-class telemetry
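The core step of multi-site distributed training, averaging gradients across sites (an "all-reduce"), can be sketched in miniature. Real systems perform this with libraries like NCCL or MPI over networks such as SCinet; the in-memory list here is a stand-in for that network exchange.

```python
# Toy all-reduce: average per-parameter gradients computed at
# different sites. (Stand-in for NCCL/MPI collectives over a real WAN.)

def all_reduce_mean(site_grads):
    """Average gradients elementwise across all participating sites."""
    n = len(site_grads)
    width = len(site_grads[0])
    return [sum(g[i] for g in site_grads) / n for i in range(width)]

grads = [
    [0.2, -0.4],   # gradients from site A
    [0.4, -0.2],   # gradients from site B
]
print(all_reduce_mean(grads))
```

Every synchronization step moves a full copy of the gradients between sites, which is exactly why model-scale training turns network bandwidth into a first-class design constraint.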
At SC25, the relationship between compute, cooling, networking, and manufacturing has never been more visible. Pegatron’s modular hardware approach pairs naturally with a world where SCinet-class networks will soon be the norm, not the exception.

A Future Built on Collaboration and Momentum

SCinet’s volunteers, the invisible heroes of the SC conference, have once again demonstrated what’s possible when the global HPC community collaborates without restraint.
 
Pegatron’s announcement adds another layer of optimism: that the companies powering AI and HPC infrastructure are evolving just as quickly as the workloads they support.
 
SC25 feels like a hinge moment. Faster networks. Smarter servers. Greener cooling systems. More modular racks. And an industry that’s learning to innovate at the pace of AI itself.
 
The bar has officially been raised. And judging by the energy on the SC25 floor, the community seems ready to clear it again next year.