AI hardware processing is going 3D, from square to cube, to boost processing power

A team of researchers from the University of Oxford, in collaboration with other universities, has developed an innovative hardware system that combines photonic and electronic technologies to process 3D data. The system significantly enhances processing power for AI tasks. To test the hardware, the team analyzed 100 electrocardiogram signals simultaneously and achieved a 93.5% accuracy rate in identifying the risk of sudden death. The researchers believe that this approach could lead to a 100-fold increase in energy efficiency and compute density compared to current electronic processors if scaled up.

The efficiency of conventional computer chip processing doubles every 18 months. However, modern AI tasks require processing power that is currently doubling every 3.5 months. This means that new supercomputing paradigms are urgently needed to cope with this rising demand.
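The gap between those two doubling rates compounds quickly. As a back-of-the-envelope illustration (the arithmetic below is ours, not from the article), the two growth curves can be compared over a single year:

```python
# Illustrative arithmetic: compare the two doubling rates over one year.
# Chip efficiency doubles every 18 months; AI compute demand every 3.5 months.
months = 12
chip_growth = 2 ** (months / 18)     # ~1.59x in a year
demand_growth = 2 ** (months / 3.5)  # ~10.8x in a year
gap = demand_growth / chip_growth

print(f"Chip efficiency after 1 year:   {chip_growth:.2f}x")
print(f"AI compute demand after 1 year: {demand_growth:.2f}x")
print(f"Demand outpaces hardware by:    {gap:.1f}x per year")
```

At these rates, demand outgrows conventional hardware by roughly a factor of seven every year, which is why the authors argue for new supercomputing paradigms rather than incremental chip improvements.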

One possible solution is to use light instead of electronics to carry out multiple calculations in parallel using different wavelengths to represent different sets of data. In 2021, the same authors published groundbreaking work demonstrating a form of integrated photonic processing chip that could carry out matrix-vector multiplication at a much faster speed than the fastest electronic approaches. This breakthrough led to the creation of Salience Labs, a photonic AI company that emerged from the University of Oxford.

The team has now taken this concept further by adding an extra parallel dimension to the processing capability of their photonic matrix-vector multiplier chips. This higher-dimensional processing is made possible by encoding data on multiple radio frequencies, achieving a level of parallelism that was previously impossible.
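The effect of stacking parallel dimensions can be sketched numerically. The following is a toy sketch only, not a model of the team's actual photonic hardware: it assumes each optical wavelength carries one input vector and each radio-frequency channel adds a second parallel axis, so many matrix-vector products happen in a single pass through the same weight matrix (all array sizes here are invented, apart from the 6-input, 6-output scale the article discusses later).

```python
import numpy as np

# Toy sketch of wavelength + radio-frequency parallelism in a matrix-vector
# multiplier. One weight matrix W (the chip's transmission) is applied to
# every (wavelength, RF) input slot at once.
n_wavelengths = 4   # parallel optical channels
n_rf = 3            # parallel radio-frequency channels (the added dimension)
n_in, n_out = 6, 6  # a moderate 6-input, 6-output scale

rng = np.random.default_rng(0)
W = rng.normal(size=(n_out, n_in))                # shared weight matrix
X = rng.normal(size=(n_wavelengths, n_rf, n_in))  # one vector per (wavelength, RF) slot

# All n_wavelengths * n_rf matrix-vector products in one contraction,
# mimicking the parallelism of the 3D photonic-electronic processor.
Y = np.einsum('oi,wri->wro', W, X)
print(Y.shape)  # (4, 3, 6): 12 results computed "simultaneously"
```

In the electronic simulation the parallelism is only notational, but in the photonic hardware each (wavelength, RF) slot is a physically concurrent channel, which is where the claimed efficiency gains come from.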

The team tested the hardware by applying it to the task of assessing the risk of sudden death from electrocardiograms of heart disease patients. They analyzed 100 electrocardiogram signals simultaneously, identifying the risk of sudden death with 93.5% accuracy.

The researchers estimated that even with a moderate scaling of 6 inputs x 6 outputs, this approach could outperform state-of-the-art electronic processors, potentially providing a 100-times enhancement in energy efficiency and compute density. The team anticipates further enhancement in supercomputing parallelism in the future by exploiting more degrees of freedom of light, such as polarization and mode multiplexing.

Dr. Bowei Dong, the first author of the publication, expressed his gratitude for the vibrant and collaborative platform provided by Oxford, which gave him the opportunity and courage to push the frontiers of advanced AI supercomputing hardware. Professor Harish Bhaskaran, the co-founder of Salience Labs and leader of this work, said that this is an exciting time to be doing research in AI hardware at the fundamental scale, and this work is one example of how what we assumed was a limit can be further surpassed.

A weir on the Koeye River is one location where Wild Salmon Center is partnering with First Nations to pilot the Salmon Vision technology. (PC: Olivia Leigh Nowak/Le Colibri Studio.)

WSC, First Nations develop Salmon Vision, a real-time machine learning model to track salmon returns

The Wild Salmon Center has partnered with several First Nations to use a combination of cutting-edge artificial intelligence tools and traditional Indigenous fishing methods to gain a better understanding of salmon runs in real time. The Salmon Vision deep learning model, which uses advanced artificial intelligence tools to identify and count fish species, is currently being utilized in various rivers around the North and Central Coasts of British Columbia. By 2024, Salmon Vision aims to provide reliable real-time fish count data to First Nations fisheries managers, thereby increasing their involvement in fisheries management decisions.

Fisheries managers on British Columbia’s Central Coast must make decisions without knowing how many salmon are returning until after fishing seasons are over. They make forecasts and set harvest targets for commercial and recreational fisheries based on modeled data from the past, and emergency closures must be called when salmon populations start to decline. With the unpredictable and accelerating effects of climate change, however, it is increasingly difficult to rely on past data to predict future salmon returns.

Dr. Will Atlas, Wild Salmon Center Senior Watershed Scientist, suggests a solution called “Salmon Vision.” A first-of-its-kind technology that combines artificial intelligence with ancient fishing weir technology, the Salmon Vision computer deep learning model can identify and count fish species. Developed by WSC in data partnership with the Gitanyow Fisheries Authority and Skeena Fisheries Commission, Salmon Vision aims to enable real-time salmon population monitoring for First Nations fisheries managers and beyond.

Automating fish counting is crucial for making informed decisions while salmon are still running, according to many of WSC's First Nations partners. Dr. Atlas suggests that underwater video technology can help us see those salmon returning to rivers.

The Salmon Vision pilot study has annotated over 500,000 video frames captured at Indigenous-run fish counting weirs on the Kitwanga and Bear Rivers of B.C.'s Central Coast. Early assessments indicate that the technology is adept at tracking 12 different fish species passing through custom fish-counting boxes at the two weirs, with accuracy surpassing 90 percent for coho and 80 percent for sockeye salmon: two of the principal fish species targeted by First Nations, commercial, and recreational fishers.
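Per-species accuracy figures like these come from comparing a model's frame-level predictions against human annotations. A minimal sketch of that evaluation (the function name and the toy labels below are invented for illustration, not Salmon Vision's actual code or data):

```python
from collections import Counter

# Hypothetical sketch: per-species accuracy from annotated video frames,
# the kind of evaluation behind the coho/sockeye figures cited above.
def per_species_accuracy(labels, predictions):
    correct = Counter()
    total = Counter()
    for truth, pred in zip(labels, predictions):
        total[truth] += 1
        if truth == pred:
            correct[truth] += 1
    return {species: correct[species] / total[species] for species in total}

labels      = ["coho", "coho", "sockeye", "coho", "sockeye"]
predictions = ["coho", "coho", "sockeye", "coho", "coho"]
print(per_species_accuracy(labels, predictions))
# {'coho': 1.0, 'sockeye': 0.5}
```

Breaking accuracy out per species matters here because management decisions are made species by species; an aggregate accuracy number could hide poor performance on one run.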

The Heiltsuk Nation is running Salmon Vision on a weir on the Koeye River. For First Nations like the Heiltsuk, weirs represent more than the revitalization of an age-old fishing technology. Canada's Department of Fisheries and Oceans banned weirs in the late 1800s as a way to consolidate control of fishery resources; rebuilding them on rivers like the Koeye is a statement of First Nations sovereignty and of their seat at the table in fisheries management decisions.

"Modern-day expression of Heiltsuk title and rights and an avenue for us to be a part of the latest science," says William Housty, Associate Director of the Heiltsuk Integrated Resource Management Department. "And to make decisions not just for the betterment of this creek, but for the whole ecosystem." 

The Salmon Vision team is implementing automated counting on a trial basis in several rivers around the B.C. North and Central Coasts with partner First Nations. The goal is to provide reliable real-time fish count data to these partners by 2024. Ultimately, Dr. Atlas says, this groundbreaking A.I. technology could be in place in rivers across the North Pacific. 

"How many salmon are returning everywhere that we're fishing for salmon is the information we need," Dr. Atlas says. "You can't tell me with a straight face that you're having a sustainable fishery if you don't know how many fish you have coming back. And that's a problem right around the Pacific Rim." 

It's a problem with a promising solution, one that's just now coming into focus.

Xiaoqian Jiang, PhD, chair of the Department of Health Data Science and Artificial Intelligence at McWilliams School of Biomedical Informatics.

UTHealth Houston wins $6.4M NIH grant to develop deep learning model for Alzheimer’s

UTHealth Houston has been awarded a five-year, $6.4 million grant by the National Institute on Aging to develop an artificial intelligence approach to studying the genetic factors related to Alzheimer’s disease. A team of researchers led by Zhongming Zhao, PhD, and Xiaoqian Jiang, PhD, both principal investigators and professors at McWilliams School of Biomedical Informatics at UTHealth Houston, is creating a deep-learning AI system that will link brain imaging with cell-specific genetic factors. To validate their AI models, the researchers will use neuroimaging and genetic data from Rush University Medical Center, and they will join the National Alzheimer’s Disease Sequencing Project AI/Machine Learning Consortium.

Zhao, who is also the director of the Center for Precision Health at McWilliams School of Biomedical Informatics, stated that this project will bridge the gap in Alzheimer's disease research between neuroimaging and genetic studies. Although numerous computational analytical approaches have been published in each field, few effectively address the link between neuroimaging and genetic data for a deeper understanding of the disease.

A significant amount of data related to molecular neuroimaging biomarkers and clinical information has already been generated in the context of Alzheimer's disease. However, researchers have not been able to connect many of the causal factors associated with the disease. To address this, the team plans to use advanced machine-learning technology and an AI multimodality approach to group genetic and functional data, helping to characterize the genetic risk of Alzheimer's disease. The researchers call this approach the "deep-learning brain" because it focuses on reading the brain. The goal is to extend the model to the single-cell level, the "single-cell deep brain," allowing a more powerful way to dissect the genetic components of Alzheimer's disease.

To address the cognitive decline associated with Alzheimer's disease, researchers plan to integrate neuroimaging data into the deep-learning system. This will involve pairing distinct imaging features with genomic data to visualize their commonalities. Overall, the approach holds great promise for studying Alzheimer's disease and improving our understanding of this neurodegenerative disorder.

Researchers will use neuroimaging and genetic data from Rush University Medical Center, led by Christopher Gaiteri, PhD, assistant professor in the Department of Neurosciences, to validate their AI models. They will also collaborate with the National Alzheimer's Disease Sequencing Project AI/Machine Learning Consortium. The goal is to identify the link between genes and neuroimages and combine them into neuroimaging genetics, which can ultimately help explain the causes of cognitive decline in Alzheimer's disease and guide researchers and patients toward better treatment options. The study's co-investigators are Paul Schulz, MD, a professor in the Department of Neurology at McGovern Medical School, and Kai Zhang, PhD, Yejin Kim, PhD, Yulin Dai, PhD, and Xiangning Chen, PhD, with McWilliams School of Biomedical Informatics. This research is funded by NIH grant U01AG079847.

This image was generated with the assistance of AI (DALL·E 2). Credit: Professor James Sprittles, University of Warwick.

Scientists are using the mechanics of giant waves on a nanometric scale

Scientists have shown that the mechanics of rogue waves can be applied to the scale of a nanometer, with potential applications in various industries, including manufacturing and medicine. The study involved direct simulations of molecules and the development of new mathematical models. This theory can help control when and how layers rupture, leading to advancements in nanotechnologies and providing insights into dry eye disorders.

Once dismissed as myth, rogue waves are known to strike oil rigs and ships in their path. Unlike tsunamis, rogue waves form through the chance combination of smaller ocean waves, which makes them rare events.

Researchers have been studying rogue waves for years, but now they are showing how this phenomenon can be applied at the nanometer scale. This new approach to the behavior of liquids at the nanometer scale has been published as a letter in Physical Review Fluids. A nanometer is one-billionth of a meter, roughly a hundred thousand times thinner than a page of a book.

Scientists have found that the holes and bumps caused by these rogue waves can be manipulated to produce patterns and structures for use in nano-manufacturing, that is, manufacturing at the scale of one-billionth of a meter. For instance, the patterns formed when liquid films rupture can be used to build microelectronic circuits, which could be used in the production of low-cost components for solar cells. Furthermore, the behavior of thin liquid layers could help to explain why millions of people worldwide suffer from dry eye, which occurs when the tear film covering the eye ruptures.

The University of Warwick’s Mathematics Institute led a study that used direct simulations of molecules and new mathematical models to discover how nanoscopic layers of liquid behave in unexpected ways. Although spilled coffee on a table may seem still, at the nanoscale the chaotic motion of molecules creates random waves on a liquid’s surface. A rare event occurs when these waves conspire to create a large 'rogue nano wave' that bursts through the layer and creates a hole. The new theory explains how and when this hole is formed, providing new insight into an otherwise unpredictable effect, with far-reaching potential applications across several industries.

Professor James Sprittles from the Mathematics Institute at the University of Warwick said, "We were thrilled to discover that mathematical models initially developed for quantum physics and recently applied to predict rogue ocean waves are essential for predicting the stability of nanoscopic layers of liquid. We hope that in the future, the theory can be used to develop a range of nano-technologies where controlling when and how layers rupture is critical. It could also have implications in related areas, such as the behavior of emulsions in foods or paints, where the stability of thin liquid films determines their shelf-life."

Two neutron stars at the moment of their merger. Credit: Dana Berry, SkyWorks Digital, Inc.

Scientists use supercomputing tech to gain a better understanding of the 3D structure of kilonovae

A team of scientists from GSI Helmholtzzentrum für Schwerionenforschung and Queen's University Belfast has produced a 3D supercomputer simulation of the light emitted after the merger of two neutron stars, closely matching an observed kilonova. The simulation brings together several areas of physics, such as the behavior of matter at high densities, the properties of unstable heavy nuclei, and the atom-light interactions of heavy elements. This breakthrough has provided new insights into the phenomenon of kilonovae.

Recent observations combining gravitational waves and visible light have pointed to neutron star mergers as a major site of heavy-element production. According to Luke Shingles, a scientist at GSI/FAIR and the lead author, the unprecedented agreement between their simulations and the observed kilonova AT2017gfo indicates a broad understanding of the event.

The light that we see through telescopes from the material ejected by a neutron-star merger is determined by the interactions between electrons, ions, and photons within it. Supercomputer simulations of radiative transfer can model these processes and the emitted light. Researchers have now produced, for the first time, a three-dimensional simulation that self-consistently follows the neutron-star merger, neutron-capture nucleosynthesis, the energy deposited by radioactive decay, and radiative transfer with tens of millions of atomic transitions of heavy elements.
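The core idea of radiative transfer by Monte Carlo can be shown in miniature. The sketch below is a toy, not the team's code: photon packets random-walk through a purely scattering slab of a given optical depth, and the fraction that escapes is tallied (every parameter and the function name are invented for illustration; real kilonova simulations track millions of wavelength-dependent atomic transitions on top of this skeleton).

```python
import math
import random

# Minimal Monte Carlo radiative-transfer sketch: photon packets scatter through
# a slab of optical depth tau; we count the fraction that escape the outer edge.
def escape_fraction(tau, n_packets=20_000, seed=1):
    random.seed(seed)
    escaped = 0
    for _ in range(n_packets):
        depth, mu = 0.0, 1.0  # packet injected at the inner edge, heading outward
        while True:
            step = -math.log(1.0 - random.random())  # optical depth to next scattering
            depth += step * mu
            if depth >= tau:        # crossed the outer edge: the packet escapes
                escaped += 1
                break
            if depth <= 0.0:        # driven back through the inner edge: lost
                break
            mu = random.uniform(-1.0, 1.0)  # isotropic scattering direction
    return escaped / n_packets

print(escape_fraction(tau=2.0))
```

Raising the optical depth traps more packets inside the ejecta, which is why the emerging light depends so strongly on the ejecta's composition, geometry, and viewing direction, as the 3D model described below captures.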

The 3D model can predict the observed light for any viewing direction. When viewed almost perpendicular to the orbital plane of the two neutron stars, as observational data indicates for the kilonova AT2017gfo, the model predicts a sequence of spectral distributions that look very similar to what has been observed for AT2017gfo. This research area will help us to understand the origins of elements heavier than iron (such as platinum and gold) that were mainly produced by the rapid neutron capture process in neutron star mergers, says Shingles.

Almost half of the elements heavier than iron are produced in an environment of extreme temperatures and neutron densities, as achieved when two neutron stars merge. When they spiral in and coalesce, the resulting explosion leads to the ejection of matter with the appropriate conditions to produce unstable neutron-rich heavy nuclei by a sequence of neutron captures and beta-decays. These nuclei decay to stability, releasing energy that powers an explosive ‘kilonova’ transient, a bright emission of light that rapidly fades in about a week.

The 3D simulation combines several areas of physics, including the behavior of matter at high densities, the properties of unstable heavy nuclei, and atom-light interactions of heavy elements. However, further challenges remain, such as accounting for the rate at which the spectral distribution changes and the description of material ejected at late times. Future progress in this area will increase the precision with which we can predict and understand features in the spectra and will further our understanding of the conditions in which heavy elements were synthesized. High-quality atomic and nuclear experimental data, provided by the FAIR facility, is a fundamental ingredient for these models.