UK prof Nawaz deploys new algorithm for reconstructing particles at the LHC

Professor Nawaz on a visit to CERN

The Large Hadron Collider (LHC) is the most powerful particle accelerator ever built. It sits in a tunnel 100 meters underground at CERN, the European Organisation for Nuclear Research, near Geneva in Switzerland, and is the site of long-running experiments that enable physicists worldwide to learn more about the nature of the Universe.

The project is part of the Compact Muon Solenoid (CMS) experiment – one of seven installed experiments that use detectors to analyze the particles produced by collisions in the accelerator.

The subject of a new study on particle reconstruction in high-occupancy imaging calorimeters with graph neural networks, the project has been carried out ahead of the high-luminosity upgrade of the Large Hadron Collider. The High Luminosity Large Hadron Collider (HL-LHC) project aims to crank up the performance of the LHC to increase the potential for discoveries after 2029. The HL-LHC will increase the number of simultaneous proton-proton interactions in an event from 40 to 200.

Professor Raheel Nawaz, Pro Vice-Chancellor for Digital Transformation at Staffordshire University, has supervised the research. He explained: “Limiting the increase of computing resource consumption at large pileups is a necessary step for the success of the HL-LHC physics program and we are advocating the use of modern machine learning techniques to perform particle reconstruction as a possible solution to this problem.”

He added: “This project has been both a joy and a privilege to work on and is likely to dictate the future direction of research on particle reconstruction by using more advanced AI-based solutions.”

Dr. Jan Kieseler from the Experimental Physics Department at CERN added: “This is the first single-shot reconstruction of about 1,000 particles in an unprecedentedly challenging environment with 200 simultaneous interactions per proton-proton collision. Showing that this novel approach, combining dedicated graph neural network layers (GravNet) and training methods (Object Condensation), can be extended to such challenging tasks while staying within resource constraints represents an important milestone towards future particle reconstruction.”
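In rough outline, a GravNet-style layer projects each detector hit into a learned low-dimensional coordinate space, finds each hit's nearest neighbors there, and aggregates their features with weights that fall off with distance. The NumPy sketch below illustrates only that idea: the random matrices stand in for trained projections, and the function name and constants are illustrative, not the CMS implementation.

```python
import numpy as np

def gravnet_layer(features, s_dim=2, f_dim=4, k=3, rng=None):
    """Simplified sketch of one GravNet-style message-passing step.

    Each node (e.g. a calorimeter hit) is projected into a learned
    coordinate space; features from its k nearest neighbours there are
    aggregated with distance-dependent weights.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = features.shape
    # Stand-ins for learned linear projections (random here, trained in practice).
    W_s = rng.normal(size=(d, s_dim))   # projection into coordinate space S
    W_f = rng.normal(size=(d, f_dim))   # projection of features to exchange
    coords = features @ W_s
    f_exch = features @ W_f

    out = np.zeros((n, f_dim))
    for i in range(n):
        dist2 = np.sum((coords - coords[i]) ** 2, axis=1)
        dist2[i] = np.inf                          # exclude self
        nbrs = np.argsort(dist2)[:k]               # k nearest in learned space
        w = np.exp(-dist2[nbrs] / dist2[nbrs].max())  # distance weighting
        out[i] = (w[:, None] * f_exch[nbrs]).sum(axis=0)
    # The real layer concatenates the aggregate with the input features
    # and feeds the result through further dense layers.
    return np.concatenate([features, out], axis=1)
```

A trained network would learn `W_s` and `W_f` so that hits belonging to the same particle cluster together in the learned space, which is what makes the neighbor aggregation useful for reconstruction.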

Shah Rukh Qasim, leading this project as part of his Ph.D. at CERN and Manchester Metropolitan University, said: "The amount of progress we have made on this project in the last three years is truly remarkable. It was hard to imagine we would reach this milestone when we started!"

Professor Martin Jones, Vice-Chancellor and Chief Executive at Staffordshire University, added: “CERN is one of the world’s most respected centers for scientific research and I congratulate the researchers on this project which is effectively paving the way for even greater discoveries in years to come.

“Artificial Intelligence is continuously evolving to benefit many different industries and to know that academics at Staffordshire University and elsewhere are contributing to the research behind such advancements is both exciting and significant.”

University of Oregon develops models of the edge waves, continental shelf that fueled the 2021 Acapulco Bay tsunami

Acapulco de Juárez, Guerrero, México. | Herminio González Huizar

Trapped inside the shoreline of a bay, a tsunami’s resonant interactions with regular waves can prolong the disturbance. For the 2021 magnitude 7 Acapulco, Mexico earthquake and tsunami, edge waves in the bay and the short continental shelf also had a surprisingly significant effect on the tsunami’s duration, according to a new study published in the Bulletin of the Seismological Society of America.

In the study, Diego Melgar of the University of Oregon and colleagues from research institutions in Mexico, Iceland, and the United States developed a slip model for the earthquake that they used to model the tsunami and learn more about why it lasted almost 17 hours inside the bay.

Edge waves—coastal waves generated by a tsunami that travel back and forth parallel to a shoreline—and the energy generated by waves bouncing off the short continental shelf helped to continuously re-excite the bay resonance, the researchers found.

“The tsunami lasts as long as it does because the bay sloshes it around like a bathtub, but also whatever is going on in the shelf sort of gives it a whack every couple of oscillations and keeps it going,” Melgar explained.

Although previous models suggested that a continental shelf might contribute to tsunami waves in a bay, it was somewhat surprising that such a short shelf—the drop-off to the deep ocean occurs close to shore—would have a notable impact.

“It’s a second-order effect that makes a bad problem slightly worse,” said Melgar. “Every bay needs to think about these problems, and they’re probably going to be worse in places where the shelf is longer and there’s the chance to trap these waves in there.”
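Melgar’s bathtub analogy maps onto the textbook picture of a lightly damped oscillator that receives a periodic impulse. The toy integration below illustrates only that mechanism, with made-up parameters (a 30-minute resonance, an impulse every two oscillations); it is not the study’s numerical tsunami model.

```python
import math

def ring_down(hours=17.0, dt=1.0, period_min=30.0, q_decay=0.99,
              kick_every_min=60.0, kick=0.2, forced=True):
    """Toy damped oscillator for a resonating bay (arbitrary units).

    Damping drains energy every step; if `forced`, a small impulse
    ("a whack every couple of oscillations") re-excites the motion.
    Times are in minutes.
    """
    omega = 2 * math.pi / period_min
    x, v = 1.0, 0.0                               # initial sloshing state
    for i in range(1, int(hours * 60 / dt) + 1):
        v = (v - omega ** 2 * x * dt) * q_decay   # restoring force + damping
        x = x + v * dt                            # semi-implicit Euler step
        if forced and (i * dt) % kick_every_min < dt:
            v += kick * omega                     # shelf-reflected impulse
    return math.hypot(x, v / omega)               # amplitude of the oscillation
```

Comparing `forced=True` with `forced=False` shows the effect: the periodic impulses keep the bay ringing for the full 17 hours, while the unforced oscillation decays to nearly nothing.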

In the first tens of minutes of the event, the uplift caused by the earthquake “flushed” out the bay at the same time tsunami waves were rushing in. The mix caused hazardous rip currents with speeds up to three kilometers per hour in certain areas.

“Sometimes in Mexico, we give the ‘all-clear’ signal after a tsunami that applies to big waves, but we should also think of currents because they can last a long time,” Melgar said.

Long-duration tsunamis and rip currents could damage bay infrastructure—a significant concern for a metropolitan area like Acapulco where many of its 800,000-plus residents depend economically on the bay.

Melgar and colleagues used strong motion, GNSS, satellite, and tide gauge data to model the earthquake, which occurred at the southeastern end of the Guerrero gap. The gap marks a region along the Pacific coast of Mexico where the Cocos tectonic plate subducts under the North American plate, but where there have been surprisingly few large earthquakes in roughly the past 100 years.

The Acapulco earthquake was relatively compact, leaving a lot of the megathrust fault in the region unbroken, the researchers noted.

A single tide gauge in Acapulco Bay also provided an intriguing finding: based on the gauge recordings, the 2021 earthquake and a magnitude 7 earthquake that struck the bay in 1962 are “strikingly similar,” Melgar and colleagues found.

A recent study published in Seismological Research Letters about the 2021 earthquake also noted that the 2021 and 1962 events looked nearly identical on seismic recordings made at a station in Germany.

“We can’t say for sure because it’s just one recording, but this really does look like a repeating earthquake every 50-ish years or so,” said Melgar.

“It seems like the seismic activity in this part of the world happens in bursts,” he added, noting that there have been several magnitude 7 earthquakes recently along the Pacific coast between Acapulco and Oaxaca to the south.

Instead of studying one earthquake at a time, Melgar said, it’s time to look for “systematic behaviors between these events because that would color how we calculate seismic hazard.”

Melgar credited national agencies that manage seismic and tidal networks, like the Servicio Mareográfico Nacional and the Instituto de Ingeniería at Universidad Nacional Autónoma de México, “for keeping these networks running for decades” and providing the data necessary to analyze these earthquakes and tsunamis in detail.

South Korean President Yoon visits University of Toronto for AI roundtable

The University of Toronto welcomed South Korean President Yoon Suk-yeol to campus last week to discuss artificial intelligence (AI) – its rise, potential applications, and opportunities for further collaboration between U of T and South Korean partners.

President Yoon hailed Toronto as an AI powerhouse, saying that Canada’s status as a world leader in AI and a center of the global AI supply chain was the result of the country recognizing the potential economic and social impacts of the technology early on.

He also said the tenacity and persistence of researchers such as University Professor Emeritus Geoffrey Hinton, a pioneer of the AI field of deep learning, served as a “benchmark” for South Korean efforts to advance the technologies of the future, adding that he was delighted to visit U of T, which he described as “one of the most prestigious universities in North America.”

U of T President Meric Gertler, for his part, said he was “deeply honored” to welcome President Yoon, who, he said, “has made it a priority to work closely with South Korea's allies and partners, advancing openness, human rights, democracy and the rule of law, with clear purpose and integrity.”

President Gertler noted that the South Korean delegation’s visit comes at a time when Toronto has emerged as the third-largest tech hub in North America, with the city’s AI and machine learning ecosystem at the heart of this growth.

“Together with the Vector Institute, the Canadian Institute for Advanced Research (CIFAR), MaRS, and other partners – all within walking distance of this room – we have created one of the world’s richest pools of talent,” President Gertler said.

He added that U of T, its local partners, and South Korean organizations stand to learn much from each other when it comes to AI research, development, innovation, and education.

“Partnering with Korea’s leading universities, innovative firms and exceptionally talented researchers is an extraordinary opportunity for all parties to benefit as we deepen our collective commitment to excellence and to tackling the world’s most pressing challenges.”

President Yoon’s visit to U of T took place during the first day of his two-day visit to Canada, which included a meeting with Prime Minister Justin Trudeau in Ottawa the following day.

It also came less than two weeks after the government of Ontario concluded a trade mission to South Korea and Japan, led by Vic Fedeli, the province’s minister of economic development, job creation and trade.

Fedeli, who attended the U of T event, said Toronto’s reputation as a global hub in AI was regularly impressed upon him during his time in South Korea.

“At every single stop that we made, we heard people talk about Canada, AI, U of T, the Vector Institute – they see Canada as a real leader in AI and they’re very eager to learn,” Fedeli said.

He noted there was a strong desire in South Korea to see more Korean students come to Canada to further their education in STEM fields, including AI. “They want a bigger influx of Korean students – and we told them, ‘The door’s open,’ because we really believe this is going to help society. We’ve seen some examples of what AI has done and we’re very eager to continue to see the development of AI.”

Fedeli added that he hoped the high-level meeting would further strengthen the economic relationship between Ontario and South Korea, helping to spark AI advances that give both Ontarian and Korean companies a competitive edge on the global stage.

Held at Simcoe Hall, the meeting included a roundtable discussion titled “AI for the Better Future of Humanity,” which featured AI leaders and luminaries, including Hinton and Lee Jong-ho, the Republic of Korea’s Minister of Science and ICT (information and communication technology).

The talk, moderated by Leah Cowen, U of T’s vice-president, research and innovation, and strategic initiatives, also included contributions from Garth Gibson, president and CEO of the Vector Institute for Artificial Intelligence; Elissa Strome, executive director of Pan-Canadian AI Strategy at CIFAR; and Professor Lisa Austin, chair in law and technology at U of T’s Faculty of Law and associate director at the Schwartz Reisman Institute for Technology and Society.

Attendees watched demonstrations by U of T professors and graduate students from the U of T Robotics Institute, as well as presentations by South Korean companies, including Samsung and LG – both of which have expanded their presence and connections with Toronto and U of T in recent years. The event was also used to announce a new U of T exchange program with the South Korean government’s Institute for Information & Communication Technology Planning & Evaluation (IITP).

On the subject of AI, Hinton said he believes the deep learning revolution is just getting underway and that he expects tremendous growth in the years ahead.

“We now know that if you take a neural net and you just make it bigger and give it more data and more computing power, it’ll work better. So even with no new scientific insights, things are going to improve,” Hinton said during the roundtable discussion. “But we also know there are tens of thousands of brilliant young minds now thinking about how to make these networks better, so there will be many new scientific insights.”

In the long term, Hinton said he envisions a revolution in AI hardware led by advancements in “neuromorphic hardware” – computers and hardware that model artificial neural networks.

“I think Korea may have a big role to play in this,” Hinton said, noting one of the world’s leading experts in this area is Sebastian Seung, Samsung’s president and head of research – who attended the Simcoe Hall event.

When asked to share his thoughts on how Canada achieved its leadership position in AI, Hinton cited three foundational factors: a tolerant, liberal society that encourages leading researchers to settle here; the federal government’s funding for curiosity-driven basic research; and CIFAR’s funding, in 2004, of the Neural Computation and Adaptive Perception program, which is credited with kickstarting the revolution in deep learning.

Following the discussion, event attendees, including U of T students, watched presentations on avenues for AI research and collaboration by representatives of five South Korean companies: LG, Samsung, Naver, KT (formerly Korea Telecom), and SK Telecom.

Alex Mihailidis, U of T’s associate vice-president, international partnerships, then announced that U of T had signed a memorandum of understanding with IITP, based in Seoul, to launch a bi-national education program in AI.

“We expect that in the fall of 2023, we will be accepting 30 students from Korea who will be going through a custom-made program around AI and its applications,” Mihailidis said. “This is a groundbreaking program that we expect will not only flourish here in Toronto but will grow – hopefully across our two great countries and around the world.”

Earlier, Mihailidis and President Gertler led President Yoon and Fedeli through four demonstrations showcasing some of the cutting-edge technologies being developed by U of T professors and their graduate students.

The technologies included: a wearable robotic exoskeleton for walking assistance and rehab demonstrated by Mihailidis and post-doctoral researcher Brokoslaw Laschowski; a sensory soft robotic hand for human-robot interaction demonstrated by Professor Xinyu Liu of the department of mechanical and industrial engineering in the Faculty of Applied Science & Engineering, graduate student Zhanfeng Zhou and post-doctoral researcher Peng Pan; a multimodal perception system for autonomous vehicles showcased by Jiachen (Jason) Zhou, a graduate student in robotics and aerospace engineering; and a nanorobot for precision manipulation under an electron microscope that was demonstrated by Yu Sun, professor in the department of mechanical and industrial engineering and director of the U of T Robotics Institute.

DART pummels asteroid in first-ever planetary defense test to improve supercomputer modeling

After 10 months of flying in space, NASA’s Double Asteroid Redirection Test (DART) – the world’s first planetary defense technology demonstration – successfully impacted its asteroid target on Monday, the agency’s first attempt to move an asteroid in space.

Asteroid moonlet Dimorphos as seen by the DART spacecraft 11 seconds before impact. DART’s onboard DRACO imager captured this image from a distance of 42 miles (68 kilometers). This image was the last to contain all of Dimorphos in the field of view. Dimorphos is roughly 525 feet (160 meters) in length. Dimorphos’ north is toward the top of the image. Credits: NASA/Johns Hopkins APL

Mission control at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, announced the successful impact at 7:14 p.m. EDT. 

As a part of NASA’s overall planetary defense strategy, DART’s impact with the asteroid Dimorphos demonstrates a viable mitigation technique for protecting the planet from an Earth-bound asteroid or comet, if one were discovered.

“At its core, DART represents an unprecedented success for planetary defense, but it is also a mission of unity with a real benefit for all humanity,” said NASA Administrator Bill Nelson. “As NASA studies the cosmos and our home planet, we’re also working to protect that home, and this international collaboration turned science fiction into science fact, demonstrating one way to protect Earth.”

DART targeted the asteroid moonlet Dimorphos, a small body just 530 feet (160 meters) in diameter. It orbits a larger, 2,560-foot (780-meter) asteroid called Didymos. Neither asteroid poses a threat to Earth. 

The mission’s one-way trip confirmed NASA can successfully navigate a spacecraft to intentionally collide with an asteroid to deflect it, a technique known as kinetic impact. 

The investigation team will now observe Dimorphos using ground-based telescopes to confirm that DART’s impact altered the asteroid’s orbit around Didymos. Researchers expect the impact to shorten Dimorphos’ orbit by about 1%, or roughly 10 minutes; precisely measuring how much the asteroid was deflected is one of the primary purposes of the full-scale test.
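For a nearly circular orbit, the vis-viva equation and Kepler’s third law give the back-of-envelope relation δT/T ≈ 3 δv/v between a small along-track speed change and the resulting change in orbital period. The snippet below, which assumes Dimorphos’ roughly 11.9-hour pre-impact period (an approximate published value, not stated in this article), shows how small a fractional speed change the predicted roughly 10-minute shortening corresponds to.

```python
# Back-of-envelope: for a circular orbit, E = -mu/(2a) and v^2 = mu/a
# give da/a = 2 dv/v for an along-track impulse, and Kepler's third law
# (T^2 proportional to a^3) gives dT/T = (3/2) da/a, hence dT/T = 3 dv/v.

T_hours = 11.9            # Dimorphos' pre-impact orbital period (approx.)
dT_minutes = 10.0         # predicted shortening quoted in the article

T_minutes = T_hours * 60.0
frac_dT = dT_minutes / T_minutes   # fractional period change (~1.4%)
frac_dv = frac_dT / 3.0            # implied fractional speed change (~0.5%)

print(f"fractional period change: {frac_dT:.3%}")
print(f"implied fractional speed change: {frac_dv:.3%}")
```

A slowing impulse lowers the orbit (smaller semi-major axis), which is why the period gets shorter even though the moonlet was struck to reduce its speed.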


“Planetary Defense is a globally unifying effort that affects everyone living on Earth,” said Thomas Zurbuchen, associate administrator for the Science Mission Directorate at NASA Headquarters in Washington. “Now we know we can aim a spacecraft with the precision needed to impact even a small body in space. Just a small change in its speed is all we need to make a significant difference in the path an asteroid travels.”

The spacecraft’s sole instrument, the Didymos Reconnaissance and Asteroid Camera for Optical navigation (DRACO), together with a sophisticated guidance, navigation, and control system working in tandem with Small-body Maneuvering Autonomous Real-Time Navigation (SMART Nav) algorithms, enabled DART to identify and distinguish between the two asteroids, targeting the smaller body.

These systems guided the 1,260-pound (570-kilogram) box-shaped spacecraft through the final 56,000 miles (90,000 kilometers) of space into Dimorphos, intentionally crashing into it at roughly 14,000 miles (22,530 kilometers) per hour to slightly slow the asteroid’s orbital speed. DRACO’s final images, obtained by the spacecraft seconds before impact, revealed the surface of Dimorphos in close-up detail.

Fifteen days before impact, DART’s CubeSat companion Light Italian CubeSat for Imaging of Asteroids (LICIACube), provided by the Italian Space Agency, deployed from the spacecraft to capture images of DART’s impact and the asteroid’s resulting cloud of ejected matter. In tandem with the images returned by DRACO, LICIACube’s images are intended to provide a view of the collision’s effects to help researchers better characterize the effectiveness of kinetic impact in deflecting an asteroid. Because LICIACube doesn’t carry a large antenna, images will be downlinked to Earth one by one in the coming weeks.

“DART’s success provides a significant addition to the essential toolbox we must have to protect Earth from a devastating impact by an asteroid,” said Lindley Johnson, NASA’s Planetary Defense Officer. “This demonstrates we are no longer powerless to prevent this type of natural disaster. Coupled with enhanced capabilities to accelerate finding the remaining hazardous asteroid population by our next Planetary Defense mission, the Near-Earth Object (NEO) Surveyor, a DART successor could provide what we need to save the day.”

With the asteroid pair within 7 million miles (11 million kilometers) of Earth, a global team is using dozens of telescopes stationed around the world and in space to observe the asteroid system. Over the coming weeks, they will characterize the ejecta produced and precisely measure Dimorphos’ orbital change to determine how effectively DART deflected the asteroid. The results will help validate and improve scientific supercomputer models critical to predicting the effectiveness of this technique as a reliable method for asteroid deflection.

“This first-of-its-kind mission required incredible preparation and precision, and the team exceeded expectations on all counts,” said APL Director Ralph Semmel. “Beyond the truly exciting success of the technology demonstration, capabilities based on DART could one day be used to change the course of an asteroid to protect our planet and preserve life on Earth as we know it.”

Roughly four years from now, the European Space Agency’s Hera project will conduct detailed surveys of both Dimorphos and Didymos, with a particular focus on the crater left by DART’s collision and precise measurement of Dimorphos’ mass.

Johns Hopkins APL manages the DART mission for NASA's Planetary Defense Coordination Office as a project of the agency's Planetary Missions Program Office.

Oxford prof Wooldridge proposes physical training is the next hurdle for AI

Let a million monkeys clack on a million typewriters for a million years and, the adage goes, they’ll reproduce the works of Shakespeare. Give infinite monkeys infinite time, and they still will not appreciate the bard’s poetic turn of phrase, even if they can type out the words. The same holds for artificial intelligence (AI), according to Michael Wooldridge, professor of computer science at the University of Oxford. The issue, he said, is not processing power but rather a lack of experience.

“Over the past 15 years, the speed of progress in AI in general, and machine learning (ML) in particular, has repeatedly taken seasoned AI commentators like myself by surprise: we have had to continually recalibrate our expectations as to what is going to be possible and when,” Wooldridge said. “For all that their achievements are to be lauded, I think there is one crucial respect in which most large ML models are greatly restricted: the world, and the fact that the models simply have no experience of it.”

Most ML models are built in virtual worlds, such as video games. They can train on massive datasets, but for physical applications, they are missing vital information. Wooldridge pointed to the AI underpinning autonomous vehicles as an example. 

“Letting driverless cars loose on the roads to learn for themselves is a non-starter, so for this and other reasons, researchers choose to build their models in virtual worlds,” Wooldridge said. “And in this way, we are getting excited about a generation of AI systems that simply have no ability to operate in the single most important environment of all: our world.”

Language AI models, on the other hand, are developed without a pretense of a world at all — but still suffer from the same limitations. They have evolved, so to speak, from laughably terrible predictive text to Google’s LaMDA, which made headlines earlier this year when a now-former Google engineer claimed the AI was sentient.

“Whatever the validity of [the engineer’s] conclusions, it was clear that he was deeply impressed by LaMDA’s ability to converse — and with good reason,” Wooldridge said, noting that he does not personally believe LaMDA is sentient, nor is AI near such a milestone. “These foundational models demonstrate unprecedented capabilities in natural language generation, producing extended pieces of natural-sounding text. They also seem to have acquired some competence in common-sense reasoning, one of the holy grails of AI research over the past 60 years.”

Such models are neural networks, feeding on enormous datasets and training to find structure in them. For example, GPT-3, a predecessor to LaMDA, trained on all of the English text available on the internet. The amount of training data, combined with significant supercomputing power, makes the models loosely akin to human brains: they move past narrow tasks and begin recognizing patterns and making connections seemingly unrelated to the primary task.

“The bet with foundation models is that their extensive and broad training leads to useful competencies across a range of areas, which can then be specialized for specific applications,” Wooldridge said. “While symbolic AI was predicated on the assumption that intelligence is primarily a problem of knowledge, foundation models are predicated on the assumption that intelligence is primarily a problem of data. To simplify, but not by much, throw enough training data at big models, and hopefully, competence will arise.”

This “might is right” approach scales the models larger to produce smarter AI, Wooldridge argued, but this ignores the physical know-how needed to truly advance AI. 

“To be fair, there are some signs that this is changing,” Wooldridge said, pointing to the Gato system. Announced in May by DeepMind, the foundation model, trained on large language sets and robotic data, could operate in a simple but physical environment. “It is wonderful to see the first baby steps taken into the physical world by foundation models. But they are just baby steps: the challenges to overcome in making AI work in our world are at least as large — and probably larger — than those faced by making AI work in simulated environments.”