Image sequence of projectiles being fired through three different steel plates. The pictures on the right show the exit holes in the various plates.

Supercomputer simulations at NTNU's SIMLab show how to create buildings that can withstand the most extreme stress loads

In an explosion, fragments and debris can be ejected at great speed and strike the surroundings. Then comes the shock wave. It's a scary combination.

Combined ballistic impacts pose a major challenge for engineers who build structures that must withstand extreme stresses. The combination of blast pressure and high-speed impact increases the likelihood of severe damage. Benjamin Stavnar Elveli, a Ph.D. candidate at the Norwegian University of Science and Technology (NTNU), describes it as the scariest stress there is.

“These combined impacts work in the same way as shrapnel bombs,” he says.

Infrastructure shift from massive and military to light and civilian

In the past, protective structures have involved massive concrete military buildings. In recent decades, new threats have emerged, and the need to protect civilian buildings and structures in urban areas has increased.

This has fuelled interest in lighter, thin-walled solutions that can withstand large deformations without cracking and collapsing.

The regulations have not followed this same development. No standards address this type of load yet, and research in the field is very limited.

Elveli has investigated how different types of thin steel plates behave when exposed to such extreme stress loads. His work can help to establish guidelines for how resistant, lightweight structures should be designed.

Initial projectiles do the most damage

Whether they occur by accident or on purpose, explosions can cause massive damage. Debris and fragments from buildings, cars, gravel, or stones can be torn loose. When they hit, they can act like projectiles.

Elveli says that any buildings, cars, or other objects in the vicinity would be exposed to a load that is more serious than if either stress load occurred alone. The damage is believed to be greatest when fragments hit first.

“That’s because the structure already has a defect or weakness from the projectile and then has to withstand the shock wave itself,” he says. “Most often, cracking and destruction start in the weak spots.”

Safer structures, safer society

Elveli's Ph.D. is based on more than 80 small-scale explosion tests on three different types of steel. By combining physical experiments with theory and mathematical modeling, he has recreated explosive loads in supercomputer simulations. The aim is to gain as much control as possible over how structures react to such loads.

The more scientists understand the actual physics of these loads, the more accurate, safe, and sustainable solutions the construction engineers of the future can deliver.

The danger of overestimating the strength

A shock wave can last for several milliseconds and cause great destruction over a large area. A fragment moves even faster and produces concentrated damage. Simulating this combined effect means that you have to describe two completely different phenomena in one and the same model. It's complicated.

“Often you’ll end up with some sort of trade-off. To capture the local weaknesses that arise during the explosion, we need to determine how accurately the impact of the fragments should be described. If we don’t achieve full control of this, the model could overestimate the building’s ability to withstand the stress,” says Elveli.
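
To make the trade-off concrete, the sketch below reduces the combined load case to a single degree of freedom: the fragment hit is modeled as an impulse plus a stiffness knockdown for the local defect, and the blast as a triangular pressure pulse. This is an invented toy model, not SIMLab's simulation, and every parameter value is illustrative.

```python
# Deliberately simplified single-degree-of-freedom sketch of a combined
# load case (illustrative only): a fragment impact is modeled as an
# impulsive initial velocity plus a stiffness knockdown for the local
# defect; the blast that follows is a triangular force pulse.

mass = 2.0          # effective mass of the plate segment [kg], invented
k_intact = 4.0e6    # effective elastic stiffness [N/m], invented
damage = 0.3        # fractional stiffness loss caused by the fragment hit
k = (1.0 - damage) * k_intact

p_peak = 5.0e4      # peak blast force on the segment [N], invented
t_pulse = 2.0e-3    # blast duration [s]: "several milliseconds"

def blast_force(t: float) -> float:
    """Triangular pulse: peaks at t = 0, decays linearly to zero at t_pulse."""
    return p_peak * max(0.0, 1.0 - t / t_pulse)

# Fragments hit first: an impulse gives the weakened plate an initial velocity.
impulse = 10.0            # [N*s], invented
v = impulse / mass        # initial velocity from momentum transfer
x, x_max = 0.0, 0.0       # deflection and its running maximum

dt = 1.0e-6
for step in range(int(5.0e-3 / dt)):          # integrate 5 ms of response
    t = step * dt
    a = (blast_force(t) - k * x) / mass       # Newton's second law
    v += a * dt                               # explicit Euler update
    x += v * dt
    x_max = max(x_max, abs(x))

# Rerunning with damage = 0.0 predicts a smaller peak deflection: ignoring
# the fragment damage overestimates the plate, the risk described above.
print(f"peak deflection of damaged plate: {x_max:.4e} m")
```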

Need solutions that can be trusted

Overestimating strength can have fatal consequences. The solutions that construction engineers deliver have to be dependable. A large part of Elveli’s doctoral work has been to investigate how accurate the models need to be to ensure reliable buildings.

A common approach has been to assume that the fragments hit before the shock wave arrives. The physical experiments then have to be divided into two separate sequences that follow each other. Often such studies use a simplified approach, where the test pieces have holes milled out by a machine to mimic damage from real fragments.

Overestimating resilience

Elveli has compared the behavior of machined plates with plates hit with real projectiles. Real projectiles created small petal-like cracks and deformation around the points of impact, whereas the pre-formed defects had perfectly even edges.

The explosion tests showed that the destruction started in the cracks and spread outwards. The researcher thus shows that the simplified approach has weaknesses.

“Idealized defects, like in the machined plates, are easier to test and simulate. But because they lack the deformations and damage that occur in real explosions, there’s a risk of exaggerating the strength of the materials in these models,” he says.

Great need for supercomputer simulations

The need for accurate supercomputer simulations is easy enough to understand. Researchers who work with strength calculations cannot blow up actual buildings to test their resilience.

Elveli has put a lot of work into designing controlled and reliable small-scale explosion tests. He believes that his research will be useful for other researchers in the military and civilian arenas. For industrial use, precise and reliable simulations are currently expensive and time-consuming.

The many tests have produced large amounts of data that may interest the research and development departments of large companies. Elveli’s work makes it possible to simulate how structures behave when they are bent, stretched, or otherwise deformed.

In total, he has carried out 110 tests, of which 82 were explosion experiments. High-speed cameras filming at 37,000 frames per second have captured the details as the steel plates are damaged. Elveli obtained his doctorate at NTNU’s SIMLab/Department of Structural Engineering.

UZH prof Schwank develops AI that improves the efficiency of genome editing

Researchers at the University of Zurich have developed a new tool that uses artificial intelligence to predict the efficacy of various genome-editing repair options. This can reduce unintended errors when correcting the DNA mutations that cause genetic diseases.

Genome editing technologies offer great opportunities for treating genetic diseases. Methods such as the widely used CRISPR/Cas9 gene scissors directly address the cause of the disease in the DNA. The scissors are used in the laboratory to make targeted modifications to the genetic material in cell lines and model organisms and to study biological processes.

Prime editing is a further development of the classic CRISPR/Cas9 method. Unlike conventional gene scissors, which create a break in both strands of the DNA molecule, prime editing cuts and repairs DNA on a single strand only. The prime editing guide RNA (pegRNA) precisely targets the relevant site in the genome and provides the new genetic information, which is then transcribed by a “translation enzyme” and incorporated into the DNA.

Finding the most efficient DNA repair options
Prime editing promises to be an effective method of repairing disease-causing mutations in patients’ genomes. However, when it comes to applying it successfully, it is important to minimize unintended side effects such as errors in DNA correction or alteration of DNA elsewhere in the genome. According to initial studies, prime editing leads to a significantly lower number of unintended changes than conventional CRISPR/Cas9 approaches.

However, researchers currently still have to spend a significant amount of time optimizing the pegRNA for a specific target in the genome. “There are over 200 repair possibilities per mutation. In theory, we would have to test every single design option each time to find the most efficient and accurate pegRNA,” says Gerald Schwank, professor at the Institute of Pharmacology and Toxicology at the University of Zurich (UZH).

Analyzing a large data set with AI
Schwank and his research group needed to find an easier solution. Together with Michael Krauthammer, UZH professor at the Department of Quantitative Biomedicine, and his team, they developed a method that can predict the efficiency of pegRNAs. By testing over 100,000 different pegRNAs in human cells, they were able to generate a comprehensive prime editing data set. This enabled them to determine which properties of a pegRNA – such as the length of the DNA sequence, the sequence of DNA building blocks or the shape of the DNA molecule – positively or negatively influence the prime editing process.

Subsequently, the team developed an AI-based algorithm to recognize patterns in the pegRNA of relevance for efficiency. Based on these patterns, the trained tool can predict both the effectiveness and accuracy of genome editing with a particular pegRNA. “In other words, the algorithm can determine the most efficient pegRNA for correcting a particular mutation,” says Michael Krauthammer. The tool has already been successfully tested in human and mouse cells and is freely available to researchers.
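
As a rough illustration of the featurize-train-rank workflow such a predictor implies (a toy sketch, not the UZH tool: the features, sequences, and efficiency values below are invented placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in for a pegRNA efficiency predictor. The real tool is trained
# on >100,000 measured pegRNAs; here both the sequences and the measured
# "efficiencies" are random placeholders, purely to show the shape of the
# featurize -> train -> rank workflow.

BASES = "ACGT"

def featurize(seq: str) -> list[float]:
    """Hand-crafted features: length, GC content, and per-base fractions."""
    n = len(seq)
    gc = (seq.count("G") + seq.count("C")) / n
    return [float(n), gc] + [seq.count(b) / n for b in BASES]

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(BASES), size=rng.integers(20, 60)))
        for _ in range(1000)]
X = np.array([featurize(s) for s in seqs])
y = rng.random(len(seqs))  # placeholder editing efficiencies in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Rank candidate designs for one target by predicted efficiency, instead of
# synthesizing and testing each of the ~200 repair options in the lab.
candidates = seqs[:200]
scores = model.predict(np.array([featurize(s) for s in candidates]))
print("best toy candidate:", candidates[int(np.argmax(scores))][:20], "...")
```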

Long-term goal: repairing hereditary diseases
Further pre-clinical studies are still needed before the new prime editing tool can be used in humans. However, the researchers are confident that in the foreseeable future, it will be possible to use prime editing to repair the DNA mutations of common inherited diseases such as sickle cell anemia, cystic fibrosis, or metabolic diseases.

The tool can be accessed by researchers at https://pridict.it. The study was supported by the University of Zurich Research Priority Program Human Reproduction Reloaded and the Swiss National Science Foundation.

A visualization of a cascaded-mode resonator, where a supermode resonance is created by reflecting the light back in a different mode at each reflection. (Photo credit: Capasso Lab, Harvard SEAS)

Capasso lab creates supermode optical resonator at Harvard SEAS

What does it take for scientists to push beyond the current limits of knowledge? Researchers in Federico Capasso’s group at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed an effective formula.

“Dream big, question everything we know, question the textbooks,” says Vincent Ginis, a visiting professor at SEAS and first author of a new paper reporting a breakthrough in optical resonator technology. “That’s how Federico asks our lab team to work together. He challenges us to rethink all the classical rules to see if we can make devices do things better and in novel ways.”

That approach led to the team’s latest result, an optical resonator capable of manipulating light in never-before-observed ways. The breakthrough could influence how resonators are understood and open doors for new capabilities.

“This is an advance that alters fundamentally the design of resonators by using reflectors that convert light from one pattern to another as it bounces back and forth,” says Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS.

Optical resonators play a key role in many aspects of modern life.

“Resonators are central components in most applications of optics, lasers, microscopy, sensing—they appear in all of these technologies as essential building blocks,” says Ginis, who is also an assistant professor of mathematics and physics at the Vrije Universiteit Brussel. “They consist of two reflectors that bounce light back and forth, concentrating light in lasers for example, or filtering out frequencies of light such as in fiber optics and telecommunications.”

Optical resonators are key to telecommunications transmissions, encoding images and audio through frequencies of light.

“Each message, to keep separate from the others, is encoded on its specific frequency,” Ginis says. “Resonators allow us to ‘tape off’ exact, unique frequencies to allow many different messages to be transmitted simultaneously.”

Until now, resonators and the two reflective mirrors inside them controlled the intensity and frequency of light, but not the mode of light, which determines the shape and manner in which photons flow through space and time. We often think of light as moving in a beam like a straight line, but beams of light are also capable of traveling in other modes, like spirals. The new optical resonator developed by Capasso’s team is the first such device that gives scientists precise control over the mode of light, and even more importantly, enables multi-mode coupled light to exist within the resonator.
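
For intuition, the contrast can be written in a simplified form, using standard waveguide notation rather than equations quoted from the paper; the phase of the mode conversion at the mirrors is deliberately neglected here.

```latex
% Hedged sketch in standard waveguide notation (not taken from the paper).
% Conventional Fabry-Perot cavity of length L: a single mode with
% propagation constant \beta must pick up an integer number of 2\pi
% per round trip for a resonance to build up.
\[
  2\,\beta(\omega)\,L = 2\pi m, \qquad m \in \mathbb{Z}
\]
% Cascaded-mode cavity: each mirror converts mode 1 into mode 2 and back,
% so one round trip propagates once in each mode. Neglecting the phase of
% the mode conversion itself, a supermode resonance then depends on the
% sum of the two propagation constants rather than on either mode alone.
\[
  \bigl(\beta_1(\omega) + \beta_2(\omega)\bigr)\,L = 2\pi m
\]
```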

The team achieved this by etching a new type of pattern on the surface of the reflectors at each end of the resonator device.

“We realized that we could test our novel resonator concept in an integrated photonics platform, and chose silicon-on-insulator, which is used by many scientists and companies for applications such as sensing or communications,” says Cristina Benea-Chelmus, a research associate in the Capasso group and assistant professor of microengineering at the EPFL Institute of Electro and Microengineering, who spearheaded the experimental part of the work.

The etchings, about 300-600 nanometers in size, gave the team control over the shape of light beams inside the resonator. Using reflectors with different patterns on either end of the resonator unlocked their ability to change the shape of light as it moves.

“We can make these light modes play with each other, turning one mode into another, and then back into the first mode, creating loops of different light modes moving through the same space,” Ginis says. “When we saw this, we realized we were in ‘terra incognita’ here.”

Combining more than one mode of light creates what the researchers called a “supermode.”

“In traditional resonators, as light moves back and forth, the mode is always the same—the properties of light are always symmetric,” he says. “In ours, as the light goes from left to right or right to left, the modes are different. We’ve figured out how to break symmetry inside a resonator.”

“Having multimode control of light will have a huge impact on the bandwidth of information that can be transmitted using light,” he says. “It opens up many channels of transmission that we haven’t been able to access simultaneously until now.”

The Capasso team’s optical resonator provides a new tool to conduct fundamental physics experiments, including optomechanics, using light to make things move.

“By placing an object inside a resonator, you can manipulate materials like tiny atoms, molecules, and strands of DNA,” Ginis says. The new device, with its supermode capabilities, could unlock new degrees of freedom for researchers to manipulate minuscule materials with different shapes of light beams.

“By questioning the foundations of textbook resonator theory, we have discovered completely new and counterintuitive properties of light not found in traditional resonators,” Capasso says. These properties, including “mode-independent resonances and directionally dependent propagation,” unlock unforeseen opportunities for photonics, acoustics, and beyond, he adds.

Harvard’s Office of Technology Development has protected the intellectual property arising from the Capasso Lab’s optical resonator innovations and is exploring commercialization opportunities.

Additional authors include postdoctoral fellow Jinsheng Lu and research associate Marco Piccardo.

This work was supported by the Air Force Office of Scientific Research (grants FA550-19-1-0352 and FA95550-19-1-0135), the Research Foundation Flanders, and the Hans Eggenberger Foundation. This work was performed in part at the Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Coordinated Infrastructure Network (NNCI), which is supported by the National Science Foundation under NSF Award no. 1541959.

Courtesy of NASA/JPL/SSI/SwRI

SwRI investigations reveal more evidence that Mimas is a stealth ocean world

When a Southwest Research Institute scientist discovered surprising evidence that Saturn’s smallest, innermost moon could generate the right amount of heat to support a liquid internal ocean, colleagues began studying Mimas’ surface to understand how its interior may have evolved. Numerical simulations of the moon’s Herschel impact basin, the most striking feature on its heavily cratered surface, determined that the basin’s structure and the lack of tectonics on Mimas are compatible with a thinning ice shell and geologically young ocean.

“In the waning days of NASA’s Cassini mission to Saturn, the spacecraft identified a curious libration, or oscillation, in Mimas’ rotation, which often points to a geologically active body able to support an internal ocean,” said SwRI’s Dr. Alyssa Rhoden, a specialist in the geophysics of icy satellites, particularly those containing oceans, and the evolution of giant planet satellite systems. She is the second author of a new Geophysical Research Letters paper on the subject. “Mimas seemed like an unlikely candidate, with its icy, heavily cratered surface marked by one giant impact crater that makes the small moon look much like the Death Star from Star Wars. If Mimas has an ocean, it represents a new class of small, ‘stealth’ ocean worlds with surfaces that do not betray the ocean’s existence.”

Rhoden worked with Purdue graduate student Adeene Denton to better understand how a heavily cratered moon like Mimas could possess an internal ocean. Denton modeled the formation of the Herschel impact basin using iSALE-2D simulation software. The models showed that Mimas’ ice shell had to be at least 34 miles (55 km) thick at the time of the Herschel-forming impact. In contrast, observations of Mimas and models of its internal heating limit the present-day ice shell thickness to less than 19 miles (30 km), if it currently harbors an ocean. These results imply that a present-day ocean within Mimas must have been warming and expanding since the basin formed. It is also possible that Mimas was entirely frozen both at the time of the Herschel impact and at present. However, Denton found that including an interior ocean in the impact models helped produce the shape of the basin.

“We found that Herschel could not have formed in an ice shell at the present-day thickness without obliterating the ice shell at the impact site,” said Denton, who is now a postdoctoral researcher at the University of Arizona. “If Mimas has an ocean today, the ice shell has been thinning since the formation of Herschel, which could also explain the lack of fractures on Mimas. If Mimas is an emerging ocean world, that places important constraints on the formation, evolution, and habitability of all of the mid-sized moons of Saturn.”

“Although our results support a present-day ocean within Mimas, it is challenging to reconcile the moon’s orbital and geologic characteristics with our current understanding of its thermal-orbital evolution,” Rhoden said. “Evaluating Mimas’ status as an ocean moon would benchmark models of its formation and evolution. This would help us better understand Saturn’s rings and mid-sized moons as well as the prevalence of potentially habitable ocean moons, particularly at Uranus. Mimas is a compelling target for continued investigation.”

A process of continual learning for a synthetic multi-label dataset. The figure shows how new information is learned each time a data distribution is input, while retaining information learned in the past.

Osaka Metro prof Masuyama proposes new data learning methods for AI

Advances in information technology have made it possible for us to easily and continually obtain large amounts of diverse data. Artificial intelligence technology is gaining attention as a tool to put this big data to use.

Conventional machine learning mainly deals with single-label classification problems, in which data and corresponding phenomena or objects (label information) are in a one-to-one relationship. However, in the real world, data and label information rarely have a one-to-one relationship. In recent years, therefore, attention has focused on the multi-label classification problem, which deals with data that has a one-to-many relationship between data and label information. For example, a single landscape photo may include multiple labels for elements such as sky, mountains, and clouds. In addition, to learn efficiently from big data that arrives continually, the ability to learn over time without destroying what was learned previously is also required.
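
To make the one-to-many relationship concrete, multi-label data is commonly encoded as a binary indicator vector over the label set, as in this minimal sketch (the label names simply extend the landscape example above):

```python
import numpy as np

# Multi-label data: each sample carries a 0/1 indicator vector over the
# whole label set, instead of the single class index used in single-label
# classification. The label set below is illustrative.

LABELS = ["sky", "mountains", "clouds", "sea", "forest"]

def encode(labels_present: set[str]) -> np.ndarray:
    """Encode a one-to-many label assignment as a 0/1 indicator vector."""
    return np.array([1 if label in labels_present else 0 for label in LABELS])

# One landscape photo, several labels at once:
print(encode({"sky", "mountains", "clouds"}))   # -> [1 1 1 0 0]
```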

A research group led by Associate Professor Naoki Masuyama and Professor Yusuke Nojima of the Osaka Metropolitan University Graduate School of Informatics has developed a new method that combines high classification performance on multi-label data with the ability to learn continually from new data. Numerical experiments on real-world multi-label datasets showed that the proposed method outperforms conventional methods.

The simplicity of this new algorithm makes it easy to devise an evolved version that can be integrated with other algorithms. Since the underlying clustering method groups data based on the similarity between data entries, it is expected to be a useful tool for continual big data preprocessing. In addition, the label information assigned to each cluster is learned continually, using a method based on the Bayesian approach. By learning the data and learning the label information corresponding to the data separately and continually, both high classification performance and continual learning capability are achieved.
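
A rough sketch of that separation of concerns follows; it is not the authors' algorithm. The nearest-centroid rule, distance threshold, and Beta-Bernoulli label counts are illustrative stand-ins for the paper's similarity-based clustering and Bayesian label model.

```python
import numpy as np

# Illustrative continual learner: data are grouped incrementally by
# similarity, and each cluster keeps Bayesian (Beta-Bernoulli) counts per
# label. New clusters are added as new data arrive, so earlier clusters
# (and what was learned from them) are never overwritten.

THRESHOLD = 1.0   # max distance before a new cluster is spawned (invented)
N_LABELS = 5

centroids: list[np.ndarray] = []
counts: list[np.ndarray] = []     # per cluster: [occurrences, trials] per label
sizes: list[int] = []

def learn(x: np.ndarray, labels: np.ndarray) -> None:
    """One continual-learning step: place x in a cluster, update its labels."""
    if centroids:
        d = [np.linalg.norm(x - c) for c in centroids]
        j = int(np.argmin(d))
    if not centroids or d[j] > THRESHOLD:         # unfamiliar data: new cluster
        centroids.append(x.astype(float).copy())
        counts.append(np.zeros((N_LABELS, 2)))
        sizes.append(0)
        j = len(centroids) - 1
    sizes[j] += 1
    centroids[j] += (x - centroids[j]) / sizes[j]  # running-mean centroid
    counts[j][:, 0] += labels                      # label occurrences
    counts[j][:, 1] += 1                           # observations

def predict(x: np.ndarray) -> np.ndarray:
    """Posterior mean label probabilities of the nearest cluster, Beta(1,1) prior."""
    j = int(np.argmin([np.linalg.norm(x - c) for c in centroids]))
    occ, n = counts[j][:, 0], counts[j][:, 1]
    return (occ + 1.0) / (n + 2.0)

rng = np.random.default_rng(0)
for _ in range(200):                  # a stream of (data, multi-label) pairs
    x = rng.normal(size=3)
    y = (rng.random(N_LABELS) < 0.4).astype(float)
    learn(x, y)
print(predict(rng.normal(size=3)))    # per-label probabilities for new data
```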

“We believe that our method is capable of continual learning from multi-label data and has the capabilities required for artificial intelligence in a future big data society,” Associate Professor Masuyama concluded.