Danes minimize phase noise using ML to improve optical systems

Ultra-precise lasers can be used for optical atomic clocks, quantum computers, power cable monitoring, and much more. But all lasers make noise, which researchers from DTU Fotonik want to minimize using machine learning.

The perfect laser does not exist. There will always be a bit of phase noise because the laser light frequency moves back and forth a little. Phase noise prevents the laser from producing light waves with the perfect steadiness that is otherwise a characteristic feature of the laser.

Most of the lasers we use daily do not need to be completely precise. For example, it is of no importance whether the frequency of the red laser light in the supermarket barcode scanners varies slightly when reading the barcodes. But for certain applications—for example in optical atomic clocks and optical measuring instruments—the laser must be stable so that the light frequency does not vary.

One way of getting closer to an ultra-precise laser is to determine the phase noise. This may enable you to find a way of compensating for it, so that the result is a purer and more accurate laser beam.

This is precisely what Professor Darko Zibar from DTU Fotonik is working on. He heads a research group called Machine Learning in Photonic Systems, where the goal is to develop and utilize machine learning to improve optical systems. Most recently, researchers from the group have characterized the noise from a laser system from the Danish company NKT Photonics with unprecedented precision.

“The question is how to measure that noise, and here we’ve developed the most accurate method available. We can measure much more precisely than others—our method has record-high sensitivity,” says Darko Zibar.

He has developed an algorithm that can analyze and find laser light patterns using machine learning, where a model for the noise is constantly being improved. On this basis, the group of researchers hopes to be able to develop a form of intelligent filter that continuously cleans the laser beam of noise.
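
The article does not spell out Zibar's algorithm, but the general idea of tracking a drifting laser phase with a recursive filter that keeps refining its noise estimate can be sketched in a few lines of Python. Everything below, from the random-walk noise model to the variance values, is an illustrative assumption rather than the DTU method.

```python
# Hedged sketch: a one-dimensional Kalman filter tracking a laser phase that
# drifts as a random walk. All parameter values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
q = 1e-4   # assumed variance of the random-walk phase drift per step
r = 1e-2   # assumed variance of the measurement noise

# Simulate a drifting phase and noisy measurements of it.
true_phase = np.cumsum(rng.normal(0.0, np.sqrt(q), n))
measured = true_phase + rng.normal(0.0, np.sqrt(r), n)

# Recursive estimate: the model of the noise is refined with every new sample.
est, p = 0.0, 1.0
estimates = np.empty(n)
for k in range(n):
    p += q                             # predict: uncertainty grows with the drift
    gain = p / (p + r)                 # how much to trust the new measurement
    est += gain * (measured[k] - est)  # update the phase estimate
    p *= 1.0 - gain
    estimates[k] = est

print("raw measurement error:", np.std(measured - true_phase))
print("filtered phase error: ", np.std(estimates - true_phase))
```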

Quantum mechanics sets the limit

This is something that NKT Photonics can utilize in their optical measuring instruments, say Senior Researcher Poul Varming and his colleague Jens E. Pedersen, who have worked with the DTU researchers:

“We work with fiber lasers that emit constant light, and where the noise level is particularly low. Our most important task is to limit the noise, and—in terms of measuring technology—we had difficulty measuring noise at very high frequencies,” says Poul Varming and continues:

“But then we got in touch with Darko Zibar and his group, and we produced some lasers for them. The researchers were able to measure the noise up to very high frequencies, and the results actually contradict the established understanding of laser noise.”

With the new, improved measuring method, the researchers could thus show that the theoretical basis for calculating the noise was not quite in place. With the more detailed knowledge of the noise, engineers can better identify the parts of the laser system from which the noise emanates so that they know where to make improvements. The hope is that the machine learning system can also be used to attenuate the noise in real-time.

You cannot eliminate noise, because the laws of quantum mechanics set a very fundamental limit to how good a laser can be. Quantum noise is impossible to get rid of, but now it can at least be measured, says Darko Zibar:

“We can measure in the frequencies in which quantum noise is dominant. In this way, we can determine the fundamental noise and find out how much it contributes to the total noise. Once we know the fundamental limit for how good the laser can be, we can then figure out how to suppress the rest of the noise.”

“This is our next project—how we first identify and then suppress the noise, to obtain a laser that is only limited by quantum noise. This will enable us to produce some of the best lasers in the world.”

Optical cable feels vibrations

When the laser noise is known, it can be combated according to roughly the same principle used in noise-reducing headphones. Here, microphones pick up sound from the surroundings, and a signal is then sent in counter phase to the speakers so that the noise and the new signal eliminate each other, and the result is silence.
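
A minimal numerical sketch, not taken from the article, makes the counter-phase principle concrete: adding a signal to its inverted copy leaves essentially nothing behind.

```python
# Illustration of counter-phase cancellation: noise plus its inverted copy
# sums to (numerically) zero. The waveform is an arbitrary example.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
noise = 0.3 * np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 120 * t)
anti_noise = -noise              # signal sent in counter phase
residual = noise + anti_noise    # what is left after cancellation

print("peak noise:   ", np.max(np.abs(noise)))
print("peak residual:", np.max(np.abs(residual)))   # effectively zero
```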

If the technique can be used to improve lasers by eliminating a large part of the noise so that the light frequency hardly varies at all, optical measuring instruments can gain greater sensitivity and a longer range. At NKT Photonics, the technology can initially be used for distributed acoustic sensing, where a fiber optic cable is used as a sensor for measuring tiny vibrations. Distributed acoustic sensing can be used for various forms of monitoring. For example, an optical fiber can be laid along an oil or gas pipeline to ensure ultrafast detection of any ruptures. The technology can also be used to monitor the fence around an airport or along a border: if a hole is cut in the fence or someone tries to climb over it, the system can not only signal what has happened but also pinpoint where it occurred.

Such an optical monitoring system works by sending a laser beam into the optical fiber. Along the way, a small fraction of the light is reflected by tiny impurities in the fiber. If the fiber is disturbed at some point, the properties of the reflected light change, and that change can be measured. Even very faint vibrations can be picked up and located with great accuracy.
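
Assuming a typical speed of light in glass fibre of roughly 2×10^8 m/s, the location of a disturbance follows directly from the round-trip delay of the backscattered light. The helper function and numbers below are purely illustrative, not part of the systems described in the article.

```python
# Back-of-the-envelope sketch of how a distributed acoustic sensing system can
# pinpoint a disturbance: the round-trip time of the backscattered light gives
# the distance along the fibre. Values are assumptions for illustration.
V_FIBRE = 2.0e8  # approximate speed of light in glass fibre, m/s

def disturbance_position(round_trip_time_s: float) -> float:
    """Distance along the fibre (in metres) at which the reflection occurred."""
    return V_FIBRE * round_trip_time_s / 2.0

# A reflection arriving 0.25 ms after the pulse was launched corresponds to a
# point roughly 25 km down the cable.
print(disturbance_position(0.25e-3) / 1000, "km")
```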

Monitoring of cables to the energy islands

If the new technology from DTU provides more effective attenuation of laser noise, distributed acoustic sensing can be used over somewhat longer distances than today. Both the sensitivity and the range of distributed acoustic sensing can be increased with the more precise lasers, and this may, for example, be needed when electricity is to be transported from the coming energy islands in the North Sea to the mainland. Here, the power cables can be monitored using the technology, so that any ruptures can be detected and repaired quickly. A challenge today is that the range of current systems is limited to a maximum of 50 km, while the distance to the energy island will be somewhat longer.

Poul Varming also mentions that several quantum technologies require extremely precise lasers. With noise-attenuated lasers, it becomes easier to develop ultra-precise optical atomic clocks and certain types of quantum computers, where lasers are used to cool individual atoms to close to absolute zero. The new generation of laser systems that may be the result of the researchers’ and engineers’ work thus offers great potential.

German researchers investigate the isotopic composition of rocky planets in the inner Solar System

Earth and Mars were formed from material that largely originated in the inner Solar System; only a few percent of the building blocks of these two planets originated beyond Jupiter's orbit. A group of researchers led by the University of Münster (Germany) report these findings today in the journal “Science Advances”. They present the most comprehensive comparison to date of the isotopic composition of Earth, Mars, and pristine building material from the inner and outer Solar System. Some of this material is still found today, largely unaltered, in meteorites. The results of the study have far-reaching consequences for our understanding of the process that formed the planets Mercury, Venus, Earth, and Mars. The theory postulating that the four rocky planets grew to their present size by accumulating millimeter-sized dust pebbles from the outer Solar System is not tenable.

The Martian meteorite Elephant Moraine (EETA) 79001; the scientists examined this and other Martian meteorites in the study. © NASA/JSC

Approximately 4.6 billion years ago, in the early days of our Solar System, a disk of dust and gases orbited the young Sun. Two theories describe how, over millions of years, the inner rocky planets formed from this original building material. According to the older theory, the dust in the inner Solar System agglomerated into ever-larger chunks, gradually reaching approximately the size of our Moon. Collisions of these planetary embryos finally produced the inner planets Mercury, Venus, Earth, and Mars. A newer theory, however, prefers a different growth process: millimeter-sized dust “pebbles” migrated from the outer Solar System towards the Sun. On their way, they were accreted onto the planetary embryos of the inner Solar System, and step by step enlarged them to their present size.

Both theories are based on theoretical models and supercomputer simulations aimed at reconstructing the conditions and dynamics in the early Solar System; both describe a possible path of planet formation. But which one is right? Which process took place? To answer these questions, in their current study researchers from the University of Münster (Germany), the Observatoire de la Cote d’Azur (France), the California Institute of Technology (USA), the Natural History Museum Berlin (Germany), and the Free University of Berlin (Germany) determined the exact composition of the rocky planets Earth and Mars. “We wanted to find out whether the building blocks of Earth and Mars originated in the outer or inner Solar System”, says Dr. Christoph Burkhardt of the University of Münster, the study’s first author. To this end, the isotopes of the rare metals titanium, zirconium, and molybdenum found in minute traces in the outer, silicate-rich layers of both planets provide crucial clues. Isotopes are different varieties of the same element, which differ only in the weight of their atomic nucleus.

Meteorites as a reference

Scientists assume that in the early Solar System these and other metal isotopes were not evenly distributed. Rather, their abundance depended on the distance from the Sun. They, therefore, hold valuable information about where in the early Solar System a certain body’s building blocks originated.

As a reference for the original isotopic inventory of the outer and inner Solar System, the researchers used two types of meteorites. These chunks of rock generally found their way to Earth from the asteroid belt, the region between the orbits of Mars and Jupiter. They are considered to be largely pristine material from the beginnings of the Solar System. While so-called carbonaceous chondrites, which can contain up to a few percent carbon, originated beyond Jupiter's orbit and only later relocated to the asteroid belt due to the influence of the growing gas giants, their more carbon-depleted cousins, the non-carbonaceous chondrites, are true children of the inner Solar System.

The precise isotopic composition of Earth's accessible outer rock layers and that of both types of meteorites have been studied for some time; however, there have been no comparably comprehensive analyses of Martian rocks. In their current study, the researchers now examined samples from a total of 17 Martian meteorites, which can be assigned to six typical types of Martian rock. In addition, the scientists for the first time investigated the abundances of three different metal isotopes.

The samples of Martian meteorites were first powdered and subjected to complex chemical pretreatment. Using a multi-collector plasma mass spectrometer at the Institute of Planetology at the University of Münster, the researchers were then able to detect tiny amounts of titanium, zirconium, and molybdenum isotopes. They then performed supercomputer simulations to calculate the ratio in which building material found today in carbonaceous and non-carbonaceous chondrites must have been incorporated into Earth and Mars to reproduce their measured compositions. In doing so, they considered two different phases of accretion to account for the different history of the titanium and zirconium isotopes as well as of the molybdenum isotopes, respectively. Unlike titanium and zirconium, molybdenum accumulates mainly in the metallic planetary core. The tiny amounts still found today in the silicate-rich outer layers can therefore only have been added during the very last phase of the planet’s growth.
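
The press release gives no numbers for this mixing calculation, but its basic logic can be sketched as a two-endmember least-squares fit: find the fraction of carbonaceous material that, mixed with non-carbonaceous material, best reproduces a planet's measured isotope anomalies. The values below are invented placeholders, not data from the study.

```python
# Hedged sketch of a two-endmember isotope mixing calculation. The anomaly
# values (e.g. in epsilon units) are made up; only the fitting logic matters.
import numpy as np

nc = np.array([-0.3, 0.5, 1.0])        # non-carbonaceous (inner Solar System) endmember
cc = np.array([3.0, 2.0, 1.5])         # carbonaceous (outer Solar System) endmember
planet = np.array([-0.2, 0.55, 1.02])  # hypothetical measured planetary composition

# Solve planet ≈ (1 - f) * nc + f * cc for the carbonaceous fraction f,
# by least squares over all measured isotope systems at once.
A = (cc - nc).reshape(-1, 1)
b = planet - nc
f, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"best-fit carbonaceous fraction: {f[0]:.3f}")
```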

The researchers' results show that the outer rock layers of Earth and Mars have little in common with the carbonaceous chondrites of the outer Solar System. They account for only about four percent of both planets’ original building blocks. "If early Earth and Mars had mainly accreted dust grains from the outer Solar System, this value should be almost ten times higher," says Prof. Dr. Thorsten Kleine of the University of Münster, who is also a director at the Max Planck Institute for Solar System Research in Göttingen. "We thus cannot confirm this theory of the formation of the inner planets," he adds.

Lost building material

But the composition of Earth and Mars does not exactly match the material of the non-carbonaceous chondrites either. The supercomputer simulations suggest that another, different kind of building material must also have been in play. "The isotopic composition of this third type of building material as inferred by our supercomputer simulations implies it must have originated in the innermost region of the Solar System”, explains Christoph Burkhardt. Since bodies from such proximity to the Sun were rarely scattered into the asteroid belt, this material was almost completely absorbed into the inner planets and thus does not occur in meteorites. "It is, so to speak, 'lost building material' to which we no longer have direct access today," says Thorsten Kleine.

The surprising find does not change the consequences of the study for a theory of planet formation. "The fact that Earth and Mars apparently contain mainly material from the inner Solar System fits well with planet formation from the collisions of large bodies in the inner Solar System," concludes Christoph Burkhardt.

Natural History Museum combines supercomputer modeling, paleontology to study Earth's first giant

The two-meter skull of a newly discovered species of giant ichthyosaur, the earliest known, is shedding new light on the marine reptiles’ rapid growth into behemoths of the Dinosaurian oceans, and helping us better understand the journey of modern cetaceans (whales and dolphins) to becoming the largest animals to ever inhabit the Earth.

A life recreation of "C. youngorum" stalking the Nevadan oceans of the Late Triassic 246 million years ago. CREDIT: Illustration by Stephanie Abramowicz, courtesy of the Natural History Museum of Los Angeles County (NHM).

While dinosaurs ruled the land, ichthyosaurs and other aquatic reptiles (that were emphatically not dinosaurs) ruled the waves, reaching similarly gargantuan sizes and species diversity. Evolving fins and hydrodynamic body-shapes seen in both fish and whales, ichthyosaurs swam the ancient oceans for nearly the entirety of the Age of Dinosaurs. 

“Ichthyosaurs derive from an as yet unknown group of land-living reptiles and were air-breathing themselves,” says lead author Dr. Martin Sander, a paleontologist at the University of Bonn and Research Associate with the Dinosaur Institute at the Natural History Museum of Los Angeles County (NHM). “From the first skeleton discoveries in southern England and Germany over 250 years ago, these ‘fish-saurians’ were among the first large fossil reptiles known to science, long before the dinosaurs, and they have captured the popular imagination ever since.”

Excavated from a rock unit called the Fossil Hill Member in the Augusta Mountains of Nevada, the well-preserved skull, along with part of the backbone, shoulder, and fore fin, dates back to the Middle Triassic (247.2-237 million years ago), representing the earliest case of an ichthyosaur reaching epic proportions. As big as a large sperm whale at more than 17 meters (55.78 feet) long, the newly named Cymbospondylus youngorum is the largest animal yet discovered from that time, on land or in the sea. It was the first giant creature to ever inhabit the Earth that we know of.

“The importance of the find was not immediately apparent,” notes Dr. Sander, “because only a few vertebrae were exposed on the side of the canyon. However, the anatomy of the vertebrae suggested that the front end of the animal might still be hidden in the rocks. Then, one cold September day in 2011, the crew needed a warm-up and tested this suggestion by excavation, finding the skull, forelimbs, and chest region.”

The new name for the species, C. youngorum, honors a happy coincidence: the sponsoring of the fieldwork by Great Basin Brewery of Reno, owned and operated by Tom and Bonda Young, the inventors of the locally famous Icky beer, which features an ichthyosaur on its label.

In other mountain ranges of Nevada, paleontologists have been recovering fossils from the Fossil Hill Member’s limestone, shale, and siltstone since 1902, opening a window into the Triassic. The mountains connect our present to ancient oceans and have produced many species of ammonites, shelled ancestors of modern cephalopods like cuttlefish and octopuses, as well as marine reptiles. All these animal specimens are collectively known as the Fossil Hill Fauna, representing many of C. youngorum’s prey and competitors.

C. youngorum stalked the oceans some 246 million years ago, or only about three million years after the first ichthyosaurs got their fins wet, an amazingly short time to get this big. The elongated snout and conical teeth suggest that C. youngorum preyed on squid and fish, but its size meant that it could have hunted smaller and juvenile marine reptiles as well. 

The giant predator probably had some hefty competition. Through sophisticated supercomputer modeling, the authors examined the likely energy running through the Fossil Hill Fauna’s food web, recreating the ancient environment through data, and found that marine food webs were able to support a few more colossal meat-eating ichthyosaurs. Ichthyosaurs of different sizes and survival strategies proliferated, comparable to modern cetaceans, which range from relatively small dolphins to massive filter-feeding baleen whales and giant squid-hunting sperm whales.
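
The study's ecological model is far more detailed, but the underlying energy-budget reasoning can be illustrated with a rough back-of-the-envelope sketch: the energy fixed at the base of the food web, passed up the trophic levels with heavy losses, limits how many apex predators can be supported. Every number below is a placeholder assumption, not a value from the paper.

```python
# Hedged sketch of a food-web energy budget. All quantities are invented.
primary_production = 1.0e12    # hypothetical energy fixed per year (arbitrary units)
transfer_efficiency = 0.1      # ~10% passed up per trophic level (textbook rule of thumb)
levels_above_base = 3          # e.g. plankton -> invertebrates/fish -> apex ichthyosaur
energy_per_predator = 5.0e7    # hypothetical annual demand of one giant ichthyosaur

energy_at_top = primary_production * transfer_efficiency ** levels_above_base
supportable = energy_at_top / energy_per_predator
print(f"roughly {supportable:.0f} apex predators could be supported")
```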

Co-author and ecological modeler Dr. Eva Maria Griebeler from the University of Mainz in Germany notes, “due to their large size and resulting energy demands, the densities of the largest ichthyosaurs from the Fossil Hill Fauna including C. youngorum must have been substantially lower than suggested by our field census. The ecological functioning of this food web from ecological modeling was very exciting as modern highly productive primary producers were absent in Mesozoic food webs and were an important driver in the size evolution of whales.”

Whales and ichthyosaurs share more than a size range. They have similar body plans, and both initially arose after mass extinctions. These similarities make them scientifically valuable for comparative study. The authors' combined supercomputer modeling and traditional paleontology show how these marine animals reached record-setting sizes independently.

The skull of the first giant creature to ever inhabit the Earth, the ichthyosaur "Cymbospondylus youngorum", currently on display at the Natural History Museum of Los Angeles County (NHM). CREDIT: Photo by Natalja Kent, courtesy of the Natural History Museum of Los Angeles County (NHM).

“One rather unique aspect of this project is the integrative nature of our approach. We first had to describe the anatomy of the giant skull in detail and determine how this animal is related to other ichthyosaurs,” says senior author Dr. Lars Schmitz, Associate Professor of Biology at Scripps College and Dinosaur Institute Research Associate. “We did not stop there, as we wanted to understand the significance of the discovery in the context of the large-scale evolutionary pattern of ichthyosaur and whale body sizes, and how the fossil ecosystem of the Fossil Hill Fauna may have functioned. Both the evolutionary and ecological analyses required a substantial amount of computation, ultimately leading to a confluence of modeling with traditional paleontology.”

They found that while both cetaceans and ichthyosaurs evolved very large body sizes, their respective evolutionary trajectories toward gigantism were different. Ichthyosaurs had an initial boom in size, becoming giants early on in their evolutionary history, while whales took much longer to reach the outer limits of huge. They found a connection between large size and raptorial hunting—think of a sperm whale diving down to hunt giant squid—and a connection between large size and a loss of teeth—think of the giant filter-feeding whales that are the largest animals ever to live on Earth.

Ichthyosaurs' initial foray into gigantism was likely thanks to the boom in ammonites and jawless eel-like conodonts filling the ecological void following the end-Permian mass extinction. While their evolutionary routes were different, both whales and ichthyosaurs relied on exploiting niches in the food chain to make it big.

“As researchers, we often talk about similarities between ichthyosaurs and cetaceans, but rarely dive into the details. That’s one way this study stands out, as it allowed us to explore and gain some additional insight into body size evolution within these groups of marine tetrapods,” says NHM’s Associate Curator of Mammalogy (Marine Mammals), Dr. Jorge Velez-Juarbe. “Another interesting aspect is that Cymbospondylus youngorum and the rest of the Fossil Hill Fauna are a testament to the resilience of life in the oceans after the worst mass extinction in Earth's history. You can say this is the first big splash for tetrapods in the oceans.”

C. youngorum will be permanently housed at the Natural History Museum of Los Angeles County, where it is currently on view. Visit NHM.ORG/ichthyosaur to learn more.

The researchers published their findings in Science.

UTokyo's quantum ML method allows for efficient, accurate verification

Technologies that take advantage of novel quantum mechanical behaviors are likely to become commonplace in the near future. These may include devices that use quantum information as input and output data, which require careful verification due to inherent uncertainties. The verification is more challenging if the device is time-dependent, that is, when its output depends on past inputs. For the first time, researchers have used machine learning to dramatically improve the efficiency of verification for time-dependent quantum devices by incorporating a certain memory effect present in these systems.

B and F represent the input and output states, respectively, of a quantum system. E is an auxiliary system necessary to pass the sequence of input states B to the quantum reservoir S. S can then be read to emulate F without disrupting the system.

Quantum computers make headlines in the press, but these machines are in their infancy. A quantum internet, however, may be a little closer to the present. This would offer significant security advantages over our current internet, amongst other things. But even this will rely on technologies that have yet to see the light of day outside the lab. While many fundamentals of the devices that could create our quantum internet may have been worked out, many engineering challenges remain before they can be realized as products. But much research is underway to create tools for the design of quantum devices.

Postdoctoral researcher Quoc Hoan Tran and Associate Professor Kohei Nakajima from the Graduate School of Information Science and Technology at the University of Tokyo have pioneered just such a tool, which they think could make verifying the behavior of quantum devices a more efficient and precise undertaking than it is at present. Their contribution is an algorithm that can reconstruct the workings of a time-dependent quantum device by simply learning the relationship between the quantum inputs and outputs. This approach is actually commonplace when exploring a classical physical system, but quantum information is generally tricky to store, which usually makes it impossible.

“The technique to describe a quantum system based on its inputs and outputs is called quantum process tomography,” said Tran. “However, many researchers now report that their quantum systems exhibit some kind of memory effect where present states are affected by previous ones. This means that a simple inspection of input and output states cannot describe the time-dependent nature of the system. You could model the system repeatedly after every change in time, but this would be extremely computationally inefficient. Our aim was to embrace this memory effect and use it to our advantage rather than use brute force to overcome it.”

Tran and Nakajima turned to machine learning and a technique called quantum reservoir computing to build their novel algorithm. This learns patterns of inputs and outputs that change over time in a quantum system and effectively guesses how these patterns will change, even in situations the algorithm has not yet witnessed. As it does not need to know the inner workings of a quantum system, as a more empirical method might, but only the inputs and outputs, the team’s algorithm can be simpler and produce results faster as well.
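
The authors' method runs on a quantum reservoir, but the underlying reservoir-computing idea can be illustrated with a purely classical echo-state sketch: a fixed random network retains a fading memory of past inputs, and only a linear readout is trained on observed input-output pairs. Nothing below reproduces the paper's algorithm; the task and all parameters are invented for illustration.

```python
# Classical reservoir-computing (echo state network) sketch: learn a
# time-dependent input-output map where the output depends on past inputs.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_steps = 100, 2000

# Toy task with memory: output depends on the current AND the previous input.
u = rng.uniform(-1, 1, n_steps)
y_target = np.sin(u) + 0.5 * np.roll(u, 1)

# Fixed random reservoir (never trained), scaled so past inputs fade out slowly.
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])   # reservoir state carries the memory effect
    states[t] = x

# Train only the linear readout (ridge regression) on the first half of the data.
half = n_steps // 2
ridge = 1e-6 * np.eye(n_res)
W_out = np.linalg.solve(states[:half].T @ states[:half] + ridge,
                        states[:half].T @ y_target[:half])

y_pred = states[half:] @ W_out
print("test RMSE:", np.sqrt(np.mean((y_pred - y_target[half:]) ** 2)))
```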

“At present, our algorithm can emulate a certain kind of quantum system, but hypothetical devices may vary widely in their processing ability and have different memory effects. So the next stage of research will be to broaden the capabilities of our algorithms, essentially making something more general-purpose and thus more useful,” said Tran. “I am excited by what quantum machine learning methods could do, by the hypothetical devices they might lead to.”

Japanese modelers show Antarctic ice sheet melting could cause a multi-meter rise in sea levels by the end of the millennium

Scientists predict that continued global warming under current trends could raise the sea level by as much as five meters by the year 3000 CE.

One of the many effects of global warming is sea-level rise due to the melting and retreat of the Earth’s ice sheets and glaciers, as well as other sources. As the sea level rises, large areas of densely populated coastal land could ultimately become uninhabitable without extensive coastal modification. It is therefore vital to understand the impact of different pathways of future climate change on changes in sea level caused by ice sheets and glaciers.

Simulated mass loss of the Antarctic ice sheet from 1990 until 3000, expressed as sea-level contribution: fourteen experiments for the unabated warming pathway (RCP8.5, SSP5-8.5), three experiments for the reduced emissions pathway (RCP2.6, SSP1-2.6), a historical run (‘hist’) for 1990–2015, and a control run for a constant 1995–2014 climate (‘ctrl_proj’) under which the ice sheet is essentially stable. The red and blue boxes to the right show the means for RCP8.5/SSP5-8.5 and RCP2.6/SSP1-2.6, respectively; the whiskers show the full ranges. Phase 1 is the original ISMIP6 period until 2100. Phases 2-4 are valid for RCP8.5/SSP5-8.5 and show an accelerated mass loss (phase 2), the main instability of the West Antarctic ice sheet (phase 3), and a final phase 4 where the mass loss levels out. Map-view plots below are ice surface elevation differences relative to 2015 (in metres; blue means thickening, red/brown means thinning) for the simulation forced by MIROC-ESM-CHEM/RCP8.5. CREDIT: Christopher Chambers et al., Journal of Glaciology, December 22, 2021.

A team of researchers from Hokkaido University, The University of Tokyo, and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) explored the long-term perspective for the Antarctic ice sheet beyond the 21st century under global-warming conditions, assuming late 21st-century climatic conditions remain constant. Their models and conclusions were published in the Journal of Glaciology.

The Ice Sheet Model Intercomparison Project for the Coupled Model Intercomparison Project Phase 6 (ISMIP6) was a major international effort that used the latest generation of models to estimate the impact of global warming on the ice sheets of Antarctica and Greenland. The objective was to provide input for the recently published Sixth Assessment Report (AR6) of the Intergovernmental Panel on Climate Change (IPCC). The contribution of the Antarctic ice sheet to sea-level rise by 2100 was assessed to be in the range between −7.8 and 30.0 centimeters under unabated warming and between 0 and 3 centimeters under reduced emissions of greenhouse gases.

The team used the ice-sheet model SICOPOLIS (SImulation COde for POLythermal Ice Sheets) to extend the whole ISMIP6 ensemble of fourteen experiments for the unabated warming pathway and three for the reduced emissions pathway. Until the year 2100, the set-up was the same as in the original ISMIP6 experiments. For the time beyond 2100, it was assumed that the late 21st-century climatic conditions remain constant—no further climate trend was applied. The team analyzed the results of the simulations for the total mass change of the ice sheet, regional changes in West Antarctica, East Antarctica, and the Antarctic Peninsula, and also the different contributors to mass change.

The simulations of mass loss of the Antarctic ice sheet show that, by the year 3000, the unabated warming pathway produces a sea-level equivalent (SLE) of as much as 1.5 to 5.4 meters, while for the reduced emissions pathway the SLE would be only 0.13 to 0.32 meters. The main reason for the decay under the unabated warming pathway is the collapse of the West Antarctic ice sheet, made possible by the fact that the West Antarctic ice sheet is grounded on a bed that is mostly well below sea level.

“This study demonstrates clearly that the impact of 21st-century climate change on the Antarctic ice sheet extends well beyond the 21st century itself, and the most severe consequences — multi-meter contribution to sea-level rise — will likely only be seen later,” says Dr. Christopher Chambers of Hokkaido University’s Institute of Low-Temperature Science and lead author of the paper. “Future work will include basing simulations on more realistic future climate scenarios, as well as using other ice-sheet models to model the outcomes.”