Moons may yield clues to what makes planets habitable

In the search for Earth-like planets, University of Rochester scientist Miki Nakajima turns to supercomputer simulations of moon formation.

Earth’s moon is vitally important in making Earth the planet we know today: the moon controls the length of the day and ocean tides, which affect the biological cycles of lifeforms on our planet. The moon also contributes to Earth’s climate by stabilizing Earth’s spin axis, offering an ideal environment for life to develop and evolve.

Because the moon is so important to life on Earth, scientists conjecture that a moon may be a potentially beneficial feature in harboring life on other planets. Most planets have moons, but Earth’s moon is distinct in that it is large compared to Earth itself; the moon’s radius is more than a quarter of Earth’s radius, a much larger moon-to-planet size ratio than for most moons.

Miki Nakajima, an assistant professor of earth and environmental sciences at the University of Rochester, finds that distinction significant. And in a new study that she led, she and her colleagues at the Tokyo Institute of Technology and the University of Arizona examine moon formation and conclude that only certain types of planets can form moons that are large with respect to their host planets.

“By understanding moon formations, we have a better constraint on what to look for when searching for Earth-like planets,” Nakajima says. “We expect that exomoons [moons orbiting planets outside our solar system] should be everywhere, but so far we haven’t confirmed any. Our constraints will be helpful for future observations.”

The origin of Earth’s moon

Many scientists have historically believed Earth’s large moon was generated by a collision between proto-Earth—Earth at its early stages of development—and a large, Mars-sized impactor, approximately 4.5 billion years ago. The collision resulted in the formation of a partially vaporized disk around Earth, which eventually formed into the moon.

To find out whether other planets can form similarly large moons, Nakajima and her colleagues conducted impact simulations on the computer, with several hypothetical Earth-like rocky planets and icy planets of varying masses. They hoped to identify whether the simulated impacts would result in partially vaporized disks, like the disk that formed Earth’s moon.

The researchers found that rocky planets more massive than six Earth masses and icy planets more massive than one Earth mass produce fully—rather than partially—vaporized disks, and these fully vaporized disks are not capable of forming fractionally large moons.

“We found that if the planet is too massive, these impacts produce completely vapor disks because impacts between massive planets are generally more energetic than those between small planets,” Nakajima says.

After an impact that results in a vaporized disk, the disk cools over time and liquid moonlets—a moon’s building blocks—emerge. In a fully vaporized disk, the growing moonlets experience strong gas drag from the vapor and fall onto the planet very quickly. In contrast, if the disk is only partially vaporized, moonlets do not feel such strong gas drag.

“As a result, we conclude that a complete vapor disk is not capable of forming fractionally large moons,” Nakajima says. “Planetary masses need to be smaller than those thresholds we identified to produce such moons.”
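The thresholds the team reports can be expressed as a simple rule of thumb. Below is a minimal sketch; the function name and the two-category rocky/icy model are illustrative simplifications, not code from the study:

```python
# Illustrative rule of thumb from the reported thresholds: giant impacts on
# rocky planets above ~6 Earth masses, or icy planets above ~1 Earth mass,
# yield fully vaporized disks, which fail to form fractionally large moons.

ROCKY_LIMIT_EARTH_MASSES = 6.0
ICY_LIMIT_EARTH_MASSES = 1.0

def can_form_large_moon(mass_earth_masses: float, composition: str) -> bool:
    """Return True if a giant impact could leave a partially vaporized
    disk, the kind capable of forming a fractionally large moon."""
    if composition == "rocky":
        return mass_earth_masses <= ROCKY_LIMIT_EARTH_MASSES
    if composition == "icy":
        return mass_earth_masses <= ICY_LIMIT_EARTH_MASSES
    raise ValueError("composition must be 'rocky' or 'icy'")

# Earth (1 Earth mass, rocky) falls comfortably below the threshold.
print(can_form_large_moon(1.0, "rocky"))  # True
print(can_form_large_moon(7.0, "rocky"))  # False
print(can_form_large_moon(2.0, "icy"))    # False
```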

The search for Earth-like planets

The constraints outlined by Nakajima and her colleagues are important for astronomers investigating our universe; researchers have detected thousands of exoplanets and possible exomoons, but have yet to definitively spot a moon orbiting a planet outside our solar system.

This research may give them a better idea of where to look.

As Nakajima says: “The exoplanet search has typically been focused on planets larger than six Earth masses. We are proposing that instead, we should look at smaller planets because they are probably better candidates to host fractionally large moons.”

QuSoft, IoP researchers propose a new method for quantum computing in trapped ions

Physicists from the University of Amsterdam have proposed a new architecture for a scalable quantum computer. Making use of the collective motion of the constituent particles, they were able to construct new building blocks for quantum computing that pose fewer technical difficulties than current state-of-the-art methods. The results were recently published in Physical Review Letters.

Image: Two trapped ions (in blue) are selected by optical tweezers (in red). A quantum gate between the ions can be implemented using electric fields.

The researchers work at QuSoft and the Institute of Physics in the groups of Rene Gerritsma and Arghavan Safavi-Naini. The effort, which was led by the Ph.D. candidate Matteo Mazzanti, combines two important ingredients. One is a so-called trapped-ion platform, one of the most promising candidates for quantum computing, which makes use of ions – atoms that have either a surplus or a shortage of electrons and as a result are electrically charged. The other is a clever method of controlling the ions, supplied by optical tweezers and oscillating electric fields.

As the name suggests, trapped-ion quantum computers use a crystal of trapped ions. These ions can move individually but, more importantly, also as a whole. As it turns out, the possible collective motions of the ions facilitate the interactions between individual pairs of ions. In the proposal, this idea is made concrete by applying a uniform electric field to the whole crystal to mediate interactions between two specific ions in that crystal. The two ions are selected by applying tweezer potentials to them – see the image above. The homogeneity of the electric field ensures that it only allows the two ions to move together with all other ions in the crystal. As a result, the interaction strength between the two selected ions is fixed, regardless of how far apart the two ions are.

A quantum computer consists of ‘gates’, small computational building blocks that perform quantum analogs of operations like ‘and’ and ‘or’ that we know from ordinary computers. In trapped-ion quantum computers, these gates act on the ions, and their operation depends on the interactions between these particles. In the setup above, the fact that those interactions do not depend on the distance means that the duration of a gate operation is also independent of that distance. As a result, this scheme for quantum computing is inherently scalable and, compared to other state-of-the-art schemes, poses fewer technical challenges for achieving comparably well-operating quantum computers.

Rice chemists build machine learning that fine-tunes flash graphene

Rice University scientists are using machine-learning techniques to streamline the process of synthesizing graphene from waste through flash Joule heating.

The process discovered two years ago by the Rice lab of chemist James Tour has expanded beyond making graphene from various carbon sources to extracting other materials like metals from urban waste, with the promise of more environmentally friendly recycling to come. 

The technique is the same for all of the above: blasting a jolt of high energy through the source material to eliminate all but the desired product. But the details for flashing each feedstock are different. 

The researchers describe in Advanced Materials how machine-learning models that adapt to experimental variables and show the team how to optimize procedures are helping push the work forward.

“Machine-learning algorithms will be critical to making the flash process rapid and scalable without negatively affecting the graphene product’s properties,” Tour said.  

“In the coming years, the flash parameters can vary depending on the feedstock, whether it’s petroleum-based, coal, plastic, household waste, or anything else,” he said. “Depending on the type of graphene we want -- small flake, large flake, high turbostratic, level of purity -- the machine can discern by itself what parameters to change.”

Because flashing makes graphene in hundreds of milliseconds, it’s difficult to tease out the details of the chemical process. So Tour and company took a cue from materials scientists who have worked machine learning into their everyday process of discovery.

“It turned out that machine learning and flash Joule heating had really good synergy,” said Rice graduate student and lead author Jacob Beckham. “Flash Joule heating is a really powerful technique, but it’s difficult to control some of the variables involved, like the rate of current discharge during a reaction. And that’s where machine learning can shine. It’s a great tool for finding relationships between multiple variables, even when it’s impossible to do a complete search of the parameter space.”

“That synergy made it possible to synthesize graphene from scrap material based entirely on the models’ understanding of the Joule heating process,” he said. “All we had to do was carry out the reaction -- which can eventually be automated.”

The lab used its custom optimization model to improve graphene crystallization from four starting materials -- carbon black, plastic pyrolysis ash, pyrolyzed rubber tires, and coke -- over 173 trials, using Raman spectroscopy to characterize the starting materials and graphene products. 

The researchers then fed more than 20,000 spectroscopy results to the model and asked it to predict which starting materials would provide the best yield of graphene. The model also took the effects of charge density, sample mass, and material type into account in its calculations.
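The article does not specify the model the lab used, so the following is only an illustrative sketch of the general workflow – featurize spectra, fit a regressor, rank candidate feedstocks – with synthetic data standing in for Raman measurements. The ridge-regression choice and all variable names are assumptions for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for Raman spectra: each row is a spectrum, the
# target is a quality score (e.g., a peak-intensity ratio). Real inputs
# would come from the ~20,000 measured spectra mentioned in the text.
n_samples, n_channels = 200, 50
X = rng.normal(size=(n_samples, n_channels))
true_w = rng.normal(size=n_channels)
y = X @ true_w + 0.1 * rng.normal(size=n_samples)

# Ridge regression via the closed-form normal equations.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

# Rank held-out candidate "feedstocks" by predicted graphene quality.
candidates = rng.normal(size=(4, n_channels))
scores = candidates @ w
best = int(np.argmax(scores))
print("best candidate index:", best)
```

The same rank-and-select loop generalizes: each new flash experiment adds a labeled spectrum, and refitting the model steadily narrows the search over flash parameters.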

Co-authors are Rice graduate students Kevin Wyss, Emily McHugh, Paul Advincula, and Weiyin Chen; Rice alumnus John Li; and, from the University of Missouri, postdoctoral researcher Yunchao Xie and Jian Lin, an associate professor of mechanical and aerospace engineering. Tour is the T.T. and W.F. Chao Chair in Chemistry as well as a professor of computer science and of materials science and nanoengineering.

Swiss researchers demo carbon nanomaterials for future quantum technologies

An exceptionally large grant will allow a team of Empa researchers to work on an ambitious project over the next ten years: The Werner Siemens Foundation (WSS) is supporting Empa's CarboQuant project with 15 million Swiss francs. The project aims to lay the foundations for novel quantum technologies that may even operate at room temperature – in contrast to current technologies, most of which require cooling to near absolute zero. "With this project, we are taking a big step into the unknown," says Oliver Gröning who coordinates the project. "Thanks to the partnership with the Werner Siemens Foundation, we can now move much further away from the safe shore of existing knowledge than would be possible in our 'normal' day-to-day research. We feel a little like Christopher Columbus and are now looking beyond the horizon for something completely new."

Image: Roman Fasel, head of Empa's nanotech@surfaces laboratory, standing behind a scanning tunneling microscope in his lab. (Gian Vaitl / Empa)

The expedition into the unknown now being undertaken by Empa researchers Pascal Ruffieux, Oliver Gröning, and Gabriela Borin-Barin under the lead of Roman Fasel was preceded by twelve years of intensive research activity. The researchers from Empa's nanotech@surfaces laboratory, headed by Fasel, regularly published their work in renowned journals such as Science.

In 2010, the team succeeded in synthesizing graphene strips, so-called nanoribbons, from smaller precursor molecules for the first time. With their novel synthesis approach, the Empa team can now produce carbon nanomaterials with atomic precision, thereby precisely defining their quantum properties. Graphene is considered a possible building material for computers of the future; it is made of carbon and resembles the familiar graphite. The material is, however, just one atomic layer thick and promises faster, more powerful computer architectures than the semiconductor materials known today. Back in 2017, the research team, in collaboration with colleagues from the University of California, Berkeley, built the first transistor from graphene nanoribbons.

A first milestone: magnetic carbon

But then the researchers observed an effect that had previously only been predicted theoretically and seemed even more interesting: Their tiny, tailor-made carbon nanomaterials exhibited properties of magnetism. In 2020, they first reported on the effect they had discovered – and followed up with a more refined paper in October 2021: Now, using their carbon nanomaterials, they had demonstrated for the first time a physical effect that the future Nobel Prize winner in physics F.D.M. Haldane had predicted nearly 40 years ago: spin fractionalization. This fractionalization only forms when many spins (i.e., fundamental quantum magnets) can be brought into a common, coherent quantum superposition. Empa researchers have achieved just that in their precisely synthesized molecular chains.

CarboQuant is intended to build on these special spin effects in graphene nanoribbons. Gröning says, "So far, we see spin states at very specific locations in the nanoribbons, which we can generate and detect. The next step will be to manipulate these spin states deliberately, for example, to reverse the spin at one end of the nanoribbon and thus elicit a corresponding reaction at the other end." This would give Empa researchers something very unique to work with: a quantum effect that is stable and can be manipulated even at room temperature, or with only moderate cooling. That could be a silver bullet for building entirely new kinds of quantum computers.

Image: Empa scientist Oliver Gröning is coordinating project CarboQuant. (Werner Siemens Foundation / WSS)

0 and 1 at the same time

But why is it that quantum computers can calculate faster than conventional computers? Classical computing machines calculate in bits. Each component can have one of two possible states: 0 or 1. In the quantum world, however, these states can be superimposed: 0 or 1 or both states at the same time are possible. That's why the basic units of a quantum computer, known as qubits, can perform not just one computational operation after another, but multiple ones simultaneously. Gröning is already looking forward to the experiment: "If we manage to control the spin states in our nanoribbons, we can use them for quantum electronic devices."
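The 0-and-1-at-once idea can be made concrete with a few lines of linear algebra. This minimal sketch (plain NumPy, no quantum hardware) prepares an equal superposition with a Hadamard gate:

```python
import numpy as np

# A qubit is a 2-component complex vector; |0> and |1> are the basis states.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0

# Measurement probabilities are the squared amplitudes: 50% each.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]
```

Measuring such a state yields 0 or 1 with equal probability; the computational power comes from letting many superposed amplitudes interfere before measurement.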

While one part of the team continues to study spin effects in a high vacuum, other team members will focus on the everyday suitability of the graphene nanoribbons. "We have to get the components out of the protected environment of the high vacuum and prepare them in such a way that even in ambient air and at room temperature, they do not disintegrate. Only then can we equip the nanoribbons with contacts – which is the prerequisite for practical applications without the need of an elaborate infrastructure," Gröning says.

High-frequency radiation and intense laser pulses

The journey into this unknown, new world will in any case be very demanding. Already in the initial phase – the entry ticket, if you wish – the control and time-resolved measurement of spin states require a completely new set of equipment that the researchers will have to develop and build. "We need to combine the scanning tunneling microscope (STM), in which we synthesize the nanoribbons and look at their structure, with ultra-fast measurements of their electronic and magnetic properties," Gröning explains. That can be done by high-frequency electrical signals at high magnetic fields and by irradiation with very short, extremely intense laser pulses.

To achieve this, two new measurement systems are being set up at Empa, which will also play key roles in the team's other research projects and which are co-funded by the Swiss National Science Foundation (SNSF) and the European Research Council (ERC). "This shows that synergies always emerge from different projects," says Gröning, "but also that ambitious goals can only be achieved with the support of different players at multiple levels." The researchers estimate that it will take two to three years just to set up these new analytical instruments and to carry out the first test runs. 

A very distinct project

CarboQuant is a very special project thanks to its long-term and generous funding, says Oliver Gröning. The researchers at Empa's nanotech@surfaces lab now have extraordinary long-term creative freedom on the way to their ambitious goal: a possible building material for next-generation quantum computers. "We don't yet see the island that might be out there. But we can guess it, and if there is something out there, we are confident that we will find it, thanks to the support of the Werner Siemens Foundation and our national and international research partners," says Gröning.

NYU Tandon cybersecurity expert wins NSF CAREER Award for improving software vulnerability testing, education

Brendan Dolan-Gavitt is laying the groundwork for more efficient, less costly vulnerability testing.

The National Science Foundation (NSF) has selected an NYU Tandon School of Engineering researcher to receive its most prestigious award for promising young academics. The honoree is developing better ways to assess vulnerability discovery tools, allowing cybersecurity professionals to better understand which techniques are most effective and ultimately leading to safer software.

Brendan Dolan-Gavitt, an assistant professor in the Department of Computer Science and Engineering and a faculty member of NYU’s Center for Cybersecurity, received a 2022 NSF Faculty Early Career Development Award, more widely known as a CAREER Award, which supports early-career faculty who have the potential to serve as academic role models in research and education.

A five-year, $500,000 grant will support a project that aims to create techniques for automatically generating benchmark corpora of software vulnerabilities that can be used to rigorously assess newly developed and existing tools used to root out dangerous programming bugs.

Software vulnerabilities pose a major threat to the safety and security of computer systems, and while there is a large body of research on how to find vulnerabilities in programs, the large, empirically tested corpora of vulnerabilities required to rigorously test that research are difficult and expensive to assemble. 

Although researchers have discovered ways to automatically generate vulnerabilities and inject them into software, the vulnerabilities created in that way are unrealistic (containing artifacts that make them easier to discover than real vulnerabilities inadvertently created by human programmers) and not varied enough.

Dolan-Gavitt intends to address those shortcomings by employing large language models trained on code to synthesize vulnerabilities that are both realistic and diverse, placing vulnerabilities in hard-to-discover paths, allowing new vulnerability classes to be added quickly with a customized domain-specific language, and automatically generating exploits for each vulnerability. The end result will be a limitless supply of highly realistic vulnerability corpora that can be generated cheaply, at scale, and on-demand, giving researchers valuable benchmarks in measuring the efficacy of their cybersecurity tools.  
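As a toy illustration of the benchmark-generation idea – inject a known flaw, record the ground truth, score tools against it – here is a minimal sketch. The C snippet, the pattern substitution, and all names are invented for illustration; they are not from Dolan-Gavitt's system, which uses trained language models rather than textual substitution:

```python
# Toy benchmark generator: weaken a bounds check in a C snippet and
# record the ground-truth location, so a vulnerability-finding tool
# can later be scored against a known answer.
SAFE_SNIPPET = """\
void copy(char *dst, const char *src, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i];
}
"""

def inject_off_by_one(code: str):
    """Replace '< n' with '<= n', creating a one-element overflow."""
    lines = code.splitlines()
    for lineno, line in enumerate(lines, start=1):
        if "< n" in line:
            lines[lineno - 1] = line.replace("< n", "<= n")
            return "\n".join(lines) + "\n", lineno  # code + ground truth
    return code, None

buggy, where = inject_off_by_one(SAFE_SNIPPET)
print("injected at line", where)  # injected at line 2
```

Because the generator knows exactly where each bug lives, a tool's hits and misses can be counted automatically – the property that makes synthetic corpora useful for rigorous evaluation.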

In addition to his work’s benefit to cybersecurity researchers and industry professionals, it is also expected to be a boon to educators. Since joining NYU Tandon in 2015, Dolan-Gavitt has been involved in CSAW, the most comprehensive student-run cybersecurity event in the world, and among the most popular offerings at the annual event is a “capture the flag” competition that challenges students to find vulnerabilities in a software program. “These types of competitions are an extremely popular and effective means of teaching a variety of cybersecurity skills, but they require large amounts of time, money, and expertise to create and manage,” he explains. “If the creation of the challenges can be partially or wholly automated, it could bring new educational opportunities within reach of a broader and more diverse population of students by dramatically lowering costs and reducing the time and effort needed.” 

“Brendan Dolan-Gavitt is helping place the field of vulnerability finding on solid scientific footing, allowing for repeatable and reproducible experiments and facilitating comparative evaluations of the cyber tools meant to protect us,” said NYU Tandon Dean Jelena Kovačević. “His work has the potential to make a major impact on cybersecurity education, broadening access and helping to build the next generation of security researchers. We’re proud that his techniques will be employed right here in our own cybersecurity courses and at CSAW and pleased that the NSF has chosen him to receive this much-deserved CAREER Award.”

Dolan-Gavitt joins the more than 50 percent of NYU Tandon’s junior engineering faculty members who hold CAREER Awards or similar young-investigator honors, including 10 since 2019 alone.

His award reflects the NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.