Michigan physicist suggests a fix to the cosmological cornerstone Hubble constant

More than 90 years ago, astronomer Edwin Hubble made the first observations hinting at the rate at which the universe expands, a rate now called the Hubble constant.

Almost immediately, astronomers began arguing about the actual value of this constant, and over time, realized that there was a discrepancy in this number between early universe observations and late universe observations.

Early in the universe's existence, light moved through a plasma--there were no stars yet--and from oscillations similar to sound waves created in that plasma, scientists deduced that the Hubble constant was about 67. This means the universe expands about 67 kilometers per second faster for every 3.26 million light-years (one megaparsec) of distance.

Pictured: the Type Ia supernova 1994D in the galaxy NGC 4526. The supernova is the bright spot in the lower left corner of the image.

But the measured value differs when scientists look at the universe's later life, after stars were born and galaxies formed. The gravity of these objects causes what's called gravitational lensing, which distorts light between a distant source and its observer.

Other phenomena in this late universe include extreme explosions and events related to the end of a star's life. Based on these later life observations, scientists calculated a different value, around 74. This discrepancy is called the Hubble tension.

Now, an international team including a University of Michigan physicist has analyzed a database of more than 1,000 supernova explosions, supporting the idea that the Hubble constant might not actually be constant.

Instead, it may change based on the expansion of the universe, growing as the universe expands. This explanation likely requires new physics to explain the increasing rate of expansion, such as a modified version of Einstein's gravity.

The team's results are published in the Astrophysical Journal.

"The point is that there seems to be a tension between the larger values for late universe observations and lower values for early universe observation," said Enrico Rinaldi, a research fellow in the U-M Department of Physics. "The question we asked in this paper is: What if the Hubble constant is not constant? What if it actually changes?"

The researchers used a dataset of supernovae--spectacular explosions that mark the final stage of a star's life. When they shine, they emit a specific type of light. Specifically, the researchers were looking at Type Ia supernovae.

These supernovae were used to discover that the expansion of the universe is accelerating, Rinaldi said, and they are known as "standard candles," like a series of lighthouses fitted with the same lightbulb. Because scientists know their intrinsic luminosity, they can calculate their distance from their apparent brightness in the sky.
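As an illustration of the standard-candle idea (a minimal Python sketch, not the team's actual fitting pipeline; the magnitudes below are made up), the distance follows from the inverse-square law via the distance modulus m - M = 5 log10(d / 10 parsecs):

import numpy as np

def luminosity_distance_pc(m_apparent, M_absolute):
    # Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)
    return 10.0 ** ((m_apparent - M_absolute) / 5.0 + 1.0)

# Type Ia supernovae peak near absolute magnitude M ~ -19.3 (example apparent magnitude is hypothetical)
d_pc = luminosity_distance_pc(m_apparent=19.0, M_absolute=-19.3)
print(f"Estimated distance: {d_pc / 1e6:.0f} Mpc")  # roughly 460 Mpc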

Next, the astronomers use what's called the "redshift" to calculate how the universe's rate of expansion might have increased over time. Redshift is the name of the phenomenon that occurs when light stretches as the universe expands.

The essence of Hubble's original observation is that the farther a source is from the observer, the more its light's wavelength is stretched--as if you tacked a Slinky to a wall and walked away from it while holding one end in your hands. Redshift and distance are related.

In their analysis, the researchers separated these supernovae into "bins" based on intervals of redshift: the objects in one interval of distance went into one bin, an equal number at the next interval of distance into another bin, and so on. The closer a bin is to Earth, the more recently the light we observe from it was emitted.

Each bin has a fixed reference value of redshift. By comparing the bins, the researchers can extract a value of the Hubble constant for each one.

"If it's a constant, then it should not be different when we extract it from bins of different distances. But our main result is that it actually changes with distance," Rinaldi said. "The tension of the Hubble constant can be explained by some intrinsic dependence of this constant on the distance of the objects that you use."

Additionally, the researchers found that their analysis, in which the Hubble constant changes with redshift, allows them to smoothly "connect" the value of the constant from early-universe probes with the value from late-universe probes, Rinaldi said.

"The extracted parameters are still compatible with the standard cosmological understanding that we have," he said. "But this time they just shift a little bit as we change the distance, and this small shift is enough to explain why we have this tension."

The researchers say there are several possible explanations for this apparent change in the Hubble constant--one being the possibility of observational biases in the data sample. To help correct for potential biases, astronomers are using Hyper Suprime-Cam on the Subaru Telescope to observe fainter supernovae over a wide area. Data from this instrument will increase the sample of observed supernovae from remote regions and reduce the uncertainty in the data.

Texas A&M researchers develop Computational Fluid Dynamics-Discrete Element Methods model for studying the flow in the next-generation reactors to improve safety

The model can better predict the physical phenomena inside very-high-temperature pebble-bed reactors

When one of the largest modern earthquakes struck Japan on March 11, 2011, the nuclear reactors at Fukushima-Daiichi automatically shut down, as designed. The emergency systems, which would have helped maintain the necessary cooling of the core, were destroyed by the subsequent tsunami. Because the reactor could no longer cool itself, the core overheated, resulting in a severe nuclear meltdown, the likes of which haven't been seen since the Chernobyl disaster in 1986.

Since then, reactors have improved dramatically in terms of safety, sustainability, and efficiency. Unlike the light-water reactors at Fukushima, which had liquid coolant and uranium fuel, the current generation of reactors has a variety of coolant options, including molten-salt mixtures, supercritical water, and even gases like helium.

Dr. Jean Ragusa and Dr. Mauricio Eduardo Tano Retamales from the Department of Nuclear Engineering at Texas A&M University have been studying a new fourth-generation reactor design, the pebble-bed reactor. Pebble-bed reactors use spherical fuel elements (known as pebbles) and a fluid coolant (usually a gas).

"There are about 40,000 fuel pebbles in such a reactor," said Ragusa. "Think of the reactor as a really big bucket with 40,000 tennis balls inside."

During an accident, as the gas in the reactor core begins to heat up, the hot gas rises and cooler gas flows in from below, a process known as natural convection cooling. Additionally, the fuel pebbles are made from pyrolytic carbon and tristructural isotropic (TRISO) fuel particles, making them resistant to temperatures as high as 3,000 degrees Fahrenheit. As very-high-temperature reactors (VHTRs), pebble-bed reactors can be cooled by passive natural circulation, making it theoretically impossible for an accident like Fukushima to occur.

However, during normal operation, a high-speed flow cools the pebbles. This flow creates movement around and between the fuel pebbles, similar to the way a gust of wind changes the trajectory of a tennis ball. How do you account for the friction between the pebbles, and for the influence of that friction on the cooling process?

This is the question that Ragusa and Tano aimed to answer in their most recent publication in the journal Nuclear Technology titled "Coupled Computational Fluid Dynamics-Discrete Element Method Study of Bypass Flows in a Pebble-Bed Reactor."

"We solved for the location of these 'tennis balls' using the Discrete Element Method, where we account for the flow-induced motion and friction between all the tennis balls," said Tano. "The coupled model is then tested against thermal measurements in the SANA experiment."

The SANA experiment was conducted in the early 1990s and measured how the different heat-transfer mechanisms in a pebble bed interact as heat moves from the center of the cylinder to its outer part. This experiment gave Tano and Ragusa a benchmark against which they could validate their models.

As a result, the team developed a coupled Computational Fluid Dynamics-Discrete Element Method model for studying the flow over a pebble bed. This model can now be applied to all high-temperature pebble-bed reactors and is the first computational model of its kind. High-accuracy tools such as this allow vendors to develop better reactors.

"The computational models we create help us more accurately assess different physical phenomena in the reactor," said Tano. "As a result, reactors can operate at a higher margin, theoretically producing more power while increasing the safety of the reactor. We do the same thing with our models for molten-salt reactors for the Department of Energy."

As artificial intelligence continues to advance, its applications to computational modeling and simulation grow. "We're in a very exciting time for the field," said Ragusa. "And we encourage any prospective students who are interested in computational modeling to reach out, because this field will hopefully be around for a long time."

ALMA discovers the most ancient galaxy with spiral morphology

Analyzing data obtained with the Atacama Large Millimeter/submillimeter Array (ALMA), researchers found a galaxy with spiral morphology only 1.4 billion years after the Big Bang. This is the most ancient galaxy of its kind ever observed. The discovery of a galaxy with a spiral structure at such an early stage is an important clue to solving a classic question of astronomy: "How and when did spiral galaxies form?"

"I was excited because I had never seen such clear evidence of a rotating disk, spiral structure, and centralized mass structure in a distant galaxy in any previous literature," says Takafumi Tsukui, a graduate student at SOKENDAI and the lead author of the research paper published in the journal Science. "The quality of the ALMA data was so good that I was able to see so much detail that I thought it was a nearby galaxy."

The Milky Way Galaxy, where we live, is a spiral galaxy. Spiral galaxies are fundamental objects in the Universe, accounting for as much as 70% of the total number of galaxies. However, other studies have shown that the proportion of spiral galaxies declines rapidly as we look back through the history of the Universe. So, when were spiral galaxies formed?

Image: ALMA detected emission from carbon ions in the galaxy; spiral arms are visible on both sides of the compact, bright area at its center. (Credit: ALMA (ESO/NAOJ/NRAO), T. Tsukui & S. Iguchi)

Tsukui and his supervisor Satoru Iguchi, a professor at SOKENDAI and the National Astronomical Observatory of Japan, noticed a galaxy called BRI 1335-0417 in the ALMA Science Archive. The galaxy existed 12.4 billion years ago and contained a large amount of dust, which obscures the starlight. This makes it difficult to study this galaxy in detail with visible light. On the other hand, ALMA can detect radio emissions from carbon ions in the galaxy, which enables us to investigate what is going on in the galaxy.

The researchers found a spiral structure extending 15,000 light-years from the center of the galaxy. This is one-third of the size of the Milky Way Galaxy. The estimated total mass of the stars and interstellar matter in BRI 1335-0417 is roughly equal to that of the Milky Way.

"As BRI 1335-0417 is a very distant object, we might not be able to see the true edge of the galaxy in this observation," comments Tsukui. "For a galaxy that existed in the early Universe, BRI 1335-0417 was a giant."

Then the question becomes, how was this distinct spiral structure formed in only 1.4 billion years after the Big Bang? The researchers considered multiple possible causes and suggested that it could be due to an interaction with a small galaxy. BRI 1335-0417 is actively forming stars and the researchers found that the gas in the outer part of the galaxy is gravitationally unstable, which is conducive to star formation. This situation is likely to occur when a large amount of gas is supplied from outside, possibly due to collisions with smaller galaxies.

The fate of BRI 1335-0417 is also shrouded in mystery. Galaxies that contain large amounts of dust and actively produce stars in the ancient Universe are thought to be the ancestors of the giant elliptical galaxies in the present Universe. In that case, BRI 1335-0417 may change its shape from a disk galaxy to an elliptical one in the future. Or, contrary to the conventional view, the galaxy may remain a spiral galaxy for a long time. Either way, BRI 1335-0417 will play an important role in the study of how galaxy shapes evolve over the long history of the Universe.

"Our Solar System is located in one of the spiral arms of the Milky Way," explains Iguchi. "Tracing the roots of spiral structure will provide us with clues to the environment in which the Solar System was born. I hope that this research will further advance our understanding of the formation history of galaxies."

These research results are presented in T. Tsukui & S. Iguchi "Spiral morphology in an intensely star-forming disk galaxy more than 12 billion years ago" published online by the journal Science on Thursday, 20 May 2021.

MD Anderson's Chen develops an AI tool for finding rare cell populations in large single-cell datasets

The computational approach enables analysis of meaningful data that otherwise may be lost in the noise

Researchers at The University of Texas MD Anderson Cancer Center have developed a first-of-its-kind artificial intelligence (AI)-based tool that can accurately identify rare groups of biologically important cells from single-cell datasets, which often contain gene or protein expression data from thousands of cells.

This computational tool, called SCMER (Single-Cell Manifold presERving feature selection), can help researchers sort through the noise of complex datasets to study cells that would likely not be identifiable otherwise.

SCMER may be used broadly for many applications in oncology and beyond, including the study of minimal residual disease, drug resistance, and distinct populations of immune cells, explained senior author Ken Chen, Ph.D., associate professor of Bioinformatics & Computational Biology.

"Modern techniques can generate lots of data, but it has become harder to determine which genes or proteins actually are important in those contexts," Chen said. "Small groups of cells can have important features that may play a role in drug resistance, for example, but those features may not be sufficient to distinguish them from more common cells. It's become very important in analyzing single-cell datasets to be able to detect these rare cells and their unique molecular features."

Developing methods to effectively study small or rare cell populations in cancer research is a direct response to one of the provocative questions posed by the National Cancer Institute (NCI) in 2020, designating this an important and underexplored research area. SCMER was designed to address the issue and to enable researchers to get the most out of increasingly complex datasets.

Rather than the traditional approach of sorting cells into clusters based on all data contained in a dataset, SCMER takes an unbiased look to detect the most meaningful distinguishing features that define unique groups of cells. This allows researchers not only to detect rare cell populations but to generate a compact set of genes or proteins that can be used to detect those cells among many others. To highlight the utility of SCMER, the research team applied it to analyze several published single-cell datasets and found it compared favorably to currently available computational approaches.
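A toy sketch of manifold-preserving feature selection (a greedy, neighborhood-preserving gene search in Python, not the published SCMER algorithm; the expression matrix is synthetic): pick a small set of genes whose nearest-neighbor structure best matches the neighbor structure computed from all genes.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sets(X, k=15):
    # Index set of each cell's k nearest neighbors (self excluded)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    return [set(row[1:]) for row in idx]

def overlap(a, b):
    # Mean Jaccard overlap between two collections of neighborhoods
    return np.mean([len(x & y) / len(x | y) for x, y in zip(a, b)])

def greedy_gene_selection(X, n_genes=10, k=15):
    # Greedily add genes whose kNN graph best matches the full-data kNN graph
    reference = knn_sets(X, k)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_genes:
        scores = [(overlap(reference, knn_sets(X[:, selected + [g]], k)), g) for g in remaining]
        _, best = max(scores)
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic expression matrix: 300 cells x 50 genes
X = np.random.default_rng(1).normal(size=(300, 50))
print(greedy_gene_selection(X, n_genes=5))

SCMER itself frames this as an optimization that preserves the data's manifold structure, but the aim sketched here is the same: a compact gene panel that keeps rare populations distinguishable.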

In a reanalysis of more than 4,500 melanoma cells, SCMER was able to distinguish the cell types present using the expression of just 75 genes. The results also pointed to a number of genes involved in tumor development and drug resistance that were not identified as meaningful in the original study.

In a complex dataset of nearly 40,000 gastrointestinal immune cells, SCMER separated cells using only 250 distinct features. This analysis recovered all of the cell types detected in the original study and, in many cases, further defined subgroups of rare cells that had not previously been identified.

Finally, the research team applied SCMER to study more than 1,400 lung cancer cells taken at various points in time after drug treatment. Using just 80 genes, the tool was able to accurately distinguish cells based on treatment responses and pointed to possible novel drivers of therapeutic resistance.

"Using state-of-the-art AI techniques, we have developed an efficient and user-friendly tool capable of uncovering new biological insights from rare cell populations," Chen said. "SCMER offers researchers the ability to reduce high dimensional, complex datasets into a compact set of actionable features with biological significance."

Brown researchers use holey math, machine learning to study cellular self-assembly

The field of mathematical topology is often described in terms of donuts and pretzels.

To most of us, the two differ in the way they taste or in their compatibility with morning coffee. But to a topologist, the only difference between the two is that one has a single hole and the other has three. There's no way to stretch or contort a donut to make it look like a pretzel -- at least not without ripping it or pasting different parts together, both of which are verboten in topology. The different numbers of holes make the two shapes fundamentally, inexorably different.

In recent years, researchers have drawn on mathematical topology to help explain a range of phenomena, including phase transitions in matter, aspects of Earth's climate, and even how zebrafish form their iconic stripes. Now, a Brown University research team is working to use topology in yet another realm: training computers to classify how human cells organize into tissue-like architectures.

In a study published in the May 7 issue of the journal Soft Matter, the researchers demonstrate a machine learning technique that measures the topological traits of cell clusters. They showed that the system can accurately categorize cell clusters and infer the motility and adhesion of the cells that comprise them. 

"You can think of this as topology-informed machine learning," said Dhananjay Bhaskar, a recent Ph.D. graduate who led the work. "The hope is that this can help us to avoid some of the pitfalls that affect the accuracy of machine learning algorithms." Topology-based machine learning classifies how human cells organize into spatial patterns based on the presence of persistent topological loops around empty regions, which can be used to infer cellular behaviors such as adhesion and migration.

Bhaskar developed the algorithm with Ian Y. Wong, an assistant professor in Brown's School of Engineering, and William Zhang, a Brown undergraduate.

There's been a significant amount of work in recent years to use artificial intelligence as a means of analyzing big data with spatial information, such as medical imaging of patient tissues. Progress has been made in training these systems to classify accurately, "but how they work is opaque and a little finicky," Wong said. "Just like people, sometimes computers hallucinate. You can have a few pixels in the wrong place, and it can confuse the algorithm. So Dhananjay has been thinking about ways we might be able to make those analyses a little more robust."

In developing this new system, Bhaskar took inspiration from modern art, specifically Pablo Picasso's "Bull." The series of 11 lithographs starts with a bull depicted in full detail. Each successive frame strips away a bit of detail, ending in a simple drawing capturing only the animal's fundamental attributes. By employing topology, Bhaskar thought he might be able to do something similar to understand the underlying form of tissue-like architectures.

The way in which cells migrate and interact depends on the physiology of the cells involved. For example, healthy tissues contain higher numbers of stationary epithelial cells. Processes like wound repair or cancer, however, often involve more mobile mesenchymal cells. Differences in physiology between the two cell types cause them to cluster together differently. Epithelial cells tend to aggregate into larger, more closely packed clusters. Mesenchymal cells tend to be more dispersed, with groups of cells branching off in different directions. But when assemblages contain a mix of both kinds of cells, it can be difficult to accurately analyze them.

The new algorithm uses a mathematical framework called persistent homology to examine microscope images of cell assemblages. Specifically, it looks at the topological patterns -- loops or holes -- that the cells form collectively. By looking at which patterns persist across different spatial resolutions, the algorithm determines which patterns are intrinsic to the image.

It starts by looking at the cells in their finest detail, determining which cells seem to be part of topological loops. Then it blurs the detail a bit by drawing a circle around each cell -- effectively making each cell a little larger -- to see which loops persist at that more coarse-grained scale and which get blurred out. The process is repeated until all the topological features eventually disappear. In the end, the algorithm produces a sort of bar code showing which loops persist across spatial scales. Those that are most persistent are stored as a simplified representation of the overall shape.
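A brief sketch of this kind of computation (a Python example using the open-source ripser.py package rather than the authors' own pipeline; the cell coordinates are synthetic): compute the persistence diagram of one-dimensional loops from cell positions and keep the most persistent ones.

import numpy as np
from ripser import ripser  # pip install ripser

# Synthetic cell centroids scattered around a ring, so one loop should persist
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 100)
cells = np.column_stack([np.cos(theta), np.sin(theta)]) + rng.normal(0, 0.05, (100, 2))

# H1 persistence diagram: each row is the (birth, death) scale of a topological loop
h1 = ripser(cells, maxdim=1)["dgms"][1]
persistence = h1[:, 1] - h1[:, 0]

# The longest-lived loops serve as a simplified representation of the cluster's shape
top_loops = h1[np.argsort(persistence)[::-1][:3]]
print("Most persistent loops (birth, death):")
print(top_loops)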

As it turns out, those persistent topological objects can be used to categorize clusters of different types of cells. After training their algorithm on computer-simulated cells programmed to behave like different types of cells, the team turned it loose on real experimental images of migratory cells. Those cells had been exposed to varying biochemical treatments so that some were more epithelial, some were more mesenchymal, and some were somewhere in between. The study showed that the topological algorithm was able to correctly classify different spatial patterns according to which biochemical treatment the cells had received.
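Continuing the sketch above (a generic classifier on simple persistence summaries, not the team's exact feature set; the diagrams here are synthetic stand-ins), each image's persistence diagram can be reduced to a few numbers and fed to an off-the-shelf classifier:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def persistence_summary(h1, threshold=0.1):
    # Reduce an H1 persistence diagram (rows of birth, death) to a small feature vector
    lifetimes = h1[:, 1] - h1[:, 0]
    return np.array([np.sum(lifetimes > threshold), lifetimes.sum(), lifetimes.max()])

# Synthetic stand-ins for per-image diagrams (long-lived loops vs. short-lived ones)
rng = np.random.default_rng(3)
loopy = [np.sort(rng.uniform(0, [0.2, 0.9], size=(8, 2)), axis=1) for _ in range(30)]
diffuse = [np.sort(rng.uniform(0, [0.2, 0.3], size=(8, 2)), axis=1) for _ in range(30)]
X = np.array([persistence_summary(d) for d in loopy + diffuse])
y = np.array([0] * 30 + [1] * 30)  # e.g., two different biochemical treatments

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("Training accuracy:", clf.score(X, y))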

"It was able to pull out all of these experimental treatments just by identifying these persistent topological loops," Wong said. "We were kind of amazed at how well it did."

The team hopes that one day the algorithm could be used in laboratory experiments to test drugs, helping to determine how different drugs can alter cell migration and adhesion. Eventually, it may also be used on medical images of tumors, potentially helping doctors to determine how malignant those tumors may be.

"We're looking for ways to catch subtleties that might not be apparent to the human eye," Wong said. "We hope that this might be a human interpretable approach that complements existing machine learning approaches."