Italian scientists develop computational model revealing the mechanism of prion replication in mad cow disease

The study, from the Dulbecco Telethon Institute and the University of Trento in collaboration with the Italian National Institute of Nuclear Physics, will open up new research avenues to design drugs against incurable neurodegenerative disorders

The study was carried out in the Dulbecco Telethon Laboratory of Prions & Amyloids at CIBIO, led by Emiliano Biasini of the University of Trento, and involved the team led by Prof. Pietro Faccioli, a physicist from the same university affiliated with the Italian National Institute of Nuclear Physics.

Prions are unusual infectious agents made of aberrantly folded forms of a physiological protein called the cellular prion protein, or PrPC. These pathogens are known to replicate in the absence of genetic material by recruiting normal PrPC molecules at the surface of cells and forcing them to change conformation and become infectious themselves. The resulting accumulation of prion particles in the nervous system lies at the root of neurodegenerative conditions known as transmissible spongiform encephalopathies, including Creutzfeldt-Jakob disease, fatal familial insomnia and Gerstmann-Sträussler-Scheinker syndrome in humans, as well as a variety of other pathologies in mammals, most famously mad cow disease, which in the nineties caused a large epidemic in the UK and Europe and several cases of cross-species transmission to humans through the ingestion of infected meat.

"Even though we have known of the existence of prions since 1982, thanks to the work of Nobel Laureate Stanley Prusiner, direct information regarding the structure of these non-canonical infectious agents is still lacking," says Emiliano Biasini, Assistant Telethon Scientist and Associate Professor at the Department CIBIO, University of Trento. "In fact, their insoluble and aggregated nature hampers the use of classical high-resolution techniques for studying protein structures, such as X-ray crystallography or nuclear magnetic resonance. However, such information is instrumental for the rational design of drugs against these agents. In an attempt to fill this gap, we found unexpected help from a discipline usually considered far removed from biology or chemistry: particle physics."

Telethon researchers revised previous models of prion structure and proposed a novel architecture consistent with recent experimental data. This new model allowed Pietro Faccioli's group to apply their innovative algorithms for the reliable prediction of protein conformational transitions to the prion replication mechanism. "Cross-disciplinarity has been the key," explains Giovanni Spagnolli, Ph.D. student at the Department CIBIO, University of Trento and first author of the paper. "Without the contribution of our colleagues from physics, we would never have been able to afford the kind of calculation required to simulate such complex systems. For the first time we reconstructed a physically plausible mechanism of prion replication, which now allows us to formulate new hypotheses and design new drug discovery schemes to tackle the neurodegenerative processes unleashed by these infectious agents."

"The calculation algorithms that allowed the reconstruction of prion replication are derived from mathematical methods of theoretical physics, originally formulated to study phenomena of the subatomic world, such as the quantum tunneling effect. These mathematical methods have been adapted here to allow the simulation of complex biomolecular processes, such as the folding and aggregation of proteins," said Pietro Faccioli, Associate Professor at the Department of Physics, University of Trento and affiliated with the Italian National Institute of Nuclear Physics.

Penn State researchers create artificial intelligence tool to detect discrimination

A new artificial intelligence (AI) tool for detecting unfair discrimination--such as on the basis of race or gender--has been created by researchers at Penn State and Columbia University.

Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination in decisions, whether made by human decision makers or automated AI systems, can be extremely challenging. The challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains--including policing, consumer finance, higher education and business.

"Artificial intelligence systems--such as those involved in selecting candidates for a job or for admission to a university--are trained on large amounts of data," said Vasant Honavar, Professor and Edward Frymoyer Chair of Information Sciences and Technology, Penn State. "But if these data are biased, they can affect the recommendations of AI systems."

For example, he said, if a company historically has never hired a woman for a particular type of job, then an AI system trained on this historical data will not recommend a woman for a new job.

"There's nothing wrong with the machine learning algorithm itself," said Honavar. "It's doing what it's supposed to do, which is to identify good job candidates based on certain desirable characteristics. But since it was trained on historical, biased data it has the potential to make unfair recommendations."

The team created an AI tool for detecting discrimination with respect to a protected attribute, such as race or gender, by human decision makers or AI systems. The tool is based on the concept of causality, in which one thing--a cause--brings about another thing--an effect.

"For example, the question, 'Is there gender-based discrimination in salaries?' can be reframed as, 'Does gender have a causal effect on salary?' or, in other words, 'Would a woman be paid more if she were a man?'" said Aria Khademi, graduate student in information sciences and technology, Penn State.

Since it is not possible to directly know the answer to such a hypothetical question, the team's tool uses sophisticated counterfactual inference algorithms to arrive at a best guess.

"For instance," said Khademi, "one intuitive way of arriving at a best guess as to what a fair salary would be for a female employee is to find a male employee who is similar to the woman with respect to qualifications, productivity and experience. We can minimize gender-based discrimination in salary if we ensure that similar men and women receive similar salaries."
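The matching idea Khademi describes can be sketched in a few lines of code. This is only an illustrative toy: the `Employee` records, the similarity measure and the nearest-match rule below are assumptions made for this sketch, not the authors' published algorithm, which relies on more sophisticated counterfactual inference.

```python
# Toy sketch of matching-based counterfactual reasoning about salaries.
# All data are invented for illustration.
from dataclasses import dataclass

@dataclass
class Employee:
    gender: str
    years_experience: float
    productivity: float  # e.g., a normalized performance score
    salary: float

def similarity(a: Employee, b: Employee) -> float:
    """Distance on non-protected attributes; smaller means more similar."""
    return (abs(a.years_experience - b.years_experience)
            + abs(a.productivity - b.productivity))

def counterfactual_salary(person: Employee, pool: list) -> float:
    """Best guess of the salary `person` would earn if they belonged to
    the other gender: the salary of the most similar such employee."""
    others = [e for e in pool if e.gender != person.gender]
    match = min(others, key=lambda e: similarity(person, e))
    return match.salary

staff = [
    Employee("M", 5, 0.8, 62000.0),
    Employee("M", 10, 0.9, 80000.0),
    Employee("F", 5, 0.8, 54000.0),
]
alice = staff[2]
gap = counterfactual_salary(alice, staff) - alice.salary
print(gap)  # → 8000.0, a gender-linked gap between otherwise similar employees
```

A fairness check in this spirit would flag pairs of similar employees of different genders whose salaries diverge beyond some tolerance.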

The researchers tested their method using various types of available data, such as income data from the U.S. Census Bureau to determine whether there is gender-based discrimination in salaries. They also tested their method using the New York City Police Department's stop-and-frisk program data to determine whether there is discrimination against people of color in arrests made after stops. The results appeared in May in Proceedings of The Web Conference 2019.

"We analyzed an adult income data set containing salary, demographic and employment-related information for close to 50,000 individuals," said Honavar. "We found evidence of gender-based discrimination in salary. Specifically, we found that the odds of a woman having a salary greater than $50,000 per year are only one-third those of a man. This would suggest that employers should look for and correct, when appropriate, gender bias in salaries."
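Note that the finding is stated in terms of odds (p / (1 - p)), not raw probabilities. The sketch below shows how such an odds ratio is computed; the fractions are invented for illustration and are not the paper's actual estimates.

```python
# Sketch of an odds-ratio calculation like the one quoted above.
# The probabilities below are hypothetical, not the study's numbers.

def odds(p: float) -> float:
    """Convert a probability into odds: p / (1 - p)."""
    return p / (1.0 - p)

# Hypothetical fractions earning more than $50,000/year:
p_men = 0.30     # 30% of men
p_women = 0.125  # 12.5% of women

odds_ratio = odds(p_women) / odds(p_men)
print(round(odds_ratio, 2))  # → 0.33, i.e. women's odds are about one-third of men's
```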

Although the team's analysis of the New York stop-and-frisk dataset--which contains demographic and other information about individuals stopped by the New York City police force--revealed evidence of possible racial bias against Hispanic and African American individuals, it found no evidence of discrimination against them on average as a group.

"You cannot correct for a problem if you don't know that the problem exists," said Honavar. "To avoid discrimination on the basis of race, gender or other attributes you need effective tools for detecting discrimination. Our tool can help with that."

Honavar added that as data-driven artificial intelligence systems increasingly determine how businesses target advertisements to consumers, how police departments monitor individuals or groups for criminal activity, how banks decide who gets a loan, whom employers decide to hire, and how colleges and universities decide who gets admitted or receives financial aid, there is an urgent need for tools such as the one he and his colleagues developed.

"Our tool," he said, "can help ensure that such systems do not become instruments of discrimination, barriers to equality, threats to social justice and sources of unfairness."

International research team develops new method that translates results from climate supercomputer models to probabilities of an ice-free Arctic

Scientists from South Korea, Australia and the USA used results from climate models and a new statistical approach to calculate the likelihood for Arctic sea ice to disappear at different warming levels

Research published in this week's issue of Nature Communications reveals a considerable chance of an ice-free Arctic Ocean at the global warming limits stipulated in the Paris Agreement.

Future climate projections are usually obtained from global climate supercomputer models. These models are based on several hundred thousand lines of computer code, developed to solve the physical equations of the atmosphere, ocean, sea ice and other climate components. Given a scenario of future greenhouse gas concentrations, each computer model produces a version of what the future of the Earth's climate might look like. Transforming this information into practical decisions is not easy, because of the remaining uncertainties in the magnitude of future climate change on regional scales. Decision making in a warming world requires an understanding of the probabilities of certain climatic events occurring.

Up to now, it has been difficult to extract meaningful probabilities from climate models, because these models sometimes share common computer code or make similar assumptions regarding the implementation of less well understood processes, such as clouds or vegetation. To obtain more accurate probability estimates for future climate change in the Arctic region, the research team has developed a novel statistical method which translates results from a suite of climate supercomputer model simulations to probabilities. This method ranks the models in terms of how well they agree with present-day observations and accounts also for inter-dependencies amongst the models.
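A common way to implement this kind of performance-plus-independence weighting, in the general spirit of the method described (though not necessarily the authors' exact equations), is to reward each model's closeness to observations while discounting models that nearly duplicate one another. A minimal sketch, with invented numbers:

```python
import math

# Illustrative sketch of skill-and-independence model weighting.
# The ensemble values and scale parameters below are invented.

def weight(model, obs, models, sigma_d=1.0, sigma_s=1.0):
    """Weight = skill (closeness to observations) divided by a
    dependence term that grows when other models are very similar."""
    skill = math.exp(-((model - obs) / sigma_d) ** 2)
    # Penalize models that closely duplicate others (sum includes self, so >= 1):
    dependence = sum(math.exp(-((model - other) / sigma_s) ** 2)
                     for other in models)
    return skill / dependence

# Toy ensemble: each model's simulated present-day Arctic sea-ice
# extent (million km^2), compared against an observed value.
obs = 6.0
models = [5.8, 5.9, 7.5]  # the first two nearly duplicate each other

raw = [weight(m, obs, models) for m in models]
total = sum(raw)
weights = [w / total for w in raw]
# The near-duplicate pair effectively shares its weight; a probability of
# an ice-free Arctic would then be a weighted vote across the models.
```

With such weights in hand, the probability of an event like an ice-free summer is the weighted fraction of models projecting it, rather than a simple one-model-one-vote count.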

"Translating model dependence into mathematical equations has been a long-standing issue in climate science. It is exciting to see that our method can provide a general framework to solve this problem," said coauthor Won Chang, assistant professor in the department of Mathematical Sciences at the University of Cincinnati, USA. 

The researchers applied the new statistical method to climate model projections of the 21st century. Using 31 different climate models, which exhibit considerable inter-dependence, the authors find that there is at least a 6% probability that summer sea ice in the Arctic Ocean will disappear at 1.5°C of warming above preindustrial levels - the lower limit recommended by the Paris Agreement of the United Nations Framework Convention on Climate Change (Figure 1). For 2°C of warming, the probability of losing the ice rises to at least 28%. Most likely, we will see a sea-ice-free summer Arctic Ocean for the first time at 2 to 2.5°C of warming.

"Our work provides a new statistical and mathematical framework to calculate climate change and impact probabilities," commented Jason Evans, professor at the Climate Change Research Centre at UNSW Sydney, Australia.

"Up to now, there was no established mathematical framework to assign probabilities to non-exclusive theories. While we only tested the new approach on climate models, we are eager to see if the technique can be applied to other fields, such as stock market predictions, plane accident investigations, or medical research," says Roman Olson, the lead author and researcher at the Institute for Basic Science, Center for Climate Physics (ICCP) in South Korea.