Australian-made AI may improve suicide prevention in the future

The loss of any life can be devastating, but the loss of life from suicide is especially tragic. 

Around nine Australians take their own life each day, and it is the leading cause of death for Australians aged 15–44. Suicide attempts are more common, with some estimates stating that they occur up to 30 times as often as deaths.

“Suicide has large effects when it happens. It impacts many people and has far-reaching consequences for family, friends, and communities,” says Karen Kusuma, a UNSW Sydney Ph.D. candidate in psychiatry at the Black Dog Institute, who investigates suicide prevention in adolescents.

Ms. Kusuma and a team of researchers from the Black Dog Institute and the Centre for Big Data Research in Health recently investigated the evidence base of machine learning models and their ability to predict future suicidal behaviors and thoughts. They evaluated the performance of 54 machine learning algorithms previously developed by researchers to predict suicide-related outcomes of ideation, attempt, and death.

The meta-analysis, published in the Journal of Psychiatric Research, found that machine learning models outperformed traditional risk prediction models, which have historically performed poorly at predicting suicide-related outcomes.

“Overall, the findings show there is a preliminary but compelling evidence base that machine learning can be used to predict future suicide-related outcomes with very good performance,” Ms. Kusuma says. 

Traditional suicide risk assessment models 

Identifying individuals at risk of suicide is essential for preventing and managing suicidal behaviors. However, risk prediction is difficult.

In emergency departments (EDs), risk assessment tools such as questionnaires and rating scales are commonly used by clinicians to identify patients at elevated risk of suicide. However, evidence suggests they are ineffective in accurately predicting suicide risk in practice.

“Suicide is complex, with many dynamic factors that make it difficult to assess a risk profile using this assessment process,” Ms. Kusuma says. “While there are some common factors shown to be associated with suicide attempts, what the risks look like for one person may look very different in another.”

A post-mortem analysis of people who died by suicide in Queensland found that, of those who received a formal suicide risk assessment, 75 percent were classified as low risk and none as high risk. Previous research examining the past 50 years of quantitative suicide risk prediction models also found they were only slightly better than chance in predicting future suicide risk.

“Suicide is a leading cause of years of life lost in many parts of the world, including Australia. But the way suicide risk assessment is done hasn’t developed recently, and we haven’t seen substantial decreases in suicide deaths. In some years, we’ve seen increases,” Ms. Kusuma says.

Despite the shortage of evidence in favor of traditional suicide risk assessments, their administration remains a standard practice in healthcare settings to determine a patient’s level of care and support. Those identified as having a high risk typically receive the highest level of care, while those identified as low risk are discharged. 

“Using this approach, unfortunately, the high-level interventions aren’t being given to the people who really need help. So we must look to reform the process and explore ways we can improve suicide prevention,” Ms. Kusuma says. 

Machine learning suicide screening 

Ms. Kusuma says there is a need for more innovation in suicidology and a re-evaluation of standard suicide risk prediction models. Efforts to improve risk prediction have led to her research using artificial intelligence (AI) to develop suicide risk algorithms. 

“Having AI that could take in a lot more data than a clinician would allow us to better recognize which patterns are associated with suicide risk,” Ms. Kusuma says.

In the meta-analysis study, machine learning models outperformed the benchmarks set previously by traditional clinical, theoretical, and statistical suicide risk prediction models. They correctly identified 66 percent of people who would experience a suicide outcome and 87 percent of people who would not.
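Read as pooled sensitivity and specificity, those two figures can be made concrete with a small worked example; the cohort size and counts below are purely illustrative and do not come from the meta-analysis.

```python
# Worked example of the pooled 66% / 87% figures, assuming a
# hypothetical cohort of 1,000 people, 100 of whom go on to
# experience a suicide-related outcome. All counts are illustrative.
true_positives = 66    # at-risk people the model flags (66% of 100)
false_negatives = 34   # at-risk people the model misses
true_negatives = 783   # not-at-risk people correctly cleared (87% of 900)
false_positives = 117  # not-at-risk people incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
print(f"sensitivity: {sensitivity:.0%}, specificity: {specificity:.0%}")
# -> sensitivity: 66%, specificity: 87%
```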

“Machine learning models can predict suicide deaths well relative to traditional prediction models and could become an efficient and effective alternative to conventional risk assessments,” Ms. Kusuma says. 

Machine learning models are not bound by the strict assumptions of traditional statistical models. Instead, they can be flexibly applied to large datasets to model complex relationships between many risk factors and suicidal outcomes. They can also incorporate responsive data sources, including social media, to identify peaks of suicide risk and flag times when interventions are most needed.
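As a sketch of what this kind of flexible modeling can look like, the snippet below fits a gradient-boosted classifier to synthetic tabular data containing a non-linear interaction between risk factors. The features, parameters, and choice of library are assumptions for illustration, not details from the reviewed studies.

```python
# A minimal sketch, assuming synthetic data: tree-based models capture
# interactions between risk factors without distributional assumptions.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 20))                 # 20 hypothetical risk factors
signal = X[:, 0] * X[:, 1] + 0.5 * X[:, 2]   # non-linear interaction
y = (signal + rng.normal(scale=0.5, size=n) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = HistGradientBoostingClassifier().fit(X_tr, y_tr)
pred = model.predict(X_te)
print("sensitivity:", round(recall_score(y_te, pred), 2))
print("specificity:", round(recall_score(y_te, pred, pos_label=0), 2))
```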

“Over time, machine learning models could be configured to take in more complex and larger data to better identify patterns associated with suicide risk,” Ms. Kusuma says. 

The use of machine learning algorithms to predict suicide-related outcomes is still an emerging research area, with 80 percent of the identified studies published in the past five years. Ms. Kusuma says future research will also help address the risk of aggregation bias found in algorithmic models to date.

“More research is necessary to improve and validate these algorithms, which will then help progress the application of machine learning in suicidology,” Ms. Kusuma says. “While we’re still a way off implementation in a clinical setting, research suggests this is a promising avenue for improving suicide risk screening accuracy in the future.” 

Korean Artificial Sun discovers new high-temperature plasma operating mode for fusion energy

[Figure: Plasma configuration of a FIRE mode in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The colour of the lines indicates the ion temperature in keV, where 10 keV corresponds to ~120 million kelvin.]

The 'FIRE mode' is expected to resolve operational difficulties of commercial fusion reactors in the future.
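As a quick check on the caption's conversion: temperature and particle energy are related by T = E / k_B, and with Boltzmann's constant expressed in eV/K, 10 keV does come out near 120 million kelvin. The snippet below is purely illustrative.

```python
# Converting a plasma temperature quoted in keV to kelvin via T = E / k_B.
K_B_EV_PER_K = 8.617333262e-5  # Boltzmann constant in eV/K

def kev_to_kelvin(energy_kev: float) -> float:
    return energy_kev * 1e3 / K_B_EV_PER_K

print(f"{kev_to_kelvin(10):.3e} K")  # ~1.16e+08 K, i.e. ~120 million kelvin
```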

A research team from the Korea Institute of Fusion Energy (KFE) and Seoul National University (SNU) has announced the discovery of a new plasma operating mode that can improve plasma performance for fusion energy, based on an analysis of plasma operations at ultra-high temperatures above 100 million degrees Celsius at the Korea Superconducting Tokamak Advanced Research (KSTAR) device.

To generate energy through a fusion reaction, as occurs in the sun, it is essential to confine super-hot, dense plasma in a fusion reactor stably and for long periods. To secure such a technology, fusion energy researchers worldwide have worked to find the most efficient plasma operating mode through theoretical and experimental studies.

One of the most widely known operating modes is H-mode (High confinement mode). It has been considered the primary plasma operating method for fusion reactors, thereby serving as a benchmark for developing next-generation operating modes.

The main downside of H-mode, however, is the appearance of a plasma instability, the so-called edge-localized mode (ELM), in which the pressure at the plasma edge exceeds a threshold, bursting the plasma like a balloon. Since this can damage the inner walls of a reactor, researchers have been exploring various ways to control ELMs while trying to develop a more stable plasma operating mode.

By analyzing experimental data from KSTAR operations and examining them through simulations, KFE and SNU researchers found that fast ions, the high-energy particles generated by external heating, stabilize turbulence inside the plasma, resulting in a dramatic increase in plasma temperature. This newfound plasma regime has been named “Fast Ion Regulated Enhancement (FIRE) mode.”

Since FIRE mode can improve plasma performance compared to H-mode while generating no ELMs and providing easier operational control, it is expected to open up new possibilities for plasma operation technology in commercial fusion reactors down the road, as well as to contribute to the operation of the International Thermonuclear Experimental Reactor (ITER).

Japanese-built models reveal the determinants of persistent, severe COVID-19

[Figure: Left, proportion of DCs in healthy individuals, during acute COVID-19 infection, and 7 months after infection, based on simulations and clinical observations (Obs). Right, comparison of viral loads between the baseline model and the severe-symptom models under varying conditions of antigen-reporting DC function (APC) or interferon levels.]

As COVID-19 wreaks havoc across the globe, one characteristic of the infection has not gone unnoticed. The disease is heterogeneous in nature, with symptoms and severity spanning a wide range. The medical community now believes this is attributable to variations in the human hosts’ biology and has little to do with the virus per se. Shedding some light on this conundrum are Associate Professor SUMI Tomonari from the Research Institute for Interdisciplinary Science (RIIS) at Okayama University and Associate Professor Kouji Harada from the Center for IT-based Education (CITE) at the Toyohashi University of Technology. The duo recently reported their findings on imbalances in the host immune system that facilitate persistent or severe forms of the disease in some patients.

The researchers commenced their study with supercomputer simulations of models based on a host’s immune system and its natural response to SARS-CoV-2 exposure. Mathematical equations for the dynamics of cells infected by SARS-CoV-2 were plugged in to predict their behavior. The immune system has messenger cells known as dendritic cells (DCs), which report information (in the form of antigens) about invaders to the warriors of the immune system, the T cells. The model showed that at the onset of infection, DCs from infected tissues were activated, and antibodies to neutralize SARS-CoV-2 then gradually started building.
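For a flavor of what such equations look like, the sketch below integrates the standard target-cell-limited model of within-host viral dynamics. This is only a simplified stand-in: the authors' model additionally tracks DCs, T cells, interferons, and antibodies, and every parameter value here is an illustrative assumption rather than a figure from the paper.

```python
# A minimal within-host viral dynamics sketch (target-cell-limited model),
# not the authors' full immune model. All parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def viral_dynamics(t, y, beta=2e-7, delta=1.0, p=5.0, c=5.0):
    """T: uninfected target cells, I: infected cells, V: free virus."""
    T, I, V = y
    dT = -beta * T * V              # target cells become infected
    dI = beta * T * V - delta * I   # infected cells die at rate delta
    dV = p * I - c * V              # virions are produced and cleared
    return [dT, dI, dV]

sol = solve_ivp(viral_dynamics, (0, 30), [1e7, 0.0, 10.0],
                t_eval=np.linspace(0, 30, 301))
peak_day = sol.t[np.argmax(sol.y[2])]
print(f"simulated viral load peaks around day {peak_day:.1f}")
```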

To investigate long-term COVID-19, the behavior of DCs 7 months after infection was evaluated in the supercomputer simulation. The baseline model simulation revealed that DCs decreased drastically during the peak of infection and slowly built up again, though they tended to remain below pre-infection levels. These observations were similar to those seen in clinical patient samples, suggesting that low DC levels were associated with tenacious long-term infection.

The subsequent step was to understand whether DC function contributed to disease severity. It was found that a deficiency in the antigen-reporting function of DCs and lowered levels of chemicals known as interferons released by them were related to severe symptoms. A decrease in both of these functions resulted in higher amounts of virus in the blood (viral load). What’s more, the researchers also found two factors that affected the virus’s ability to replicate in the host, namely, antigen-reporting DCs and the presence of antibodies against the virus. Anomalies in these functions could hamper viral clearance, enabling the virus to stay in the body longer than expected, whereas strong performance of these immune functions suppresses viral replication and yields prompt viral clearance.

Components of immune signaling that directly affect the outcome of COVID-19 infection were revealed in this study. “Our mathematical model predicted the persistent DC reduction and showed that certain patients with severe and even mild symptoms could not effectively eliminate the virus and could potentially develop long COVID,” concludes the duo. A better understanding of these immune responses could help shape the prognosis of and therapeutic interventions against COVID-19.

NNSA fails to implement cybersecurity measures

A recent document acquired by Atlas VPN reveals that a federal watchdog chastised the US agency in charge of maintaining and modernizing the country's nuclear arsenal for lax cybersecurity procedures that jeopardize both IT and operational technology networks.

The United States Government Accountability Office (GAO) issued an 81-page report on September 24th, 2022, outlining the National Nuclear Security Administration's (NNSA) cybersecurity failings.

The NNSA is a separate agency within the Department of Energy (DOE) tasked with managing U.S. nuclear weapons at eight laboratory and production sites across the country.

According to the GAO, the NNSA and its contractors have not fully implemented six legally mandated cybersecurity measures, including basic risk management practices.

NNSA failed to fully implement two out of six mandatory cybersecurity measures, including the development and maintenance of an organization-wide continuous monitoring strategy as well as the documentation of cybersecurity policies and plans.

NNSA contractors responsible for the management and operational activities have to adhere to the same strict standards, but they failed on multiple fronts as well. Most notably, they were unable to implement the same organization-wide monitoring strategy that NNSA struggled with.

Out of seven M&O (management and operating) contractors, four substantially implemented the monitoring strategy, one did so partially, and two made little progress on the measure.

Unlike NNSA, all contractors were able to document and maintain cybersecurity policies and plans according to the outlined standards.

However, four contractors assigned most, but not all, cybersecurity management roles and responsibilities. One M&O partner assigned only about half of the roles and duties.

The last area where some M&O contractors struggled was the establishment and maintenance of an organization-wide cybersecurity strategy. Two partners substantially implemented the measure, while one implemented only about half of it.

Why GAO did this study

NNSA and its site contractors incorporate information systems into nuclear weapons, automate production equipment, and develop warheads using supercomputer modeling.

To read the full article, head over to:

https://atlasvpn.com/blog/report-us-nuclear-security-body-failed-to-implement-cybersecurity-measures

Brazilian researchers develop tool that encodes patient data as DNA sequences to integrate databases for epidemiological analysis

Brazilian researchers have created an innovative and agile computational tool to link and analyze different health databases with millions of patient records. Called Tucuxi-BLAST, the platform encodes identification records in a database, such as patient name, mother’s name, and place of birth, using letters that represent the nucleotides in a DNA sequence (A, T, C, or G). This “conversion” of individuals to DNA enables accurate record linkage across databases despite typographical errors and other inconsistencies. The tool can be used in research, epidemiological analysis, and public policy formulation.

For example, people who have been vaccinated by the SUS, Brazil’s national health service, can be cross-referenced with other datasets to find vaccinated patients with a specific disease. Even if a vaccination record contains errors or incomplete fields, Tucuxi-BLAST is able to link it to the same patient in another database because it treats inconsistencies as if they were DNA mutations. Genomics tools routinely compare sequence fragments to decide whether they are similar enough to be aligned. If each individual corresponds to a sequence of letters, data from different repositories can be cross-referenced and linked by the tool.
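As a toy illustration of the principle (not Tucuxi-BLAST's actual codon wheel), the sketch below maps each character of a record to a DNA codon, so a typo becomes a small "mutation"; Python's standard difflib stands in for BLAST's alignment scoring, and the codon table and names are hypothetical.

```python
# Toy record-to-DNA encoding: a one-character typo becomes a three-base
# "mutation" that similarity scoring tolerates. Illustrative only.
from difflib import SequenceMatcher
from itertools import product

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "                  # hypothetical alphabet
CODONS = ["".join(c) for c in product("ATCG", repeat=3)]  # 64 codons
TABLE = dict(zip(ALPHABET, CODONS))                       # fixed toy codon table

def encode(field: str) -> str:
    """Encode an identification field as a DNA-like sequence."""
    return "".join(TABLE.get(ch, "") for ch in field.upper())

a = encode("MARIA DA SILVA")
b = encode("MARIA DA SYLVA")   # same person, one typographical error
score = SequenceMatcher(None, a, b).ratio()
print(f"similarity: {score:.2f}")  # remains high despite the "mutation"
```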

“The SUS is a valuable source of information for medical and epidemiological research because it stores health data for millions of patients. However, records relating to diseases and other types of data are stored in different databases that don’t always talk to each other. The method we’ve developed is able to effect record linkage accurately and at great speed,” Helder Nakaya, corresponding author of an article on the study published in the journal PeerJ, told Agência FAPESP.

Nakaya is an immunologist affiliated with the University of São Paulo’s School of Pharmaceutical Sciences (FCF-USP), the Albert Einstein Jewish Hospital (HIAE), the Scientific Platform Pasteur-USP, and Todos pela Saúde institute. He also belongs to the Center for Research on Inflammatory Diseases (CRID), one of the Research, Innovation, and Dissemination Centers (RIDCs) funded by FAPESP. 

The study was also supported by FAPESP via two other projects (18/14933-2 and 19/27139-5).

Using the tool in practice

Even before the article was published, Tucuxi-BLAST began to be deployed in practice. It was used, for example, to cross-reference four years of data from the Ministry of Health’s Malaria Surveillance System with clinical data from the Dr. Heitor Vieira Dourado Tropical Medicine Foundation (in Manaus, Amazonas state), a branch of Oswaldo Cruz Foundation (Fiocruz), another arm of the ministry. 

The results showed that being HIV-positive is a risk factor for Plasmodium vivax malaria patients, representing an additional challenge for public policy. Given the lack of unique identifiers, Tucuxi-BLAST used the patient’s name, mother’s name, and date of birth. The findings were described in an article published in May 2022 in Scientific Reports.

The study was led by researchers at Amazonas State University (UEA). Nakaya and FCF-USP’s José Deney Alves Araújo, the first author of the PeerJ article, also participated. Araújo named the tool Tucuxi in honor of Sotalia fluviatilis, a freshwater dolphin that inhabits the rivers of the Amazon Basin.

BLAST (Basic Local Alignment Search Tool) refers to a suite of bioinformatics programs used to generate alignments between nucleotide or protein sequences across large databases.

How it works

To develop the new method, the scientists translated patient data into DNA sequences using a codon wheel that changed dynamically over different runs without impairing the efficiency of the process. Codons are sequences of three nucleotides that code for a specific amino acid in a DNA or RNA molecule. Codon wheels are used to identify the amino acids encoded by any DNA or RNA codon.
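One plausible reading of that dynamic wheel is sketched below: the character-to-codon assignment is reshuffled for each run (here from a seed), so every record within a run shares one mapping while the mapping itself varies between runs. This is an assumption made for illustration, not the paper's actual implementation.

```python
# A hypothetical "dynamic" codon wheel: reshuffled per run from a seed.
import random
from itertools import product

def make_codon_wheel(seed: int) -> dict:
    """Build a run-specific character-to-codon mapping."""
    codons = ["".join(c) for c in product("ATCG", repeat=3)]
    random.Random(seed).shuffle(codons)           # per-run shuffle
    return dict(zip("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ", codons))

wheel_run1 = make_codon_wheel(seed=1)
wheel_run2 = make_codon_wheel(seed=2)
print(wheel_run1["A"], wheel_run2["A"])  # 'A' maps to a run-specific codon
```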

This encoding scheme enabled real-time data encryption, thus providing an additional layer of privacy during the linking process. “It used DNA to encrypt the information and guarantee privacy,” Nakaya said.

The DNA-encoded identification fields were compared using BLAST, and machine learning algorithms automatically classified the final results. 

As in comparative genomics, where genes from different genomes are compared to determine common and unique sequences, Tucuxi-BLAST also permits the simultaneous integration of data from multiple administrative databases without the need for complex data pre-processing. 

In the study, the group used Tucuxi-BLAST to test and compare a simulated database containing 300 million records, as well as four large administrative databases containing data for real cases of patients infected with different pathogens.

The conclusion was that Tucuxi-BLAST successfully processed record linkage for the largest dataset (200,000 records), despite misspellings and other errors and omissions, in under a fifth of the time required by the state-of-the-art method: 23 hours versus 127 hours (five days and seven hours).

The researchers set up a website where users can translate words, phrases, and names into DNA.

Several countries, such as the UK, Canada, and Australia, have invested in successful initiatives to integrate databases and develop novel data analysis strategies, Nakaya noted.

A Brazilian example is the Center for Health Data and Knowledge Integration (CIDACS/Fiocruz), which has integrated administrative and health databases to assemble records for 114 million people.