Study shows smoking inhibits cancer-fighting proteins, increasing cancer risk and complicating treatment

Scientists at the Ontario Institute for Cancer Research (OICR) have published a study revealing a concerning link between smoking and the inhibition of cancer-fighting proteins. The findings, published in the journal Science Advances, suggest that smoking not only increases the risk of developing cancer but also makes it more difficult to treat.

The research team, led by OICR investigator Dr. Jüri Reimand and University of Toronto PhD student Nina Adler, analyzed DNA from more than 12,000 tumor samples across 18 different types of cancer. The study found a significant correlation between tobacco smoking and harmful DNA changes that prevent the formation of certain proteins vital for keeping abnormal cell growth in check.

The study revealed that these harmful changes in DNA, known as "stop-gain mutations," were particularly prevalent in genes called "tumor-suppressors," which play an essential role in inhibiting the growth of abnormal cells. According to Adler, without these tumor suppressors, abnormal cells can continue to grow unchecked, increasing the risk of developing cancer.
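The effect of a stop-gain mutation can be illustrated with a short sketch (the DNA sequence and codon subset below are hypothetical examples, not data from the study): a single C-to-T change converts an arginine codon (CGA) into a stop codon (TGA), so translation halts early and the resulting tumor-suppressor protein is truncated.

```python
# Minimal codon table covering only the codons used in this illustration.
# "*" marks a stop codon, which terminates translation.
CODON_TABLE = {"ATG": "M", "GCA": "A", "CGA": "R", "GGT": "G", "TGA": "*"}

def translate(dna: str) -> str:
    """Translate a DNA coding sequence codon-by-codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "*":  # stop-gain: translation ends here
            break
        protein.append(amino_acid)
    return "".join(protein)

normal = "ATGGCACGAGGT"   # hypothetical wild-type sequence -> full protein
mutant = "ATGGCATGAGGT"   # C->T at position 7 turns CGA (Arg) into TGA (stop)

print(translate(normal))  # full-length product
print(translate(mutant))  # truncated product: the protein loses its tail
```

The truncated product typically cannot perform its tumor-suppressing function, which is why these mutations leave abnormal cell growth unchecked.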

Using computational tools, the researchers also found a clear connection between lung cancer and the distinct genetic footprint that smoking leaves in DNA. Intriguingly, the frequency of these harmful mutations rose with the amount of tobacco smoked: the more a person smokes, the more such mutations accumulate, and the more complex and difficult to treat the cancer becomes.

Dr. Reimand emphasized the damaging effects of tobacco smoking on DNA, stating that it compromises our long-term health by deactivating critical proteins that are the building blocks of our cells.

The study also identified other factors and processes that contribute to the development of stop-gain mutations, such as natural enzymes called APOBEC, which have been strongly associated with breast cancer and other cancer types. Unhealthy diet and alcohol consumption were suggested to have similar damaging effects on DNA, although further research is required to understand these mechanisms fully.

Adler stressed the importance of the study's findings in understanding the molecular-level impacts of smoking on cancer development. "While it is widely known that smoking can cause cancer, elucidating one of the molecular mechanisms through which this occurs is a significant step towards comprehending how our lifestyle choices influence cancer risk," she commented.

Dr. Laszlo Radvanyi, President of OICR, urged individuals to consider the implications of smoking on their well-being. "This study provides further evidence of the immense harm smoking inflicts upon our bodies and reinforces the fact that quitting smoking is always the right choice," he stated.

Ashish Venkat, an assistant professor of computer science and expert in cybersecurity, received an NSF CAREER Award to develop a hardware and software system for rapid and secure mitigation of cyberattacks, including zero-day events.

UVA engineering researcher receives Career Award for a groundbreaking plan to defeat next big cyberattack

Dr. Ashish Venkat is developing a decoupled security response system to mitigate zero-day attacks faster and protect computer programs

Dr. Ashish Venkat, a researcher at UVA Engineering, has received the prestigious CAREER Award from the National Science Foundation for his innovative approach to combat the next major cyberattack. With the rise of zero-day attacks, which exploit previously unknown vulnerabilities and catch victims off guard, Dr. Venkat is determined to revolutionize cybersecurity defense.

The digital landscape is increasingly threatened by zero-day attacks, with a new attack discovered approximately every 17 days. These attacks pose significant challenges to cybersecurity experts, as developers have zero days to fix the flaw before it is exploited. The average response time to patch these vulnerabilities is 15 days, which leads to substantial costs for companies and individuals. Additionally, the process of patching these vulnerabilities often introduces new vulnerabilities, leaving systems exposed.

Dr. Venkat's solution aims to reduce attack response time and protect programs from emerging cyber threats. He plans to develop a "decoupled" security response system, combining hardware and software components to create a holistic security-centric stack. This approach will enable technicians to fix vulnerabilities promptly through a separate security entrance, even while the system is under attack.

By creating a dedicated security tunnel within the system, Dr. Venkat's decoupled approach ensures that technicians can rapidly locate and fix vulnerable components without opening new entry points for bad actors. With the implementation of this innovative solution, he aims to stop emerging zero-day cyberattacks within 24 to 48 hours, significantly faster than the current average response time. Moreover, Dr. Venkat's system could dramatically reduce the time and financial costs associated with frequent patching, redeployment, and hardware upgrades.

Sandhya Dwarkadas, the Walter N. Munster Professor and chair of computer science at UVA, expressed her excitement about Dr. Venkat's project, stating that "Ashish's proposed stack is an innovative use of integrated hardware and software components dedicated to security functions. His project addresses a critical need, and I look forward to following his progress."

In addition to his research, Dr. Venkat aims to improve cybersecurity curricula and awareness among high school, vocational, and college students. His team will establish a mentorship program for undergraduate students, including those traditionally underrepresented in engineering and computer science, contributing to the development of a skilled cybersecurity workforce for the future.

Dr. Venkat's innovative approach doesn't stop at cybersecurity defense; he is also using offensive tactics, known as ethical or "white hat" hacking. By ethically hacking systems, his team aims to identify potential vulnerabilities and strengthen the security of modern systems.

Dr. Venkat's previous research has gained significant attention, including the discovery of a security vulnerability that impacted millions of computers with Intel and AMD processors. His work on hardware Trojan attacks has also been nominated for a best paper award at the DATE 2023 conference.

Global threats such as the WannaCry ransomware attack in 2017 have highlighted the urgency of developing effective cybersecurity measures. Dr. Venkat emphasizes that these attacks can impact anyone, not just large corporations. Small businesses and individuals are also at risk, and the repercussions can be devastating.

Dr. Venkat's ultimate goal is to create cost-effective cybersecurity solutions that are accessible to individuals and small businesses alike. He believes that it is essential to design systems that prioritize security from the outset and to enhance the security of existing vulnerable systems.

In conclusion, Dr. Ashish Venkat's receipt of the CAREER Award recognizes his pioneering work in combating zero-day attacks and improving cybersecurity. With the development of his decoupled security response system, he aims to provide faster mitigation for emerging cyber threats while reducing costs and protecting vital computer programs. His dedication to building a skilled cybersecurity workforce and his use of ethical hacking tactics demonstrate his commitment to ensuring the safety and security of individuals and businesses in an increasingly digital world.

Spanish researchers build a comprehensive database for studying protein aggregation

Protein aggregation is a phenomenon associated with aging and several pathologies like Parkinson's disease, Alzheimer's disease, and amyotrophic lateral sclerosis. This has been a subject of intensive research for several years. To gain a better understanding of it, a team of researchers at the Institut de Biotecnologia i de Biomedicina of the Universitat Autònoma de Barcelona (IBB-UAB) has developed a comprehensive database called A3D-MOBD. The new resource brings together the proteomes of twelve model organisms, including over half a million predictions of protein regions that have a propensity to form aggregates.

The protein folding and computational diseases group at IBB-UAB, led by Professor Salvador Ventura in collaboration with scientists from the University of Warsaw, developed the new database, which was recently published in the journal Nucleic Acids Research. A3D-MOBD provides pre-calculated aggregation propensity analyses and tools for studying this phenomenon on a proteomic scale, as well as evolutionary comparisons between different species.

A3D-MOBD builds on Aggrescan 3D, a method the same group designed in 2015, but with significantly expanded data. It contains more than 500,000 structural predictions for over 160,000 proteins from twelve model organisms, including humans, rats, mice, zebrafish, fruit flies, nematode worms, bacteria, and SARS-CoV-2, the virus that causes COVID-19. The resource's adaptive architecture allows other organisms relevant to the medical, biological, agricultural, and industrial sectors to be added in the future.

The A3D-MOBD tool provides results on protein solubility and stability and includes additional information to contextualize the aggregation process. Several computational resources were used to build the database, including AlphaFold, the artificial intelligence-based protein structure modeling algorithm, and TOPCONS, for predicting protein interactions with lipid membranes. The researchers also linked A3D-MOBD to organism-specific gold-reference databases such as the Human Protein Atlas and WormBase.

Professor Salvador Ventura expressed his anticipation that A3D-MOBD will offer solutions to a much wider audience of researchers, not only because of the large collection of structures but also because of its integration with databases from different biological fields. He is confident that the new database will set a new standard in protein aggregation research and that it will become a basic resource in this field.

In conclusion, the A3D-MOBD database developed by researchers at IBB-UAB is the most comprehensive database available for studying protein aggregation. It brings together proteomes of twelve of the most widely studied model organisms and provides pre-calculated aggregation propensity analyses and tools for studying the phenomenon on a proteomic scale. This database expands scientists' understanding of the basis of protein aggregation and offers researchers insights into why certain diseases develop in some species and not others.

To access the A3D-MOBD database, please visit http://biocomp.chem.uw.edu.pl/A3D2/MODB .


New research conducted by Concordia University suggests that better wind speed predictions could be beneficial for urban power generation

In today's world, where renewable energy is gaining importance, wind-generated electricity is expected to play a vital role in powering our cities. However, accurately predicting wind speed has always been a challenge. Researchers at Concordia University have developed a hybrid method that integrates multiple models, which has improved the accuracy of wind speed forecasts. This groundbreaking research has wide-ranging implications for urban power generation and the transition towards sustainable energy sources.

Challenges in Predicting Wind Speed:

Wind speed is a critical parameter in estimating wind energy potentials in a given location. Reliable wind speed forecasts are essential for utilities to effectively harness wind power and balance the grid. Although several models exist to predict wind speed, they vary in accuracy and reliability. These models often struggle to capture the stochastic behavior and fluctuations of renewables, making it challenging for utilities to design and operate microgrids efficiently.

The Concordia Hybrid Method:

The Concordia study, led by researchers in the Department of Building, Civil, and Environmental Engineering at the Gina Cody School of Engineering and Computer Science, proposes a hybrid method that combines the strengths of different models. The researchers integrate data analysis and outputs from a Weibull probability distribution and a numerical weather prediction (NWP) model.

The Weibull distribution predicts wind speed probabilities based on historical data and other variables, while the NWP model uses physical principles and complex algorithms to simulate future behavior. By combining these models, the researchers were able to significantly improve the accuracy of wind speed forecasts.
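The Weibull component of the approach can be sketched with a short example (a generic illustration of Weibull-distributed wind speeds, not the study's actual code or parameters): wind speeds at a site are commonly modeled as Weibull-distributed, and samples can be drawn by inverse-transform sampling, with the empirical mean matching the analytic Weibull mean.

```python
import math
import random

def sample_weibull(shape: float, scale: float, n: int, seed: int = 0) -> list[float]:
    """Draw n wind-speed samples from a Weibull(shape, scale) distribution
    via inverse-transform sampling: v = scale * (-ln(1 - u))**(1/shape)."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape) for _ in range(n)]

def weibull_mean(shape: float, scale: float) -> float:
    """Analytic mean of the Weibull distribution: scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

# Typical illustrative site parameters: shape ~2 (Rayleigh-like), scale 8 m/s.
speeds = sample_weibull(2.0, 8.0, 50_000)
empirical_mean = sum(speeds) / len(speeds)
print(f"empirical mean: {empirical_mean:.2f} m/s, "
      f"analytic mean: {weibull_mean(2.0, 8.0):.2f} m/s")
```

In the hybrid method described above, probabilities derived from such a fitted distribution are combined with NWP output rather than used alone.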

The Impact and Findings:

Initially, the study fused Weibull probabilities into a Long Short-Term Memory (LSTM) model, a powerful recurrent neural network suited to time-series analysis. The results were already promising, but adding data from the NWP model further enhanced the predictive capabilities. Compared to non-hybridized LSTM predictions, errors in wind speed forecasting over a 48-hour horizon were reduced by 32%.

Looking Ahead:

As wind power continues to grow globally, accurate wind speed prediction is crucial for achieving sustainable energy goals. According to the International Energy Agency, generating 7,400 TWh from wind alone by the end of this decade is necessary to reach the net-zero emissions target by 2050. Therefore, investing in advancements like the Concordia hybrid method is vital to meet these targets and ensure a smooth transition to renewable energy sources.

Concordia's Commitment to Decarbonization:

The research carried out by the Concordia team aligns with the university's commitment to decarbonization and its goal of achieving Net Zero Emissions by 2050. By diversifying energy sources and establishing local capacities, Concordia aims to reduce reliance on the vulnerable existing grid and enhance operational efficiency during power outages.

Conclusion:

Wind speed prediction is a crucial aspect of harnessing wind energy for urban power generation. The innovative hybrid method developed by Concordia researchers offers a significant advancement in accurately forecasting wind speed. By integrating the strengths of different models, the Concordia team was able to reduce forecasting errors by 32%. As renewable energy becomes increasingly important in addressing climate change, research like this plays a vital role in making our energy systems more sustainable and efficient. Concordia's commitment to decarbonization and the development of innovative solutions positions the university as a Canadian leader in the transition toward a cleaner and greener future.

AWS launches Amazon EC2 Capacity Blocks for ML workloads

Amazon Web Services Inc. (AWS) has launched Amazon Elastic Compute Cloud (EC2) Capacity Blocks for Machine Learning (ML) workloads, now generally available. The new offering enables customers to reserve high-performance Amazon EC2 UltraClusters of NVIDIA GPUs for their generative AI development projects. Amplify Partners, Canva, LeonardoAi, and OctoML are among the customers eager to use Amazon EC2 Capacity Blocks for ML.

AWS and NVIDIA have been collaborating for over 12 years to provide scalable, high-performance GPU solutions. This partnership has enabled customers to develop remarkable generative AI applications that are revolutionizing various industries. David Brown, Vice President of Compute and Networking at AWS, stated that "AWS has unparalleled expertise in providing NVIDIA GPU-based computing in the cloud, and we also offer our own Trainium and Inferentia chips." With the introduction of Amazon EC2 Capacity Blocks, businesses and startups can now predictably acquire NVIDIA GPU capacity to build, train, and deploy their generative AI applications, without having to make any long-term capital commitments. This is one of the ways AWS is innovating to expand access to generative AI capabilities.

The new consumption model is the first of its kind in the industry, allowing customers to access in-demand GPU compute capacity to run short-duration ML workloads. With EC2 Capacity Blocks, customers can reserve hundreds of NVIDIA GPUs colocated in Amazon EC2 UltraClusters designed specifically for high-performance ML workloads.

Previously, traditional ML workloads required substantial supercomputing capacity. With the advent of generative AI, even higher computing capacity is now required to process the vast datasets necessary to train foundation models (FMs) and large language models (LLMs). Clusters of GPUs, with their combined parallel processing capabilities, offer the required acceleration in the training and inference processes. However, with more organizations recognizing the transformative power of generative AI, demand for GPUs has outpaced supply.

Customers who want to leverage the latest ML technologies, especially those whose capacity needs fluctuate depending on where they are in the adoption phase, may face challenges accessing clusters of GPUs necessary to run their ML workloads. Alternatively, customers may commit to purchasing large amounts of GPU capacity for long durations only to have it sit idle when they are not actively using it. The EC2 Capacity Blocks will help ensure customers have reliable, predictable, and uninterrupted access to the GPU compute capacity required for their critical ML projects.

With EC2 Capacity Blocks, customers can reserve the amount of GPU capacity they need for short durations to run their ML workloads, eliminating the need to hold onto GPU capacity when it is not in use. EC2 Capacity Blocks are deployed in EC2 UltraClusters interconnected with second-generation Elastic Fabric Adapter (EFA) petabit-scale networking, delivering low-latency, high-throughput connectivity that enables customers to scale up to hundreds of GPUs. Customers can reserve EC2 UltraClusters of P5 instances powered by NVIDIA GPUs for one to 14 days, with a start date up to eight weeks in advance.
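The reservation window described above can be captured in a small validator (a hypothetical helper for illustration only, not part of the actual AWS API): a request is valid when the duration is between 1 and 14 days and the start date is no more than eight weeks out.

```python
from datetime import date, timedelta

def validate_capacity_block(start: date, duration_days: int, today: date) -> bool:
    """Check a hypothetical Capacity Block request against the constraints
    described above: 1-14 day duration, start date up to eight weeks ahead.
    (Illustrative sketch only -- not the real AWS reservation API.)"""
    if not 1 <= duration_days <= 14:
        return False          # duration outside the allowed 1-14 day range
    if start < today:
        return False          # cannot start a reservation in the past
    return start - today <= timedelta(weeks=8)  # at most eight weeks out

# Example: a one-week block starting next week is within the allowed window.
today = date(2024, 1, 1)
print(validate_capacity_block(date(2024, 1, 8), 7, today))   # within limits
print(validate_capacity_block(date(2024, 3, 15), 7, today))  # too far ahead
```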

Once an EC2 Capacity Block is scheduled, customers can plan their ML workload deployments with certainty, knowing they will have the GPU capacity when they need it. Customers only pay for the time they reserve. EC2 Capacity Blocks are available in the AWS US East (Ohio) Region, with availability planned for additional AWS Regions and Local Zones.

With the new EC2 Capacity Blocks for ML, AI companies worldwide can rent GPU capacity not one server at a time but at a dedicated scale uniquely available on AWS, enabling them to quickly and cost-efficiently train large language models and run inference in the cloud exactly when they need it.

Overall, the EC2 Capacity Blocks innovation provides predictability and timely access to GPU compute capacity at an affordable cost. This breakthrough innovation will undoubtedly accelerate the adoption of generative AI for businesses that may face challenges accessing GPU-intensive supercomputing solutions.