Agnostiq, Mila partner to bridge quantum supercomputing, ML

The collaboration will enable both organizations to develop and apply advances at the intersection of their respective technologies to solve some of the world's most critical and challenging business and societal issues.

Agnostiq, Inc. has formed a strategic partnership with Montreal-based Mila to bridge the gap between the quantum supercomputing and machine learning communities.

"Quantum computing will have a tremendous impact on many fields and machine learning is no exception," says Oktay Goktas, CEO of Agnostiq. "A partnership with Mila brings us access to a world-class research community that comes with decades of experience in machine learning, which will, in turn, help us design better tools for emergent quantum machine learning use cases."

The new partnership gives Mila access to Agnostiq's quantum researchers, who are working on classes of machine learning problems that are specific to quantum computing, and Agnostiq access to Mila's AI/ML researchers and partner network. Partnering with Mila will help Agnostiq remain at the forefront and be among the first to discover compelling new use cases for quantum machine learning.

"Agnostiq offers an exciting opportunity to explore ML challenges specific to quantum computing, as our strategic alliance with this promising startup will allow us to combine our expertise," says Stéphane Létourneau, Executive Vice President of Mila. "Mila's research community works daily toward improving the democratization of machine learning, developing new algorithms, and advancing deep learning capabilities. We are thrilled to work closely with Agnostiq to continue these important missions."

The partnership will also support Agnostiq's talent attraction and retention efforts, encouraging potential candidates to apply, as they will have the opportunity to collaborate with Mila's world-renowned researchers. Finally, the collaboration further validates Canada's position as a global leader in quantum supercomputing and machine learning research. 

Japanese astrophysicists show how gas giants form from dust to planet

Gas giant planets, such as Jupiter, can form rapidly by incorporating nearby icy bodies made from drifting pebbles born in the outer parts of young planetary systems – all in about 200,000 years. This finding has implications for understanding how habitable planets are created, not just in our solar system but in others too.

Result of dust-to-planet simulation: mass distribution of bodies from dust to planets at about 200,000 years. (Credit: Hiroshi Kobayashi)

Gas giants are made of a massive solid core surrounded by an even larger mass of helium and hydrogen. But even though these planets are quite common in the Universe, scientists still don’t fully understand how they form. Now, astrophysicists Hiroshi Kobayashi of Nagoya University and Hidekazu Tanaka of Tohoku University have developed supercomputer simulations that simultaneously use multiple types of celestial matter to gain a more comprehensive understanding of how these colossal planets grow from specks of dust. Their findings were published in The Astrophysical Journal.

“We already know quite a bit about how planets are made,” says Kobayashi. “Dust lying within the far-reaching ‘protoplanetary disks’ surrounding newly formed stars collides and coagulates to make celestial bodies called planetesimals. These then amass together to form planets. Despite everything we know, the formation of gas giants, like Jupiter and Saturn, has long baffled scientists.”

This is a problem because gas giants play huge roles in the formation of potentially habitable planets within planetary systems.

For gas giants to form, they must first develop solid cores that have enough mass, about ten times that of Earth, to pull in the huge amounts of gas for which they are named. Scientists have long struggled to understand how these cores grow. The problem is two-fold. First, core growth from the simple amassing of nearby planetesimals would take longer than the several million years during which the dust-containing protoplanetary disks survive. Second, forming planetary cores interact with the protoplanetary disk, causing them to migrate inward towards the central star. This makes conditions impossible for gas accumulation.
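The timescale problem above can be illustrated with a rough back-of-envelope count of how many successive mass doublings separate a dust grain from a ten-Earth-mass core. The grain mass below is an assumed order-of-magnitude value for a micron-sized grain, not a figure from the study:

```python
import math

# Illustrative order-of-magnitude values (not from the study)
dust_grain_mass = 1e-15   # kg, roughly a micron-sized dust grain
core_mass = 10 * 5.97e24  # kg, ten times the mass of the Earth

# Number of successive mass doublings needed to grow from grain to core
doublings = math.log2(core_mass / dust_grain_mass)
print(round(doublings))  # ≈ 135
```

Roughly 135 doublings are needed; if growth stalls at any intermediate size, the whole chain slows, which is one reason a simulation spanning all body sizes at once is valuable.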

To tackle this problem, Kobayashi and Tanaka used state-of-the-art computer technologies to develop simulations that model how dust within the protoplanetary disk collides and grows to form the solid core necessary for gas accumulation. A major limitation of existing programs is that they can only simulate planetesimal or pebble collisions separately. “The new program can handle celestial bodies of all sizes and simulate their evolution via collisions,” explains Kobayashi.

The simulations showed that pebbles from the outer parts of the protoplanetary disk drift inwards to grow into icy planetesimals at about 10 astronomical units (au) from the central star. A single astronomical unit represents the mean distance between the Earth and the Sun. Jupiter and Saturn are about 5.2 au and 9.5 au away from the Sun, respectively. Pebble growth into icy planetesimals increases their numbers in the region of the developing planetary system that is about 6-9 au from the central star. This encourages high core growth rates, resulting in the formation of solid cores massive enough to accumulate gas and develop into gas giants in about 200,000 years.

“We expect our research will help lead to the full elucidation of the origin of habitable planets, not only in the solar system but also in other planetary systems around stars,” says Kobayashi.

Edge processing research takes Surrey discovery closer to use in AI networks

Researchers at the University of Surrey have successfully demonstrated a proof of concept for using their multimodal transistor (MMT) in artificial neural networks, which mimic the human brain. This is an important step towards using thin-film transistors as artificial intelligence hardware, advancing edge computing with the prospect of lower power needs and improved efficiency compared with relying solely on conventional computer chips.

The MMT, first reported by Surrey researchers in 2020, overcomes long-standing challenges associated with transistors and can perform the same operations as more complex circuits. This latest research, published in the peer-reviewed journal Scientific Reports, uses mathematical modeling to prove the concept of using MMTs in artificial intelligence systems, which is a vital step towards manufacturing.

Using measured and simulated transistor data, the researchers show that well-designed multimodal transistors could operate robustly as rectified linear unit (ReLU)-type activations in artificial neural networks, achieving classification accuracy practically identical to pure ReLU implementations. They used both measured and simulated MMT data to train an artificial neural network to identify handwritten numbers and compared the results with the software's built-in ReLU. The results confirmed the potential of MMT devices for thin-film decision and classification circuits. The same approach could be used in more complex AI systems.
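A minimal sketch of the underlying idea, assuming a hypothetical piecewise-linear device transfer curve (the parameters v_on and v_sat are illustrative, not measured MMT values): within its linear operating window such a curve coincides with the standard ReLU, which is why a network built on it can match pure-ReLU classification accuracy.

```python
import numpy as np

def relu(x):
    """Standard rectified linear unit: max(0, x)."""
    return np.maximum(0.0, x)

def mmt_like_activation(x, v_on=0.0, v_sat=5.0):
    """Hypothetical MMT-style transfer curve: off below the turn-on
    threshold v_on, linear above it, saturating at v_sat.
    Illustrative parameters, not device data from the paper."""
    return np.clip(x - v_on, 0.0, v_sat)

x = np.linspace(-2.0, 4.0, 7)
print(relu(x))                 # zero for negative inputs, linear above
print(mmt_like_activation(x))  # identical to ReLU below saturation
```

In this sketch the two curves differ only above v_sat, so as long as network pre-activations stay within the device's operating window, training behaves as it would with an ideal ReLU.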

Unusually, the research was led by Surrey undergraduate Isin Pesch, who worked on the project during the final-year research module of her BEng (Hons) in Electronic Engineering with Nanotechnology. Covid meant she had to study remotely from her home in Turkey, but she still spearheaded the development, supported by an international research team that included collaborators at the University of Rennes in France and UCL in London.

Isin Pesch, the lead author of the paper, which was written before she graduated in July 2021, said: “There is a great need for technological improvements to support the growth of low-cost, large-area electronics, which have been shown to be useful in artificial intelligence applications. Thin-film transistors have a role to play in enabling high processing power with low resource use. We can now see that MMTs, a unique type of thin-film transistor, invented at the University of Surrey, have the reliability and uniformity needed to fulfill this role.”

Dr. Radu Sporea, Senior Lecturer at the University of Surrey’s Advanced Technology Institute, said: “These findings are a reminder of how Surrey is a leader in AI research. Many of my colleagues focus on people-centred AI and how best to maximize the benefits for humans, including how to apply these new concepts ethically. Our research at the Advanced Technology Institute takes forward the physical implementation, as a stepping stone towards powerful yet affordable next-generation hardware. It’s fantastic that collaboration is resulting in such successes with researchers involved at all levels, from undergraduates like Isin when she led this research, to seasoned experts.”

A major project brings together Finnish industry, research for quantum technology development

A new research project has been launched to accelerate the progress of Finnish quantum technology. The QuTI project, coordinated by VTT Technical Research Centre of Finland, will develop new components, manufacturing and testing solutions, and algorithms for the needs of quantum technology. The QuTI consortium, partly financed by Business Finland, consists of 12 partners and has a total budget of around EUR 10 million.

Quantum technology is developing into a broad industrial field. This quantum wave is motivated by the unprecedented performance improvements and paradigm shifts that the utilization of quantum phenomena can provide for computing, communication, and sensing applications. The Quantum Technologies Industrial (QuTI) ecosystem project, coordinated by VTT, brings together the expertise of Finnish industry and research organizations to find new quantum technology solutions.

The QuTI project covers the full value chain of the quantum industry from materials and hardware to software and system-level solutions. The project involves 12 organizations: the research partners are VTT, Aalto University, Tampere University, and CSC – IT Center for Science, and the industrial partners are Bluefors, Afore, Picosun, IQM Quantum Computers, Rockley Photonics, Quantastica, Saab, and Vexlum.

“Quantum technology is a multidisciplinary and rapidly advancing field. The QuTI consortium provides an ideal starting point for strengthening the international competitiveness of Finnish technology and industry in this fast-growing field,” says QuTI project’s coordinator, Professor Mika Prunnila from VTT.

The quantum computing, communication, and sensing devices to be developed in the QuTI project are largely based on expertise in microsystems, photonics, electronics, and cryogenics. The project develops customized software and algorithms hand in hand with the hardware, strengthening the Finnish quantum computing infrastructure. In addition, new tools will be created for quantum technology product development that will serve the needs of the QuTI project as well as the entire field of quantum technology.

The three-year QuTI project will be implemented as a jointly funded project, partly financed by Business Finland (EUR 5.6 million), with a total budget of about EUR 10 million.

“Quantum technology offers great opportunities for Finnish industry, and we want to be involved in supporting this development. We see that the QuTI project is in many ways a concrete starting point for the Finnish quantum ecosystem,” says Kari Leino, Ecosystem Lead at Business Finland.

Cleanrooms are a prerequisite for quantum technology research and business

Like computer microprocessors, the fabrication of quantum technology components requires a cleanroom environment. The Micronova cleanroom facility in Espoo, Finland, operated jointly by VTT and Aalto University, enables applied research and small-scale commercial manufacturing of quantum microsystems for the needs of quantum computing, communication, and sensing. Micronova, part of the national Otanano research infrastructure, plays a significant role in both the QuTI project and quantum technology R&D in Finland. QuTI will also utilize the complementary cleanroom of Tampere University, which focuses on optoelectronics fabrication.

University of Waterloo researchers use AI to analyze tweets debating vaccination, climate change

Using artificial intelligence (AI), researchers have found that between 2007 and 2016, online sentiments around climate change were uniform, but this was not the case with vaccination.

Climate change and vaccination might share many of the same social and environmental elements, but that doesn’t mean the debates are divided along the same demographic lines.

A research team from the University of Waterloo and the University of Guelph trained a machine-learning algorithm to analyze a massive number of tweets about climate change and vaccination.

The researchers found that climate change sentiment was overwhelmingly on the pro side, held by users who believe climate change is driven by human activity and requires action. There was also a significant amount of interaction between users with opposite sentiments about climate change.

However, within the dataset's timeframe, vaccine sentiment was nowhere near as uniform. Only some 15 to 20 percent of users expressed a pro-vaccine sentiment, while around 70 percent expressed no strong sentiment. Perhaps more importantly, individuals and entire online communities with differing sentiments toward vaccination interacted much less than in the climate change debate.
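The article does not detail the study's own network metrics, but one simple illustrative way to quantify how much opposing camps interact is the fraction of interaction edges linking a pro user to an anti user (the usernames below are invented for the example):

```python
def cross_sentiment_fraction(edges, sentiment):
    """Fraction of interaction edges connecting users with opposing
    sentiments ('pro' vs 'anti'); edges touching neutral or unknown
    users are ignored. Illustrative sketch, not the study's metric.
    edges: list of (user_a, user_b) pairs; sentiment: dict user -> label."""
    opposing = total = 0
    for a, b in edges:
        sa, sb = sentiment.get(a), sentiment.get(b)
        if sa in ("pro", "anti") and sb in ("pro", "anti"):
            total += 1
            if sa != sb:
                opposing += 1
    return opposing / total if total else 0.0

edges = [("u1", "u2"), ("u1", "u3"), ("u2", "u4"), ("u3", "u4")]
sentiment = {"u1": "pro", "u2": "anti", "u3": "pro", "u4": "neutral"}
print(cross_sentiment_fraction(edges, sentiment))  # → 0.5
```

A low value of this fraction would indicate echo chambers: users mostly interacting with others who share their sentiment.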

“It is an open question whether these differences in user sentiment and social media echo chambers concerning vaccines created the conditions for highly polarized vaccine sentiment when the COVID-19 vaccines began to roll out,” said Chris Bauch, professor of applied mathematics at the University of Waterloo. “If we were to do the same study today with data from the past two years, the results might be wildly different. Vaccination is a much hotter topic right now and appears to be much more polarized given the ongoing pandemic.”

The research goal was to learn how sentiments on climate change and vaccination may be related, how users form networks and share information, the relationship between online sentiments, and how people act and make decisions in daily life.

“There’s been some work done on the polarization of opinions in Twitter and other social media,” said Madhur Anand, professor of environmental sciences at the University of Guelph. “Most other research looks at these as isolated issues, but we wanted to look at these two issues of climate change and vaccination side-by-side. Both issues have social and environmental components, and there are lots to learn in this research pairing.”

The dataset for the project was drawn from several sources, including some purchased from Twitter. In total, the analysis considers roughly 87 million tweets posted between 2007 and 2016.

This means that the data precedes COVID-19 and offers a snapshot of vaccine sentiment in the years leading up to the pandemic.

The AI classified the millions of tweets as expressing pro, anti, or neutral sentiment on each issue and then grouped users into pro, anti, or neutral categories. It also analyzed the structure of online communities and the degree to which users with opposing sentiments interacted.
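The article does not say which algorithm the team trained, so purely as an illustrative sketch, here is a minimal multinomial naive Bayes classifier (with Laplace smoothing) that assigns pro/anti/neutral labels to tokenized posts; the toy corpus is invented for the example:

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a multinomial naive Bayes model.
    docs: list of (token_list, label) pairs."""
    labels = Counter(label for _, label in docs)
    word_counts = {lab: Counter() for lab in labels}
    vocab = set()
    for tokens, lab in docs:
        word_counts[lab].update(tokens)
        vocab.update(tokens)
    return labels, word_counts, vocab

def predict(model, tokens):
    """Return the label with the highest log-posterior for the tokens."""
    labels, word_counts, vocab = model
    total_docs = sum(labels.values())
    v = len(vocab)
    best, best_lp = None, float("-inf")
    for lab, n in labels.items():
        lp = math.log(n / total_docs)  # log prior
        denom = sum(word_counts[lab].values()) + v  # Laplace smoothing
        for t in tokens:
            lp += math.log((word_counts[lab][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = lab, lp
    return best

# Invented toy corpus for illustration only
corpus = [
    (["vaccines", "save", "lives"], "pro"),
    (["get", "vaccinated", "save", "lives"], "pro"),
    (["vaccines", "cause", "harm"], "anti"),
    (["refuse", "harmful", "vaccines"], "anti"),
    (["flu", "season", "news"], "neutral"),
]
model = train_nb(corpus)
print(predict(model, ["save", "lives"]))  # → pro
```

At the study's scale (~87 million tweets), a production pipeline would add tokenization, held-out evaluation, and likely a stronger model, but the pro/anti/neutral labeling step follows this same shape.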

“We expected to find that user sentiment and how users formed networks and communities to be more or less the same for both issues,” said Bauch. “But actually, we found that the way climate change discourse and vaccine discourse worked on Twitter were quite different.”

Anand, Bauch, and team members Justin Schonfeld, Edward Qian, Jason Sinn, and Jeffrey Cheng published their findings, “Debates about vaccines and climate change on social media networks: a study in contrasts,” in the journal Humanities and Social Sciences Communications.