Artificial Intelligence as a Tool in Science


ISTA researchers welcome AI use in science, but remain cautious

This is what AI thinks AI research looks like. DALL-E/ISTA

From identifying complex morphologies in the brain to analyzing the properties of storm clouds, artificial intelligence (AI) algorithms are aiding several research projects employing deep learning and machine learning at the Institute of Science and Technology Austria (ISTA). Despite its many uses, scientists at ISTA emphasize that AI and its applications are still in their infancy. Attention must be paid to how AI is used, to extending the underlying theory, and to which information is "fed" to AI models.

AI algorithms are increasingly instrumental in a wide range of scientific endeavors across the globe. Also at the Institute of Science and Technology Austria (ISTA), several projects leverage the power of machine learning to analyze large datasets and extract valuable insights. By utilizing AI, researchers can automate tasks that would otherwise be extremely time-consuming, enabling scientists to ask advanced questions in their respective fields. However, while AI has proven to be immensely useful, scientists at the Institute also stress the need for caution and skepticism in light of the growing hype surrounding AI. They emphasize that AI is a field that continues to evolve and is still in its early stages.

The uses

The Siegert group at ISTA has employed machine learning (ML) to resolve the 3D complexity of microglia morphology (microglia are the important immune cells of the central nervous system) across different brain regions.

"AI/ML is a complementary strategy that allows us to identify structures within our dataset that we would otherwise miss," says Professor Sandra Siegert. While she is overall positive about employing AI assistance in her work, she is cognizant of the limitations of AI, specifically its tendency to multiply biases.

"We have been aware that the input of the data is critical. If certain data parameters are interconnected, this can bias the data outcome. Also, it is critical to annotate the experimental data with as much detail as possible. For example, one challenge is that studies often do not describe the 'sex' of the animal or only give an approximation of the brain area where the sample was taken. However, these are biologically critical parameters that have an important impact on the data readout," Siegert notes.

In a completely different discipline, ISTA Assistant Professor Caroline Muller and her Atmosphere and Ocean Dynamics group are less concerned about introducing biases. The group investigates global high-resolution climate simulations.

They employ AI/ML tools to investigate these immense datasets and determine which aspects of a storm’s environment make it more elongated or more circular. "I think that there is great potential for AI/ML in the sciences since we often deal with a lot of data. Some of the research in Earth Sciences, for example, relies on large datasets from satellite observations. These approaches allow us to process a large amount of data very efficiently," Muller says.

"We use AI/ML to understand physical processes, for instance the evolution of clouds and storms. We do not use AI/ML for predictions, so we do not worry too much about introducing biases and errors. Our main focus is on interpreting the results from AI/ML and making sure that they can be understood from physical principles," she adds.

The hype

As with all aspects of AI, it is important to understand the hype surrounding it. AI is commonly thought to be a one-stop shop that can bring about a paradigm shift in everything we do. This is far from the truth. At best, current AI models can read a large amount of data, offer a probability distribution, and propose the most likely explanation of what these data are suggesting. At worst, the results are erroneous.
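The "probability distribution" framing can be made concrete with a minimal sketch. A classifier does not deliver certainty; it turns raw scores into probabilities (here via the standard softmax function) and proposes the highest-probability option as the most likely explanation. The scores below are invented purely for illustration.

```python
import math

def softmax(scores):
    """Convert raw model scores into a probability distribution."""
    # Subtracting the max is the usual trick for numerical stability.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to three candidate explanations
scores = [2.0, 1.0, 0.1]
probs = softmax(scores)

# The probabilities sum to 1; the model merely ranks explanations,
# it does not guarantee that the top-ranked one is correct.
best = max(range(len(probs)), key=probs.__getitem__)
print(probs, "most likely explanation:", best)
```

Even the "best" answer here carries well under 100% probability, which is exactly why erroneous outputs remain possible.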

Along with the measured use of deep learning in several projects, there are also ongoing efforts undertaken by mathematicians and computer scientists on the ISTA campus to improve AI itself.

Professor Christoph Lampert, whose work focuses on machine learning, says that "AI has the potential to support and take over a lot of menial tasks, as well as act as a tool for increasing motivation and creativity." At the same time, he warns that "AI is not about solving problems." Rather, its role is in automating tasks, ideally such that the outcome is no worse, or even better, than if a human did them.

He offers two famous examples: AlphaGo, which plays the game of Go better than any human, and AlphaFold, which predicts the three-dimensional structure of a protein once it is given the amino acid sequence that makes up this protein.

In AlphaFold, we find a clear example of how AI can aid scientific pursuits, and also one that illustrates how far one can go with just one tool in the shed: AlphaFold only offers a hypothesis for a protein's folded structure; the proposed structure still needs to be experimentally verified.

According to Lampert, one area of concern where more research is needed is the tendency of AI to act as a force multiplier of existing biases. "This is the case, especially in the recently emerged large language models (LLMs), such as ChatGPT, which are trained predominantly on (often biased and incorrect) internet data," he says.

This is also one of the areas that ISTA’s newest assistant professor in the field, Francesco Locatello, targets. His research focuses on advancing AI and machine learning to understand cause-and-effect connections, marking the next step in their evolution: causal AI.

Until now, AI technologies have struggled to process causal relationships, cannot differentiate between co-occurrence and correlation, and are not yet very trustworthy. Locatello and his research group aim to change that.
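Why mere correlation trips up current AI can be shown with a toy simulation (an illustrative sketch, not Locatello's methodology): a hidden common cause, a confounder, makes two variables co-occur and correlate strongly even though neither causes the other. A model trained only on the observed pair would happily treat one as predictive of the other.

```python
import random

random.seed(0)

# Toy confounder scenario: hot weather (z) drives both ice-cream
# sales (x) and sunburn cases (y). Neither x nor y causes the other,
# yet they correlate strongly because both inherit z's variation.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]            # hidden common cause
x = [zi + random.gauss(0, 0.5) for zi in z]           # "ice-cream sales"
y = [zi + random.gauss(0, 0.5) for zi in z]           # "sunburn cases"

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(corr(x, y))  # strong correlation despite zero causal influence
```

Distinguishing such spurious associations from genuine cause-and-effect is exactly the gap that causal AI sets out to close.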

Assistant Professor Marco Mondelli, who leads the Data Science, Machine Learning, and Information Theory group, also observes that trustworthy AI and the robustness of models are a big theme of research. Recent work from Mondelli's group predicts the robustness of high-dimensional models (those with millions of parameters or more).

This work has the potential to aid users in predicting which model is, in theory, most suitable. "The next generation of questions will involve high-dimensional problems: both models and datasets are becoming increasingly large, and this size creates huge practical problems. Training large models (e.g. LLMs) is now something that only a (very) few tech companies can do. I believe a theory can be helpful in this regard, providing precise guarantees for problems in high dimensions," Mondelli says.

Efforts at ISTA are ongoing to counter the problems associated with the sheer size of models. The Alistarh group recently presented work describing their SparseGPT pruning method, which trims the size of large models without losing accuracy. While the model behind ChatGPT remains proprietary, other major language models have been made openly available, and the public is eager to experiment with them.
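The basic idea of pruning can be sketched in a few lines. The toy function below simply zeroes out the smallest-magnitude weights; this is ordinary magnitude pruning, not the SparseGPT algorithm itself, which uses second-order information to decide what to remove and adjusts the remaining weights to preserve accuracy. The weight values are invented for illustration.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude weights.

    A minimal illustration of pruning: the model keeps its shape,
    but a fraction `sparsity` of its weights become exact zeros,
    which sparse storage and sparse kernels can then exploit.
    """
    k = int(len(weights) * sparsity)  # number of weights to drop
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, dropped = [], 0
    for w in weights:
        if dropped < k and abs(w) <= threshold:
            pruned.append(0.0)  # weight removed
            dropped += 1
        else:
            pruned.append(w)    # weight kept unchanged
    return pruned

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
print(prune_by_magnitude(weights, 0.5))
# -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Half the weights are gone, yet the largest (and typically most influential) ones survive, which is why pruned models can retain most of their accuracy.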

With a recently awarded ERC Proof of Concept Grant, Alistarh and his team now want to make their approaches available to more potential users.

"Our techniques reduce the overhead of distributed training of ML models, which can be very high for very large and accurate models. We are now bringing these methods closer to practitioners. We do this by building a software library that allows them to efficiently train large AI models on ’commodity’ computers." In that regard, Alistarh underlines the impact of his research on the ’democratization’ of AI.

The missing theory

Professor Herbert Edelsbrunner is a mathematician whose work in topological data analysis is intended to improve AI tools down the road. It is a niche area: the mathematics Edelsbrunner and his group use is not typically part of the background of the AI research community.

In Edelsbrunner’s view, "the gaps in the theory are enormous. In short, we do not know why deep learning works as well as it does." "The current wave of AI is based on very successful experimental work built on a lot of theory," Edelsbrunner says. "But now, the theory is lagging behind, and the most urgent need is to advance it." According to him, a lot of the experimental work is currently out there finding applications in people’s daily lives, but this work is poorly understood and unpredictable.

Here he resonates with Lampert, who also believes that the most pressing questions in AI research are to understand how and why AI systems actually work, rather than just being able to build them; how to make AI systems more natural; how to make them more efficient; and how to use them for good.

ISTA researchers greatly appreciate the potential of AI but express their concern that AI is yet to be well defined. What counts as AI has changed dramatically over the last few decades, and the question of a definition is rarely asked, neither in the popular imagination nor in AI research itself. While the AI tools at hand seem to become more bountiful by the day, a careful, restrained assessment of their reliability is the best prescription under the current circumstances.