Elisabeth Lex’s research combines computer science methods with social science approaches, searching for clues to understand framing, polarisation and opinion clusters.

More and more often, people are getting the feeling that society is increasingly polarised - whether regarding measures introduced during the Covid-19 pandemic, the question of vaccinations, elections, or environmental protection. TU Graz computer scientist Elisabeth Lex has been monitoring this trend over the years from a research perspective. Her focus is on different machine learning approaches and artificial intelligence, with the aim of better understanding human behaviour and applying recommender systems designed to make life easier. In a range of projects, Lex cooperates with social science researchers, who interpret the data she identifies. "That’s what’s exciting about this combination: learning algorithms enable us to recognise structures and patterns in large volumes of data that humans wouldn’t be able to get an overview of on their own. We can then interpret and analyse the patterns and structures found with the aid of social science models and theories."
One example is a joint project with the University of Graz, in which Lex investigated whether statements made publicly on social media (in this case, the text-based platform Twitter) are consistent with answers given in a survey. "What comes into play here is what we call social desirability bias," Lex explains. "People have a tendency to adapt their statements to fit more closely with what the crowd is saying, and to provide answers which they believe would be supported by the majority of society." The study, which was concerned with measures introduced to combat the Covid-19 pandemic, yielded interesting results. Firstly, it was possible to identify a clear overlap between opinions expressed online and answers provided in the survey. "So we could see that our computer science methods definitely enable us to represent social phenomena." However, it was important to take into account that in a survey, responses are given to clearly defined questions and therefore reflect a person’s opinion clearly and directly. On Twitter, by contrast, all of an account’s tweets on a particular topic need to be considered in order to derive a kind of median opinion. "We also saw that people who viewed the measures positively were much more likely to share their accounts with us and the research project." Due to social desirability bias, the picture could be completely different today, since it has become more socially acceptable to question and criticise Covid-19 restrictions.
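The idea of collapsing all of an account’s tweets on a topic into one "median opinion" can be sketched in a few lines. This is a hedged illustration, not the study’s actual method: the account names and per-tweet stance scores in [-1, 1] are invented placeholders, and how the real project scored individual tweets is not described in the article.

```python
# Sketch: derive a single "median opinion" per account from many tweets.
# Stance scores are hypothetical values in [-1, 1] (negative = critical,
# positive = supportive); the real scoring method is not given in the text.
from statistics import median

def account_opinion(tweet_scores):
    """Collapse per-tweet stance scores into one value per account."""
    return median(tweet_scores)

# Invented accounts with per-tweet stance scores on Covid-19 measures
accounts = {
    "user_a": [0.8, 0.6, 0.9],    # mostly supportive tweets
    "user_b": [-0.7, -0.2, 0.1],  # mixed, leaning critical
}

opinions = {user: account_opinion(scores) for user, scores in accounts.items()}
print(opinions)  # {'user_a': 0.8, 'user_b': -0.2}
```

Using the median rather than the mean makes the aggregate robust to a single unusually extreme tweet, which matters when accounts post at very different volumes.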
Polarisation - a Difficult Concept to Understand

Polarisation is one of the key focuses of Elisabeth Lex’s research. But it is an extremely complex topic to investigate using artificial intelligence, as Lex explains: "Polarisation is a concept that people understand very well. But - and we recognised this at the beginning of our research - different research communities do not have common definitions of terms like this." As a consequence, it is very difficult to teach the concept to an algorithm. Algorithms frequently learn patterns and structures from clearly defined, large-scale examples. They can then apply what they have learned to new sets of data. In the case of polarisation, this process - called supervised learning - cannot be applied without some preparation. Instead, the researchers, supported by machine learning models, search for clusters - strongly linked accounts that frequently interact with one another and post on similar topics. "When we identify such clusters, we carry out content analyses - for example, a sentiment analysis - to understand whether people have a positive, negative or neutral attitude towards a topic."
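The cluster-then-analyse workflow described above can be sketched as follows. This is a simplified, hedged illustration: it uses plain connected components over an interaction graph where real work would use proper community detection, and a toy word-list sentiment check where real work would use a trained model. The account names, interaction edges and word lists are all invented.

```python
# Sketch of the two-step workflow: (1) find clusters of accounts that
# interact with one another, (2) run a content analysis (here: a toy
# lexicon-based sentiment check) on what a cluster posts.
from collections import defaultdict, deque

def find_clusters(edges):
    """Connected components of an undirected interaction graph."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        queue, component = deque([node]), set()
        while queue:
            n = queue.popleft()
            if n in seen:
                continue
            seen.add(n)
            component.add(n)
            queue.extend(graph[n] - seen)
        clusters.append(component)
    return clusters

# Invented toy sentiment lexicons - a stand-in for a real sentiment model
POSITIVE = {"good", "support", "helpful"}
NEGATIVE = {"bad", "harmful", "oppose"}

def sentiment(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

edges = [("a", "b"), ("b", "c"), ("d", "e")]  # who replies to / retweets whom
for component in find_clusters(edges):
    print(sorted(component))  # ['a', 'b', 'c'] then ['d', 'e']
print(sentiment("the measures were helpful and good"))  # positive
```

The key design point the article highlights survives even in this toy version: no labelled "polarised / not polarised" examples are needed, because the structure (clusters) is found first and only then interpreted by content analysis.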
Framing - Catastrophe or Warming?

Then comes perhaps the most relevant question: why do people have such polarised opinions? "Framing plays a major role here," Lex explains. The same content can be packaged differently, depending on how it is actually expressed. For instance, when talking about climate change, people may use the term 'warming' or alternatively 'catastrophe'. "Warming sounds positive to us. We like warmth, so it can’t be all that bad. But the word 'catastrophe' triggers completely different mental images." In this area in particular, a lot has changed in recent years, as Lex discovered. In the past, media influenced by conspiracy theories could easily be distinguished from reputable, quality media by their choice of language and visual appearance. Now they are much more similar and use the same techniques (e.g. in the form of neutrally written articles including references to sources) but different framing. "For instance, in our analyses we saw that when discussing Covid-related topics, magazines that had an affinity for conspiracy theories very often used the frames of faith and religion for their arguments, while reliable media based their arguments much more on science."
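A minimal way to picture frame detection by word choice, in the spirit of the religion-versus-science finding quoted above: count how many words from each frame’s vocabulary appear in a text. This is a hedged toy illustration only; the frame lexicons are invented and real frame detection (as in the research described here) relies on language models rather than fixed word lists.

```python
# Toy frame detector: assign a text to whichever frame's vocabulary it
# overlaps with most. The lexicons below are invented examples, not the
# study's actual frames or vocabulary.
FRAME_LEXICONS = {
    "religion": {"faith", "belief", "god", "sin"},
    "science": {"study", "evidence", "data", "researchers"},
}

def detect_frame(text):
    words = set(text.lower().split())
    counts = {frame: len(words & lexicon)
              for frame, lexicon in FRAME_LEXICONS.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "none"

print(detect_frame("a new study shows the evidence is clear"))  # science
print(detect_frame("only faith and belief can protect us"))     # religion
```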
Again, the media were examined using machine learning methods. Large language models, like those we are familiar with from ChatGPT, work in the background. "It is remarkable how powerful these models are now and how easily they can navigate different languages," Lex points out. The researchers used language models that were already available and adapted them. Their most recent success was at this year’s edition of the internationally respected SemEval Challenge, which focused on detecting frames in texts in different languages with very little training data or none at all - known as "few-shot" or "zero-shot" learning. "My team won first place for recognising frames in Spanish, and our approach was among the leaders in eight other languages," Lex reports.
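The zero-shot idea mentioned here - classifying texts into frames without any labelled training examples for those frames - can be sketched by comparing a text to a short natural-language description of each label. This is a hedged, simplified stand-in: a real system like the one in the article would use a multilingual language model’s representations, whereas here plain bag-of-words cosine similarity plays that role, and the frame descriptions are invented.

```python
# Sketch of zero-shot classification: pick the frame whose description is
# most similar to the input text. No training data for the labels is used.
# Bag-of-words cosine similarity stands in for language-model embeddings.
import math
from collections import Counter

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented frame descriptions - the only "supervision" a zero-shot
# classifier gets is the label names/descriptions themselves.
FRAMES = {
    "economy": "jobs costs money economic growth and prices",
    "health": "disease vaccination hospitals doctors and public health",
}

def zero_shot_frame(text):
    return max(FRAMES, key=lambda f: cosine(text, FRAMES[f]))

print(zero_shot_frame("vaccination protects public health"))  # health
print(zero_shot_frame("jobs and economic growth"))            # economy
```

"Few-shot" learning differs only in that a handful of labelled examples per frame would additionally be available to refine these comparisons.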