University of Graz is researching better algorithms for social media

Monday, 16 September 2024

Jana Lasser, a professor at the IDea_Lab of the University of Graz, is developing new algorithms for social media that promote constructive discussions as part of an ERC Starting Grant. With the help of ‘digital twins’ of existing social media platforms, these alternatives will be tested, creating a basis for deciding how the algorithms of Instagram, TikTok and others should be regulated.

2024 is an important election year, and social media platforms such as Instagram, TikTok and X will play a major role in it. From the flood of available content, their algorithms often select the posts that generate the most excitement in order to keep users on the platform longer. Many of these posts are not exactly true. How should we deal with this?

Jana Lasser, professor of data analysis at the University of Graz, is addressing this question. She is conducting research into how to improve social media and has been awarded an ERC Starting Grant for her work. ‘The goal is to develop algorithms that promote constructive discussions instead of stirring up excitement.’

Digital Twins

This is not a theoretical task. Lasser wants to develop and test real alternatives. To do this, she plans to build digital twins of social media platforms such as X or Reddit. ‘We can test and optimise our ideas on these copies. Based on this, we can give politicians recommendations for adapting the algorithms.’
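To make the idea concrete, here is a minimal sketch of what an experiment on such a digital twin could look like: an agent-based simulation in which simulated users click through a feed, used to compare an engagement-maximising ranking with a quality-oriented alternative. The toy engagement model, the weights and all names are illustrative assumptions, not Lasser's actual implementation.

```python
import random

# Hypothetical toy model: each post has an "outrage" level and a "quality"
# level; simulated users are more likely to click on outrageous content.
random.seed(42)
posts = [{"id": i,
          "outrage": random.random(),
          "quality": random.random()} for i in range(100)]

def click_probability(post):
    # Assumption: clicks are driven mainly by outrage, only weakly by quality.
    return 0.7 * post["outrage"] + 0.3 * post["quality"]

def engagement_ranking(posts):
    # Status-quo algorithm: show whatever maximises expected clicks.
    return sorted(posts, key=click_probability, reverse=True)

def quality_ranking(posts):
    # Alternative algorithm: redistribute attention towards high-quality posts.
    return sorted(posts, key=lambda p: p["quality"] - 0.5 * p["outrage"],
                  reverse=True)

def simulate_feed(ranked_posts, n_shown=10, n_users=1000):
    # Each simulated user sees the top of the feed and clicks probabilistically.
    shown = ranked_posts[:n_shown]
    clicks = sum(1 for _ in range(n_users) for p in shown
                 if random.random() < click_probability(p))
    avg_quality = sum(p["quality"] for p in shown) / n_shown
    return clicks, avg_quality

for name, ranker in [("engagement", engagement_ranking),
                     ("quality", quality_ranking)]:
    clicks, avg_quality = simulate_feed(ranker(posts))
    print(f"{name:>10} ranking: {clicks} clicks, avg quality {avg_quality:.2f}")
```

In a sketch like this, the interesting output is the trade-off: under the stated assumptions, the quality-oriented ranking loses some clicks but raises the average quality of what the top of the feed shows.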

But why should social media providers comply? It's quite simple: with the Digital Services Act, which came into force at the beginning of the year, the EU has created a law that forces these companies to examine the risks their algorithms pose to democracy and to change them if necessary. Lasser's research could provide a basis for this.

‘But this is not about censorship,’ Lasser emphasises. ‘Currently, algorithms tend to ensure that the content that attracts the most attention and causes the most outrage is also the content that gets the most clicks.’ A video promoting an extreme conspiracy theory will therefore tend to be shown to many more people than a well-researched news video. This is because it is currently the platforms’ monetary interests, not the interests of society, such as maintaining a healthy democracy, that determine which content is recommended.

Constructive discourse

‘One goal of the research is to develop algorithms that are more likely to display content that is conducive to constructive discourse,’ says Lasser. One idea would be to use machine learning to check posts for hate speech or polarisation and give them slightly less reach accordingly, or to give a bonus to news sites that work according to journalistic standards. ‘This doesn't delete posts, but rather redistributes attention.’
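A minimal sketch of this redistribution idea might look as follows. The classifier here is a trivial placeholder standing in for a real machine-learning model, and the score names, weights and bonus factor are assumptions for illustration, not the project's actual design.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    base_score: float       # relevance score from the existing recommender
    from_news_outlet: bool  # source follows journalistic standards

def toxicity(post: Post) -> float:
    # Placeholder for a real machine-learning classifier returning a
    # hate-speech/polarisation score in [0, 1]. Here: a trivial keyword
    # heuristic, purely for illustration.
    return 0.9 if "hate" in post.text.lower() else 0.1

def rerank_score(post: Post) -> float:
    # Redistribute attention instead of deleting: toxic posts lose reach,
    # journalistic sources get a small bonus. All weights are assumptions.
    score = post.base_score * (1.0 - 0.8 * toxicity(post))
    if post.from_news_outlet:
        score *= 1.2
    return score

feed = [
    Post("I hate everyone who disagrees", base_score=0.9, from_news_outlet=False),
    Post("Fact-checked election report", base_score=0.6, from_news_outlet=True),
    Post("Cute cat video", base_score=0.7, from_news_outlet=False),
]

# Posts stay online; only their ordering (and thus their reach) changes.
for post in sorted(feed, key=rerank_score, reverse=True):
    print(f"{rerank_score(post):.2f}  {post.text}")
```

The key design point the sketch illustrates: nothing is deleted and the set of posts stays the same; only the ordering, and with it the expected reach, changes.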

This approach could also solve another problem: current algorithms often don't recognise whether a post is art or satire. ‘Social media currently deletes anything that resembles a female breast, even if it's a famous work of art.’ Alternatively, the algorithm could be programmed so that such a post is simply not shown as often. ‘This way, we avoid censorship while still taking action based on risk. Posts that violate criminal law must, of course, continue to be removed immediately.’

Lasser emphasises: ‘There is no perfect solution. The world of social media is constantly changing.’ Political actors would quickly learn how to circumvent filtering. ‘Every algorithm has its limits here.’ In the end, real people are still needed to decide on critical cases.

Jana Lasser will speak about her research as a keynote speaker at the DELPHI conference from 16 to 18 September at the University of Graz and at the Technology Impact Summit on 10 October at the IDea_Lab of the University of Graz.
