26.06.2017

When robots form political opinion: A study on computational propaganda

An interview with researcher Lisa-Maria Nicola Neudert, Oxford Internet Institute, UK, by Jane Whyatt, ECPMF

The Computational Propaganda Research Project at the Oxford Internet Institute, University of Oxford, has researched the use of social media for public opinion manipulation. The team of 12 researchers across nine countries altogether interviewed 65 experts and analyzed tens of millions of posts on seven different social media platforms during scores of elections, political crises, and national security incidents. Each case study analyzes qualitative, quantitative, and computational evidence collected between 2015 and 2017 from Brazil, Canada, China, Germany, Poland, Taiwan, Russia, Ukraine, and the United States.

In your nine-country study, Germany was found to be the only one with a pro-active approach to the spread of fake news in social media. What are the reasons for this difference?

Lisa-Maria Nicola Neudert, researcher at the Oxford Internet Institute, UK

The presumed election hacking surrounding the US elections, as well as reports of social bots during Brexit and, more recently, the bot-amplified Macron leaks, have sparked much public debate in Germany. All of the major German parties have publicly stated that they will refrain from using social bots, and in November Angela Merkel warned the Bundestag that the formation of opinion now occurs fundamentally differently than it did in the past, with algorithms, bots and fake news potentially having an impact. This has put computational propaganda, the deliberate manipulation of political opinions in the digital sphere through automated tools, not only on the political but also on the public agenda. Adding to that, Germany has elections coming up this year that could potentially be subject to manipulation. The sphere is still largely unregulated, and private measures taken, for example, by social networks are often inadequate or lack legitimacy. I believe it is only natural that regulators perceive the need to intervene.

Based on the findings from your study, how do people in Germany regard the prospect of a new law to punish internet companies for bad content that they disseminate?

The response has been very mixed. While some applaud the effort to hold social networking companies responsible and to treat them as media companies rather than tech companies, others have criticised that the Netzwerk-DG could pose a threat to freedom of speech and that social networking operators lack the legitimacy, competence and capacity to regulate content online. Political experts are sceptical too: an alliance of Bitkom, the Amadeu Antonio Foundation, Netzpolitik, D64 and others has initiated a "declaration for the freedom of speech" in which they object to Heiko Maas' bill.

When your team of researchers visits the leaders of the big internet companies, what will be the message that you take, based on the latest research findings?

Our research has demonstrated that computational propaganda, first, is an international problem and, second, one that materialises on social networking platforms such as Facebook, Twitter and Instagram, among others. I believe that social network operators, just as much as regulators, have a responsibility to participate in an open dialogue about how to address these problems and to create countermeasures. Transparency about activity on platforms, data sharing, and collaboration between research, policy and technology could contribute immensely to coming up with solutions.

Germany, the UK and Poland are the only EU member states in your research project. To what extent do they show European attitudes and values towards media, social media and propaganda? Would it make sense to look for EU-initiated solutions?

For the UK we have been looking into computational propaganda activity during Brexit and the recent elections. Our German research serves as a benchmark for the upcoming elections; furthermore, Germany is pioneering countermeasures. In Poland our research has investigated the political economy supporting computational propaganda internationally, that is, the makers and vendors of bots, fake profiles, data brokerage and so on. I think our EU analysis portrays the spectrum of computational propaganda: Poland as a creator of such tools, the UK as a sphere where we have found some activity during past elections, and Germany as a country concerned with shaping the countermeasures of the future.

"What is at stake with digital attacks on the formation of opinion is nothing less than the fundamental values of the European Union: democracy, freedom of speech, and open political discourse."

I believe that EU-initiated solutions could be one important lever in countering digital propaganda, and given that it is a transnational problem, an institution like the EU is possibly better equipped than nation states. However, I believe that other actors have to become active, too: social networking companies, media multipliers and civil society need to think about regulatory, technical and educational solutions.


