Google versus the European Union? The giant explains ahead of the elections

Grzegorz Kowalczyk, Business Insider Polska: Is Google a democracy?

Daniel Rząsa, head of Google News Lab in Poland and Central and Eastern Europe: Yes. We support democracy, and we support free elections. We try to make sure that information about them reaches as many people around the world as possible, in line with Google's mission, which since the company's inception has been to organize the world's information and make it universally accessible and useful.

In European Parliament elections, Euroenthusiasts have always fought against Eurosceptics. Both sides will likely accuse you of favoring the other. How do you see your role in such a campaign?

Of course, each of us can have our own opinion about whether Brexit is good or bad, and whether the EU is a good or bad organization. However, we as Google cannot have a say in this discussion. A simple example – if a Eurosceptic harshly evaluates the EU and supports his arguments, for example, with an economic analysis that most experts widely reject, what should we do about it? It is not our role to judge this. Our role is to provide different points of view and ensure the free circulation of information. As long as the person acts within the law and does not, for example, advocate violence or hatred.

Don’t you want to respond when someone questions the scientific consensus?

We act when someone calls for violence, threatens the health and life of others, or causes material damage, but should we resolve disputes between one professor and another? I do not think so. Because how? Should we consult a third professor to decide? We believe that disagreement is part of public debate, and scientific debate in particular has its own rules and needs freedom.

How do you view the issue of defending democracy? I ask because four years ago we saw the unprecedented blocking of the Twitter account of Donald Trump, the current US president, who questioned the results of the election he lost. Was that decision excessive or not?

I believe that this matter cannot be resolved unequivocally, and each side will find its own arguments. The then democratically elected president has become a candidate again. Despite the controversy, I believe it is not our role to interfere in elections, regardless of anyone's opinions. Any such block could have many unintended consequences.


Daniel Rząsa | Press materials

2024 has already been hailed as an election year – and this year’s elections concern countries with a total population of 4 billion. Is this reflected in the intensity of online disinformation campaigns?

It is true that election campaigns always require a heightened confrontation with misinformation; this is when many suspicious accounts and false narratives are created. That is why during election campaigns we form special teams to deal with them. However, it is difficult to generalize that we are dealing with mass disinformation this year, because every country is different and has its own specifics. We therefore treat individual cases separately.

It is commonly said that 2016 was the beginning of the era of fake news due to the Brexit campaign, the elections in the United States, and the rivalry between Hillary Clinton and Donald Trump. Does 2024 have a chance of being another year like this?

It seems that awareness of the threats is greater today, which makes them easier to neutralize. There are definitely more reports than last year, and we have more work to do. However, we have not seen any massive, coordinated disinformation operations on a global scale this year, at least none greater than before. Perhaps that is because the volume has been high since the Covid-19 pandemic and we have a "high base effect". There will be time for summaries once the year ends. Although, as we know, even that can be a problem, because how do you define fake news today?

How does Google identify fake news?

We have very clear standards, although we know how difficult it can sometimes be to apply them clearly. We separate facts from opinions, because we know how important freedom of expression is. We therefore do not evaluate opinions, but we act when false data and information are presented, that is, content inconsistent with the facts. We respond quickly, especially when a matter involves a threat to human life, health, or property. We realize the importance of the problem, but at the same time we emphasize that we are not an editorial office and we do not create the content. We stay out of current controversies, even though someone regularly demands, "Take it down," referring to material from their political rivals. It is very important to us that our algorithms and teams work as independently as possible and do not succumb to such pressure. Imagine a situation in which we take something down, even in good faith, although it does not break the law. What then? Who knows who will come to us later with a similar request? Setting such a precedent would be a double-edged sword.

Do Google's tools allow you to estimate how misinformation evolves? Do you act proactively, for example when the issue of the migration pact arises and the risk of misinformation about it increases significantly?

It is not about being able to predict topics, but about knowing the techniques of misinformation and manipulation. In this regard, knowledge of these techniques gives us tools to detect disinformation campaigns early enough. Let me give an example: the presence of refugees from Ukraine in Poland. Immediately after the Russian aggression against Ukraine in 2022, sentiment toward these newcomers was very positive, though it has weakened somewhat since. We see, for example, messages that form part of a scapegoating strategy across various related topics. This was the case with the widely reported murder on Nowy Świat Street in Warsaw, which social media very quickly attributed to a Ukrainian citizen, a claim that later turned out to be completely false. How do we carry out our mission in such situations? First of all, through prebunking, that is, teaching people to recognize these manipulation techniques and false messages before they encounter them. We started organizing such campaigns two years ago in Poland, the Czech Republic, and Slovakia, and we will continue them.

What are these campaigns?

It is education. In materials published on our platforms we present, for example, the mechanisms of scapegoating and fear-mongering. We show how messages built on vague and unverified claims gradually shape attitudes and a sense of fear among entire social groups. Research conducted by Jigsaw, the division at Google that analyzes threats facing societies and the role of technology in developing solutions, in collaboration with the University of Cambridge and published in the scientific journal Science, showed that the percentage of people able to correctly identify a manipulation technique increased by an average of 5 percentage points after watching a prebunking video. And not only in Poland; we are taking similar measures in various countries around the world.

What is the scale of such actions?

So far, the prebunking campaigns we’ve created have appeared on all our platforms, including YouTube. We also ran paid campaigns on the most popular social media platforms. In Poland, we are talking about a reach of about 15 million users, and for example, last year before the elections in Indonesia, we reached more than 57 million users, including more than half of voters aged 18-34. We also organized conferences with people who professionally verify information – journalists, fact-checkers, scientists, and various NGOs.

The results of the campaigns, confirmed by research, are very promising. Therefore, on the occasion of the European Parliament elections, we are implementing another such campaign in five countries, including Poland.

Can the development of artificial intelligence help combat disinformation?

Of course. Thanks to AI we can act faster, and our users now have various tools to verify information. A simple example is reverse image search, which lets you check whether an image presented as evidence of a claim actually relates to the specific event or has been taken completely out of context. Recently our search engine also gained an "About this image" feature, which lets you quickly check the origin and context of images found online. Simply click the three dots next to an image in Google Images search results to access this tool, or click "More about this page" in the "About this result" panel in search results.

This allows you to quickly check the history of an image, how other sites have used it and described it, and its metadata, if available. We also introduced SynthID, which embeds a digital watermark, invisible to the human eye, in AI-generated images, audio, text, and video.

What about the threats? AI also offers far greater opportunities for disinformation and for creating credible deepfakes.

Everyone is now talking about the huge threat posed by AI, but frankly, from a counter-disinformation perspective, there are no major disinformation campaigns using AI today. Of course there are isolated cases, but it is difficult to speak of a widespread problem. In fact, we as a society continue to fall for crude mechanisms, and research shows that the biggest threat, at least this year, will not be deepfakes but cheap fakes: simple images taken out of context, fragments of text, drawings, and crudely made images. There is no need to use very sophisticated methods to fool us. It is enough to show footage of riots from anywhere, at any time, and attach a made-up story to it. Unfortunately, it still works. It creates divisions.

And you, as Google, want to fight divisions and polarization?

We believe that providing balanced, high-quality information is the best way to combat this. So in that regard, yes definitely.

How does Google Discover do this? How do you reconcile balanced content provision with the preferences of users who naturally want to stay in their own bubble?

Filter bubbles existed even before the Internet. After all, print newspapers were already being chosen for the worldview they represented. We encourage people to build bridges, but we do so "quietly." In Poland, for example, we supported the large "Spięcie" project, in which five editorial offices with different worldviews published each other's texts, presenting diametrically opposed points of view.

It was a great idea for breaking through information bubbles. We co-funded this work, which also supports the circulation of quality information.

There is another topic here. What about so-called citizen journalism? Would you rather support its development, and the fact that each of us can reach a mass audience, or focus on cooperation with established editorial offices that supposedly provide more reliable information?

These two trends should not be mutually exclusive. I do not think an influencer by definition has lower-quality content to convey than a journalist; they simply do not have a news desk or an editor above them. The Internet has democratized freedom of expression, and that is a good thing. It is no longer a closed circle of a few newspapers and television channels. On the other hand, we realize it is worth supporting the professional news ecosystem, because it also plays a very important role. We know how important its work is for access to reliable and verified information. That is why we are happy to take part in joint projects.
