Facebook found hosting masses of far right EU disinformation networks
A multi-month hunt for political disinformation spreading on Facebook in Europe suggests there are concerted efforts to use the platform to spread bogus far right propaganda to millions of voters ahead of a key EU vote which kicks off tomorrow.
Following the independent investigation, Facebook has taken down a total of 77 pages and 230 accounts from Germany, the UK, France, Italy, Spain and Poland, which had been followed by an estimated 32 million people and generated 67 million ‘interactions’ (i.e. comments, likes, shares) in the last three months alone.
The bogus, mainly far-right, disinformation networks were not identified by Facebook itself but were reported to it by campaign group Avaaz, which says the fake pages had more Facebook followers and interactions than all the main EU far-right and anti-EU parties combined.
“The results are overwhelming: the disinformation networks upon which Facebook acted had more interactions (13 million) in the past three months than the main party pages of the League, AfD, VOX, Brexit Party, Rassemblement National and PiS combined (9 million),” it writes in a new report.
“Although interactions is the figure that best illustrates the impact and reach of these networks, comparing the number of followers of the networks taken down reveals an even clearer image. The Facebook networks takedown had almost three times (5.9 million) the number of followers as AfD, VOX, Brexit Party, Rassemblement National and PiS’s main Facebook pages combined (2 million).”
Avaaz has previously found and announced far-right disinformation networks operating in Spain, Italy and Poland, and a spokesman confirmed to us it’s re-reporting some of those findings now (such as the ~30 pages and groups in Spain that had racked up 1.7M followers and 7.4M interactions, which we covered last month) to highlight an overall total for the investigation.
“Our report contains new information for France, United Kingdom and Germany,” the spokesman added.
Examples of politically charged disinformation being spread via Facebook by the bogus networks include a fake viral video, seen by 10 million people, that supposedly shows migrants in Italy destroying a police car (the footage was actually from a film, and Avaaz notes this fake had been “debunked years ago”); a story in Poland, accompanied by a fake image, claiming that migrant taxi drivers rape European women; and fake news about a child cancer center being closed down by Catalan separatists in Spain.
There’s lots more country-specific detail in its full report.
In all, Avaaz reported more than 500 suspicious pages and groups to Facebook over the course of its three-month investigation of Facebook disinformation networks in Europe. Facebook, though, only took down a subset of the far-right muck-spreaders: around 15% of the suspicious pages reported to it.
“The networks were either spreading disinformation or using tactics to amplify their mainly anti-immigration, anti-EU, or racist content, in a way that appears to breach Facebook’s own policies,” Avaaz writes of what it found.
It estimates that content posted by all the suspicious pages it reported had been viewed some 533 million times over the pre-election period, though there’s no way to know whether everything it judged suspicious actually was.
In a statement responding to Avaaz’s findings, Facebook told us:
We thank Avaaz for sharing their research for us to investigate. As we have said, we are focused on protecting the integrity of elections across the European Union and around the world. We have removed a number of fake and duplicate accounts that were violating our authenticity policies, as well as multiple Pages for name change and other violations. We also took action against some additional Pages that repeatedly posted misinformation. We will take further action if we find additional violations.
The company did not respond to our question asking why it failed to unearth this political disinformation itself.
Ahead of the EU parliament vote, which begins tomorrow, Facebook invited a select group of journalists to tour a new Dublin-based election security ‘war room’, where it talked about a “five pillars of countering disinformation” strategy for preventing cynical attempts to manipulate voters’ views.
But as Avaaz’s investigation shows, plenty of political disinformation is flying by entirely unchecked.
One major ongoing issue where political disinformation and Facebook’s platform are concerned is that the way the company enforces its own rules remains entirely opaque.
We don’t get to see the full detail, so we can’t assess its decisions. Yet Facebook has been known to shut down swathes of accounts deemed fake ahead of elections, while apparently failing entirely to find other fakes (as in this case).
It’s a situation that does not look compatible with the continued functioning of democracy, given Facebook’s massive reach and power to influence opinion.
Nor is the company under any obligation to report every fake account it confirms. Instead, Facebook controls the timing and flow of the official announcements it chooses to make about “coordinated inauthentic behaviour”, dropping these self-selected disclosures as and when it sees fit, and making them sound as routine as possible by cloaking them in its standard, dryly worded newspeak.
Back in January, Facebook COO Sheryl Sandberg publicly admitted that the company is blocking more than 1M fake accounts every day. If Facebook were reporting every fake it finds, it would need to do so via a real-time dashboard, not sporadic newsroom blog posts that inherently play down the scale of a problem which is clearly embedded in its platform, and which may be so massive and ongoing that it’s not really possible to know where Facebook stops and ‘Fakebook’ starts.
The suspicious behaviours Avaaz attached to the pages and groups that appeared to be in breach of Facebook’s stated rules include the use of fake accounts, spamming, misleading page-name changes and suspected coordinated inauthentic behaviour.
When Avaaz previously reported the Spanish far-right networks, Facebook told us it had removed “a number” of pages for violating its “authenticity policies”, including one page for name-change violations, but claimed “we aren’t removing accounts or Pages for coordinated inauthentic behavior”.
So again, it’s worth emphasizing that Facebook gets to define what is and isn’t acceptable on its platform, including by coining terms such as “coordinated inauthentic behavior” that seek to normalize its own inherently dysfunctional ‘rules’ and their ‘enforcement’. That term sets a threshold of Facebook’s own choosing for what it will and won’t judge to be political disinformation, and it’s inherently self-serving.
Given that Facebook only acted on a small proportion of what Avaaz found and reported, we might posit that the company is setting a very high bar for acting against suspicious activity, and that plenty of election fiddling is flowing freely under its feeble radar. (When we previously asked Facebook whether it was disputing Avaaz’s finding of coordinated inauthentic behaviour vis-à-vis the far-right disinformation networks reported in Spain, the company did not respond to the question.)
Much of the publicity around Facebook’s self-styled “election security” efforts has also focused on how it’s enforcing new disclosure rules around political ads. But again, political disinformation masquerading as organic content continues to spread across its platform, where it is demonstrably racking up millions of interactions with people’s brains and eyeballs.
Plus, as we reported yesterday, research conducted by the Oxford Internet Institute into pre-EU election content sharing on Facebook has found that sources of disinformation-spreading ‘junk news’ generate far greater engagement on its platform than professional journalism.
So while Facebook’s platform is clearly full of real people sharing actual news and views, the fake BS that Avaaz’s findings suggest is flooding the platform gets spread around more on a per-unit basis. And it’s democracy that suffers, because vote manipulators are able to pass off manipulative propaganda and hate speech as bona fide news and views, a consequence of Facebook publishing the fake stuff alongside genuine opinions and professional journalism.
It does not have algorithms that can perfectly distinguish one from the other, and has suggested it never will.
The bottom line is that even if Facebook dedicates far more resources (human and AI) to rooting out ‘election interference’, the wider problem is that a commercial entity which benefits from engagement on an ad-funded platform is also the referee setting the rules.
Indeed, the whole loud Facebook publicity effort around “election security” looks like a cynical attempt to distract the rest of us from how broken its rules are. Or, in other words, a platform that accelerates propaganda is also seeking to manipulate and skew our views.
Source: Social – TechCrunch