YouTube keeps on letting misinformation spread and reach millions of people

Dan Milmo, reporting for the Guardian on a letter signed by 80 groups and addressed to YouTube regarding misinformation:

YouTube is a major conduit of online disinformation and misinformation worldwide and is not doing enough to tackle the spread of falsehoods on its platform, according to a global coalition of factchecking organisations.

No shit, Sherlock. Why didn’t they write their letter as “YouTube is still a major conduit of fake news” or “YouTube is maintaining its role as a major conduit of fake news”? As it stands, it sounds as if this were something previously unknown to them, even though this story, like a Marvel production, is always the same, and a new instalment comes along every three months or so. Sadly, that is because the problems remain, and empty promises keep being made on a regular basis.

Everybody in the world is now more than familiar with Facebook’s role in the spread of fake news and misinformation, and the company, for several good reasons, has drawn most of the criticism aimed at platforms in media reports over the past couple of years. Meanwhile, YouTube dodges the bullets like Neo in The Matrix, and gladly leaves Facebook alone in the spotlight, or so it seems from a public-eye distance.

YouTube, as a platform and arguably as a standalone search engine, should share the role of misinformation villain with Facebook. Both deserve it, and it seems unfair that only one of them is named on the poster. Just open YouTube’s homepage in a private tab and you will see the crap the algorithm wants to serve you, without even searching for anything.

YouTube spokesperson Elena Hernandez, responding to the letter:

We’ve seen important progress, with keeping consumption of recommended borderline misinformation significantly below 1% of all views on YouTube, and only about 0.21% of all views are of violative content that we later remove.

Clearly this is not a priority for YouTube; otherwise I doubt they would have used the word “important” to describe their progress. Progress is good, but I’d argue the results of their efforts are not good enough: one percent of all views on YouTube is obviously a huge number of views, and I’d be curious to know how many millions of views that represents over a month, or a year. The most problematic word in the answer is, I think, “later”: violative content can indeed be removed, but only after it has reached millions of people, only after the harm is done (and only after money was made on top of the video).

I’m sure it is immensely difficult for Facebook, YouTube, Twitter, and others to fight the spread of misinformation, and I’m not saying they are not trying. What I’m saying is that they need to do more, and eventually succeed. The problem will not go away on its own, which is why it is crucial that groups like the ones who signed the letter to YouTube keep pressuring these companies, that governments, media organisations, and journalists keep reminding the platforms that this cannot continue, and that whistleblowers like Frances Haugen can safely speak out.