YouTube’s algorithm recommends videos that violate its guidelines


Research by Mozilla revealed that despite the platform's measures to limit objectionable content, there is still much to do, especially when the algorithm itself recommends that content.


YouTube's algorithm regularly recommends videos whose content violates the platform's own policies. The finding comes from the non-profit Mozilla Foundation, which used crowdsourced data from RegretsReporter, a browser extension that users can install to report harmful videos and record how they arrived at them. More than 30,000 YouTube users have used the extension to report this type of content.

According to Mozilla's latest research, 3,362 videos were flagged as "objectionable" between July 2020 and May 2021. The study found that 71% of the videos volunteers deemed objectionable had been actively recommended by YouTube's algorithm. Videos were flagged for reasons including violence, spam, and hate speech.

Non-English speakers were much more likely to encounter videos they considered disturbing: the rate of reports was 60% higher in countries where English is not the primary language. This suggests YouTube devotes more energy and resources to moderating English-language content, leaving its algorithms more leeway when it comes to other languages.

Nearly 200 of the videos recommended by the algorithm, about 9% of the total, have since been removed from YouTube, but not before racking up a combined 160 million views.

According to Brandi Geurkink, Mozilla's Senior Manager of Advocacy, YouTube's algorithm is designed in a way that harms and misinforms people. "Our research confirms that YouTube not only hosts, but actively recommends videos that violate its own policies," she said. Geurkink added that these findings are just the tip of the iceberg, but she hoped they would "convince the public and lawmakers of the urgent need for better transparency in YouTube's AI".

YouTube's recommendation algorithm drives 70% of viewing time on the platform, around 700 million hours every single day, by surfacing content that keeps people watching. This carries risks, particularly when it comes to disinformation.

In addition, Mozilla's research shows that in 43% of cases, the unwanted videos users reported had nothing to do with what they had previously watched. The reported videos nonetheless perform well: they averaged 70% more views per day than other content watched by the volunteers.

Over the years, YouTube has taken steps to reduce the spread of harmful content, making numerous changes and removing content deemed "borderline", but the Mozilla study suggests there is still a lot of work to be done.

In a statement to The Next Web, a spokesperson for the Google-owned platform said YouTube is constantly working to improve the user experience: "In the last year alone we have introduced over 30 different changes to reduce recommendations of harmful content." Thanks to these changes, according to YouTube, the consumption of borderline content resulting from its recommendations is now significantly below 1%.

