YouTube has instituted many changes over the past year to limit the problematic videos it recommends to viewers. A new study suggests the repairs have a way to go.
The software nonprofit Mozilla Foundation found that YouTube’s powerful recommendation engine continues to direct viewers to videos that participants said contained false claims and sexualized content, with the platform’s algorithms suggesting 71% of the videos that participants flagged as objectionable.
The study highlights the continuing challenge Alphabet Inc. subsidiary YouTube faces as it tries to police the user-generated content that turned it into the world’s leading video service. It is emblematic of the struggle roiling platforms from Facebook Inc. to Twitter Inc., which soared to prominence by encouraging people to share information but which now face regulatory and social pressure to police divisive, misleading and dangerous content without censoring diverse points of view.
For YouTube, the study also exposes gaps in its efforts to steer users toward videos likely to interest them based on viewership patterns, rather than toward videos going viral for other reasons.
In the study, one of the largest of its kind, 37,000 volunteers used a browser extension that tracked their YouTube usage over a 10-month period that ended in May. When a participant flagged a video as problematic, the extension recorded whether the video had been recommended to the viewer or found independently. Videos flagged as objectionable included a sexualized parody of “Toy Story” and an election video falsely suggesting that Microsoft Corp. founder Bill Gates hired students involved with Black Lives Matter to count ballots in battleground states.