Instagram Reels Displays Explicit Content to Users Exclusively Following Children

Meta has faced a challenging few days. Initially accused of intentionally targeting children under 13 on its platforms, it later appeared to be turning away ads for period care products, deeming them “adult” and “political.” Now, there are claims that Instagram’s Reels algorithm presents explicit sexual content to accounts exclusively following children, alongside ads for major brands. All in all, it’s not a favorable situation.

The Wall Street Journal published a new report testing Instagram’s algorithm by setting up accounts that exclusively followed “young gymnasts, cheerleaders, and other teen and preteen influencers.” None of this content involving children was sexual in nature. Despite this, the Journal’s experiment found that Meta’s TikTok competitor recommended sexual content to the test accounts, including both provocative adult videos and “risqué footage of children.”

The Journal also discovered that accounts belonging to adult men often followed child users like those followed by its test accounts, and that following such accounts seemed to trigger Instagram’s algorithm to display “more-disturbing content.”

Escalating Challenges for Meta: Advertisers Withdraw Amidst Content Concerns

The situation worsens for Meta: the report reveals that Instagram Reels displayed ads from companies including Disney, Walmart, Pizza Hut, Bumble, Match Group, and even the Journal itself alongside the unsolicited sexual content served up by the algorithm.

Bumble and Match Group, dating app companies, have both suspended advertising on Instagram in response, expressing objections to having their brands associated with inappropriate content.

Meta’s Samantha Stetson asserts that the Journal’s test results stem from a “manufactured experience” that doesn’t reflect what billions of people worldwide encounter. The Vice President of Client Council and Industry Trade Relations at Meta mentioned that over four million Reels are taken down monthly for policy violations. A Meta spokesperson also highlighted that instances of content breaching their policies are relatively low.

Stetson, in a statement to Mashable, conveyed, “We don’t want this kind of content on our platforms, and brands don’t want their ads to appear next to it. We continue to invest aggressively to stop it — and report every quarter on the prevalence of such content, which remains very low. Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security, and brand suitability solutions.”

Meta introduced an AI tool earlier this year to assess whether content aligns with its monetization policies. The tool categorizes content based on suitability and disables ads if it doesn’t fit any of the categories. This capability was extended to Reels in October.

Brands attempting to advertise on social media have faced challenges in recent weeks. Earlier this month, major advertisers like Apple and IBM withdrew from Twitter/X after owner Elon Musk endorsed an anti-Semitic conspiracy theory. Additionally, a Media Matters report revealed that Twitter/X displayed ads alongside Nazi content.

Twitter/X previously made a similar argument to what Meta is currently asserting, stating that the tests leading to inappropriate content alongside advertisers were “manufactured.” However, as was the case with Twitter/X, the concern is not solely about the number of people who saw it or how it occurred but rather the fact that it could happen at all.

Instagram’s case differs from Twitter/X’s in one key respect: Media Matters’ testing on Twitter/X involved following accounts that posted “extreme fringe content,” whereas the Journal’s accounts followed only young athletes and influencers. The sexual content appears to be entirely the product of inferences made by Instagram’s algorithm.

Therefore, it appears that significant adjustments to the algorithm are warranted.
