Lingua-News Cyprus

Language Learning Through Current Events

Saturday, March 28, 2026
C1 Advanced

Social Media Giants Accused of Prioritising Engagement Over Safety Amidst Surge in Harmful Content

New revelations are casting a stark spotlight on the content moderation practices of social media titans Meta and TikTok, with internal documents and whistleblower accounts suggesting the platforms deliberately allowed more detrimental material to circulate in order to fuel user engagement. The allegations paint a disquieting picture of companies seemingly willing to compromise user safety in a relentless pursuit of growth and favourable political standing.

At the heart of the controversy lies Meta, the parent company of Facebook and Instagram. Internal research conducted by Meta reportedly concluded that "outrage" was a significant driver of user interaction. This finding, according to sources close to the company, precipitated decisions to permit a greater volume of "borderline" harmful content to permeate user feeds. This strategy appears to have been particularly pronounced with the launch of Instagram Reels in 2020, Meta's direct riposte to the meteoric rise of TikTok.

Evidence suggests that the drive to compete with TikTok for audience attention was so potent that the development of Reels proceeded without the necessary safety protocols being fully implemented. Internal Meta research, subsequently brought to light, indicated a significantly higher incidence of bullying, harassment, hate speech, and outright violence on Reels compared to other sections of Instagram. Despite these alarming findings, the company is alleged to have prioritised the expansion of Reels' user base over bolstering the safety teams required to manage its content effectively. One Meta engineer reportedly alluded to the financial pressures, stating, "They sort of told us that it's because the stock price is down," as a justification for these decisions.

The accusations extend to TikTok as well. An insider from the platform has reportedly provided access to internal dashboards that reveal a disturbing trend: user complaints regarding harmful content were often superseded by cases involving political figures. This practice, the staffer explained, was undertaken to "maintain a strong relationship" with political entities, thereby mitigating the risk of regulatory intervention or outright bans. This suggests a calculated trade-off where the potential dangers faced by users were deemed secondary to the preservation of favourable political ties.

The cumulative impact of these alleged practices is a landscape where users are increasingly exposed to a litany of harmful material. From misogyny and conspiracy theories to more severe forms of abuse like sexual blackmail and the glorification of terrorism, the internal awareness of these algorithmic harms within Meta and TikTok is now being questioned. The revelations raise profound concerns about the efficacy of current content moderation policies at major social media corporations and the inherent conflict between their commercial imperatives and their responsibility to safeguard their vast user communities. The long-term implications for user well-being and the integrity of online discourse remain a pressing concern.
