New reports suggest that Meta and TikTok may have allowed more harmful content onto their platforms in order to drive engagement. Whistleblowers have come forward with information about these practices, and their claims suggest the companies may have put user safety second to user growth and favorable political relationships.
Meta, which owns Facebook and Instagram, is at the center of these claims. Internal research reportedly found that content provoking anger drove higher engagement, and the company allegedly responded by tolerating more "borderline" harmful content. This reportedly intensified with the 2020 launch of Instagram Reels, Meta's answer to TikTok's success.
Eager to compete with TikTok for users, Meta reportedly developed Reels quickly, before safety measures were fully in place. Internal Meta research showed increased bullying, hate speech, and violent content on Reels, yet the company is accused of prioritizing the product's growth over hiring more safety staff. One Meta engineer reportedly cited budget constraints as a reason.
TikTok faces similar accusations. An insider reportedly produced internal data showing that complaints about harmful content were often ignored when the complaints involved political figures. The insider said this was done to "keep good relationships" with politicians, suggesting the company chose political ties over user safety.
If true, these practices expose users to more harmful material, including hate speech and other damaging ideas, and raise questions about what Meta and TikTok knew internally. The reports cast serious doubt on how social media companies handle content moderation, pointing to a conflict between making money and protecting users. The implications for user safety and online discourse are a significant concern.