As part of a series surveying online gender-based violence, the Institute for Strategic Dialogue (ISD) published a report examining TikTok’s content moderation in English, French, German, and Hungarian. The results point to algorithmic bias. Using qualitative analysis, the ISD researchers entered racist and misogynistic slurs – twelve in total, three per language – as prompts in TikTok’s search engine and analyzed the results it produced. In two-thirds of the videos examined, the platform’s search function and recommendation algorithms “perpetuated harmful stereotypes,” effectively creating routes that connected “users searching for hateful language with content targeting marginalized groups.”
Paula-Charlotte Matlach, Allison Castillo, Charlotte Drath, and Eva F. Hevesi, Recommending Hate: How TikTok’s Search Engine Algorithms Reproduce Societal Bias, Institute for Strategic Dialogue (ISD), February 2025. https://www.isdglobal.org/wp-content/uploads/2025/02/How-TikToks-Search-Engine-Algorithms-Reproduce-Societal-Bias.pdf