Take note of which side is looking at data.
I am taking note of which side you believe.
It seems X provided their partner Sprinklr with 300 slurs that the model was based on. That doesn't sound independent to me.
What about these two questions:
Logic would dictate that with less censorship, hate speech will go up, no?
Would someone saying the "H" was not real be censored, or would that be a violation of their new policy?
Sprinklr's AI-Based Toxicity Model: X (formerly Twitter) Case Study
We understand that the goal of our partners at X (formerly Twitter) is to understand, measure, and reduce toxicity on the platform and to promote brand safety for advertisers. This made X a great early customer and partner for this capability.
X provided Sprinklr with a list of 300 English-language slur words. The list was designed to capture hateful slurs and language that targets marginalized and minority voices. Sprinklr analyzed every English-language public tweet from January and February 2023 and identified 550,000 tweets that included at least one word from the list provided.
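For clarity on what "included at least one word from the list" means in practice, here is a minimal sketch of that kind of word-list matching. This is an assumption, not Sprinklr's actual implementation: the real 300-term list is not public (the placeholder terms below are hypothetical), and Sprinklr's production model is AI-based rather than a simple keyword filter. The sketch only illustrates the flagging step described in the case study.

```python
import re

# Hypothetical placeholder terms; the actual 300-term list X provided is not public.
SLUR_TERMS = ["slur_a", "slur_b", "slur_c"]

# One regex that matches any listed term at word boundaries, case-insensitively,
# so "Slur_A!" is flagged but a longer word merely containing a term is not.
PATTERN = re.compile(
    r"\b(?:" + "|".join(re.escape(term) for term in SLUR_TERMS) + r")\b",
    re.IGNORECASE,
)

def contains_listed_term(tweet_text: str) -> bool:
    """Return True if the tweet includes at least one term from the list."""
    return PATTERN.search(tweet_text) is not None

def count_flagged(tweets: list[str]) -> int:
    """Count tweets containing at least one listed term, as in the case study."""
    return sum(contains_listed_term(t) for t in tweets)

if __name__ == "__main__":
    sample = ["nothing to see here", "contains slur_b here", "SLUR_A leads off"]
    print(count_flagged(sample))  # -> 2
```

Note that a pure keyword match like this counts occurrences regardless of context (quotation, reclamation, counter-speech), which is one reason a vendor would layer an AI model on top of the raw list rather than report keyword hits alone.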