I am taking note of which side you believe. 
“If we have data, let’s look at data. If all we have are opinions, let’s go with mine.”
It seems X provided their partner Sprinklr with 300 slurs that the model was based on. That doesn't sound independent to me.
What about these two questions:
Logic would dictate that with less censorship, hate speech will go up, no?
Would someone saying the "H" was not real be censored, or would that count as a violation of their new policy?
Sprinklr’s AI-Based Toxicity Model: X (formerly Twitter) Case Study
We understand that the goal of our partners at X (formerly Twitter) is to understand, measure, and reduce toxicity on the platform, and to promote brand safety for advertisers. This made X a great early customer and partner for this capability.
X provided Sprinklr with a list of 300 English-language slur words, designed to capture hateful slurs and language that targets marginalized and minority voices. Sprinklr analyzed every English-language public tweet between January and February 2023 and identified 550,000 tweets that included at least one word from the list.
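For what it's worth, the mechanics of that scan are just keyword matching. Here's a minimal sketch in Python; the placeholder SLUR_LIST and the in-memory sample are my own illustrations, since the real 300-term list and the tweet corpus are not public:

```python
import re

# Hypothetical stand-ins: the real 300-term list and the tweet corpus
# are not public, so these values are illustrative only.
SLUR_LIST = ["badword1", "badword2"]

# One compiled pattern with word boundaries, so a listed term does not
# match inside a longer, unrelated word.
PATTERN = re.compile(
    r"\b(?:" + "|".join(re.escape(term) for term in SLUR_LIST) + r")\b",
    re.IGNORECASE,
)

def flag_tweets(tweets):
    """Yield every tweet containing at least one listed term."""
    for text in tweets:
        if PATTERN.search(text):
            yield text

# Mock usage: two of these three strings would be counted.
sample = ["contains badword1", "harmless text", "BADWORD2 shouted"]
print(sum(1 for _ in flag_tweets(sample)))  # -> 2
```

Note that any count produced this way depends entirely on which terms are on the list, which is the crux of the independence objection above.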
Regarding your two questions:
1. No. Less censorship/more free speech does not mean more hateful impressions on X. Their algorithm is designed to automatically limit the reach of content it deems hateful. If there’s data proving that hateful impressions are up, it should be brought to light; as of now, that doesn’t seem to be the case.
2. I don’t know their policy on a case-by-case basis, but from my understanding: if a post is illegal, the policy is to remove it; if it’s legal but hateful, its reach is suppressed so the public doesn’t see it unless it is specifically sought out (a minimal sketch of this rule follows below).
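That two-tier policy reduces to a simple decision rule. A rough sketch, where is_illegal and is_hateful are hypothetical classifier callables standing in for whatever detection X actually runs:

```python
from enum import Enum

class Action(Enum):
    REMOVE = "remove"      # illegal: take the post down
    SUPPRESS = "suppress"  # legal but hateful: limit reach, show only if sought out
    ALLOW = "allow"        # everything else is left alone

def moderate(post, is_illegal, is_hateful):
    """Apply the two-tier policy described above.

    is_illegal and is_hateful are hypothetical classifier callables;
    X's real detection pipeline is not public.
    """
    if is_illegal(post):
        return Action.REMOVE
    if is_hateful(post):
        return Action.SUPPRESS
    return Action.ALLOW

# Mock usage with trivial stand-in classifiers.
print(moderate("some text", is_illegal=lambda p: False, is_hateful=lambda p: True))
# -> Action.SUPPRESS
```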
Regarding the independent study: X/Twitter has always had a list of slurs it uses to suppress or delete hateful or illegal tweets, so it obviously makes sense for them to track how they perform over time. If someone has their own list of hate-speech terms, or develops an AI model to detect hate speech, they are welcome to run the same analysis and claim that X is showing more hateful impressions than before.