Since the beginning of August, Twitch has been struggling to contain an epidemic of harassment against marginalized streamers known as “hate raids,” in which a channel’s chat is flooded with hateful and bigoted messages, amplified dozens of times per minute by bots. On Thursday, after a month of failed attempts to combat the tactic, Twitch turned to the legal system, suing two alleged hate raiders for “targeting black and LGBTQIA+ streamers with racist, homophobic, sexist and other harassing content” in violation of its terms of service.
“We hope this complaint will help clarify the identities behind these attacks and the tools they use, discourage them from bringing similar behavior to other services, and help end these malicious attacks against members of our community,” a Twitch spokesperson told WIRED.
Harassment based on gender, race, and sexual orientation is nothing new on the decade-old game-streaming platform; over the past month, however, targeted hate raids have escalated. Marginalized streamers have received derogatory messages—sometimes hundreds at a time—such as “this channel now belongs to KKK.” To raise awareness of hate raids and push Twitch to take action, thousands of streamers banded together under hashtags such as #TwitchDoBetter and #ADayOffTwitch, boycotting the service for a day.
Twitch has made a number of changes aimed at curbing hate raids. The company says it banned thousands of accounts last month, rolled out a new chat filter, and has been building channel-level ban-evasion detection. But it’s a bit like whack-a-mole: perpetrators keep creating new accounts while hiding their online identities to avoid accountability. “The malicious actors involved have been extremely aggressive in violating our terms of service, creating new waves of fake bot accounts designed to harass creators even as we continue to update our sitewide protections against their rapidly evolving behaviors,” a Twitch spokesperson told WIRED.
The lawsuit, filed Thursday in the U.S. District Court for the Northern District of California, targets two users identified only as “Cruzzcontrol” and “CreatineOverdose,” whom Twitch believes are based in the Netherlands and Vienna, Austria, respectively. In the suit, Twitch says it initially took “swift action” to suspend and permanently ban their accounts. However, it writes, they “evaded Twitch’s bans by creating new, alternate Twitch accounts, and continually altering their self-described ‘hate raid code’ to avoid detection and suspension by Twitch.” The complaint states that Cruzzcontrol and CreatineOverdose still operate multiple accounts under aliases on Twitch, along with thousands of bot accounts used for hate raids, and that both users claim, in the words of the lawsuit, to be able to “generate thousands of bots in a matter of minutes.” Twitch says Cruzzcontrol alone is responsible for roughly 3,000 bots associated with these recent hate raids.
On August 15, the lawsuit alleges, CreatineOverdose demonstrated how their bot software “is used to send graphic depictions of racial slurs and violence against minorities to Twitch channels,” claiming that the hate raider was the “KKK.” The complaint suggests the defendants may be part of a “hate raid community” that coordinates attacks on Discord and Steam.
Twitch has tangled with botmakers in court before. In 2016, the company sued several bot sellers for artificially inflating viewer and follower counts—a practice Matthew DiPietro, Twitch’s senior vice president of marketing at the time, called a “persistent frustration.” A California judge ruled in Twitch’s favor, ordering the botmakers to pay the company US$1.3 million for breach of contract, unfair competition, violation of the Anticybersquatting Consumer Protection Act, and trademark infringement. Thursday’s lawsuit may help reveal the identities of the anonymous hate raiders, so that they, too, can face legal consequences.