Twitch Addresses #TwitchDoBetter Campaign, Affirms Commitment to Improvement
In a bid to weed out rampant abuse on its platform, Twitch, a leading live streaming service, acknowledged escalating concerns about botting, hate raids, and harassment targeting marginalized creators. The announcement, made on August 25, promised swifter action and enhanced protections for affected users.
"Time to tackle those tough talks about creator safety," Twitch declared, acknowledging the pressure they've been under to address these persistent issues. The company has identified a critical flaw in their proactive filters that allow hate speech to slip through, a problem they're rectifying by rolling out an immediate update.
Moreover, they've outlined impending safety features, including a robust account verification process and efficient channel-level ban evasion detection tools, due to roll out in the weeks ahead. This move follows a wave of complaints by creators and users under the hashtag #TwitchDoBetter, a campaign brought to life after RekItRaven, a marginalized Twitch streamer, faced a hate raid that left their account overrun with malicious remarks.
Such hate raids involve bad-faith users employing bots and aliases to inundate specific streamers with abuse. The campaign gained momentum as more users stepped forward to condemn identity-based harassment that marginalized creators routinely endure on the platform.
"The harassment isn't discriminatory. It's equal-opportunity hate," Vanessa, a black Twitch user, told the Washington Post. "It's frightening because the hatred is aimed at every marginalized identity, regardless of strength, frequency, or status."
Twitch's past attempts to curb harassment and hate speech have been less than successful, with the company often reacting hastily rather than offering lasting solutions. For instance, a piecemeal policy update in late 2020, which banned the use of words like "simp," "incel," and "virgin" as insults referencing sexual activity, failed to offer comprehensive protection.
Twitch's latest statement expressed gratitude to users who bravely shared their experiences and assured the community that they were undeterred in their efforts to create a safer, more inclusive environment. The platform intends to engage with community members to gain valuable insights into their struggles and encourage open feedback via UserVoice.
In the broader online landscape, proposals for unified abuse defense systems built on collaboration between victims and platforms are gaining traction. Such systems aim to provide transparent services for abuse victims and leverage user expertise for abuse verification. However, Twitch has not disclosed whether it will adopt any such approach in response to recent events.
Critics argue that Twitch's community guidelines, though well-intentioned, leave creators vulnerable to real-world threats, necessitating stricter moderation and account verification processes. These concerns are likely to shape future improvements to the platform as the conversation about online safety continues to evolve.
- Aimee, a streamer on Twitch, applauded the company's decision to enhance protections and roll out new safety features such as improved account verification and channel-level ban evasion detection.
- The technology industry is watching closely as Twitch addresses botting and harassment, with some observers calling for stricter moderation and account verification processes.
- In the future, Twitch users might see collaborative abuse defense systems implemented, in which victims and the platform work together to combat hate raids and identity-based harassment, building on the company's commitment to working with its community.