FRIENDLY SUGGESTION
Twitter announced Tuesday that it is testing a tool that would prompt users to revise replies containing “harmful” language before they are published.
“When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” the tech platform said in a tweet.
With the new tool, users who hit “send” on a reply will be alerted if their message contains words similar to those in posts that have previously been reported. They will then be given the option to revise the reply before it is published.
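Twitter has not disclosed how the matching actually works. As a rough illustration only, the behavior described, comparing a draft reply’s words against language drawn from previously reported posts, might resemble the following Python sketch; the word list and the should_prompt_revision function are hypothetical stand-ins, not Twitter’s system:

    # Illustrative sketch only: Twitter has not disclosed its implementation.
    # REPORTED_WORDS and should_prompt_revision are hypothetical names.
    import re

    # Hypothetical vocabulary drawn from previously reported posts.
    REPORTED_WORDS = {"insult", "slur", "threat"}

    def should_prompt_revision(reply: str) -> bool:
        """Return True if the draft reply shares words with reported posts."""
        words = set(re.findall(r"[a-z']+", reply.lower()))
        return bool(words & REPORTED_WORDS)

    if __name__ == "__main__":
        draft = "That take is an insult to everyone here."
        if should_prompt_revision(draft):
            print("This reply may contain harmful language. Revise before sending?")
        else:
            print("Reply sent.")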
The test is the latest step in Twitter’s efforts to tackle hateful posts amid mounting pressure on the platform. Monitoring currently relies on users flagging offensive posts and on automated screening technology.