Twitter is once again inserting itself into the 'free' exchange of ideas, announcing on Tuesday that iOS and Android users will now be prompted to review 'potentially harmful or offensive' replies before they hit send.
"We began testing prompts last year that encouraged people to pause and reconsider a potentially harmful or offensive reply — such as insults, strong language, or hateful remarks — before Tweeting it," reads a corporate blog post. "Based on feedback and learnings from those tests, we’ve made improvements to the systems that decide when and how these reminders are sent. Starting today, we’re rolling these improved prompts out across iOS and Android, starting with accounts that have enabled English-language settings."
In its example, Twitter's prude algo clutches pearls over 'mean' words.
Maybe they could apply similar measures to the rampant child porn on their platform?
According to the company, the rollout follows extensive tweaks meant to 'capture the nuance in many conversations,' since the earlier system 'often didn't differentiate between potentially offensive language, sarcasm, and friendly banter.'
Per the blog post, "Since the early tests, here's what we've incorporated into the systems that decide when and how to send these reminders:"

- Consideration of the nature of the relationship between the author and replier, including how often they interact. For example, if two accounts follow and reply to each other often, there's a higher likelihood that they have a better understanding of each other's preferred tone of communication.
- Adjustments to our technology to better account for situations in which language may be reclaimed by underrepresented communities and used in non-harmful ways.
- Improvements to our technology to more accurately detect strong language, including profanity.
- An easier way for people to let us know if they found the prompt helpful or relevant.