Twitter's latest effort to curb trolling and abuse on the site takes some of the burden off users and places it on the company's algorithms.
If you tap on a tweet from a Twitter or real-world celebrity, more often than not there's a bot among the first replies.
This has been an issue for so long it's a bit ridiculous, and it all has to do with the fact that Twitter really only ranks tweets by quality inside search results and in back-and-forth conversations.
Twitter is making some new changes that call on how the collective Twitterverse responds to tweets to influence how often people see them.
With these upcoming changes, tweets in conversations and search will be ranked based on a greater variety of data, taking into account things like the number of accounts registered by that user, whether a tweet prompted people to block the account, and the IP address behind it.
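Twitter hasn't published how these signals are combined, but the general shape of the approach can be sketched as a simple scoring function. Everything below is a hypothetical illustration: the signal names, weights, and threshold are assumptions for the sake of the example, not Twitter's actual model.

```python
# Hypothetical sketch of behavior-based reply ranking.
# Signal names, weights, and the threshold are illustrative assumptions,
# not Twitter's actual scoring model.

def quality_score(reply):
    """Return a rough quality score; lower suggests a likelier-bad reply."""
    score = 1.0
    # Many accounts registered from the same place is treated as a spam signal.
    if reply["accounts_from_same_registration"] > 3:
        score -= 0.4
    # Replies that prompt other users to block the author are penalized.
    score -= 0.1 * reply["blocks_prompted"]
    return score

def rank_replies(replies, threshold=0.5):
    """Sort replies by score and demote low scorers to a hidden bucket,
    analogous to the "Show more replies" section."""
    ranked = sorted(replies, key=quality_score, reverse=True)
    visible = [r for r in ranked if quality_score(r) >= threshold]
    hidden = [r for r in ranked if quality_score(r) < threshold]
    return visible, hidden
```

The point of the sketch is that nothing gets deleted: every reply is still present, but low-scoring ones land in the demoted bucket where fewer people will see them.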
Tweets that are determined to most likely be bad aren't automatically deleted; instead, they're cast down into the "Show more replies" section, where fewer eyes will encounter them. The welcome change is likely to cut down on the tweets that you don't want to see in your replies. Twitter says that abuse reports were down 8 percent in conversations where this feature was being tested.
Much like any unfiltered commenting platform, Twitter has watched its abuse problems slowly worsen.
On one hand, it's been upsetting to users who have been personally targeted; on the other hand, it's taken away the utility of poring through the conversations that Twitter enables in the first place.
It's certainly been a tough problem to solve, but the company has understandably seemed reluctant to build out changes that take down tweets without a user report and a human review.
This is, however, a very 2014 way to look at content moderation, and I think it's grown pretty apparent as of late that Twitter needs to lean on its algorithmic intelligence to solve this rather than putting the burden entirely on users hitting the report button.