
Friday, January 27, 2012

Twitter Commits Social Suicide

Starting today, we give ourselves the ability to reactively withhold content from users in a specific country — while keeping it available in the rest of the world. We have also built in a way to communicate transparently to users when content is withheld, and why.
With those words earlier today, in a blog posting titled “Tweets still must flow,” the management of Twitter went over to the dark side and may well have dug their own grave.
In what can only have been a fit of corporate insanity, Twitter announced that it has the ability to filter tweets to conform to the demands of various countries.
Thus, in France and Germany it is illegal to broadcast pro-Nazi sentiments and Twitter will presumably be able to block such content and inform the poster why it was blocked.
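To make the mechanics concrete, here is a minimal sketch in Python of what per-country withholding with a visible notice might look like. The field name withheld_in and the function render_tweet are my own inventions for illustration; none of this is Twitter’s actual code.

    # Sketch only: assumes each tweet record carries a list of country codes
    # (invented field "withheld_in") in which it has been withheld.
    def render_tweet(tweet, viewer_country):
        """Return what a viewer in the given country would see."""
        if viewer_country in tweet.get("withheld_in", []):
            # Still visible everywhere else; this viewer gets a notice instead.
            return "[This Tweet has been withheld in your country: %s]" % viewer_country
        return tweet["text"]

    tweet = {"text": "Example tweet", "withheld_in": ["DE", "FR"]}
    print(render_tweet(tweet, "DE"))  # notice shown
    print(render_tweet(tweet, "US"))  # full text shown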
[UPDATE: Please see below for the reason that the sections discussing computer-mediated filtering have been deleted.]
Quite obviously, Twitter’s management believes that there is some kind of value in being able to filter in this way. But given that, over the course of 2011, the number of tweets per second (tps) ranged from a high of almost 9,000 down to just under 4,000, any filtering has got to be computer-driven.
So, consider this tweet:
@FactsorDie Nazi Germany led the first public anti-smoking campaign.
Could that be considered to be pro-Nazi? How will a program accurately make that determination?
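To see why this is hard, consider a crude keyword filter, the simplest kind of automated check imaginable (a sketch of my own, not anything Twitter has described using). It flags the historical-fact tweet above simply because the word “Nazi” appears in it:

    # Deliberately naive keyword filter, for illustration only. It has no
    # grasp of context, so it misclassifies the factual tweet above.
    BLOCKED_TERMS = {"nazi"}

    def looks_pro_nazi(text):
        words = {w.strip(".,!?").lower() for w in text.split()}
        return bool(words & BLOCKED_TERMS)

    print(looks_pro_nazi("Nazi Germany led the first public anti-smoking campaign."))
    # -> True: a false positive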
What concerns me is this: if the algorithm Twitter uses registers a false positive (i.e. determines that a tweet is pro-Nazi when it isn’t) and the tweet is at all time-sensitive, then that value will be completely nullified by the time the tweet makes it out of tweet-jail, if it ever does.
On the other hand, if the software registers a false negative (i.e. determines that a tweet is not pro-Nazi when it is), then the filtering is useless and Twitter will be held accountable by every political group with an axe to grind.
Now it might be argued that some percentage of false positives or false negatives would be acceptable, but what is that percentage? 0.001%? Even that equates, at the minimum rate of 4,000 tps, to 3,456 misclassified tweets per day, or 1,261,440 per year!
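The arithmetic, using the 4,000 tps low-water mark from above:

    # Misclassification arithmetic at the minimum observed rate of 4,000 tweets/second.
    tps = 4000
    error_rate = 0.001 / 100.0                    # 0.001% of tweets misclassified
    tweets_per_day = tps * 60 * 60 * 24           # 345,600,000
    misses_per_day = tweets_per_day * error_rate  # 3,456
    misses_per_year = misses_per_day * 365        # 1,261,440
    print(misses_per_day, misses_per_year)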
And will the filtering software be able to detect irony and sarcasm? I rather doubt it.
And what about the fact that Twitter will be implicitly editing all tweets? Doesn’t that raise legal issues, in that they are taking on an editorial responsibility and therefore become a lightning rod for lawsuits?
I see Twitter’s management as having made an epic mistake. In trying to appease the demands of political pressure, they’ve dug themselves a huge hole that they won’t be able to climb out of. The mere fact that they have published a blog posting claiming that they can filter seals their fate.
I really like Twitter; it’s a unique and amazingly rich social platform, but Twitter’s management may have just diminished, if not wiped out, their edge and their global relevance.
You can’t service all of humanity if you allow the needs of politics to triumph over the needs of the people. And if you can’t service all of humanity, what is your relevance?
CORRECTION: I received the following comment from Jodi Olson, part of the communications team at Twitter. Jodi wrote:
I saw your piece on our news today and wish you would have checked in with us for perspective on the story–your piece is inaccurate and misleading.
What’s new today is that we now have the ability, when we have to withhold a Tweet in a specific country, to keep that Tweet visible for the rest of the world. We hold freedom of expression in high esteem and work hard not to remove Tweets.
The key is that this is reactive only. It’s on a case-by-case basis, in response to a valid request from an authorized entity. This is not a change in philosophy. Twitter does not mediate content, and we do not proactively monitor Tweets.

Also key is that we’re making a clear effort to be transparent.
Our policy in these cases is to 1) promptly notify the affected users, unless we are legally prohibited from doing so; 2) withhold the content in the required countries only, rather than worldwide; 3) clearly indicate to viewers that a Tweet or Account has been withheld, and 4) make available any requests to withhold content through our partnership with Chilling Effects.
To put the news in context, see this post from the EFF’s Jillian York.
Could you please update your story to make the accurate corrections?
I stand corrected over the point that the filtering is reactive (and presumably human-driven) rather than proactive (and computer-driven). Twitter’s original blog posting obviously wasn’t clear on this issue, and in many ways the willingness of Twitter to acquiesce to “authorized” entities raises even more questions. I find Jodi’s wish that I “would have checked in with us for perspective” remarkably naive … wouldn’t you think that what Twitter writes in its own official blog would be what they mean and not require spin or clarification? That said, I have replied to Jodi and I’ll update this posting when I hear back from her.
