Electronic Incitement: Don’t Blame Twitter

For the eleven months I lived there, Herzliya was comparatively quiet during the stabbing attacks plaguing Israel in 2016.  After the initial wave of stabbings began in October 2015, I didn’t think much about the attacks until March 8, 2016, when Taylor Force, a 28-year-old West Point graduate, was murdered while on a global entrepreneurship trip to Israel.  Suddenly, being an American didn’t carry the same sense of apartness from the Israeli-Palestinian conflict.

After the first stabbing occurred on October 3rd in the Old City of Jerusalem, Hamas took to the Internet to praise the attackers and encourage more violence.  The global reach of the Internet has made it a key tool for terrorist recruitment and incitement, and the United States is not immune to this phenomenon.  In the past year, the Islamic State of Iraq and the Levant (ISIL) has launched a widespread terrorist “grooming” effort (as the New York Times calls it) that is leading lawyers and politicians alike to wonder how to balance freedom of speech with stopping terrorist exploitation of the Internet.

Social media companies try to keep their websites from being used as platforms for extremist rhetoric.  Facebook has risen to the challenge, working to flag content containing incitement.  Twitter, too, participates in the fight against ISIL by suspending accounts trumpeting extremist ideology.  It’s no cakewalk, and social media companies need to keep at it.  But despite the need, they shouldn’t be held legally responsible for extremist content posted on their sites.

The First Amendment protects freedom of speech, but with some very important exceptions.  The classic example is that causing mass panic by shouting “Fire!” in a crowded theater is illegal.  Prosecuting such speech online, however, is much more difficult.  For one, Internet anonymity is a major headache for law enforcement and counterterrorism agencies.  Transnational crime and international terrorism rely heavily on the anonymous nature of the Internet, and policing that anonymity is close to impossible, even with the cooperation of technology companies.

Private companies like Facebook and YouTube try to help in the fight against online terrorist activity.  But they, too, are fighting an uphill battle.  Virtual violence and online training manuals can turn into real violence, sometimes before anyone has a chance to take the content down.  In other cases, inciting material has remained online, untouched and available to anyone who knows where to find it.  Reuters reports that Facebook, Google, and other social media platforms may be using software similar to what they use to stop copyright violations to track and take down extremist content.  This type of software is imperfect, but it should be standard practice for all social media platforms.

Even with these efforts, some dangerous content will slip through the cracks.  When it does, who is legally responsible for the damage it can provoke?  Corporations like Twitter have been accused of “consciously failing to tackle this threat and passing the buck hiding behind their supranational legal status, despite knowing that their sites are being used by the instigators of terror,” according to the Guardian.  But this doesn’t make them legally responsible for the way someone else uses their product.  If they are petitioned to take down content, ignore the request, and the material is then used in a terrorist attack, their negligence is potentially grounds for a civil suit, but they can’t be held criminally accountable.  It would be like trying to charge a car company as an accomplice in vehicular homicide: they built the murder weapon (the car), but they weren’t behind the wheel.

Still, private media corporations are in a unique position to combat terrorism.  Twitter, Facebook, and YouTube have all created terms of service that give them the right to suspend accounts that are abused.  Facebook and Twitter, in particular, have included clauses that prohibit incitement of targeted abuse or violent activity.  Yet they also pass the buck by denying any responsibility for third-party content posted on their websites.  These corporations need to step up and help in the counterterrorism campaign, especially in the United States, where the most recent terrorist attacks have been perpetrated by homegrown extremists.  Freedom of speech is a gift.  It needs to be upheld.  But security is a right.  And a merging of the private sector and the public sector on counterterrorism issues would be a serious force to be reckoned with.

For more on how the author believes the U.S. government should address terrorist incitement online, check out her article “Advising Twitter: How to fight electronic incitement” first published in The Hill.