As social media outlets increasingly become the favorite channels for terrorist groups to spread messages of violence and recruit new members, the Internet companies that maintain those services are in a tough spot.
Companies born on the Web, like Google and Facebook, promote an ethos of free speech, but they also recognize the dangers of terrorists, criminals and other bad actors co-opting their platforms in service of a violent ideology or illegal activity.
Alexandria Walden, public policy and government relations counsel at Google, outlined those tensions during a recent policy discussion in Washington.
How YouTube bars violent content
On the company's immensely popular YouTube property, a variety of content is barred, Walden said. That includes videos that are deemed an incitement to violence, extremist propaganda or terrorist recruiting efforts, as well as the grisly execution videos promoted by the so-called Islamic State, also known variously as ISIS, ISIL or Daesh.
YouTube employs an army of screeners who respond to content that users flag as objectionable, taking down some 100,000 videos each day, according to Walden.
"When a member of the community flags a video, we have a team of reviewers around the world that are reviewing those flags 24 hours a day, seven days a week, and so that is one of the most important ways we are taking down content that is falling within the realm of what we find problematic for our site," she said.
Walden was quick to point out that many of those decisions are subjective, explaining that the determination to leave a video up or take it down is often a judgment call, and that there are legitimate reasons not to block material that some users might find dangerous or offensive.
"It's important," she said, "that we have exceptions, and that one reason why some things that people may find sort of offensive in terms of violence or other kind of related issues is that we think it's important to document things like atrocities and human rights violations that are happening around the world and that do have newsworthy or documentary value."
The government tries to balance censorship and free expression
Those same frictions arise as the government charts a policy toward what the State Department has termed the "use of Internet for terrorist purposes," according to Jason Pielemeier, who serves as the business and human rights section lead at the department's Bureau of Democracy, Human Rights and Labor.
Leaving aside the important but problematic questions about censorship and free expression, Pielemeier noted that simply squelching objectionable content can be a bad policy choice.
"As with hate speech, which is a sort of related issue in some ways, much of the content that terrorist groups espouse online is offensive, but that doesn't necessarily mean we should ban it, or even -- and this is the point -- that banning it would do any good," Pielemeier said. "While we routinely condemn this speech at the highest levels of our government, we also recognize that suppression of expression can be counter-productive, can raise the profile of offensive speech, and it can also cause it to fester in dangerous ways and to move to deeper and darker places."
These policy questions have been under active consideration throughout the Obama administration.
At a high level, the State Department has a simple formulation that holds that the Internet is merely the newest extension of human activity, and therefore should be subject to the same rights and safeguards of expression, assembly and religion that are enshrined in the Bill of Rights. "We understand Internet freedom to be the exercise of universal human rights online," Pielemeier said.
But the challenges of combating the messages spread by a group like ISIS, or Daesh, remain very much in flux as a matter of policy.
"The challenges posed by the use of Internet for terrorist purposes, in particular the adept use of social media by Daesh as a tool for propaganda and recruitment, are causing us to reexamine this approach and these tactics," Pielemeier said. "This is a really serious issue."
Pielemeier described a framework under which policy makers would ask a series of questions when considering how to respond to terrorist activity online, including whether any effort to suppress content would be feasible in practice, given the inherent philosophical, jurisdictional and other challenges that arise when governments approach the territory of regulating speech.
Likewise, the State Department is mindful of the potential for any such efforts to serve as a pretext for repressive regimes to crack down on other types of expression that they deem objectionable, such as bloggers who are critical of a regime or document human rights violations.
At Google, those same considerations are under debate. Walden emphasized the tremendous upside of the Internet as a platform for sharing ideas and organizing, concluding that decisions about when to remove objectionable or potentially harmful content are more art than science.
"Ultimately we think that content can be a force for good," Walden said. "It can educate and mobilize people in really important ways, and so we have to be thoughtful about that when we're thinking about the things that we take down and leave up."
This story, "Tech giants, government struggle with online speech policies" was originally published by CIO.