Bill requiring reporting of social media terrorist content is back

A pledge of allegiance to the Islamic State (IS) – otherwise known as Daesh – that might have been posted to Facebook by suspected terrorist Tashfeen Malik has prompted US lawmakers to revive a bill that would require technology companies such as Facebook and Twitter to report suspected online terror activity.

Sen. Dianne Feinstein, a Democrat from California, is sponsoring the legislation along with Sen. Richard Burr, a Republican from North Carolina.

From her statement:

We’re in a new age where terrorist groups like [Islamic State of Iraq and the Levant, or ISIL] are using social media to reinvent how they recruit and plot attacks.

That information can be the key to identifying and stopping terrorist recruitment or a terrorist attack, but we need help from technology companies.

Feinstein said that under the legislation, companies wouldn’t have to go out of their way to uncover terrorist activity. But if they do happen upon it, they’d be required to report it to law enforcement.

The bill, known as the “Requiring Reporting of Online Terrorist Activity Act,” was shelved almost three months ago after its sponsors had a dispute with Sen. Ron Wyden, an Oregon Democrat.

Wyden placed a procedural hold on the legislation, saying it would “create a Facebook Bureau of Investigations.”

He still doesn’t like it.

From Wyden’s statement on the reintroduced bill:

It would create a perverse incentive for companies to avoid looking for terrorist content on their own networks, because if they saw something and failed to report it they would be breaking the law, but if they stuck their heads in the sand and avoided looking for terrorist content they would be absolved of responsibility.

Wyden cited testimony from FBI Director James Comey that social media companies are already “pretty good about telling us what they see.”

Social media companies must continue to do everything they can to “quickly remove terrorist content and report it to law enforcement,” Wyden said.

The editorial board at the Los Angeles Times, for one, joined Wyden in pointing out the bill’s shortcomings, including:

  • There’s too much to catch. Facebook missed many of Malik’s posts, even though those posts alarmed her family back in Pakistan. Security analysts say that’s par for the course, given an internet that’s “awash” in terrorist recruiting and training materials that don’t get taken down.
  • The bill doesn’t define terrorist activity.
  • Tech workers aren’t trained to identify terrorist material or the people who should be scrutinized. As the LA Times board wrote, “…unlike child porn, there is no central database of images, videos and texts that could help identify terrorism-related activity online.”

Meanwhile, Google’s top guy is hoping to seek out and squash radical content as if it were a bunch of typos.

In an opinion piece for the New York Times, Google Executive Chairman Eric Schmidt on Monday proposed a “hate spell-checker” to suppress radical and terrorist content:

We should build tools to help de-escalate tensions on social media – sort of like spell-checkers, but for hate and harassment.

Schmidt also wants online properties to target terrorist groups and remove their propaganda before it spreads:

We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.

But if silencing terrorists were an effective way to infiltrate their operations or to uncover and subvert their planned attacks, the government might as well simply fund Anonymous-affiliated activists to keep Rickrolling Daesh supporters.

That scheme – in which Anonymous retaliated against Daesh for the Paris attacks by taking down thousands of accounts and blasting Rick Astley’s “Never Gonna Give You Up” at supporters – drew a response from intelligence agencies that can be summed up in one word: counterproductive.

One of the security groups that rely on Daesh’s social media presence to infiltrate and monitor jihadist accounts and forums is Ghost Security Group, known as GhostSec.

Like Anonymous with its denial-of-service (DoS) attacks, and like Schmidt’s proposed “hate spell-checker” and content removal, GhostSec takes down terrorist sites.

But it does so with far more discretion, aiming primarily for recruitment sites.

More importantly, GhostSec also reaps intelligence from Daesh accounts, including plans for major terrorist attacks and bomb-making instructions, that it passes on to US intelligence agencies, such as the FBI.

This, not Rickrolling or DoS, is the type of counter-terrorism cyber work that’s productive.

For example, GhostSec once passed information through a third party to the FBI that reportedly disrupted a suspected Daesh-linked cell in Tunisia as militants plotted a Fourth of July repeat of the Sousse beach massacre.

GhostSec pulled it off with a mixture of Twitter tracking and geolocation via Google Maps.

Once a site is trashed, its intelligence is unrecoverable, GhostSec has said.

In fact, Anonymous has taken down sites that the group could have otherwise mined for intelligence.

Just as the Requiring Reporting of Online Terrorist Activity Act has risen once again, so too has this vital question: How do you intercept intelligence once it’s been wiped offline?

Image of masked terrorist behind computer courtesy of Shutterstock.com