Facebook and Twitter may be forced to identify bots

Twitter and Facebook are all too aware that they’ve been infiltrated by Russia-backed bots.

Twitter, for its part, has purged tens of thousands of accounts associated with Russia’s meddling in the 2016 US presidential election. The company also said it would email notifications to hundreds of thousands of US users who followed any of the accounts created by the Russian government-linked propaganda factory known as the Internet Research Agency (IRA), and has said that it’s trying to get better at detecting and blocking suspicious accounts. (As of January, it said it was detecting and blocking approximately 523,000 logins a day that it suspected of being automatically generated.)

That’s not good enough, according to California lawmakers. They’ve introduced a bill that would give online platforms such as Facebook and Twitter three days to investigate whether a given account is a bot and then either disclose that it’s auto-generated or remove it outright.

The bill would make it illegal for anyone to use an automated account to mislead the citizens of California or to interact with them without disclosing that they’re dealing with a bot. Once somebody reports an illegally undisclosed bot, the clock would start ticking for the social media platform on which it’s found. The platforms would also be required to submit a bimonthly report to the state’s Attorney General detailing bot activity and what corrective actions were taken.

According to Bloomberg, the legislation is slated to run through a pair of California committees later this month.

Bloomberg quoted Shum Preston, the national director of advocacy and communications at Common Sense Media and a major supporter of the bill. Preston said that California’s on a bit of a guilt trip, given how the social media platforms that have been used as springboards to stir up political and social unrest are parked in its front yard:

California feels a bit guilty about how our hometown companies have had a negative impact on society as a whole. We are looking to regulate in the absence of the federal government. We don’t think anything is coming from Washington.

New York is also tired of waiting for the Feds to push social media companies into fixing the bot problem. Governor Andrew Cuomo is backing a bill that would require transparency on who pays for political ads on social media.

Proposed legislation at the Federal level includes the bipartisan-supported Honest Ads Act, a proposal to regulate online political ads the same way as television, radio and print, with disclaimers from sponsors.

California’s proposed bill reaches further back, to the processes that disseminate the content in the first place. But the online platforms say it can be tough to tell human accounts from bot accounts run by ever more sophisticated technologies.

But there are signs to look out for. Twitter has said it’s developed techniques for identifying malicious automation, such as near-instantaneous replies to tweets, non-random tweet timing, and coordinated engagement. It’s also improved the phone verification process and introduced new challenges, including reCAPTCHAs, to validate that a human is in control of an account.
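Timing signals like these lend themselves to simple heuristics. The sketch below is purely illustrative (the cutoff values and function names are assumptions, not anything Twitter has published): it flags an account whose replies arrive suspiciously fast and whose posting schedule is suspiciously regular.

```python
from statistics import median, pstdev

def looks_automated(reply_delays_s, posting_gaps_s,
                    fast_reply_cutoff=2.0, regularity_cutoff=5.0):
    """Flag an account using two crude timing heuristics.

    reply_delays_s -- seconds between a tweet and this account's reply to it
    posting_gaps_s -- seconds between this account's consecutive posts
    The cutoffs are made-up illustrative values, not real Twitter rules.
    """
    signals = []
    # Near-instantaneous replies: humans rarely reply within ~2 seconds.
    if reply_delays_s and median(reply_delays_s) < fast_reply_cutoff:
        signals.append("near-instant replies")
    # Non-random tweet timing: a tiny spread in the gaps between posts
    # suggests a fixed schedule, which humans rarely keep.
    if len(posting_gaps_s) > 1 and pstdev(posting_gaps_s) < regularity_cutoff:
        signals.append("clockwork posting schedule")
    return signals

# An account replying in ~1 second and posting every 600 seconds
# almost exactly trips both signals; a human's varied timings trip none:
print(looks_automated([0.8, 1.1, 0.9], [600, 601, 599, 600]))
print(looks_automated([45, 300, 12], [3600, 200, 9000, 50]))
```

Real detection would combine many more signals (such as the coordinated engagement mentioned above) rather than rely on two thresholds, but the shape of the check is the same.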

In January, Twitter said that its other plans for 2018 included:

  • Investing further in machine-learning capabilities that help detect and mitigate the effect on users of fake, coordinated, and automated account activity.
  • Limiting the ability of users to perform coordinated actions across multiple accounts in TweetDeck and via the Twitter API.
  • Continuing the expansion of its developer onboarding process to better manage the use cases for developers building on Twitter’s API. This, Twitter said, will help improve how it enforces policies on restricted uses of developer products, including rules on the appropriate use of bots and automation.

Researchers have also been working to come up with a set of tell-tale signs that indicate when non-humans are posting. A 2017 study estimated that as many as 15% of Twitter accounts are bots.

That paper, from researchers at Indiana University and the University of Southern California, also outlines a proposed framework to detect bot-like behavior with the help of machine learning. The data and metadata they took into consideration included social media users’ friends, tweet content and sentiment, and network patterns. One behavioral characteristic they noticed, for example, was that humans tend to interact more with human-like accounts than they do with bot-like ones, on average. Humans also tend to friend each other at a higher rate than bots do.
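In a machine-learning setup like the one the paper describes, signals of this kind are turned into a numeric feature vector and scored by a trained classifier. The toy sketch below is not the researchers’ actual model: the feature definitions and the hand-set logistic weights are invented for demonstration, loosely mirroring the friend-structure and interaction signals mentioned above.

```python
import math

def account_features(friends, followers, bot_interaction_share):
    """Build a tiny feature vector from account metadata.

    bot_interaction_share: fraction of the account's interactions that are
    with bot-like accounts (humans mostly interact with other humans).
    These feature definitions are illustrative, not the paper's.
    """
    return [
        friends / max(followers, 1),  # aggressive-friending ratio
        bot_interaction_share,        # who the account talks to
    ]

def bot_probability(features, weights=(0.05, 3.0), bias=-2.5):
    """Logistic score over the features.

    The weights are hand-set toy values, not learned from any dataset;
    a real system would fit them on labeled bot/human accounts.
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# An account that friends far more users than follow it back, and mostly
# interacts with other bot-like accounts, scores high; a typical human
# account scores low:
print(bot_probability(account_features(5000, 100, 0.9)))
print(bot_probability(account_features(200, 300, 0.1)))
```

The point of the sketch is the pipeline, not the numbers: raw account metadata becomes features, and a classifier maps the features to a bot likelihood.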

Mind you, not all bots are bad. Take Emoji Aquarium: it’s a bot that shows you a tiny aquarium “full of interesting fishies” every few hours.

Good bots are also useful: they help keep weather, sports, and other news updated in real-time, and they can help find the best price on a product or track down stolen content.

And then too, there’s Bot Hertzberg: the bot created by California Senator Bob Hertzberg to highlight the issue. Hertzberg introduced the pending California bot bill.

Here’s what human Senator Hertzberg, as quoted by Bloomberg, said about his bill:

We need to know if we are having debates with real people or if we’re being manipulated. Right now, we have no law, and it’s just the Wild West.

And here’s what his bot says in its bio:

I am a bot. Automated accounts like mine are made to misinform & exploit users. But unlike most bots, I’m transparent about being a bot! #SB1001 #BotHertzberg