There’s a bill currently in the works in the California state legislature (SB-1001 Bots) that would require bots to disclose that they’re bots. Specifically, a bot must disclose its artificial identity if it is being used to “mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.”
According to the bill as it stands now, it would be permissible to use a bot for these purposes as long as it discloses that it is a bot; otherwise, the bot’s use is “unlawful,” in the bill’s words, though the consequences of that unlawfulness aren’t exactly clear.
Bots are automated accounts on online services (frequently on social media like Twitter and Facebook) that can be used to drum up support for a specific point of view in order to sway public opinion.
The idea here is that if convincingly human-seeming bots flood social media with messages of support for a specific political position, their overwhelming presence will start to bring real voters to their side. This tactic was purportedly used in social media campaigns for both Brexit and the 2016 election in the United States, and it’s understandable that this bill was introduced to try to prevent this kind of interference from occurring in another election.
But bots on social media have other uses. They’re used by artists and hackers alike to generate everything from poems to memes to self-care reminders to randomly-generated nonsense. Requiring every single one of these bots to disclose their bot-ness would be oppressive to free speech and creativity, argued the EFF in its appeal to Senator Hertzberg of California, who introduced the bill.
As of this writing, the text of the bill has been amended several times, including one major revision. Earlier versions came down hard on both bots and the services they ran on: at least one previous revision required service providers to investigate suspected bots within 72 hours of being notified of a bot’s potential presence, and to report to the Attorney General on any action taken against bots. Notably, these requirements were removed in later revisions, and right now the bill only requires that a bot disclose itself in a “clear, conspicuous, and reasonably designed” fashion… and that’s about it.
While the initial version of the bill may have had more teeth, it was decried by organizations like the EFF as likely to cause a lot of unintended harm. The current version is narrower in scope, but it’s not clear that, even in this form, it would do much to stem the tide of bots astroturfing election chatter online.