So, the US FCC has launched their ABC for ISPs at the CSRIC. You catch my drift?
No? Okay, I’ll translate. Last week, the US Federal Communications Commission (FCC) launched a new voluntary U.S. Anti-Bot Code of Conduct (ABCs) for Internet Service Providers (ISPs) at their Communications Security, Reliability and Interoperability Council (CSRIC) meeting.
It was developed with extensive industry focus and participation, including input from Verizon, Cox, and Comcast.
It creates new opt-in procedures for ISPs tackling the networks of enslaved zombie computers (computers that can be controlled by unauthorised third parties) used to distribute spam, DDoS attacks and malware.
This self-regulation approach seems to have appealed to many stakeholders, with big ISPs like AT&T, Sprint, Time Warner Cable and CenturyLink getting on board.
According to the code of conduct, the ISPs will benefit from “fewer calls to help desks from customers with infected machines, reduced upstream bandwidth consumption from denial-of-service attacks and spam, increased customer goodwill, and a drop in spam-related complaints from other ISPs.”
What company wouldn’t want increased customer goodwill, right?
So what is the catch? Well, to take part, an ISP has to engage meaningfully with one of the following:
Educate end users
ISPs should make information available to advise end users on preventing and reducing exposure to bot risks. They should provide advice and resources on removing infections from their system too.
Increasing user understanding of botnets is a positive step that seems to incur a minimal burden on ISPs: they can link to existing publicly available guidance on bot management.
Detection of bots
ISPs should deploy “capabilities within their networks that aid in identifying potential bot infections.” They could use notifications of potential infections from end users and third parties, like Spamhaus Project Block lists and the US Department of Homeland Security Computer Emergency Readiness Team.
The code of conduct is a bit vague in detailing the measures ISPs should take to identify threats. It does call out the inadequacies of simple ‘pattern matching’ detection, which has trouble differentiating between botnets and legitimate internet applications like ‘distributed host-based caching’ and online gaming.
Clearly, legitimate services shouldn’t be impacted by bot prevention, but improved systems need to be chosen carefully. US-wide ISP use of deep packet inspection tools, for example, is not sustainable – the surveillance and privacy implications are too significant.
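As an illustration (not something the Code itself specifies), the blocklist notifications mentioned above typically work over DNS. Under the standard DNSBL convention, a resolver reverses the IPv4 octets, appends the blocklist zone (here the well-known Spamhaus zone `zen.spamhaus.org` is assumed), and checks whether the name resolves; a listed address returns a record in 127.0.0.0/8, while an unlisted one returns NXDOMAIN. A minimal sketch:

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNSBL query name: 1.2.3.4 -> 4.3.2.1.zen.spamhaus.org."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Return True if the address appears on the blocklist zone."""
    try:
        # Any successful A-record lookup means the address is listed.
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        # NXDOMAIN (or a failed lookup): treat as not listed.
        return False

print(dnsbl_query_name("192.0.2.1"))  # -> 1.2.0.192.zen.spamhaus.org
```

In practice an ISP would run such checks against its own address space and feed hits into its notification pipeline, rather than querying per user on demand.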
Notification and remediation for end users
ISPs will notify users of active infections and provide tools, guides and services to prevent, verify and mitigate threats.
Users are often unaware that their computers are infected. Once they are informed, the removal of bot infections has often been left up to the user, with ISPs lacking the necessary resources to help.
For technical users, disinfection may be an easy exercise, but the average user will rely on their ISP to point them to third-party assistance and removal tools.
Collaboration within the ‘internet ecosystem’
Search providers, hosting companies, security vendors, cloud computing providers and financial services are all encouraged to build strength in numbers against the common bot enemy.
They will do so by sharing methods for gathering intelligence and for detecting and mitigating threats, by developing common strategies, and by identifying relevant technical resources.
It seems the provisions are sometimes quite vague. Because ISPs only have to sign up voluntarily to one of the five sections, its full impact might not be felt.
Perhaps it was a missed opportunity for providing clear guidance for ISPs on suitable privacy-protecting bot detection technologies.
There was great potential to harmonise industry practice and, despite some beneficial advances in the Code, like increasing end-user education, it does feel like the guidelines could have gone further.