An essay on Australian internet filtering

Filed Under: Law & order, Privacy

The dust is starting to settle after the Australian government's announcement three days ago that it will legislate in 2010 to compel Australian ISPs to censor local internet access.

The cabinet minister in charge of the project, Stephen Conroy, who was infamously awarded the title of Internet Villain of the Year in 2009, took great pains to avoid the C-word in his statement, formalising the censorship part of the plan under the headline "Measures to improve safety of the internet for families".

Many people object to internet censorship, noting the ease with which it can turn sour, or become repressive, later on. But most modern democracies manage to balance some degree of censorship with broad personal freedom.

My objection is not to the censorship. My objection is that the government has presented its case, and at least some of the media seem to have accepted it and to have described it, in a way which implies an effectiveness and a usefulness it simply cannot achieve.

The mandatory filtering plan will prevent Australians from directly visiting a list of approximately 1000 unacceptable URLs. Tests bankrolled by the government suggest that this degree of filtering is possible without affecting download throughput, originally a major concern of many objectors. But in media-speak, the test results have turned into statements such as this one: "the recent pilot trial ... found the filtering was broadly 100 percent effective" [1].

Effective in what way?

The tests demonstrate merely that ISPs can operate a 1000-URL blocklist in the core of their network without affecting performance. The tests measured nothing about the accuracy, relevance or currency of the blocklist itself, the proportion of depraved content on the internet it might actually cover, or the ease with which the blocklist might be circumvented. (The test document admits that any "technically competent" user can bypass the blocklist. So, too, can anyone who knows a "technically competent" user, or anyone who can use a search engine to find freely-available software to provide that "technical competence" automatically.)
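To see why circumvention is so easy, consider a minimal sketch of how an ISP-side URL blocklist works. This is illustrative only, not the actual filter; the URLs and the proxy service are hypothetical. The ISP can only match the URL it sees on the wire, so any anonymising proxy or encrypted tunnel that hides the real destination defeats the list:

```python
# Hypothetical sketch of an ISP-side URL blocklist: the filter can only
# compare the URL it observes against a fixed list of known-bad entries.
BLOCKLIST = {
    "http://bad.example/page1",   # hypothetical blocklist entries
    "http://bad.example/page2",
}

def is_blocked(requested_url: str) -> bool:
    """Return True if the observed URL appears verbatim on the blocklist."""
    return requested_url in BLOCKLIST

# A direct request to a listed URL is blocked...
assert is_blocked("http://bad.example/page1")

# ...but the same content fetched via an anonymising proxy presents a
# completely different URL to the ISP, so the blocklist never matches.
proxied = "http://proxy.example/fetch?target=http%3A%2F%2Fbad.example%2Fpage1"
assert not is_blocked(proxied)
```

An encrypted tunnel is even simpler for the user: the ISP never sees the URL at all, only opaque ciphertext to the tunnel endpoint.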

SophosLabs finds an average of 23,500 newly-infected and actively dangerous URLs every day. These are not hard-to-find child pornography sites, but right-in-your-face risky content hacked into otherwise-legitimate web pages. In 2009, a legislated project to block 1000 URLs at the core of Australia's internet infrastructure, even if every URL is genuinely and currently bad, is simply a waste of time and effort.
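The mismatch in scale is worth spelling out with the two figures above. Some back-of-the-envelope arithmetic (using only the numbers already quoted in the text):

```python
# Illustrative arithmetic using the figures quoted above.
new_bad_urls_per_day = 23_500   # SophosLabs: newly-infected URLs found daily
blocklist_size = 1_000          # proposed mandatory blocklist size

# How many days of newly-dangerous URLs does the entire blocklist represent?
days_covered = blocklist_size / new_bad_urls_per_day
print(f"The whole blocklist equals about {days_covered:.2f} days of new threats")
# About 0.04 days: roughly one hour's worth of newly-infected URLs.
```

In other words, even if the blocklist were perfectly accurate, the entire list is smaller than a single hour's crop of freshly-hacked pages.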

Sadly, the latest government rhetoric strongly implies that endpoint filtering, content scrutiny at the PC, and parental supervision are not primary solutions for family safety online. A focus on PC safety and security in every household is largely to be replaced by a mixture of mandatory blocklists plus additional, voluntary, subscriber-funded filtering by ISPs.

In my opinion, this is quite the wrong way round.

Endpoint filtering and content scrutiny, plus informed parental supervision, are by far the most effective ways of protecting children online. Content filtering at the endpoint means that the computing power of every user's PC, not just of an ISP's servers, can be applied to the job of actually performing the content inspection. This, in turn, permits much more careful and accurate scrutiny. It allows the content actually delivered to the endpoint to be assessed, which means it is not easily tricked by anonymising proxies or encrypted tunnels. And it can much more flexibly be adapted as children learn and grow up.

The next most useful approach is voluntary, subscriber-funded filtering by ISPs. I approve of this part of the government's proposal. This lets parents choose what to filter, can protect against a wide range of threats, and, like the endpoint solution, provides visibility into what was filtered, and when.

But the mandatory censorship component is a truly useless part of the government's plan. It is an unjustifiable waste of our taxation money, squandered to fulfil an election promise.

Rather than pretending to protect Australian children from a tiny fraction of the odious and depraved sites out there, we should take genuine steps to protect them from the clear and present dangers.

We should protect them from the ever-present risk of malware infection from hacked legitimate sites. We should teach them not to part willingly with information they ought to keep private on perfectly legal, and wildly popular, social networking sites. We should educate them about the difference between browsing and research; about the difference between research and plagiarism. We should show them the importance of respect for other people's intellectual property. We should protect them not just from child pornography and the like, but from the attention of the paedophiles and child pornographers themselves.

In short, we should request the government to spend the money it proposes to waste on internet censorship on something useful instead.

So write to your MP and say so!

[1] Computer Daily News, print edition, 2009-12-16, p.2


About the author

Paul Ducklin is a passionate security proselytiser. (That's like an evangelist, but more so!) He lives and breathes computer security, and would be happy for you to do so, too. Paul won the inaugural AusCERT Director's Award for Individual Excellence in Computer Security in 2009. Follow him on Twitter: @duckblog