Last week, the Financial Times revealed that Google has given British security officials the power to quickly yank terrorist content offline.
The UK government doesn't want to stop there, though - what it really wants is the power to pull "unsavoury" content regardless of whether it's actually illegal. In other words, it wants censorship power.
The news outlet quoted the UK's security and immigration minister, James Brokenshire, who said that the government must do more to deal with material "that may not be illegal but certainly is unsavoury and may not be the sort of material that people would want to see or receive."
He further told Wired.co.uk in a statement that the targeting of content is part of the government's fight against terror:
Terrorist propaganda online has a direct impact on the radicalisation of individuals and we work closely with the internet industry to remove terrorist material hosted in the UK or overseas.
Brokenshire says that the government is also gung-ho about options wherein social media sites tweak their algorithms to keep nasty content from popping its head up at all, or at least to ensure that such content is served up alongside more balanced material.
Of specific concern are Britons getting radicalised by travelling to take part in the ongoing Syrian conflict, Wired reports.
The Home Office told Wired that any videos flagged for review by the Metropolitan Police's Counter Terrorism Internet Referral Unit (CTIRU) have been found to be in breach of counter-terrorism laws, with 29,000 such videos having been removed across the web since February 2010.
Brokenshire's comments came in the context of an interview around the UK government's alleged "super flagger" status - i.e., the power to request the removal of guideline-breaching clips at scale, rather than flagging individual videos one by one.
The Home Office told Wired that the CTIRU doesn't actually have super flagger status, despite widespread news reports to that effect. Rather, it has risen to the rank of Trusted Flagger, a designation for users who regularly and correctly flag questionable content.
Google confirmed to the Financial Times that the Home Office has been given the powerful flagging permissions on YouTube but that Google itself still has the final say on what stays and what goes.
What goes is definitely content that incites violence, as the FT quotes YouTube:
We have a zero-tolerance policy on YouTube towards content that incites violence. Our community guidelines prohibit such content and our review teams respond to flagged videos around the clock, routinely removing videos that contain hate speech or incitement to commit violent acts.
To increase the efficiency of this process, we have developed an invite-only program that gives users who flag videos regularly tools to flag content at scale.
Jaani Riordan, a barrister specialising in technology litigation, told Wired that for a government to go beyond the takedown of illegal content and compel the takedown of merely undesirable material is censorship, plain and simple.
The push against "unsavoury" content is in line with the UK's pressure on service providers to filter an ever-increasing range of subject material, starting with child abuse content and expanding to include pornography - with the 2012 Online Safety Bill stating that ISPs and mobile telcos should provide a porn-free internet connection by default.
Wired points out that if the government were actually to take the reins and force YouTube to remove content, it would be breaching Article 10(2) of the European Convention on Human Rights, which concerns the right to freedom of expression.
What's your take? Should content glorifying terrorism be yanked even where it's legal under a country's laws?
Is tweaking algorithms to keep such content from rising high in search results - in effect, smothering it - more desirable than outright deletion?
Please share your thoughts in the comments section below.