The fight against fake news has a new participant: Mozilla. The organization, which wants to keep the internet a healthy public resource, has announced its Mozilla Information Trust Initiative (MITI), which is a multi-pronged effort to keep the internet credible.
We should pronounce MITI “mighty”, according to Phillip Smith, Mozilla senior fellow for media, misinformation and trust. He explains that Mozilla started this initiative because fake news is threatening the internet ecosystem which Mozilla’s manifesto has vowed to protect.
Ecosystems can withstand some pollution, he says, which is just as well because all ecosystems have some of it. Eventually, though, the pollution reaches a tipping point. For Smith, the internet is an ecosystem, and fake information is the pollutant. He says:
The question we’re asking at Mozilla is whether it’s reaching a point where it risks tripping a positive feedback loop that’s no longer sustainable.
A multi-faceted approach
MITI will tackle fake news in several ways. It will work on products that target misinformation, both on its own and with media organizations. It will also research the spread and effects of fake news (expect some reports soon), and host “creative interventions” that seek to highlight the spread of misinformation in interesting ways. One example it gives is an augmented reality app that uses data visualization to show how fake news affects internet health.
Fake news has been a problem for years, but it has surfaced far more visibly of late. That’s in large part because of the 2016 US presidential election, says Smith.
There are big questions about the role this new form of online disinformation potentially played in influencing people’s opinions during a very important and divisive US election.
Tackling fake news is a daunting task that presents several distinct challenges. One is sheer volume. “It’s an asymmetrical problem,” says Smith. “Fake information is produced in exponentially larger quantities than debunks can be produced.”
Another is speed. Fake news spreads like wildfire, making it around the world with just a few thoughtless clicks. Research shows that it takes far longer – between 13 and 14 hours after a fake story first appears – to stamp it out.
There have been different attempts to solve the problem. Some sites try to act as “debunk hubs” – go-to sites that act as authoritative voices when debunking fake news. Snopes, the grandma of all debunking sites, has been doing this for two decades. In India, Check4Spam is trying to halt the spread of fake news via WhatsApp. Buzzfeed launched Debunk in an attempt to out-virus viral falsehoods with stories correcting them.
Automating the fake news fight
Other organizations, already acting as fact-check hubs, are aiming for more automation. A tool from UK fact-checking organization Full Fact promises to scan newspaper headlines, live TV subtitles and parliamentary broadcasts for statements that match its existing database of facts. The goal is to debunk or confirm statements in real time. Representatives have likened it to an immune system for online fakery.
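Full Fact has not published implementation details, but the core idea of matching live statements against a database of already-checked claims can be sketched with fuzzy string matching. Everything below – the claims, verdicts and threshold – is hypothetical and purely illustrative:

```python
import difflib

# Hypothetical database of previously fact-checked claims and their verdicts.
# A real system would hold thousands of entries with nuanced ratings.
CHECKED_CLAIMS = {
    "crime has risen by 10% this year": "false",
    "unemployment is at a record low": "true",
}

def match_claim(statement, threshold=0.8):
    """Return the verdict of the closest previously checked claim, or None."""
    statement = statement.lower().strip()
    best_verdict, best_score = None, 0.0
    for claim, verdict in CHECKED_CLAIMS.items():
        # Surface-level similarity between the new statement and a known claim.
        score = difflib.SequenceMatcher(None, statement, claim).ratio()
        if score > best_score:
            best_verdict, best_score = verdict, score
    return best_verdict if best_score >= threshold else None
```

A production system would need semantic matching rather than surface similarity, since the same claim can be phrased in countless ways, which hints at why real-time fact-checking is such a hard problem.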
This idea of automated immunity has found traction among the hyperscale search engine and social media sites. With the enormous power they wield on the web, these players risk being infection vectors for fake news if they don’t become part of the solution.
Twitter seems behind the curve when it comes to fake news. It has reportedly been mulling the idea of a fake news tab, but has said little on the record, other than a mid-June blog post explaining that it’s working on detecting spammy bots.
Google has rolled out its own fact-checking tool for Google News internationally. Unlike Facebook, it isn’t relying on users to tag dodgy stories. Instead, its list of 115 partner organizations will check the facts and label the stories accordingly. They won’t be checking every story, though, and Google won’t be following a set rule to counter different opinions over whether something is fake news.
That highlights another problem for fake news fighters: it isn’t always easy to spot, or quantify. Smith points out that fake news isn’t always binary. Often, the falsehoods lie on a continuum.
“Is it mostly right, but with an incorrect fact? Is it completely fabricated?” he asks, articulating the subtleties of some fake news. “So there is a range, and I think it’s hard to automate the identification or categorization of content with that nuance.”
That doesn’t mean people aren’t trying. Full Fact is one organization behind the Fake News Challenge, which organizes artificial intelligence experts to detect fake news using natural language processing and machine learning algorithms.
It’s a good effort, but Smith says that it has its shortcomings. “None of the teams were able to produce a reliable model for categorizing content that has the nuance that a human would require to discern,” he says.
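For a sense of why this is hard to automate, the simplest baselines in stance-detection contests of this kind score word overlap between a headline and an article body. The toy sketch below (the threshold and labels are made up for illustration) shows how crude such a signal is: it can tell whether two texts share vocabulary, but captures none of the nuance Smith describes:

```python
import string

def _words(text):
    # Lowercase and strip punctuation so "loss," matches "loss".
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def jaccard(a, b):
    """Jaccard similarity between the word sets of two texts."""
    wa, wb = _words(a), _words(b)
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

def stance(headline, body, threshold=0.15):
    """Crude relatedness check: does the body share enough vocabulary with the headline?"""
    return "related" if jaccard(headline, body) >= threshold else "unrelated"
```

A model like this can flag that an article discusses its headline’s topic, but it has no way to tell “mostly right with one incorrect fact” from “completely fabricated”.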
From technology to literacy
With that in mind, should we be using technology to pick news stories for readers, or simply to advise them? Smith says that technology has a place, but shouldn’t overstep its bounds.
We believe that Firefox users are smart people and are capable of making these decisions or discernments themselves.
Google won’t use its fact-checking information to alter search results, but Facebook wants to use its own algorithms to alter content rankings.
The social media giant has introduced a tag that enables people to report fake news stories (although the reporting option doesn’t appear to have rolled out across all countries yet). It has partnered with third-party organizations like Snopes that support Poynter’s fact-checkers’ code.
Facebook, which already collects vast amounts of data about how you interact with its site, has said it will monitor whether reading an article makes people less likely to share it, and fold that signal into its rankings.
The thing is, Facebook’s anti-fake news measures aren’t working that well. Untagged copies of fake news stories are still showing up on its site. Are we really ready to entrust our news choices to its code?
“I’m not sure technology is going to be the answer,” says Richard Sambrook, deputy head of school and director of Cardiff University’s Centre for Journalism. He argues that online users are ultimately responsible for their own media literacy.
They also need to take responsibility for their own news diets – and realise that if you only consume junk, it’s not good for your health! More seriously, we all need to protect against only seeing our own views reflected back at us in filter bubbles or echo chambers.
That’s where the other part of Mozilla’s work will come in. Alongside product partnerships, “creative interventions” and research, MITI’s other weapon in the fight against the spread of online misinformation is literacy. Says Smith:
There is evidence that online knowledge and education are incredibly important to the next billion people coming online. What is lacking right now is a web or media literacy for those people, or resources for those people to use in understanding their information environment.
It isn’t just newcomers to the web that may need some help with media literacy, other studies suggest. Stanford University’s recent research into this area suggests that young people – supposedly our savvy digital natives – are just as vulnerable as others when it comes to critical thinking about what they read and see online.
Mozilla has focused on literacy for a long time, Smith points out. Under MITI, it will develop a web curriculum to help with media literacy, and continue investing in Mission:Information, an existing curriculum aimed specifically at teens.
Targeting kids will be critical, warns Sambrook. “Awareness is a big part of the answer, but we also need to take media literacy more seriously from junior school onwards,” he says. “Investment in media literacy will take a generation or more to catch up.”
Smith also cites other resources to help increase media literacy, including the University of Washington’s open source media literacy course “Calling Bullshit”, which is available free online. OpenSources curates a list of credible and non-credible sources, along with its reasoning, while Full Fact has a handy checklist and a fact-checker to help verify claims.
There are many more online resources for fact-checking, but the challenge will be getting people to use them and develop their own critical faculties, rather than relying on some opaque algorithm somewhere to make their evaluations for them.
As new fake news techniques emerge, Smith doesn’t entirely rule out the use of technology to fight it. But how we apply that technology will be critical, especially as purveyors of fake news take advantage of new techniques such as the manipulation of video using AI.
“There are pushes to create tools that identify false information created through those means,” he says, adding that AI may play a part in identifying manipulated content in the future. “That will be pretty critical very soon.”
He doesn’t rule out the idea of a common standard for uniquely hashing fake content and storing the hashes in an accessible way, much as anti-malware companies use digital fingerprinting to identify malware. Other technologies could be used to accelerate literacy, such as privately notifying a person when they have shared content later found to be fake.
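The fingerprinting idea Smith describes maps naturally onto cryptographic hashing. Here is a minimal sketch, assuming a shared blocklist of digests; the normalization step and the blocklist itself are hypothetical:

```python
import hashlib

# Hypothetical shared blocklist of content digests, analogous to the signature
# databases anti-malware vendors maintain. Illustrative only.
known_fake_hashes = set()

def fingerprint(text):
    # Normalize case and whitespace so trivial edits don't change the digest.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def report_fake(text):
    """Add a debunked story's digest to the shared blocklist."""
    known_fake_hashes.add(fingerprint(text))

def is_known_fake(text):
    """Check a story against the blocklist before displaying or sharing it."""
    return fingerprint(text) in known_fake_hashes

report_fake("Scientists confirm chocolate causes weight loss!")
```

The obvious weakness, which malware signatures share, is that any substantive rewording produces a new digest, so exact hashes would only catch verbatim or near-verbatim copies of a debunked story.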
Unless we get this right, the future looks dark, warns Sambrook, who envisages Smith’s ecosystem overrun with fake news and hopelessly polluted by misinformation.
“The world is also becoming more polarised politically and less tolerant. I am afraid I see no signs of that being reversed. It may be a period, like the 1960s in the USA, where division eventually recedes, or it may end in war or civil violence. Given the disruption technology is bringing in all areas of the economy and employment, I’m not optimistic, I’m afraid.”
Technology may have a place in fighting that future, but ultimately it’s going to come down to us. Marshall McLuhan voiced it best in 1964, five years before researchers flipped the switch on the internet’s first router: “Faced with information overload, we have no alternative but pattern recognition.”
33 comments on “Fake news: Mozilla joins the fight to stop it polluting the web”
Fake news has always existed. Wars have been started because of fake news. The responsibility for determining fake news rests with the individual, who should be informed, literate, and possessed with critical thinking skills inherent in an education system (“how” to think, not “what” to think).
Any initiative to “stop fake news” is indistinguishable from censorship.
Case in point – Snopes is a veritable fake news site with an admittedly non-objective agenda. Setting them up as an authority is nothing less than the fox guarding the hen house.
Snopes does a very good job of establishing the veracity of all kinds of stories. In what way do you think it’s a “fake news” site? Citations needed.
I don’t know if it counts as fake news, but it does count as Snopes being inaccurate. My father was a counselor in the Illinois (US) prison system in the 1970s and told me then about people sagging their pants as an advertisement for… activity. I debated this with Snopes to get them to stop calling it false, but they refused. I have next to no trust in them, as I know at least one of their “fact check” items is wrong. Unless they can cite sources, I would still check them.
Snopes does cite its sources.
Well, there’s this thing called the “internet”, and it has something called a “search engine”. Google has one, I’m told. If you know how to work it (it’s not difficult, a child in your neighborhood should be able to help), you just start it up and type in “snopes errors”. You’ll be presented with a plethora of citations. If they all conflict with your preconceived conception of reality, well, then we just found the problem.
No, that’s not how it works. You make assertions, you provide the evidence.
OK, you’re asserting that Snopes is reliable. I need you provide the evidence. I’ll be waiting.
Mate, you made the initial claim upon which this entire comment thread is predicated: That Snopes is unreliable. It’s intellectually dishonest to initiate a conversation with an assertion, fail to provide evidence of your assertion, and then flip the burden of proof on the people who disagree with you.
If you had provided evidence in the first place, then – and only then – would the burden of evidence be shifted to your opposition.
And I just did provide the evidence. I showed you where to look, but you (apparently) refuse. You can see for yourself. That is, if you really want to know. So, as I suspected, you’re simply not open to an opposing viewpoint.
“And I just did provide the evidence.”
By saying “google it?” You know that’s not evidence, right?
I don’t care about whether Snopes is reliable or not, but being condescending and proving your point by suggesting to google for facts that support your opinion is kinda….
Your request for evidence that Snopes is reliable raises an interesting question: what does ‘reliable’ mean? Does it mean a site that has been entirely error-free, forever? You’d have trouble finding any online site that passed this test. There will be errors in even the most trusted publication at times. The New York Times keeps a public list of its own corrections.
It’s also worth pointing out the difficulty of fact checking as a process that produces consistent results. Chloe Lim at Stanford wrote a great paper about that, earlier this year. This platform doesn’t let me hyperlink but the URL is https://drive.google.com/file/d/0B_wUaJ01JSddZTNWVWpkRzVXUzg/view
So what’s the alternative? I think we can swap the squishy and hard-to-define term ‘reliability’ and replace it with an objective assessment of a fact checking site’s methodology and motivations. How does the site check its facts? Are its methods transparent so that you can evaluate them yourself? Does a site cite its sources so that you can check those, too? Has it spoken to primary sources in its research?
Who runs the site, and what is their background? Who funds them, and how? Are there any conditions attached to that funding? All of this information is publicly available from Snopes, and you can use it to build your own opinion of whether the site is trustworthy or not.
If you want to rely on other opinions, you’ll also find positive assessments of Snopes from organizations like Fact Check, which seem pretty well respected too.
One thing I like about Snopes is that it explicitly says you shouldn’t just rely on its own evaluations, but should use several to get a well-rounded view of the subject. That’s really smart advice and raises the point that I tried to make in the article: the onus for evaluating veracity ultimately lies with the reader. I worry about abdicating this responsibility to some non-transparent algorithm somewhere. Having the critical faculties to evaluate a source of online content is both empowering, and important for the health (and civility) of online debate.
Thanks for taking the time to comment. It’s edifying to see people reading the story and reflecting on the topic.
There’s one problem I see that seems to be ignored when challenging fake news, which is that the chocolate weight loss hoax was entirely factually correct. Certainly it lied by omission, but none of the facts stated were untrue. The study was too short, the sample size too small, and of all the measures of physical fitness they measured only two were actually reported on, but journalists weren’t going to spot that.
How do we spot factually correct fake news?
Journos have to do a better job at spotting dodgy surveys. This is something that few people seem to do, especially in the trade press. We need to be asking about survey methodologies, confidence intervals, and so on. Newsrooms are being pared back so much that a lot of people seem not to have the training, or are in such a hurry to meet a story quota that they’d rather not ask.
There is no need to Google for anecdotes about Snopes errors because we can have no reasonable expectation that Snopes would be error free (just ask a child in your neighbourhood).
My email spam filter does not stop 100% of spam 100% of the time but that does not mean it’s ineffective. In fact it’s a nailed-on certainty that my spam filter will suffer from false positives. This is also true of anti-virus software, spell checkers and countless other things we happily rely on every day (not least “search engines” like Google and their ability to correctly identify, codify and rank text it finds on the “internet”).
YOU HAVE GOT TO BE KIDDING RIGHT. SNOPES IS A TOTAL LEFT WING TOOL OF THE DEMOCRATIC PARTY. YOU MIGHT AS WELL SAY SNOPES IS PART OF THE DEEP STATE WHERE HAVE YOU BEEN HIDING.
Thank you for campaigning for democracy against internet diktats, dictators and dictatorships, Claire Annette Reed.
Or will the Sophos dictators and dictatorship censor this pro-democracy comment? LMAO.
Please explain, I don’t see how “democracy” fits into this. That would imply that people vote on what truth/facts are, facts just are, opinion is another matter.
Shakespeare wrote about the problem of fake news in Julius Caesar.
I don’t hold with this idea that just because something isn’t new it can’t suddenly get worse. The fact that Ebola was already known to exist before the most recent Ebola outbreak does not mean that the outbreak didn’t happen.
Of course not, but the natural state of the human is Ebola-free, so we can positively identify where Ebola is. I’ve yet to see any evidence of any unbiased media to use as a baseline.
Which is why I like nakedsecurity. You guys wear your pro-Sophos bias openly and it means I know where you’re coming from.
Does this mean CNN will be banned from the Internet?
or at least have them put a disclaimer like The Onion does 🙂
I know, let’s use democracy and have a vote to sort this out!
Oh, no, Trump and Farage used democracy. Let’s face it, Sophos, your problem is democracy, isn’t it?
Question: would any of this be happening if Trump had lost? Let’s have a vote! It’s democracy, stupids.
Censorship is a very slippery slope. Destined to fail.
“Mozilla to launch anti-fake news initiative funded by George Soros” is supposed to be the setup to a joke. Anyone who hears that should immediately bust out laughing.
1984 redux: Google, Facebook, Snopes and their ilk ‘filtering out’ fake news.
Richard Sambrook has it right when ‘he argues that online users are ultimately responsible for their own media literacy.’ Oh, and don’t hold your breath counting on the current education ‘system’ teaching the upcoming how vs what to think – just watch the anti-free speech brigade at work.
“Buzzfeed launched Debunk in an attempt to out-virus viral falsehoods with stories correcting them.” It’s a joke, right?
It’s good to see an article about this topic that offers more than the “Google, Facebook and co. should do something” approach to fake news. If people reasoned and thought about the stuff they read, we wouldn’t need fake news filters, algorithms or AI.
The beauty of the internet is that I, myself, that’s me, can search the net for myself to find information that I can determine for myself as to whether I believe it to be truth or not based on the evidence presented. Does the term “thought police” come to anybody else’s mind?
So will they also censor all the magazines that keep telling me that Angelina Jolie is pregnant with triplets again, or that Jennifer Aniston and Brad Pitt are getting married again, or that a baby bump confirms a soap opera star is pregnant by her boyfriend of three weeks?
The fact that there are sites like Snopes shows that we are becoming lazy in our search for truth. And who fact-checks Snopes?
Every person on the internet should be their own Fact Checker.
I agree, but then again I think everyone already thinks they are, and that’s the problem.
The internet is a popularity machine based on Darwinian principles. Search results, money (via advertising) and our time naturally flow to the things that are most attractive to us, not the most truthful. The content on the internet, and on platforms like Facebook, Twitter, Google etc, evolves; things that don’t deliver disappear quickly in favour of things that do.
Meanwhile… facts are static and don’t evolve.
We seek the truth but we do it in a fog of laziness and confirmation bias using a machine that has no interest in making the truth available to us. We find things we believe to be the truth that are, in fact, things that have evolved to fit perfectly into our expectations of what the truth ought to be.
Take any conspiracy theory. People on either side will tell you, with total conviction, that they are their own fact checkers, that the truth is out there if only you’d look for it and that it’s obvious when you find it. Somewhere in the middle there is a charlatan either concocting a fake conspiracy or covering a real one up and a much, much larger group of honest people convinced that they’ve earnestly sought and found the truth.
I’m *FAR* more concerned about the arbitrary censorship of ‘fake news’ than the existence of fake news itself.
CenSorsHip plain and simple