Fake news still rattling cages, from Facebook to Google to China

Post-election, the ripples from fake online news continue to rock boats, from Google to Facebook to China and beyond.

The way to tackle the problem, as far as China’s concerned, seems to be to track down those who post fake news and rumors, and then “reward and punish” them – whatever that means.

According to Reuters, Chinese political and business leaders speaking at the World Internet Conference last week used the spread of fake news, along with activists’ ability to organize online, as signs that cyberspace has become treacherous and needs to be controlled.

Ren Xianling, second in command at the Cyberspace Administration of China (CAC), said that the country should begin using identification systems to track down people who post false news and rumors.

It’s one more step on the road to a more restricted internet: one that China’s already walking and one that extends even beyond its infamous Great Firewall of censorship.

Earlier this month, the country adopted a controversial cybersecurity law, set to go into effect in June 2017, that has companies fearing that they’ll have to surrender intellectual property or open backdoors in their products in order to operate in China.

Meanwhile, over at Facebook, employees have reportedly gone rogue, forming an unofficial task force to study fake news.

According to BuzzFeed, the renegades have already disagreed with CEO Mark Zuckerberg, who called it “a pretty crazy idea” to think that fake news on Facebook influenced the election’s outcome.

He’s since dialed it back, saying that this is an issue that Facebook has “always taken seriously”.

Over the weekend, Zuck took to his personal Facebook page to post seven projects launched to tweak the site and polish the algorithms that pushed fiction to the top of Trending, where it’s been masquerading as real news.

They are:

  • Stronger detection: improving the systems that spot misinformation before users have to flag it themselves.
  • Much easier user reporting.
  • Third-party verification by fact-checking organizations.
  • Possible warnings on stories flagged by those fact-checkers or the Facebook community.
  • Raising the bar for what stories appear in “related articles” in the News Feed.
  • Cutting off the money flow. “A lot of misinformation is driven by financially motivated spam. We’re looking into disrupting the economics with ads policies like the one we announced earlier this week, and better ad farm detection,” Zuckerberg said.
  • More input from news professionals, to better understand their fact-checking systems.

As the media has covered in minute detail since the election, it’s been suggested that such fake news swayed voters, who shocked much of the world by electing Donald Trump in the US presidential election.

If we bounce on over to Google, another heavyweight in the news dissemination machinery, we find that it’s reportedly planning to remove its “In the news” section from the top of desktop search results in coming weeks.

Google got dragged into the fake news mess last week, when its search engine was prominently displaying a bogus report about Donald Trump having won the popular vote.

One of the top results for the In the news section when visitors searched for “final election count” was a blog, 70 News, that falsely claimed Trump had won the popular vote by a margin of almost 700,000.

He didn’t. As of Tuesday, votes were still being counted, but Hillary Clinton’s lead of 1.7 million votes was still growing.

Business Insider spoke to a source familiar with Google’s plans who said that it will replace the In the news section with a carousel of top stories, similar to what it now features on mobile.

The plan was in the works for some time before the 70 News piece got featured.

The removal of the word “news” will, hopefully, help visitors distinguish between Google’s human-vetted Google News product and the results of its Google Search engine, which don’t get assessed on the basis of whether they’re true or not – just whether they’re newsy.

However, Google has made clear that it’s not interested in serving up nonsense. Last week, Google CEO Sundar Pichai had this to say on the matter:

From our perspective, there should just be no situation where fake news gets distributed, so we are all for doing better here.

To put some bite into that bark, Google said it would starve out fake-news sites, banning them from its ad network and all that revenue. Facebook did the same.

In his post, Zuckerberg stressed that this is complex stuff, technically and philosophically. Facebook doesn’t want to suppress people’s voices, so that means it errs on the side of letting people share what they want whenever possible. The more people share, the more the ad revenue flows, and it doesn’t matter to ad revenue what people share, be it divine inspiration or drivel.

But over at Princeton University, four college students last week showed that as far as the technical part of the equation goes, it might not be quite that hard after all.

The Washington Post reports that the four spent 36 hours at a hackathon, coming out the other end with a rudimentary tool to block fake news sites.

They’re busy with class work and a little overwhelmed by an outpouring of interest. Want to take it for a spin? Here you go: they’ve open-sourced their Chrome extension.
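The students’ actual code does more than this, but the core idea of a site blocker is simple: compare the current page’s hostname against a blocklist. Here’s a minimal sketch of that check — the domain names and function name are hypothetical, not taken from their open-sourced extension:

```javascript
// Toy sketch of the basic check a fake-news-blocking browser extension
// might perform: match the page's hostname against a blocklist.
// The domains below are made-up placeholders, not a real blocklist.
const BLOCKLIST = new Set([
  "example-fake-news.com",
  "totally-real-headlines.net",
]);

function isBlocked(url) {
  // Normalize by stripping a leading "www." before the lookup.
  const host = new URL(url).hostname.replace(/^www\./, "");
  return BLOCKLIST.has(host);
}

console.log(isBlocked("https://www.example-fake-news.com/story")); // true
console.log(isBlocked("https://news.example.org/story"));          // false
```

In a real extension this check would run in a content script or via the browser’s request-blocking APIs, and the hard part — curating a trustworthy blocklist — is editorial, not technical.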

As the fake-news saga keeps spinning, bear in mind that we can influence this, too. If we see something that we consider fake and comment on it, that’s a +1 as far as the algorithms are concerned.

Did you share it with friends so you can all laugh at how dumb the post was? That’s another +1. All your friends who chimed in? +1, +1, +1, +1. Instead, just ignore it; starve fake news until it shrivels out of our feeds.
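The dynamic described above — every comment and share counts as engagement, whether it’s praise or mockery — can be illustrated with a toy ranking sketch. This is not Facebook’s actual algorithm; the weights are invented for illustration:

```javascript
// Toy illustration of engagement-based ranking: the scorer only sees
// interaction counts, not whether the interactions were sincere.
// Weights are made up; this is not any real platform's formula.
function engagementScore(post) {
  return post.comments * 2 + post.shares * 3 + post.likes;
}

const hoax = { comments: 40, shares: 20, likes: 10 }; // mocked, but engaged-with
const real = { comments: 5,  shares: 2,  likes: 30 }; // read quietly

// The hoax "wins" on engagement even if every comment was ridicule.
console.log(engagementScore(hoax) > engagementScore(real)); // true
```

Under any scoring along these lines, ignoring a post is the only interaction that doesn’t feed it.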