Friday, November 18, 2016 :::
Some guy is collecting ideas for preventing people from believing fake news.
The one overarching point I would make is that which sources can be trusted has to be user-configurable, and that should include some mechanism for allowing trusted sources to vouch for other sources. I see some ideas on that list that seem worthwhile -- articles that are shared by diverse groups of people are more likely to be reliable; "a Dem flagging a site that is typically visited by more Dems gets more weight" -- but they imply a single arbiter of truth. Not that those ideas are worthless -- if you are going to design a system that allows each user to select an arbiter (or a combination of arbiters), it's certainly worth designing at least one to start with. But expecting right-wing nuts and left-wing nuts and everyone else to accept the same arbiter seems more naive to me than allowing people to sign up with arbiters from their "team" and expecting that the junkiest junk will be flagged by left-wing arbiters and right-wing arbiters alike.
We can learn something from the fact-check genre of journalism here. Most fact-checking stories, as I read them, consist of an article adding context to the "fact" being checked, with a verdict in the headline. The articles are generally full of indisputable (or nearly indisputable) facts, but the verdict is often very disputable or even clearly wrong. When the verdict is correct, the article generally provides crucial context; when the verdict is wrong, the article goes off on a tangent with what the author seems to think is relevant context but which strikes me as interesting color at best. I advise people to ignore the verdicts but read the articles.
An example that particularly sticks with me is when Politifact responded to Kevin Williamson's point, in National Review, that some dubious forms of medicine, such as gay-conversion therapy, are prohibited by states, while other dubious forms of medicine are subsidized by the Affordable Care Act. The Politifact writer rated his statement "half true," dwelling on the fact that the amount of money spent on the dubious therapies Williamson cited is not terribly large in the grand scheme of things. It struck me that if Williamson had written a column about homeopathy saying, "here's a great way to cut the federal budget deficit," a verdict of "half true" would have been, if anything, generous. But his point was that fringe medicine can be either prohibited or subsidized by the state for purely political reasons, and that assertion was the 100% unadulterated truth.
Sometimes, complete falsehoods float around, either as unrecognized satire or as deliberate deceit, and those do seem to be getting more common. But political and commercial advertisers are more likely to bend the truth than to break it. I think fact-checking organizations should recognize this by using non-linear scales for their verdicts -- they shouldn't just say how true or false something is; they ought to be able to say "literally true, but misleading," or "literally false, but substantially true," or "true, if you understand the following context." I'm not sure a Facebook feature can do better than show or hide a website; a browser plug-in might reasonably signal a little more. But improving the information environment of the Internet depends mostly on filtering out the completely fabricated, and the only way to get a partisan to believe that something is completely fabricated is to get someone on their team to declare it unreliable.
Labels: fact checking, Kevin Williamson, media distortion, Politifact
::: posted by Steven at 12:55 AM