“Post-truth” doesn’t have to mean factual relativism

Most pieces about the “post-fact” or “post-truth” world express concern regarding the possibility that it’s now commonplace – or even somehow acceptable – to make stuff up instead of offering arguments and evidence for your claims.

My contribution to the discussion was to point out that truth has never mattered as much as we might prefer. But the fact that people don’t care to (or find it difficult to) escape their filter bubbles doesn’t need to entail giving up on facts and the truth entirely.

The value of comment sections and debates in digital spaces

Regular readers will know that I’ve recently been wondering whether to continue hosting comments here on Synapses, as well as about their value in a more general sense.

I’m not shutting comments down, but will move to moderating them, meaning that it might take up to 24 hours for any comment to appear, and some comments will not appear at all, if I deem them abusive or idiotic. The decision was precipitated by two coincidences, featuring two friends who raised overlapping conversations on Facebook, both of which I engaged with.

The debate on Nathan Geffen’s wall about trolls on GroundUp, and how to deal with them, raised the point that without full-time moderation, comment sections can easily become toxic.

Also, I’ve been led to believe that there’s a potential for legal liability for things posted on one’s own site by commenters, while no such liability exists on Twitter or Facebook (for what other people say, I mean).

Then, Eusebius McKaiser asked for a view on Nick Cowen’s IOL piece arguing that we can’t have productive debate in online spaces, and much of what I say below is a response to that piece (in short, I think we can, but it takes more work than many of us care to do; in my case, I get few enough comments that the necessary moderation is possible).

Before I get to responding to that IOL piece, just a note on how things will work here with regard to comment and debate. Individual posts will have a moderated comment section, but please also feel free to do one of three things instead, if you prefer:

  1. The old-fashioned “letter to the editor”: if you’re amenable, and I think your contribution might be of broader interest, I’ll post it as a separate entry.
  2. If you’re on Facebook, there is a page for Synapses. Every entry appears there, and you can comment as much as you like, unmoderated. The same is true for Google+.
  3. Lastly, there’s Twitter, which isn’t ideal for debate, but certainly gives you the opportunity to call me names (if that’s your thing), or to make more friendly noises.

On to the IOL piece, which you don’t have to have read to follow what comes next. To quote myself:

it seems to my mind at least plausible that we’re living though an era in which ideas themselves are not that welcome. Where, as Neal Gabler recently put it in a column John Maytham was kind enough to alert me to, the “public intellectual in the general media [has been replaced] by the pundit who substitutes outrageousness for thoughtfulness”.

Despite the demise of postmodernism in academic circles, it still lives and breathes in the popular viewpoint that everybody’s opinion is equally worthy of consideration, and that individuals are under no special obligation to set aside their opinions in favour of what the evidence points to.

The Internet, its potential anonymity, and the sheer volume of both opinions and outrage don’t encourage thoughtful reflection and engagement. I find that the overall quality of discourse and openness to correction is poor on the Internet, and as a result, I tend to only read comment sections to confirm that they are places where people seem unafraid to express their racism, sexism and (other forms of) stupidity.

There are pockets where people do engage earnestly and sincerely, and where there is a chance of shifting people’s perspectives. Eusebius’s Facebook wall is itself one small example of that. It’s true that people don’t often say “you’ve changed my mind”, but it’s something that can be intuited from how the tone and content of a conversation shifts.

Second, I’m not sure that the situation is significantly better in meatspace. There, just as on the Internet, people are stubborn, prone to confirmation bias and the backfire effect, etc. It’s partly the fact that there are more participants – with those participants not being carefully selected – in the online space that creates the impression that it’s more chaotic there. In other words, if we were to have an open house in meatspace to discuss something contentious, we might more often have the same impression of shouting past each other.

By contrast, if you do online what you do in meatspace, i.e. carefully select your interlocutors, you’d have the same “civilized” conversations (at least in a relative sense). The problem is that a) you don’t always get to select who talks to you online, and b) all the non-verbal cues, such as smiles and body language, aren’t available to us online.

Complicating all of this is my sense that conversations in both spaces are less civilized than they used to be, because everyone is now an expert in everything. The idea of democracy has been illegitimately expanded into epistemic territory, where the average person has been persuaded that their views are as legitimate as anyone else’s, and where criticism of a view is treated as an attack on the person holding it, rather than as a contestation about the facts or the interpretation of them.

We’ve become too personally invested in our beliefs, to put it simply.

The Responsible Believer – my #TAM2014 talk

Earlier this year, I had the opportunity to present a paper at The Amaz!ng Meeting, held in Las Vegas. Here’s the YouTube video of my presentation, with the text pasted below that.

It addresses concerns I have regarding epistemic humility and prudence – basically, remembering that we’re often wrong, no matter how smart we might think we are, and also that the claims we make need to be offered with a level of conviction that is appropriate to the evidence at hand.

The virtues of (epistemic) agnosticism, and the responsible believer

Jacques Rousseau, Free Society Institute.

One of the important lessons that the skeptical community can teach others is that things are often uncertain. We might have very good reason to believe something, yet not feel entitled to claim that we are sure of it. This attitude of epistemic prudence – not making claims that aren’t warranted by the evidence – and a certain humility – being able to accommodate the possibility that you might be wrong – are essential resources for triangulating on the truth.

The idea of epistemic prudence is worth dwelling on for a moment: one striking difference between skeptical thinkers and non-skeptical thinkers is their attitude towards certainty. I’d suggest that a skeptical thinker would be far more likely to recognise that what we typically call ‘true’ is merely the best justified conclusion available to us given the available evidence – we are rarely making the claim that it is, in fact, certain to be true. Justification, or warrant, is our proxy for truth, and our strategy for triangulating on the truth.

By contrast, the tone of much popular discourse, including coverage of important fields within science in newspapers and on social media, proceeds as if things can be known, for certain. This leads to absurd contestations where things are “proved” and then “disproved” with each new bestseller, and where apparent “authorities” rise and are then quickly forgotten as our attention shifts to the next sensation.

The diet wars are a current and fitting example of this, where moderation is drowned out by inflated claims that Paleo is the only way to go, or that sugar is not just something to be careful of, but something that is “addictive”, even “the new tobacco”.

The temperature of these debates might sometimes be far more comfortable, and the outcomes more fruitful, if proponents adopted a more considered tone and resisted claiming the final word. After all, knowledge is contingent on what we can possibly know at any given time, and the chances that you’ve got things exactly right at some particular point in time are therefore often small.

Then, epistemic humility – being willing to recognise that you might be wrong – is an essential component of holding a robust set of beliefs. Smugness or overconfidence regarding the set of ideas that you regard as true might sometimes be justified – leaving aside the issue of how attractive or politically effective smugness might be – but it can also be a sign that your belief has ossified.

Your conviction can, in these instances, become something closer to an item of faith, rather than a belief held responsibly – meaning one that you still regard as potentially falsifiable (even as you think it unlikely that it would ever be falsified).

And it’s exactly because we skeptics tend to be relatively virtuous in terms of these two attributes – humility and prudence – that we need to remind ourselves to perform a diagnostic check every now and then, and perhaps especially in situations where our point of view is being challenged by others.

Being better at avoiding confirmation bias and other common ways of getting things wrong doesn’t make us immune to those mistakes. In fact, our confidence in getting things right might be a particularly problematic blind spot, because if we think we’ve learnt these lessons already, we might falsely believe that there’s little need to keep reminding ourselves of various mistakes we might still be making.

So in light of the fact that the world is complex, and that none of us can be a specialist in everything, should it not strike us as odd how seldom we hear anyone say something like “I simply don’t know enough about that issue to have a position on it” – instead of taking a view, and then eagerly defending that view?

Jonathan Haidt’s account of how moral reasoning works is a useful analogy for many claims regarding contested positions. Where we find ourselves committed to a view, and stubbornly defending it, rather than finding that a view develops in light of the evidence, we should recognise that the psychology of belief and the politics of debate mean that it’s often the case that “the emotional dog wags the rational tail”.

In other words, we emotionally commit ourselves once we take a position, and then make rational-sounding noises to justify it, rather than being able to admit that we’re not sufficiently informed to take a position on this matter, or that our position is far more tentative than we’d like you to believe.

Too many of us seem to despise doubt or uncertainty, even if that’s the position best supported by the evidence we have access to. We like to have strong opinions, and with the rise of social media – where robust and hyper-confident expression gets the most attention – the space for being uncertain, and for expressing that uncertainty, closes off just that little bit more.

To add to the difficulty of entertaining and encouraging considered debate, the widespread availability of information via the Internet has arguably “democratised” expertise itself. Both the idea of authority and authorities themselves are under constant challenge from everybody with an Internet-connected device. In other words, by everybody.

While it’s of course true that we shouldn’t accept the testimony of authorities in an uncritical way, we need to nevertheless accept that expertise and privileged access to information are real things that can’t easily be replaced by Googling. Sometimes – most of the time, in fact – someone will know more than you, and you could quite possibly be wrong.

What the death of authority means is that no matter what your point of view, you can find communities that share it, and that reinforce your belief while you reinforce theirs, with all of you walking away believing that you are the authorities and everyone else bafflingly obtuse.

Eli Pariser’s concept of the “filter bubble” articulates this point well – if you’re looking for evidence of Bigfoot on a cryptozoology website, you’ll find it. Chances are you’ll end up believing in the Loch Ness monster too, simply because the community creates a self-supporting web of “evidence”. When these tendencies are expressed in the form of conspiracy theories, the situation becomes even more absurd, in that being unable to prove your theory to the doubters is taken as evidence that the theory is true – the mainstream folk are simply hiding the evidence that would embarrass or expose them.

Combine the filter bubble and the democratisation of expertise with the nonsense of a blanket “respect for the opinions of others”, and we quickly end up drinking too deeply from the well of postmodernism, where truth takes a back seat to sensation, or where simply being heard takes too much effort, and we withdraw from debate.

Despite these complications, we can all develop – as well as teach – resources for separating unjustified claims from justified ones, and for being more responsible believers. By “responsible believers”, I mean both taking responsibility for our beliefs and their implications, and holding beliefs responsibly – in other words, forming them as carefully as possible, and changing our minds when it’s appropriate to do so.

Peter Boghossian’s superb 2013 book, “A Manual for Creating Atheists”, introduces the concept of “street epistemology” – simple but effective rhetorical and logical maneuvers that we can deploy in everyday situations. In a similar vein, I’d like to articulate a few concepts that can serve as resources for making it more likely (as there are no guarantees available) that we end up holding justified beliefs in a responsible fashion.

THE POLITICS OF KNOWLEDGE

Many of our blind spots and failures in argument involve the politics of the situation, rather than errors in cognition. What I mean is that even though we might be quite aware of how our thinking can be flawed when we consider these things abstractly, we forget what we know about good reasoning in the heat of “battle”, especially when engaging in areas of frequent contestation.

The sorts of contestations I’m imagining are atheists debating theists, or scientific naturalists versus Deepak Chopra, debates around gender and sex, or the extent to which atheism, skepticism, humanism and the like are supposed to intersect. We can become so obsessed with being right, and being acknowledged as right, that we forget about being persuasive, and about what it takes to get people to listen, rather than leap to judgement.

Debates occur in a context. Your opponent is rarely stupid, or irredeemably deluded – they more often simply have different motivations to yours, as well as access to a different data set (regardless of its quality relative to yours). So, to paraphrase Dan Dennett, we might usefully be reminded of the importance of applying Rapoport’s Rules when in argument.

If you haven’t encountered Rapoport’s Rules before, they invite us to do the following:

  1. Attempt to re-express your target’s position so clearly, vividly and fairly that your target says: “Thanks, I wish I’d thought of putting it that way.”
  2. List any points of agreement (especially if they are not matters of general or widespread agreement).
  3. Mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

One immediate effect of following these rules is that your targets will be a more receptive audience for your criticism than they would otherwise have been.

EXPLANATIONS & REASONS

Then, we could bear in mind what Leonid Rozenblit and Frank Keil from Yale University dubbed “the illusion of explanatory depth”. We’re inclined to believe that we have a robust understanding of how things work (especially things we’re emotionally committed to), whereas in fact our understanding might be superficial, or at least difficult to convey to a less-informed interlocutor.

Those of us who, like me, teach professionally know this phenomenon well. You might launch into an exposition on a topic you think you know well, but then quickly realize that you don’t quite have the words or concepts to convey what you mean – even though it seemed crystal clear to you when planning your lesson.

Philip Fernbach, of the University of Colorado, wrote up an interesting 2013 experiment that invites us to recognize and leverage this illusion, not only to focus on the quality of our own explanations, but perhaps also to help persuade others that they are wrong: instead of providing reasons, try providing explanations.

For example, instead of asserting that we need universal healthcare because everyone is morally equal in this respect, and therefore equally entitled to care from the state, try explaining how your envisaged universal healthcare scheme would work – how it would be implemented, what it would cost, who would pay, and who would benefit.

This approach stands a better chance of persuading others that you are right, because you have “shown your workings”, rather than asserted your view. It also – crucially – stands a better chance of showing you where (and if) you are wrong, because sometimes those workings don’t stand up to scrutiny, and exposing them allows you to spot that.

Another way of putting this is that if you’re offering reasons, it’s likely that you’re mostly trying to demonstrate that you’re right. If you’re explaining, it’s more likely that you understand why you’re probably right, and therefore, that you’d be able to effectively articulate to others why they should be persuaded to subscribe to your point of view.

BACKFIRE EFFECT

Whether you’re explaining or not, keep in mind that we don’t operate in a space of pure reason. We’re often emotionally invested in our beliefs, to the extent that we’re prone to what’s known as the “backfire effect”.

While we like to think that when our beliefs are challenged with strong evidence, we’d be willing to alter our opinions – adjusting our claims in light of the new evidence that has been presented to us – the truth is not always that flattering.

When our deep convictions are challenged, the perceived threat can mean that we dig in our epistemic heels, allowing our existing beliefs to become more entrenched rather than considering the virtue of the challenge that has been put to us.

Consider the possible longer-term implications of this: once you rule one set of considerations out as irrelevant to the truth of your thesis, how much easier might it be to cut yourself some slack on some future set of considerations too – and therefore, how easy might it be to end up with beliefs that are essentially unfalsifiable, and so only as virtuous as those of an astrologer?

CONCLUSION

In learning about various ideas related to scientific reasoning and how to assess evidence, we shouldn’t forget that we can be the victims of various biases ourselves. Furthermore, congregating as self-declared skeptics shouldn’t be allowed to obscure the fact that we can create our own filter bubbles at events like TAM, and need to guard against that possibility.

To return briefly to where I started, the epistemic prudence I was speaking of might properly lead – more often than we think – to a conclusion that is essentially agnostic. In other words, the most justified position for you to take on a particular issue might be to say something like, “I simply don’t know”, or perhaps to engage in debate mostly for the sake of argument, rather than for the sake of defending a view you’re not fully qualified to hold.

Agnosticism of this sort does not necessarily entail thinking that two perspectives on an issue are equally well justified. The agnostic can believe – and even be strongly convinced – that one side of the argument is superior to the other. Agnosticism of this modest sort simply means that we recognize we are not justified in claiming certainty, and speaking in ways that presume certainty. Our discourse should acknowledge the limits of what we can know.

This is an important attitude or style to cultivate, because for as long as we are resisting unwarranted confidence or the appearance of it, we’re signaling to others and reminding ourselves that the evidence still matters, and that our minds can still be changed.

I’m emphasizing this idea because our considered views are – or should be – always contingent on the information we have access to. And, we are often in a position to confess in advance that this information is inadequate for conviction to be a reasonable attitude. We nevertheless feign conviction in conversation, partly because many of these debates take place on social media platforms that eschew nuance.

But it’s our job to fight for nuance, and to demonstrate, partly through showing that we’re willing to embrace uncertainty, why you should take us seriously when we claim that some conclusion or other is strongly justified. We devalue our skeptical currency, or credibility, when we assert certainty – and we do the political cause of skepticism harm.

To repeat, this doesn’t mean we can’t take sides, and it also doesn’t entail the sort of false balance that would require one to give a creationist a seat at the adult table. Instead, I’m urging that we become more aware of our own fields of expertise, and not overstep the boundaries of those fields, or our knowledge more generally, without expressing our views in a qualified way, aware of our limitations.

We might say that while we’re not sure, we think it’s overwhelmingly likely that some position is wrong or right. The point is that avoiding dogmatism and its more diluted manifestations reminds us that it’s possible to change an opinion when new evidence comes to light.

Our worth as skeptics is not vested in conclusions, but in the manner in which we reach conclusions. Skepticism is not about merely being right. Being right – if we are right – is the end product of a process and a method, not an excuse for some sanctimonious hectoring.

Sometimes we need to remind ourselves of what that method looks like, and the steps in that process, to maximize our chances of reaching the correct conclusion. Focusing simply on the conclusions rather than the method can make us forget how often – and how easily – we can get things wrong.

As skeptics, we need to set an example in the domain of critical reasoning, and show others that regardless of authority or knowledge in any given discipline, there are common elements to all arguments, and that everybody can become an expert – or at least substantially more proficient – in how to deploy and critique evidence and arguments.

As humanism can be for ethics – a woo-woo-free inspiration and guide to living a good life – so skepticism should be for science, providing resources and examples of how to be a responsible believer, and of the importance of holding yourself responsible for what you believe.

So if we’re spending excessive skeptical energy in self-congratulation for how smart we are compared to some gullible folk out there, rather than in helping them develop the intellectual resources we’re so proud of, perhaps we should consider whether we might be doing it wrong – or at least, whether we could be doing it better.

Epistemic prudence, Noakes, and the limits of authority

Wittgenstein said “Whereof one cannot speak, thereof one must be silent”, and that quote seems as good a place as any to kick off a post on appeals to authority, the death of expertise, and the boundaries of disciplines. As I argued in a 2012 column, agnosticism is often the most reasonable position on any issue that you’re not an expert in (with “agnosticism” here meaning the absence of conviction, not necessarily the absence of an opinion).

Politics, science, and the art of the possible

Otto von Bismarck observed that politics is “the art of the possible”, but the statement holds true in many more domains than that. It’s only trivially true to say that anything is constrained by what is possible and what is not – yet that sort of retort is usually as far as the conversation might go (on social media in particular).

It’s more likely that Germany’s first Chancellor was trying to say that there’s frequently a mismatch between our ideals and what can reasonably be achieved. Not, in other words, that things are literally impossible – more that we need to bear the trade-offs in mind when making judgements as to whether people are doing a good job or not.

Cognitive biases like the Dunning-Kruger effect describe how we overemphasise our own expertise or competence, leading us to ascribe malice in situations where the explanation for someone’s screw-up is most probably simple incompetence, or simply that the job in question was actually pretty difficult, meaning that expecting perfection was always unreasonable. (As some of you would know, this paragraph describes a more gentle version of Hanlon’s Razor – “Never attribute to malice that which is adequately explained by stupidity”.)

So, instead of paying attention to the arguments and their merits when it comes to something like blood deferrals for gay men, we claim prejudice. Or, when someone dies after taking the advice of a homeopath too seriously, some of us might be too quick to call the victim stupid or overly gullible, instead of focusing on those who knowingly (because some quacks are of course victims themselves) exploit others for financial or other gain.

The point is that some problems are difficult to solve, and certainly more difficult than they appear to be from a distance, or from the perspective of 20/20 hindsight. So, when you accuse your local or national government of racism, or being anti-poor, or some other sort of malice, it’s always worth pausing to think about the problem from their point of view, as best as you are able to. They might be doing the best they can, under the circumstances.

In case you aren’t aware of two recent resources for helping us to think these things through more carefully, I’d like to draw a recent comment in the science journal Nature to your attention, as well as a response to it that was carried in the science section of The Guardian.

In late November this year, Nature offered policy-makers 20 tips for interpreting scientific claims, and even those of you who aren’t policy-makers should spend some time reading and thinking about these (though don’t sell yourselves short by refusing the label of policy-maker: on one level, policy includes things like parenting, and what you choose to feed your children, or the medicines you give them, would usually be informed – or so one would hope – by scientific claims of whatever veracity).

The Nature piece talks about sample size, statistical significance, cherry-picking of evidence, and 17 other important issues, many of which you’d hope some scientists would themselves take on board – not only those scientists who might play fast-and-loose with some of the issues raised, but also simply in terms of how they communicate their findings to the public. If you’re asked to provide content for a newspaper, magazine or other media, the article highlights some common areas of confusion, and so helps you to know where you could perhaps be clearer.
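
To make the sample-size and significance worries a little more concrete, here is a small simulation sketch of my own (in Python, assuming NumPy and SciPy are installed; it is not drawn from the Nature list itself). With a modest true effect, small studies usually fail to detect it, and the small studies that do clear the p < 0.05 bar tend to overstate the effect’s size:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    TRUE_EFFECT = 0.3  # true difference between group means, in standard deviations

    def simulate(n, trials=5000, alpha=0.05):
        """Run many hypothetical two-group studies of size n per group and report
        (a) how often the true effect is detected at the alpha level, and
        (b) the average estimated effect among the 'significant' studies."""
        significant_estimates = []
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n)
            treated = rng.normal(TRUE_EFFECT, 1.0, n)
            if stats.ttest_ind(treated, control).pvalue < alpha:
                significant_estimates.append(treated.mean() - control.mean())
        power = len(significant_estimates) / trials
        mean_estimate = float(np.mean(significant_estimates)) if significant_estimates else float("nan")
        return power, mean_estimate

    for n in (10, 50, 500):
        power, estimate = simulate(n)
        print(f"n={n:3d} per group: detected {power:5.1%} of the time; "
              f"average 'significant' estimate = {estimate:.2f} (true effect = {TRUE_EFFECT})")

The specific numbers don’t matter much; the point is that “statistically significant” and “true, and accurately estimated” come apart most readily at small sample sizes, which is exactly the sort of caution the Nature tips encourage.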

Second, and in response to the first piece, The Guardian (which had also re-published the Nature list) gave us the top 20 things that scientists need to know about policy-making. This piece I’d commend to all of you, but especially to the armchair legislators who routinely solve the country’s political problems on Twitter, or make bold claims about how little or how much governments might care for the poor, and so forth.

In short, making policy is difficult, and doing good science can be difficult too, because among the things we can be short of are time, money, attention, the public’s patience, and so forth. In the majority of cases, both policy-makers and scientists might be doing the best they can under those constraints. So before we tell them that they are wrong, we should try to ensure we at least know what they are trying to do, and whether they are going about it in the most reasonable way possible, given the circumstances.

They don’t get the luxury of ignoring what is possible and what is not when doing the science, or making the policy. When criticising them, we shouldn’t grant ourselves that luxury either.

To ask for evidence is not (necessarily) scientism

As submitted to The Daily Maverick

Respect is due to people, rather than to ideas. While it may be politically incorrect to say so, there is no contradiction between saying that someone has a misguided, uninformed or laughable point of view, and at the same time recognising that person’s worth or dignity in general. But our sensitivity to being challenged, and to having the intrinsic merit of our ideas questioned, often leads us to conflate these two different sorts of respect.

Respecting a person is partly a matter of not causing them unnecessary trauma through ridicule or contempt. It also requires not prejudging their arguments or points of view, but rather judging those arguments on their merits. But if it is established that those arguments lack merit (when compared with competing arguments on the same topic), there is nothing wrong with pointing this out. It is perhaps even a duty to point it out, assuming that we care for having probably true, rather than probably false, beliefs about the world.

The dangers of tolerance

As published in The Daily Maverick, a companion piece to my previous post entitled Suffer the little children (some overlapping content, sorry).

Julian Barnes’ memoir “Nothing to Be Frightened Of” opens with the sentence “I don’t believe in God, but I miss Him”. This echoes a question asked by Daniel Dennett in “Breaking the Spell” – that of whether we care more about being able to believe that our beliefs are true, or about those beliefs actually being true.

We might have rational doubts about all sorts of beliefs, yet still want them to be true. Or find value in living our lives under the assumption that they are true. It would be impossible – or at least exceedingly difficult – to live your life feeling that your job was meaningless, that you were not loved, or that you had no free will and no actual soul, despite the fact that one or more of those statements may be true. We seem to seek out (and perhaps even need) some transcendence or metaphysics in our lives.

But those desires and/or needs do not make their objects true or real. We need to bear in mind the possibility that certain beliefs serve a social or psychological function only, and that “belief in belief” may take us as far as we can go. In other words, that no value is added by insisting on the actual truth of some of our beliefs. In particular, we need to contemplate the possibility that treating some beliefs as literally true could be harmful, rather than neutral.

Preliminary thoughts

Much of what I’ve been interested in over the last decade or so has revolved around epistemology, and in particular virtue epistemology – in other words, questions around what it is that we should believe, and how we should form our beliefs. These are normative questions, and raise a whole bunch of issues relating to the extent to which we are in fact able to be rational epistemic agents; what such agents would look like; and whether we would want to be disposed in this way at all.