Over the past year or so I’ve become steadily more aware of, and annoyed by, a phenomenon I’m going to call, for lack of a better term, ‘intuition jousting’ (‘IJ’). My experience, and obviously I can only speak for my own, is that IJ is quite a serious phenomenon in the Effective Altruism (‘EA’) community. It also exists amongst academic philosophers, although to a much more modest extent. I’ll explain what IJ is, why it’s bad, why I think it’s particularly prevalent in EA and what people should be doing instead.

Intuition jousting is the act of challenging whether someone seriously holds the intuition they claim to have. The implication is nearly always that the target of the joust has the ‘wrong’ intuitions. This is typically the last stage in an argument: you’ve already discussed the pros and cons of a particular topic and have realised you disagree because you’ve just got different starting points. While you’ve now exhausted all logical arguments, there is one additional rhetorical move to make: claiming someone’s fundamental (moral) instincts are just flawed. I call it ‘jousting’ because all it involves is testing how firmly attached someone is to their view: you’re trying to ‘unhorse’ them. Intuition jousting is a test of strength, not of understanding.

It’s possible there’s already a term for this phenomenon that I’ve not come across. I should note it’s similar to giving someone whose argument you find absurd an ‘incredulous stare’: you don’t provide a reason against their position, you just look them in the eye like they’re mad. The incredulous stare is one potential move in an intuition joust.

To give a common example, lots of philosophers and effective altruists disagree about the value of future people. To some, it’s just obvious that future lives have value and the highest priority is fighting existential threats to humanity (‘X-risks’). To others, it’s just obvious there is nothing morally good about creating new people and we should focus on present-day suffering. Both views have weird implications, which I won’t go into here (see Greaves 2015 for a summary), but conversation often reaches its finale with one person saying “But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.” Typically at that stage the person will fold his arms (it’s nearly always a ‘he’) and look around the room for support.

Why do I think intuition jousting is bad? Because it doesn’t achieve anything, it erodes community relations and it makes people much less inclined to share their views, which in turn reduces the quality of future discussions and the pursuit of knowledge. I hope it’s clear that IJing isn’t arguing, it’s just disagreeing about who has what intuitions. Given that intuitions are the things you hold without reasoning or evidence, IJ has to be pointless. If you reach the stage where someone says “yeah, that is just what I believe, I can’t give you any further reasons” and you then decide to tell them those beliefs are stupid, all you’re doing is trying to shame or pressure them into admitting defeat so that you can win the argument. Obviously, where intuition jousting occurs and people feel their personal views will be attacked if they share them, people will be much less inclined to cooperate or work together. To be clear, I don’t object at all to arguing about things and getting down to what people’s base intuitions are. Particularly if they haven’t thought about them before, this is really useful. It’s the jousting aspect I think is wrong.

I’ve noticed IJing happens much more among effective altruists than academic philosophers. I think there are two reasons for this. The first is that the stakes are higher for effective altruists. If you’re going to base your entire career on whether view X is right or wrong, getting X right really matters in a way it doesn’t if two philosophers disagree over whether Plato really meant A, A*, or A**. The second is that academic philosophers (i.e. people who have done philosophy at university for more than a couple of years) just accept that people will have different intuitions about topics: it’s normal and there’s nothing you can do about it. If I meet a Kantian and get chatting about ethics, I might believe I’m right and he’s wrong (again, it’s mostly ‘hes’ in philosophy) but there’s no sense fighting over it. I know we’re just going to have started from different places. Whilst there are lots of philosophical types among effective altruists, by no means all EAs are used to philosophical discourse. So when one EA who has strong views runs into another EA who doesn’t share his views, it’s more likely one or both of them will assume there must be a fact of the matter to be found, and one obvious and useful way to settle this fact is by intuition jousting it out until one person admits defeat.

I admit I’ve done my fair share of IJing in my time. I’ll hold my lance up high and confess to that. Doing it is fun and I find it a hard habit to drop. That said, I’ve increasingly realised it’s worth trying to repress my instincts because IJing is counter-productive. Certainly I think the effective altruist community should stop. (I’m less concerned about philosophers because 1. they do it less and 2. lots of philosophy is low-stakes anyway.)

What should people do instead? The first step, which I think people should basically always take, is to stop before you start. If you realise you’re starting to spur your horse for the charge, you should recognise this will be pointless. Instead you say “Huh, I guess we just disagree about this, how weird”. This is the ‘lower your lance’ option. The second step, which is optional but advised, is to try to gain understanding by working out why a person has those views: “Oh wow. I think about it this way. Why do you think about it that way?” This is more the ‘dismount’ option.

As Toby Ord has argued, it’s possible for people to engage in moral trade. The idea is that two people can disagree about what’s valuable but it can still be better for both parties to cooperate and help each other reach their respective moral goals. I really wish in the EA community I saw more scenarios where, should an X-risk advocate end up speaking to an animal welfare advocate, rather than each dismissing the other person as wrong or stupid (either out loud or in their head), or jousting over who supports the right cause, they tried to help each other progress their thinking on how to better achieve their objectives. From what I’ve seen, philosophers tend to be much better at taking it in turns to develop each other’s views, even if they don’t remotely share them.

If we really do feel the need to joust, can’t we at least attack the intuitions of those heartless bastards over at Charity Navigator or the Make-a-Wish Foundation instead?*

*This is a joke, I’m sure they are lovely people doing valuable work.
