How hard should you dodge motivated reasoning?
Sometimes, the things we believe are convenient, or make us happy. For example, we might believe that our partner loves us, that humans are fundamentally good, that the future will be a post-scarcity paradise, or that there’s a jar of delicious hummus in the fridge.
Sometimes, the things we believe are inconvenient, or make us sad. For example, we might believe that our crush isn’t into us, or that our capitalist economic system is fundamentally broken, that the future will be dystopian, or that actually, we finished the hummus yesterday.
How convenient a belief is, or how happy it makes us, is not connected to how true it is.
There are some people who pretty much only believe things that are convenient for them (particularly in the political or philosophical realm). If an idea would make them sad or anxious if they believed it, they simply refuse to accept it. I expect that none of my readers would defend this. But it’s worth noticing that we can have the opposite bias — we can over-correct against motivated reasoning, and be too unwilling to adopt beliefs that are convenient for us, or that make us happy.
I keep getting stuck on this. I find that I’m unwilling to change my beliefs from sadder or more pessimistic ones to happier or more optimistic ones, because I’m aware that it’s in my interests to adopt the more optimistic ones. And I think this wariness is reasonable, but, like… I’m probably wrong in a pessimistic direction sometimes, right? There are some beliefs that are convenient for me and that I’m strongly convinced of, such as ‘some people like me’, ‘I have enough money in my bank account to pay my rent next month’, ‘same-sex attraction is not immoral’ and ‘the sun will most likely rise tomorrow’. It would be silly if I started doubting my deeply-held liberal principles, or fearing that my bank is lying to me about my balance, just because those things happen to have good consequences for me. There’s a risk that I’m engaging in motivated reasoning, but I also have solid reasons for actually believing these things.
So I’m clearly willing to hold some beliefs that are convenient for me, but I’m wary of updating my beliefs from ones that are less convenient to ones that are more convenient. Also, when I’m genuinely unsure about a question, and some answers to the question are much better for me than others, I find it very difficult to think about; if I conclude that the pessimistic answer is right, then I have to be sadder, whereas if I conclude that the optimistic answer is right, I won’t trust the conclusion, because maybe I just got there by motivated reasoning.
Here are some things I’ve found difficult to think about because I’m stuck in this trap:
Having kids/‘creating happy people’/population ethics
I believe that it’s morally neutral to have children, or (for the population ethicists out there) ‘bring beings into existence’. This is convenient for me, because I don’t want to have children. I used to worry that this was motivated reasoning; now I’m less worried about that, because I realised that by the same logic, I should also worry about other life choices that I think are uncomplicatedly morally neutral or good, such as being poly, reading novels, or dyeing my hair pink. (But maybe my modus tollens is someone else’s modus ponens here, and I should be agonizing more about whether my pink hair really increases global utility.)
Carbon offsetting
I worry that flying is morally wrong because it’s so bad for the environment. Lots of people I know think that flying is morally ok if you do carbon offsetting. I’m interested to think more about the ethics of carbon offsetting (and moral offsetting generally), but I feel stuck because I want it to be true that it’s morally ok to fly if I offset, and I worry that this desire will get in the way of me finding the truth.
Having nice things
Is it justifiable for altruistic movements (e.g. effective altruism, social justice movements) to spend money on (seeming) luxuries if they think it’ll advance their altruistic goals? Relatedly, is it justifiable to spend money on luxuries for myself, if it’ll help me be more productive or altruistic? To some extent, this is obviously justifiable: if you have back pain because you work for 6 hours in a shitty chair every day, that’s probably affecting your work, and you should probably just buy fancy ergonomic furniture, if you can afford it. If a modest treat would greatly lift your morale, just do it! Charities should definitely pay their staff decent salaries so they don’t have to worry about meeting basic expenses.
However, I’m a bit suspicious of ‘I am buying X nice thing because it will help me be more altruistic’ (both in myself and others). I recognize the tendency to do motivated reasoning here. To be clear, I’m not actually super frugal - I value my own happiness and comfort intrinsically. But I don’t think that all of my ‘luxury’ purchases can be justified by saying ‘this will help me be more altruistic somehow’.
Is AI going to kill us all?
Lots of my friends are worried about misaligned artificial intelligence. I find this really hard to think about, partly because I don’t know how computers work, but partly because I really don’t want to believe (as some people do) that AI is extremely likely to kill me and everyone I care about within the next few decades. This seems like an extremely stressful thing to believe, and I know people whose mental health has got really bad because of their belief that it’s true. I’m not sure how to reason clearly about this when one conclusion is so clearly preferable to the other. Yet it also doesn’t seem rational to just adopt the most pessimistic conclusion in order to avoid believing things that are convenient.