The author defends the claim that there are cases in which we should promote irrationality by arguing (1) that it is sometimes better to be in an irrational state of mind, and (2) that we can often (purposefully) influence our state of mind via our actions. The first claim is supported by presenting cases of irrational belief and by countering a common line of argument associated with William K. Clifford, who defended the idea that having an irrational belief is always worse than having a rational one. In support of the second claim, the author then explains what the control we have over our beliefs might look like. In conclusion, the author suggests that the argument of this essay is not restricted to the irrationality of beliefs, but can be applied to irrational states of mind in general (like desires, intentions, emotions, or hopes). In an outlook on the “ethics of belief” debate, the author points out that the argument of this essay need not conflict with evidentialism, but does so when combined with another plausible claim about the meaning of doxastic ought-statements.
* Winner of the first prize for the 2016 essay competition for students sponsored by the Gesellschaft für Analytische Philosophie (GAP) in cooperation with the Grazer Philosophische Studien. The question for the 2016 competition was: Ist es immer gut, vernünftig zu sein? [Is it always good to be reasonable?]. After careful study, the jury (consisting of six members) chose two of the 20 submitted essays as deserving of the first prize. This is one of the two winning submissions.
Suppose you were raised in a religious family. As a result of your upbringing you believe firmly in an omnipotent and benevolent god. Furthermore, your belief is of utmost importance to you. Imagining a world without such a god is very distressing for you. But now, as you have matured, you are able to reflect on this belief, and you are aware that there are considerations which conflict with it. In fact, after carefully considering the problem of evil (i.e., the problem of how to reconcile the suffering in the world with the existence of an omnipotent, benevolent god), you believe that this problem poses a serious threat to your belief. You are convinced that, epistemically, you should no longer believe in god. But, nevertheless, you continue to believe in god. This is possible because your religious belief was deeply ingrained by your upbringing and because of the importance of this belief in your life. Thus, you believe that god exists and you believe that there are decisive epistemic reasons not to believe in god. This is a paradigmatic case of epistemic irrationality.1 In order to be epistemically rational, you should abandon one of your beliefs. But is it good for you to be rational in this case? What should you believe all things considered?
Firstly, I think that in this case – other things being equal – it is better all things considered if you are epistemically irrational, i.e. it is better if you continue to believe in god.2 Secondly, I think that if you know of this, it has practical implications for what you should do. If you cannot ignore the problem of evil, but you know that believing in god is better all things considered, then you should – if possible – adopt reasonable strategies to uphold your belief in god. Since the belief that results from your implementation of those strategies is irrational (it conflicts with your belief about the problem of evil), you should promote irrationality.
I will argue for these points in Sections 2 (that irrationality is sometimes better for you) and 3 (that you should sometimes promote irrationality). In conclusion, I will suggest that if my claims are true with respect to irrational beliefs, they also apply to other mental states: sometimes we know that it is better to have irrational mental states, and if it is in our power to produce them, we should do so. My argument thus defends the general possibility of what Derek Parfit called “rational irrationality” (Parfit 1984, 12 f; 2001, 27). In an outlook, I will finally point out implications of my argument for the debate between evidentialism and pragmatism in the ethics of belief (Section 4).
2 Why it is Better for You to be Irrational
Someone who disagrees with me has to argue that believing in god is not better for you all things considered. In this section, I defend my claim that it is sometimes better to be (epistemically) irrational against what I take to be the most common way of objecting to it.
In his famous paper on “The Ethics of Belief,” William K. Clifford warned against the dangers of believing anything upon insufficient evidence:

No real belief, however trifling and fragmentary it may seem, is ever truly insignificant; it prepares us to receive more of its like, confirms those which resembled it before, and weakens others; and so gradually it lays a stealthy train in our inmost thoughts, which may some day explode into overt action, and leave its stamp upon our character for ever. […] And no one man’s belief is in any case a private matter which concerns himself alone. […] Every hard-worked wife of an artisan may transmit to her children beliefs which shall knit society together, or rend it in pieces.

Clifford’s overall argument can, very generally, be interpreted in the following way:
- (1) Truth is an important (prudential or moral) good, and
- (2) we endanger this good with every instance of epistemic irrationality;
- (Conclusion) we should never have irrational beliefs.3
(2) is true for conceptual reasons: If we do not believe what is strongly supported by our evidence, we are in danger of losing the truth.
But why should truth be so valuable as to allow the conclusion that we should never have irrational beliefs? Truth is not something you should promote no matter what. There are other goods that can outweigh the value of truth. If I know how a movie ends, I should not tell everybody in order to produce more true beliefs (Kelly 2003, 626). This shows that I should not promote every truth at any cost.
However, it might still be the case that we should promote truth, other things being equal. If we could produce more true beliefs, and nothing else would change, we should do so. There are two worries with this claim. The first is that, even if it were true, the conclusion that we should never have irrational beliefs would not follow. This is because sometimes things are not equal. Sometimes we lose something important if we aim at truth by being epistemically rational, as in our case above. Secondly, the claim is not true. We should not produce more true beliefs if the opportunity arises and other things are equal. This is because some truths have no value at all. There is no value in having all true beliefs about how many blades of grass are in which front lawn. If I could acquire all those beliefs just by deciding to do so, I would still have no reason at all to do it. Given that I cannot use this knowledge in some way – e.g. by winning bets or by astonishing others with it –, it does not have any (non-epistemic) value.4
Even so, believing the truth about important matters may seem to be a good worth promoting, other things being equal. So maybe after all, you should not be irrational in the case of your believing in god, because you deal with an important matter. But in this case, even if we grant that it is an important matter, the value of your belief does not depend on its being true. It is only important to maintain the belief-state with the content that god exists, no matter whether this belief is true or not. You might be better off if god exists, but you do not have control over that. It is only in your power to influence your state of belief. What matters is the value of your belief and the consequences of having it, not whether its content is true.5
Someone like Clifford could grant that truth is not always valuable but still argue that the personal value of your belief in god is outweighed by other values which would be promoted by being rational. Pursuing such a strategy, one could claim that it is in fact impossible to be irrational only once, at least if such important matters as your religious belief are at stake: If you are irrational in such a case, this means that your epistemic capacities do not work properly in general. Arguably, this is something we should avoid.
It is important to note that it is an empirical question whether one instance of irrationality always produces more irrationality. However, being irrational does not endanger your epistemic capacities if your belief in god is sufficiently isolated from the rest of your system of beliefs. If an irrational belief is sufficiently isolated, it has no influence on whether you are epistemically rational in other cases. It is part of everyone’s experience that otherwise very rational people tend to be irrational when it comes to certain topics. People often commit themselves to contradictions when they discuss our relation to animals (eating them versus loving them), for example. This does not mean that they cannot be completely rational when thinking about other issues.
We can also think of other cases than religious belief where a valuable irrational belief seems to be sufficiently isolated from the rest of our beliefs. Allan Hazlett cites decisive psychological evidence for the idea that a biased self-conception about one’s own traits, abilities, and the amount of control one has over one’s own life, as well as an unrealistically optimistic outlook concerning one’s own future will make one less (likely to be) depressed (Hazlett 2013, 44–52). Here, like in the case of believing in god, being irrational is better for one’s own wellbeing. Hazlett also notes that this phenomenon “is highly ‘selective,’ both in the contents of biased beliefs and in the contexts in which said bias manifests itself” (61) and refers to psychological studies which provide us with evidence for this (63 f). I conclude that there are instances of epistemic irrationality in which we do not expose our epistemic capacities to the danger of general irrationality.
A Cliffordian might still object that the personal value of your belief in god (or of your biased self-conception) may be outweighed by the potential danger for other people – that is, by harboring irrational beliefs about a certain topic you may cause other members of society to have more irrational beliefs (for example, your children). Thus, we collectively lose the truth on certain questions. Even if we are better off prudentially, we are never better off morally if we have irrational beliefs.
I think we should reply to this worry with two comments: First, it is not inconceivable that we damage no one else’s epistemic capacities when we are epistemically irrational. We might live alone, or never tell anybody what we believe and never act on it in a way that transfers the irrational belief to someone else or does some other kind of harm (Papineau 2013, 67 f). Second, even if we damage someone else’s epistemic capacities, it is not clear that this is bad for this person. Especially in the case of an over-optimistic self-conception, it seems to be valuable to transmit belief-forming practices to other people which enable them to form valuable irrational beliefs about themselves. You might, for example, educate your children in such a way that they are more self-confident about their own capabilities than their evidence actually justifies. Since such a bias can arguably promote our wellbeing, upholding doxastic practices which promote such irrational and optimistic beliefs about oneself is a good thing to do.
My discussion shows that there is no reason in principle to deny that there are in fact cases in which it is better to be irrational. This is because the state of affairs that you have an irrational belief is a state of affairs like any other. As such, it might be something you should avoid prima facie – like the state of affairs that you are in pain. However, every prima facie or pro tanto bad state of affairs can be good all things considered: you might be better off being irrational, or you might be better off if you endure the pain (when the doctor has to cause pain for the sake of your health).6
It may be complex to tell in a given case whether your irrational belief would be better all things considered. If this is so, you might be on the safe side if you try to be epistemically rational. However, as William James noted before me, it is not better “to keep out of battle forever than to risk a single wound” (James 1896, 19). Whether you do so – whether you risk being irrational – is also a question of your character. And, possibly, it is often better to be someone who takes the risk.
3 Why You Should Promote Irrationality
My conclusion of the last section – that sometimes it is better to be epistemically irrational – might be of no practical interest for us. A lot of things would be better for us. It would maybe be better for some of us if we were taller. Even so, this would have no practical implications, since we cannot influence how tall we are. If, as David Hume thought,7 we were at the mercy of our belief-forming equipment, if beliefs just “came to us” like natural disasters, then there would be no point in saying that it would be better for us to have this rather than that belief, because it would not be true that we should have irrational beliefs – at least the “should” would not be normative or prescriptive in the sense that it gives us advice on how to lead our (cognitive) lives.8
Luckily, it is not like that. We can do a great deal to influence our beliefs. We can influence our belief on whether it will rain tomorrow by checking the weather forecast. We can influence our belief on whether we have a free will by attending philosophy (or neuroscience) classes. We can even influence our belief on whether our children are the next Picassos, or Einsteins, or Mother Teresas, by focusing our attention on certain features of their behavior which show them in the right light with respect to our aim (i.e., the desired belief).9
Our capacity for self-deception provides the means for you to maintain your irrational belief in god. When you encounter the evidence against your belief (the problem of evil), you are, first, able to do something in order to maintain your belief. As Herbert Fingarette points out, self-deception is not a mechanism in us that operates without our active participation (Fingarette 1969, esp. Ch. 3). By directing our attention to certain features of the situation, and by interpreting them in a certain way, we can keep ourselves from considering evidence that might bring us to believe something we do not want to believe (see the child prodigy example above). Thus, in the case of your belief in god, you could try not to think about the problem of evil, avoid talking to non-believers and instead seek the company of believers by continuing to engage in the religious life, etc.10 Secondly, you know that doing this is likely to lead to the result that you will uphold your belief. Since you also believe that it is better for you to uphold the irrational belief in god, it is rational for you to keep it up. You have good reasons to perform actions which lead to an epistemically irrational belief. You should perform them in order to continue to be irrational.11
Similar points apply to Hazlett’s example of a biased and over-optimistic self-conception (if this example seems more convincing). Even if this bias is largely due to unconscious mechanisms (Hazlett 2013, 43), we can also influence the bias purposefully. Since we know about it, and since we know that through the mechanisms of self-deception we can actively influence what beliefs we form about ourselves, there is a lot we can do to promote valuable irrational beliefs about ourselves (directing our attention on certain aspects of ourselves, interpreting them in the right light).12
Even if we cannot be irrational “at will,”13 we can cultivate useful irrational beliefs. My claims, however, are not restricted to irrational beliefs. Other mental states can also be irrational if they do not match their intentional object in the right way. Our desires are irrational if we desire something we believe not to be desirable; our intentions are irrational if we intend something we think we should not do; and so on. If we can conceive of examples where we know that it is better for us to have irrational mental states, and where we are able to bring them about, then there are cases in which we should do so. Since we can think of such cases when considering belief, there is no reason to deny that we can conceive of them when considering other mental states, like desires, motives, emotions, intentions, or hopes.
My conclusion does not downgrade the role reason plays for a responsible, self-reflective and autonomous life. On the contrary, it highlights the multifaceted ways in which we can influence ourselves by using reason: Even if it is sometimes better to be irrational, it is also true that in these cases, it is rational to do something in order to become irrational.
4 Outlook: Implications for the Ethics of Belief (and the Ethics of Mind)
Does my argument imply some form of pragmatism with respect to the question of what we ought to believe? Not necessarily. Shah defines pragmatism as the view that there are “at least some non-evidential reasons for belief” (Shah 2006, 482). Following this definition, pragmatism is the view that the fact or consideration that it is good for me to have a belief would (sometimes) be a reason to have this belief. My view does not imply this. It only claims that there are practical reasons to perform certain actions which lead to useful beliefs. This does not by itself commit me to any substantial view about what can be counted as a reason for belief. Up to this point, my argument has only concerned the question of what beliefs we ought to bring about, and thus concerned reasons for action. Viewed from this perspective, my argument is about ordinary ethics, not about the ethics of belief. Thus, most evidentialists, i.e., people who think that only evidence determines what to believe (or what beliefs to have), will not take themselves to be in trouble if they agree with my argument up to this point.
However, given a further assumption I deem plausible, an interesting version of pragmatism follows. Matthew Chrisman (2008) argues that doxastic ought- or should-sentences like “You should believe that the earth is round” can be understood analogously to state-Oughts like “The lights should be turned off by eleven.” Sentences of this kind, i.e. sentences about what state should occur, usually imply that somebody should do something in order to produce the state in question (“that the lights are turned off by 11”). In the same way, “You should believe that the earth is round” must be understood as implying that someone (the subject, or even other people, like your teachers or your parents) should bring you to believe that the earth is round, according to Chrisman.
If some doxastic ought-sentences can be understood in this way, it follows that reasons for actions which produce certain beliefs would (also) determine what we ought to believe (and not just what beliefs we ought to bring about). This form of pragmatism does not imply that there are practical reasons for belief, but nevertheless claims that what we ought to believe is (also) determined or made true by practical reasons (for actions which probably result in states of believing). This would show that the question of what to believe (or what beliefs to have) cannot always be distinguished from the question of what beliefs we ought to bring about. However, there would still be no non-evidential reasons for belief. The practical reasons in question are still reasons for action.
Still, if evidentialists also claim that the only reasons which determine what we ought to believe are provided by evidence, then evidentialism would be in trouble if there are some doxastic state-Oughts. For then reasons for action would also determine what we ought to believe – and these are not provided by evidence but, for example, by the value of the consequences of the action. To defend their position, evidentialists would have to argue either that doxastic Oughts can never be understood as state-Oughts, or that evidentialism is restricted to a special type of doxastic Oughts which excludes state-Oughts. Evidentialism would thus either be wrong or only true in a restricted sense.
If we apply the idea that there may be “state Oughts” which imply “Oughts to do” to mental states in general (not only to beliefs), it would follow that the question of what mental states we should have, all things considered, is determined by what states we should bring about. Thus, not only the question of whether to promote being rational or irrational but also the question of whether to be rational or irrational would be answered (at least partly) by practical reasons for actions. Hence the answer will partly depend on the value of the (expected) consequences of those actions, including the value of having the mental states in question.14
Thanks to Valeria Zaitseva for providing me with invaluable input, clarification, and objections in numerous discussions on the topic, as well as for proofreading this paper. For further helpful, inspiring comments, as well as for mentioning many possible objections which deserve a much fuller treatment than I could afford here, I would like to thank Dorothee Bleisch, Inga Bones, Miruna Gavaz, Leonie Junker, Max Kocher, Johann Roch, Steffen Lesle, Prof. Konstantinos Sargentis (Crete), Konstantin Weber, as well as the members of the essay prize jury.
James, William 1896. “The Will to Believe.” In: The Will to Believe and Other Essays in Popular Philosophy, and Human Immortality. New York: Dover Publications 1956, 1–31.
Kelly, Thomas 2003. “Epistemic Rationality as Instrumental Rationality: A Critique.” Philosophy and Phenomenological Research 66, 612–640.
Owens, David J. 2017. “Value and Epistemic Normativity.” In: Normativity and Control. Oxford: Oxford University Press (forthcoming).
Papineau, David 2013. “There Are No Norms of Belief.” In: Chan, Timothy (ed.), The Aim of Belief. Oxford: Oxford University Press, 64–79.
Parfit, Derek 2001. “Rationality and Reasons.” In: Egonsson, Dan et al. (eds.), Exploring Practical Philosophy: From Action to Values. Aldershot: Ashgate, 17–39.
Schmidt, Sebastian 2016. “Können wir uns entscheiden, etwas zu glauben? Zur Möglichkeit und Unmöglichkeit eines doxastischen Willens.” [Can We Decide to Believe Something? On the Possibility and Impossibility of a Doxastic Will.] Grazer Philosophische Studien 93, 571–582.
For the purposes of this essay, I will follow Thomas Kelly: “by epistemic rationality, I mean, roughly, the kind of rationality which one displays when one believes propositions that are strongly supported by one’s evidence and refrains from believing propositions that are improbable given one’s evidence” (Kelly 2003, 612).
I presuppose that it is impossible or at least very hard for you to abandon the belief that the problem of evil presents decisive epistemic reasons against your belief, e.g. by ignoring the evidence against your belief. So there is no point in trying to uphold your rationality by trying to forget about the problem of evil – it is much easier for you to uphold your belief in god instead. I will say something about controlling belief in Section 3.
Premise (2) mentions rationality, even though Clifford does not use the word “rationality” in the quotation above. However, his central claim that “it is wrong always, everywhere and for everyone, to believe anything upon insufficient evidence” (Clifford 1877, 77) can be interpreted as prohibiting epistemic irrationality in the sense defined by Kelly (see fn. 1).
See also David Owens’ argument against the pro tanto value of truth (Owens 2017): If I believe that I will die tomorrow, does this mean that I have a reason to kill myself tomorrow so as to ensure that my belief is true?
Pascal already claimed that, due to uncertainty and due to the potential harm caused by disbelief in god (eternal punishment), we should do what we can to make ourselves believe in god (Pascal 1670, 227–231). I here assume that you will not get punished or rewarded for either belief or disbelief in god. I also assume that your life will not change dramatically depending on which belief you choose, except that you will feel much more comfort in this world and less distress if you continue to believe in god. This is part of the “other things being equal” clause in the description of the example.
While “prima facie” is an epistemic notion, “pro tanto” is not. Something that is prima facie bad, that is “bad on the first view,” might not be bad on the second view. Furthermore, what seems bad to us need not be bad in any respect (i.e., it need not be pro tanto bad).
See Papineau 2013 on the idea of ought-sentences with and without “normative content,” esp. p. 69.
This kind of indirect control is nicely described in Walgenbach 2016, who argues convincingly that having this control is sufficient for talking about “decisions to believe” in a meaningful way.
These practices can be considered as self-deception insofar as you engage in them with the (unconscious) purpose of upholding your irrationality (they need not be self-deceptive if they are done from other motives). It is important to note that you are not trying to become rational again by manipulating your belief that the problem of evil presents decisive evidence against your belief in god – I assumed that this is no reasonable option for you currently (see fn. 2). The only strategy open to you is to strengthen your belief in god.
The assumption that we can sometimes know that it is better to have an irrational belief may seem too strong. I think that such cases are conceivable, and that I presented two of those cases above. However, even if one does not accept this, one could still accept that we sometimes think that we know that irrational belief is better. It still follows that it would be rational to try to uphold the irrational belief in these cases. It is a further question, however, whether this would already imply that we should try to uphold it.
Parfit noted that when we engage in the project of causing a good belief in us because we see that having the belief would be good, we may have to forget our original motive at some point so that it becomes possible (or at least easier) to maintain the belief (see Parfit 1984, 41; see also the thought experiment of the “Credamites” in Bennett 1990, 93, which relies on the possibility that we forget our motive). However, here we should remember the distinction between believing that p and thinking about (the truth of) p. If I constantly think about the fact that I deceived myself into believing something, it will be hard or impossible to maintain my belief. Just believing that I deceived myself, however, does not imply attending to this fact.
See Schmidt 2016, where I argued that we cannot form beliefs “at will” (in the sense of forming them immediately for practical reasons).