Doxastic access as recollection: A revised doxastic presumption

The epistemic agent cannot be epistemically responsible for holding a belief unless it is justified. In order to justify a belief in question, the epistemic agent must appeal to other beliefs that she holds which support it. However, the beliefs to which she appeals for support must themselves be justified by other beliefs, and those beliefs must be justified -- and so forth. It appears that belief justification requires the epistemic agent to make an infinity of appeals to an infinity of beliefs, and this is absurd; it is obviously impossible for an epistemic agent to hold an infinity of beliefs. The foundationalist approach to solving this problem -- the “infinite regress problem” -- is to posit “basic beliefs”, which are beliefs that require no justification by appeal to other beliefs; they are inherently justified. If certain beliefs are inherently justified, then it seems that the epistemic agent can ultimately justify any belief in question by appealing to them, thus ending the regress of appeal and justification. But foundationalism is not without its problems, the most troublesome being that it seems impossible for an epistemic agent to move from holding a basic belief to holding beliefs based on it. For instance, if the epistemic agent holds the basic belief that there is a patch of red appearing in her field of vision, it seems impossible (by a purely foundationalist approach) for her to make the move to holding the belief that there is a red ball before her. For her to believe that the red patch in her vision is a red ball requires justification; and in this way the regress problem returns to plague foundationalism.

An alternative to foundationalism is coherentism, which explains the justificatory process by way of the epistemic agent’s appeal to the coherency of a finite number of beliefs. In “The elements of coherentism”, Laurence BonJour sketched how he believed this might work. As it turns out, BonJour’s coherentism cannot be viable unless the epistemic agent presumes, in justifying a belief in question, that she represents her global belief system to herself with approximate accuracy; BonJour called this presumption “the doxastic presumption”. As it does not seem possible for the epistemic agent to represent her global belief system to herself even approximately, it seems that for her to presume that she can is epistemically irresponsible; and if she is epistemically irresponsible in justifying a belief in question, then the method she is using to do so must be altered. I propose that BonJour’s coherentism can be altered by revising his doxastic presumption, so that the epistemic agent need not be epistemically irresponsible in justifying a belief in question; the goal of this paper is to sketch how this can be done.

Before I revise the doxastic presumption, I should explain why it must be made; it is not possible to understand the revision that I propose without an understanding of how the doxastic presumption is employed. According to BonJour’s coherentism, when one of the epistemic agent’s beliefs comes into question, she appeals to the fact that it coheres with her global belief system in order to justify it. But she cannot be justified in making this appeal unless she is capable of representing her global belief system to herself accurately enough to determine whether the belief in question increases or decreases the coherency of her global belief system. BonJour’s coherentism depends on the principle that as the epistemic agent interacts with the world, verifying and denying previously held beliefs, all the while forming new ones, the coherency of her global belief system increases. From this principle it supposedly follows that true beliefs are more prevalent in more coherent global belief systems than in less coherent ones. So if the epistemically responsible epistemic agent is interested in forming true beliefs, she must be able to verify that rejecting a belief in question will decrease the coherency of her global belief system in order to maintain it; and therefore she must represent her global belief system to herself accurately enough to do so. (Of course, just how accurate this representation must be depends on how confident the epistemic agent must be in a belief in question in order to be justified in holding it; I will not have space in this paper to address this more fully.)
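The decision rule just described can be caricatured in code. The following is a toy model of my own construction, not anything found in BonJour's text: beliefs are strings, "support" is a crude pairwise relation, coherency is scored as the fraction of belief pairs that stand in that relation, and the agent retains a belief in question just in case rejecting it would decrease that score. All of the names and the scoring heuristic are assumptions made purely for illustration.

```python
# Toy model (an illustration, not BonJour's formalism): score a belief set
# by the fraction of its pairs that stand in a given support relation,
# and retain a belief iff dropping it would lower that score.
from itertools import combinations

def coherency(beliefs, supports):
    """Fraction of belief pairs that support one another (0.0 if no pairs)."""
    pairs = list(combinations(sorted(beliefs), 2))
    if not pairs:
        return 0.0
    supported = sum(1 for a, b in pairs if (a, b) in supports or (b, a) in supports)
    return supported / len(pairs)

def should_retain(belief, beliefs, supports):
    """Keep the belief in question iff rejecting it decreases coherency."""
    return coherency(beliefs - {belief}, supports) < coherency(beliefs | {belief}, supports)

beliefs = {"rain comes from clouds", "clouds contain water", "the sky is blue"}
supports = {("rain comes from clouds", "clouds contain water")}

# Rejecting the supported belief would drop the score, so it is retained;
# the unsupported belief is not.
print(should_retain("rain comes from clouds", beliefs, supports))  # True
print(should_retain("the sky is blue", beliefs, supports))         # False
```

Note that `should_retain` must compute a score over the whole belief set: this is precisely the representational demand on the agent that the paper goes on to question.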

It may not be clear just why it does not seem possible for the epistemic agent to represent her global belief system to herself. This is a matter of justifying the metabeliefs that she would have to hold about her global belief system. It seems that if the coherency of her global belief system is to serve as justification for a belief in question, the epistemic agent has to hold the metabeliefs that her global belief system is fairly coherent, and that this coherency accounts for the accuracy of the truth assessments she makes on the basis of her beliefs. Together, these metabeliefs seem to imply that the epistemic agent must hold the metabelief that most of her beliefs contribute to the truthfulness of her global belief system. This is problematic because it is impossible for the epistemic agent to justify this metabelief; as BonJour suggests, coherency is valueless as justification in the case of a global belief system that is largely mistaken. And the possibility of an epistemic agent’s global belief system being largely mistaken is no small concern. But, one might ask, how could this even be possible if the epistemic agent acts on her beliefs every day, and gets along fine?

When an epistemic agent is required to justify a belief in question, she accesses other beliefs that she relates to it. If all of these accessed beliefs are themselves justified, and they are related to the belief in question by rules of coherency, then they would seem to justify it. But this is not the whole story. If within her global belief system there are false beliefs that are coherent with the accessed belief system, and therefore serve as a basis for the epistemic agent’s justification by virtue of the metabeliefs that are implicit in BonJour’s doxastic presumption, it would be negligent to grant her appeal to coherency. This is because the epistemic agent’s justification, which is the metabelief that her global belief system is mostly true, totally ignores the possibility that her global belief system is not mostly true. What compounds this complication is that false belief formation seems to be a basic cognitive process; underlying most true beliefs are false ones that simplify the nature of whatever the true beliefs regard (for instance, one might hold the true belief that rain comes from clouds despite having many false beliefs about why this happens). So if an epistemic agent is required to hold the metabeliefs that her global belief system is highly coherent and that the majority of her beliefs are true when justifying a belief in question, then appealing to coherency becomes a questionable means of justification for her by virtue of her inability to represent her global belief system to herself.

So I propose that the epistemic agent cannot represent her global belief system to herself, and therefore that she should not presume that she can. The alternative I propose is that she can represent to herself the belief system that she actually accesses in justification, and that by presuming her ability to do this she is justified in appealing to the coherency of her accessed belief system for justification. So my revised doxastic presumption is that the epistemic agent is able to represent her accessed belief system to herself in justifying a belief in question. This requires that the epistemic agent has a subconscious cognitive process that allows her to access beliefs in her global belief system that are relevant to the belief that has been called into question, which does not require her to represent her global belief system to herself even implicitly. This seems like a requirement that every epistemic agent can fulfill.
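The retrieval step this paragraph posits can also be sketched as a toy model. The word-overlap heuristic below, and every name in it, is my own assumption for illustration only; and the analogy is imperfect, since the code must still iterate over the global set, whereas the subconscious process in question would not. The point of the sketch is only that the global set is never scored or represented as a whole: justification would appeal to the coherency of the returned subset alone.

```python
# Toy sketch of recall-like access (an illustrative construction, not a
# claim about cognition): retrieve only those beliefs that share a word
# with the belief in question. No coherency score is ever computed over
# the global set; it is never represented as a whole.
def accessed_belief_system(belief_in_question, global_beliefs):
    terms = set(belief_in_question.lower().split())
    return {b for b in global_beliefs if terms & set(b.lower().split())}

global_beliefs = {
    "the sky is the atmosphere above the ground",
    "blue looks like that",
    "rain comes from clouds",
}
accessed = accessed_belief_system("the sky is blue", global_beliefs)
# Only the two relevant beliefs are recalled; "rain comes from clouds"
# shares no words with the belief in question and is left untouched.
```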

I propose that accessing beliefs is similar to accessing memories, and that beliefs are usually things that have to be remembered. It is certain that the cognitive process of recollection never requires the simultaneous representation of every memory that the epistemic agent has locked away in her brain, and indeed it seems that this process never requires the representation of any memories other than those that are being accessed; or in other words, when the epistemic agent recalls a memory, that memory is the only thing that is being represented. So it seems that beliefs that have to be recalled as memories are recalled without any sort of representation of the epistemic agent’s global belief system. Now, while it is clear that the epistemic agent holds beliefs that she does not have to recall as memories -- such as beliefs about her immediate environment -- it is not clear that these beliefs require her to represent her global belief system to herself. Quite to the contrary, it seems that the belief systems that she accesses in order to justify such beliefs consist at least partially of beliefs that must be recalled. If the epistemic agent forms the belief that the sky is blue while gazing up at the sky, she accesses a belief system which consists of beliefs such as, “the ‘sky’ is the atmosphere above the ground” and “‘blue’ looks like that” -- and these beliefs must be recalled as memories. So it seems that all beliefs must be justified by accessing belief systems consisting at least partially of recalled beliefs. And if this is the case, then it seems that there is never a need on the part of the epistemic agent to represent her global belief system to herself in justifying a belief in question.

According to the revised doxastic presumption that I propose, the epistemic agent must appeal to the coherency of the accessed belief system, rather than her global belief system, in justifying a belief in question. In order to appeal to the coherency of an accessed belief system in justifying a belief in question, the epistemic agent must hold the metabelief that the coherency of the accessed belief system is an indication of the truth of the belief in question; and therefore she must also hold the metabelief that the coherency of the accessed belief system is increased by the truth of the belief in question. These metabeliefs imply that the epistemic agent must hold the metabelief that the cognitive process by which she accesses her beliefs reliably accesses the right beliefs; and this is reasonable because, in representing the beliefs that she accesses, it seems clear that she can verify the relevance of the accessed beliefs to the belief in question (her ability to do so is my revised doxastic presumption). But for the epistemic agent to hold the metabelief that the coherency of her accessed belief system reliably indicates the truth of a belief in question, she must also hold metabeliefs about the relationships among the belief in question, the accessed belief system, and her global belief system. That is, she must hold the metabelief that in verifying the belief in question, she is increasing the truthfulness of her global belief system; otherwise, the belief in question would have to be untrue. She must therefore also hold the metabelief that the increased coherency of the accessed belief system increases the truthfulness of her global belief system.

These metabeliefs do not imply that she must hold the metabelief that accepting the belief in question will increase the coherency of her global belief system; as I mention above, her global belief system may be largely mistaken, and by verifying the belief in question which is taken to be true she may in fact be decreasing the coherency of her global belief system. Clearly, the epistemic agent’s metabelief that verifying a belief in question increases the truthfulness of her global belief system is not problematic for my revised doxastic presumption; she need not represent her global belief system with any accuracy whatever in order to hold it. It should be noted that the epistemic agent may be mistaken in holding metabeliefs about how verifying the belief in question affects the truthfulness of her global belief system (that is, the belief in question may in fact decrease the truthfulness of her global belief system); but this does not undermine her appeal to the coherency of the accessed belief system in verifying the belief in question.

By revising BonJour’s doxastic presumption, I have proposed an alternative to the coherentism sketched in “The elements of coherentism”. Rather than appealing to the coherency of her global belief system, the epistemic agent appeals only to the coherency of the belief system that she accesses by the cognitive process of memory recollection. This is an improvement on BonJour’s coherentism, because it makes a more reasonable demand of the epistemic agent’s cognitive abilities, so that the metabeliefs she must hold in order to justify a belief in question become less susceptible to skepticism. Whereas according to BonJour’s coherentism the epistemic agent has to hold the metabelief that her global belief system is mostly true, such unjustifiable metabeliefs never arise with the acceptance of my revised doxastic presumption.




The only source material for this paper is BonJour's "The elements of coherentism", reprinted in the anthology Knowledge: Readings in Contemporary Epistemology, edited by Sven Bernecker and Fred Dretske (Oxford University Press, 2000).

This has been a NODE YOUR HOMEWORK presentation. Sorry that there really isn't a lot of linking; but it's tough to make relevant links from an analytic paper. I do hope to fill in some of the nodeshells that I'm creating with some of these links over the summer, however. Thanks for reading!