Human brains can store knowledge of many kinds. They can store memories of specific events, the skills necessary to tie one's shoes, and the ability to visually distinguish one's mother from one's arch nemesis. Scientists agree on some aspects of how humans represent this knowledge, but there are still huge differences of opinion about others.

Some things can be taken for granted. For instance, knowledge representation presupposes some sort of perception; otherwise, there would be nothing to represent. Information is encoded by neurons, with different neural pathways responsible for different tasks. We also know that memories of things tend to be stored in the same parts of the brain in which their perception was originally processed. Additionally, memories of different categories of things (e.g. living things and large outdoor things) are stored in their own distinct parts of the brain. However, the way in which knowledge is actually encoded is still hotly debated.

The propositionalist view

Jerry Fodor, a propositionalist, claims that brains store information in the form of sentences (propositions), or at least something quite similar to sentences. He points out that we can understand sentences that use the same symbols but different syntax. For instance, he says, the sentences "John loves Mary" and "Mary loves John" contain identical words, but they have completely different meanings because of the order of the words and the syntactic rules governing that order. Therefore, Fodor says, human brains must contain a set of symbols and rules that is isomorphic to that of our language; otherwise, we would not be able to interpret sentences or differentiate them from each other. But he does not reserve this theory of propositionally stored information for language interpretation alone; he extends it to all other stored information in the brain. He gives the example of a person deciding what to order at a restaurant and choosing the lasagna. While the person is ordering lasagna because of all the things he likes about it -- the taste, the texture, and so on -- Fodor says he does not actually experience those things within his memory. Rather, he simply has the knowledge that he enjoys lasagna's taste and texture, so he knows that ordering lasagna will please him.
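
To make the "John loves Mary" point concrete, here is a minimal sketch of what a proposition-like representation might look like. It is only an illustration in code, not Fodor's own formalism; the predicate and role names are invented.

    from collections import namedtuple

    # A toy "mentalese" proposition: a predicate applied to ordered role slots.
    # The role slots, not the symbols themselves, carry the meaning.
    Proposition = namedtuple("Proposition", ["predicate", "agent", "patient"])

    p1 = Proposition("loves", agent="John", patient="Mary")  # John loves Mary
    p2 = Proposition("loves", agent="Mary", patient="John")  # Mary loves John

    print(set(p1) == set(p2))  # True: the symbols used are identical
    print(p1 == p2)            # False: the structured representations differ

A system that stores and manipulates structures like these can distinguish the two thoughts even though they are built from exactly the same symbols, which is the kind of capacity Fodor argues any adequate representational scheme must have.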

Response to the propositionalist view

Stephen Kosslyn agrees with Fodor in part: he thinks that human brains do store linguistic information propositionally. However, he also thinks they store information in five additional ways, one for each of the senses (visual, auditory, and so on), and he gives numerous examples. For instance, Kosslyn says, a brain-damaged patient who has problems with storing information about where things are located will have an analogous problem in actually perceiving spatial location. He cites another patient with damage to the dorsomedial nucleus of the thalamus. This person had problems with verbal memory but not with facial recognition, indicating that linguistic memory is different from visual memory. Kosslyn also gives the example of an experiment in which subjects are asked to say whether a given image is the same as another image after rotating it. People's reaction times vary in proportion to the degree of rotation of the object. This, says Kosslyn, shows that people have actual images in their minds; otherwise, they would be able to answer all of the rotation questions in the same amount of time.
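
As a rough illustration, that mental-rotation finding amounts to a roughly linear relationship between rotation angle and response time. The numbers in the sketch below are invented for illustration, not the experiment's actual data.

    # Hypothetical linear model of the mental-rotation result: response time
    # grows roughly in proportion to the angular disparity between the images.
    BASE_TIME_S = 1.0          # assumed time to respond at 0 degrees
    SECONDS_PER_DEGREE = 0.01  # assumed extra time per degree of rotation

    def predicted_reaction_time(angle_degrees: float) -> float:
        """Predicted time for a 'same or different?' judgment."""
        return BASE_TIME_S + SECONDS_PER_DEGREE * angle_degrees

    for angle in (0, 60, 120, 180):
        print(f"{angle:3d} deg -> {predicted_reaction_time(angle):.2f} s")

If subjects answered by consulting stored facts rather than by rotating an internal image, one would instead expect the times to be roughly flat across angles.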

Response to the response to the propositionalist view

Convincing as Kosslyn's arguments may seem at first, there are rebuttals for almost every one of them. For instance, it is true that the original perception of something and the memory that one forms of it tend to occur in the same part of the brain. However, this in no way demonstrates how the information is encoded. It is quite possible that upon seeing, for example, a memorable tree using a particular area of the brain, the information could get encoded in a sentence along the lines of, "There is an interesting tree in the field outside of my dorm." Similarly, the example of the person who lost verbal ability but not facial recognition only demonstrates that these two kinds of information are stored in different areas of the brain; again, it gives no hint as to how that information is stored. Kosslyn's mental object-rotation example can also be refuted in a couple of ways. First of all, say Zenon Pylyshyn and Fodor, the latency effect is due to the fact that the subjects know from previous experience how long the rotation would take, and, knowing what the experimenters expect them to do, they answer according to those expectations. A much more convincing argument against an actual mental image is that the same experiment has been done with pigeons. They can produce the same number of correct answers as humans, but their response times do not vary from one answer to the next.

Fodor's theories are not faultless either. As Pat Churchland points out, Fodor seems to be confusing a product of mind -- that is, language, the way we communicate our mental processes to other people -- with the mental processes themselves. Babies, for instance, can still think despite the fact that they cannot speak. In fact, the first few years of life are when humans acquire and store the most knowledge, mostly about the way the physical world functions. In light of this, Fodor's theory that humans encode knowledge solely through language, or some process analogous to it, cannot possibly be true. Fodor responds, weakly, that we still think in terms of language as soon as we have language; but that still does not explain how babies think. Furthermore, under Fodor's theory a baby just beginning to speak would never be able to talk about anything, because it would have needed language already in order to encode anything to talk about. In addition, we know that animals encode knowledge, but they do not have language like ours, if they have any language at all. Whales, for instance, communicate musically, but it is ridiculous to think that whales might somehow think musically as well.

Just as Fodor refutes Kosslyn by saying that actual mental images do not exist despite the intuition that they do, one can refute Fodor by saying that thoughts might seem like sentences but are not. Fodor might respond by saying that thought must be propositional because we need some way to understand language's syntax; but along the same lines, one could say that memory of images is actually visual because we need a way to understand pictures, and Fodor has already shown why that is not plausible. Therefore, the question still remains of how human brains do store knowledge.

How does knowledge coding actually work?

This question does not take us back to the very beginning, as one might fear. Although Fodor's theory has a number of flaws, it contains some worthwhile ideas as well. For instance, it is true that one must have some way to tell the difference between "John loves Mary" and "Mary loves John." However, this understanding of the different meanings need not be represented propositionally in the brain; indeed, Fodor never really explains what propositional representation would mean at the neural level. Just as the brain can sort out the chronology of events, it should also be able to sort out the order of words in a sentence.

There are several other parts of knowledge representation that seem fairly clear. There has been debate in the past over the way in which declarative knowledge (knowledge of facts), episodic knowledge (knowledge of events), and procedural knowledge (knowledge of how to do things) are encoded. Some data have supported the theory that this is done through such constructs as models, scripts, and production systems. However, these explanations are probably not correct. First of all, although they are set up differently from each other, in computer programming they can be used to accomplish the same things, which suggests that they are really just equivalent processes and so cannot account for the differences in knowledge that they attempt to explain. Second, some data simply do not support the theory.
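
For readers unfamiliar with the term, a production system is just a set of condition-action rules fired against a working memory. The sketch below is a toy illustration of the idea, not a model from the literature; the rules and facts in it are invented.

    # A toy production system: rules fire whenever their conditions match the
    # contents of working memory, adding new facts until nothing changes.
    working_memory = {"hungry", "at_restaurant"}

    productions = [
        (lambda wm: "hungry" in wm and "at_restaurant" in wm,
         lambda wm: wm.add("order_lasagna")),
        (lambda wm: "order_lasagna" in wm,
         lambda wm: wm.add("satisfied")),
    ]

    changed = True
    while changed:
        changed = False
        for condition, action in productions:
            before = set(working_memory)
            if condition(working_memory):
                action(working_memory)
            if working_memory != before:
                changed = True

    print(sorted(working_memory))
    # ['at_restaurant', 'hungry', 'order_lasagna', 'satisfied']

The same behaviour could just as easily be written as a script or as a lookup model, which is exactly the point made above: as computational constructs these schemes are largely interchangeable.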

A more realistic explanation of how knowledge is stored is the spreading activation model. In this model, concepts are grouped according to how strongly they are associated with each other; for instance, bird, robin, and blue jay would all be fairly close. However, this model still explains information storage only in terms of symbols rather than neural function. An even more realistic explanation is a parallel distributed processing (PDP) system in which each different bird would be represented by a different, unique pattern of neural activation. The pattern of each exemplar would be made up of a number of different sub-patterns representing different characteristics -- e.g. wings, beak, feathers -- and so the more characteristics two exemplars had in common, the more similar and overlapping their overall patterns would be. This idea gives a believable explanation of why humans associate certain concepts with each other (e.g. robins and blue jays) more than they associate others (e.g. robins and rats).
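
A very small sketch of the distributed-pattern idea follows; the feature sets are invented purely for illustration, and feature overlap stands in for the overlap of actual activation patterns.

    # Toy distributed representations: each concept is a pattern of active
    # "units", here written as named features. All features are invented.
    concepts = {
        "robin":    {"wings", "beak", "feathers", "small", "flies", "animate"},
        "blue_jay": {"wings", "beak", "feathers", "small", "flies", "animate"},
        "rat":      {"fur", "tail", "whiskers", "small", "animate"},
    }

    def overlap(a: str, b: str) -> float:
        """Proportion of shared features (Jaccard similarity)."""
        pa, pb = concepts[a], concepts[b]
        return len(pa & pb) / len(pa | pb)

    print(overlap("robin", "blue_jay"))  # high: the patterns largely coincide
    print(overlap("robin", "rat"))       # lower: only a couple of features are shared

On this picture, the sense in which a robin is "closer" to a blue jay than to a rat falls straight out of how much the two activation patterns overlap.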

Beyond recognition

This system gives an idea of how a human brain recognizes what things are, but it still does not explain how it recognizes where they are, or how it binds those different kinds of information to the same object. This can happen because, although the perceptual information of an image splits off into the "what" pathway and the "where" pathway, the two eventually join up again and form a loop, so that the different parts of the whole pathway can communicate with each other and create a complete concept. Additionally, after figuring out an object's spatial orientation, we sometimes want to interact with it. In order to pick something up, for instance, we must understand where the object is in relation to our eyes and then translate that information into instructions for where to put our hand in order to grab it. Since our hand is in a different place from our eyes, we use vector translation to compensate for the difference. The brain has a huge number of interconnections like this, which make possible the rich associations we have between different characteristics of an object, and between different objects.
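
A minimal sketch of the kind of vector arithmetic involved is below. The coordinates are invented, and for simplicity the eye-centred and body-centred frames are assumed to share the same axes (a real transformation would also involve rotation).

    import numpy as np

    # Toy hand-eye translation in an arbitrary body-centred frame (metres).
    target_from_eyes = np.array([0.30, 0.10, -0.20])  # where the eyes locate the cup
    eyes_in_body     = np.array([0.00, 0.00,  0.50])  # position of the eyes
    hand_in_body     = np.array([0.20, -0.10, 0.00])  # current hand position

    target_in_body = eyes_in_body + target_from_eyes  # cup in the body frame
    reach_vector   = target_in_body - hand_in_body    # displacement the hand must cover

    print(reach_vector)

The point is simply that knowing where something is relative to the eyes is not yet an instruction for the hand; some such change of reference frame has to happen in between.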

This gives a reasonable idea of how information is stored, but there is still the question of how it is encoded in the first place. Encoding a representation involves several different areas of the brain, but the hippocampus plays a particularly important role. When a new piece of information is perceived, neurons that synapse onto other neurons in the hippocampus are stimulated. This both strengthens the synaptic connection, making it more likely to fire the next time it is stimulated, and changes the post-synaptic neurons. Eventually, if the piece of information becomes a long-term memory, the hippocampal neurons will synapse onto neurons in other brain structures, changing them in a way that embodies the new knowledge. In order to become long-term memory, a piece of information must be repeated over time. If it is procedural knowledge, we provide the practice ourselves by doing the task over and over; if it is declarative knowledge, our hippocampus must do the repeating for us. In either case, however, the end product is the same.
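
A cartoon of this strengthening-through-repetition idea, in the spirit of a Hebbian learning rule: each rehearsal nudges a synaptic weight upward, and once the weight passes a threshold the memory counts as consolidated. The learning rate, threshold, and schedule are all invented for illustration; this is not a claim about actual hippocampal physiology.

    # Toy consolidation-by-repetition model with invented parameters.
    LEARNING_RATE = 0.2
    CONSOLIDATION_THRESHOLD = 0.9

    def rehearse(weight: float, repetitions: int) -> float:
        """Strengthen a connection a little on each repetition, with diminishing returns."""
        for _ in range(repetitions):
            weight += LEARNING_RATE * (1.0 - weight)
        return weight

    w = 0.1  # weak connection after a single exposure
    for day in range(1, 6):
        w = rehearse(w, repetitions=3)
        status = "consolidated" if w >= CONSOLIDATION_THRESHOLD else "still labile"
        print(f"day {day}: weight = {w:.2f} ({status})")

Whether the repetition comes from deliberate practice (procedural knowledge) or from the hippocampus replaying the information for us (declarative knowledge), the end result in this toy picture is the same: a connection strong enough to stand on its own.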

So, we have been able to conclude that knowledge is encoded by neuronal stimulation, especially in the hippocampus, which eventually leads to more permanent changes in other parts of the brain. Different types of information are stored in different places in the brain, and they are stored in the form of connections of different strengths and different patterns of neuron activation, which is why we can perceive some things as more similar than others. It does seem that, as Fodor claims, we have only one form of information representation in the brain. However, that representation probably does not take the form of language, as that concept has numerous holes in it and is quite abstract anyway.
