Is All Knowledge in Biological Systems Innately Given?

Definitions

In order to discuss whether or not knowledge is innate, it will be helpful to define both knowledge and innateness. Both are loaded terms that have been discussed by numerous people and defined in many different ways. Henry Plotkin (1993) says that knowledge "is a deeply rooted characteristic of all organisms, and the way in which adaptations evolve should then be seen as a way of gaining knowledge. If all adaptations are a form of knowledge, then, of course, so too are adaptive behaviours forms of knowledge" (p. 153). Under this definition, everything from the bright colors of a tree frog to the idea that Boston is the capital of Massachusetts counts as knowledge. According to the Merriam-Webster Dictionary (2003), knowledge can be seen as "the fact or condition of knowing something with familiarity gained through experience or association", "acquaintance with or understanding of a science, art, or technique", "the fact or condition of being aware of something", or "the circumstance or condition of apprehending truth or fact through reasoning : cognition."

Obviously, knowledge is not an easy term to define. While Plotkin's definition does reflect the need for biological systems to survive in their environments, it does not adhere to the traditional use of the word as something contained within the mind and/or relating to the organism's behavior; it seems too broad. The dictionary definition, on the other hand, does not seem quite broad enough. Therefore, the definition of knowledge used throughout this essay will be: any characteristic of an organism that is both an adaptation representing something in the world and instantiated by the neurons.

Like knowledge, innateness is a term that can have many different meanings. Traditionally, innate knowledge is thought of as something embedded in the genes, whereas learning is thought of as information obtained from the environment. Another way to think of innateness is as something inevitable. In their book Rethinking Innateness, Elman et al. (1996) say:

we use the term innate in the same sense as Johnson and Morton (1991) to refer to putative aspects of brain structure, cognition or behavior that are the product of interactions internal to the organism. We note that this usage of the term does not correspond, even in an approximate sense, to genetic or coded in the genes. (p. 23, emphasis added)

They claim that, because genes interact with their environment from the moment they form, one cannot separate the effects of genes alone from the effects of the environment. Therefore, it makes sense to say that innateness refers to products of interactions inside the organism, whereas learning refers to products of interaction with the environment outside the organism. Furthermore, Elman et al. claim that in order to be innate, knowledge must be constrained at one or more of three levels: the representational level, the architectural level, or the chronotopic (timing) level. If constrained at the representational level, the knowledge is already there from the start. If constrained at the architectural level, the knowledge itself does not exist at the start, but the neurons and connections are in place so that certain knowledge will form, assuming the "expected" environmental conditions occur. If constrained at the chronotopic level, certain knowledge forms in the part of the system where it does because of the point in time at which that part develops. Having established this, the title question of this essay can be rephrased as follows: In a biological being, are all pieces of knowledge (aspects that are adaptations, represent something in the world, and are instantiated by the neurons) products of interactions within the being, constrained at one or more of the three levels?

Plotkin's View of Innateness vs. Learning

Plotkin (1993) argues that nature, via the process of evolution, builds in as much knowledge as it can; but, since environments can change in unpredictable ways, some learning occurs to compensate. For instance, nature can "assume" that a duckling will have a mother duck (Broude 2003), and so ducklings have a built-in preference for adult female ducks over other objects. However, every duck looks slightly different, and there is no way to predict exactly what a particular duckling's mother will look like; so the duckling must learn which adult female duck is its mother, using cues such as its proximity to her when it hatches. Plotkin refers to the tendency to have built-in knowledge as the primary heuristic, and to the tendency to rely on learning when details cannot be predicted as the secondary heuristic. He claims that the secondary heuristic builds on the innate knowledge of the primary heuristic.

Just How Much Is Innate?

It is clear that at least some knowledge is innate. There are a multitude of examples of behavior (Broude 2003b) exhibited by humans and other animals that cannot possibly have had time to develop through interaction with the environment outside the body. For instance, an infant who hears a bell will turn his/her head in the direction of the sound. This action implies that the baby expects to see something, and yet s/he has not had enough experience with the world to have learned that a sound indicates something to see. This behavior, therefore, must be a product of some interaction of genes, neurons, and so on within the baby. There are also certain aversions present from birth, such as the aversion to the smell of ammonia. An infant who has never encountered ammonia cannot have learned that the chemical is dangerous, and yet s/he will demonstrate a dislike of the odor.

Selectionism!

It cannot be argued that no knowledge is innate, but the question remains how much is innate and how much is learned. Based on Plotkin's primary and secondary heuristics, a logical conclusion would be that much of knowledge is innate, while the more specific, detailed information that nature cannot predict is learned. However, some people argue that all knowledge is innate. These people are called selectionists. As Michael Gazzaniga (quoted in Broude 2003a) says, "For the selectionist, the absolute truth is that all we do in life is discover what is already built into our brains. While the environment may shape the way in which any organism develops, it shapes it only as far as pre-existing capacities in that organism allow." From the selectionist's point of view, every neural connection an organism may ever use is already in place when the organism has finished developing; any instance of "learning" is simply a strengthening or weakening of connections.

If this is true, it would follow that there are limits on what different organisms can learn; and such limits do, in fact, exist. For instance, a rat can learn, via classical conditioning, to avoid a certain taste if the taste is followed by sickness. It cannot, however, be taught to associate a flash of light with a feeling of sickness (Atkinson et al. 2000). Recognizing a flash of light is not particularly more difficult than recognizing a taste (and, in fact, when the experiment was tried with a flash of light and a shock, the rat learned to associate them), so this inability to learn cannot be attributed to a simple lack of brain capacity. Rather, the necessary connections between the neurons in the rat's brain that sense light and the ones that detect illness are simply not there, and they apparently never form. This supports the selectionist's point of view: learning is simply picking from the choices that are already available.
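The selectionist claim can be pictured with a toy model. The Python sketch below is only an illustration of the idea under discussion, not a model taken from Plotkin, Gazzaniga, or the Atkinson et al. textbook; the stimulus-outcome pairs and the numbers are invented for the example. Its point is simply that if "learning" can only strengthen or weaken connections that already exist, then a pairing with no pre-existing connection (light and sickness, for the rat) can never be learned, no matter how many trials occur.

    # Toy illustration of selectionist learning: association strengths can only
    # change where a connection already exists. The connectivity table below is
    # invented for illustration; it is not experimental data.
    connectivity = {
        ("taste", "sickness"): True,
        ("light", "sickness"): False,  # the pairing the rat cannot learn
        ("light", "shock"): True,
    }

    # All association strengths start at zero.
    weights = {pair: 0.0 for pair in connectivity}

    def pairing_trial(stimulus, outcome, rate=0.3):
        """Strengthen an existing connection toward 1.0; do nothing if absent."""
        pair = (stimulus, outcome)
        if connectivity[pair]:
            weights[pair] += rate * (1.0 - weights[pair])

    # Repeated pairings of each stimulus with each outcome.
    for _ in range(10):
        pairing_trial("taste", "sickness")
        pairing_trial("light", "sickness")
        pairing_trial("light", "shock")

    for pair, strength in weights.items():
        print(pair, round(strength, 2))
    # ("taste", "sickness") and ("light", "shock") end up near 1.0, but
    # ("light", "sickness") stays at 0.0: no pre-existing connection, no learning.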

When faced with the idea that all knowledge is innate, one might ask how knowledge of human inventions, from computers to written language, could be built in. The process of evolution takes huge amounts of time, and humans, and especially their inventions, have simply not been around long enough for natural selection to have built in the connections for these things. There are a number of responses to this question. One is that humans are already known to have structures in the brain that recognize objects. As Plotkin pointed out, nature cannot predict every detail about the environment, and so it lets learning fill in the details. Coming to visually recognize a computer, therefore, is probably much like learning to recognize a cactus, or any other object that most humans are not especially likely to encounter. If visual recognition of an object is instantiated by the firing of a certain pattern of neurons in a particular area or areas of the brain, then it is quite probable that, given the huge range of objects a person might see in his/her life, all (or most) of the neurons in this area are connected to each other, allowing for a huge variety of objects. Seeing a computer would then simply set off a unique firing pattern in the appropriate area of the brain, depending on its shape, color, and so on, and the person would learn to distinguish it from other objects. Biederman suggested another idea: that humans understand objects by combining simpler shapes, which he called geons. This is another possibility, although many scientists are skeptical of "matching" theories such as this one. In any case, the previous argument holds with or without Biederman's theory.
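One way to picture this argument is as a fixed, densely wired recognition area whose connections never change: each new object simply evokes its own pattern of firing across neurons that are already connected, and "learning" the object amounts to remembering which pattern it evokes. The Python sketch below is only an illustration of that picture; the feature vectors, the random wiring, and the threshold are invented for the example and are not drawn from any of the sources cited here.

    import numpy as np

    rng = np.random.default_rng(0)

    # A fixed "wiring diagram": every input feature projects to every neuron in
    # the recognition area. These connections are set once and never added to.
    n_features, n_neurons = 8, 32
    wiring = rng.normal(size=(n_features, n_neurons))

    def firing_pattern(features):
        """Which neurons fire for this object, given the pre-existing wiring."""
        return (features @ wiring) > 0.5

    # Invented feature vectors for two objects the system has never seen before.
    computer = rng.normal(size=n_features)
    cactus = rng.normal(size=n_features)

    # "Learning" an object is just storing the pattern it happens to evoke.
    memory = {
        "computer": firing_pattern(computer),
        "cactus": firing_pattern(cactus),
    }

    def recognize(features):
        """Name the stored object whose pattern best matches the current one."""
        pattern = firing_pattern(features)
        return max(memory, key=lambda name: np.sum(memory[name] == pattern))

    # A slightly noisy view of the computer evokes a similar pattern, so the
    # system will most likely still call it a computer.
    print(recognize(computer + 0.1 * rng.normal(size=n_features)))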

Furthermore, the human ability to create computers, writing, and so on must itself have come from existing connections; and if the connections needed to create these things were already in place, it is not much of a stretch to think that the connections needed to see and understand them would be there as well.

Constraints, Revisited

When Elman et al. (1996) describe their three levels of constraint, they do not seem to believe that knowledge is completely under an architectural constraint, although they do point out its importance; as they say, "although it is rarely acknowledged, architectural constraints of one kind or another are necessarily found in all connectionist networks" (p. 31). However, it is precisely this constraint that Gazzaniga, Plotkin (to an extent), and other selectionists think is most important. The environment does not instruct the organism, they say; rather, the organism selects from its existing connections the most appropriate response to whatever stimulus the environment presents.

How to Interpret All of This

Clearly, biological systems would not be able to survive in their environments without some knowledge obtained before contact with the outside world. But it is also not possible for them to learn anything for which they do not have pre-existing neural connections. It is equally obvious that a newborn baby, for instance, does not have the intricate understanding of the world that a twenty-year-old has. Furthermore, it is impossible to argue that everybody knows everything; after all, some people specialize in neuroscience and know nothing about painting, while others become painters and know nothing about neuroscience. Therefore, the most logical conclusion is to agree with Elman et al.'s second level of constraint, the architectural one: the neural connections are there from the beginning, but the knowledge itself is not. In other words, biological beings are innately given the potential for all the knowledge they might ever have; but which knowledge they actually acquire is influenced by a variety of environmental factors.


Bibliography

Atkinson, Rita L., Richard C. Atkinson, Edward E. Smith, Daryl J. Bem, and Susan Nolen-Hoeksema (2000). Hilgard's Introduction to Psychology: Thirteenth Edition. New York: Harcourt College Publishers.

Broude, Gwen (2003). Lecture, February 11, 2003.

Broude, Gwen (2003a). Handout: "More quotes for your notes on selection vs. instruction".

Broude, Gwen (2003b). Lecture, February 6, 2003.

Elman, Jeffrey L., Elizabeth A. Bates, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi, and Kim Plunkett (1996). Rethinking Innateness. Cambridge, Massachusetts: The MIT Press.

Merriam-Webster Dictionary Online (2003). http://www.m-w.com/cgi-bin/dictionary. Retrieved March 3, 2003.

Plotkin, Henry (1993). Darwin Machines. Cambridge, Massachusetts: Harvard University Press.


See also: Nature vs. Nurture, Is language innate or learned?, How your brain codes knowledge