Of a computer program:
the notion that its functionality
(and hence, the code implementing it)
is partitioned into distinct functions or modules which can be independently written, understood, and worked on.
Humans can only keep track of so many details at once,
so breaking large programs up into modules is the only way of keeping them tractable.
Ideally, modules are not arbitrary subsets of the whole, but pieces partitioned and defined in some useful way. A well-defined module will perform some distinct function and have a minimum of interconnections with other modules.
(The reason for keeping the number of interconnections small is that the module can then achieve its goal of being an independently understandable part of the whole.
The more interconnections there are with the rest of the larger whole,
the more of the larger whole must be understood and kept in mind while working with the module.)
Modules connect to and communicate with other modules through interfaces; a nicely simple set of interconnections is also referred to as a "narrow interface" or "loose coupling".
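
To make this concrete, here is a minimal sketch in Python (module and function names are invented for illustration): one public function forms the module's narrow interface, and the tokenising helper stays internal, so a caller can use the module without knowing anything about its insides.

    # word_count.py -- a hypothetical, well-defined module.
    # Its narrow interface is the single public function below;
    # everything prefixed with an underscore is internal detail.
    from collections import Counter

    def most_common_words(text, n=10):
        """Raw text in, top-n (word, count) pairs out."""
        return Counter(_tokenize(text)).most_common(n)

    def _tokenize(text):
        """How words are extracted is the module's own business;
        callers never need to see or understand this."""
        return [w for w in text.lower().split() if w.isalpha()]

    if __name__ == "__main__":
        # A caller needs to keep only one function in mind.
        sample = "the cat sat on the mat and the cat slept"
        print(most_common_words(sample, n=3))  # [('the', 3), ('cat', 2), ('sat', 1)]

Changing the tokenisation rules later would not disturb any caller, which is exactly the point of keeping the interconnections few and narrow.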

The modularity hypothesis of language

What is it; how fine-grained and widespread is it; how did it evolve?

The modularity hypothesis is the claim that some functions of the brain, including language, are organized into quite specific groups of dedicated processes, to a large degree isolated from other brain processes.

A precursor to this hypothesis was the phrenology of Franz Gall, wrong in detail but perhaps not entirely unreasonable as science. Broca and Wernicke discovered specific language areas in the nineteenth century, but for most of the twentieth the emphasis was on connectionist, associationist, or holist views of brain function. Jerry Fodor [1] proposed that not only perception but language too was modular. By modular he meant a group of defining features, generally found together, no one of which is individually essential.

First for the manner of their operation: fast, and obligatory. We see what we see, and have no choice in the matter. We hear our language, and can't help but hear it as our language. These processes are as instantaneous as anything we can be aware of.

Next for how they relate to the rest of the cognitive processes. Modules are domain-specific, and they are informationally encapsulated. The vision module deals with vision and does not concern itself with how the visual object presented is related to other cognitive information that might help identify it. By encapsulation is meant that the module has an entire database of knowledge (perhaps) and rules dedicated to its purpose, operating only within the module, and impenetrable from without. The module performs its processing and presents a completed object -- a parsed sentence, a visual representation -- to the rest of the brain.
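
Read back into the software sense defined at the top of this writeup, Fodor's picture can be caricatured in code. This is a toy analogy only, with every name invented for illustration, not a claim about how the brain implements anything: the module keeps its own rule "database" private and hands the rest of the system nothing but a finished representation.

    # ParserModule -- a toy analogy for a Fodorian module (all names invented).
    class ParserModule:
        """Domain-specific: it parses, and does nothing else."""

        def __init__(self):
            # The module's own "database" of rules, consulted only internally;
            # nothing outside the module can inspect or override it.
            self._rules = {"the": "Det", "cat": "N", "sleeps": "V"}

        def process(self, words):
            """Encapsulated processing: the only thing handed on to the
            rest of the system is a completed parse."""
            return [(word, self._rules.get(word, "?")) for word in words]

    if __name__ == "__main__":
        # "Central cognition" sees the finished object, never the steps behind it.
        print(ParserModule().process(["the", "cat", "sleeps"]))
        # [('the', 'Det'), ('cat', 'N'), ('sleeps', 'V')]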

As to the physical basis of the modules, they are supposed to be underpinned by specific neural architecture of their own, and they are separable: organic damage to one need not affect anything else. These two points are probably the most closely connected of the defining features.

They do not have to be innate: regions dedicated to reading can appear with exposure to reading, though presumably this is parasitic upon innate regions for speech and vision. A connectionist in cognitive science would claim that all such specialist regions could grow this way, given enough exposure, but the claim of Fodor and Chomsky is that these specialist processors and databases are indeed innate.

Chomsky suggests some other faculties are modular in a similar way, naming music, number, and justice, while Fodor sees all that is not modular as a broad and perhaps inexplicable general cognition. Chomsky also talks of modules within the language faculty, meaning the interactive systems such as binding, theta-marking, and Case marking within his Government and Binding (GB) theory of language. These are highly constrained and quirky, not at all like general cognitive mechanisms. Perhaps for this reason he can't see how language could have evolved as an adaptation: it was perhaps a saltation or an epiphenomenon.

As a saltation it would just appear for free once some suitably large brain plasticity or capacity was achieved. As an epiphenomenon it would be an exaptation: the core linguistic property of recursion would originally have been an adaptation for some other faculty.

The most obvious candidate for an adaptive faculty is communication. All attempts to teach apes to sign have to some extent assumed that our close shared ancestry implies that our communicative systems are comparably close, and that a rudimentary cognitive capacity for language can be unearthed in ape communication. But in fact there is little evidence that the two have anything particular in common. Humans cry, scream, shudder, and shout: these still function like ape calls, but not much like language. Human communication has much more to do with pragmatics than with our unique power of syntax.

The other candidate for an adaptation fits well with other versions of the modularity hypothesis: theory of mind. Here apes and humans do share a lot; an ape's theory of mind could be seen as a precursor to the human one. The work of Cosmides and Tooby in particular suggests that there might be many more modules, smaller, weaker, and less watertight than Fodorian modules, underlying a great deal of human cognition: specific tools for each of mate selection, incest avoidance, cheater detection, and many others. The Wason selection task shows that a problem is solved much more readily when presented as one about social cheating than as a logically equivalent abstract problem.

Cosmides-Tooby modules, or tools from the toolbox, are each probably much less likely to have dedicated neural underpinnings, and would not have the incorrigibility of impenetrable Fodorian modules, but they share enough features to be of interest for the one big problem about language.

Vision or hearing modules have had tens of millions of years to evolve, building on vastly longer histories; language is brand new. Debates about Neanderthal hyoids and the Shanidar Cave burial aside, something extraordinary happened 50 000 to 100 000 years ago. Humans almost went extinct in the Toba volcanic winter around 70 000 years ago, yet by 50 000 years ago modern humans were in Australia making art. How could the intricate, quirky Language Acquisition Device (LAD) implicit in GB theory arise in such a short time?

Chomsky's move to the Minimalist Program suggests to me that most of the complexity of the language module can be derived from a combination of simple tools in the social cognition toolkit. Results on domestic dogs published in November 2002 [2] showed that they could identify food hiding places from human gazes and pointing: this ability is innate in puppies, but neither wolves nor chimpanzees can do it. So it arose (by artificial selection, admittedly) when dogs were first domesticated, apparently around 15 000 years ago. Social cognition tools can be added quickly.

We don't know the secret of the recursion that makes Chomsky's module so formidably powerful, and so different from anything in chimpanzees, but none of the defining properties Fodor gave presents a chasm for the evolution of the LAD as an adaptation giving humans a communicable theory of mind.

References
1. Fodor, J. 1983. The Modularity of Mind. MIT Press.
2. Hare, B. et al. 2002. The Domestication of Social Cognition in Dogs. Science 298: 1634-1636. Available online at http://dusk.geo.orst.edu/lydia/doggies_science.pdf
