Update 22 Nov. 2002: I'm going to address some of tdent's excellent criticisms/questions about my original writeup. Hopefully this will help to clarify my ideas. My responses will be in blockquotes following my original paragraphs.

First of all, I have to make clear what I mean by “consciousness” in the context of this writeup. I’m using the word to refer to all thought. I need to make this clear to differentiate from the common usage of the term to refer only to waking or what is often called “conscious” thought. I conceptualise this latter “aware” type of thinking simply as those parts of the thinking process which have been selected for inclusion in the internal monologue.

I use the word "consciousness" to refer to all mental activity: waking (or what is often called "conscious") thought, dreaming, storage of phone numbers, the calculations necessary to catch a ball, etc. Obviously this whole subject is full of semantic pitfalls; I'm not trying to say that there are no distinctions between waking, or aware, thought and other kinds of thought, only that for the purposes of this writeup they will all be referred to under the umbrella term "consciousness."

It seems I didn't make it clear that by "internal monologue" I don't really mean a purely verbal monologue. It does seem to me, however, that the "aware" portion of consciousness consists of something like one: a series of thoughts and impressions, some verbal, some visual, some neither, but in any case a series of one thought at a time. It's my sense that this series represents what seems at each moment to be the most important or pressing thought to be dealt with. A sort of simple linear sketch of a much more complex and distributed process.

Imagine an electronic storage medium with a brain interface. Not terribly hard to imagine, probably not terribly far down the road of technological development. We ordinarily think of consciousness as being that electrochemical activity (or more metaphysically, the product of that activity) taking place within the brain, but given the development of this external storage, it’s clear that consciousness would migrate out of the brain and into the storage unit.

When I say above that consciousness "migrates" out of the brain, I don't mean that nothing is left behind! What I'm saying is that we can easily picture (nano)technology reaching the point where neural activity could be extended outside of the brain electronically. I suppose what I picture technically would be something like using some portion of, for instance, the visual cortex as input from this device. Output could come from any number of places. Just as there are reservoirs in the brain for data, phone numbers, for instance, it's not hard to picture creating external storage. Of course, this external device needn't be only passive. It could conceivably be processing its contents as well. What I'm trying to say is that this device could then be functionally considered to be part of one's consciousness, in fact that there is little reason to not consider it so, other than the faith that consciousness can only exist within brains.

tdent differentiates storage outside the brain from storage inside by pointing out that ideas jotted down can become meaningless when examined later in life. I agree, but would also turn this backwards. Many thoughts that are at this moment meaningful and are stored inside the brain become irrelevant later, become odd impressions that you know have some significance, but this significance has been forgotten or confused. A well-considered written document might be able to restore this thought later better than the brain's memory. There are clearly differences between the storage media, ways in which neural storage is better, and ways in which written storage is better. My contention is that they serve the same function, and that both can be considered to be parts of our consciousness. Cutting out a piece of someone's brain with a knife can have a similar effect on their consciousness to removing their address book.

The reality is that we already have the ability to store our thoughts outside the brain. For instance, by writing them down. Or uploading them to an external storage medium such as Everything2. The only real difference is that these media are low-bandwidth in comparison to some imagined wire-in-the-head system. For that matter, they are only distinguished from parts of one’s own brain by the awareness of where these thoughts are coming from.

tdent points out that consciousness is characterised by its perpetual change. I assume that tdent here is using the term in the sense that I outlined above, as an umbrella term encompassing all forms of thought. There are clearly large parts of consciousness that are forever changing, but there are also large parts that stay the same. For instance, I never have to wake up and figure out how to make the left arm go. Consciousness seems like hardware sometimes, and software others. Suppose you think a person is honest. One day you catch them out in a lie. You revise your opinion of them as "partially honest" or "dishonest." Is this any different from having this jotted down in a notebook next to their name, and then scratching out the note "honest" and writing "lies sometimes" next to it? Non-brain media aren't necessarily any less changeable than the brain's internal storage. tdent says I confuse the "cargo with the ship"; I suggest that in the case of consciousness, the two are not distinct from one another.
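The notebook analogy above can be made concrete with a toy sketch (purely illustrative; the names and structure are my own invention, not anything from tdent's writeup): an external store of judgments about people, revised exactly the way the brain revises an impression.

```python
# Purely illustrative: an external storage medium for judgments about people.
impressions = {"Alice": "honest"}

# One day you catch Alice in a lie. The stored note is simply overwritten,
# just as scratching out "honest" in a notebook, or as the brain revising
# its own internal record.
impressions["Alice"] = "lies sometimes"

print(impressions["Alice"])  # lies sometimes
```

The point is only that mutability is not what distinguishes brain storage from external storage; both media admit revision.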

Of course, the same goes for other people’s thoughts. Reading a novel is functionally the same thing as having a low-bandwidth connection to the writer’s brain.

Okay, it's not the same as having a direct neural connection to any random part of the author's brain. The connection is mediated by the construction of language. We can look at it as a low-bandwidth connection to the author's brain at the same point where her vocal cords are connected.

Consciousness can be seen as flowing freely through all manner of different media. We tend to identify our brains as its centre only because in our brains thoughts can interact with one another, recombine, produce new thoughts, so much more rapidly than anywhere else. Also our brains would seem to be home to this phantasm known as the ‘self,’ but that’s the subject for another writeup.

tdent is correct in pointing out that when you write a few thoughts down, they don't interact with one another. That is true if you write them down in a notebook. If you write a few thoughts into a computer program, they certainly do interact and produce new thoughts.
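As a minimal sketch of what I mean (my own toy example, not anything tdent proposed): two "thoughts" stored in a program can combine, via a trivial forward-chaining step, to produce a thought that neither contained on its own.

```python
# Illustrative sketch: stored thoughts interacting to produce a new thought.
# A fact and a rule are each inert "marks" in storage, like notebook entries.
facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal")]

# Unlike a notebook, the program can make its contents interact:
# whenever a rule's premise is present, its conclusion is added.
for premise, conclusion in rules:
    if premise in facts:
        facts.add(conclusion)

print("Socrates is mortal" in facts)  # True
```

Nothing deep is claimed here; the point is only that external storage needn't be passive, which is all the argument requires.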

tdent's writeup closes with the observation that written thoughts are just "marks on paper". I contend that this is true, but that stored thoughts within the brain are likewise just marks on brain matter.

tdent warns of the recursive nature of our writeups reacting to one another, possibly to the point of an infinite regress (assuming the death of neither of us):

(r) tdent says I edited it... when you change the original, I'll have to edit again... etc!
(r) tdent says I edited it... when you change the original, I'll have to edit again... etc!
(r) tdent says I edited it... when you change the original, I'll have to edit again... etc!

I look forward to seeing how far this process does in fact go. Maybe this node will grow to dwarf the rest of e2! Stay tuned!

I'd love to know what idiot thinks this has anything to do with postmodernism.