Update 22 Nov. 2002: I'm going to address some of tdent's excellent criticisms/questions about my original writeup. Hopefully this will help to clarify my ideas. My responses will be in blockquotes following my original paragraphs.

First of all, I have to make clear what I mean by “consciousness” in the context of this writeup. I’m using the word to refer to all thought. I need to make this clear to differentiate from the common usage of the term to refer only to waking or what is often called “conscious” thought. I conceptualise this latter “aware” type of thinking simply as those parts of the thinking process which have been selected for inclusion in the internal monologue.

I use the word "consciousness" to refer to all mental activity: waking (or what is often called "conscious") thought, dreaming, the storage of phone numbers, the calculations necessary to catch a ball, and so on. Obviously this whole subject is full of semantic pitfalls; I'm not trying to say that there are no distinctions between waking, or aware, thought and other kinds of thought, only that for the purposes of this writeup they will all be referred to under the umbrella term "consciousness."

It seems I didn't make it clear that by "internal monologue" I don't really mean a purely verbal monologue. It does seem to me, however, that the "aware" portion of consciousness consists of something like a verbal monologue: a series of thoughts and impressions, some verbal, some visual, some neither, but a series of one thought at a time. It's my sense that this series represents what seems at each moment to be the most important or pressing thought to be dealt with. A sort of simple linear sketch of a much more complex and distributed process.

Imagine an electronic storage medium with a brain interface. Not terribly hard to imagine, probably not terribly far down the road of technological development. We ordinarily think of consciousness as being that electrochemical activity (or more metaphysically, the product of that activity) taking place within the brain, but given the development of this external storage, it’s clear that consciousness would migrate out of the brain and into the storage unit.

When I say above that consciousness "migrates" out of the brain, I don't mean that nothing is left behind! What I'm saying is that we can easily picture (nano)technology reaching the point where neural activity could be extended outside of the brain electronically. I suppose what I picture technically would be something like using some portion of, for instance, the visual cortex as input from this device. Output could come from any number of places. Just as there are reservoirs in the brain for data, phone numbers, for instance, it's not hard to picture creating external storage. Of course, this external device needn't be only passive. It could conceivably be processing its contents as well. What I'm trying to say is that this device could then be functionally considered to be part of one's consciousness, in fact that there is little reason to not consider it so, other than the faith that consciousness can only exist within brains.

tdent differentiates storage outside the brain from storage inside by pointing out that ideas jotted down can become meaningless when examined later in life. I agree, but would also turn this backwards. Many thoughts that are at this moment meaningful and are stored inside the brain become irrelevant later, turning into odd impressions that you know once had some significance, though that significance has since been forgotten or confused. A well-considered written document might be able to restore such a thought later better than the brain's memory can. There are clearly differences between the storage media, ways in which neural storage is better, and ways in which written storage is better. My contention is that they serve the same function, and that both can be considered to be parts of our consciousness. Cutting out a piece of someone's brain with a knife can have a similar effect on their consciousness to removing their address book.

The reality is that we already have the ability to store our thoughts outside the brain. For instance, by writing them down. Or uploading them to an external storage medium such as Everything2. The only real difference is that these media are low-bandwidth in comparison to some imagined wire-in-the-head system. For that matter, they are only distinguished from parts of one’s own brain by the awareness of where these thoughts are coming from.

tdent points out that consciousness is characterised by its perpetual change. I assume that tdent here is using the term in the sense that I outlined above, as an umbrella term encompassing all forms of thought. There are clearly large parts of consciousness that are forever changing, but there are also large parts that stay the same. For instance, I never have to wake up and figure out how to make the left arm go. Consciousness seems like hardware sometimes, and software others. Suppose you think a person is honest. One day you catch them out in a lie. You revise your opinion of them as "partially honest" or "dishonest." Is this any different from having this jotted down in a notebook next to their name, and then scratching out the note "honest" and writing "lies sometimes" next to it? Non-brain media aren't necessarily any less changeable than the brain's internal storage. tdent says I confuse the "cargo with the ship"; I suggest that in the case of consciousness, the two are not distinct from one another.
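The notebook analogy above can be sketched as an updatable external store. This is a toy illustration only, and the names in it are invented; it makes no claim about how the brain actually revises opinions, just that external storage is as revisable as internal storage:

```python
# A toy "notebook" of impressions about people, kept outside the brain.
# The person and labels here are hypothetical, purely for illustration.
impressions = {"alice": "honest"}

def catch_in_a_lie(person):
    # Revise the stored note, just as one would scratch out "honest"
    # and write "lies sometimes" next to the name.
    impressions[person] = "lies sometimes"

catch_in_a_lie("alice")
print(impressions["alice"])  # -> lies sometimes
```

The point of the sketch is simply that the update operation is the same whether the medium is neurons, paper, or a dictionary in memory.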

Of course, the same goes for other people’s thoughts. Reading a novel is functionally the same thing as having a low-bandwidth connection to the writer’s brain.

Okay, it's not the same as having a direct neural connection to any random part of the author's brain. The connection is mediated by the construction of language. We can look at it as a low-bandwidth connection to the author's brain in the same area that her vocal cords are connected.

Consciousness can be seen as flowing freely through all manner of different media. We tend to identify our brains as its centre only because in our brains thoughts can interact with one another, recombine, produce new thoughts, so much more rapidly than anywhere else. Also our brains would seem to be home to this phantasm known as the ‘self,’ but that’s the subject for another writeup.

tdent is correct in pointing out that when you write a few thoughts down, they don't interact with one another. This is true if you write them down in a notebook. If you write a few thoughts into a computer program, they certainly do interact and produce new thoughts.
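A minimal sketch of what "thoughts interacting in a program" could mean is simple forward chaining over if-then rules. The facts and rules below are invented for illustration; the point is only that statements stored in a program can combine to yield statements nobody typed in:

```python
# Toy forward-chaining: stored "thoughts" (facts) interact with rules
# to produce new "thoughts" that were never written down directly.
facts = {"socrates is a man"}
rules = [
    ("socrates is a man", "socrates is mortal"),
    ("socrates is mortal", "socrates will die"),
]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)  # a new thought produced by interaction
            changed = True

print(sorted(facts))
```

Unlike marks in a notebook, the stored sentences here participate in a process: "socrates will die" appears in the store even though it was never entered as a fact.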

tdent's writeup closes with the observation that written thoughts are just "marks on paper". I contend that this is true, but that stored thoughts within the brain are just marks on brain matter.

tdent warns of the recursive nature of our writeups reacting to one another, possibly to the point of an infinite regress (assuming the death of neither of us):

(r) tdent says I edited it... when you change the original, I'll have to edit again... etc!
(r) tdent says I edited it... when you change the original, I'll have to edit again... etc!
(r) tdent says I edited it... when you change the original, I'll have to edit again... etc!

I look forward to seeing how far this process does in fact go. Maybe this node will grow to dwarf the rest of e2! Stay tuned!

I'd love to know what idiot thinks this has anything to do with postmodernism.

I'd like to criticise the preceding writeup, I hope constructively.

I just find the first part confusing. It seems to be saying "the unconscious is conscious too" - since "unaware thinking" can't be anything but unconscious. (I don't believe in an "internal monologue", since I don't think in a sequence of words - it requires conscious effort to express thoughts in verbal form, and when I do they usually come out to be something slightly different from the original intention. For me, the "stream-of-consciousness" style works by creating a sympathetic stream of consciousness in the reader's mind, not by actually reproducing the consciousness of its subject.) But that's not important for the rest of the argument. (If it means that what cannot be put into words is a major part of consciousness, that seems, if anything, to contradict the rest of the argument.)

A distinguishing feature of thought in the brain, be it verbal, inchoate, (un)conscious, (un)aware, is action - dynamics - motion; being out of equilibrium. The storage medium, however, like a book or a music manuscript, just sits there: in equilibrium. An essential requirement for consciousness is changefulness and the possibility of reacting on itself. The medium is an inanimate object that acts as a stimulus to change the consciousness of whoever plugs in to, reads, or plays it. If I can take the musical analogy further, no two performances of the same piece are the same; you never read the same book twice, since every time you are a different person.

"Consciousness would migrate out of the brain and into the storage unit" sounds deeply problematic. If this really happened, you'd be left brain-dead. I hope it'd be more like writing a book: you create something inanimate outside yourself that nevertheless has an intimate - if rather mysterious - connection to your thoughts at the time.

Imagine you wrote some notes of a wonderful new idea down on a bit of paper. Then the next morning, or years later, you find it and wonder what the hell you were thinking of. If one person can't even store his or her thoughts reliably, what hope for communication between people?

Yet it occurs, which shows that communication isn't thought transfer. The skill of good writers is in using words which most readers will associate with particular thoughts, because the readers have some experiences in common, and in coordinating these thoughts to stimulate the reader's consciousness in an organised, or cunningly disorganised, way. Still, people read the same novel and come up with incredibly different interpretations of what the author intended, or was thinking of. To be pedestrian, people have radically different experiences of the same thing, so to name the thing in a book will lead to radically different states of consciousness in different people.

A book is a connection between two minds, to be sure, but with a high degree of anti-redundancy. What does this mean? The book only gets its meaning when read, and a lot of the information that goes into the meaning comes from the previous experiences of the reader. On the most basic level, this includes the experiences that enable such a reader to learn the meanings of words. To get metaphorical, meaning, thought, consciousness lie dormant in a medium, hibernating until the next human interaction - then emerge as a changed species in a different brain.

Now, how would this electronic storage medium work? By picking up the detailed electrical activity of the neurons, I guess. Here comes the crunch: how would one read back such a medium? Presumably, by its (re)creating some neuronal state in one's brain, or by feeding back the electrical activity to the appropriate places. But even given the required technical expertise this creates a host of problems.

What might it feel like? What happens to the thoughts that you had in that bit of the brain just before the readback? Suppose the configuration of your neural connections had changed in the meantime, so that the same pattern of activity had a different meaning? Suppose the essence of the thought actually involved some far-lying neurons that didn't get picked up by the scan? (There's non-locality, if you like.) Well, then let's make it a whole-brain scan. Then you would be resetting your entire neuronal state - and then you would proceed to think the same thoughts as before ad infinitum, until jolted out of the repeat by external events. Now imagine trying to feed someone else's neuronal activity into your own brain, with the catch that the other guy's neurons are, as always, configured totally differently to yours. Like feeding a Fortran program into a C compiler - or sticking cogs into your gas tank?

Unlike wire-in-the-head, books and websites have a language, a set of sounds or symbols common to different people, which are associated with particular things or actions by repeated usage. In order to use the wire-in-the-head, you'd have to develop a neuronal language which allowed you to make sense of the assault of impulses from the wire. Your brain isn't set up to deal with this sort of input: whatever comes out of the neuronal feed surely won't be in the form of words.

"Consciousness can be seen as flowing freely through all manner of different media. We tend to identify our brains as its centre only because in our brains thoughts can interact with one another, recombine, produce new thoughts, so much more rapidly than anywhere else."
Flowing like ketchup... uh, sorry, my mind was wandering. There seems to be a confusion here between the cargo and the ship. The media enable information to travel safely from one place to another, but it only becomes thought once unpacked at the destination. What kind of thought, depends on who is doing the unpacking. "Consciousness flowing" is a fine metaphor, but no more than a metaphor.

It would be great if thoughts could breed and interact while they were outside brains. Or indeed, if thoughts existed at all outside brains. Just think of it - you write down a few sentences in a notebook, then next day they've had a litter. In reality, the sentences don't even know that they're supposed to be thoughts - they just sit there until the next English-speaking human comes along. We call them thoughts because they stimulate our consciousness in interesting ways.
