An amusement which mangles some input text. It simulates a k'th order Markov chain based on the text's statistics. The chain can either be based on letter statistics ("if the last 4 letters in the text were ` the', what is the chance that the next letter is `r'?"), or on word statistics ("if the last 2 words in the text were `I love', what is the chance that the next word is `nobody'?"). The parameter k is the degree of continuity required; 4-6 is a good choice for a letter-based chain, while 2-3 works well for a word-based one. The results are always amusing, since they're almost English-like.
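To make the letter-based case concrete, here is a minimal sketch of an order-k letter chain (all function names are my own; the real implementations described below work differently):

```python
from collections import defaultdict
import random

def build_model(text, k):
    """Map each k-letter context to the list of letters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - k):
        model[text[i:i+k]].append(text[i+k])
    return model

def generate(model, k, length):
    """Start from a random context, then repeatedly sample a follower
    of the last k letters emitted."""
    out = random.choice(list(model))
    for _ in range(length):
        followers = model.get(out[-k:])
        if not followers:
            break
        out += random.choice(followers)
    return out

model = build_model("the theory of the thing", 4)
```

Because followers are stored once per occurrence, sampling from the list automatically weights each letter by its frequency after that context.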

In Emacs, you can perform dissociation based on the current buffer using "M-x dissociated-press RET". See that function's documentation string ("C-h f dissociated-press RET") for details on how to specify k and the use of letter or word statistics.

In practice, most implementations don't do a real Markov chain, but instead do the following:

• Pick a random place in the text.
• Search forward for the required continuity (wrapping around from the end of the text to its beginning).
• Output the next letter or word.
If the original text were generated by the Markov model, the results would be the same. Unfortunately, this is not true of English.
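The search-based shortcut above can be sketched as follows (a rough illustration, not the actual Emacs code; the wrapping is handled by pasting a short prefix of the text onto its end):

```python
import random

def next_after(text, context):
    """From a random starting point, scan forward (wrapping from the end
    of the text to its beginning) for the context; return what follows."""
    n, k = len(text), len(context)
    wrapped = text + text[:k + 1]   # lets matches straddle the seam
    start = random.randrange(n)
    for off in range(n):
        i = (start + off) % n
        if wrapped[i:i + k] == context:
            return wrapped[i + k]
    return None                     # context never occurs in the text

def dissociated(text, k, length):
    """Emit up to `length` letters by repeated search-and-copy."""
    out = text[:k]                  # seed with the text's opening context
    for _ in range(length):
        nxt = next_after(text, out[-k:])
        if nxt is None:
            break
        out += nxt
    return out
```

Starting the search at a random position and taking the first hit is what distinguishes this from a true Markov chain: occurrences are not sampled uniformly, but for natural text the difference is rarely noticeable.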

Dissociated Press n.

[play on `Associated Press'; perhaps inspired by a reference in the 1950 Bugs Bunny cartoon "What's Up, Doc?"] An algorithm for transforming any text into potentially humorous garbage even more efficiently than by passing it through a marketroid. The algorithm starts by printing any N consecutive words (or letters) in the text. Then at every step it searches for any random occurrence in the original text of the last N words (or letters) already printed and then prints the next word or letter. EMACS has a handy command for this. Here is a short example of word-based Dissociated Press applied to an earlier version of this Jargon File:

wart: n. A small, crocky feature that sticks out of an array (C has no checks for this). This is relatively benign and easy to spot if the phrase is bent so as to be not worth paying attention to the medium in question.

Here is a short example of letter-based Dissociated Press applied to the same source:

window sysIWYG: n. A bit was named aften /bee't*/ prefer to use the other guy's re, especially in every cast a chuckle on neithout getting into useful informash speech makes removing a featuring a move or usage actual abstractionsidered interj. Indeed spectace logic or problem!

A hackish idle pastime is to apply letter-based Dissociated Press to a random body of text and vgrep the output in hopes of finding an interesting new word. (In the preceding example, `window sysIWYG' and `informash' show some promise.) Iterated applications of Dissociated Press usually yield better results. Similar techniques called `travesty generators' have been employed with considerable satirical effect to the utterances of Usenet flamers; see pseudo.

--The Jargon File version 4.3.1, ed. ESR, autonoded by rescdsk.

The following Python script implements a quick-and-dirty word-based Dissociated Press algorithm.
```
#!/usr/bin/env python2

from random import choice
from sys import stdin
from time import sleep

dict = {}

def dissociate(sent):
    """Feed a sentence to the Dissociated Press dictionary."""
    words = sent.split(" ")
    words.append(None)              # None marks the end of the sentence
    for i in xrange(len(words) - 1):
        if dict.has_key(words[i]):
            if dict[words[i]].has_key(words[i+1]):
                dict[words[i]][words[i+1]] += 1
            else:
                dict[words[i]][words[i+1]] = 1
        else:
            dict[words[i]] = { words[i+1]: 1 }

def associate():
    """Create a sentence from the Dissociated Press dictionary."""
    w = choice(dict.keys())
    r = ""
    while w:
        r += w + " "
        p = []
        for k in dict[w].keys():
            p += [k] * dict[w][k]   # one copy per occurrence, for weighting
        w = choice(p)
    return r

if __name__ == '__main__':
    while 1:
        s = stdin.readline()        # one sentence per input line
        if s == "": break
        dissociate(s[:-1])          # strip the trailing newline
    print "=== Dissociated Press ==="
    try:
        while 1:
            print associate()
            sleep(1)
    except KeyboardInterrupt:
        print "=== Enough! ==="
```

This code may be used from the command line or as a Python module. The command-line handler (the last chunk of code, beginning with if __name__ == '__main__') reads one line at a time from standard input, and treats each line as a sentence. When it reaches EOF, it begins printing one dissociated sentence per second.

The dissociate function stores frequency information about successive words in the global dictionary dict. That is to say: Every word in the input text occurs as a key in dict. The value of dict[foo], for some word foo, is itself a dictionary. It stores the words which have occurred immediately after foo in the source text, and the number of times they have done so. The end of a sentence is represented with the null value None.
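To make that bookkeeping concrete, here is a minimal Python 3 re-sketch of the same counting logic (the script above is Python 2), applied to the one-sentence input `a b a c`:

```python
def dissociate(sent, table):
    """Count how often each word is followed by each successor;
    None marks the end of the sentence."""
    words = sent.split(" ") + [None]
    for cur, nxt in zip(words, words[1:]):
        table.setdefault(cur, {})
        table[cur][nxt] = table[cur].get(nxt, 0) + 1

table = {}
dissociate("a b a c", table)
# table is now {'a': {'b': 1, 'c': 1}, 'b': {'a': 1}, 'c': {None: 1}}
```

Note that `a` has two possible successors, each seen once, while `c` leads only to the end-of-sentence marker.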

The associate function creates a new sentence based on the frequency information in dict. It begins a sentence with a random word from the source text. Next, it uses dict to select a word which, in the original text, followed the first word. The probability of each possible word being selected is proportional to how often it followed the first word in the original text. If the "word" selected is the None value, the sentence is complete; otherwise, the process repeats, choosing a successor of the newly selected word.
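The list-expansion trick in associate (adding one copy of each word per occurrence to p, then choosing uniformly) is one way to do a weighted draw; in Python 3 the same selection could be sketched with random.choices:

```python
import random

def weighted_next(followers):
    """Draw a successor with probability proportional to its count."""
    words = list(followers)
    return random.choices(words, weights=[followers[w] for w in words])[0]

# Suppose 'time' followed the current word 3 times and 'aid' once:
w = weighted_next({"time": 3, "aid": 1})  # 'time' with probability 3/4
```

This avoids materialising the expanded list, which matters only for very large texts; for a toy script the original approach is perfectly fine.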

Here is a sample source text file. Please note the lack of punctuation; this program isn't smart enough to deal with it appropriately.

all your base are belong to us
everything is a base for soy lesbians
are you all monkeys
monkeys lesbians and soy are good for you
now is the time for all good lesbians to come to the aid of their monkeys
good monkeys belong in a zoo
on everything you can meet a zoo full of lesbians

And here is a sample of the output based on these sentences:

meet a zoo full of their monkeys lesbians and soy are you
a zoo full of their monkeys
on everything is the time for soy are good monkeys lesbians
time for all your base for you can meet a base are belong to us
good monkeys belong to us
base are belong to come to us
now is the time for all your base are you can meet a zoo
is a zoo
now is the time for soy are good lesbians and soy are good monkeys belong to the time for all monkeys
full of lesbians
base for all good monkeys
lesbians and soy lesbians to the time for soy lesbians