(No, I didn't eat chapter 6!)
Upon re-reading the first part of the chapter, I also recognised the interesting analogy Nelson makes with information processing, particularly the idea that a completely random sequence requires the most information to characterise, and thus has the most disorder. Here, Nelson uses a weather example, where runs of weather can be described rather than each individual day. Although this was a nice example, I don't see why you can't represent the coin-toss data in the same way! Perhaps a better example of randomness could be chosen. Though it doesn't give a completely random set of data, the length of people's forearms on a train is an example that is much harder to describe with runs than coin tosses are (because the measurements are continuous). Of course, you can describe such data with a Gaussian distribution if they are normal (as they generally are in real life), but for the purposes of this example, a list of 57 or so arm lengths is still not too long to write out value by value (57 being the number of coin tosses used in one of the figures). Or we could assume that a certain town has non-normally distributed arm lengths....
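As a quick illustration of the Gaussian shortcut (and where it fails), here's a minimal Python sketch; it's mine, not Nelson's, and the mean and spread are invented numbers. If the lengths are normal, two numbers summarise all 57 of them; for the hypothetical non-normal town, the shortest honest description is essentially the list itself.

    # A minimal sketch (mine, not Nelson's) of the arm-length idea.
    import random
    import statistics

    # 57 forearm lengths in cm, assuming they happen to be normal
    # (the mean and spread below are invented for illustration):
    random.seed(0)
    arms = [random.gauss(26.0, 1.5) for _ in range(57)]

    # If the data are normal, two numbers summarise all 57 values...
    mu, sigma = statistics.mean(arms), statistics.stdev(arms)
    print(f"mean = {mu:.1f} cm, sd = {sigma:.1f} cm")

    # ...but for a 'non-normal town' there is no such shortcut: the
    # shortest faithful description is basically the 57 lengths themselves.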
But (sorry James) this discussion of sequences/runs brought another thought to mind: if a sequence of coin tosses has no runs of similar results at all (e.g. HTHTHTHT... = (HT)^n), the sequence can be described (in this two-outcome case) with two amounts of information (avoiding the annoying use of 'bits'!): what the repeated element is and how many repeats there are (plus one extra amount for an odd number of results, if we don't assume vacuum and sphericity). Thus, the sequence requiring the most information to describe would sit at a trade-off between having no long runs and still resisting a repeated-pattern description (i.e. somewhere between HTHTHTH... and HTTHTTTHTTTT..., which is hard to describe in terms of (HT)^n; just a simple example, not a rigorous one!). I found this an interesting concept.
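To make the runs-versus-repeats point concrete, here's a rough Python sketch (again mine, not from the book): run-length encoding describes a blocky sequence beautifully, does nothing for (HT)^n (which instead wants the two-amount 'repeated element + count' description above), and a random sequence resists both.

    # A rough sketch (mine, not from the book) of describing coin tosses by runs.
    import random

    def runs(seq):
        """Collapse e.g. 'HHTTTH' into runs [['H', 2], ['T', 3], ['H', 1]]."""
        out = []
        for tok in seq:
            if out and out[-1][0] == tok:
                out[-1][1] += 1
            else:
                out.append([tok, 1])
        return out

    random.seed(1)
    blocky   = "H" * 28 + "T" * 29                              # two long runs
    periodic = "HT" * 28 + "H"                                  # (HT)^28 H: 57 tosses
    randomly = "".join(random.choice("HT") for _ in range(57))  # fair coin

    # Number of runs needed to describe each sequence:
    print(len(runs(blocky)))    # 2  -- runs describe this beautifully
    print(len(runs(periodic)))  # 57 -- runs fail, but '(HT)^28 plus an H' is 3 amounts
    print(len(runs(randomly)))  # ~29 -- neither runs nor repeats help much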
I also 'stumbled across' the expanded explanation of one of the 'T2' segments in the first section: that the most information can be transferred with the most disordered sequence. Given Nelson's discussion, I disagree that this is unintuitive, but others may (and are even allowed to) disagree with me. This is another interesting point, though: maximal disorder still requires some 'ordering' so that the message can be read at the end! This is, perhaps, another conceptual dilemma of disorder. Nelson's case in Ch 6.1 is (basically) that disorder is the amount of information required to describe a system, while the more statistical-mechanics view is (basically) that disorder is the number of distinct microstates consistent with the same macroscopic feature (e.g. the number of microstates sharing one macrostate, such as a fixed energy). While these definitions are not opposed to each other, they do present two quite different ways of looking at disorder, and this is perhaps why disorder is a difficult concept to grasp.
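For what it's worth, the two definitions can be put side by side numerically. Here's a little Python sketch (my illustration, not Nelson's): spelling out one particular 57-toss fair-coin sequence takes one bit per toss, while the log of the number of microstates sharing the most probable macrostate (about 28 heads) comes out just a little below that.

    # A little sketch (my illustration, not Nelson's) comparing the two
    # definitions of disorder for 57 fair-coin tosses.
    import math

    N = 57        # number of tosses, as in Nelson's figure
    k = N // 2    # a macrostate: 28 heads (near the most probable value)

    # Statistical-mechanics view: disorder ~ log of the number of microstates
    # (distinct sequences) consistent with the macrostate 'k heads out of N'.
    omega = math.comb(N, k)
    print(f"log2(Omega) = {math.log2(omega):.1f} bits")  # prints ~53.7

    # Information view: spelling out one particular fair-coin sequence takes
    # one bit per toss, i.e. N bits in total.
    print(f"N           = {N} bits")

    # The two agree up to a correction of order log N, which is why the
    # definitions line up for large systems.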
PS: Has anyone else realised that dilemma = di-lemma = two lemmae? Or two lemmata if you are Greek. :)