Friday, 19 August 2011

When statistics becomes interesting - correlated events

In chapter 3 Nelson introduces some very basic statistical ideas: the addition rule, the multiplication rule, mean, standard deviation (and variance). However, he misses the joys of conditional and joint probabilities. Perhaps they are not as pertinent to biophysics, but in my opinion they are worth mentioning nonetheless!

The joint probability P(A,B), or "the probability of A and B both occurring", is simply the product of P(A) and P(B) IF the events are independent. If they are correlated [the interesting case!], then P(A,B) = P(A|B)P(B) = P(B|A)P(A), where P(A|B) is the conditional probability of A given B. That is, the probability of A occurring given that we've observed B. [Note that if A and B are independent then by definition P(A|B) = P(A), and we recover P(A,B) = P(A)P(B).] Almost unwittingly, this gives us Bayes' theorem: P(A|B)P(B) = P(B|A)P(A) => P(A|B) = P(B|A)P(A)/P(B).
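To make these identities concrete, here is a minimal Python sketch (the joint distribution is a toy example of my own, not from Nelson) that checks the product rule and Bayes' theorem numerically:

```python
# Toy joint distribution over two binary events A and B (made-up numbers).
# Keys are (A, B) outcomes; the four entries sum to 1.
p_joint = {
    (True, True): 0.12, (True, False): 0.18,
    (False, True): 0.08, (False, False): 0.62,
}

p_A = sum(p for (a, b), p in p_joint.items() if a)   # P(A) = 0.30
p_B = sum(p for (a, b), p in p_joint.items() if b)   # P(B) = 0.20
p_A_and_B = p_joint[(True, True)]                    # P(A,B) = 0.12
p_A_given_B = p_A_and_B / p_B                        # P(A|B)
p_B_given_A = p_A_and_B / p_A                        # P(B|A)

# Both factorisations recover the joint probability:
assert abs(p_A_given_B * p_B - p_A_and_B) < 1e-12
assert abs(p_B_given_A * p_A - p_A_and_B) < 1e-12

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
assert abs(p_A_given_B - p_B_given_A * p_A / p_B) < 1e-12

# Note P(A)P(B) = 0.06 != 0.12 = P(A,B): these events are correlated.
print(f"P(A|B) = {p_A_given_B:.2f}, P(A)P(B) = {p_A * p_B:.2f}, P(A,B) = {p_A_and_B:.2f}")
```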

The significance of this identity is very hotly debated by so-called "Bayesians" and "frequentists". [Interesting how readily people form factions and feverishly denounce the ideas and legitimacy of other, equally zealous factions.] At the heart of the controversy is a subjective vs. objective interpretation of probability, and whether or not one can reasonably choose a value for the prior probability [that is, P(A), the probability assigned to A before the evidence B is seen]. It's a fascinating debate!

To illustrate Bayes' theorem I offer the following challenge question [1]:

You go to see the doctor about an ingrowing toenail. The doctor selects you at random to have a blood test for swine flu, which for the purposes of this exercise we will say is currently suspected to affect 1 in 10,000 people in Australia. The test is 99% accurate, in the sense that the probability of a false positive is 1%. The probability of a false negative is zero. You test positive. What is the new probability that you have swine flu?

Hint: Another useful identity is the law of total probability, P(A) = SUM[over all B] P(A|B)P(B), i.e. P(A) = P(A|B)P(B) + P(A|not B)P(not B).
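And for anyone who wants to check their answer afterwards (spoiler warning!), the hint translates directly into a few lines of Python, with A = "has swine flu" and B = "tests positive":

```python
# Prior and test characteristics taken from the question.
p_flu = 1 / 10_000          # P(flu): prevalence
p_pos_given_flu = 1.0       # P(positive | flu): no false negatives
p_pos_given_healthy = 0.01  # P(positive | no flu): false positive rate

# Law of total probability:
# P(pos) = P(pos | flu) P(flu) + P(pos | no flu) P(no flu)
p_pos = p_pos_given_flu * p_flu + p_pos_given_healthy * (1 - p_flu)

# Bayes' theorem: P(flu | pos) = P(pos | flu) P(flu) / P(pos)
p_flu_given_pos = p_pos_given_flu * p_flu / p_pos
print(f"P(flu | positive test) = {p_flu_given_pos:.4f}")  # ~0.0099, i.e. about 1%
```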

----------------
[1] Question taken from MATH3104 lecture notes by Prof Geoff Goodhill.


Tyranids.

Some of you will be aware of a science fiction game that is quite popular in these parts. It is called Warhammer 40,000 (or 40K for short), and alongside its eight-foot-tall, genetically modified, xenophobic protectors of Mankind (Space Marines, for those who know, not Imperial Guardsmen) it offers a myriad of intriguing biophysics problems. Even the Space Marines themselves provide some interesting discussion points, like the fact that they can absorb someone's memories by eating a part of them (I can't remember which part, though).

The foremost of these are the Tyranids. The Tyranids are a (fortunately fictitious) race of space-faring organisms that operate under a 'hive mind'. Besides the space-faring organisms themselves, there are all manner of scouting, eating, digesting and fighting organisms, each with its own degree of independence from the overarching hive mind. While there are plenty of things to discuss, the first I was thinking of is the fact that they are space-faring at all.

Is a space-borne organism feasible? What would it require, regardless of any hive-mind activity? Such an organism must be able to travel vast distances in space if it is to move between planets, let alone galaxies! (In the story, the Tyranids invaded our galaxy from another.) What kind of thermodynamic constraints are there to keep it alive? Speed, of course, is no problem in space until you have to slow down, but what amount of energy is consumed when lifting off from a planet?
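To put a rough number on that last question: the minimum energy just to climb out of a planet's gravity well is the escape energy GMm/R. A back-of-envelope Python sketch, with Earth-like numbers and a completely made-up organism mass of 1000 tonnes:

```python
# Back-of-envelope: minimum energy for an organism to escape an Earth-like
# planet (ignores atmospheric drag and propulsion inefficiency entirely).
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24   # planet mass (Earth), kg
R = 6.371e6    # planet radius (Earth), m
m = 1.0e6      # organism mass, kg -- a wholly made-up guess

E_escape = G * M * m / R            # = (1/2) m v_escape^2
v_escape = (2 * G * M / R) ** 0.5

print(f"escape velocity ~ {v_escape / 1e3:.1f} km/s")  # ~11.2 km/s
print(f"escape energy   ~ {E_escape:.1e} J")           # ~6e13 J

# Metabolising glucose yields roughly 1.6e7 J/kg, so the organism would
# need to burn on the order of this much sugar just to get away:
print(f"glucose burned  ~ {E_escape / 1.6e7:.1e} kg")
```

If those numbers are anywhere near right, the creature would have to metabolise several times its own mass in sugar just to reach escape velocity, before slowing down at the other end is even considered!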

Eagerly awaiting your (gribbly) replies. I will think of more ideas after some brain recovery.

Semantics in Biophysics

James and I were discussing this last week: if biology is explainable by physics, and chemistry is explainable by physics, are all fields essentially different levels of physics? And if physics is applied mathematics, does that mean that all sciences are derivable from mathematics?

But the real question is, is physics really applied mathematics? This is a question of applied semantics because one can consider physics to be the actual phenomena or to be the models of the phenomena. If one uses the definition that physics is the actual phenomena, then mathematics is used to model the physics; physics is not applied maths. If one uses the other definition, then physics is merely the mathematical description and so the answer to the opening question is yes.

This point is not to argue one way or the other, but rather to raise awareness of the dangers of misunderstanding. It is thus laudable to define one's terms at the start of a discursive argument (as opposed to a rather more unpleasant argumentative discussion); this also requires that all involved parties accept the others' definitions and at least try to see the argument from their point of view. That way, useless argument (the kind that is really about finding out the other's position and then making a fuss to cover one's tracks) is kept to a minimum and a real discussion of ideas can begin. This is not to say that all ideas are correct, but rather to give everyone a chance to see whether anyone else's ideas are.

Today we were also discussing other semantic conventions concerning 'implicit' and 'explicit' hypotheses, and the exact meaning of being a 'scientist'. Is a scientist defined by their usage of 'explicit hypotheses' (I think we all agree that this is a no!), or by their problem-solving technique (such as hypothesis testing or trial-and-error methods; any others you can think of)? Does a scientist have to actually solve problems, or can they merely be 'finding out stuff'? At the moment, I take the road where 'finding out stuff' is 'solving the problem' of 'what is there to be found?', but of course this is another contentious issue. What makes the difference between an engineer and a scientist? Can one be—(horror music)—both? Is 'scientist' even an accurate or useful description of someone's role in life, or is it simply a discourse that we fall into?

And what is it about biophysics that makes these questions so apparent? And just what is 'biophysics'?

More Thoughts on Nelson Ch 2

Hello,
Although much has already been said about Ch2 of Nelson, I thought I would blog some!
I thought that this chapter was quite basic for someone who has studied biophysics from different angles (e.g. biochemistry), but it was nice to get an idea of the scales of biological systems. Despite the fact that the carbon atom is drawn about the same size as a glucose molecule in Figure 2.3(a), I think this figure is an excellent image for showing the real difference in scales present in biological systems. This is what we must cope with when analysing these systems!

This sense of scale is helpful in understanding the actions of thingies like viruses (see my other posts in response to James) and when discussing the evolution of basic cellular features. For example, virus infection is much more analysable when you know that a virus is on the scale of a large protein assembly interacting with a comparatively massive cell membrane. Scale is also an essential concept for biophysical study, since some of the most interesting problems relate to scaling up from atomic interactions to ecology and decision making.
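To put rough numbers on that sense of scale, here is a quick Python sketch; the sizes are order-of-magnitude values from memory, so treat them as indicative only:

```python
# Rough characteristic sizes of biological objects (order of magnitude only).
sizes_nm = {
    "carbon atom": 0.3,
    "glucose": 1,
    "typical protein": 5,
    "virus": 100,
    "bacterium": 1_000,
    "eukaryotic cell": 10_000,
}

ref = sizes_nm["carbon atom"]
for name, size in sizes_nm.items():
    print(f"{name:18s} ~ {size:>8g} nm  ({size / ref:>8.0f}x a carbon atom)")
```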

Any thoughts?

Barber on Photosynthesis

The journal article I've reviewed this past week may well be the longest I've ever read - and it certainly took the better part of a week, as 40 pages of tiny Times New Roman is not easily done in one sitting, regardless of caffeine consumption.

Despite its length, I found "Biophysics of Photosynthesis" (Barber, 1978) to be a juicy, insightful and worthwhile read, with great relevance to topics to be covered in this course, and extrapolations on topics covered previously in BIPH2000. 

The paper covers the study of the cardinal process of photosynthesis from the very beginnings of our understanding of O2 production via photolysis of water in 1937, to the current understanding of the inner workings of the thylakoid membrane and chemiosmosis-assisted ATP synthesis.

The paper begins by roughly going over the central dogma of photosynthesis: splitting water with light energy to make high-energy chemical products (in the light reactions), which are subsequently used in the dark reactions (i.e. the Calvin cycle) to synthesise carbohydrates.

The author then delves into the logistics of light harvesting and the various pigments involved. As seen on many photosynthesis spectra, the complete set of photosynthetic accessory pigments absorbs across nearly the entire visible spectrum. With chlorophyll a alone, photosynthesis would be nowhere near as high-yielding in terms of energy, as chlorophyll a absorbs only in the red and blue portions of the spectrum.

Barber then continues into the quantum behaviour of the pigments, noting that large conjugated organic molecules show broad absorption bands because of the bonding nature of such molecules - transitions between the π and π* orbitals are accompanied by "changes in vibrational quantum states".

Barber also elaborates on the photosynthetic unit - the number of pigment molecules required per reaction centre to achieve charge separation. In 1932, Emerson and Arnold conducted research using short flashes of light and concluded that approximately 2400 chlorophyll a molecules were needed per molecule of O2 evolved; with four photochemical steps required to liberate one O2, this suggests that about 600 chlorophyll a molecules are needed to bring about one charge separation (Barber, 1978).

Barber also discusses the different types of coupling between energy donor and acceptor that can occur in electronic excitation energy transfer - strong, weak and very weak - with the energy transfer rate depending on the type of coupling.
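In the very-weak-coupling limit this is Förster's mechanism, where the transfer rate falls off as the sixth power of the donor-acceptor separation. A quick Python sketch; the Förster radius and donor lifetime below are illustrative guesses, not values from Barber:

```python
# Forster (very weak coupling) energy transfer: rate and efficiency vs distance.
# k_ET = (1/tau_D) * (R0/r)^6 ;  efficiency E = 1 / (1 + (r/R0)^6)
tau_D = 5e-9  # donor excited-state lifetime, s (illustrative value)
R0 = 5.0      # Forster radius, nm (assumed; typical order for chlorophylls)

for r in [2.0, 5.0, 10.0]:  # donor-acceptor separation, nm
    k_et = (1 / tau_D) * (R0 / r) ** 6
    eff = 1 / (1 + (r / R0) ** 6)
    print(f"r = {r:4.1f} nm: k_ET = {k_et:.2e} s^-1, efficiency = {eff:.3f}")
```

At r = R0 the transfer efficiency is 50% by construction, and it collapses rapidly beyond that - one reason the tight packing of pigments in the photosynthetic unit matters so much.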

The rest of the paper covers a huge array of topics in photosynthesis, including the difference between singlet and triplet states in pigment excitation and how this determines fluorescence or phosphorescence, various models of the photosynthetic unit, the "Z" scheme, and ATP production via a proton gradient (which provides the electrochemical potential driving ATP synthase).

In closing, I did find this read useful in broadening what I had already learnt about photosynthesis. There are also many resources in the appendix on energy transfer mechanisms that may be of use.

This journal article, again, is certainly not for the faint of heart, but it makes a very worthwhile bit of bedtime reading.

Thursday, 18 August 2011

Derivations of the Boltzmann Distribution

After having read a few different derivations of the Boltzmann distribution I have come to appreciate that some are far more intellectually satisfying than others! I will outline a few derivations below and comment on them, before discussing some of the applications of the Boltzmann distribution that usually go unnoticed.

The first derivation is from Schroeder. It begins by placing a small system (say, one atom) in thermal contact with a much larger reservoir and establishing that the probability of the system being in some (quantum) state s is proportional to the multiplicity of the reservoir (with the total energy fixed). He then uses S = k ln Ω to rewrite this in terms of the entropy, and applies the thermodynamic identity (at fixed volume and particle number). This gives the proportionality stated by Nelson. To normalise the distribution, one simply sums over all of these Boltzmann factors (over all states s, not energy levels!).
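For reference, a sketch of the key steps in my own notation (system in state s with energy E_s, reservoir entropy S_R, total energy E_tot fixed):

```latex
% Schroeder-style sketch. With the system in state s, the reservoir has
% energy E_tot - E_s and multiplicity Omega_R:
P(s) \propto \Omega_R(E_{tot} - E_s) = e^{S_R(E_{tot} - E_s)/k}
% The thermodynamic identity at fixed V and N gives dS_R = dE/T, so
S_R(E_{tot} - E_s) = S_R(E_{tot}) - \frac{E_s}{T}
% and hence the Boltzmann distribution, normalised by the partition function:
P(s) = \frac{e^{-E_s/kT}}{Z}, \qquad Z = \sum_s e^{-E_s/kT}
```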

This is not very mathematically rigorous, but it does lead to an intuitive understanding of where the distribution comes from. I find this derivation useful in that it highlights the importance of the reservoir in determining the state of the system.

The second is provided by Kittel and involves a similar process. Rather than rewriting the entropy using the thermodynamic identity, Kittel expands it as a Taylor series and uses the definition of temperature (actually, this is just a slightly more mathematical way of doing what Schroeder did).
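The Taylor-expansion step looks like this (again my notation, not a quote from Kittel):

```latex
% Expand the reservoir entropy to first order in E_s << E_tot and use the
% definition of temperature, 1/T = (dS/dE)_{V,N}:
S_R(E_{tot} - E_s) \approx S_R(E_{tot})
    - E_s \left(\frac{\partial S_R}{\partial E}\right)_{V,N}
  = S_R(E_{tot}) - \frac{E_s}{T}
\quad\Rightarrow\quad P(s) \propto e^{-E_s/kT}
```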

I like the added rigour, but find that it is a little too abstract in that the role of the reservoir is less explicit.

My favourite derivation of the Boltzmann distribution (to date) comes from a quantum mechanics textbook! Sakurai and Napolitano derive the Boltzmann distribution by maximising the entropy of a system subject to the constraints of constant (expectation values of) energy and particle number. This is achieved by the method of Lagrange multipliers.

This is also one of the most peculiar derivations: note that there is no reference to an external reservoir! The Lagrange multiplier method also doesn't immediately identify the temperature as the factor controlling the shape of the Boltzmann distribution: this must be added in by further arguments. I think its beauty derives from the fact that only very basic assumptions are made (i.e. fixed mean energy and particle number).
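For the curious, here is the skeleton of the maximum-entropy argument in my own notation (a paraphrase of the method, not a quote from Sakurai and Napolitano):

```latex
% Maximise S = -k \sum_s p_s \ln p_s subject to \sum_s p_s = 1
% (multiplier a) and \sum_s p_s E_s = \langle E \rangle (multiplier b).
% Setting the derivative with respect to each p_s to zero:
-k(\ln p_s + 1) - a - b E_s = 0
\quad\Rightarrow\quad
p_s = \frac{e^{-\beta E_s}}{Z}, \qquad Z = \sum_s e^{-\beta E_s}, \quad \beta = b/k
% Identifying \beta with 1/kT requires a further, separate argument.
```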

Where does the Boltzmann distribution pop up, even when we might not expect it?
-the concentration of vacancies/impurities in a crystal
-the rate at which a chemical reaction proceeds
-the temperature dependence of the viscosity of a fluid
-the decreasing density of the atmosphere with altitude (see the sketch after this list)
-the equilibrium distribution of colloidal particles
-the cooling of an atomic gas cloud by evaporation (to produce BEC!)
-any others?
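On the atmosphere example: the barometric formula is just the Boltzmann distribution with the gravitational potential energy mgh as the state energy. A quick Python sketch (isothermal atmosphere assumed, sea-level values from memory):

```python
import math

# Barometric formula as a Boltzmann distribution: n(h) = n0 * exp(-m g h / kT).
k = 1.381e-23  # Boltzmann constant, J/K
T = 288.0      # temperature, K (crude isothermal assumption!)
g = 9.81       # gravitational acceleration, m/s^2
m = 4.8e-26    # average mass of an air molecule (~29 u), kg

scale_height = k * T / (m * g)  # height over which density falls by 1/e
print(f"scale height ~ {scale_height / 1e3:.1f} km")  # ~8.4 km

for h in [0, 2_000, 8_848]:  # sea level, 2 km, roughly Everest's summit (m)
    rel_density = math.exp(-m * g * h / (k * T))
    print(f"h = {h:>5d} m: n/n0 = {rel_density:.2f}")
```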

My only question: why do we spend so much time learning the Boltzmann distribution when the majority of life processes occur at constant pressure and temperature (rather than constant volume and temperature)?

Tuesday, 16 August 2011

Avast ye dirty biological macromolecules!

Those who have seen me in the past few days will have realised that my body is currently playing unwilling host to a menagerie of viral and bacterial invaders, putting their feet on the couches, leaving the toilet seat up and generally making a mess of my finely-tuned metabolism.

Anyone who has been similarly afflicted will be familiar with the uncanny correlations between the light/dark transitions in the external environment and the behaviour of the piratical microbes (and viruses) within you. For instance, an increase in your internal body temperature and rate of coughing appears to be brought on by dusk.

I was wondering whether these effects are actually due to the invaders themselves or some form of perverse response mechanism on the part of my body, which inevitably leads us to ask if viruses are living `beings' or `creatures'.

A quick Google yields a number of interesting posts made by people from all walks: specialists, laypeople and people who just seemed to be there to laugh or jeer at everyone else (especially common on the less formal pages).

The general consensus appears to be that viruses are not alive, in the sense that we all heard in first year biology (viruses are obligatory intracellular parasites that cannot reproduce or metabolise when separated from a host). The basic argument is that since viruses cannot undertake the processes we normally associate with an `unambiguously alive' cell, they cannot be alive. Articles which expand on this are from Microbial Life Teaching Resources, Scientific American and Virology Down Under (here at UQ).

The most interesting page I found related to a virophage: a virus which infects other viruses! If a virus can infect another and produce a noticeable effect, does that not imply that both are themselves alive? Or is this just an exceedingly complex chemical reaction? Am I?

A second interesting page looked at the possibility of the eukaryotic nucleus arising from a virus rather than a bacterium. Does anyone know of a virus that has a lipid membrane instead of a protein coat?

Unfortunately, actually searching for the scientific reason why you feel worse at night just turns up a lot of articles on stigma in post-treatment patients and anxiety in adolescents. I shall have to wait, unless you have ideas.

Until then, goodnight to you from Captain Red-cilia and his parrot Polly.
James

PS: I shall deliver my verdict once I have mulled things over for a while longer!