Tuesday, 27 September 2011

*INSERT HUMOROUS BUT RELEVANT TITLE HERE*

I thought the title "The Hammett Equation" was a little bland for our blog, so I've left it up to your imaginations. Considering our last Friday meeting, I figured it would be a good idea to do a little reading on the Hammett equation.


Basically, the equation expresses an approximately linear relationship between the equilibrium constants and reaction rates of certain types of reactions, marrying up chemical equilibrium and kinetics quite nicely. Recall that the equation is as follows:
log(K/K₀) = σρ     or     log(k/k₀) = σρ

(where 'K' is the equilibrium constant for a reaction in which the substituent is R, 'K₀' is the reference constant where the substituent R is a hydrogen atom, σ is the substituent constant, and ρ is the reaction constant [which depends only on the type of reaction]. The lower-case 'k' and 'k₀' are the analogous rate constants. In both forms the equation applies to reactions of meta- and para-substituted benzene derivatives.)


Hammett's paper, entitled "Some relations between reaction rates and equilibrium constants", has useful information and examples regarding the equation (1).


There is no known universal relation that ties together both rate and equilibrium, but Hammett's comes close for acid/base reactions and those involving benzene derivatives. There are other relations connecting rate constants and equilibrium constants, for example:


log(k) = x log(K) + log(G)


where G and x are constants, k is the rate constant, and K is the ionization constant. This relation is relevant to the following type of reaction:
HA + S ---> Products + HA
which can be separated into the steps:
1. HA + S <--> SH+ + A-   and
2. SH+ + A- ---> Products + HA (where 2 is the rapid step).

Since step 2 is so rapid, the measured rate is that of step 1.


An example reaction is: 
RCOOCH3 + N(CH3)3 ---> RCOO- + N(CH3)4+


The paper shows that when log(k), the rate constant, is plotted against log(K), the ionization constant of N(CH3)3, approximately linear behaviour is observed. The hydrolysis of para/meta-substituted ethyl benzoate derivatives and the hydrolysis of para-substituted ethyl phenylacetate derivatives also give straight lines when the data are plotted. The rates of the aforementioned two reactions differ due to the steric hindrance in ethyl phenylacetate.
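(As an aside, for anyone who wants to see how ρ falls out of such a plot, here is a minimal Python sketch of the fit. The σ values are typical literature-style substituent constants and the rate constants are invented for illustration; none of these numbers come from Hammett's paper.)

import numpy as np

# Hypothetical Hammett analysis: fit log(k/k0) = sigma * rho.
# sigma values are illustrative substituent constants; the rate
# constants k are made up for the sketch.
sigma = np.array([-0.27, 0.00, 0.06, 0.23, 0.78])
k     = np.array([1.1e-4, 2.0e-4, 2.3e-4, 4.1e-4, 1.9e-3])  # s^-1

k0 = k[1]                          # reference compound: R = H (sigma = 0)
y  = np.log10(k / k0)              # should be linear in sigma

rho, intercept = np.polyfit(sigma, y, 1)
print(f"reaction constant rho = {rho:.2f} (intercept {intercept:.2f}, ideally ~0)")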


The paper ends by discussing the following reaction:
AB + C ---> A + BC
A theory proposed by London et al. (1929) suggests that the combination of C with B proceeds simultaneously with the dissociation of A from B. The height of the potential energy peak must be overcome while the combined kinetic energies of AB and C decrease; this process is a function of the dissociation energies. According to the paper, a relation between the height of the potential energy peak and the dissociation energies has not been established (Hammett, 1935).


Quite some time has passed since the paper was written, but it raises some interesting points regarding reaction types and whether or not they follow the Hammett equation.


References:
(1) Hammett, L. P. (1935). "Some relations between reaction rates and equilibrium constants". Chemical Reviews, 17(1), 125–136. Department of Chemistry, Columbia University.

Saturday, 24 September 2011

Amphiphilic Self-assembly

In chapter 8 Nelson briefly touches on the self-assembly of surfactants and phospholipids, a topic covered in some detail in a nano-science course I took last semester (CHEM3013). In this post I thought I'd expand on amphiphilic self-assembly a little.

The self-assembled structures covered by Nelson are not only observed with surfactants and phospholipids, they are found for almost all amphiphilic molecules in solution (amphiphilic meaning that the molecules have both polar and non-polar regions). The structures that amphiphilic molecules form are the result of the free energies associated with the solute and solvent system. The figure below illustrates how varied these structures can be when you change the relative concentrations of surfactant, polar solvent and non-polar solvent. (Note that what isn't included in this figure is the dependence of structure on surfactant chain length, headgroup properties or counter-ion concentration.)



As demonstrated by the next figure (below), these structures can be used as nano-engineering templates. (Apologies for the image quality, blogger seems to be reducing the resolution for some reason.)



Another flavour of amphiphile often used in nano-engineering is the block copolymer: a polymer with both polar and non-polar monomer units arranged in blocks. The blocks can be joined in various topological arrangements (i.e. combs, stars, branches, etc.) and will self-assemble into a vast number of structures depending on their environment. The figure below shows some of the structures observed for a simple linear block copolymer.

A lot of research is being done at the moment into modifying the solubility properties of the copolymers by slightly altering their environment (e.g. pH, temperature). That way the structure of a block copolymer in solution can be switched by small changes to the surroundings. If this can be achieved under biological conditions, many believe it could provide a viable method for targeted drug delivery.
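For those who like a number to hang the morphology on: a standard rule of thumb (it's in Israelachvili's textbook, not in Nelson's chapter) is the critical packing parameter p = v/(a0·lc), where v is the hydrophobic tail volume, a0 the optimal headgroup area, and lc the critical tail length. Below is a toy classifier using the usual textbook thresholds; the SDS numbers in the example are rough values from memory, not measurements.

# Toy morphology classifier based on the critical packing parameter
# p = v / (a0 * lc). Thresholds are the usual textbook ones; real
# systems (and the phase diagrams above) are considerably messier.
def aggregate_shape(v_nm3, a0_nm2, lc_nm):
    p = v_nm3 / (a0_nm2 * lc_nm)
    if p < 1/3:
        return p, "spherical micelles"
    elif p < 1/2:
        return p, "cylindrical (worm-like) micelles"
    elif p < 1:
        return p, "flexible bilayers / vesicles"
    else:
        return p, "planar bilayers or inverted phases"

# Rough numbers for a single-tailed surfactant like SDS:
print(aggregate_shape(0.35, 0.62, 1.7))   # p ~ 0.33 -> spherical micelles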

Friday, 23 September 2011

Problem Set 2

The problem set questions for chapter 6 have now been marked. Once Seth has approved them I will send out the commented PDFs to their respective authors (and yes James, your hard copy has also made it into the digital age thanks to my trusty scanner!) Even though I haven't signed any confidentiality or non-disclosure agreements, I am a very honorable person and will thus not be leaking your assignments to Assange.

There were two questions which no two people answered in the same way; nevertheless, I thought most were correct. They were 6A c. and 6B.

6A c. essentially asked you to show that at maximum entropy T1 = T2 for connected systems 1 and 2. The solution given by Nelson is a single line and combines equation 6.8 (p.203) with the result from 6A b., namely that E/N = 3kT/2, to get 0=1/T1 - 1/T2.
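(For completeness, the chain of reasoning, in my words rather than Nelson's: with the total energy fixed, dE2 = -dE1, so maximising S_tot = S1 + S2 over E1 gives

0 = dS_tot/dE1 = dS1/dE1 - dS2/dE2 = 1/T1 - 1/T2,

using dS/dE = 1/T for each subsystem.)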

6B is something of a controversial question for me. The first two sentences make sense: show that entropy is extensive while temperature is intensive. The next two sentences, on the other hand, seem to be asking a completely different question, and even after going through Nelson's solution I still have no idea why he's used the approach he has. Given the ambiguity of the question (and of Nelson's solution) I've effectively marked it based on the first two sentences alone, that is, showing entropy to be extensive and temperature intensive.

The Concept of an Activation Energy

What is an activation energy? This week we talked about reaction equilibria and rates, which are related by detailed balance.

What is an "activated population"? The population at the barrier top? What if we take another approach and say simply that the activated population is the population of "reacting reactants".

The following link is to a paper that takes this definition, assumes that a linear rate relationship exists, and derives the existence of a barrier. This is opposite to the normal direction, where one often starts with the assumption of a barrier.

Of course, it relies on detailed balance as a core assumption, just as any argument going the other way must also.

Levine, R. D. (1979). Free energy of activation. Definition, properties, and dependent variables with special reference to "linear" free energy relations. The Journal of Physical Chemistry, 83(1), 159–170. doi:10.1021/j100464a023


Here is a (very profound) quote:

"The mathematical definition is given a physical inter- pretation by the assumption that in a series of “similar” reactions, structural changes in the reactants change the reaction rate primarily via the change in the dG ” of the process. This interpretation is saved from being a circular one (“a series of similar reactions is one for which the Brdnsted relation is a useful measure”, “the Bransted slope is a useful measure for a series of similar reactions”) by the empirical observation that there are indeed many known examples of the utility of the concept."

In fact, I would assert that most of what we do involves arguing in circles (wherein the circularity is often given the lofty name of "internal logical consistency"), and that the only thing that ever saves us is the empirical observation that what we are doing is sometimes useful.

Thursday, 22 September 2011

Two-State Systems

The first part of this week's installment of Nelson got me thinking about some of the two-state or two level models which we see in physics...

The first is of course the two-state chemical reaction, where the states are initial and final conformations, as discussed in the text.

Another good one is the double-well system, where we literally have two potential wells that overlap (so there is an unstable equilibrium somewhere between them). The stereocilia that we looked at last week had this property. It turns out that quite a lot of systems have it too.
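For concreteness, here's a minimal quartic double well in Python (toy numbers of my own, nothing to do with stereocilia in particular):

import numpy as np

# Quartic double well V(x) = -a*x**2 + b*x**4: two stable minima at
# x = +/- sqrt(a/(2b)) with an unstable equilibrium between them at x = 0.
a, b = 1.0, 0.25
x = np.linspace(-2.5, 2.5, 501)
V = -a * x**2 + b * x**4

x_wells = np.sqrt(a / (2 * b))     # positions of the two minima
barrier = a**2 / (4 * b)           # height of the hump above the wells
print(f"wells at x = +/-{x_wells:.2f}, well depth = {V.min():.2f}, "
      f"barrier height = {barrier:.2f}")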

One class of examples is the artificial double well, used in optical tweezing and atom trapping. The idea is that light is used to create a spatially-dependent potential energy, so that particles are drawn toward the region of maximum intensity (for molecules this occurs if the light is below resonant frequency) or repelled from it (as happens if the light frequency is above resonance). The particles jump from trap to trap because of thermal agitation, or, if cold enough, purely because of quantum tunneling (which makes these systems a little different from their classical cousins). We can see weird effects due to this: periodic transfer of the population between the wells ('Rabi regime'), or, if interactions are strong enough, macroscopic quantum self-trapping, where the population of one well is maintained above that of the other even with no asymmetry in the actual applied potential (the internal interaction potential becomes strongly asymmetric since it depends on the number density squared).

In non-biological systems we tend to see more in the way of extended collections of wells, as in a solid chunk of material. Here the potential is periodic (essentially), which accounts for the conduction and heat capacity properties of some solids.

From a more biophysical standpoint we have the (strongly asymmetric) double wells associated with free energies of things such as protein folding and the dephosphorylation of ATP.

Does anyone have any others to add? I'm sure that there are plenty I have missed!

Tuesday, 20 September 2011

Horrible Hair


It's no secret that the majority of women colour their hair. Fully embracing my lack of a Y chromosome, I have been colouring my hair for years. Recently, my peers, work colleagues, and biophysics crew may have noticed that I'm sporting a slightly more tangerine mop, no thanks to a certain apprentice hairdresser who shall remain nameless. So, how does it work? I'm glad you asked.

As we all know, hair is composed of keratin, like our fingernails. The colour is a result of the ratio of two pigments, eumelanin and phaeomelanin. Eumelanin gives dark colours like brown or black, while phaeomelanin is responsible for the blonde and red colours. When we get to a ripe old age and these melanins are no longer present at all, that's when hair goes grey/white, unfortunately. Hair colouring has been going on for hundreds of years, with earlier methods consisting of simply coating the shaft of the hair in a natural pigment, for example henna. These methods are of course temporary.


Hydrogen peroxide strips the hair of its colour in an irreversible reaction. When hydrogen peroxide is applied to the shaft in an alkaline solution (usually of the weak base ammonia), the H2O2 oxidizes the melanin molecule, liberating sulphur. The oxidised form of either melanin is colourless. The hair does not become transparent, of course, as keratin is naturally light yellow, so bleached hair is often yellow-blonde. Why the need for the basic environment? Without the ammonia, the lightening agent would be unable to react with the pigments, which lie in the cortex of the shaft, under the hard cuticle of the hair. The basic environment allows for openings between the keratinocytes of which the cuticle is comprised. Once the hair is stripped of its natural pigment, new colours can be deposited into the hair shaft, and new chemical bonds can be formed between the colourant and the cortex of the shaft. Treatments are subsequently applied to close the cuticle, "sealing" the colour in the cortex. (Helmenstine, 2011).

Saturday, 17 September 2011

DLVO

During our discussion this week I mentioned DLVO theory, named after the work of Derjaguin and Landau, and of Verwey and Overbeek. The theory (well, chemists call it a theory, but physicists would probably call it a model) describes the potential energy of two colloidal particles in solution as a function of their separation distance. The total potential is expressed as the sum of attractive and repulsive terms, where the attractive term is solely due to van der Waals forces and the repulsive term is due to the interaction of the particles' diffuse double layers.

The derivation of both the attractive and repulsive terms involves a number of approximations, without which the final result would be truly horrendous and unusable. However, in spite of the approximations, it works surprisingly well, both as a qualitative guide and at making quantitative predictions. Due to its successes it's very popular with colloid chemists and nano-tech people; so popular, in fact, that it even has its own Facebook page!
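To give a flavour of the model, here's a minimal sketch of one common textbook form: two equal spheres in the Derjaguin and weak-overlap (linearised) approximations. All parameter values are merely plausible orders of magnitude I've picked for illustration, not fit to any real system.

import numpy as np

# DLVO potential for two equal spheres of radius R at surface separation h:
#   V_vdW(h) = -A*R/(12*h)                           (van der Waals attraction)
#   V_edl(h) =  2*pi*eps*R*psi0**2*exp(-kappa*h)     (double-layer repulsion,
#                                                     weak-overlap approximation)
eps   = 78.5 * 8.854e-12   # permittivity of water, F/m
A     = 1e-20              # Hamaker constant, J (typical order of magnitude)
R     = 100e-9             # particle radius, m
psi0  = 25e-3              # surface potential, V
kappa = 1 / 10e-9          # inverse Debye length, 1/m (10 nm screening length)

def V_dlvo(h):
    return -A * R / (12 * h) + 2 * np.pi * eps * R * psi0**2 * np.exp(-kappa * h)

h  = np.linspace(0.5e-9, 50e-9, 500)
kT = 1.381e-23 * 298
print(f"repulsive barrier ~ {V_dlvo(h).max() / kT:.0f} kT")

With these numbers the barrier comes out at a few tens of kT, which is the usual hand-waving explanation for why such a suspension is kinetically stable against aggregation.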

If you would like more information send me an email and I'll send you some lecture notes, the information I've found on the web is rather scattered and not very helpful.

Molecular Crowding

I was surprised to read in Chapter 7 that molecular crowding can help the self-assembly of macromolecules, as I seemed to remember reading that crowding promotes the formation of cytotoxic protein aggregates. After a brief search of the literature it turns out (surprise, surprise) that the issue is somewhat complicated.

It seems that the key factor is the size of the molecules which are doing the crowding. My confusion was due in large part to a clash of terminology. Crowding as described by Ellis and Minton (2006) is a problem encountered by proteins as they fold in a cellular medium and is due to the presence of other macromolecules. It has also been suggested that protein chaperones play a key role in abating the effects of crowding on protein folding.

I suspect that the crowding effect described by Nelson and that by Ellis and Minton exist on a continuum, with some cellular constituents providing a "positive" contribution to folding while others are more inhibitory. I would also expect that both effects have a strong dependence on the respective concentrations.



------------------------
Ellis, R.J. & Minton, A.P. Protein aggregation in crowded environments. Biol. Chem.
387, 485–497 (2006).

Thursday, 15 September 2011

We have a room!

Hi everyone. We've been booked into the room down the BEC end of the Annexe (where we ended up last week) for the rest of semester.
See you there tomorrow.

Wednesday, 14 September 2011

Fluorescence Imaging Pop Science Article

Hi all. I have (very nearly) completed my article on fluorescence microscopy, as promised. I hope you will find it informative and entertaining!

There will hopefully be another picture or two on the way. If you read it before the due date I would love to hear your thoughts so it can be improved.

Monday, 12 September 2011

Some Basic Hamiltonian Mechanics

Alrighty, now that I have the power of Google Docs at my command I shall post some more things...
It occurred to me on Friday that I was probably the only one in the room (apart from Seth) who had encountered the generalised equations of motion we were talking about (at least as part of coursework), so I thought that a brief introduction to classical mechanics might be appreciated. This is basically drawn from PHYS2100 course material and some reading I did over summer for a research project. I'd appreciate thoughts!
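As a tiny taster of what's in the write-up (my own toy example, not from the PHYS2100 notes): for a harmonic oscillator with Hamiltonian H = p²/2m + kq²/2, Hamilton's equations are dq/dt = ∂H/∂p and dp/dt = -∂H/∂q, and you can step them forward numerically:

# Hamilton's equations for H = p**2/(2m) + k*q**2/2, integrated with the
# symplectic Euler method (which respects the phase-space structure far
# better than naive Euler does).
m, k, dt = 1.0, 1.0, 0.01
q, p = 1.0, 0.0                # start displaced, at rest; H = 0.5
for _ in range(1000):
    p -= k * q * dt            # dp/dt = -dH/dq = -k*q
    q += (p / m) * dt          # dq/dt =  dH/dp =  p/m
print(f"q = {q:.3f}, p = {p:.3f}, H = {p**2/(2*m) + k*q**2/2:.4f}")

Run it and you'll see the energy stays pinned near its initial value of 0.5, which is the whole appeal of symplectic integrators.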

Nelson, Ch 6 Ruminations

(No, I didn't eat chapter 6!)
Upon re-reading the first part of the chapter, I also recognised the interesting analogy Nelson makes with information processing. Particularly, the idea that a completely random sequence requires the most information to characterise it, and thus has the most disorder. Here, Nelson uses a weather example, where runs of weather can be described rather than each individual day. Although this was a nice example, I don't see why you can't represent the coin-toss data in the same way! Perhaps a better example for the random thing could be chosen; though it doesn't represent a completely random set of data, the length of people's forearms on a train is an example which is much harder to describe with runs than coin tosses (due to the continuous nature of the data measurement). Of course, you can describe this with a Gaussian distribution if the data are normal (as they generally are in real life), but for the purposes of this example, 57 or so arms is still not too large to describe in terms of a random length (57 is the number of coin tosses used in one of the figures). Or we could assume that a certain town has non-normally distributed arm lengths....

But (sorry James) this discussion of sequences/runs brought another thought to mind: if a sequence of coin tosses has no runs of similar results at all (e.g. HTHTHTHTH.... = (HT)^n), the sequence can be described (in this two-variable case) with two amounts (avoiding the annoying use of 'bits'!) of information: what the repeated element is and how many of them there are (plus an extra one for an odd number of results, if we don't assume vacuum and sphericity). Thus, the sequence requiring the most information to describe would be a trade-off between having no runs but still having the broad distribution of events (i.e. trade-off between HTHTHTH... and HTTHTTTHTTTT... which is hard to describe in terms of (HT)^n; note: just a simple example, not rigorous!). I found this an interesting concept.
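To make the trade-off concrete, here's a quick and deliberately crude experiment (my toy cost metric, not Nelson's): charge each run of identical tosses two units, one for the symbol and one for the count. A run-length coder finds the alternating sequence maximally expensive even though, as argued above, a two-number description exists, while a genuinely random sequence is expensive under either scheme.

from itertools import groupby
import random

# Crude description cost: each run of identical symbols costs two units
# (the symbol plus its count), so few long runs compress well.
def run_length_cost(seq):
    return 2 * sum(1 for _ in groupby(seq))

print(run_length_cost("H" * 29 + "T" * 28))    # two runs: cost 4
print(run_length_cost("HT" * 28 + "H"))        # alternating: 57 runs, cost 114
print(run_length_cost("".join(random.choice("HT") for _ in range(57))))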

I also 'stumbled across' the expanded explanation of one of the 'T2' segments in the first section: that the most information can be transferred with the most disordered sequence. Given Nelson's discussion, I disagree that this is unintuitive, but others may (and are even allowed to) disagree with me. This is another interesting point, though: maximal disorder still requires some 'ordering' so that the message can be read at the end! This is, perhaps, another conceptual dilemma of disorder. Nelson's case in Ch 6.1 is that disorder is (basically) the amount of information required to describe a system, but the more statistical-mechanics view is that disorder is (basically) the number of indistinguishable states that share the same distinguishing feature (e.g. the number of microstates for one macrostate, energy). While these definitions are not opposed to each other, they do present two quite different ways of looking at disorder, and this is perhaps why disorder is a difficult concept to grasp.

PS: Has anyone else realised that dilemma = di-lemma = two lemmae? Or two lemmata if you are Greek. :)

Candidly Candida

Hello everyone,
Many will have heard of Candida albicans, the most widespread fungal pathogen among humans and most commonly known as the fungus that causes oral and vaginal thrush. In these normal cases the fungus is generally non-fatal, but C. albicans has a more sinister role in hospitals: it infects patients via plastic surfaces that are implanted into the body (like catheters, prosthetic joints, heart implants, etc.). According to the ABC news article, nearly half of the more severe cases result in death - and even without fatal consequences, I'm sure we all agree that a fungal infection in your heart is going to be unpleasant! Of course, as is generally the case, the most severe infections can't be treated effectively (being a fungus, i.e. a eukaryotic cell, C. albicans can't be treated with antibiotics), but some researchers in the UK have characterised a set of glycoproteins (glycoprotein = sugar + protein, for those unaware) that allow C. albicans to infect humans so effectively. They are called agglutinin-like sequence (Als) glycoproteins and they allow C. albicans to adhere to a wide variety of host cell surface proteins (a major virulence factor of C. albicans). There are multiple Als proteins, and they may be further modified with glycans (i.e. sugars) as well.

The researchers of the paper, Salgado et al. (2011), solved the structures of Als bound with a target protein to investigate the specific mechanism of binding which allows such broad specificity. The main feature of Als proteins is a lysine-59 residue, which is buried in a cleft. The positive side chain of the lysine forms a salt bridge with a carboxylate (COO-) C-terminus of an incoming peptide. This salt bridge stabilises the incoming protein in place and allows the nearby water molecules and protein side chains to form hydrogen bonds with the incoming peptide, 'locking' it in place; the water molecules are important to the broad specificity of Als because they can rearrange to allow more or less space for side chains on the incoming peptide and can form hydrogen bonds in more optimal locations. Thus, the multiple hydrogen bonds keep C. albicans tightly attached to the target cell. This mechanism also provides an idea for a method to inhibit C. albicans infection: by blocking the lysine-59 residue, the whole Als protein binding mechanism is inhibited.

The main toxic effects of C. albicans come from its ability to form 'biofilms'. These are collections of fungal cells that bind together into 'nanoadhesomes' (term from the paper). These biofilms are often implicated in the virulence of bacterial and fungal infections. The biofilm formation is also linked with Als function; another domain of Als facilitates these interactions between different cells.

Als is a busy protein: it has also been suggested that Als can bind to fucose (a sugar) and fucose-related structures (e.g. glycoproteins), further increasing its ability to adhere to cells.

Finally, upregulation of Als (i.e. stimulus to make the cell produce more Als) occurs with upregulation of a nearby (on the chromosome) protease: the authors suggest that this could result in a 'tag-team' effect where the protease 'liberates' the C-terminus of a target protein and the Als binds it, allowing for more binding sites and stronger adhesion.

This is an interesting protein, especially from the authors' perspective, because Als doesn't require surface complementarity to bind, just a free C-terminus - something actually quite rare among binding interactions. Also, Als is very flexible, not just in ligand specificity but also physically, giving it the ability to change conformation easily to accommodate more ligands.

Sunday, 11 September 2011

Feedback on Assignment 1

Hi all: I have marked assignment one and will give these to Seth before returning them on Friday. I thought it would be nice to have some general feedback, with more detailed feedback on your individual assignments.

I will put it on here just as soon as I figure out how... any hints? (There's a blurb about classical mechanics to come too).

PS: I have now figured out how to load PDFs to Google Docs... here is the feedback.

Algal Power

I stumbled across some interesting research that's taken place as a collaboration between researchers from Clemson University and Georgia Tech. Unfortunately, I was unable to find the literature for it anywhere, so I was forced to scrounge around on ScienceDaily and New Scientist-type sites for information.


Basically, a material called alginate has recently been extracted from fast-growing brown algae for the uniformly distributed carboxylic acid groups it contains, and has been used as a "binder" material in lithium-ion battery electrodes. In short, binder materials suspend graphite (or possibly, in future, silicon) particles in space so that they're able to interact with a liquid electrolyte.


The researchers decided that it was logical to study organisms that inhabit highly ionic environments when searching for possible new binder materials.


So far, anodes have been manufactured from carbon. There are hopes that silicon anodes may increase efficiency by up to ten times, but so far no silicon anode has proven stable (allowances must be made for the expansion and contraction of the Si particles).


This research opens the door to many possibilities, such as cheaper, longer-lasting batteries for mobile phones, notebooks, etc., as well as greater energy storage. The need for toxic materials in battery manufacture may also be eliminated.

Friday, 9 September 2011

The Estimation of Inverse Temperatures and Other Lagrange Multipliers

This paper discusses the mathematical formulation of the problem of estimating the temperature of a system from a sampling of its energies:

Tikochinsky, Y., & Levine, R. D. (1984). Estimation of inverse temperature and other Lagrange multipliers: The dual distribution. Journal of Mathematical Physics, 25(7), 2160–2168.

Chapter 6 Exercises

Your turn 6A, 6B, and 6D
Questions 6.6 and 6.7

Enjoy!

-Seth

Occam's Rug?

Looking through the latest volume of the Annual Review of Biophysics I came across an interesting piece by Ido Golding: Decision making in living cells: lessons from a simple system. The article looks in great detail at the E. coli and lambda bacteriophage system, showing that experimental results fit very nicely with theoretical and simulation-based predictions. Instead of describing the paper in any detail (a job done very well by the paper itself) I thought I'd highlight some remarks made by Golding which caught my attention.

One of the paper's core arguments (as per the title) is that much can be learned about decision making at the cellular level from a simple model system. However, the closing paragraph reads as follows:

...before applying the principles learned here to higher organisms, we must beware: As physicists studying living systems, we are always in jeopardy of over assuming universality. Against that ever-present temptation, one has to keep in mind the distinction between Occam’s razor and Occam’s rug: The former, of course, is the guiding rule for physicists, who will always choose the simplest, most universal explanation for an observed phenomenon. The principle of Occam’s rug, on the other hand, states the following: When studying a living system, a simple elegant narrative often implies that too much of the data was swept under the rug. In other words, always be wary of claims of simplicity and universality in biology.

I suspect the tension between Occam's razor and Occam's rug is at the heart of all biophysical models.

Thursday, 8 September 2011

Mutual Information

Chapter 6 of Nelson introduces Shannon’s formula as a method for determining the information entropy contained within a string of words; an analogous problem to quantifying the entropy of a physical system. I was therefore not surprised to learn that Claude E. Shannon drew on much of the work of Boltzmann and Gibbs when devising his information theory. However, instead of focusing on this aspect of Shannon’s work (i.e. the connection with thermodynamics) I thought I’d give a brief overview of another of his legacies, namely, mutual information. What connection could this possibly have to biophysics I hear you ask? Well it is of critical importance to understanding the encoding, decoding, sending and receiving of information in the brain; a problem being tackled by biophysicists around the globe. (Not necessarily on a neurophysiology level but at a “we’re not scared of the maths” level.)

Shannon proposed that the key features of any information transmission system are:

[Figure: Shannon's schematic of a general communication system: information source -> transmitter -> channel -> receiver -> destination, with a noise source feeding into the channel]

where the noise source is modeled as an external rather than internal entity, but it is effectively the same thing.

As Nelson points out, the information entropy H for a distribution P(r) is a measure of the amount of information one can expect to gain on average from a single sample of that distribution. Quantitatively this takes the form:

H = -Σ_r P(r) log₂ P(r)
Let's take r to represent the received signal at the destination. This description of information only holds if there is perfect transmission, but all transmission introduces noise, which in turn causes a loss of information, so something is missing. One method of quantifying the noise entropy is to calculate how much of the response entropy comes from the variability in the response to the same signal, averaged across all signals. The entropy of the responses to a given signal s is given by:

H(r|s) = -Σ_r P(r|s) log₂ P(r|s)
and now averaged over all stimuli:

H_noise = Σ_s P(s) H(r|s) = -Σ_s P(s) Σ_r P(r|s) log₂ P(r|s)
The mutual information is then given by I = H - H_noise, which can be shown to be:

I = Σ_s Σ_r P(s,r) log₂ [ P(s,r) / (P(s) P(r)) ]
This result has a very nice property: if the signal and response are independent then P(r,s) = P(r)P(s), and since log(1) = 0 we get I = 0, which is what we would expect.
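As a sanity check on these formulas, here's a minimal numerical version using a toy joint distribution (numbers invented; rows are signals s, columns are responses r):

import numpy as np

# Mutual information I = sum_{s,r} P(s,r) * log2( P(s,r) / (P(s)*P(r)) )
P_sr = np.array([[0.4, 0.1],
                 [0.1, 0.4]])      # a noisy but informative channel

P_s = P_sr.sum(axis=1)             # marginal over responses
P_r = P_sr.sum(axis=0)             # marginal over signals

I = sum(P_sr[i, j] * np.log2(P_sr[i, j] / (P_s[i] * P_r[j]))
        for i in range(2) for j in range(2))
print(f"I = {I:.3f} bits")         # an independent P_sr (outer product of
                                   # the marginals) gives exactly I = 0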

It should be stressed that there are many nuances to “optimal” information processing in our brains as different types of signals have different priorities. For example, while walking through a jungle at night, mistaking a cluster of leaves for a tiger is much safer than mistaking a tiger for a cluster of leaves. Bias and redundancy are as much a feature of our information processing system as optimizing mutual information.