Wednesday, 31 August 2011

Nanodiamonds for Biological Imaging

This week I am away because I am attending the IQEC/CLEO Pacific Rim conference in Sydney (an international laser physics, optics and quantum optics conference). This morning I went along to a talk (by one Varun Sreenivasan) about imaging within live cells using colour centres in nanodiamonds, and thought it was rather interesting and neat. The basic concept is to replace organic dyes and fluorescent proteins with luminescent diamonds!

Diamonds naturally contain some concentration of impurity atoms which are captured during formation of the crystal. For artificial diamonds, usually synthesised by chemical vapour deposition (typically from methane) or by detonation of an explosive compound, the principal impurity is nitrogen. If the crystal is irradiated with ionising radiation (gamma rays or alpha particles in particular), carbon atoms are displaced from the crystal, leaving vacancies; at high temperatures these are able to diffuse. If a vacancy is `captured' by a nitrogen impurity (which is covalently incorporated into the crystal), the new compound entity is referred to as an NV (nitrogen-vacancy) centre.

NV centres are neat because they essentially behave like an atom. By this I mean that they have transitions in the optical frequency range, allowing for optical detection of the centres. Specifically, excitation in the green (532 nm) gives fluorescence in the red (around 637 nm). They also have an interesting electronic structure: instead of singlet ground and excited states with a triplet intermediate, they have triplet ground and excited states with a singlet intermediate. A singlet state involves two electrons with a total spin angular momentum of zero, for a total of one spin projection state, whereas the triplet has net spin one and so has three spin projection states. This means that the NV centre has interesting spin properties which can be exploited for imaging and magnetometry (as it turns out, they might also be useful in nanothermometry and single-spin sensing, or miniaturised NMR/MRI!).

The advantages of NV nanodiamonds over traditional fluorophores are threefold. Firstly, nanodiamonds are very inert chemically and biologically, meaning that they are not cytotoxic or carcinogenic like some existing options. Secondly, the surface chemistry of nanodiamonds is flexible, so there are opportunities to attach specific proteins, functional groups or the like to them. Finally, nanodiamonds are small, bright and quite photostable: they do not bleach under continued exposure, as a fluorescent protein does, or blink (show irregular variations in emission intensity) to the same extent as a quantum dot (it is thought that the reduced blinking in NV diamonds is because electronic excitations are localised to the NV site, whereas in a quantum dot the excitation is delocalised across the whole dot).

This presentation concerned labelling nanodiamonds with somatostatin (a regulator which interacts with GPCRs to help drive blood pressure homeostasis) to cause specific cells to endocytose them. This was done using a `lego-like' approach that can be readily extended to other functionalising groups, compounds or regulatory molecules. Rather than relying on covalent attachment or weaker adsorption, a protein-protein interaction was used to attach molecules to the crystal surface. The proteins barstar and barnase interact quite strongly (for a non-ionic, non-covalent interaction) and `clip together', forming the basis of a method for attaching different compounds to the diamond surface. This is quite stable and can be extended relatively easily to a variety of compounds of interest.

When somatostatin binds to the cell membrane of the target and initiates endocytosis, the entire diamond is drawn inside (these are 30-40 nm in size, although many people are now looking at sizes of 4-5 nm) and the three-dimensional position of the crystal can be tracked. In the presence of a magnetic field, the spin-field interactions mean that even the orientation can be tracked! Another talk expanded on this... but I will leave that for another time!

As a final comment: I thought it was impressive that the body will actually renally clear these nanodiamonds so long as they are below 8nm in length!

The whole idea is quite interesting and provides a neat quantum/biology interface too. Watch this space!

Special Mould

Whenever one of the pipes in the kitchen leaks, I turn to epoxy to seal the leaking area. Now imagine something like this being applied inside a real human body. A new heat-sensitive gel and glue combination has been introduced in the realm of cardiovascular surgery. The special mould enables blood vessels to be reconnected without puncturing them with a needle and thread. The reason behind the creation of this substance is the difficulty of suturing minuscule (~1 mm wide) blood vessels.

The process of the discovery of the glue is as follows:

"Sutures work by stitching together sides of a blood vessel and then tightening the stitch to pull open the lumen, or the inner part of the vessel, so the blood can flow through. Gluing a vessel together instead would require keeping the lumens open to their full diameter — think of trying to attach two deflated balloons. But dilating the lumen by inserting something inside introduces a wide range of problems, too.

Gurtner initially thought about using ice to fill up the lumen instead, but that meant making the vessels extremely cold, which would be too time-consuming and difficult on the operating table. He approached an engineering professor, Gerald Fuller, about using some kind of biocompatible phase change material, which could easily turn from a liquid to a solid and back again. It turned out Fuller knew of a thermo-reversible polymer, Poloxamer 407, that was already FDA approved for medical use.

Working with materials scientists, the team figured out how to modify the polymer so that it would become solid and elastic when heated warmer than body temperature, and would dissolve into the bloodstream at body temperature. In a study on rat aortas, the team heated it with a halogen lamp, and used the solidified polymer to fill up the lumen, opening it all the way. Then they used an existing bioadhesive to glue the blood vessels back together.

The polymer technique was five times faster than the traditional hand-sewing method, the researchers say. It even worked on superfine blood vessels, just 0.2 millimeters wide, which would not work with a needle and thread. The team monitored test subject rats for up to two years after the polymer suturing, and found no complications."

Given the pace of stem cell research, I don't see why it couldn't be combined with this discovery as well.

Thoughts?

Source:

http://www.popsci.com/science/article/2011-08/new-gel-glue-method-rejoins-cut-blood-vessels-better-stitches

Liquefaction: Cremation by liquid

Apparently, cremation devices use a lot of energy and release a fair amount of carbon emissions, so a new alternative to cremation (known as the Resomator) has been introduced.

A description of how the device works is as follows:

"Created by a Glaswegian company, the Resomator submerges bodies in a potassium hydroxide solution in its steel chamber, then pressurizes (to about ten atmospheres) and heats (to over 350 degrees F) the solution for about three hours. After that, the resulting liquid is simply poured into the regular sewage system--it apparently poses no environmental risk and has passed Florida's undoubtedly strict laws for this sort of bio-disposal. Bones remain and are pulverized to ash in the usual way, and any metal bits (including mercury and any prostheses) are retained to be disposed of or recycled in a more responsible way."

I find it intriguing that it takes such a high pressure and temperature for human flesh to be "dissolved" in the solvent. I imagine our muscle and skin ending up in a flaky, liquid-like state (the compliment one makes after eating a freshly sliced salmon fillet) and detaching easily, as when a chicken is cooked or boiled. Disgusting, but I'm sure there are lots of things to discuss about this topic.

Thoughts?

Source:

http://www.popsci.com/technology/article/2011-08/florida-funeral-home-debuts-alternative-cremation-liquefaction



Purcell's Paper

A lecturer of mine (Timo Nieminen) put me onto a transcript of a talk originally given by E. Purcell in 1976, entitled "Life at low Reynolds number". This surprisingly conversational piece lays out quite a lot of the physics of cellular motion which we have already encountered in our biophysics `careers' thus far. Nonetheless, he draws some interesting analogies and provides some great examples of biophysical reasoning.

The content begins with a discussion of what it means to be at low Reynolds number and progresses to tackle nonreciprocal motion (the Scallop Theorem) and the advantages of swimming to bacteria. The very final section was most interesting, because it describes why there appears to be a minimum `run' distance in the bacterial `run and tumble' exploration of the world.

Recall that bacteria undergo chemotaxis i.e. they are able to respond to chemical gradients in their surroundings by undergoing a biased random walk up or down the concentration gradient. The bias is introduced by changing the path length distribution so that if the gradient is favourable the `run' phase lasts longer than usual, and if it is unfavourable it is shorter. Observations show that there is a lower limit to the length that is moved even if the gradient is unfavourable. Why?

Purcell goes on to show that if you can't outrun diffusion then there is no point in moving at all... which implies that there is some minimum distance that you must move in order to even sense that the chemical gradient exists! He shows that this matches the minimum step size found experimentally.
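As a rough numerical check (a back-of-the-envelope sketch of my own, in Python, with illustrative values rather than Purcell's exact figures): to benefit from swimming at speed v, the bacterium must travel further than diffusion carries the molecules it is trying to sample, which gives a minimum useful run of order D/v.

# Minimum useful run length for a swimming bacterium, l ~ D/v
# (illustrative values assumed by me, not Purcell's exact numbers)
D = 1.0e3   # diffusion constant of a small nutrient molecule, in um^2/s (~1e-5 cm^2/s)
v = 30.0    # typical bacterial swimming speed, in um/s

l_min = D / v   # below this distance, diffusion beats swimming
print(f"minimum useful run length ~ {l_min:.0f} micrometres")   # ~30 um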

There is also some neat discussion of the discovery of rotary motors in bacteria, which is such a well-accepted result nowadays that I hadn't even realised that there was controversy about it not long ago.

This is a well-written and interesting piece which I believe neatly sums up what biophysics is all about! For your interest, Purcell was also instrumental in the invention of NMR.

Saturday, 27 August 2011

Haemochromatosis: Iron, provider and poisoner

Hello everyone,
Dr Seth and I were discussing a disease known as haemochromatosis on Friday, after the lecture. Essentially, this disease results in an iron-overload disorder, a potentially fatal condition. Haemochromatosis can have various causes, but it is generally observed in people with a homozygous cysteine-282-tyrosine (C282Y) mutation in the so-called HFE (High Fe/iron) gene. This results in the body perceiving that it is perpetually deficient in iron, and so iron uptake is continually maximal.
This in itself would not be a major problem if the body had mechanisms to excrete iron: this is not the case, however! In humans, iron cannot be removed from the body except through its use in proteins (etc.). Evolutionarily speaking, this is likely because our bodies need to do a lot of work to extract iron from the form normally found in our environment, essentially rust. Most of our iron must thus be reduced before it can be incorporated into the body. Some creatures (e.g. crustaceans) have blue blood because they use copper instead of iron as their oxygen carrier; copper is found in a usable form much more readily than iron, so it is a more efficient metal to use. Thus, in people with iron-overload disorder, iron levels keep building. This can result in death (more on that later).
The body uses iron primarily as an oxygen carrier in haemoglobin, but it also has other roles. One of the major side effects of iron's presence in the body is the creation of free radicals. Iron in the 2+ ionic form is particularly good at reducing hydrogen peroxide, a by-product of many cellular reactions. The chemical reaction can be summarised as Fe(II) + H2O2 --> Fe(III) + OH- + ·OH. The middle dot next to the second hydroxyl group signifies that this is a free radical: the hydroxyl group now has a single unpaired electron. Free radicals are very dangerous to the body because they can interfere with many, many reactions (I will describe some of these in another post); their main problem arises from the fact that they can create other free radicals, exacerbating the issue. This is known as Fenton chemistry, and the resulting damage is called oxidative stress; free radicals can cause DNA damage and cellular dysfunction because they are so reactive and interfere with so many reactions.
So, the problem with excess iron is the extra creation of these free radicals! This is partly why antioxidants are so famous; they are supposed to stop this from happening in normal life. Fortunately, the body is very good at doing this on its own, except when there is so much iron present that it cannot cope. Thus, most treatments for iron-overload disorder (other than phlebotomy) rely upon chelating the excess iron: if there is little free Fe(II) floating around, then the body can cope with it.
Josh H
PS: You may marvel at my title, if you wish. :)


Friday, 26 August 2011

Beaten at the post!

This is the calculation I did before I saw James's post:

Using my basal metabolic rate (as calculated by: http://health.discovery.com/centers/heart/basal/basal.html ) I get a value for k of 3.6. Taking the solar energy flux to be 1370 W m^-2 and rho equal to that of water (1000 kg m^-3), I calculate r to be about 28 m. This corresponds to a mass of about 9x10^7 kg (or, if you prefer blue whale units, roughly 440 blue whales).

(Note if I use k=5 I get the same results as James. Damn you James for posting the solution before me!)

On the size of Beasty

Wendy, you missed some quality biophysics today!

The argument basically followed from Josh's earlier post, entitled `Tyranids'. I had commented on this that it might be favourable for a large creature to have a sub-divided circulatory system.

As it turns out, for a creature composed of many sub-entities, each with its own space-filling fractal transport network (as in BIPH2000), you get no decrease in metabolic rate with sub-entity number. At fixed total mass, the total metabolic rate of such a creature scales as the number of sub-entities to the power of one quarter, so B actually increases!

Our second major question is: how large can an organism (affectionately known as Beasty) be if it is (a) spherical and (b) completely powered by photosynthesis (assumed to be perfectly efficient)? Furthermore, we assume that the energy flux is equal to that observed on Earth (1.4 kW m^-2) and that Beasty has the same density (\rho) as water (at least on average).

We recall that B = kM^0.75, where k is a constant of proportionality which we assume is the same for all creatures. From the resting metabolic rate of a human (around 100 W) we find that (assuming a mass of 70 kg) k takes a value of approximately 5 W kg^-0.75 (not far off what we assumed for Beasty by other methods... around k = 10).

Thus if we set the total metabolic rate B equal to the cross-sectional area of Beasty multiplied by the energy flux (\phi), i.e. k((4/3)\pi\rho r^3)^{3/4} = \pi r^2\phi, we can rearrange for the radius of the sphere:
r = (\pi\phi/k)^{4} (3/(4\pi\rho))^{3} ~ 8m
From this we find the mass to be M ~ 2x10^6 kg, or around 2000 tonnes. This is about ten times heavier than a blue whale!

It is interesting that we get such a result based on only a few assumptions, and that it doesn't seem too(oooo) ridiculous. I encourage you to check my working!
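For anyone who does want to check the working, here is a minimal numerical sketch (my own, in Python, using only the assumptions above: B = kM^0.75, a spherical Beasty with the density of water, and perfectly efficient photosynthesis over the cross-sectional area):

import math

def beasty(k, phi, rho=1000.0):
    """Radius (m) and mass (kg) of a spherical, photosynthetic Beasty.

    Setting k * M**0.75 = pi * r**2 * phi, with M = rho * (4/3) * pi * r**3,
    and rearranging gives r = (pi*phi/k)**4 * (3/(4*pi*rho))**3.
    """
    r = (math.pi * phi / k) ** 4 * (3.0 / (4.0 * math.pi * rho)) ** 3
    M = rho * (4.0 / 3.0) * math.pi * r ** 3
    return r, M

# k = 5 W kg^-0.75 with phi = 1400 W m^-2 (this post); k = 3.6 with phi = 1370 (previous post)
for k, phi in ((5.0, 1400.0), (3.6, 1370.0)):
    r, M = beasty(k, phi)
    print(f"k = {k}: r ~ {r:.0f} m, M ~ {M:.1e} kg")
# k = 5 gives r ~ 8 m and M ~ 2x10^6 kg (about 2000 tonnes), as above.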

If Seth seems to recall me telling him something different, then he recalls correctly: I had made a mistake with exponents and estimated Beasty's maximum mass to be a meagre twenty tonnes (about one tenth of a blue whale).

Thursday, 25 August 2011

Protein binding and folding

BIPH2000 briefly covered the topic of protein folding. If I recall correctly, large proteins require the help of other proteins (referred to as chaperonins) to nudge them into their final native conformation. With this in mind, I searched for an article about these kinds of proteins, but what I found instead is the fascinating idea that, for some proteins, folding into the native conformation occurs together with binding to another protein (in this case an enzyme).

The News and Views article entitled "Proteins hunt and gather" reports that some proteins, including some involved in critical aspects of biological regulation and signal transduction, are stably folded only in complex with their specific molecular targets. An initial encounter complex, formed through weak interactions, facilitates the formation of a partially structured state, which makes a subset of the final contacts with the target. This conformation allows an efficient search for the final structure adopted by the high-affinity complex. (The article includes a figure illustrating how this occurs.)


Taken from the article:

- "Initial, weak, nonspecific interactions, either short or long range, are believed to enhance binding kinetics between well-folded proteins by constraining the diffusional search for a binding site. The results of Sugase et al indicate that this mode of action extends to binding events involving intrinsically disordered proteins. Moreover, coupled folding and binding may further restrict diffusional search within partially structured tethered intermediate states. Notably, this mechanism implies a stepwise reduction in configurational entropy as energetically favourable interactions are formed..."

Thoughts?

Source:
Eliezer, D. and A. G. Palmer (2007). "Biophysics: Proteins hunt and gather." Nature 447(7147): 920-921.


Wednesday, 24 August 2011

Diffusion in Forgetful Fluids

Nelson makes quite a lot out of the fact that thermal fluctuations in a fluid are uncorrelated with their past behaviour. This is true for a simple Newtonian fluid where the current behaviour depends only on the thermal statistics. Such a fluid is "forgetful"...

It is possible to treat Brownian motion in a fluid which has some form of memory. These are called viscoelastic fluids because their flow is determined by how much energy they store (elasticity) compared to how much they dissipate (viscosity). You don't have to look far to find viscoelastic fluids: there is a variety of them in your eyes (e.g. hyaluronic acid, vitreous humour) and up your nose (mucus!). As it happens, viscoelastic fluids abound in biology, especially when one considers that all plasma membranes have some manner of viscoelastic properties and that the inside of a cell is a complex architecture of protein fibres on many different length scales.

How do we treat diffusion in such fluids? We use a generalised Langevin equation! This is essentially Newton's second law written in a disguised fashion. Firstly, set the total force equal to ma, as usual. We can then write the net force as a sum of a thermal term, f, a potential term, -grad(U), and a dissipation term. The dissipation is represented as an integral (over all previous times) of the microscopic memory function \zeta(t-t') with the velocity at all previous times (essentially a convolution of the particle velocity with the fluid's memory).
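Written out explicitly (my notation, simply transcribing the description above), the generalised Langevin equation reads

m dv/dt = -grad(U) + f(t) - \int_{-\infty}^{t} \zeta(t-t') v(t') dt'

where f(t) is the random thermal force and \zeta(t-t') is the memory function. For a simple Newtonian fluid, \zeta(t-t') = \gamma\delta(t-t') (with \gamma the usual drag coefficient), and the integral collapses to the familiar Stokes drag term -\gamma v(t).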

Physically, the memory function describes how similar the fluid is to how it was a time t' ago i.e. it is an autocorrelation of the thermal force function. In simple fluids it is just a Dirac delta because the fluid only behaves like it is now... now!

It turns out that this concept can be extended to rotational diffusion too (centre of mass and orientation undergo a random walk) and the maths remains essentially analogous.

Anyone interested should check out some lecture slides presented by Thomas Mason (he is definitely one of the big names in this area of research!).

Econophysics

Now I knew there were physicists working on Wall Street, but econophysics... really? I was similarly amazed to find the Hyperion International Journal of Econophysics & New Economy, and that in June this year the International Conference on Econophysics (ICE) was convened (strangely, one website indicates that it was held in Greece and another in China; perhaps they didn't think Greece's current economic state made for a good backdrop).

I stumbled upon the term while investigating the applications of random walks in economics, mentioned briefly in section 4.3.2 (within which Nelson flatteringly calls investors independent biological subunits.) Initially I was sceptical that analysing economics from a physics perspective could be very useful, but after reading some discussion papers on the topic I think it probably is.

The first attempts to develop econophysics models focused on trying to find direct analogues for fundamental physical concepts such as the thermodynamic laws, but these attempts failed miserably. However, when the tools physics has developed (or borrowed from pure mathematics) to understand phenomena such as noise, chaos, non-equilibrium dynamics and so on are used with caution, they seem to be very useful. The difficulty is of course that "there are no universal rules in the behaviour of markets; any apparent stability can only be metastability." A point nicely illustrated by David Viniar, Goldman's chief financial officer: "We are seeing things that were 25-standard deviation events, several days in a row." (For context, under a Gaussian model a 25-sigma daily event should occur far less than once in the age of the universe.) Mr Viniar, I suspect your models may not be entirely accurate...
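Just to put numbers on how implausible that quote is under a Gaussian model, here is a quick calculation (a minimal sketch of my own in Python; the Gaussian assumption is, of course, exactly the thing that fails in real markets):

import math

def tail_probability(sigma):
    """One-sided Gaussian tail probability P(Z > sigma)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

for s in (5, 7, 25):
    p = tail_probability(s)
    years = float("inf") if p == 0.0 else 1.0 / (p * 250)   # ~250 trading days per year
    print(f"{s}-sigma daily event: P = {p:.2e}, expected about once every {years:.1e} years")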

The most convincing argument I came across for the legitimacy of econophysics was made by Doyne Farmer: “The typical view among social scientists is that one should focus on documenting and explaining differences. Physicists have jumped in and said, ‘Before we do that, let’s spend some energy on first trying to understand if there is any way in which everything is the same’.” An argument which I think has direct relevance to biophysics.

Monday, 22 August 2011

Neurotransmitters in a synaptic junction

In the previous week, while searching for papers for the literature review in PHYS3900, I came across an interesting article entitled "A solvable model for the diffusion and reaction of neurotransmitters in a synaptic junction" by Barreda et al.

The problem the researchers addressed was that the diffusion and reaction of the transmitter acetylcholine in neuromuscular junctions, and the diffusion and binding of calcium in synaptic clefts, had previously only been modelled with finite-difference and finite-element methods; the paper therefore presents an analytical solution to a model of the interaction of acetylcholine with the neuromuscular junction and of calcium with the synaptic cleft.

In relevance to the course, the solution involves the use of the diffusion equation

\partial C/\partial t = D \nabla^2 C

where D is the diffusion constant and C(r,t) is the concentration as a function of position and time. The paper proceeds by defining the boundary conditions and solving the equation using a Laplace transform, leading to an expression for the total flux through the sink on the postsynaptic face. (I haven't reproduced the equations used by the authors, since some of the expressions they used were not explained, or they expected you to have a strong background in what they were implementing.) Applying the parameters of the model to the flux equation, they acquired a diffusion constant ranging from 0.25 - 6 x 10^5 nm^2/s, which according to them is comparable to the solutions obtained from finite-difference or finite-element methods. By demonstrating the capability of an analytical solution, they hope that this method will provide a new avenue for modelling biochemical transport.
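For comparison, the finite-difference approach that the analytical solution is benchmarked against is conceptually very simple. Here is a toy one-dimensional explicit scheme of my own (placeholder parameters and geometry, not those used by Barreda and Zhou):

import numpy as np

# Explicit finite-difference solution of dC/dt = D * d^2C/dx^2 in 1D
D = 1.0                      # diffusion constant (arbitrary units)
N = 101                      # grid points on a unit-length domain
dx = 1.0 / (N - 1)
dt = 0.4 * dx**2 / D         # time step chosen to satisfy the stability limit dt <= dx^2/(2D)

C = np.zeros(N)
C[N // 2] = 1.0 / dx         # initial pulse of transmitter released at the centre

for _ in range(2000):
    lap = (np.roll(C, 1) - 2.0 * C + np.roll(C, -1)) / dx**2
    C = C + dt * D * lap
    C[0] = C[-1] = 0.0       # absorbing ends, a crude stand-in for receptor sinks

print("transmitter remaining in the cleft:", C.sum() * dx)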

Thoughts?


Here's the bibliography just in case you guys want to look at the solution
Barreda, J. and H.-X. Zhou (2011). "A solvable model for the diffusion and reaction of neurotransmitters in a synaptic junction." BMC Biophysics 4(1): 5.


Milky-Silky-Sturdy Skin

An article appeared in the Daily Mail online recently, which provoked some interest. The article claimed that some Dutch researchers have recently grown human skin cells which, like Superman without the Kryptonite susceptibility, are bulletproof.

Upon reading this line, I immediately envisaged John Connor euphoric, now with an effective strategy against Skynet.


The project was named "2.6g, 329m/s" after the mass and velocity of a .22 calibre long rifle bullet. Essentially, the gene for making spider silk protein was incorporated into the genome of some goats, so that when these goats produced milk, that same silk protein was found in the milk, in liquid form. Subsequently, the protein was purified and spun into strands and thread.

A mesh-type structure was constructed with this extracted silk, and the researchers managed to grow a culture of human epithelial skin cells on said matrix. A bullet was fired at this skin setup and, apparently, the silk matrix supporting the skin made it impenetrable to bullets.

The article also suggested that the objective of this project was the possibility of incorporating this same silk protein gene into the human genome. The idea is to replace the keratin in skin with the silk protein - ergo, bulletproof humans. Cue John Connor.

Upon researching, some relevant articles on the bioengineering of the fundamentals of the project were found.

It's been known for quite some time that spider silk exhibits tremendous strength: it has 5 times the tensile strength of steel, much more elasticity than nylon and is tougher than Kevlar (Scholastic News, 2002). Initially, spider genes were put into cow and hamster cells and it was found that these cells produced spider silk proteins. These proteins formed a liquid phase and were subsequently squeezed into a strand.

Some potential uses outlined in some of these past journal articles found were biodegradable fishing line, soft armor, tennis strings and applications in microsurgery (providing very fine but strong thread for suturing).

I then began to wonder why there was such frequent use of mammals to generate these proteins. It seemed illogical to go to the effort of growing cells or putting the genes into the genomes of livestock in order to make the silk, when spiders seem to do this of their own accord every day.

After reading some more, the answer became quite obvious - mass production. The idea of spider farms had been attempted, but proved unsuccessful - the spiders ate each other. Goats, on the other hand, generally don't do this. (Current Science, 2002).

It turns out that mammals and spiders produce milk and silk proteins (respectively) in relatively the same manner, hence the logic in using goats to produce the silk-protein infused milk. Mammals produce milk, and spiders produce silk proteins in skin-like cells and the products are then held in a lumen of some sort to minimise shear-stress. (Design News, 1998).

The idea of a population impervious to bullets does sound a little like science fiction, but I wonder if it would really do any good. I'm sure it's possible that, even with bulletproof skin, a hit by a bullet would still result in some sort of internal bleeding.

Thoughts?

Wendy.

Expectation value of a single particle's random walk

Chapter 4 of Nelson introduces the idea of a random walk as a way of describing (amongst other things) the motion of colloidal particles. Reading this section reminded me of a discussion I had on the topic with a chemistry lecturer last semester. The question I had was, “Given that I know the position of a colloidal particle at time t, what is the expectation value of the particle’s position at time t+Δt?” Or perhaps more precisely, “...what are the points of equal maximum expectation at time t+Δt?”

I had expected the lecturer to respond that the expectation value for a single particle will remain at the position at time t; the only thing that changes is the uncertainty. However, the answer I got was along the lines of, "The expectation value evolves according to the formula r = sqrt(kDt)" [where k is equal to 2 times the dimensionality]. Thus in 2D, the points of equal maximum expectation lie on a circle which has radius proportional to sqrt(Δt).

I'm fairly sure this is incorrect, and I also have an idea as to why [the r in that formula is really the root-mean-square displacement, sqrt(<r^2>), not the expectation value of the position], but it's still difficult to get my head around.
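A quick simulation (my own sketch, in Python) seems to support this: averaged over many independent walkers, the mean displacement stays at zero while the root-mean-square displacement grows as sqrt(2dDt).

import numpy as np

# 2D Brownian motion: compare the mean displacement with the RMS displacement
rng = np.random.default_rng(0)
D, dt, n_steps, n_walkers = 1.0, 0.01, 1000, 5000
t = n_steps * dt

# each step is Gaussian with variance 2*D*dt in each dimension
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_walkers, n_steps, 2))
r = steps.sum(axis=1)          # net displacement of each walker after time t

print("mean displacement:", r.mean(axis=0))                      # ~ (0, 0)
print("RMS displacement :", np.sqrt((r**2).sum(axis=1).mean()))  # ~ sqrt(2*d*D*t)
print("sqrt(2*d*D*t)    =", np.sqrt(2 * 2 * D * t))              # with d = 2 dimensions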

What do you think? Is there something obvious I’m missing?

Saturday, 20 August 2011

Prosecutor’s Fallacy

Following on from my previous post, I will do some ranting and raving about possibly the most common and insidious statistical mistake: namely, assuming that P(A|B) = P(B|A). A less abstract expression of this equality lies at the heart of the "Prosecutor's Fallacy".

Here is the setup:

  • The defendant’s DNA matches that found at the crime scene; a match obtained from a search of a DNA registry
  • The probability of a match between the DNA at the scene and a random person is 1 in a million (i.e. the false-positive rate is 1 in a million)
  • The defendant maintains they are innocent of the charges

Thus: “Because the story before the court is highly improbable, the defendant’s innocence must be equally improbable.” The prosecutor is effectively arguing: “the likelihood of this DNA being from any other person than the defendant is 1 in a million”.

If we let "positive DNA match" = M and "guilt" = G, then what is in question is the probability of guilt given a match: P(G|M). The prosecutor's argument is that because the chance of an erroneous match is small, P(M|not G) = 1/million [or alternatively, the probability of a match given that you are guilty is large, P(M|G) = (million - 1)/million], the probability of being guilty given a match must be large. Hence the prosecutor is arguing that P(G|M) = P(M|G).

Now being well versed in Bayes’ theorem we know this cannot be the whole story. Because P(G|M) = P(M|G)P(G)/P(M).

What the prosecutor is arguing is equivalent to claiming that: the probability that you speak English, given that you are Australian, is equal to the probability that you are Australian, given that you speak English. Which is absurd.

If one assumes that the defendant's prior probability of guilt is 1 in a million, i.e. P(G) = 1/million, and that the probability of a match given that the defendant is guilty is 1, i.e. P(M|G) = 1, and works through the math, it turns out that P(G|M) = 0.5. Not really "beyond reasonable doubt". Note, however, that if the prior is increased to 1/10,000 [perhaps the defendant went to school with the victim] then P(G|M) = 0.99.
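If you want to play with the numbers yourself, the whole calculation is only a few lines (a minimal Python sketch; the prior is, of course, the contentious input):

def p_guilt_given_match(prior, p_match_given_guilt=1.0, false_positive=1e-6):
    """Bayes' theorem: P(G|M) = P(M|G)P(G) / [P(M|G)P(G) + P(M|not G)P(not G)]."""
    p_match = p_match_given_guilt * prior + false_positive * (1.0 - prior)
    return p_match_given_guilt * prior / p_match

print(p_guilt_given_match(prior=1e-6))   # ~0.5
print(p_guilt_given_match(prior=1e-4))   # ~0.99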

So, how does one quantify P(G)?

If you're interested, there is an equally fascinating "defence attorney's fallacy"; it is possibly the reason O. J. Simpson is a free man.

-------------------------

Adapted from MATH3104 lecture notes by Prof Geoff Goodhill