Saturday, 12 November 2011

The dreaded question from last year...

Hi all. I promised myself that I would look at this question again sometime this semester and prove to myself that I could do it... Of course, I am talking about the ratchet questions from last year's BIPH2000 exam.

The first part of the question goes like this: if I have a G-ratchet, draw the free energy landscape. This will be one-dimensional. Since the rod is so long and has only one degree of freedom (the axial direction) the entropy is constant. If we choose to measure energies such that G=0 when none of the springs is depressed we will find that the graph looks just like Nelson's plot of the potential.

The next question asks for the steady-state probability distribution. Consider a single step of length L in either direction: neither gives a net free energy change, no matter what the starting point was. Thus there is no tendency to move in either direction more often than the other, which is the kinetic interpretation of equilibrium. We know that this means the distribution is just the Boltzmann distribution.

Since the rod is semi-infinite the probability of being found in any position approaches zero in the equilibrium state. However, for each length [0,L] we can write
P(x) = exp(-U(x)/kT) / Z_L, where Z_L = the integral of exp(-U(x')/kT) dx' over one period [0,L].
Now if the width of the spring-loaded ratchet head is l, and the maximum (compressed) energy is epsilon, the potential takes the form
U(x) = epsilon*x/l for 0 <= x <= l, and U(x) = 0 for l < x <= L, repeated with period L.
Since this is equilibrium we have the expectation that j = 0.
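As a quick sanity check, here is a numerical sketch (with a smooth potential standing in for the sawtooth, arbitrary units, and assumed parameter values) confirming that the Boltzmann profile carries zero Smoluchowski flux:

```python
import numpy as np

# Work in units where kT = D = 1 and the ratchet period L = 1 (all assumed).
L, eps = 1.0, 5.0                      # period and barrier height (a few kT)
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]
U = eps * np.sin(np.pi * x / L) ** 2   # smooth stand-in for the sawtooth potential

# Boltzmann distribution, normalised over one period:
p = np.exp(-U)
p /= p.sum() * dx

# Smoluchowski flux j = -D (dp/dx + p dU/dx / kT), with D = kT = 1:
j = -(np.gradient(p, x, edge_order=2) + p * np.gradient(U, x, edge_order=2))

print(abs(j).max())   # essentially zero: the Boltzmann profile carries no flux
```

The same cancellation holds for any potential, which is just the statement that exp(-U/kT) is the zero-flux steady state of the Smoluchowski equation.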

The final part of the question asks what would happen if the ratchet were tightly coupled to an ion transporter. Since this is tight coupling the rod is in equilibrium when the ions are, and since we have already seen that the rod alone has no net driving force we need only consider the ions. (By the way, the free energy landscape is again one-dimensional, because of the tight coupling, and drops by some amount with every rightward step).

So, to halt the rod we just need to apply the Nernst voltage. If this is the voltage across the membrane (there are no counterions mentioned, so no need to worry about depletion layers or screening) then we find the field to be
E = V_Nernst/t = (kT/(ze*t)) * ln(c1/c2),
(t is the thickness of the membrane) and this must be directed from left to right in order to stop the flux of ions.

Any takers? I get E~10^7 V/m.
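For anyone who wants to check the order of magnitude, here is the arithmetic (assuming a monovalent ion, a tenfold concentration ratio and a membrane thickness of about 5 nm; all three are assumptions, since the question doesn't pin them down):

```python
import math

kT = 1.38e-23 * 298          # thermal energy at room temperature (J)
e = 1.602e-19                # elementary charge (C)
z = 1                        # ion valence (assumed monovalent)
c_ratio = 10.0               # concentration ratio across the membrane (assumed)
t = 5e-9                     # membrane thickness, ~5 nm (assumed)

V_nernst = (kT / (z * e)) * math.log(c_ratio)   # Nernst voltage (V)
E = V_nernst / t                                # field across the membrane (V/m)

print(f"V_Nernst = {V_nernst*1e3:.0f} mV")
print(f"E = {E:.2e} V/m")
```

With these numbers V_Nernst is about 59 mV and E comes out a little over 10^7 V/m, consistent with the estimate above.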

Friday, 11 November 2011

Reflections...

For those who are interested: my reflections on BIPH3001 (the assignment from last week).
Seth: I noticed a mistake in equation four and just had to fix it (there was a missing factor of mu)!

Policy for late assignments

No more assignments will be accepted past 9am Monday morning. I will be sending out emails to let people know if they still have outstanding work.

Thursday, 10 November 2011

The Mystical Transfer Matrix

Hi all. The blog has been very quiet (I hope exams are going well for you all).

Anyhow, I decided I'd sit down and figure out what on earth the transfer matrix method was all about (this is given on pg 360 of Nelson). I had previously glossed over this portion of the text because I didn't understand at all what the transfer matrix did (or why it's even called the transfer matrix) and hadn't the time to ponder it deeply.

So... here's my take on things, with some assistance.

Suppose I have a single link in my polymer chain (not a very exciting polymer!). By Nelson's notation (each link contributes a factor exp(alpha*sigma), with sigma = +1 for 'up' and -1 for 'down'), my partition function is
Z_1 = exp(alpha) + exp(-alpha).
Now, let's write this as
Z_1 = (1 1) (exp(alpha), exp(-alpha))^T,
just for the fun of it (this will be useful in a second!).

Suppose I add a second monomer, so my `polymer' is now a dimer. There are four ways this can happen, depending on the states of the existing monomer and the new one (call them 'up' and 'down', or u and d). Thus I can have uu, ud, du and dd. Each of these options contributes to the partition function of the dimer. Since two options go with having the first monomer 'up' (uu and ud), and two with 'down' (du and dd) I can write a matrix equation, which for now I'll represent as
Z_2 = (1 1) T (exp(alpha), exp(-alpha))^T.
This needs filling in. Let's do the uu term. If the first monomer is 'up' and I add another 'up' then the term needs to be multiplied by exp(alpha + gamma): a factor exp(alpha) for the new monomer and exp(gamma) for the aligned nearest-neighbour pair (gamma being the cooperativity energy per pair, in units of kT). Repeating the process for the other combinations yields
T = ( exp(alpha+gamma)   exp(alpha-gamma)
      exp(-alpha-gamma)  exp(-alpha+gamma) ).
We call the matrix in the middle the `transfer matrix', T, but it would probably be more sensible to call it the `extension matrix', since it accounts for all possible ways in which the system may be extended by one monomer.

Now, suppose I take my dimer and add another monomer, to make a trimer. Since the interactions are local (they only depend on the nearest-neighbours) there are again only `four' ways in which I can do this (the rest of the possible configurations are already counted in the partition function of the dimer). Thus I simply get
Z_3 = (1 1) T^2 (exp(alpha), exp(-alpha))^T.
It's now quite simple to see that for a polymer of N residues the partition function is
Z_N = (1 1) T^(N-1) (exp(alpha), exp(-alpha))^T,
which is a really neat result.

To summarise: the transfer (extension) matrix finds the terms which must be added to the partition function every time a new monomer is added, so the polymer partition function may be built up by repeated application of T. Enjoy!
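To convince yourself the bookkeeping really works, here is a small numerical sketch (with arbitrary, assumed values of alpha and gamma, and the convention that each monomer contributes a factor exp(alpha*sigma) and each neighbouring pair exp(gamma*sigma*sigma')): building Z_N by repeated application of T agrees with a brute-force sum over all 2^N configurations.

```python
import itertools
import math

import numpy as np

alpha, gamma = 0.3, 0.7   # arbitrary test values (assumed, not from Nelson)
N = 6                     # number of monomers

# Brute force: sum the Boltzmann weight over all 2^N configurations.
Z_brute = 0.0
for spins in itertools.product([1, -1], repeat=N):
    weight = alpha * sum(spins) + gamma * sum(
        spins[i] * spins[i + 1] for i in range(N - 1)
    )
    Z_brute += math.exp(weight)

# Transfer matrix: Z_N = (1 1) T^(N-1) v, with v = (e^alpha, e^-alpha).
T = np.array([
    [math.exp(alpha + gamma),  math.exp(alpha - gamma)],
    [math.exp(-alpha - gamma), math.exp(-alpha + gamma)],
])
v = np.array([math.exp(alpha), math.exp(-alpha)])
Z_transfer = np.ones(2) @ np.linalg.matrix_power(T, N - 1) @ v

print(Z_brute, Z_transfer)   # the two agree
```

The brute-force sum costs 2^N terms while the matrix product costs N matrix multiplications, which is the whole point of the method.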

By the way, the equations here were rendered with an online LaTeX interpreter which outputs images: very useful and fast!

Wednesday, 9 November 2011

Microscopic Model of Osmosis

Hi all. BIPH study is now underway...
I was re-reading the section in Nelson about osmotic pressure as the rectification of Brownian motion (remember all of us walked away from that chapter feeling a little nonplussed?) and decided that I'd like to understand what on earth he was going on about.

My biggest issue was the way that osmotic flow was ascribed to entrainment of fluid allowing solvent to be 'sucked' across the membrane: surely the situation is symmetrical, because the particles also entrain fluid as they head toward the membrane (therefore they presumably pull fluid toward the membrane just as fast as they suck it away).

I found a very nice, readable and detailed paper which goes through the same sort of reasoning as Nelson, but I think a bit more maths helps to understand it a lot better (this is also a good reference for basic low-Re fluid mechanics).

The basic ideas are all the same as Nelson, but there was one line that really grabbed me;
"... the membrane acts on the fluid using the particles as its agent[s]."
The argument goes like so: the fluid is responsible for excitation of Brownian motion (creation of fluctuations) and the dissipation of energy, so every force which acts on the particle also induces fluid motion. At the membrane there is (on average) only a net force on the fluid when the particles bounce off the membrane, so they really do 'suck' solvent through the membrane by rectifying the Brownian motion of the fluid, using the particles as agents!

Saturday, 5 November 2011

Neglecting Inertia

I was thinking about the differences between classical and quantum stat mech...

When we talk about a bunch of particles diffusing under the influence of an external force we use the Smoluchowski equation, with the force corresponding to the negative gradient of some potential energy. We then proceed to say that the concentration at equilibrium is proportional to exp(-U/kT). I had always thought that by doing this we were essentially assuming that the particles have negligible inertia i.e. we are operating in the low-Re regime (since I would have thought that the concentration would be proportional to exp(-(K+U)/kT) if this weren't the case, K being the kinetic energy).

It turns out that I was wrong:
concentration = integral of the distribution function over all momenta,
so the momentum is `integrated out', contributing only to the constant sitting out the front of exp(-U/kT).
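Here is a small numerical sketch of that marginalisation (arbitrary units with kT = m = 1, and a harmonic potential chosen purely for illustration): integrating the full phase-space Boltzmann factor over momentum leaves a prefactor that does not depend on position.

```python
import numpy as np

kT, m = 1.0, 1.0                      # work in units where kT = m = 1 (assumed)
x = np.linspace(-3.0, 3.0, 201)       # positions
p = np.linspace(-10.0, 10.0, 2001)    # momenta, wide enough to capture the Gaussian
U = 0.5 * x**2                        # any external potential will do; harmonic here

# Full phase-space Boltzmann factor f(x, p) ~ exp(-(K + U)/kT):
f = np.exp(-(p[None, :]**2 / (2 * m) + U[:, None]) / kT)

# Concentration = distribution integrated over all momenta:
c = f.sum(axis=1) * (p[1] - p[0])

# The momentum integral contributes only an x-independent prefactor:
ratio = c / np.exp(-U / kT)
print(ratio.std() / ratio.mean())   # ~0: c(x) is proportional to exp(-U/kT)
```

The kinetic term factorises out of the integral because it does not depend on x, which is exactly the "integrated out" statement above.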

For those doing statmech: why don't I need a degeneracy function i.e. the density of states?
The answer lies in the interpretation;
N = integral of g*f over energies = integral of concentration over space = integral of f over phase space;
so the degeneracy is automatically accounted for by the fact that one integrates over a volume in phase space, rather than over a line (in semi-classical mechanics this is where the density of states, g, comes from in the first place: it is the volume of a thin slice of phase space with energy e to e+de, divided by the number of uncertainty-limited cells that fit in that slice).

It's worth noting that we will still be in the low-Re regime anyway: if the force-velocity response is linear it implies the drift velocity is less than or comparable to the thermal velocity, which means that for most things (proteins in water, etc.) Re will be small.

Good luck with exams!

Friday, 4 November 2011

Announcement courtesy of Seth

To the people who haven't finished marking the assignments (including yours truly): we should submit the marks before the exam period begins, which means before Monday. The same applies to our end-of-course summary, blog comments and so on, which were meant to be due today.

Here's his reply to me:

"The sooner the better, but I absolutely want it before the exam period starts.....anything submitted after the beginning of the exam period will not be counted."

Good luck everyone! :D

Wednesday, 2 November 2011

Chemical Kinetics as Maximum Entropy Inference


At some point in the semester, I attempted to show that an exponential decay formula (e.g. for the kinetics of unimolecular decay) could be motivated by MaxEnt reasoning, and quickly got confused trying to prove it to myself.

I have found that this is part of Chapter 15 in the book that I used to teach BIPH2000 this year. This is the first time I have seen this in an undergraduate textbook (or indeed any).

Here is how you do it. You are doing maximum entropy over a sample space of trajectories. Each trajectory is associated with a probability density p(t), the probability per unit time that the reaction takes place at time t. The constraints are given as integrals over time. You find the probability distribution that maximizes the entropy under the following constraints:

1. Normalization (the integral of p(t) over all time is one)
2. Average lifetime (the integral of t*p(t) is equal to the lifetime, call it tau)

The result is a probability distribution that is exponentially decaying in time with a time constant given by the Lagrange multiplier in the optimization.
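Spelled out (in my own notation, which may differ from the book's), the variational step is:

```latex
\max_{p}\; S[p] = -\int_0^\infty p(t)\,\ln p(t)\,dt
\quad \text{subject to} \quad
\int_0^\infty p\,dt = 1, \qquad \int_0^\infty t\,p\,dt = \tau.

% Introduce multipliers \lambda_0, \lambda_1 and demand stationarity:
\frac{\delta}{\delta p}\bigl[-p\ln p - \lambda_0\,p - \lambda_1\,t\,p\bigr]
  = -\ln p - 1 - \lambda_0 - \lambda_1 t = 0
\;\Longrightarrow\;
p(t) = e^{-(1+\lambda_0)}\, e^{-\lambda_1 t}.

% The two constraints then fix the multipliers, giving
p(t) = \frac{1}{\tau}\, e^{-t/\tau}, \qquad \lambda_1 = \frac{1}{\tau}.
```

So the exponential really is the least-biased distribution consistent with knowing only the mean lifetime.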

The idea of doing MaxEnt over trajectories is ascribed to Jaynes (he called it "maximum caliber"): Jaynes, E. T. (1980). The minimum entropy production principle. Annual Review of Physical Chemistry, 31, 579–601.

It has recently been revisited by e.g. Dill and Stock:

Ghosh, K., Dill, K., Inamdar, M., Seitaridou, E., & Phillips, R. (2006). Teaching the principles of statistical dynamics. American Journal of Physics, 74(2), 123–133.

Otten, M., & Stock, G. (2010). Maximum caliber inference of nonequilibrium processes. The Journal of Chemical Physics, 133(3), 034119.

Stock, G., Ghosh, K., & Dill, K. A. (2008). Maximum Caliber: A variational approach applied to two-state dynamics. The Journal of Chemical Physics, 128(19), 194102. doi:10.1063/1.2918345


Tuesday, 1 November 2011

Everything I've ever learnt is broken..?

Hmmm....

So here's my gripe from earlier today. We all know that the canonical ensemble has energies assigned according to the Boltzmann distribution. We also know that this is derived by maximising the entropy subject to the constraint of fixed average energy... but isn't the canonical distribution the one which minimises the free energy?

The trick is this: to derive the Boltzmann distribution we consider a subsystem, treating the rest of the (much larger) system as a reservoir at fixed total energy, and maximise the total entropy. In this way we get the distribution which minimises the free energy of any given subsystem: voila! This also makes it obvious why the microcanonical and canonical ensembles must become equivalent as the size of the system becomes larger: neat!

Now the apparent contradiction is gone, and I understand the ensembles a little bit better: everything isn't broken.
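In symbols: expanding the reservoir entropy to first order in the subsystem's mean energy,

```latex
S_{\mathrm{tot}}
  = S_{\mathrm{sys}} + S_{\mathrm{res}}(E_{\mathrm{tot}} - \langle E \rangle)
  \approx S_{\mathrm{res}}(E_{\mathrm{tot}}) + S_{\mathrm{sys}}
    - \frac{\langle E \rangle}{T},
\qquad \frac{1}{T} \equiv \frac{\partial S_{\mathrm{res}}}{\partial E}.

% Up to the constant first term,
S_{\mathrm{tot}} = \mathrm{const}
  - \frac{1}{T}\bigl( \langle E \rangle - T S_{\mathrm{sys}} \bigr)
  = \mathrm{const} - \frac{F}{T},

% so maximising the total entropy is exactly minimising the subsystem's
% free energy F = \langle E \rangle - T S_{\mathrm{sys}}.
```

The correction terms neglected in the expansion vanish as the reservoir grows, which is the microcanonical-canonical equivalence mentioned above.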

Melanoma

Here in Australia, a simple sunburn is often the least of our worries. Australia's long hours of sunshine mean we record some of the highest numbers of melanoma cases in the world, making it the fourth most prevalent cancer in this country. Thus it's fitting that an Australian team is part of a new study that has successfully identified four key genetic markers that increase the likelihood of developing the skin cancer responsible for over 1000 deaths a year.

The study was conducted by a team from the Queensland Institute of Medical Research, along with a group from the University of Leeds, whose task basically involved surveying 2168 Australians of European descent (for melanoma is far more prevalent in this ethnic group than any other), and then sorting through half a million genetic markers known as single nucleotide polymorphisms, or SNPs. These SNPs flag a difference in a single nucleotide of DNA, and can also alter the function of certain strands of DNA, hence their importance in a lot of disease research.

The lead scientist on the Australian study mentioned that crunching through that many pieces of very specific genetic material came with its own share of problems: with about half a million SNPs being examined, and so many tests being run, lots of SNPs look significant purely by chance.

The other research group also found an SNP that may be part of a genetic string the body uses to repair damaged DNA. Given that melanomas often stem from damaged genetic code and tissue corruption, genes like these may be crucial in working out how to fight cancerous growths like melanomas.

Source:

http://www.abc.net.au/science/articles/2011/10/10/3334971.htm

Ultrasound: key to healing some complex problems

Doctors in Scotland are using ultrasound to help patients with severe bone fractures, and finding it speeds recovery time by more than one-third. The ultrasonic pulses induce cell vibration, which doctors say stimulates bone regeneration and healing.

The technology is similar to the type used on pregnant women to image growing fetuses, but it uses a different sound frequency and a different pulse rate. One of the patients received the treatment for a shattered ankle he sustained after falling six metres.

A water-based gel is applied to a transducer, which then goes inside the strap and stays there for 20 minutes. The patient doesn't feel anything, but the sound waves penetrate tissue to stimulate cellular activity.

Typically, this type of injury would take six to 12 months to heal properly - if it does at all - but his foot healed after four months. Evidence suggests the ultrasound speeds up healing by about 40 percent.

Scottish doctors first developed ultrasound as a diagnostic tool in the 1950s, using adapted sonar technology from Glasgow's Western Infirmary. It is now used for a wide range of diagnostic and therapeutic purposes (healing punctured lungs, and potentially breaking up blood clots, among other uses), but this group is the first I've seen to use it for bone regeneration.

Apparently the treatment remains fairly expensive, so for now it's only being used in complex fractures. But as ultrasound devices get smaller and cheaper, future fractures may be much faster to heal.


Source:

http://www.bbc.co.uk/news/uk-scotland-glasgow-west-15262297

New Instrument for Surgery

A new kind of medical device, aimed at reducing incision-site infections that result from surgical procedures, has been created. But rather than battling microorganisms with pharmaceutical cocktails or some kind of post-surgical treatment, Nimbic Systems' Air Barrier System (ABS) keeps surgical sites free of bacteria and other bugs by creating a cocoon of purified air around the incision site for the duration of the surgery.

Infections picked up in the operating room can be as serious as the condition being treated by the surgery itself. Aside from threatening patient health, they are also costly. For example, a bad Staphylococcus infection can drive treatment costs into the six figures per patient. ABS aims to curb those infections by ensuring that pathogens never get close to a potentially susceptible site during surgery.

It's a relatively simple setup: a reusable blower unit feeds filtered air into a sterile, disposable nozzle that is fixed in place adjacent to the incision. ABS then keeps a constant flow of purified air streaming onto the site, creating a kind of air cushion that keeps whatever microbes might be lurking around the O.R. at bay. This is particularly critical for long-duration surgeries, like procedures that implant prostheses or things like spinal or vascular operations.

As such, ABS got its first approval for use in hip replacement surgeries, where the patient's body is open to the ambient air for an extended period. In trials, ABS proved to reduce the presence of bacteria and other micro-stuff by more than 84 percent. Trials later this year will explore whether ABS can achieve similar results for spinal and femoral operations.


Source:

http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=121878&org=NSF

Eww! Worms!

A research team from the University of Melbourne has completed a study into the genome of the giant intestinal roundworm, a parasite that kills several thousand people a year, and has infected about a sixth of the world's population.

The study analysed the genome of a related worm, the pig roundworm, to learn more about the roundworm that infects humans, which commonly grows up to 30 cm in length. By collating genetic data, the team hoped to find out more about the way the worm works, and identify easy and cheap methods to fight it.

From the genome sequence, the group has identified five high priority drug targets that are likely to be relevant for many other parasitic worms. This may lead to new treatments that are urgently needed, for genome-guided drug target discovery is ideal for identifying targets that selectively kill the parasite and not the host.

The disease is rare in developed Western nations, but is extremely prevalent in areas such as South-East Asia, and particularly Africa.

Source:

http://newsroom.melbourne.edu/news/n-672


DNA and Childhood

Findings published today in the International Journal of Epidemiology suggest that socio-economic status and living standards early in life may actually cause changes in the DNA that are carried through life, regardless of how one's living conditions change along the way. Some adult diseases--type 2 diabetes, coronary heart disease, etc.--have been linked to socio-economic disadvantage in early life, but why and how is not really well understood.

The study group's sample size is admittedly small, but what they found was significant. Looking at 40 subjects in the UK who are participating in an ongoing study that has documented many aspects of their lives, researchers examined differences in gene methylation. Methylation is an epigenetic modification of one's DNA that changes a gene's activity, generally reducing it. Various factors can influence methylation, including environmental conditions.

In their sample, the researchers looked at DNA taken from the subjects at age 45. They chose subjects that had come from either very high or very low standards of living, and they looked at differences in DNA methylation across some 20,000 genes. They found that 1,252 methylation differences were associated with socio-economic circumstances in early life while just 545 were associated with socio-economic circumstances in adulthood, suggesting that where you come from really does make an impact on the very fiber of your biological being.

Moreover, the methylation patterns were clustered together in large swaths of DNA, suggesting an epigenetic pattern linked to humans’ early environments. If some diseases are linked to a person’s early upbringing, and we can see where there are changes happening in the DNA during early life, then we can narrow the window on where in the genome things like coronary heart disease and diabetes take root. Future research could peg where certain methylation differences are associated with specific diseases, then target those areas with drugs or other treatments.

Source:

http://scienceblog.com/48584/your-dna-may-carry-a-%E2%80%98memory%E2%80%99-of-your-living-conditions-in-childhood/


Josh's Slides for BIPH Talk

Great minds think alike... here are my slides (in convenient PDF format!) from our enlightening BIPH talks. Please, enjoy their bland, metal appearance, and marvel at the only colour picture in the whole paper (that of cancerous mice)!

Link: https://docs.google.com/present/view?id=dcpscm3n_0k8zbjrgk
If that one has dodgy bits (not sure if the uploading is fully working...) here is a PDF version that is a little larger:
Link: https://docs.google.com/open?id=0BxWocosCk1vyMTUyNTQ0YzYtOGUxOC00YmY3LTk3MDEtYjc1OTBhNzAxMzY1

Josh H

Borg and Other Hive Minds (OHMs)

OHMs doesn't only stand for 'On Her Majesty's Service' any more! As I'm sure most of you are aware, many science fiction stories feature some kind of 'hive mind'. This is generally described as being a kind of 'gestalt psychic field', controlled by some locus (whether an individual or the sum of the whole) and exerting some level of control over the cognition of its individual parts. Whilst this is a cool-yet-disturbing idea, do you think it is physically possible? If some kind of 'psychic field' existed, then this would be a simple matter, but is there another physical phenomenon that explains this? One idea (besides another 'psychic field') is a possible usage of electromagnetic fields that link brains/bodies together.

This could partially explain close proximity actions, such as touching, having greater influences on a relationship. This would be mixed with other stimuli that only rely upon perception; so perhaps there is some kind of 'psychic field' built into (community-living/loving) humans, such that certain stimuli have an unconscious effect on us, causing us to act in a society-like manner? Science fiction has once again explored this idea for us; an example is Isaac Asimov's Foundation series (2nd Foundation, for those who know).

This may not be such a silly idea, either. In a National Geographic (last year sometime, I think), there was an article about domestication. It turns out that domestication has some kind of genetic basis: the researchers bred Russian foxes (silver foxes, I believe) to be either domesticated or anti-domesticated (the sort that start to growl and scratch when a human just walks into a building) simply by selecting pups that acted in a friendly or non-friendly manner. The domesticated foxes ended up being like doggies or kitties in friendliness! (Word choice deliberate). They also noticed phenotypic and behavioural changes (besides friendliness) such as spotted coats and curly tails. Thus, some kind of genetic change is happening to domesticated animals that makes them domesticated: even after some generations without human contact, descendants of domesticated animals are still domesticatable, whilst offspring of undomesticated animals aren't any friendlier when raised by domesticated parents. They are not yet sure if these genetic changes are a result of selecting for desirable qualities (e.g. cuteness selection drives tail curling) or whether the phenotypic changes are due to selecting for behavioural traits (are coat spots genetically linked to friendly behaviour? And what does this mean?).

Another question: are humans the ones being domesticated to animals? Do our pets know just which actions please us and do they use these traits to gain a survival advantage?

Perhaps the greatest question, however, is about us: to what extent have we domesticated ourselves to form civilisation?

And is this 'domestication' essential to form surviving communities?
Josh Harbort, your Locus.

4-Brain Theory: Resistance is Jungian (or futile?)

If one searches for Jungian psychological theory and neuroscience, one will inevitably come across the four-brain neuro-psychology theory.

Essentially, what this theory says is that the cerebrum primarily functions in a four-lobed set-up, two lobes in front and two at the back. Communication happens between the front-left and rear-left quadrants (et simile for the right side) and between the front-left and front-right quadrants (also similarly for the rear half). It is hypothesised that communication between the diagonally opposite lobes (e.g. front right to rear left) is less efficient/slower because the signals must first travel from the front right to the rear right or front left before travelling to the rear left lobe.

Each lobe also has a specialty (although I do forget exactly what each bit does... it will arise with studious application of Google and Wikipedia...) and humans, apparently, show differing capacities for each region. For example, one person may have a dominant front-right lobe. If this were the case, and if (as a completely arbitrary example) the front-right lobe controlled logical forethought and mathematical ability, that person would exhibit great strengths in those areas, but would be weakest in abilities associated with the rear-left lobe (e.g. empathy and subconscious inhibitions, as an arbitrary example). Some people might have strengths in two areas (e.g. right side, rear), which are generally adjacent, whilst others may have strengths in three adjacent areas (with the dominant region usually being the central region). A few rare cases are those who have strengths in all four regions: these people (from psychological studies) tend to be good at running large organisations but can be indecisive and find decision-making hard. So, whilst some people have particular strengths in some areas, each combination of lobe strengths has its own good and poor points, none being 'better' than any other. This is an interesting idea which illustrates that people have different intelligences rather than inferior/superior intelligence, a view which I believe is much more realistic.

The punchline of this idea is that when people are forced into using their 'inferior' brain regions, they operate at a much reduced efficiency and can actually suffer some psychological harm. The hypothesised reason for this is that in the stronger lobes, signals can travel with less resistance than in the weaker lobes. This has the effect of making some lobes more efficient than others, resulting in the greater observed cognitive strength of those lobes.

Introversion and extroversion are not really related to this theory (although one of the lobe-strengths has similar effects). People are naturally aroused (= awake) to different degrees, based on their brains and on the time of day, amount of sleep, etc. The main theory behind intro- and extroversion is that introverted people are naturally very aroused and so take in information more easily. They thus attempt to find areas with less stimulus, so that they are not overwhelmed. Extroverted people are less aroused and so seek out more stimulus to wake themselves up; thus they function well in loud environments because they are awake enough to perform their tasks efficiently. People naturally lie on a spectrum between these two, but you can become more 'extroverted' if you are tired, for example, and require arousal.

I found this quite an interesting theory that does explain many psychologically observed phenomena. I was slightly skeptical about the 'brain resistance' idea, however: the way it was written, mostly, but also I'm not sure this is a feasible way to look at brain structure. What does everyone else think? Also, what does everyone think of intro/extroversion? This is still an active area of research, so interesting ideas are sure to keep coming.

Josh Harbort

My Project: PHYS3900

My project for the Physics and Biophysics capstone course was on modelling the cell membranes of mammal cells and fungal cells, and comparing them.

The rationale follows (with a story!). Certain drugs show excellent specificity for killing fungal cells and not mammal cells. While this is easily understood upon cursory investigation (after all, fungi are markedly different to mammals in appearence), it does not hold up under a more detailed examination: essentially, mammal and fungal cells are almost identical in composition! However, there is one major component of each cell type that differs from the other: the type of sterol included in the bilayer membrane. Mammal cell membranes (as you probably know) have cholesterol, with an abundance of around 30-50% mol (they are a major component!). Fungal cells have a similar abundance of sterol, but of a different kind: they include ergosterol rather than cholesterol.

Punchline time: it is hypothesised that the small differences between cholesterol and ergosterol are enough that anti-fungal drugs kill the tinea on your feet (if you're unfortunate enough to have tinea) but not you. What makes this even more surprising is that the part that differs most, the methyl group on the tail (those extra double bonds don't really make much difference between the two sterols), is buried in the middle of the bilayer. See the picture below; ergosterol is the one with red bits and the differences from cholesterol are shown in red. So how does it affect the interactions of drug molecules with the surface of the membrane bilayer?
Figure 1: Comparison of cholesterol (left) with ergosterol (right). Differences are shown in red on ergosterol. Note that formation of additional double bonds means that a hydrogen atom has been removed from the involved carbon atoms.

This question is hard to answer with traditional experimental techniques, such as NMR or X-ray crystallography, because these techniques either cannot show an atomic level of detail (NMR just takes an average of everything in the sample) or cannot reliably be used to image flexible biological membranes (lipid bilayers are notoriously hard to crystallise, let alone a whole biological membrane!). This is where my project comes in: we can use molecular dynamics (MD) simulations to atomistically model complex lipid bilayers and get the atomic-scale picture we need to discern the actual effects of the different sterols on the bilayer.
MD simulations (Martin and Wayne have both done them!) essentially use Newtonian mechanics (and a bunch of other assumptions, like mean field approximations) to model bonds and non-bonded interactions. This method requires many other approximations to ensure the observed nature of chemical species is seen in the simulations; the absence of quantum mechanical constraints limits their accuracy in this way. Nonetheless, if the simulations work, they are an excellent method for examining the physical properties of a system, and can provide much information that cannot be obtained from physical model systems.

In my project, I simulated a bilayer composed of a 1:1 mixture of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and cholesterol, representing animal cells (my supervisor actually performed this simulation, but I analysed it), and a bilayer composed of 1:1 POPC and ergosterol, representing fungal cells. Unfortunately, if one has an incorrect topology for a molecule, the simulations crash (usually due to all of the atoms in the molecule flying off in opposite directions and driving the energies to very silly heights), and this is what happened with my ergosterol simulation. This means that Professor Mark's group has some work to do to improve the model for ergosterol; it turns out that it is a lot less similar to cholesterol in terms of topology than we would think. Cholesterol was seen to have a big effect on the bilayer structure, though, including thickening it and allowing more access to the methyl chains of POPC.

So, although my original aim remains unfulfilled, I was able to participate in active research!
Josh Harbort

My PHYS3900 Feature Article—Antibiotic Crisis: Look to the... Drugs?

Antibiotic Crisis. The very words bring terror to thousands of medical practitioners, leading to their waking nightmares at work.... This is the piece of assessment I handed in for the literature review/feature article for the capstone course. It seemed interesting at the time (which was the dying minutes of 3 days before it was due...) and hopefully, it is not too wishy-washy. I suppose our biophysics class can tell if it's biophysics-y enough.... :)

Here's the link: https://docs.google.com/open?id=13eErNnCs6aiclQlKkWW2_PBNY516vGsto9Bz1104wjyHqvluE0FL5FT2WNqq

Josh H.

Vulcan Physiology: Copper and Logic based

Most of you will know of the copper-based blood of Vulcans. This is a relatively common oxygen carrier amongst certain forms of life on Earth. However, none of these life forms is humanoid! Obviously, were we to interact with copper-blooded aliens, we would face some cultural difficulties. For example, would copper aliens require prawns (and other copper-blooded food) in much greater quantities than we require, since they need more copper than iron? And, assuming that they have the same mechanism as copper-blooded organisms on Earth, would they be able to cope with the large amounts of iron in our food? I suppose that it is likely that they would have some iron-based proteins, considering how common iron is in the universe.

Also, would their blood be green? If they were the same as organisms on Earth, I suspect it would be reminiscent of 'ichor' (that is, Greek god blood) because it could be clear when (de)oxygenated (can't remember which) and purple otherwise, or it could be purple and blue as in other organisms.
Would it be possible for humans to develop into copper-blooded organisms? I would suggest that it would require a lot of transgenic stuff, and I don't know how an embryo would be grown (since iron-blooded mothers would be incompatible with copper-based children). Would this have a beneficial effect on our metabolism?

And, the favourite of the Original Star Trek Series: Spock. Could copper-blooded and iron-blooded humanoids actually mate successfully?

Josh Harbort

Cellulase in Humans

As we should all be aware, humans can't digest wood, nor can any mammal (as far as I know) without the aid of symbiotic gut bacteria. Ruminants get around this by having four stomach chambers, one of which houses all of the cellulose-digesting bacteria. Other herbivorous mammals (e.g. horses, pigs) instead keep their cellulase bacteria in a dedicated chamber of the hindgut, although I can't recall their specific digestion processes at the moment. The reason cellulose can't be digested by our starch-digesting enzymes is that its glucose units are joined by β(1→4) linkages, rather than the α(1→4) linkages of starch, so a different enzyme (cellulase) is required. One problem with cellulase bacteria is that they tend to eat all the good stuff (e.g. the glucose from the cellulose) and leave behind only their by-products (this is how four-chambered ruminants live). So, one must wonder: would it be beneficial for mammals to produce cellulase themselves? The enzyme is not substantially different in structure from the enzymes that digest starch, and I would be fairly certain that mammal cells could easily be transformed to incorporate it into the genome.

One problem is that rather tough teeth are required for chewing plant matter, but many mammals already possess these (and humans could just drink pulped plant/eat soft plants like we already do).
Another reason that mammals might not have intrinsic cellulase is the effect of non-soluble fibre: we all have heard that it is good for our digestive tract. But perhaps another solution could be found? For humans, at least, increased water in the diet would alleviate some of the problems, suggesting that tropical humans would be able to cope with the loss of fibre (also keeping in mind that it is likely that not all fibre would be broken down by the body, since there is so much cellulose: we would need bigger guts for it to take longer to leave the body, and then a new gut might not require fibre). How do other animals cope with the fibre problem?

If humans could digest cellulose, would that substantially change our energy levels? Perhaps it would result in sugar overloads.

Lactase is another of these sugar enzymes that most adult mammals do not possess (except for many adult humans). If humans can have lactase, why can we not have cellulase too? Does anyone know of any reasons why adult mammals lose their lactases?

Also, if humans could express cellulase, where would the best place to express it be? Obviously, we don't want bacteria eating all of the released glucose. Presumably, if it were expressed in the same region where glucose is taken up, we could avoid this problem. Thus, we could test the effect of including cellulase by using tablets... a rather non-invasive treatment (so long as they dissolve at the right spot). This could be useful as a short-term ability, for soldiers in the field, for example.

Josh Harbort

Space Ork Physiology

Another race from Warhammer, the Orks are described as being an integrated symbiotic organism, mixing 'genetic traits of both animal and fungal life forms'. According to the background, Orks (which have green skin) have algae that flow through their circulatory system and are part of their digestive tract: apparently, the action of the algae in repairing damaged tissue makes the Orks quite resilient. Orks are also reputed to reproduce via spores. These spores develop into cocoons which then hatch different organisms closely related to Orks, and which organism develops depends on the conditions. Some of the other organisms are squigs (dog-sized, aggressive and very bitey!), gretchin (smaller, weaker versions of Orks), snotlings (smaller, weaker versions of gretchin) or 'normal' fungi.

So, would this system be feasibly possible? I think that the release of spores is sensible, and possibly the differentiation between final organisms, but could spores develop into an animal-like creature? And could algae/fungi and animals exist as an integrated symbiotic organism?

Another feature of Orks is their 'genetic memory'. Apparently, Orks (who are not very bright, except when finding new ways to kill stuff) make up for their lack of intelligence by possessing some kind of genetic memory, allowing them to build all manner of destructive and dangerous devices. Incidentally, Orks are also said to create some kind of gestalt psychic field, although this is probably a discussion best left for another blog post....

Josh Harbort

HeLa Turbine: Unlimited Power!

In a science fiction game I have, there is a project a civilisation can construct called the HeLa Turbine. The game describes it as essentially using 'cellular energy' to provide power on an empire-wide scale (similar to a Hoover Dam power source in magnitude). The cells used to construct the Turbine are from the HeLa mutant human cell lineage (hence the name). Would this actually be feasible? I can imagine the use of cells to create a large potential gradient, but I'm not sure it would be as efficient as other processes (e.g. solar power), especially considering the efficiency of large cell culture batches for producing proteins.

Josh Harbort

dsRNAi

For our final BIOL1040 content lecture, Dr Botella was telling us a bit about the research that he and his group have been involved in over the years. One of the (many) interesting stories he told was of self-vaccinating plants. While the plants must actually be genetically transformed for this to work (they can't do that themselves, at least not voluntarily), once they are transformed with the right DNA sequences, they can provide their own defence against invading parasites. This also has big implications for human/animal immunity, too.

dsRNAi refers to the main molecule used in this method: double-stranded RNA (interference). Essentially, when a plant or animal cell detects an unprotected double-stranded RNA segment, it cuts the segment into smaller pieces (small interfering RNAs, or siRNAs, if I recall correctly) and then separates the double-stranded RNA back into single strands again. One strand is then used to remove matching RNA copies from the cell (degrading existing transcripts) and the other is used to find the gene in the DNA and silence it (generally through methylation of the accompanying histones).
It is possible to artificially stimulate this process by inserting a reversed copy of a gene directly behind the original gene. The RNA produced will thus have two complementary halves, and will fold into the double-stranded RNA complex, allowing the experimenter to silence the copied gene.
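The hairpin trick above can be illustrated with a toy calculation. The sequence below is entirely made up; the point is just that a gene followed by its reverse complement transcribes into an RNA whose two halves base-pair with each other:

```python
# Minimal sketch: why a gene followed by its inverted copy yields a
# self-complementary 'hairpin' transcript. The sequence is hypothetical.

COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """Return the reverse complement of an RNA string."""
    return "".join(COMP[base] for base in reversed(rna))

gene = "AUGGCUUACGGA"                    # made-up mRNA segment
hairpin = gene + reverse_complement(gene)

# The two halves of the transcript base-pair with each other,
# forming the double-stranded region that triggers RNAi.
left, right = hairpin[:len(gene)], hairpin[len(gene):]
assert left == reverse_complement(right)
print(hairpin)
```

Because the complement operation is its own inverse, the second half always pairs exactly with the first, however long the inserted gene is.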

What's more, if a gene not used by the host organism but crucial to a parasite's survival is inserted along with its reverse, any parasite that feeds on the host's cells will take up the double-stranded RNA and its crucial gene will be silenced. The host thus produces its own vaccine, and the dsRNAi can be transferred between cells as well as produced in any cells transformed with the silencing construct. So, as long as the cells can be transformed, any eukaryotic cell can be treated with this vaccine. This means that retroviruses are a key delivery method for animals, while plants can be transformed with Agrobacterium.

Dr Botella also said that we should see an 'explosion' in medical treatments based on this idea. The reason for the stymied research: two big companies were fighting over the patents, and a US big (insert actual word) attorney has ruled in favour of one company.

There is something potentially quite scary about using this new RNAi system (we need to make a snazzy name for it...): since humans rely upon the correct operation of certain genes to remain living, it would be very easy to use the viral vector method to silence our genes, and we have very little natural defence against it. Is there a method to counter this threat? One benefit of the great mutability of viruses is that they would probably differentiate into non-deadly strains as they mutate. Would we be able to develop resistance to this (and could we demonstrate it in a lab)? Of course, you would need to include the virus replication sequence as well as the gene-silencing RNA to be really deadly, and this greatly limits the size of the gene you can knock out. But perhaps you could use another (retro)virus carrying just packaging information to infect the host, and the host would manufacture the dsRNAi-weapon virus by itself before dying. (That death may itself be another way to keep us alive: cells die before transmitting the virus.)

Food (and agrobacteria) for thought,
Josh Harbort

Monday, 31 October 2011

Slides, for your enjoyment

Hi all. Here's my slides if you're at all interested by what I was talking about: references are attached! I really do recommend looking at the first reference on microtubule thermodynamics... it will blow your mind!

Tuesday, 25 October 2011

EXAM LOCATION

The exam will be at 11am (officially, 11:15) in room 6-407. That is, it will be in the classroom we have been using. This should make you all more comfortable (although we may adopt a less Arthurian table arrangement).

Allergy as a guardian

A new study has found more experimental data that seems to support the theory that something about the way the body typically reacts to allergies also guards the brain against developing tumours.

The study drew on data from tens of thousands of people who had previously participated in four other broad health studies.

They found that patients whose bloodstreams were filled with immunoglobulin E (IgE), the antibody the body produces to fight many allergens, were also, statistically speaking, less likely to develop a brain tumour, and, if they did, tended to survive longer.

“In terms of fighting the cancer or preventing it from growing, people who have allergies might be protected. They might be able to better to fight the cancer,” said one of the study co-authors.

The results need to be taken with a grain of salt, it seems, with a couple of anomalies appearing in the data.

For instance, while the appearance of tumours lessened significantly for patients with slightly elevated levels of IgE, the same was not true for those patients with much higher levels of the antibody, which means there is either a failure somewhere in the data, or we simply don't understand enough about why IgE appears to have this protective effect.

Also, of the thousands of people included in the data, only 159 went on to develop brain tumours. These people were compared against a control group of over 500.

Still, this study is not without precedent. Earlier this year, a team in Chicago conducted a similar study into the glioma-fighting qualities of allergies, and also found that allergies had a protective effect on patients.

In any case, while it's not clear how or even whether it's this allergy-produced antibody doing the good things, it seems more and more likely that allergies might actually have an upside.


Source:

http://news.brown.edu/pressreleases/2011/10/allergies

BRCA1: Tumour suppressor gene for breast cancer?

The protein encoded by the tumour-suppressor gene BRCA1 may keep breast and ovarian cancer in check by preventing transcription of repetitive DNA sequences. This explanation brings together many disparate theories about how the gene functions and could also shed light on how other tumour suppressors work.

Since the discovery in the mid-1990s that defects in BRCA1 strongly predispose women to breast and ovarian cancer, researchers have suggested numerous ways in which the protein might stop cells from becoming cancerous. Some have focused on its ability to repair DNA damage, whereas others have studied how it regulates cell-cycle checkpoints, transcription or cell proliferation. But until now, no unifying theory of how these different functions might prevent breast and ovarian cancer has emerged.

The group studied cells from mice that lack the Brca1 gene. Along with the usual problems attributed to defects in BRCA1 (in areas such as cell-cycle regulation and DNA repair), the researchers also found in these cells a surprising paucity of 'heterochromatic centres' — dense packages of normally untranscribed, repetitive sequences of DNA near a chromosome's centromere. Instead, these DNA regions were highly active, churning out large numbers of RNA transcripts called satellite repeats.

In normal cells, the BRCA1 protein keeps these regions silent by tagging histones, or DNA packaging proteins, with a molecule called ubiquitin. When the researchers added artificial ubiquitin–histone complexes to the mutant cells, the cells recovered, suggesting this was indeed the Brca1 gene's core function. Conversely, flooding normal cells with satellite repeats brought on hallmarks of genomic instability such as chromosome breaks and accumulating mutations – all thought to be features of BRCA1 loss in cells.

People have found BRCA1 in many places, doing many things. All these processes involve heterochromatin, so maybe we have one mechanism that allows the explanation of a large number of observations people have made about BRCA1.

It's still not clear how this mechanism could explain the tumour suppressor's specificity to breast and ovarian tissue, but those tissues are, for some reason, especially sensitive to the loss of BRCA1 function.

Another research group had satellite repeats produced in many different types of tumour tissue, including those that lack BRCA1 mutations, suggesting that multiple pathways have gone awry in cancer to impair heterochromatin maintenance. However, the explanation is still not sufficient to give a clear answer to some puzzling questions on how satellite repeats could have such wide-reaching effects on cellular processes.

Patients with BRCA1 defects initially have one mutant copy of the gene and one normal copy, then lose the functioning copy as the tumour develops. But because the researchers studied cells completely lacking Brca1, the findings do not explain why patients are initially predisposed to tumours, but rather why tumours develop after the functioning copy of BRCA1 shuts down. There is thus a need to understand how one defective BRCA1 copy can predispose people to tumours.


Source:

Zhu, Q., G. M. Pao, et al. (2011). "BRCA1 tumour suppression occurs via heterochromatin-mediated silencing." Nature 477(7363): 179-184.

Ting, D. T., D. Lipson, et al. (2011). "Aberrant Overexpression of Satellite Repeats in Pancreatic and Other Epithelial Cancers." Science 331(6017): 593-596. doi:10.1126/science.1200801.

Fatherhood: Makers of Men

Men may not go on a hormonal rollercoaster with their pregnant partners, but once the baby shows up, their testosterone levels drop. The drop makes sense given what testosterone does: it tends to boost behaviours linked to competing for a mate, risky activities that may conflict with the responsibilities of fatherhood. The biggest testosterone drops were observed in fathers of newborns and those highly invested in child care. The finding that fathers are hardwired to care for children adds to previous cultural models of human evolution, which traditionally depict only the mother as hardwired for hands-on child care.

The study followed 465 men participating in the Cebu Longitudinal Health and Nutrition Survey, started in the Philippines in 1983, when the participants were 1 year old. At age 21.5 (in 2005), the researchers tested the single male participants' testosterone levels when they woke and when they went to sleep. The measurements were repeated at age 26 (in 2009), when about half of the participants had become fathers.

Men who stayed single showed a small age-related decline of about 12 to 15 percent in the male sex hormone, while the testosterone levels of new fathers (those with a baby between 1 month and 1 year old) dropped about 30 percent on average. Hormone levels in fathers of newborns (1 month and younger) dropped four to five times more than levels in single men, and twice as much as in fathers of older children.

Newborn babies come with really intense physical, emotional and psychological changes, and men's biology responds to that, in line with what is expected as men transition into the new role of being a father to a newborn.

As to the effect of the lowered testosterone, the researchers can't be sure. There could be effects on libido and muscle mass, though they are probably mild, since the participants' levels are still within the normal range.

Lower testosterone could influence the amount of time a man spends with his family, essentially by tempering his urge to go out and reproduce. Higher testosterone has been associated with increased risk-taking and competition with other males. This could be why testosterone levels are even lower with increased child care investment.

The finding may also explain why having a partner and becoming a father are good for a man's health and longevity. This could be somewhat mediated by the changes in testosterone levels. Some researchers believe testosterone lowers immune function: Higher testosterone levels may interfere with the immune system's ability to fight off infection. If this is true, lowering testosterone could be an investment in men's health. The researchers plan on following up with these men at around age 30.

Source:

Gettler, L. T., T. W. McDade, et al. (2011). "Longitudinal evidence that fatherhood decreases testosterone in human males." Proceedings of the National Academy of Sciences. doi:10.1073/pnas.1105403108.

Heritability of Longevity

In October 2009 Stanford University geneticist Anne Brunet was sitting in her office when graduate student Eric Greer came to her with a slightly heretical question. Brunet's lab had recently learned that they could lengthen a worm's lifetime by manipulating levels of an enzyme called SET2. "What if extending a worm's lifetime using SET2 can affect the life span of its descendants, even if the descendants have normal amounts of the enzyme?" he asked.

The question was unorthodox, Brunet says, "because it touches upon the Lamarckian idea that you can inherit acquired traits, which biologists have believed false for years." The biologist Jean-Baptiste Lamarck theorized in 1809 that the traits exhibited by an organism during its lifetime were augmented in its offspring; a giraffe that regularly stretched its neck to eat would father calves whose necks were longer. The idea was largely discredited by Darwin's theory of evolution, first published in 1859. More recently, scientists have begun to realize that an organism's behavior and environment may indeed influence the genes it passes to its offspring. The heritability of those acquired traits is not based on DNA, but on alterations in the molecular packaging that surrounds a gene. When Greer approached Brunet in 2009 with his question about worms and SET2, such "epigenetic" inheritance had only been discovered for simple traits such as eye color, flower symmetry and coat color.

The study used Caenorhabditis elegans worms with very low levels of SET2. The enzyme normally adds methyl molecules onto DNA's protein packaging material. In doing so, the enzyme opens up the packaging material, allowing the genes to be copied and expressed. Some of those genes appear to be pro-aging genes. The group knocked out SET2 by removing genes that code for it. This had the effect of significantly lengthening the worms' life spans, presumably because those pro-aging genes were no longer expressed.

Next, the long-lived, enzyme-lacking worms mated with normal ones. The offspring had the regular genes for making SET2, and even expressed normal amounts of the enzyme, but they lived significantly longer than control worms whose parents both had regular life spans. The life-extending effect carried over into the third generation, but returned to normal by the fourth generation (in the great-grandchildren of the original mutant worms). For the first few generations, having a long-lived ancestor increased life expectancy from 20 days to 25, extending a worm's longevity by 25 to 30 percent on average.

The group have not yet determined the exact mechanism for the lifetime extension, or which molecules are at work. The knowledge that epigenetics can affect a complex trait like life span has scientists curious to find out what other kinds of traits, such as disease susceptibility, metabolism and developmental patterns, are epigenetically heritable. Because epigenetic effects can be modified by environmental stimuli, it is possible that some of these traits "could be determined, at least in part, by the environment and lifestyle choices of parents, grandparents or even great-grandparents."

The study's results are also exciting because the genes that code for the life-lengthening SET2 enzyme exist in other species, including humans. Brunet says she wants to see if the results can be replicated in vertebrates, such as fish and mammals. Those questions will not be answered for many years, because it is unknown whether the SET2 complex has the same function in other species, and because those species have longer generational time frames.


Source:
Greer, E. L., T. J. Maures, et al. (2011). "Transgenerational epigenetic inheritance of longevity in Caenorhabditis elegans." Nature advance online publication.

Facebook and the brain

Scientists have discovered a link between Facebook friends and brain size, suggesting social networking could possibly change your brain - or that some people are just 'hard-wired' to make more friends.

Four brain areas, which are known to play a role in memory, emotional responses and social interactions, are larger in those with longer friend lists. But the English scientists say that, so far, it is not possible to say whether Facebook is a brain-changer.

"The exciting question now is whether these structures change over time -- this will help us answer the question of whether the Internet is changing our brains."

The team of scientists used magnetic resonance imaging to study the brains of 125 Facebook-addicted university students. They then compared the brain scans with those of a further 40 not-so-addicted, or non-using, students.

What they found was a correlation between the number of Facebook friends and the amount of 'grey matter' in the amygdala, the right superior temporal sulcus, the left middle temporal gyrus and the right entorhinal cortex. Or, in layman's terms, some fairly useful parts of the brain.

Grey matter is the layer of tissue where mental processing occurs, and its thickness, specifically in the amygdala, was also linked to the number of 'real-world' friends a person has. In the other three areas, the connection between grey matter thickness and friend count applied only to online friends.

Technically, if the research is proven, we could scan a person's brain and find out how long they spend interacting online, and how long they communicate in the real world.


Source:

http://www.newsdaily.com/stories/tre79h89l-us-facebook/

New treatment for haemophilia

Haemophilia is a group of hereditary genetic disorders that impair the body's ability to control blood clotting, or coagulation, which is used to stop bleeding when a blood vessel is broken. Treatment for this disorder is usually costly, so research into a new protein treatment is needed to solve this problem.

A research team at the Children's Hospital of Philadelphia, USA found that a custom-built variant of coagulation factor Xa (FXa) was effective in controlling bleeding in haemophiliac mice. It appeared to do so more safely, and with greater long-term prospects.

Traditionally, haemophiliacs require fairly regular infusions of clotting proteins in order to avoid uncontrolled bleeding.

Often, the body perceives these proteins as foreign material to be eliminated, mounting an immune response that more or less makes protein treatment useless.

At this point, the only other option for sufferers is spend a small fortune on drug treatments instead, and even this isn't guaranteed to work.

The thrust of this new treatment, according to the study team, is to take the naturally occurring FXa and reshape the protein into a customised form, which not only means that it stays in the system longer and requires a smaller quantity of protein, but also that doctors can much more easily control the way the protein interacts with other physiological systems, making it more predictable and safer to use than the unpredictable natural FXa.

"Our designed variant alters the shape of FXa to make it safer and efficacious compared to the [naturally occurring] wild-type factor, but much longer-lasting in blood circulation," said the head researcher.

Monday, 24 October 2011

Sneaky Shrimp Smash and See Sircularly

Yes, I am aware that it is spelled with a `c'. As Josh has already alluded to, I have had this draft sitting around for a while now...

Mantis shrimp are about as extreme as a small crustacean can get! Not only are they beautifully coloured, they can generate shock waves, smash glass and see circularly polarised light. They even feature in a trading card game. Needless to say they have also been the subject of many great articles.

The punch of the mantis shrimp is so forceful that it is capable of generating cavitation. Plenty of people talk about it but no-one really seems to explain how it is capable of punching so quickly, especially underwater! I did track down one guy who does discuss this, and the paper that it's all drawn from. Essentially, the mantis shrimp is able to get enough energy into the punch by pre-tensioning a resilient elastic mechanism within its arm. This means that it doesn't have to be able to generate a huge amount of power: it can tension the elastic relatively slowly, then release the whole lot of energy very quickly (this is the idea behind the flywheel in more mundane mechanical systems).
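The power amplification of this slow-load, fast-release trick is easy to estimate. The numbers below are illustrative guesses, not measured mantis shrimp values; the point is that the amplification factor is simply the ratio of loading time to release time:

```python
# Back-of-envelope sketch of elastic power amplification.
# All numbers are made-up illustrative values.
stored_energy = 0.05       # J, elastic energy loaded into the 'spring'
loading_time  = 0.5        # s, slow muscular contraction
release_time  = 0.001      # s, duration of the strike

muscle_power = stored_energy / loading_time   # power the muscle must supply
strike_power = stored_energy / release_time   # power delivered at release

# The amplification depends only on the timing ratio, not the energy.
print(strike_power / muscle_power)
```

With these assumed numbers the strike delivers 500 times the power the muscle ever produced, which is why a weak, slow actuator can still drive a cavitation-inducing punch.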

Their claws can accelerate at 100 thousand metres per second squared and reach a top speed comparable to your car on the highway: the forces involved range into the thousands of newtons, which isn't bad considering the shrimp reach only a few dozen centimetres in length (around 25 cm). The rapid motion of the mantis shrimp's arms means that the shear rate at their surface is huge: so huge that the pressure difference between fluid streams around the claw (definitely high Re) is sufficient to cause cavitation.

Cavitation is the formation of a gas bubble inside a body of liquid as a result of rapid motion. It can be heard when you do donuts in your boat (not that I've done this.... *ahem*) and can cause quite a bit of damage: the rapid collapse of cavitation bubbles generates shockwaves that can produce pressures of up to 9 MPa (from a bubble only 2.5 mm across!). The collapse of these bubbles could be considered a great example of an irreversible adiabatic process... the compression of the gas raises its temperature to around 5000 K, allowing it to briefly emit light, in a process referred to as sonoluminescence.
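A rough ideal-gas estimate shows how adiabatic compression gets the bubble contents that hot, via T2 = T1 (V1/V2)^(gamma-1). The compression ratio here is an assumed illustrative value, not a measured one:

```python
# Rough adiabatic-heating estimate for a collapsing cavitation bubble,
# treating the trapped gas as ideal and diatomic (gamma = 1.4).
# The compression ratio is a made-up illustrative value.
gamma = 1.4
T1 = 300.0                   # K, ambient water temperature
compression_ratio = 1000.0   # V1 / V2 at the moment of collapse

T2 = T1 * compression_ratio ** (gamma - 1.0)
print(T2)   # several thousand kelvin: hot enough for sonoluminescence
```

A thousandfold volume compression already lands in the few-thousand-kelvin range quoted above, so no exotic physics is needed to make the bubble glow.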

The net effect is that a prey item gets smashed with a knobbly calcified arm, then battered by broadband sound at 240dB, then eaten! This seems a little excessive for catching and eating snails... hardly the most nimble prey.

In addition to their deadly cutlery, mantis shrimp are able to distinguish between circular and linear polarisations of light. They do this by passing light through a cell which acts like a broadband quarter-wave plate, taking circularly polarised light in and giving linear polarisations out. As it happens, this waveplate is 'better' than our own manufactured waveplates, in the sense that it outperforms any human-made material across the visible spectrum (waveplates are generally manufactured with a specific wavelength or a very narrow range of wavelengths in mind, so they don't work well at other frequencies). Even more interesting, the cell itself is also a photoreceptor! This is just the beginning when it comes to mantis shrimp vision... but I'll leave that to those better qualified.

By the way, the actual shrimp is fluorescent too!

Enjoy!