Saturday, 12 November 2011

The dreaded question from last year...

Hi all. I promised myself that I would look at this question again sometime this semester and prove to myself that I could do it... Of course, I am talking about the ratchet questions from last year's BIPH2000 exam.

The first part of the question goes like this: if I have a G-ratchet, draw the free energy landscape. This will be one-dimensional. Since the rod is so long and has only one degree of freedom (the axial direction) the entropy is constant. If we choose to measure energies such that G=0 when none of the springs is depressed we will find that the graph looks just like Nelson's plot of the potential.

The next question asks for the steady-state probability distribution. Consider a single step of length L in either direction: neither gives a net free energy change, no matter what the starting point was. Thus there is no tendency to move in either direction more often than the other, which is the kinetic interpretation of equilibrium. We know that this means the distribution is just the Boltzmann distribution.

Since the rod is semi-infinite, the probability of being found at any particular position approaches zero in the equilibrium state. However, over each period [0, L] we can write the normalised Boltzmann distribution
$$P(x) = \frac{e^{-U(x)/k_B T}}{\int_0^L e^{-U(x')/k_B T}\, dx'}, \qquad 0 \le x \le L.$$
Now if the width of the spring-loaded ratchet head is l, and the maximum (compressed) energy is epsilon, the potential is a sawtooth, rising linearly across the head and dropping back to zero over the rest of each period:
$$U(x) = \begin{cases} \varepsilon\, x/l, & 0 \le x \le l, \\ 0, & l < x < L, \end{cases}$$
repeated with period L.
Since this is equilibrium we have the expectation that j = 0.

The final part of the question asks what would happen if the ratchet were tightly coupled to an ion transporter. Since this is tight coupling the rod is in equilibrium when the ions are, and since we have already seen that the rod alone has no net driving force we need only consider the ions. (By the way, the free energy landscape is again one-dimensional, because of the tight coupling, and drops by some amount with every rightward step).

So, to halt the rod we just need to apply the Nernst voltage. If this is the voltage across the membrane (there are no counterions mentioned, so no need to worry about depletion layers or screening) then we find the field to be
$$E = \frac{V_\text{Nernst}}{t} = \frac{k_B T}{z e\, t} \ln\frac{c_1}{c_2}$$
(t is the thickness of the membrane), and this must be directed from left to right in order to stop the flux of ions.

Any takers? I get E~10^7 V/m.
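For anyone who wants to check the arithmetic, here's a minimal sketch in Python. The concentration ratio and membrane thickness are my own assumed values (the exam's numbers aren't reproduced here); the point is just that kT/e dropped across a few nanometres lands near 10^7 V/m.

```python
import math

# Physical constants
k_B = 1.380649e-23     # Boltzmann constant, J/K
e   = 1.602176634e-19  # elementary charge, C

# Assumed values (not from the exam paper -- chosen for illustration)
T     = 310.0   # temperature, K (body temperature)
z     = 1       # ion valence
c_out = 10.0    # outside concentration (arbitrary units)
c_in  = 1.0     # inside concentration (same units)
t_mem = 5e-9    # membrane thickness, m (~5 nm lipid bilayer)

# Nernst voltage and the corresponding field across the membrane
V_nernst = (k_B * T) / (z * e) * math.log(c_out / c_in)
E_field  = V_nernst / t_mem

print(f"Nernst voltage: {V_nernst*1e3:.1f} mV")  # ~61 mV for a 10:1 ratio
print(f"Field:          {E_field:.2e} V/m")      # ~1.2e7 V/m
```

A 10:1 gradient across a ~5 nm bilayer gives about 1.2e7 V/m, consistent with the estimate above.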

Friday, 11 November 2011

Reflections...

For those who are interested: my reflections on BIPH3001 (the assignment from last week).
Seth: I noticed a mistake in equation four and just had to fix it (there was a missing factor of mu)!

Policy for late assignments

No more assignments will be accepted past 9am Monday morning. I will be sending out emails to let people know if they still have outstanding work.

Thursday, 10 November 2011

The Mystical Transfer Matrix

Hi all. The blog has been very quiet (I hope exams are going well for you all).

Anyhow, I decided I'd sit down and figure out what on earth the transfer matrix method was all about (this is given on pg 360 of Nelson). I had previously glossed over this portion of the text because I didn't understand at all what the transfer matrix did (or why it's even called the transfer matrix) and hadn't the time to ponder it deeply.

So... here's my take on things, with some assistance.

Suppose I have a single link in my polymer chain (not a very exciting polymer!). In Nelson's notation, with a statistical weight e^{+α} for a monomer pointing 'up' and e^{-α} for 'down', my partition function is
$$Z_1 = e^{\alpha} + e^{-\alpha}.$$
Now, let's write this as
$$Z_1 = \begin{pmatrix} 1 & 1 \end{pmatrix} \begin{pmatrix} e^{\alpha} \\ e^{-\alpha} \end{pmatrix},$$
just for the fun of it (this will be useful in a second!).

Suppose I add a second monomer, so my `polymer' is now a dimer. There are four ways this can happen, depending on the states of the existing monomer and the new one (call them 'up' and 'down', or u and d). Thus I can have uu, ud, du and dd. Each of these options contributes to the partition function of the dimer. Since two options go with having the first monomer 'up' (uu and ud), and two with 'down' (du and dd), I can write a matrix equation, which for now I'll represent as
$$Z_2 = \begin{pmatrix} 1 & 1 \end{pmatrix} \begin{pmatrix} ? & ? \\ ? & ? \end{pmatrix} \begin{pmatrix} e^{\alpha} \\ e^{-\alpha} \end{pmatrix}.$$
This needs filling in. Let's do the uu term. If the first monomer is 'up' and I add another 'up' then the term needs to be multiplied by e^{α+γ} (a factor e^{α} for the new monomer pointing 'up', and e^{γ} for agreeing with its neighbour; disagreement costs e^{-γ}). Repeating the process for the other combinations yields
$$Z_2 = \begin{pmatrix} 1 & 1 \end{pmatrix} \begin{pmatrix} e^{\alpha+\gamma} & e^{\alpha-\gamma} \\ e^{-\alpha-\gamma} & e^{-\alpha+\gamma} \end{pmatrix} \begin{pmatrix} e^{\alpha} \\ e^{-\alpha} \end{pmatrix}.$$
We call the matrix in the middle the `transfer matrix', T, but it would probably be more sensible to call it the `extension matrix', since it accounts for all possible ways in which the system may be extended by one monomer.

Now, suppose I take my dimer and add another monomer, to make a trimer. Since the interactions are local (they depend only on nearest neighbours) there are again only `four' ways in which I can do this (the rest of the possible configurations are already counted in the partition function of the dimer). Thus I simply get
$$Z_3 = \begin{pmatrix} 1 & 1 \end{pmatrix} T^2 \begin{pmatrix} e^{\alpha} \\ e^{-\alpha} \end{pmatrix}.$$
It's now quite simple to see that for a polymer of N residues the partition function is
$$Z_N = \begin{pmatrix} 1 & 1 \end{pmatrix} T^{N-1} \begin{pmatrix} e^{\alpha} \\ e^{-\alpha} \end{pmatrix},$$
which is a really neat result.

To summarise: the transfer (extension) matrix finds the terms which must be added to the partition function every time a new monomer is added, so the polymer partition function may be built up by repeated application of T. Enjoy!
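If you'd like to convince yourself numerically, here's a small sketch in Python under the weight convention I used above (e^{±α} for the new monomer's orientation, e^{±γ} for agreement with its neighbour; the values of α and γ are arbitrary, and you should check the convention against Nelson's exact notation). It builds Z_N with the transfer matrix and compares it with a brute-force sum over all 2^N configurations.

```python
import itertools
import numpy as np

# Assumed weight convention (check against Nelson's notation):
# each monomer contributes exp(+a) if 'up', exp(-a) if 'down';
# each neighbouring pair contributes exp(+g) if aligned, exp(-g) if not.
a, g = 0.3, 0.7
N = 8  # number of monomers

# Transfer matrix: rows/cols indexed (up, down).
# T[new, old] = weight of the new monomer times weight of the new bond.
T = np.array([[np.exp(a + g),  np.exp(a - g)],
              [np.exp(-a - g), np.exp(-a + g)]])

v1 = np.array([np.exp(a), np.exp(-a)])  # single-monomer partition vector
ones = np.ones(2)

# Z_N = (1 1) T^(N-1) v1
Z_tm = ones @ np.linalg.matrix_power(T, N - 1) @ v1

# Brute force: sum Boltzmann weights over all 2^N spin configurations
Z_bf = 0.0
for spins in itertools.product([+1, -1], repeat=N):
    w = np.exp(a * sum(spins))                                         # orientations
    w *= np.exp(g * sum(s1 * s2 for s1, s2 in zip(spins, spins[1:])))  # bonds
    Z_bf += w

print(Z_tm, Z_bf)  # the two should agree to floating-point precision
```

The brute-force sum blows up exponentially with N, while the transfer-matrix product stays cheap, which is really the whole point of the method.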

By the way, the equations here were rendered with an online LaTeX interpreter which outputs images: very useful and fast!

Wednesday, 9 November 2011

Microscopic Model of Osmosis

Hi all. BIPH study is now underway...
I was re-reading the section in Nelson about osmotic pressure as the rectification of Brownian motion (remember how we all walked away from that chapter feeling a little nonplussed?) and decided that I'd like to understand what on earth he was going on about.

My biggest issue was the way that osmotic flow was ascribed to entrainment of fluid allowing solvent to be 'sucked' across the membrane: surely the situation is symmetrical, because the particles also entrain fluid as they head toward the membrane (therefore they presumably pull fluid toward the membrane just as fast as they suck it away).

I found a very nice, readable and detailed paper which goes through the same sort of reasoning as Nelson, but I think a bit more maths helps one to understand it a lot better (it's also a good reference for basic low-Re fluid mechanics).

The basic ideas are all the same as Nelson's, but there was one line that really grabbed me:
"... the membrane acts on the fluid using the particles as its agent[s]."
The argument goes like this: the fluid is responsible for the excitation of Brownian motion (creation of fluctuations) and the dissipation of energy, so every force which acts on the particle also induces fluid motion. At the membrane there is (on average) a net force on the fluid only when the particles bounce off the membrane, so they really do 'suck' solvent through the membrane by rectifying the Brownian motion of the fluid, using the particles as agents!
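To put a little maths behind that line (my own sketch of the standard force-balance argument, not a quotation from the paper): in steady state the force that the membrane potential U(x) exerts on the particles is handed over to the fluid, so
$$\frac{dp}{dx} = -c(x)\,\frac{dU}{dx}, \qquad c(x) = c_0\, e^{-U(x)/k_B T} \;\Rightarrow\; \frac{dp}{dx} = k_B T\,\frac{dc}{dx},$$
and integrating across the membrane region gives Δp = k_B T Δc, the van 't Hoff osmotic pressure, produced entirely by the particles pushing on the fluid as the membrane pushes on them.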

Saturday, 5 November 2011

Neglecting Inertia

I was thinking about the differences between classical and quantum stat mech...

When we talk about a bunch of particles diffusing under the influence of an external force we use the Smoluchowski equation, with the force corresponding to the negative gradient of some potential energy. We then proceed to say that the concentration at equilibrium is proportional to exp(-U/kT). I had always thought that by doing this we were essentially assuming that the particles have negligible inertia, i.e. that we are operating in the low-Re regime (since I would have thought that the concentration would be proportional to exp(-(K+U)/kT) otherwise, K being the kinetic energy).

It turns out that I was wrong:
concentration = integral of the distribution function over all momenta,
so the momentum is `integrated out', contributing only to the constant sitting out the front of exp(-U/kT).
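Explicitly (my rendering of the standard step, with K = p²/2m):
$$c(\mathbf{x}) \;\propto\; \int e^{-\left(p^2/2m + U(\mathbf{x})\right)/k_B T}\, d^3p \;=\; \left(2\pi m k_B T\right)^{3/2}\, e^{-U(\mathbf{x})/k_B T},$$
and the Gaussian momentum integral is the same at every point in space, so it is swallowed by the normalisation constant.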

For those doing statmech: why don't I need a degeneracy function i.e. the density of states?
The answer lies in the interpretation;
N = integral of g*f over energies = integral of concentration over space = integral of f over phase space;
so the degeneracy is automatically accounted for by the fact that one integrates over a volume in phase space, rather than over a line (in semi-classical mechanics this is where the density of states, g, comes from in the first place: it is the volume of a thin slice of phase space with energy e to e+de, divided by the number of uncertainty-limited cells that fit in that slice).
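In symbols, with H = p²/2m + U and one state per phase-space cell of volume h³:
$$N = \int g(\varepsilon)\, f(\varepsilon)\, d\varepsilon = \int f\big(H(\mathbf{x},\mathbf{p})\big)\, \frac{d^3x\, d^3p}{h^3} = \int c(\mathbf{x})\, d^3x,$$
so g(ε)dε is just the number of cells in the shell ε < H < ε + dε.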

It's worth noting that we will still be in the low-Re regime anyway: if the force-velocity response is linear it implies the drift velocity is less than or comparable to the thermal velocity, which means that for most things (proteins in water, etc.) Re will be small.

Good luck with exams!

Friday, 4 November 2011

Announcement courtesy of Seth

To the people who haven't finished marking the assignments (including yours truly): we should submit the marks before the exam period begins, which means before Monday. The same applies to our end-of-course summary, blog comments and so on, which were meant to be due today.

Here's his reply to me:

"The sooner the better, but I absolutely want it before the exam period starts.....anything submitted after the beginning of the exam period will not be counted."

Good luck everyone! :D

Wednesday, 2 November 2011

Chemical Kinetics as Maximum Entropy Inference


At some point in the semester, I attempted to show that an exponential decay formula (e.g. for the kinetics of unimolecular decay) could be motivated by MaxEnt reasoning, and quickly got confused trying to prove it to myself.

I have found that this is part of Chapter 15 in the book that I used to teach BIPH2000 this year. This is the first time I have seen this in an undergraduate textbook (or indeed any).

Here is how you do it. You are doing maximum entropy over a sample space of trajectories. Each trajectory is labelled by the time t at which the reaction takes place, so the distribution over trajectories is a probability density p(t). The constraints are given as integrals over time. You find the probability distribution that maximizes the entropy under the following constraints:

1. Normalization (the integral of p(t) over all time is one)
2. Average lifetime (the integral of t*p(t) is equal to the lifetime, call it tau)

The result is a probability distribution that is exponentially decaying in time with a time constant given by the Lagrange multiplier in the optimization.
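For completeness, here's the optimization spelt out (my own rendering; λ and μ are the multipliers for the two constraints):
$$\mathcal{L} = -\int_0^\infty p \ln p \, dt \;-\; \lambda\left(\int_0^\infty p\, dt - 1\right) \;-\; \mu\left(\int_0^\infty t\, p\, dt - \tau\right).$$
Setting the functional derivative to zero gives $-\ln p - 1 - \lambda - \mu t = 0$, so $p(t) \propto e^{-\mu t}$; imposing the two constraints then fixes $p(t) = \tau^{-1} e^{-t/\tau}$, i.e. the Lagrange multiplier is $\mu = 1/\tau$.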

The idea of doing MaxEnt over trajectories is ascribed to Jaynes (he called it "maximum caliber"): Jaynes, E. T. (1980). Annu. Rev. Phys. Chem. 31, 579-600.

It has recently been revisited by e.g. Dill and Stock:

Ghosh, K., Dill, K., Inamdar, M., Seitaridou, E., & Phillips, R. (2006). Teaching the principles of statistical dynamics. American Journal of Physics, 74(2), 123–133.

Otten, M., & Stock, G. (2010). Maximum caliber inference of nonequilibrium processes. The Journal of Chemical Physics, 133(3), 034119.

Stock, G., Ghosh, K., & Dill, K. A. (2008). Maximum Caliber: A variational approach applied to two-state dynamics. The Journal of Chemical Physics, 128(19), 194102. doi:10.1063/1.2918345


Tuesday, 1 November 2011

Everything I've ever learnt is broken..?

Hmmm....

So here's my gripe from earlier today. We all know that the canonical ensemble has energies assigned according to the Boltzmann distribution. We also know that this is derived by maximising the entropy subject to the constraint of constant energy... but isn't the canonical distribution the one which minimises the free energy?

The trick is this: to derive the Boltzmann distribution we consider a subsystem, treating the rest of the (much larger) system as a reservoir at fixed total energy, and maximise the total entropy. In this way we get the distribution which minimises the free energy of any given subsystem: voila! This also makes it obvious why the microcanonical and canonical ensembles must become equivalent as the size of the system becomes larger: neat!
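The bookkeeping takes one line. Expanding the reservoir entropy to first order in the subsystem energy (using dS_res/dE = 1/T):
$$S_\text{tot} = S_\text{res}(E_\text{tot} - \langle E \rangle) + S_\text{sub} \approx S_\text{res}(E_\text{tot}) - \frac{\langle E \rangle}{T} + S_\text{sub} = \text{const} - \frac{F_\text{sub}}{T}, \qquad F_\text{sub} = \langle E \rangle - T S_\text{sub},$$
so maximising the total entropy over the subsystem's distribution is exactly the same as minimising the subsystem free energy.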

Now the apparent contradiction is gone, and I understand the ensembles a little bit better: everything isn't broken.

Melanoma

Here in Australia, a simple sunburn is often the least of our worries. Australia's long sunshine hours mean we record some of the highest numbers of melanoma cases in the world, making it the fourth most prevalent cancer in this country. Thus it's fitting that an Australian team is part of a new study that has successfully identified four key genetic markers that increase the likelihood of developing the skin cancer responsible for over 1000 deaths a year.

The study was conducted by a team from the Queensland Institute of Medical Research, along with a group from the University of Leeds, whose task basically involved surveying 2168 Australians of European descent (for melanoma is far more prevalent in this ethnic group than any other), and then sorting through half a million genetic markers known as single nucleotide polymorphisms, or SNPs. These SNPs flag a difference in a single nucleotide of DNA, and can also alter the function of certain strands of DNA, hence their importance in a lot of disease research.

The lead scientist in the Australian study mentioned that crunching through that many pieces of very specific genetic material came with its own share of problems: with about half a million SNPs under examination, so many tests are being run that lots of SNPs will look significant purely by chance.

The other research group also found an SNP that may be part of a genetic string the body uses to repair damaged DNA. Given that melanomas often stem from damaged genetic code and tissue corruption, genes like these may be crucial in working out how to fight cancerous growths like melanomas.

Source:

http://www.abc.net.au/science/articles/2011/10/10/3334971.htm

Ultrasound: key to healing some complex problems

Doctors in Scotland are using ultrasound to help patients with severe bone fractures, and finding it speeds recovery time by more than one-third. The ultrasonic pulses induce cell vibration, which doctors say stimulates bone regeneration and healing.

The technology is similar to the type used on pregnant women to image growing fetuses, but it uses a different sound frequency and a different pulse rate. One of the patients received the treatment for a shattered ankle he sustained after falling six metres.

A water-based gel is applied to a transducer, which is then held in place by a strap for 20 minutes. The patient doesn't feel anything, but the sound waves penetrate the tissue to stimulate cellular activity.

Typically, this type of injury would take six to 12 months to heal properly - if it does at all - but his foot healed after four months. Evidence suggests the ultrasound speeds up healing by about 40 percent.

Scottish doctors first developed ultrasound as a diagnostic tool in the 1950s, using adapted sonar technology at Glasgow's Western Infirmary. It is now used for a wide range of diagnostic and therapeutic purposes (it can help heal punctured lungs and has the potential to break up blood clots, among other uses), but this group is the first we've seen to use it for bone regeneration.

Apparently the treatment remains fairly expensive, so for now it's only being used in complex fractures. But as ultrasound devices get smaller and cheaper, future fractures may be much faster to heal.


Source:

http://www.bbc.co.uk/news/uk-scotland-glasgow-west-15262297

New Instrument for Surgery

A new kind of medical device, aimed at reducing incision-site infections that result from surgical procedures, has been created. But rather than battling microorganisms with pharmaceutical cocktails or some kind of post-surgical treatment, Nimbic Systems' Air Barrier System (ABS) keeps surgical sites free of bacteria and other bugs by creating a cocoon of purified air around the incision site for the duration of the surgery.

Infections picked up in the operating room can be as serious as the condition being treated by the surgery itself. Aside from threatening patient health, they are also costly: a bad Staphylococcus infection, for example, can drive treatment costs into the six figures per patient. ABS aims to curb those infections by ensuring that pathogens never get close to a potentially susceptible site during surgery.

It's a relatively simple setup: a reusable blower unit feeds filtered air into a sterile, disposable nozzle that is fixed in place adjacent to the incision. ABS then keeps a constant flow of purified air streaming onto the site, creating a kind of air cushion that keeps whatever microbes might be lurking around the O.R. at bay. This is particularly critical for long-duration surgeries, like procedures that implant prostheses or things like spinal or vascular operations.

As such, ABS got its first approval for use in hip replacement surgeries, where the patient's body is open to the ambient air for an extended period. In trials, ABS reduced the presence of bacteria and other microbes by more than 84 percent. Trials later this year will explore whether ABS can achieve similar results for spinal and femoral operations.


Source:

http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=121878&org=NSF

Eww! Worms!

A research team from the University of Melbourne has completed a study into the genome of the giant intestinal roundworm, a parasite that kills several thousand people a year, and has infected about a sixth of the world's population.

The study analysed the genome of a related worm, the pig roundworm, to learn more about the roundworm that infects humans, which commonly grows up to 30 cm in length. By collating genetic data, the team hoped to find out more about the way the worm works, and identify easy and cheap methods to fight it.

From the genome sequence, the group has identified five high priority drug targets that are likely to be relevant for many other parasitic worms. This may lead to new treatments that are urgently needed, for genome-guided drug target discovery is ideal for identifying targets that selectively kill the parasite and not the host.

The disease is rare in developed Western nations, but is extremely prevalent in areas such as South-East Asia, and particularly Africa.

Source:

http://newsroom.melbourne.edu/news/n-672


DNA and Childhood

Findings published today in the International Journal of Epidemiology suggest that socio-economic status and living standards early in life may actually cause changes in the DNA that are carried through life, regardless of how one's living conditions change along the way. Some adult diseases--type 2 diabetes, coronary heart disease, etc.--have been linked to socio-economic disadvantage in early life, but why or how is not really understood.

The study group's sample size is admittedly small, but what they found was significant. In 40 subjects in the UK who are participating in an ongoing study that has documented many aspects of their lives, the researchers looked at differences in gene methylation. Methylation is an epigenetic modification to one's DNA that changes a gene's activity, generally reducing that activity within the genome. Various factors can influence methylation, including environmental conditions.

In their sample, the researchers looked at DNA taken from the subjects at age 45. They chose subjects that had come from either very high or very low standards of living, and they looked at differences in DNA methylation across some 20,000 genes. They found that 1,252 methylation differences were associated with socio-economic circumstances in early life while just 545 were associated with socio-economic circumstances in adulthood, suggesting that where you come from really does make an impact on the very fiber of your biological being.

Moreover, the methylation patterns were clustered together in large swaths of DNA, suggesting an epigenetic pattern linked to humans’ early environments. If some diseases are linked to a person’s early upbringing, and we can see where there are changes happening in the DNA during early life, then we can narrow the window on where in the genome things like coronary heart disease and diabetes take root. Future research could peg where certain methylation differences are associated with specific diseases, then target those areas with drugs or other treatments.

Source:

http://scienceblog.com/48584/your-dna-may-carry-a-%E2%80%98memory%E2%80%99-of-your-living-conditions-in-childhood/


Josh's Slides for BIPH Talk

Great minds think alike... here are my slides (in convenient PDF format!) from our enlightening BIPH talks. Please enjoy their bland, metal appearance, and marvel at the only colour picture in the whole paper (that of cancerous mice)!

Link: https://docs.google.com/present/view?id=dcpscm3n_0k8zbjrgk
If that one has dodgy bits (not sure if the uploading is fully working...) here is a PDF version that is a little larger:
Link: https://docs.google.com/open?id=0BxWocosCk1vyMTUyNTQ0YzYtOGUxOC00YmY3LTk3MDEtYjc1OTBhNzAxMzY1

Josh H

Borg and Other Hive Minds (OHMs)

OHMs no longer means only 'On Her Majesty's Service'! As I'm sure most of you are aware, many science fiction stories feature some kind of 'hive mind'. This is generally described as being a kind of 'gestalt psychic field', controlled by some locus (whether an individual or a sum of the whole) and exerting some level of control over the cognition of its individual parts. Whilst this is a cool-yet-disturbing idea, do you think it is physically possible? If some kind of 'psychic field' existed, then this would be a simple matter, but is there another physical phenomenon that explains this? One idea (besides another 'psychic field') is a possible use of electromagnetic fields that link brains/bodies together.

This could partially explain why close-proximity interactions, such as touching, have a greater influence on a relationship. This would be mixed with other stimuli that rely only upon perception; so perhaps there is some kind of 'psychic field' built into (community-living/loving) humans, such that certain stimuli have an unconscious effect on us, causing us to act in a society-minded manner? Science fiction has once again explored this idea for us; an example is Isaac Asimov's Foundation series (2nd Foundation, for those who know).

This may not be such a silly idea, either. In a National Geographic (from last year sometime, I think), there was an article about domestication. It turns out that domestication has some kind of genetic basis: the researchers bred Russian foxes (I think they were silver foxes...) to be either domesticated or anti-domesticated (the sort that start to growl and scratch when a human just walks into a building) simply by selecting pups that acted in a friendly or non-friendly manner. The domesticated foxes ended up being like doggies or kitties in friendliness! (Word choice deliberate.) They also noticed phenotypic and behavioural changes (besides friendliness) such as spotted coats and curly tails. Thus, some kind of genetic change is happening to domesticated animals that makes them domesticated: even after some generations without human contact, descendants of domesticated animals are still domesticatable, whilst offspring of undomesticated animals aren't any friendlier when raised by domesticated parents. They are not yet sure whether these genetic changes are a result of selecting for desirable qualities (e.g. cuteness selection drives tail curling) or whether the phenotypic changes are a by-product of selecting for behavioural traits (are coat spots genetically linked to friendly behaviour? And what would that mean?).

Another question: are humans the ones being domesticated to animals? Do our pets know just which actions please us and do they use these traits to gain a survival advantage?

Perhaps the greatest question, however, is about us: to what extent have we domesticated ourselves to form civilisation?

And is this 'domestication' essential to form surviving communities?
Josh Harbort, your Locus.

4-Brain Theory: Resistance is Jungian (or futile?)

If one searches for Jungian psychological theory and neuroscience, one will inevitably come across the four-brain neuro-psychology theory.

Essentially, what this theory says is that the cerebrum primarily functions in a four-lobed set-up, two lobes in front and two at the back. Communication happens between the front-left and rear-left quadrants (and similarly for the right side) and between the front-left and front-right quadrants (also similarly for the rear half). It is hypothesised that communication between diagonally opposite lobes (e.g. front right to rear left) is less efficient/slower because the signals must first travel from the front right to the rear right or front left before travelling on to the rear left lobe.

Each lobe also has a specialty (although I forget exactly what each bit does... it will arise with studious application of Google and Wikipedia...) and humans, apparently, show differing capacities for each region. For example, one person may have a dominant front-right lobe. If this were the case, and if (as a completely arbitrary example) the front-right lobe controlled logical forethought and mathematical ability, that person would exhibit great strengths in those areas, but would be weakest in abilities associated with the rear-left lobe (e.g. empathy and subconscious inhibitions, as an arbitrary example). Some people might have strengths in two areas (e.g. right side, rear), which are generally adjacent, whilst others may have strengths in three adjacent areas (with the dominant region usually being the central one). A few rare cases are those who have strengths in all four regions: these people (from psychological studies) tend to be good at running large organisations but can be indecisive and find decision-making hard. So, whilst some people have particular strengths in some areas, each combination of lobe strengths has its own good and poor points, none being 'better' than any other. This is an interesting idea which suggests that people have different intelligences rather than inferior/superior intelligence, a view which I believe is much more realistic.

The punchline of this idea is that when people are forced into using their 'inferior' brain regions, they operate at much reduced efficiency and can actually suffer some psychological harm. The hypothesised reason is that signals travel with less resistance in the stronger lobes than in the weaker ones. This makes some lobes more efficient than others, resulting in the greater observed cognitive strength of those lobes.

Introversion and extroversion are not really related to this theory (although one of the lobe-strengths has similar effects). People are naturally aroused (= awake) to different degrees, based on their brains and on the time of day, amount of sleep, etc. The main theory behind intro- and extroversion is that introverted people are naturally highly aroused and so take in information more easily. They thus seek out areas with less stimulus, so that they are not overwhelmed by too much of it. Extroverted people are less aroused and so seek out more stimulus to wake themselves up; thus they function well in loud environments because they are awake enough to perform their tasks efficiently. People naturally lie on a spectrum between these two, but you can become more 'extroverted' if you are tired, for example, and require arousal.

I found this quite an interesting theory which does explain many psychologically observed phenomena. I was slightly skeptical about the 'brain resistance' idea, however: mostly the way it was written, but I'm also not sure whether this is a feasible way to look at brain structure. What does everyone else think? Also, what does everyone think of intro/extroversion? This is still an active area of research, so interesting ideas are sure to keep coming.

Josh Harbort

My Project: PHYS3900

My project for the Physics and Biophysics capstone course was on modelling the cell membranes of mammal cells and fungal cells, and comparing them.

The rationale follows (with a story!). Certain drugs show excellent specificity for killing fungal cells and not mammal cells. While this is easily understood upon cursory investigation (after all, fungi are markedly different to mammals in appearance), it does not hold up under more detailed examination: essentially, mammal and fungal cells are almost identical in composition! However, there is one major component of each cell type that differs from the other: the type of sterol included in the bilayer membrane. Mammal cell membranes (as you probably know) contain cholesterol, at an abundance of around 30-50 mol% (a major component!). Fungal cells have a similar abundance of sterol, but of a different kind: they include ergosterol rather than cholesterol.

Punchline time: it is hypothesised that the small differences between cholesterol and ergosterol are enough that anti-fungal drugs kill the tinea on your feet (if you're unfortunate enough to have tinea) but not you. What makes this even more surprising is that the part that differs most, the extra methyl group on the tail (the extra double bonds don't do much to distinguish the two sterols), is buried in the middle of the bilayer. See the picture below; ergosterol is the one with red bits, and the differences from cholesterol are shown in red. So how does it affect the interactions of drug molecules with the surface of the membrane bilayer?
Figure 1: Comparison of cholesterol (left) with ergosterol (right). Differences are shown in red on ergosterol. Note that formation of additional double bonds means that a hydrogen atom has been removed from the involved carbon atoms.

This question is hard to answer with traditional experimental techniques, such as NMR or X-ray crystallography, because these techniques either cannot show an atomic level of detail (NMR just takes an average of everything in the sample) or cannot reliably be used to image flexible biological membranes (lipid bilayers are notoriously hard to crystallise, let alone a whole biological membrane!). This is where my project comes in: we can use molecular dynamics (MD) simulations to model complex lipid bilayers atomistically and get the atomic-scale picture we need to discern the actual effects of the different sterols on the bilayer.
MD simulations (Martin and Wayne have both done them!) essentially use Newtonian mechanics (and a bunch of other assumptions, like mean-field approximations) to model bonded and non-bonded interactions. The method requires many further approximations to ensure that the observed behaviour of chemical species is reproduced in the simulations; the absence of quantum mechanical constraints limits its accuracy in this way. Nonetheless, when the simulations work they are an excellent method for examining the physical properties of a system, and can provide much information that cannot be obtained from physical model systems.
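For a flavour of what 'Newtonian mechanics on atoms' means in practice, here's a toy sketch in Python of the core MD loop: two Lennard-Jones particles integrated with velocity Verlet. This is my own illustrative toy in reduced units, not the production MD package used for the actual bilayer simulations.

```python
import numpy as np

# Toy MD: two Lennard-Jones particles in 3D, velocity-Verlet integration.
# Reduced units: epsilon = sigma = mass = 1. Purely illustrative -- real
# bilayer simulations add bonded terms, thermostats, cutoffs, etc.

def lj_force(r_vec):
    """Force on particle 1 from particle 2 for V(r) = 4(r^-12 - r^-6)."""
    r2 = np.dot(r_vec, r_vec)
    inv6 = r2 ** -3  # r^-6
    # F = -dV/dr r_hat = 24 (2 r^-12 - r^-6) / r^2 * r_vec
    return 24.0 * (2.0 * inv6**2 - inv6) / r2 * r_vec

dt, n_steps = 0.001, 5000
x = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])  # positions
v = np.zeros((2, 3))                               # start at rest

r = x[0] - x[1]
f = np.array([lj_force(r), -lj_force(r)])          # Newton's third law

for step in range(n_steps):
    # velocity Verlet: half-kick, drift, recompute forces, half-kick
    v += 0.5 * dt * f
    x += dt * v
    r = x[0] - x[1]
    f = np.array([lj_force(r), -lj_force(r)])
    v += 0.5 * dt * f

    if step % 1000 == 0:
        ke = 0.5 * np.sum(v**2)
        r2 = np.dot(r, r)
        pe = 4.0 * (r2**-6 - r2**-3)
        print(f"step {step:5d}  separation {np.sqrt(r2):.3f}  E = {ke+pe:.6f}")
```

The printed total energy should stay essentially constant, which is the sanity check that the integrator is doing its job; production codes run essentially this loop over hundreds of thousands of atoms.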

In my project, I simulated a bilayer composed of a 1:1 mixture of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and cholesterol, representing animal cells (my supervisor actually performed this simulation, but I analysed it), and a bilayer composed of 1:1 POPC and ergosterol, representing fungal cells. Unfortunately, if one has an incorrect topology for a molecule the simulations crash (usually because all of the atoms in the molecule fly off in opposite directions, driving the energies to very silly heights), and this is what happened with my ergosterol simulation. This means that Professor Mark's group has some work to do to improve the model for ergosterol; it turns out to be a lot less similar to cholesterol, in terms of topology, than one would think. Cholesterol was seen to have a big effect on the bilayer structure, though, including thickening it and allowing more access to the methyl chains of POPC.

So, although my original aim remains unfulfilled, I was able to participate in active research!
Josh Harbort

My PHYS3900 Feature Article—Antibiotic Crisis: Look to the... Drugs?

Antibiotic Crisis. The very words bring terror to thousands of medical practitioners, leading to their waking nightmares at work.... This is the piece of assessment I handed in for the literature review/feature article for the capstone course. It seemed interesting at the time (which was the dying minutes of 3 days before it was due...) and hopefully, it is not too wishy-washy. I suppose our biophysics class can tell if it's biophysics-y enough.... :)

Here's the link: https://docs.google.com/open?id=13eErNnCs6aiclQlKkWW2_PBNY516vGsto9Bz1104wjyHqvluE0FL5FT2WNqq

Josh H.

Vulcan Physiology: Copper and Logic based

Most of you will know of the copper-based blood of Vulcans. This is a relatively common oxygen carrier amongst certain forms of life on Earth. However, none of these life forms is humanoid! Obviously, were we to interact with copper-blooded aliens, we would face some cultural difficulties. For example, would copper aliens require prawns (and other copper-blooded food) in much greater quantities than we require, since they need more copper than iron? And, assuming that they have the same mechanism as copper-blooded organisms on Earth, would they be able to cope with the large amounts of iron in our food? I suppose that it is likely that they would have some iron-based proteins, considering how common iron is in the universe.

Also, would their blood be green? If they were the same as organisms on Earth, I suspect it would be reminiscent of 'ichor' (that is, Greek god blood) because it could be clear when (de)oxygenated (can't remember which) and purple otherwise, or it could be purple and blue as in other organisms.
Would it be possible for humans to develop into copper-blooded organisms? I would suggest that it would require a lot of transgenic stuff, and I don't know how an embryo would be grown (since iron-blooded mothers would be incompatible with copper-based children). Would this have a beneficial effect on our metabolism?

And, the favourite of the Original Star Trek Series: Spock. Could copper-blooded and iron-blooded humanoids actually mate successfully?

Josh Harbort

Cellulase in Humans

As we should all be aware, humans can't digest wood, nor can any mammal (as far as I know) without the aid of symbiotic gut bacteria. Ruminants get around this by having four stomach chambers, one of which houses all of the cellulase-producing bacteria. Other herbivorous mammals (e.g. horses, pigs) also have a dedicated chamber for cellulase bacteria, although I can't recall their specific digestion processes at the moment. The only reason cellulose can't be digested by our starch-digesting enzymes is that cellulose molecules are linked through positions on the glucose molecule that those enzymes can't access, so another enzyme is required. One problem with cellulase bacteria is that they tend to eat all the good stuff (e.g. the glucose from the cellulose) and leave behind only their by-products (this is how four-chambered ruminants live). So, one must wonder: would it be beneficial for mammals to produce cellulase themselves? The enzyme is not substantially different in structure from starch-digesting enzymes, and I would be fairly certain that mammal cells could easily be transformed to incorporate it into the genome.

One problem is that rather tough teeth are required for chewing plant matter, but many mammals already possess these (and humans could just drink pulped plant/eat soft plants like we already do).
Another reason that mammals might not have intrinsic cellulase is the effect of non-soluble fibre: we all have heard that it is good for our digestive tract. But perhaps another solution could be found? For humans, at least, increased water in the diet would alleviate some of the problems, suggesting that tropical humans would be able to cope with the loss of fibre (also keeping in mind that it is likely that not all fibre would be broken down by the body, since there is so much cellulose: we would need bigger guts for it to take longer to leave the body, and then a new gut might not require fibre). How do other animals cope with the fibre problem?

If humans could digest cellulose, would that substantially change our energy levels? Perhaps it would result in sugar overloads.

Lactase is another of these sugar-digesting enzymes that most adult mammals do not possess (except for many humans). If humans can have lactase, why can we not have cellulase too? Does anyone know of any reasons why adult mammals lose their lactase?

Also, if humans could express cellulase, where would the best place to express it be? Obviously, we don't want bacteria eating all of the released glucose. Presumably, if it was expressed in the same region as glucose is taken up, we could avoid this problem. Thus, we could test the effect of including cellulase by using tablets... a rather uninvasive treatment (so long as they dissolve at the right spot). This could be useful as a short term ability, for soldiers in the field, for example.

Josh Harbort

Space Ork Physiology

Another race from Warhammer, the Orks are described as being an integrated symbiotic organism, mixing 'genetic traits of both animal and fungal life forms'. According to the background, Orks (which have green skin) have algae that flow through their circulatory system and are a part of their digestive tract: apparently, the action of the algae in repairing damaged tissue makes the Orks quite resilient. Orks are also reputed to reproduce via spores. These spores develop into cocoons which then hatch different organisms closely related to Orks; which organism develops depends on the conditions. Some of the other organisms are squigs (dog-sized, aggressive and very bitey!), gretchin (smaller, weaker versions of Orks), snotlings (smaller, weaker versions of gretchin) or 'normal' fungi.

So, would this system actually be feasible? I think that the release of spores is sensible, and possibly the differentiation between final organisms, but could spores develop into an animal-like creature? And could algae/fungi and animals exist as an integrated symbiotic organism?

Another feature of Orks is their 'genetic memory'. Apparently, Orks (who are not very bright, except when finding new ways to kill stuff) make up for their lack of intelligence by possessing some kind of genetic memory, allowing them to build all manner of destructive and dangerous devices. Incidentally, Orks are also said to create some kind of gestalt psychic field, although this is probably a discussion best left for another blog post....

Josh Harbort

HeLa Turbine: Unlimited Power!

In a science fiction game I have, there is a project a civilisation can construct called the HeLa Turbine. The game describes it as essentially using 'cellular energy' to provide power on an empire-wide scale (similar to a Hoover Dam power source in magnitude). The cells used to construct the Turbine are from the HeLa mutant human cell lineage (hence the name). Would this actually be feasible? I can imagine the use of cells to create a large potential gradient, but I'm not sure it would be as efficient as other processes (e.g. solar power), especially considering the efficiency of large cell-culture batches for producing proteins.

Josh Harbort

dsRNAi

For our final BIOL1040 content lecture, Dr Botella was telling us a bit about the research that he and his group have been involved in over the years. One of the (many) interesting stories he told was of self-vaccinating plants. While the plants must actually be genetically transformed for this to work (they can't do that themselves, at least not voluntarily), once they are transformed with the right DNA sequences, they can provide their own defence against invading parasites. This also has big implications for human/animal immunity, too.

dsRNAi refers to the main molecule used in this method: double-stranded RNA (interference). Essentially, when a plant or animal cell detects an unprotected double-stranded RNA segment, it cuts the segment into smaller pieces (short interfering RNAs, or siRNAs) and then separates the double-stranded RNA back into single strands. One strand is then used to remove other RNA copies from the cell (stopping existing transcripts from being used) and the other is used to find the gene in the DNA and silence it (generally through methylation of the accompanying histones).
It is possible to artificially stimulate this process by inserting a reversed copy of a gene directly behind the original gene. The RNA produced will thus have two complementary halves, and will fold into the double-stranded RNA complex, allowing the experimenter to silence the copied gene.

What's more, if a gene not used by the host organism but crucial to a parasite's survival is inserted along with its reverse, any parasite that feeds on the host's cells will take up the double-stranded RNA and its crucial gene will be silenced. The host thus produces its own vaccine, and the dsRNAi can be transferred between cells as well as produced in any cells that are transformed with the silenced gene. So, as long as the cells can be transformed, any eukaryotic cell can be treated with this vaccine. This means that retroviruses are a key delivery method for animals, while plants can be transformed with Agrobacterium.

Dr Botella also said that we should see an 'explosion' in medical treatments based on this idea. The reason for the stymied research: two big companies were fighting over the patents, and a US big (insert actual word) attorney has ruled in favour of one company.

There is something potentially quite scary about using this new RNAi system (we need to make a snazzy name for it...): since humans rely upon the correct operation of certain genes to remain living, it would be very easy to use the viral vector method to silence our genes, and we have very little natural defence against it. Is there a method to counter this threat? One benefit of the great mutability of viruses is that they would probably differentiate into non-deadly strains as they mutate. Would we be able to develop resistance to this (and could we demonstrate it in a lab)? Of course, you would need to include the virus replication sequence as well as the gene-silencing RNA for it to be really deadly, and this greatly limits the size of the gene you can knock out. But perhaps you could use another (retro)virus carrying just packaging information to infect the host, and the host would manufacture the dsRNAi-weapon virus by itself! Before dying, that is, which may be another way we stay alive (cells die before transmitting the virus).

Food (and agrobacteria) for thought,
Josh Harbort