Saturday, 6 August 2011

What's in a Model?

Why should we care about models?

Well, I put it to you that models are at the heart of all scientific inquiry (apart from so-called stamp collecting), and that all models attempt to do the same thing: predict the future. That is, predict the outcome of some experiment. A model's predictive power is perhaps the most obvious criterion against which it can be judged: not whether it is elegant or simple or "from first principles", but how well it predicts the future.

However, this cannot be the whole story. Quantum Electrodynamics (QED) agrees with all experimental observations we've cared to make, the most accurate of which has tested it to twelve decimal places (equivalent to measuring the distance from Brisbane to Perth to within the thickness of a human hair). Nevertheless, we don't use it to calculate atmospheric light scattering. Why not? It's the best show in town by many orders of magnitude!

We don't use QED to calculate light scattering for the same reason we don't use General Relativity to calculate the trajectory of the Moon, or quantum mechanics to determine the structure of a protein. It's impractical. Thus I propose that an equally important feature of any model is its usability. (Which gives me the perfect opportunity to sound very learned by quoting Voltaire: "The perfect is the enemy of the good.")

From the perspective I'm proposing, the only tools of science are models (and of course stamp collections, which are also very important), and the criteria by which these models should be judged are accuracy and practicality.

To conclude, I should probably confess that viewing science through this prism is a sort of defensive strategy I've adopted in order to avoid thinking about awkward questions. For example, how is it that a particle can exhibit field-like properties? From a model-based perspective such questions become meaningless; if all you're judging your model on is how well it works, then whether it makes sense, or whether it represents true reality (whatever that means), is irrelevant. If I can predict the future to 12 significant figures, who cares!

5 comments:

  1. I think this is a good point. In my opinion, the development of a physical theory should proceed in the following steps:
    1. Propose a mathematical model to explain ONE experiment (or a small group of them). Show that you can quantitatively reproduce the results to within some margin.
    2. Worry about making the model consistent with other models.
    3. Provide an interpretation for the mathematical objects in the model. This should be informed by consistency with other models, where such can be established.

    This is contrary to a great deal of modern work. As I am most familiar with quantum chemistry, I know it best in this field. In this case, the general trend seems to be to always insist on solving the dynamics for the fundamental particles indicated in the standard model (which, in chemical science, are almost always electrons). The problem with this is that you can spend a LOT of time and expense calculating with more and more degrees of freedom (as many as your computer allows) and still not get a better result! The literature is replete with this sort of thing. I have had people present talks to me where they advocate quantum computer research as a way to determine the medical properties of cholesterol! Now I would put it to you that if you insist on describing the medical properties of cholesterol from the standard model of particle physics, you are missing a very big point.
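
    To put a rough number on that blow-up, here is a back-of-envelope sketch (an illustration, not anyone's actual calculation): counting the Slater determinants a brute-force full-configuration-interaction treatment of cholesterol (C27H46O, 216 electrons) would need. The basis-set sizes are hypothetical stand-ins for "a modest basis for a molecule this big".

    ```python
    # Back-of-envelope: size of a brute-force (full CI) electronic wavefunction
    # expansion, counting determinants as C(M, N_alpha) * C(M, N_beta)
    # for M spatial orbitals. Cholesterol (C27H46O) has 216 electrons;
    # the basis sizes below are illustrative guesses.
    from math import comb

    n_electrons = 216                 # 27*6 + 46*1 + 8 for C27H46O
    n_alpha = n_beta = n_electrons // 2

    for n_orbitals in (250, 500, 1000):   # hypothetical basis-set sizes
        n_dets = comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)
        print(f"{n_orbitals:5d} orbitals -> ~10^{len(str(n_dets)) - 1} determinants")
    ```

    Even the smallest of these comes out at well over 10^100 determinants, which is rather the point: adding degrees of freedom at the fundamental level rapidly outruns any computer you will ever own.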

    ReplyDelete
  2. (This was written before Seth's post...)

    Hi all,

    I agree with you for the most part, Martin, but I think I'd add another two criteria. A scientific theory (or model, depending on how you use the word) is `good' if it has explanatory power, practicality, simplicity and wide applicability. The first two are essentially what you have suggested already, although I would argue that `practicality' is more of a context-sensitive criterion, i.e. what is practical today with our computing ability would have been unfeasible a decade ago.

    What do I mean by simplicity? I mean that a theory (model) should be built on as few assumptions as possible. Without this additional specification I could construct an arbitrarily good model by just appending `get-out clauses' onto an existing one. I suppose this is a case of Occam's razor; build a model using only as many elements as are required to give an `accurate' answer. Oftentimes this means making approximations, but this comes into the practicality idea discussed above.

    If I had to choose a single criterion for what constitutes a `good' model it would have to be wide applicability. If a theory is universal it is most likely correct, or correct to a good approximation. Wide applicability also implies context insensitivity (to a degree), and hence the number of fundamental assumptions must be fairly small and well-defined. A great example of this is thermodynamics, which describes a huge number of systems without recourse to detailed assumptions about any of them.

    This leads nicely into the discussion of whether a model is `realistic' or not, i.e. is quantum mechanics just a nice model, or is reality actually like that at those scales? I would note that some scientific theories do not really involve any models, e.g. thermodynamics (as above). In a sense, neither does quantum mechanics, because we can derive all of the mechanics (Schrödinger equation, Heisenberg equation, Feynman integrals, etc.) from some very simple mathematical assumptions; the only `model' comes in when we begin considering a particular system, e.g. modelling a diatomic molecule as a harmonic oscillator.
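
    For what it's worth, the textbook version of that last example (a generic sketch, not tied to any particular molecule): Taylor-expand the bond potential about its minimum, keep only the quadratic term, and the vibrational levels follow.

    ```latex
    % Harmonic approximation to a diatomic bond potential V(r),
    % expanded about the equilibrium separation r_e (mu = reduced mass):
    \[
    V(r) \approx V(r_e) + \tfrac{1}{2}\, k\, (r - r_e)^2,
    \qquad k = \left.\frac{d^2 V}{dr^2}\right|_{r_e},
    \qquad \omega = \sqrt{k/\mu},
    \]
    \[
    E_n = \hbar \omega \left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots
    \]
    ```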

    As a fairly straight-and-narrow physics student I think I have changed how I think about theories like quantum mechanics. You hear a lot of discussion about quantum mechanics forbidding `objective reality', so the object `might not really be there until you look for it'. After studying some quantum mechanics I have moved to the viewpoint that there is an objective reality, but it involves states which cannot (generally) be observed without disturbing them: this means reality exists in a well-defined state even in the absence of an observer (and undergoes deterministic time evolution within that state), but the state is altered by the actions of an observer, changing the objective reality.

    I still haven't managed to reconcile that with the everyday world though! Give me a few decades...

    Thoughts?
    James

    ReplyDelete
  3. Wow, sorry for the formatting (and the odd typo, like mixing up hear and here).

    ReplyDelete
  4. Re: James's post

    I agree that there is a place for considering a model's simplicity and generality. However, the way I like to view the question of "what makes a good model" is as follows: suppose I'm faced with a particular problem, for example, what is the trajectory of a GPS satellite? I don't really care about whether the model I choose is simple and/or general, I just want to be able to calculate the position of my receiver to within ±5 m. It turns out that in this instance, to be accurate enough, one must take relativistic effects into account, but I would argue that whether General Relativity is more "general" or "simple" than Newtonian gravity is incidental. I admit that this is probably a matter of perspective, and that a string theorist may be more focused on generality and simplicity (whatever that means in their context); although I suspect, being a string theorist, they will be otherwise occupied: "...there's got to be an experimental prediction in here somewhere..."
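
    To make the GPS example concrete, here is a rough sketch (circular orbit, spherical Earth, textbook constants) of why the relativistic clock corrections cannot be ignored at the ±5 m level:

    ```python
    # Rough estimate of relativistic clock drift for a GPS satellite.
    # Circular orbit and spherical Earth assumed; constants are textbook values.
    import math

    G_M   = 3.986004e14      # Earth's GM, m^3/s^2
    c     = 2.99792458e8     # speed of light, m/s
    R_E   = 6.371e6          # Earth radius, m
    r_sat = 2.6560e7         # GPS orbital radius, m (~20,200 km altitude)
    day   = 86400.0          # seconds

    v  = math.sqrt(G_M / r_sat)                  # orbital speed, ~3.9 km/s
    sr = -(v**2) / (2 * c**2)                    # special relativity: satellite clock runs slow
    gr = G_M / c**2 * (1 / R_E - 1 / r_sat)      # general relativity: satellite clock runs fast
    drift_per_day = (sr + gr) * day              # net fractional offset integrated over a day

    print(f"net clock drift : {drift_per_day * 1e6:+.1f} microseconds/day")
    print(f"range error     : {drift_per_day * c / 1000:.1f} km/day if uncorrected")
    # Comes out at roughly +38 microseconds/day, i.e. ~11 km/day of accumulated
    # range error, hopelessly far outside a ±5 m requirement.
    ```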

    In 1877 Philipp von Jolly supposedly remarked, "...the edifice of theoretical physics is fairly complete. There will be a mote to wipe out in a corner here or there, but something fundamentally new you won't find." Ironically, his onetime student Max Planck played an instrumental role in proving him so very wrong indeed. I think this quote illustrates nicely the hubris in being too definitive about the "correctness" of the status quo. It seems that last century's fundamental laws are often the next century's good approximations.

    As to where the lines should be drawn around the terms hypothesis, model, theory, law, etc., I'm not really sure. I suspect if you get ten scientists in a room you'll get fifteen slightly different answers. To me, "model" is the least loaded and most ambiguous term, so I use it to encompass the rest.

    ReplyDelete
    Hi Martin, I completely agree with your point of view. For me, models are meant to be debunked to make room for better models. As an example, the structure of the molecule has been described with everything from electron-cloud models to shell models, and so on. Models are tools of convenience, and a way to explain your experimental findings to fellow scientists.

    ReplyDelete