• 07 SEP 09

    #1118: Further comments on epidemiology and precaution

    From Frank Nadaud in France:

    I would like to react to the recent discussion about epidemiology and precaution.

    Let me take the hardline statistics point of view. Consider the Danish study.
    In this case, as said in another post, the corporate users were removed from
    the sample. In statistics, this kind of choice has a jargon name: it is
    called selection bias (cf. James Heckman, reference below). A selection bias
    causes an unjustified truncation of the population under study; by excluding
    part of it, the bias may lead to severe distortions in the results, or
    worse, to totally flawed inferences and findings.

    To give you an idea of where a selection bias may lead, I will give a
    famous example. On January 28th, 1986, the Challenger space shuttle was
    launched and exploded after 73 seconds of flight.

    What is the point about selection bias? The point is that after the accident,
    the inquiry commission summoned the best academic statisticians in the US to
    examine every statistical computation around the flight. They showed that the
    booster seal that ruptured and doomed the flight had been quality-controlled
    in a totally flawed way: the quality control procedure examined, after each
    shuttle flight, only the damaged seals and discarded the undamaged ones
    (considered as non-informative). The NASA engineers were basing their risk
    assessment on this flawed figure of less than 1%. The seal manufacturer's
    engineers, however, had serious doubts about the seals' resistance,
    especially after a night when the temperature went down to -17°C… But they
    could not express this clearly. Well, the academic experts did the
    computations properly, and they found about a 13% risk of explosion in
    flight: clearly an intolerable risk in space, and especially for a manned
    flight!

    So all of this occurred because of an apparently “minor” selection bias:
    computing on the damaged seals only, instead of on the whole population,
    undamaged seals included. Had they done so, as they should have, they would
    have seen the relationship between damage and temperature.

    This is where selection bias may lead, period.
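The trap can be made concrete with a small sketch. The numbers below are invented purely for illustration (they are not the real O-ring data): some damage happens at warm launches too, so looking at the damaged flights alone shows no temperature trend, while the whole sample, undamaged flights included, does.

```python
# Illustrative sketch of the Challenger-style selection bias described
# above. All numbers are made up for illustration; they are NOT the
# real O-ring data.

# (launch temperature in degrees C, number of damaged seals) per flight
flights = [
    (27, 2), (25, 0), (24, 0), (23, 0), (22, 0), (21, 1),
    (20, 0), (19, 0), (18, 0), (14, 2), (12, 1), (11, 2),
]

def mean(xs):
    return sum(xs) / len(xs)

def correlation(pairs):
    """Pearson correlation between temperature and damage count."""
    xs = [t for t, _ in pairs]
    ys = [d for _, d in pairs]
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Analysis on ALL flights: a clear negative relationship appears
# (colder launches go with more seal damage).
r_all = correlation(flights)

# Biased analysis: discard the "non-informative" undamaged flights
# and compute on the damaged ones only.
damaged_only = [(t, d) for t, d in flights if d > 0]
r_biased = correlation(damaged_only)

print(f"correlation, whole sample: {r_all:.2f}")
print(f"correlation, damaged only: {r_biased:.2f}")
```

On the whole sample the correlation is clearly negative; on the truncated sample it is close to zero, and the temperature signal vanishes.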

    Now, closer to our subject, I remember the late-1970s studies by the Polish
    Army and the US Navy. Two opposite conclusions: very bad effects for the
    first, no effect for the second. The difference was that the Polish Army
    design was scientifically sound, while the US Navy design was seriously
    flawed, because the exposure levels were wrong: it mixed Pentagon officers
    with aircraft-carrier radar technicians who sometimes worked on antennas
    WHILE THE RADAR WAS ON!!!
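This blending flaw can also be sketched with made-up numbers (the rates and group sizes below are hypothetical, not taken from either study): pooling a truly exposed group with a barely exposed one drags the measured risk ratio toward 1, i.e. toward "no effect".

```python
# Illustrative sketch (hypothetical numbers) of the exposure-blending
# flaw: pooling a genuinely exposed group with a barely exposed one
# dilutes the measured effect.

# (cases, group size) -- invented cohorts
unexposed         = (10, 1000)   # baseline disease rate: 1%
radar_technicians = (30, 1000)   # truly exposed: 3% rate
pentagon_officers = (11, 1000)   # barely exposed: near baseline

def rate(group):
    cases, n = group
    return cases / n

baseline = rate(unexposed)

# Honest comparison: the truly exposed group shows a 3x risk.
rr_honest = rate(radar_technicians) / baseline

# Blended "exposed" group: the 3x signal is heavily diluted.
cases = radar_technicians[0] + pentagon_officers[0]
n = radar_technicians[1] + pentagon_officers[1]
rr_blended = (cases / n) / baseline

print(f"risk ratio, technicians only: {rr_honest:.2f}")   # 3.00
print(f"risk ratio, blended group:    {rr_blended:.2f}")  # 2.05
```

With enough low-exposure subjects poured into the "exposed" group, the ratio can be pushed as close to 1 as desired.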

    So I will let you draw your own conclusions on the subject… And always look
    for a possible selection bias lurking somewhere, with Challenger as the most
    telling example of what may happen: a disaster.

    Finally, I also remember that the late Neil Cherry wrote good papers on the
    subject of bias in studies of EMF exposure. He showed with examples how “no
    effect” results are manufactured. To sum up, there are three ways to do it:

    1) introduce a selection bias (Danish style)
    2) blend exposure levels (US Navy style)
    3) choose tests with low statistical power (a more technical point), i.e.
    tests that favor the hypothesis you want to support.

    By mixing the three you can always achieve the “no effect” conclusion.
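Point 3 can be sketched in a few lines as well, under hypothetical numbers (a true effect of half a standard deviation, a two-sample z-test at the 5% level): the same real effect that an adequately sized study detects almost every time is missed most of the time by an undersized one, which then reports "no effect".

```python
# Illustrative sketch of point 3: an underpowered test routinely
# misses a real effect. All numbers here are hypothetical.
import random

random.seed(1)

def power(n, effect=0.5, trials=1000):
    """Fraction of simulated studies that detect a true mean shift of
    `effect` (in standard deviations) with n subjects per group,
    using a two-sided two-sample z-test at the 5% level (unit
    variance assumed known)."""
    detections = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        exposed = [random.gauss(effect, 1.0) for _ in range(n)]
        diff = sum(exposed) / n - sum(control) / n
        se = (2.0 / n) ** 0.5        # standard error of the difference
        if abs(diff / se) > 1.96:    # reject at the 5% level
            detections += 1
    return detections / trials

power_small = power(n=10)    # underpowered design
power_large = power(n=100)   # adequately powered design
print(f"power with n=10:  {power_small:.2f}")
print(f"power with n=100: {power_large:.2f}")
```

The effect is the same in both designs; only the sample size changes, yet the small study reports "no effect" most of the time.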

    And for the skeptics about non-thermal effects, just ask them whether, in
    their view, vision is a thermal or a non-thermal effect! Enjoy their answer,
    as it could be most interesting (and funny)!

    Sincerely

    Franck.

    Reference:
    “Sample Selection Bias as a Specification Error”
    James J. Heckman
    Econometrica, Vol. 47, No. 1 (Jan., 1979), pp. 153-161
