Mobile-phone health research

Terms

RR or Risk Ratio [aka Relative Risk]: If the probability of developing lung cancer among smokers in a lifetime was 20% and among non-smokers 1%, then the relative risk of cancer from smoking would be 20 (a ratio of 20:1). If there is no increase in risk, the RR would be 1. Some lobbyists will try to tell you that findings with an RR less than 3 can be ignored. But there are other factors to be considered. A robust finding (ie one supported by many studies) with an RR = 2 will still mean that users double their chance of developing the condition, and in large populations this might place millions of people at unnecessary risk.
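As a minimal sketch of the arithmetic (a Python example with hypothetical counts, chosen to mirror the smoking example above):

    def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
        """Relative risk: incidence among the exposed divided by incidence among the unexposed."""
        risk_exposed = exposed_cases / exposed_total
        risk_unexposed = unexposed_cases / unexposed_total
        return risk_exposed / risk_unexposed

    # Hypothetical counts: 20% of smokers and 1% of non-smokers develop the condition.
    print(relative_risk(200, 1000, 10, 1000))  # 20.0 -- smokers carry 20 times the risk
    print(relative_risk(20, 1000, 10, 1000))   # 2.0  -- an RR of 2 "doubles the chance"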

Confidence Level [or Interval]: Statisticians generally apply two different confidence claims to their study findings to express the likelihood of error. If the possibility of the result arising by chance is one-in-twenty similar studies [as determined statistically], then the finding is said to be significant at the 5% level [equivalently, it has a 95% confidence level, and the corresponding range of plausible values is the 95% confidence interval]. At this level it is considered to be established, but to require further investigation and confirmation. If a higher standard of 'proof' is achieved and the possibility of the result being a chance finding is only one-in-a-hundred, then it is said to be at the 1% level [a 99% confidence level], and is normally claimed as 'proved'.
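To show what a 95% confidence interval looks like in practice, here is a minimal sketch that computes one for a relative risk using the standard log-transform method (the counts are invented; z = 1.96 corresponds to the 95% level, z = 2.576 to 99%):

    import math

    def rr_confidence_interval(a, n1, c, n2, z=1.96):
        """Confidence interval for a relative risk via the standard log method.
        a/n1 = cases/total among the exposed, c/n2 among the unexposed."""
        rr = (a / n1) / (c / n2)
        se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
        lower = math.exp(math.log(rr) - z * se_log_rr)
        upper = math.exp(math.log(rr) + z * se_log_rr)
        return rr, (lower, upper)

    # Invented study: RR = 2.0, 95% interval roughly 1.2 to 3.4. The interval
    # excludes 1.0 (no effect), so the finding is significant at the 5% level.
    print(rr_confidence_interval(40, 1000, 20, 1000))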

Level of significance: [as above] The medical research community generally holds that if the results of a study would be expected to arise by chance in only 5 studies out of 100, then it has a level of significance of 5%. At 1 in 100, the significance level is 1% [ie it is MORE significant]. The more significant results can arise from very dramatic differences between the test and control groups with only relatively few randomly selected subjects in the trial, or from very subtle-but-consistent differences in large numbers of subjects.
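To make the 5% and 1% thresholds concrete, here is a minimal, self-contained significance test (a two-proportion z-test with invented counts; the p-value is the probability of a result this extreme arising by chance):

    import math

    def two_proportion_p_value(a, n1, c, n2):
        """Two-sided p-value for the difference between proportions a/n1 and c/n2,
        using the standard normal approximation."""
        pooled = (a + c) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))
        z = (a/n1 - c/n2) / se
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    p = two_proportion_p_value(40, 1000, 20, 1000)
    print(f"p = {p:.4f}")                  # about 0.009 for these invented counts
    print("significant at 5%:", p < 0.05)  # True
    print("significant at 1%:", p < 0.01)  # True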

Odds ratio: the ratio of the odds of an event occurring in one group to the odds of it occurring in another group (men to women, for example).
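For example (a hypothetical calculation from a 2x2 table):

    def odds_ratio(a, b, c, d):
        """Odds ratio from a 2x2 table:
        a = group-one members with the condition, b = without;
        c = group-two members with the condition, d = without."""
        return (a / b) / (c / d)

    # Invented counts: odds of 200:800 in one group versus 10:990 in the other.
    print(odds_ratio(200, 800, 10, 990))  # 24.75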

Meta-analysis: A statistical technique, used mainly by regulators, to combine a number of different (but closely related) studies to test the robustness of their findings. See Wikipedia, and the meta-analysis discussion later in this section.

Cohort: This is just a group or panel of subjects selected on the basis of shared characteristics.

Retrospective: A study that looks back on past experiences using existing statistical data.

Prospective: A study that recruits unaffected people with the aim of following them for a number of years to see what differences in their life-style or environment result in health problems. It is a long-term, or 'longitudinal', study.

Epidemiology

Layman's guide to terms, problems and research types.

Epidemiology: the statistical study of populations in order to identify possible causal factors in disease conditions. While epidemiology alone can't prove in absolute terms that X caused Y, if the statistical strength of the relationship is reasonably high, and there are other studies or other reasons which support the contention, then it is reasonable to assume causality as a precautionary measure.

However, if the statistical links are weak, then the best we can say is that "X may be a cause of Y". In these cases other factors enter into contention: if the potential health risks are serious, or the potential for harm is widespread (or both), then it is still reasonable to advocate that the "precautionary principle" should apply. We should limit public exposure until the science is more robust. However, "limit" does not necessarily mean "ban", and the positive benefits to the community of retaining the product must also be considered in the balance.

Epidemiological studies can be of a number of different types:

  • Retrospective whole population studies. These look at the past records of public health to see if there are any statistical links between exposure to X and adverse health-condition Y. If the exposure to X is geographically localised (eg around a TV tower) we might expect a "cluster" - a higher-than-average incidence of condition Y in that locality. However, clusters also arise at random in such studies, so it is common in large population studies of this kind to find false indicators of a connection (see the simulation after this list).

  • Prospective population studies. These identify groups of normal individuals (usually in a town, suburb or region) -- or sometimes a 'cohort' (people who have related exposures - say, radio technicians) -- who are asked to join a study which might take many years. All relevant details of life-style, working conditions, body-type, family history and exposure risks are recorded in medical examinations and interviews, and then the same people are checked every few years to see if a pattern of disease occurs. Such studies have been highly significant in confirming links, for instance, between smoking and heart-disease (in addition to various cancers). The problem with these studies is that they may take ten to twenty years to produce any results, and often a large number of those enrolled in the study move away, or die from other causes. These studies are costly.

  • Case-controlled studies. These are usually done through a hospital, using patients who already have the condition, and look back to see if there are characteristics of these patients that differ from the normal population. The researcher will identify, say, 100 patients with a particular condition and enroll them in the study. Through interviews they will take full medical, family, and life-style/work details. Then they will attempt to match the primary patients with other patients (known as controls) who are in hospital for totally unrelated conditions -- they are usually accident victims. These controls are also interviewed in exactly the same way, using the same forms and same interviewers, and the second group is matched to the first, on a one-for-one basis (or sometimes two-for-one) in terms of age, sex, marital status, life-style etc. So for every 40 year old male, clerical worker, non-smoker, with condition X, there will be one or two 40 year old male, non-smoking clerks who don't have the problem but are in the hospital system for (say) a broken leg. Once the two matching groups have been identified and interviewed, the researcher will statistically analyse all the relevant data to see if there is any factor that stands out statistically as being present in the diseased group, but not (or to a lesser degree) in the control group (see the matched-pair sketch after this list). Such studies are relatively easy to do, and often provide good indications of potential causes, worthy of further study.

  • Cohort studies. These select a group of people who share common characteristics or experiences - a trade group, exposure to a chemical, a common workplace. The term is wide enough to include those born at a certain time, or who went to a certain school, etc. The aim is to compare this group with the general population, or with another cohort recruited for other reasons.

  • Twin Studies. These looked at monozygotic (identical) twins (ideally those separated at birth), and compared them with non-identical twins of the same sex, to see if there were any obvious differences in disease incidence. The aim was to identify genetic and environmental factors in disease causation. Unfortunately, there was much rorting of the system, and these studies fell into disrepute.
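To illustrate the point about random clusters in the retrospective-studies item above, here is a minimal simulation with entirely invented numbers (it assumes NumPy is available): 500 localities share one uniform background rate, yet the worst of them routinely looks like a "cluster".

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented setup: 500 localities, each expecting 10 cases under the
    # same background rate -- no real cause operating anywhere.
    counts = rng.poisson(lam=10, size=500)

    print("average cases per locality:", counts.mean())  # close to 10
    print("worst locality:", counts.max())  # typically 20 or more -- double the average, purely by chance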
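And for the case-controlled item: in a one-for-one matched study, the usual statistic is the matched-pair odds ratio, in which only the discordant pairs carry information. A minimal sketch, with invented numbers:

    def matched_odds_ratio(case_exposed_only, control_exposed_only):
        """Matched-pair odds ratio for a 1:1 matched case-control study.
        Pairs where both (or neither) were exposed drop out; only discordant
        pairs -- case exposed but control not, or vice versa -- count."""
        return case_exposed_only / control_exposed_only

    # Invented study of 100 matched pairs: in 30 pairs only the case was
    # exposed to the suspect factor, in 10 pairs only the control was.
    print(matched_odds_ratio(30, 10))  # 3.0 -- exposure tripled the odds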

Q. What does it take to be an epidemiologist?

A. A pocket calculator.
Unfortunately, virtually anyone can call themselves an epidemiologist, and, if you send the American Epidemiological Association $100 or so, it will accept you as a member and consider publishing your research paper in its "peer-reviewed journal". You can then add "Member of the American Epidemiological Association" to your letterhead, and maybe get some cheap reprints of your study to use as a promotional flyer.

The fact is that some epidemiologists have the best university qualifications in advanced statistical techniques with an extensive background in biological research, while others have nothing more than a high-school diploma with basic maths, and a registered business name which suggests higher qualifications and a large skilled staff of dedicated professionals. They use names like "Health and Environmental Research Associates" or "Consolidated Research Services". It also helps to have a web site and memberships in health-related committees, groups or societies (the more entrepreneurial of them start their own associations).

Q. What is the difference between an epidemiologist and a biostatistician?

A. Not much. The epidemiologist is mainly concerned with population studies, while the biostatistician looks more at the statistical work involved in laboratory research. But there is a strong cross-over, and there are shonks on both sides.

Q. How reliable are epidemiological studies?

A. As reliable as the person who designs and performs them.
Unfortunately, this is one area of medical/health research which also attracts the shonks and the charlatans: also the well-meaning zealot-crusaders with a cheap calculator and dodgy data, and the science-for-sale experts who will design a study to produce whatever results the funding corporation or trade-association requires ... provided the money is there.

Good epidemiology is an invaluable tool, and over the centuries, this kind of statistical research has probably saved as many lives through identifying public health risks as all the other forms of laboratory research combined. The first great public-health epidemiological study was done by Dr John Snow in 1854, when he showed that a London cholera epidemic was caused by contaminated water. Before that, cholera killed hundreds of thousands of people every year around the world. Now it is largely confined to the slum areas of underdeveloped countries. An enormous number of health problems and epidemic diseases have been identified by epidemiological studies.

Bad epidemiology is also rife, and often very difficult to identify. The worst area is probably that of modern processed food standards and nutrition, where vested interests in the form of trade associations protect the business of everything from dairy-products to broccoli. They will all have their tame epidemiologists/nutritionists ready to run out a few quick studies whenever sales are threatened. These games are played by the seemingly good guys as well as the bad (eg newspaper headlines: "Broccoli Reduces Colon Cancer, say Experts"). The evidence changes by the week, depending on which sensational story the local newspaper has culled from the latest trade press release.

Q. What happens when one epidemiological study produces directly contradictory results to another? How do we balance the weight between studies pointing to opposite conclusions?

A. This is actually a PR fiction. Epidemiological studies rarely (if ever) contradict each other, for the simple reason that, while it is possible to show a statistical link between X and Y, it is impossible to show NO statistical link. At best, you can only say "the design and conduct of this study failed to reveal any ... etc.", which does not disprove the first finding, but simply fails to support it.

A 'no evidence' result could be due to a failure in the design, or inadequate numbers of subjects, or sloppiness in the way the study was conducted, or may arise just by chance.
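The "inadequate numbers" point can be made concrete with a simulation (invented parameters, and it assumes NumPy is available): even when the true risk really is doubled, small studies usually fail to reach significance, so a string of "no evidence" results is exactly what we should expect.

    import math
    import numpy as np

    rng = np.random.default_rng(42)

    def p_value(a, n1, c, n2):
        """Two-sided p-value for a difference in proportions (normal approximation)."""
        pooled = (a + c) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))
        z = (a/n1 - c/n2) / se if se > 0 else 0.0
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    # Invented scenario: the true risk is genuinely doubled (2% vs 1%),
    # but each study enrols only 500 exposed and 500 unexposed subjects.
    trials = 2000
    hits = sum(p_value(rng.binomial(500, 0.02), 500,
                       rng.binomial(500, 0.01), 500) < 0.05
               for _ in range(trials))

    print(f"studies reaching p < 0.05: {hits/trials:.0%}")  # only about a quarter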

Q. How important is the need for replication? When a replication fails to duplicate the result, which study should be considered the most important?

A. No research finding should ever be considered as established without either (a) a full replication which duplicates all of the original study conditions (but perhaps with a greater number of subjects), or (b) closely related and robust findings (perhaps in other biomedical disciplines) which have produced parallel results. Exact replication isn't always needed.

There will always be a political cry for studies to be replicated whenever they indicate harmful potential in a product, and there can be little doubt that this is important. However, often the only organisations able to fund such replications are governments or trade-associations with vested interests. Independent research organisations like universities (notoriously short of funds) need to make new discoveries for their survival, not confirm old ones. So their efforts tend to be directed toward leading-edge research, not replication.

This means that studies which reveal potential public health risks often go unreplicated -- and the industry involved will then cry loudly at every available opportunity that "It should not be treated seriously, because it hasn't been replicated." This has proved to be a highly effective way of stalling political action.

It is also important to recognise that governments often have vested interests in not getting involved. And, in an unfettered free-enterprise system, governments expect industries to look after themselves and their own problems -- including funding research into adverse effects of their products [which is extraordinarily naive]. As a result, studies that turn up potential dangers to the public are rarely replicated unless they are so obvious and important that political pressure is brought to bear on the regulatory authorities. Even then, the replication work is often done by supposedly 'independent' research groups, who are funded, directed, and therefore controlled by those with vested interests. The independence of a research scientist doesn't just extend to the duration of the one study: it can also be compromised by the long-term expectations of future funding, by the prospect of travel to international conferences, and by general political ideology. This applies also to research done in support of health- and environmental-activism.

Q. What value should we put on meta-analysis of epidemiological studies?

A. Meta-analysis is the name applied to a range of techniques used by epidemiologists to combine numerous other studies, and thereby create what appears to be a more robust finding. It is most often used by regulatory agencies in areas of health and environmental research, where they are attempting to arbitrate between various claims.

For instance, they might have on record a half-dozen small studies done at various universities around the world which show, through similar epidemiological research, that the consumption of X is related to health condition Y. However, perhaps only one or two of these studies has been done with enough subjects to be statistically significant at the 1% level [only likely to have arisen by chance once in every hundred similar studies]. The others may all be at the 5% or weaker levels of significance, and some may have found no linkage whatsoever.

Clearly, those studies at the 5% level [only a one-in-twenty likelihood of having arisen by chance] should carry some weight in a regulatory determination even though none of them is considered 'proven'. So special statistical techniques are used to provide "weighting" to each of the many studies according to the quality of its findings, taking into account the differences in methodology. These are then consolidated to produce a single-figure outcome: so in effect, meta-analysis attempts to treat a multiplicity of research studies as if they were all one.
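One common weighting scheme (a minimal sketch only; regulators use a range of techniques) is inverse-variance pooling: each study's log relative risk is weighted by the inverse of its variance, so large, precise studies count for more. The figures below are invented, but they show how several individually weak studies can pool into a significant combined finding:

    import math

    def inverse_variance_pool(studies):
        """Fixed-effect meta-analysis: pool log relative risks,
        weighting each study by 1 / (standard error squared)."""
        weights = [1 / se**2 for _, se in studies]
        pooled_log = sum(w * log_rr for (log_rr, _), w in zip(studies, weights)) / sum(weights)
        pooled_se = math.sqrt(1 / sum(weights))
        ci = (math.exp(pooled_log - 1.96 * pooled_se),
              math.exp(pooled_log + 1.96 * pooled_se))
        return math.exp(pooled_log), ci

    # Six invented small studies: (log relative risk, standard error).
    studies = [(math.log(1.8), 0.45), (math.log(2.2), 0.50), (math.log(1.4), 0.40),
               (math.log(2.6), 0.55), (math.log(1.1), 0.35), (math.log(1.9), 0.30)]
    print(inverse_variance_pool(studies))  # pooled RR about 1.7, 95% interval about 1.2 to 2.3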

The problem with meta-analysis is that it depends on many judgements and assumptions, and on the choice of statistical techniques applied. However it is obviously a useful tool when used by regulators (provided they aren't crusading). It is especially important because it brings into the regulatory picture the more trustworthy research done by independent research groups at universities. Because of the chronic lack of funding, these groups are more likely to produce marginal findings -- while industry research is generally well-funded and therefore more likely to produce 'robust' (but perhaps less-trustworthy) findings.

Without meta-analysis, many independent research findings would simply be ignored, and only the larger industry-funded projects would be available to the regulators in making their determination about rules.




Please e-mail comments, information and updates to DON MAISCH: