Healthy Skepticism

Healthy Skepticism International News

September 2009

The Emperor’s New Analyses

Vance W. Berger, PhD (1)*, Anh-Chi Do (1),(2)

(1) Biometry Research Group, National Cancer Institute
Executive Plaza North, Suite 3131
6130 Executive Boulevard, MSC 7354
Bethesda, MD 20892-7354

(2) The College of New Jersey
Ewing, New Jersey 08628

*Corresponding author. Tel.: 301-435-5303; fax: 301-402-0816.

Those of us who work on improving science, on publishing better methods, on perfecting the way clinical trials are run and reported, often find that although we wait patiently for the world to beat a path to our door, it never does. No matter how reputable the journals we publish in, or how many invitations we receive to lecture, our methods and ideas sit untouched. Calls for improvement and proposals of better methods, some of them clearly superior to the ones in use, are disregarded. It is not that anyone disagrees with us; they simply ignore us. There seems to be no relationship between the superiority of a method and the frequency with which it is used. The poor "industry standards" are chosen time and time again; it is precedent against progress.

What, for example, justifies the use of permuted block randomization when it is so easily subverted by prediction of future allocations, thereby rendering allocation concealment impossible and increasing the likelihood of selection bias? Why does such a procedure remain in use when better methods, most notably the maximal procedure,* have been proposed [1]? What justifies the use of a chi-square test when the Fisher exact test** it is trying to approximate is readily available [2],[3]? How can an approximation be better than the very quantity it approximates? Clearly, it is not science that justifies the use of such methods.
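The predictability problem with permuted blocks can be demonstrated in a few lines of code. The sketch below (a hypothetical illustration, not taken from the cited paper) simulates 1:1 allocation in blocks of four and counts the positions at which an unblinded investigator who knows the block size can predict the next assignment with certainty: the final allocation of every block is always forced, and often the one before it as well.

```python
import random

def permuted_blocks(n_blocks, block_size=4, seed=2009):
    """Generate a 1:1 allocation sequence using permuted blocks."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        seq.extend(block)
    return seq

def count_forced(seq, block_size=4):
    """Count allocations an observer can predict with certainty,
    knowing the block size and all prior assignments."""
    forced = 0
    half = block_size // 2
    for start in range(0, len(seq), block_size):
        a = b = 0
        for alloc in seq[start:start + block_size]:
            if a == half or b == half:  # one arm exhausted: the rest is forced
                forced += 1
            if alloc == "A":
                a += 1
            else:
                b += 1
    return forced

seq = permuted_blocks(1000)
print(count_forced(seq) / len(seq))  # roughly one third of all allocations
```

With blocks of four, between a quarter and a half of all allocations are forced (on average one third), which is exactly the opening a selection-biased investigator needs.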

But the more important question, perhaps, is why these practices are not put to an end. The answer is that medical journals cannot be bothered to do so; nor, for that matter, can regulatory agencies. And so even reviewers of trial quality rate trials with fatal flaws as high quality. They can shirk their responsibility by relying on such "standards" as the impotent Jadad score, which never met a trial it didn't like [4]. Thus peer review, which is supposed to weed out flawed research, becomes flawed itself when it resorts to uncritical acceptance of flawed trials on the grounds that they are no more flawed than other trials. This complete lack of critical evaluation leaves vulnerable those who would trust the system of science and evidence-based medicine to protect them. Such abuses of science distort the research base to the point that an uncritical reading of the medical literature would give one a completely wrong impression.

But considering the larger context of the politicization of medicine today, the research abuse we see every day is but a piece of a terrible puzzle. Cheating goes on in every facet of the game, and in this game, even the judges and referees can’t be trusted.

This raises the question: what can be done? We can start with letter-writing campaigns to shame those who use the poor "industry standards," and we can work to grow our numbers; as they grow, our voices will become harder and harder to ignore. No ocean can be moved with a teaspoon, but perhaps if enough teaspoons are put to work together, the tide can be turned.

*Maximal procedure: minimizes predictability by using a less restrictive randomization procedure while retaining balance; "constructed by placing a uniform distribution on the maximal set of allocation sequences that satisfy [terminal balance] and [adherence to the maximal tolerated imbalance]" [1, p. 116]
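Under that definition, the maximal procedure can be sketched directly: enumerate every allocation sequence that ends in perfect balance and whose running imbalance never exceeds the maximal tolerated imbalance (MTI), then draw one uniformly at random. The short program below is an illustration for a small trial, not the authors' implementation; for realistic trial sizes one would sample without full enumeration.

```python
import random

def maximal_sequences(n, mti):
    """All sequences of n allocations with equal As and Bs whose
    running imbalance never exceeds the maximal tolerated imbalance."""
    results = []
    def build(seq, a, b):
        if len(seq) == n:
            results.append("".join(seq))
            return
        if a < n // 2 and abs((a + 1) - b) <= mti:
            build(seq + ["A"], a + 1, b)
        if b < n // 2 and abs(a - (b + 1)) <= mti:
            build(seq + ["B"], a, b + 1)
    build([], 0, 0)
    return results

candidates = maximal_sequences(8, 2)    # 54 admissible sequences
allocation = random.choice(candidates)  # uniform over the maximal set
print(len(candidates), allocation)
```

For eight subjects with an MTI of 2 there are 54 admissible sequences, against only 36 under permuted blocks of size four; the larger and less structured reference set is what reduces predictability.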
**Fisher exact test: a statistical significance test used to analyze contingency tables when sample sizes are small. It allows an exact calculation of the significance of the deviation from a null hypothesis.
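The gap between the exact test and its chi-square approximation is easy to see with a small table. The code below is a from-scratch illustration using only the standard library (a real analysis would use a statistics package), applied to a hypothetical 2×2 table of 1/10 versus 11/14 responders.

```python
from math import comb, erfc, sqrt

def fisher_exact(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    the total probability, under the hypergeometric distribution, of all
    tables with the same margins that are no more likely than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)
    def prob(x):
        return comb(r1, x) * comb(r2, c1 - x) / denom
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

def chi_square_p(a, b, c, d):
    """Pearson chi-square p-value (1 df, no continuity correction)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return erfc(sqrt(stat / 2))  # survival function of chi-square with 1 df

print(fisher_exact(1, 9, 11, 3))  # exact two-sided p-value
print(chi_square_p(1, 9, 11, 3))  # the approximation
```

In this example the chi-square approximation (p ≈ 0.0009) yields a smaller p-value than the exact test (p ≈ 0.0028), overstating the strength of the evidence; with large samples the two converge, but with small samples there is no reason to prefer the approximation.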
