Group-randomized cancer prevention trials sometimes rely on inappropriate statistical analysis
TUESDAY, March 25 (HealthDay News) -- The effectiveness of public campaigns or efforts to prevent cancer can often be overstated in certain kinds of cancer trials because of inappropriate statistical analysis, a new report claims.
The review, published in the March 25 online issue of the Journal of the National Cancer Institute, suggests that some of the 75 group-randomized cancer trials it studied may have reported these interventions were effective when in fact they might not have been.
"We cannot say any specific studies are wrong. We can say that the analysis used in many of the papers suggests that some of them probably were overstating the significance of their findings," review author David Murray, chairman of epidemiology in the College of Public Health at Ohio State University, said in a prepared statement.
In the review, more than a third of the 75 trials contained statistical analyses that the reviewers considered inappropriate to assess the intervention being studied. Most of those studies reported statistically significant intervention effects that, because of analysis flaws, could be misleading to scientists and policymakers, the review authors stated.
"If researchers use the wrong methods, and claim an approach was effective, other people will start using that approach. And if it really wasn't effective, then they're wasting time, money and resources and going down a path that they shouldn't be going down," Murray said.
In group-randomized trials, researchers randomly assign identifiable groups to specific conditions and observe outcomes for members of those groups to assess the effects of an intervention under study.
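The statistical pitfall the review describes arises because members of the same group tend to resemble one another, so analyzing individuals as if they were independent understates the true variance. A minimal simulation sketch (not from the review; all parameter values here are illustrative assumptions) shows how a naive individual-level test inflates the false-positive rate under a null intervention, while analyzing one summary per group does not:

```python
import random
import statistics

def simulate_trial(n_groups=10, n_per_group=50, group_sd=0.3, rng=None):
    """Simulate one null trial (no true intervention effect).
    Each arm gets n_groups groups; members of a group share a random
    group-level effect, which induces intracluster correlation."""
    arms = []
    for _ in range(2):  # two study conditions
        groups = []
        for _ in range(n_groups):
            group_effect = rng.gauss(0, group_sd)  # shared within group
            groups.append([group_effect + rng.gauss(0, 1.0)
                           for _ in range(n_per_group)])
        arms.append(groups)
    return arms

def t_stat(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def false_positive_rates(n_sims=500, seed=1):
    """Fraction of null trials declared 'significant' by each analysis."""
    rng = random.Random(seed)
    naive_hits = group_hits = 0
    for _ in range(n_sims):
        arm1, arm2 = simulate_trial(rng=rng)
        # Naive: pool all individuals, ignoring group membership.
        ind1 = [x for g in arm1 for x in g]
        ind2 = [x for g in arm2 for x in g]
        if abs(t_stat(ind1, ind2)) > 1.96:   # approx. critical value, df = 998
            naive_hits += 1
        # Group-level: analyze one summary (the mean) per group.
        m1 = [statistics.mean(g) for g in arm1]
        m2 = [statistics.mean(g) for g in arm2]
        if abs(t_stat(m1, m2)) > 2.101:      # critical value, df = 18
            group_hits += 1
    return naive_hits / n_sims, group_hits / n_sims

naive_rate, group_rate = false_positive_rates()
print(f"naive individual-level analysis: {naive_rate:.0%} false positives")
print(f"group-level analysis:           {group_rate:.0%} false positives")
```

With even a modest shared group effect, the individual-level test rejects the true null far more often than the nominal 5 percent, which is the kind of overstated significance the review warns about; the group-level analysis stays near 5 percent.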
For example, a group-randomized trial might study the use of mass media to promote cancer screenings and then assess how many screenings result among members of those groups.