So, Where's Your Proof?...

Scott Stevenson

http://drscottstevenson.com/wordpress/2015/07/22/so-wheres-your-proof/

Science doesn’t “prove” anything – it simply provides data upon which a decision can be made. (I cringe when I see advertisements stating “clinically proven.”) It is still up to the researcher or reader to decide the value of said research. One might provide proof in a court of law, or derive a proof as a mathematician. I’ll defer to physicists on this one, but one might even say that physics has proven the existence of a certain particle along the space-time continuum, i.e., that such a thing has at least at one time existed.



But in the biological sciences, there is a convention of using inferential statistics, based on mathematical probability, to make decisions. At the root of many of these methods is a conventionally accepted probability threshold (or p-value), typically 0.05 or sometimes 0.01, meaning that the probability of seeing the difference or statistical effect in question by random chance alone (i.e., not due to the intervention under scrutiny in the experiment) is <5% or <1%, respectively. Whether an effect is deemed statistically significant is thus based on probability, not on some binary “yes” or “no” mathematical derivation.



My intent is not to make this a statistics lesson. (Kudos on making it through the above, by the way!) The important point is that determining whether or not substance X, treatment Y, training program Z or intervention ABC (the independent variable) improved performance, muscle size, strength, endurance or what have you (the dependent variable) in a given study may not tell the whole story. In fact, when the effect of an intervention is the same, simply having more subjects will reduce the p-value. A 2% difference may be meaningless in a practical sense and not statistically “significant” (p > 0.05) when 10 subjects were studied; include 100 subjects and the p-value will be lower, meaning that one could claim statistical significance simply because more subjects were studied.
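To see the sample-size effect concretely, here's a minimal sketch (my own illustration, not from the original post) using a simple two-sample z-test with a normal approximation, assuming a fixed mean difference of 2 units and a standard deviation of 10:

```python
import math

def two_sample_p(mean_diff, sd, n):
    """Two-sided p-value for a two-sample z-test (normal approximation),
    assuming equal group sizes n and a common standard deviation sd."""
    se = sd * math.sqrt(2.0 / n)        # standard error of the difference
    z = mean_diff / se
    return math.erfc(z / math.sqrt(2))  # two-sided tail probability

# The same absolute difference, tested at growing sample sizes:
for n in (10, 100, 400):
    p = two_sample_p(mean_diff=2.0, sd=10.0, n=n)
    print(f"n = {n:3d} per group -> p = {p:.3f}")
```

The effect size never changes, yet the p-value shrinks as n grows: with 10 subjects per group the difference is nowhere near "significant", while with a few hundred it comfortably clears p < 0.05.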



What does this mean? It means that, in a certain sense, using just p-values to gauge the value and significance (see what I did there) of scientific findings does not tell the whole story. If you’re not a scientist, simply looking at the data – the averages and maybe even peeking at the variation among subjects via the standard error bars – may tell you more than p-values. A 0.2-second improvement in the 100m dash may have practical significance for a high-level sprinter, yet fail to reach statistical significance when few subjects were tested. For a coach or trainer, it may be more convincing to find a p-value of 0.10 attached to a 15% strength gain in a small study than a p-value < 0.001 when group differences in strength are 5% in a large cohort-based analysis.



To get to this level of understanding, however, you may have to read the study, or at least look at the pictures (figures). The abstract probably won’t cut it. (Yes, I did it again…) The knowledge you derive from science seems to have something to do with the effort you put into understanding it. (And now, I’m tempted to see if a study has examined this…)

-S
 
Sure, statistics need interpreting properly, but the whole way the scientific community deals with data needs a drastic overhaul.

Take negative studies, for example: they're far outweighed by the number of positives simply because it's in the individual scientist's interest to come up with a positive result. Positives get published. Negatives don't. And if you don't get published, you don't make your name, you don't make your living.
Add to that the fact that just about every study is sponsored in some way, and that many scientists are publishing papers where they have a conflict of interest and a duty to give results in the sponsor's favour, and the whole system is on the edge of corruption.
Basically, too many times we end up with manipulated studies before we even get to the statistics.

Studies are getting harder and harder to believe nowadays. Give it a few years and they'll be treated no differently to conspiracy theories.
 
It's an historic quirk of the US university system that funding is tied to published papers, whatever their merit, rather than being awarded by professorial boards.

Unfortunately this is creeping into newer educational establishments here in the UK.

Whenever reading and evaluating anything - anything - the most important thing is to assess the calibre of the authors, and not just their positions.

Fortunately with the internet this is easier to do than ever before.
 
Fortunately with the internet this is easier to do than ever before.

Researching the researcher!
But the way things are progressing, there will come a time when you'll have to investigate whoever gave that researcher accreditation: research the researcher's researcher! And it will go on. You'll end up spending more time checking credentials than you will on the content of the paper! Sure, some background on a paper's author is always helpful, but we're getting to the situation where we don't just need to know about academic prowess; we need to take in a load of other factors which should be totally irrelevant to science: what's the author's financial status, do they have a vested interest, who's sponsoring their research, and so on. Without doing that you can quickly find yourself believing in fabrication. At best that's merely going to mislead you down the wrong path and waste your time; at worst it can cost lives (Andrew Wakefield and his anti-vaccination paper springs to mind).
It shouldn't be that way. You should be able to put some trust in published papers. As it is, the system is so bad that everything has to be taken with a pinch of salt. Even government health departments are at it, saying "This food is bad for you" one week and then doing a complete about-face a month or two later. That's why I'm saying that the whole system needs an overhaul.

That's also why, in the iron game, I'm one of the advocates of "If it works for you, then do it". There's so much out there that's dodgy it's about the only thing left which is faithful and which you can put your trust in.
 
That awkward moment when you discover that the study you've been basing your entire career around is inherently flawed.....

Based my lifting career on a study that said Creatine would make me swoll.

 
Researching the researcher! ... You'll end up spending more time checking credentials than on the content of the paper! ...

That's also why, in the iron game, I'm one of the advocates of "If it works for you, then do it". There's so much out there that's dodgy it's about the only thing left which is faithful and which you can put your trust in.
This forgets one of the most fundamental principles of the scientific method.

Experimental results should be repeatable. If they cannot be independently tested and verified by others then it's not safe to rely on the findings.

Personally, when reading papers, I usually try to avoid reading the author's name, institution or affiliations before addressing the methods and then the results. The internal consistency of the information gets assessed, then its consistency with one's prior knowledge.

J
 
If they cannot be independently tested and verified by others then it's not safe to rely on the findings.

There's a huge problem with that.
As I stated in my first post, negative results are discarded. Positive results now make up 70-90% of all papers:

http://www.economist.com/news/brief...elf-correcting-alarming-degree-it-not-trouble

The negative results are much more trustworthy; for the case where the power is 0.8 there are 875 negative results of which only 20 are false, giving an accuracy of over 97%. But researchers and the journals in which they publish are not very interested in negative results. They prefer to accentuate the positive, and thus the error-prone. Negative results account for just 10-30% of published scientific literature, depending on the discipline. This bias may be growing. A study of 4,600 papers from across the sciences conducted by Daniele Fanelli of the University of Edinburgh found that the proportion of negative results dropped from 30% to 14% between 1990 and 2007. Lesley Yellowlees, president of Britain’s Royal Society of Chemistry, has published more than 100 papers. She remembers only one that reported a negative result.
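The arithmetic behind those numbers can be reconstructed in a few lines (my own back-of-the-envelope sketch; the setup of 1,000 hypotheses tested, 100 of them actually true, and a 0.05 significance threshold is assumed here, as it is the example the Economist piece works from but is not stated in the excerpt above):

```python
# Assumptions (not stated in the excerpt): 1,000 hypotheses tested,
# 100 of them actually true, alpha = 0.05, statistical power = 0.8.
total, true_hyps = 1000, 100
false_hyps = total - true_hyps      # 900 hypotheses that are actually false
alpha, power = 0.05, 0.80

true_positives = power * true_hyps              # real effects detected
false_negatives = (1 - power) * true_hyps       # real effects missed
false_positives = alpha * false_hyps            # spurious "findings"
true_negatives = false_hyps - false_positives   # correctly rejected

negatives = true_negatives + false_negatives
positives = true_positives + false_positives

print(f"negatives: {negatives:.0f}, of which {false_negatives:.0f} false "
      f"-> {true_negatives / negatives:.1%} accurate")
print(f"positives: {positives:.0f}, of which {false_positives:.0f} false "
      f"-> {true_positives / positives:.1%} accurate")
```

That reproduces the quote's 875 negatives with 20 false (about 97.7% accurate), and shows the flip side: of the 125 positive results, 45 are false, so the "positive" literature is only about 64% accurate, which is exactly why the publication bias toward positives is so corrosive.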

There's nothing to stop researchers discarding negative results until they get the positive one which they want (remember it's the positives which make the cash). There's not that much out there which can be said to be genuinely independently tested and verified by others.
 
There's a huge problem with that. As I stated in my first post, negative results are discarded. ... There's not that much out there which can be said to be genuinely independently tested and verified by others.
I agree with your points regarding the problem of not publishing negative results. It's a shame, and it often causes misdirection and a lot of redundant work.

I suspect the problem with funding and motivation is that people don't want to repeatedly test the findings of others. That said, there are plenty of independent studies being done around the world when there is a lot at stake. The studies and data I use day to day are repeatedly tested by independent groups around the world, as well as cross-checked against alternative methods. Single studies cannot be and are not relied upon (although IME they can often give interesting leads).

However the general public and mass media are quick to jump to conclusions and make unsafe extrapolations of single studies, or even on the abstracts of papers alone.

The Daily Express has made a business out of such unsafe extrapolations, and its front page carries such medical stories almost weekly.

There are many problems with scientific publishing IMO - the constant inane pressure in academia to publish papers, which detracts from the real flow of investigation IME, funding issues, lack of incentive to publish negative results, abstracts or conclusions that are not supported by the data or the methods used, unsafe or non standard methods, the parasitic control of scientific publishing houses, the lack of real peer review by journal editors and panels. Those are just off the top of my head.

We also have a major problem with scientific understanding throughout society, a problem compounded by over confidence in people's own scientific abilities. Fictional TV programs and films shape people's understanding more than scientific studies do, and this is supported by an unholy alliance with commercial pseudo science pumped out over the Internet.

I recently went onto a large bodybuilding forum and saw an article a guy had published detailing the chemical nature of an ingredient often sold in bodybuilding supplements. IMO he was spot on, and I could tell that he knew his chemistry. The rest of the thread was filled with a mob style response - a mix of vested interests, overconfident egos spouting pseudo science and trolls. No civil challenges or analysis. Whilst an extreme demonstration, much of the general public is completely lost when it comes to science IMHO.

J
 
Tuning in, in advance, for the next lesson: on correlation and causation.
Hot weather and ice cream sales, versus ice cream sales and hot weather.

Let's have some thin plate spline interpolation and smoothing, too.

:thumb:
 
Many insightful comments here. (Thanks for chiming in, Guys.)

Ultimately, issues with failure to publish studies that reject a null hypothesis (or fail to show an effect of a given intervention) due to funding or tenure-review pressures come down to human nature. In my experience in academia, politics and egos are as large as – and, according to those I've met who have worked in the corporate world, sometimes even "worse" than – in other societal institutions.

The system of scientific training (at least in the biological sciences here in the US) contributes to the lack of replication studies, I believe. Thesis and dissertation projects are meant to train a future scientist. A replication study does not, in the eyes of many, help in learning the ins and outs of experimental design in the context of advancing a body of literature by answering the next question or peering deeper into an existing one, so studies that are meant as a learning tool for students are very rarely purely replicative in nature.

With the political scene in mind, consider the scientist who calls an established paper into question by submitting a replication study. This could be considered quite an affront by the scientist who published the original study, and could cause issues down the road in getting grants funded or other studies published if the replicating scientist makes enemies in this way.

Go figure that human nature limits the progression of science... :)

-S
 
Recall the situation when Fleming discovered penicillin, and Watson and Crick DNA...

They had been given tenure not because of the number of attractive experiments they proposed and ran, nor the number of papers they published, but for the quality of their undergraduate and graduate work, on the recommendation of their professors.

Many of their calibre rarely published anything despite years of research - some never.

They would not have dreamed of publishing without being sure their results were credible and reliable through experiment and replication. Were there some cheats? No doubt, but few.

But their tenure, livelihood and reputation didn't depend on publishing, so the temptation to cheat was less.

The career imperative to publish was originally a US competitive phenomenon.

When they did publish papers, peer review was relatively limited, not the international battle it is today.

Did this make their discoveries less valid?

Publishing papers and peer review are big business for the technical publishers.

Have they materially promoted human knowledge?
 
...
Publishing papers and peer review are big business for the technical publishers.

Have they materially promoted human knowledge?
The quality-assurance part of their publishing leaves a bit to be desired these days: very few checks for internal consistency, let alone critique of methods, etc.

There have been some cases where publishers have sold journals (as brands as opposed to copies) to non publishing corporations who then publish claptrap to support their products.

I agree with the other sentiments of your post, btw. Spot on.

J
 
If you've read this, it's quite revealing:

http://www.sciencemag.org/content/342/6154/60.full

by John Bohannon

"A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals.

On 4 July, good news arrived in the inbox of Ocorrafoo Cobange, a biologist at the Wassee Institute of Medicine in Asmara. It was the official letter of acceptance for a paper he had submitted 2 months earlier to the Journal of Natural Pharmaceuticals, describing the anticancer properties of a chemical that Cobange had extracted from a lichen.

In fact, it should have been promptly rejected. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's short-comings immediately. Its experiments are so hopelessly flawed that the results are meaningless.

I know because I wrote the paper. Ocorrafoo Cobange does not exist, nor does the Wassee Institute of Medicine. Over the past 10 months, I have submitted 304 versions of the wonder drug paper to open-access journals. More than half of the journals accepted the paper, failing to notice its fatal flaws. Beyond that headline result, the data from this sting operation reveal the contours of an emerging Wild West in academic publishing."

--------------

-S
 