Page 2 of 2

Posted: Sun Feb 14, 2010 5:44 pm
by cah
Fogdweller, I found a quite comprehensive, non-mathematical explanation of the significance test or the "p-value":

http://www.sepp.org/Archive/controv/con ... hhoax.html

But be warned: if you don't want to lose the last little bit of trust in scientific studies, don't read it.

Posted: Sun Feb 14, 2010 7:05 pm
by fogdweller
Sorry patientx, I did not mean to imply you were speculating. I reread my post and the questions were pretty unclear. I was interested in the statistical significance, which is usually expressed as an "n" number. I am not sure, but I suspect for this number of observations and this difference (80% vs. 24%) it would be huge. I just wondered if we had enough info to calculate that number, and if so, what it was. It seems we may not have enough info yet, so we will wait for the April data release, which should have all those calculations included.

Re: Who is a statistician out there?

Posted: Sun Feb 14, 2010 11:57 pm
by NHE
I'm not a statistician. However, from what I remember of the biostatistics course I took in college, here is an explanation of the concept of the confidence interval. Statistics, and good experimental design, is about testing hypotheses. One usually states a null hypothesis and an alternate hypothesis. The null hypothesis typically states that there is no significant difference between two or more treatment groups. If a significant difference is found, then one can reject the null hypothesis and accept the alternate hypothesis. The latter typically states that there is a significant difference in some parameter between two or more treatment groups.

The following diagram depicts a normal distribution.
[Image: bell curve of a normal distribution] The numbers on the x-axis indicate ± standard deviations. One standard deviation falls at the inflection point of the curve on either side of the mean. About 95% of the data lie within ±2 standard deviations (more precisely, ±1.96), while 5% (2.5% at either end of the distribution) lie outside this region. In statistics, α ≤ 0.05 is the conventional point at which a calculated statistic is considered significantly different from the mean; in effect, the statistic lies in one of the outer 2.5% tails of the distribution. When researchers quote P=0.001 or something like that, they are reporting how unlikely their data would be if the null hypothesis were true; in effect, they are trying to persuade you that their data are "very significant." Some statisticians consider this to be nonsense, even though it is the norm in peer-reviewed research. Data are either significantly different or they are not. There is no such thing as "very significant."

By the way, all of those graphs you see in published research that use ±1 standard deviation error bars to show that something is different are really demonstrating investigator bias (i.e., trying to make the data look better than they really are). The error bars should always indicate a 95% confidence interval, since that is what is used to determine significance.
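If it helps to see the ±2 standard deviations figure concretely, here is a minimal Python sketch (not from the post; the sample size and seed are arbitrary choices) that draws random normal samples and checks how many fall within ±1.96 standard deviations of the mean:

```python
import random

# Draw samples from a normal distribution and check how many fall within
# +/- 1.96 standard deviations of the mean -- the basis of the 95%
# confidence interval described above.
random.seed(42)
n = 100_000
mu, sigma = 0.0, 1.0
samples = [random.gauss(mu, sigma) for _ in range(n)]

within = sum(1 for x in samples if abs(x - mu) <= 1.96 * sigma)
fraction = within / n
print(f"fraction within +/-1.96 SD: {fraction:.3f}")  # close to 0.95
```

With a large enough sample the empirical fraction lands very close to 0.95, which is exactly where the "95% confidence" in a 95% confidence interval comes from.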

Anyways, I hope this helps to clarify what is meant by a 95% confidence interval.

NHE

Posted: Mon Feb 15, 2010 2:06 am
by Johnson
Thanks NHE, and cah. Even my dominant right brain understood. (pictures always help. Grin)

Posted: Mon Feb 15, 2010 2:35 am
by ama
I worked for a while in biostatistics.
First, with these data you can't do anything with the normal distribution. To calculate confidence intervals you would need the original data set.
What you can do is a chi-square test. It tests for a significant difference between expected and observed counts, and it does not depend on the normal distribution. The normal distribution only makes sense for interval-scaled data, which we don't have here; these data follow a binomial distribution.
But by the way, regarding scientific methodology, it doesn't make much sense to do such a recalculation, although all the Cochrane studies do this kind of thing.
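For what it's worth, the chi-square test described above can be sketched in a few lines of Python. The 80% vs. 24% rates come from earlier in the thread, but the group sizes of 100 each are invented for illustration, so the counts below are hypothetical:

```python
# Hypothetical 2x2 table (the 80% vs. 24% rates come from the thread;
# the group sizes of 100 each are made up for illustration):
#                 CCSVI+   CCSVI-
#   MS patients     80       20
#   controls        24       76
observed = [[80, 20], [24, 76]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected counts under the null hypothesis of no association
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Pearson chi-square statistic: sum of (O - E)^2 / E over all four cells
chi2 = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(2) for j in range(2)
)

# Critical value for 1 degree of freedom at alpha = 0.05 is 3.841
print(f"chi-square = {chi2:.2f}, significant: {chi2 > 3.841}")
```

With a gap as large as 80% vs. 24%, the statistic comes out far above the 3.841 cutoff, so the null hypothesis of no association would be rejected for these illustrative counts.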

Posted: Tue Feb 16, 2010 6:31 am
by patientx
Fogdweller:

Sorry, my speculating comment wasn't in response to your post.

What I learned about statistics was for a completely different application from clinical studies, so I have been trying to read up on the subject. In the case of the Buffalo study, I am not sure how you would compute statistical significance, the p-value (is this what you meant by 'n'?), or confidence interval limits. There's not really a spread of data points; for example, the researchers weren't measuring the number of closed veins or the amount by which the veins were closed. It was simply whether patients or controls had CCSVI. So there's no mean or standard deviation for the data.

I guess you could calculate an odds ratio similar to what's described here:

http://slack.ser.man.ac.uk/theory/association_odds.html

If you have the raw numbers from the study, you could construct a 2x2 chart of MS patients with and without CCSVI, and controls with and without CCSVI. Then run through the calculations like the example.

I have to read up a little more to really understand what this would mean, though.
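As a sketch of the odds-ratio calculation described at that link, here is a small Python example using the standard log-odds-ratio method for the 95% confidence interval. The 2x2 counts are hypothetical (only the 80% vs. 24% rates come from the thread; the group sizes are invented):

```python
import math

# Hypothetical 2x2 table (group sizes of 100 each are made up; only the
# 80% vs. 24% rates come from the thread):
#                 CCSVI+   CCSVI-
#   MS patients    a=80     b=20
#   controls       c=24     d=76
a, b, c, d = 80, 20, 24, 76

# Odds ratio: odds of CCSVI among patients divided by odds among controls
odds_ratio = (a * d) / (b * c)

# 95% confidence interval via the standard log-odds-ratio method:
# log(OR) is approximately normal with SE = sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(log_or - 1.96 * se)
ci_high = math.exp(log_or + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

If the whole confidence interval sits above 1, the association is statistically significant at the 0.05 level, which is the connection back to the p-value discussion above.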

Posted: Tue Feb 16, 2010 8:11 am
by Cece
Is p<.05 the highest possible statistical significance figure that a researcher would consider statistically significant?

It's not the best term (statistically significant) since statistical significance does not always mean real-world significance, but it makes it sound as if it does.

Posted: Tue Feb 16, 2010 9:18 am
by ama
Cece wrote:Is p<.05 the highest possible statistical significance figure that a researcher would consider statistically significant?
You are right, Cece.
What does this p mean? It means that if you ran 100 studies with the same design under the same conditions, about 5 of them (5%) would produce such a result purely by chance, even if there were no real effect.
Cece wrote:It's not the best term (statistically significant) since statistical significance does not always mean real-world significance, but it makes it sound as if it does.
So if you do not have a good theory, you do not have anything.
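That reading of the p-value can be checked with a small simulation: run many identical "studies" in which the null hypothesis is actually true (both groups have the same underlying rate) and count how often a test still comes out "significant." A minimal Python sketch, with all parameters chosen arbitrarily for illustration:

```python
import math
import random

# Simulate many studies where the null hypothesis is TRUE and count how
# often a two-proportion z-test nevertheless declares significance.
# It should happen in roughly 5% of studies, purely by chance.
random.seed(1)
n_per_group, true_rate, n_studies = 200, 0.5, 5000

false_positives = 0
for _ in range(n_studies):
    # Both groups drawn from the SAME underlying rate (no real effect)
    x1 = sum(random.random() < true_rate for _ in range(n_per_group))
    x2 = sum(random.random() < true_rate for _ in range(n_per_group))
    p1, p2 = x1 / n_per_group, x2 / n_per_group
    pooled = (x1 + x2) / (2 * n_per_group)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_group)
    z = (p1 - p2) / se
    if abs(z) > 1.96:  # two-sided test at alpha = 0.05
        false_positives += 1

rate = false_positives / n_studies
print(f"false-positive rate under the null: {rate:.3f}")  # roughly 0.05
```

The observed false-positive rate hovers around 5%, which is exactly the "5 out of 100 studies by chance" interpretation given above.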

Posted: Wed Feb 17, 2010 11:06 pm
by MrSuccess
In keeping with massaging the numbers .... I have the following to add :wink:

The healthy controls - who says they are NOT going to develop MS at various ages in the future? :?: And reflect Dr. Zamboni's numbers? :?:

If I were one of the healthy controls ... and I had stenosis ... you can bet I'd be lining up for the vein angio PDQ.

Those people will be watched and monitored closely. If some go on to get MS ... the theory is proven. I say give them angio NOW. Prevent them from getting MS. What was the figure? 43-fold?

The day can't come soon enough when there is no resistance to CCSVI vein angio correction ... and the danger of getting MS.

This procedure looks straightforward .

What are we waiting for? :evil:




Mr. Success

Posted: Wed Feb 17, 2010 11:16 pm
by Algis
If anyone can avoid MS - by all means just do it....

Posted: Thu Feb 18, 2010 7:32 am
by TFau
Algis wrote:If anyone can avoid MS - by all means just do it....
LOL - great advice!