Deception Detection In Non Verbals, Linguistics And Data.

Thursday, June 23, 2016

"Why Most Published Research Findings Are False."

Why False Positives are the downfall of most research papers.

The headline comes from this academic paper --
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/

It's based around a massive statistical problem. Simply put, the problem is this: because computers and software have become so powerful, multiple testing has become the norm.

Many years ago, one experiment would have been carefully considered, a hypothesis formulated in advance, and the test would then determine whether that hypothesis held at, say, the 95% level (the most common).

[Graph from the paper: brain-scan effects grossly exaggerated because of incorrect statistical tests and controls.]


This means that luck and false positives (thinking you've found something when you haven't) would occur at around the 5% level, and you could be fairly sure you had a genuine positive result at the 95% level.

Nowadays, running 20, 50 or 100 tests and looking for an effect after the fact creates a massively biased result. It's no longer 5% false positives; in brain-scan studies it's more like 35%-40%.
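To see how fast the error rate balloons, here is a minimal Python sketch (my own illustration, not taken from the paper) of the chance of getting at least one false positive when every test is run at the usual 5% level and the tests are assumed independent:

# Chance of at least one false positive across m independent tests at the 5% level.
for m in (1, 20, 50, 100):
    fwer = 1 - 0.95 ** m
    print(f"{m:3d} tests -> {fwer:.0%} chance of at least one false positive")

With 20 tests that chance is already about 64%, and with 100 tests it is over 99%, which is why an uncorrected "significant" finding inside a large batch of tests means very little.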

You are now guaranteed to have lots and lots of positive effects that are due to luck alone. It's called data dredging (or p-hacking), among many other terms. The bias is so large that you don't have a result even if you think you do.

In neuroscience, brain-scan studies test thousands of voxels at a time. The errors become magnified the more tests are run, and researchers now realise they have serious problems with fMRI scan analysis --

http://reproducibility.stanford.edu/big-problems-for-common-fmri-thresholding-methods/
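As a rough illustration of the voxel problem (my own sketch with made-up numbers, not the Stanford analysis), here is what happens when 10,000 voxels of pure noise are each given a t-test, with and without a correction:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_subjects = 10_000, 20

# Pure noise: no voxel carries any real effect.
data = rng.normal(size=(n_voxels, n_subjects))

# One-sample t-test per voxel against a true mean of zero.
t, p = stats.ttest_1samp(data, 0.0, axis=1)

print("uncorrected p < .05:", int(np.sum(p < 0.05)))              # roughly 500 false 'hits'
print("Bonferroni-corrected:", int(np.sum(p < 0.05 / n_voxels)))  # almost always zero

Around five hundred voxels come out "significant" by luck alone; a blunt Bonferroni correction removes essentially all of them, at the cost of power, which is why methods like FDR and permutation tests exist.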

This helps explain why so many studies cannot be replicated. Nowadays only 18% of pharma trials pass Phase 2, and only 50% pass Phase 3.

It's been estimated that since 2004 only 7% of studies have accounted for this multiple-testing error. So any study, from drug trials to economics, that does not use some form of multiplicity control (compensating for the fact that multiple tests have been run) is essentially useless!
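One common form of multiplicity control is the Benjamini-Hochberg FDR procedure. The sketch below is a generic Python version for illustration only; it is not the author's MATLAB code or the new method discussed further down:

import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of which p-values survive FDR control at level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha, then reject hypotheses 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.30]))  # -> [True True False False False]

Uncorrected, four of those five p-values look "significant"; with FDR control only the two strongest survive.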

So what does this have to do with word + deception analysis? Everything. Many variables are considered, so some kind of multiplicity control to compensate for this is vital.

I emailed an Italian professor, Livio Finos of the University of Padova, who wrote the MATLAB code for this book, on which I have based my statistical tests.


I checked with him whether I had correctly followed the procedure he advocates in his paper on a new method of multiplicity control -- http://link.springer.com/article/10.1007%2FBF02741320

He confirmed by email that I had the correct procedure. I then hired a Ukrainian freelance programmer, Victor, and after a few days of back-and-forth emails, culminating in a four-hour Skype session in the middle of the night, the MATLAB code was written and tested. This means I can correct for multiple tests with a new, efficient procedure in addition to the industry-standard FDR approach, and I use non-parametric permutation tests for all analysis, so the results I get are reliable.
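The permutation-test part is standard enough to sketch. The version below is in Python rather than MATLAB and illustrates only the general technique (a two-sample test on a difference of means), not Finos's specific procedure or the code Victor and I wrote:

import numpy as np

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided, two-sample permutation test on the difference of means."""
    rng = np.random.default_rng(seed)
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # randomly relabel the observations
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_permutations + 1)  # p-value with the usual +1 correction

# Made-up example: word counts from two groups of statements.
print(permutation_test([12, 15, 14, 10, 13], [18, 21, 17, 19, 16]))

Because the null distribution is built from the data itself, no normality assumption is needed, which is what makes permutation tests attractive for small, messy samples of linguistic counts.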
