<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[Azimuth]]></provider_name><provider_url><![CDATA[https://johncarlosbaez.wordpress.com]]></provider_url><author_name><![CDATA[John Baez]]></author_name><author_url><![CDATA[https://johncarlosbaez.wordpress.com/author/johncarlosbaez/]]></author_url><title><![CDATA[Why Most Published Research Findings Are&nbsp;False]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>My title here is the eye-catching&#8212;but exaggerated!&#8212;title of this well-known paper:</p>
<p>&bull; John P. A. Ioannidis, <a href="http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124">Why most published research findings are false</a>, <i>PLoS Medicine</i> <b>2</b> (2005), e124.</p>
<p>It&#8217;s open-access, so go ahead and read it!  Here is his bold claim:</p>
<blockquote>
<p>Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies to the most modern molecular research. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof.</p>
</blockquote>
<p>He&#8217;s not really talking about all &#8216;research findings&#8217;, just research that uses the</p>
<blockquote>
<p>ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05.</p>
</blockquote>
<p>His main interests are medicine and biology, but many of the problems he discusses are more general.</p>
<p>His paper is a bit technical&#8212;but luckily, one of the main points was nicely explained in the comic strip <a href="http://xkcd.com/882/">xkcd</a>:</p>
<div align="center"><a href="http://xkcd.com/882/"><br />
<img width="450" src="https://i0.wp.com/imgs.xkcd.com/comics/significant.png" /></a></div>
<p>If you try 20 or more things, you should not be surprised when an event with probability 0.05 = 1/20 happens at least once!  It&#8217;s nothing to write home about&#8230; and nothing to write a scientific paper about.</p>
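<p>The arithmetic behind this is worth spelling out. Assuming 20 independent tests, each with a 0.05 chance of a spurious &#8216;significant&#8217; result when there is no real effect, the chance of at least one false positive is already well over half:</p>

```python
# Probability that at least one of 20 independent null tests at
# significance level alpha = 0.05 comes out "significant" by chance.
alpha = 0.05
n_tests = 20

# P(no false positives) = (1 - alpha)^n, so
# P(at least one) = 1 - (1 - alpha)^n.
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(round(p_at_least_one, 2))  # 0.64
```

<p>So under these assumptions, a &#8216;significant&#8217; green-jelly-bean result is more likely than not, even if no color of jelly bean does anything at all.</p>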
<p>Even researchers who don&#8217;t make this mistake <i>deliberately</i> can do it <i>accidentally</i>.   Ioannidis draws several conclusions, which he calls corollaries:</p>
<p>&bull; <b>Corollary 1</b>: The smaller the studies, the less likely the research findings are to be true.  (If you test just a few jelly beans to see which ones &#8216;cause acne&#8217;, you can easily fool yourself.)</p>
<p>&bull; <b>Corollary 2</b>: The smaller the effects being measured, the less likely the research findings are to be true.  (If you&#8217;re studying whether jelly beans cause just a <i>tiny bit</i> of acne, you can easily fool yourself.)</p>
<p>&bull; <b>Corollary 3</b>: The more quantities there are to find relationships between, the less likely the research findings are to be true.  (If you&#8217;re studying whether hundreds of colors of jelly beans cause hundreds of different diseases, you can easily fool yourself.)</p>
<p>&bull; <b>Corollary 4</b>: The greater the flexibility in designing studies, the less likely the research findings are to be true.  (If you use lots and lots of different tricks to see if different colors of jelly beans &#8216;cause acne&#8217;, you can easily fool yourself.)</p>
<p>&bull; <b>Corollary 5</b>: The more financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.  (If there&#8217;s huge money to be made selling acne-preventing jelly beans to teenagers, you can easily fool yourself.)</p>
<p>&bull; <b>Corollary 6</b>: The hotter a scientific field, and the more scientific teams involved, the less likely the research findings are to be true.  (If lots of scientists are eagerly doing experiments to find colors of jelly beans that prevent acne, it&#8217;s easy for someone to fool themselves&#8230; and everyone else.)</p>
<p>Ioannidis states his corollaries in more detail; I&#8217;ve simplified them to make them easy to understand, but if you care about this stuff, you should read what he actually says!</p>
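<p>You can also watch this phenomenon happen in a simulation. Here is a minimal sketch&#8212;not anything from Ioannidis&#8217;s paper, just an illustration with made-up numbers&#8212;that tests 20 jelly bean &#8216;colors&#8217;, none of which truly affects acne, and counts how many nonetheless come out &#8216;significant&#8217; at p &lt; 0.05 (using a simple two-proportion z-test with the normal approximation):</p>

```python
import math
import random

random.seed(42)

def two_prop_z_pvalue(k1, n1, k2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    # P(|Z| > z) for a standard normal Z.
    return math.erfc(abs(z) / math.sqrt(2))

def simulate(n_colors=20, n_per_group=500, base_rate=0.2):
    """Test n_colors jelly bean 'colors'; none has any real effect on acne.

    Both groups get acne at the same base_rate, so every 'significant'
    result is a false positive.  Returns how many of the n_colors tests
    cross p < 0.05 anyway.
    """
    false_positives = 0
    for _ in range(n_colors):
        treated = sum(random.random() < base_rate for _ in range(n_per_group))
        control = sum(random.random() < base_rate for _ in range(n_per_group))
        if two_prop_z_pvalue(treated, n_per_group, control, n_per_group) < 0.05:
            false_positives += 1
    return false_positives

# Averaged over many repetitions, about 20 * 0.05 = 1 of the 20
# null comparisons looks 'significant' purely by chance.
runs = [simulate() for _ in range(200)]
print(sum(runs) / len(runs))
```

<p>The names and parameters here (sample sizes, the 20% acne rate) are invented for illustration, but the punchline matches the comic strip: run enough comparisons and a headline-grabbing false positive is practically guaranteed.</p>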
<h3> The Open Science Framework </h3>
<p>Since his paper came out&#8212;along with many others on this general theme&#8212;people have gotten more serious about improving the quality of statistical studies.   One effort is the <b><a href="https://openscienceframework.org/">Open Science Framework</a></b>.  </p>
<p>Here&#8217;s what their website says:</p>
<blockquote><p>
The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software.  The purpose of the software is to support the scientist&#8217;s workflow and help increase the alignment between scientific values and scientific practices.</p>
<p>&bull; <strong>Document and archive studies.</strong></p>
<p>Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or just forgetting where you put the damn thing. </p>
<p>&bull;  <strong>Share and find materials.</strong></p>
<p>With a click, make study materials public so that other researchers can find, use and cite them. Find materials by other researchers to avoid reinventing something that already exists. </p>
<p>&bull;  <strong>Detail individual contribution.</strong></p>
<p>Assign citable, contributor credit to any research material &#8211; tools, analysis scripts, methods, measures, data. </p>
<p>&bull;  <strong>Increase transparency.</strong></p>
<p>Make as much of the scientific workflow public as desired &#8211; as it is developed or after publication of reports. Find public projects <a href="http://openscienceframework.org/explore/activity/" rel="nofollow" class="external">here</a>.</p>
<p>&bull; <strong>Registration.</strong></p>
<p>Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle such as manuscript submission or at the onset of data collection. Discover public registrations <a href="http://openscienceframework.org/explore/activity/" rel="nofollow" class="external">here</a>.</p>
<p>&bull; <strong>Manage scientific workflow.</strong> </p>
<p>A structured, flexible system can provide efficiency gain to workflow and clarity to project objectives, as pictured.
</p></blockquote>
<h3> CONSORT </h3>
<p>Another group trying to improve the quality of scientific research is <a href="http://www.consort-statement.org/">CONSORT</a>, which stands for  Consolidated Standards of Reporting Trials.  This is mainly aimed at medicine, but it&#8217;s more broadly applicable. </p>
<p>The key here is the &#8220;CONSORT Statement&#8221;, a <a href="http://www.consort-statement.org/consort-statement/overview0/">25-point checklist</a> saying what you should have in any paper about a randomized controlled trial, and a <a href="http://www.consort-statement.org/consort-statement/flow-diagram0/">flow chart</a> saying a bit about how the experiment should work.</p>
<h3> What else? </h3>
<p>What are the biggest other efforts that are being made to improve the quality of scientific research?</p>
]]></html><thumbnail_url><![CDATA[https://i0.wp.com/imgs.xkcd.com/comics/significant.png?fit=440%2C330]]></thumbnail_url><thumbnail_height><![CDATA[330]]></thumbnail_height><thumbnail_width><![CDATA[118]]></thumbnail_width></oembed>