TAU-Led International Collaboration: High Variability Is a Consequence of Complex Data Workflows, Finds Nature Study
Independent teams testing the same hypotheses on the same functional MRI dataset reached different conclusions, highlighting challenges to current scientific methods
A new Tel Aviv University-led study published on May 20 in Nature offers new evidence that the complexity of contemporary analytical methods in science contributes to the variability of research outcomes.
Previous studies in psychology, cancer biology and behavioral economics revealed many failures to reproduce published results. The TAU researchers used an approach known as “Many Analysts,” in which many independent teams analyze the same dataset, to test how much analytic choices drive variability in outcomes, explains study co-lead author Dr. Tom Schonberg of the Department of Neurobiology at TAU’s George S. Wise Faculty of Life Sciences and TAU’s Sagol School of Neuroscience.
“The variability in outcomes demonstrated in this study is an inherent part of the complex process of obtaining scientific results, and we must understand it in order to know how to tackle it,” he adds. “Science is conducted by humans, and there is no way to escape variability. But we must acknowledge this in order to self-correct and attain the most reliable answers.”
The Neuroimaging Analysis, Replication and Prediction Study (NARPS) was also led by Dr. Schonberg’s former PhD student Dr. Rotem Botvinik-Nezer, today a postdoctoral researcher at Dartmouth College, together with co-investigators Prof. Russell Poldrack of Stanford University and Prof. Thomas Nichols of Oxford University.
Overall, 180 researchers on 70 teams around the world analyzed the same brain imaging dataset of 108 subjects. The subjects had performed a task testing their decision-making about gambles with potential gains and losses. Each team chose its own analysis workflow, and the different workflows led to different conclusions.
“Science is often criticized,” adds Dr. Schonberg. “But it is not a belief system like a religion, as some have argued. It has rules and a method: the scientific method. We are constantly trying to improve this method in a process of continual self-questioning. We believe our study takes this process forward.”
The research teams were presented with the same data, fMRI scans of subjects performing a value-based decision-making task, and asked to test the same nine predefined hypotheses.
The large neuroimaging dataset had been collected over the course of a year at the Alfredo Federico Strauss Center for Computational Neuroimaging at TAU by Roni Iwanir, a former Sagol School MSc student in Schonberg’s lab. While participants performed the monetary decision-making task, fMRI scans tested whether activity in specific brain regions involved in value processing changed with the amount of money won or lost on a gamble. Some 70 international teams then independently analyzed this dataset over the course of three months.
“The processing you have to go through from raw data to a result with fMRI is really complicated,” adds Prof. Poldrack. “There are a lot of choices you have to make at each place in the analysis workflow.”
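To make the scale of that flexibility concrete, here is a minimal sketch in Python that enumerates a few common fMRI analysis choice points and counts the workflows they generate. The choice points and options below are illustrative assumptions, not the actual alternatives the NARPS teams faced:

```python
# Illustrative only: a handful of common fMRI analysis choice points.
# These options are examples, not the actual NARPS pipelines.
from itertools import product

choice_points = {
    "software":          ["FSL", "SPM", "AFNI"],
    "smoothing_fwhm_mm": [4, 5, 6, 8],
    "hemodynamic_model": ["canonical", "canonical+derivatives"],
    "motion_regressors": [6, 24],
    "threshold":         ["voxelwise FWE", "cluster-based", "FDR"],
}

# Every combination of options is a distinct, defensible workflow.
pipelines = list(product(*choice_points.values()))
print(f"{len(pipelines)} distinct workflows from "
      f"{len(choice_points)} choice points")
# -> 144 distinct workflows from 5 choice points

# One arbitrary pipeline, to show what a single team's choices look like:
print(dict(zip(choice_points, pipelines[0])))
```

Even this toy enumeration, with only five decisions, yields well over a hundred distinct pipelines; real fMRI workflows involve many more choice points than this.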
Each team reached its own final conclusions about the data, with results varying significantly across teams for five of the nine hypotheses.
“Our new study demonstrated the high analytic flexibility that occurs ‘in the wild’,” says Dr. Schonberg. “The participating researchers modeled the hypotheses differently and used different software for the analysis. They also used different techniques and definitions at different stages of the analysis.”
In another part of the study, experts from the field, as well as researchers from the analysis teams, traded with one another in “prediction markets” on what they expected the research outcomes to be. This part of the study was run by economists and behavioral finance experts, who had provided the initial idea for the study. It revealed marked over-optimism about the ability to replicate previous findings, even among researchers who had analyzed the data themselves.
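The release does not describe the market mechanism used. As a purely illustrative sketch, the following implements Hanson’s logarithmic market scoring rule (LMSR), a standard design for such markets, for a single yes/no question of the form “will this hypothesis replicate?”. The liquidity parameter and trade sizes are arbitrary assumptions:

```python
# Minimal sketch of a logarithmic market scoring rule (LMSR) for a
# binary prediction market ("will this hypothesis replicate?").
# The mechanism and numbers are illustrative; the press release does
# not specify which market design the NARPS prediction markets used.
import math

B = 100.0  # liquidity parameter: larger B means prices move more slowly

def cost(q_yes: float, q_no: float) -> float:
    """LMSR cost function over outstanding YES/NO shares."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes: float, q_no: float) -> float:
    """Instantaneous YES price, interpretable as P(replicates)."""
    e_yes, e_no = math.exp(q_yes / B), math.exp(q_no / B)
    return e_yes / (e_yes + e_no)

q_yes = q_no = 0.0  # market opens at price 0.5 (maximum uncertainty)
print(f"opening price: {price_yes(q_yes, q_no):.2f}")

trade = 80.0  # an optimistic trader buys 80 YES shares
paid = cost(q_yes + trade, q_no) - cost(q_yes, q_no)
q_yes += trade
print(f"trader pays {paid:.2f} for {trade:.0f} YES shares; "
      f"new price: {price_yes(q_yes, q_no):.2f}")
```

Under such a mechanism, the closing price can be read as the traders’ aggregate probability that a finding will replicate; the over-optimism the study reports corresponds to prices running well above the outcomes actually observed.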
“While the final reported outcomes varied substantially, earlier-stage analysis results actually showed consensus among most research teams,” says Dr. Schonberg. “This is very encouraging and was in fact a somewhat surprising result. Despite the large variability in the final reported results, the underlying analyses were similar, meaning we need to find ways to express this convergence.
“For example, the study suggests that researchers could perform and report multiple analyses with the same data, to find the results to which different reliable methods converge.”
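As a rough sketch of that suggestion, using synthetic data and hypothetical analysis variants rather than actual fMRI pipelines, one could run the same test under several defensible choices and report where they converge:

```python
# Illustrative multiverse-style check on synthetic data: test one
# hypothesis ("mean response > 0") under several defensible analytic
# variants and report where the results converge. Not the NARPS code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=0.4, scale=1.0, size=108)  # synthetic "effect"

def variants(x):
    """A few reasonable analysis choices for the same question."""
    clipped = np.clip(x, *np.percentile(x, [2.5, 97.5]))
    yield "t-test, raw data", stats.ttest_1samp(x, 0.0).pvalue
    yield "t-test, outliers clipped", stats.ttest_1samp(clipped, 0.0).pvalue
    yield "Wilcoxon signed-rank", stats.wilcoxon(x).pvalue

results = {name: p < 0.05 for name, p in variants(data)}
for name, significant in results.items():
    print(f"{name:26s} -> {'significant' if significant else 'n.s.'}")
print(f"convergence: {sum(results.values())}/{len(results)} "
      f"variants find the effect")
```

Reporting the whole set of results, rather than a single chosen pipeline, shows readers whether a conclusion is robust to the analytic choices or depends on one particular path through the workflow.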
Dr. Schonberg believes the findings can help scientists advance their methodology and improve the quality of their analyses in the future.
“I would want our findings to be used to take science forward, toward even greater sharing of all study-related information: transparency of methods, analysis code and data,” says Dr. Schonberg. “That is the only way everyone can test and ‘play’ with results to see what holds. We have seen the importance of and great need for data sharing during the recent COVID-19 pandemic, in order to understand the optimal course of action.”
“Naturally, the novelty of discoveries matters a great deal to scientists. But just as important is the rigor of our methodology,” concludes Dr. Schonberg. “Our study reflects the willingness of a vast community of scientists to spend thousands of hours improving our methodologies in order to get the conclusions right and reach reliable results.”
The research paper is available on the Nature website.