The 7 biggest problems facing science, according to 270 scientists by Julia Belluz, Brad Plumer, and Brian Resnick at Vox is an excellent overview of some of the most serious problems, with pointers to efforts to fix them. Their 7 are:
- Academia has a huge money problem:
In the United States, academic researchers in the sciences generally cannot rely on university funding alone to pay for their salaries, assistants, and lab costs. Instead, they have to seek outside grants. "In many cases the expectations were and often still are that faculty should cover at least 75 percent of the salary on grants," writes John Chatham, ... Grants also usually expire after three or so years, which pushes scientists away from long-term projects. Yet as John Pooley ... points out, the biggest discoveries usually take decades to uncover and are unlikely to occur under short-term funding schemes.
- Too many studies are poorly designed:
An estimated $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on poorly designed and redundant studies, according to meta-researchers who have analyzed inefficiencies in research. We know that as much as 30 percent of the most influential original medical research papers later turn out to be wrong or exaggerated.
- Replicating results is crucial — and rare:
A 2015 study looked at 83 highly cited studies that claimed to feature effective psychiatric treatments. Only 16 had ever been successfully replicated. Another 16 were contradicted by follow-up attempts, and 11 were found to have substantially smaller effects the second time around. Meanwhile, nearly half of the studies (40) had never been subject to replication at all.
- Peer review is broken:
numerous studies and systematic reviews have shown that peer review doesn’t reliably prevent poor-quality science from being published.
- Too much science is locked behind paywalls:
"Large, publicly owned publishing companies make huge profits off of scientists by publishing our science and then selling it back to the university libraries at a massive profit (which primarily benefits stockholders)," Corina Logan, an animal behavior researcher at the University of Cambridge, noted. "It is not in the best interest of the society, the scientists, the public, or the research." (In 2014, Elsevier reported a profit margin of nearly 40 percent and revenues close to $3 billion.)
- Science is poorly communicated:
Science journalism is often full of exaggerated, conflicting, or outright misleading claims. If you ever want to see a perfect example of this, check out "Kill or Cure," a site where Paul Battley meticulously documents all the times the Daily Mail reported that various items — from antacids to yogurt — either cause cancer, prevent cancer, or sometimes do both.
...
Indeed, one review in BMJ found that one-third of university press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice.
- Life as a young academic is incredibly stressful:
A 2015 study at the University of California Berkeley found that 47 percent of PhD students surveyed could be considered depressed.
Dr. Larson and his colleagues calculated R0s for various science fields in academia. There, R0 is the average number of Ph.D.s that a tenure-track professor will graduate over the course of his or her career, with an R0 of one meaning each professor is replaced by one new Ph.D. The highest R0 is in environmental engineering, at 19.0. It is lower — 6.3 — in biological and medical sciences combined, but that still means that for every new Ph.D. who gets a tenure-track academic job, 5.3 will be shut out. In other words, Dr. Larson said, 84 percent of new Ph.D.s in biomedicine “should be pursuing other opportunities” — jobs in industry or elsewhere, for example, that are not meant to lead to a professorship.

Again, amen. A friend of mine spotted this problem years ago and has made a business of advising grad students and post-docs on how to transition to "real work".
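To make the arithmetic explicit, here is a minimal Python sketch of the R0 calculation, using only the field names and values from the quote above:

```python
# R0 here is the average number of Ph.D.s a tenure-track professor graduates
# over a career. One of those Ph.D.s replaces the professor; the rest are
# "shut out" of tenure-track academia. Values are from the quote above.
r0_by_field = {
    "environmental engineering": 19.0,
    "biological and medical sciences": 6.3,
}

for field, r0 in r0_by_field.items():
    shut_out = r0 - 1          # Ph.D.s per retiring professor with no academic slot
    fraction = shut_out / r0   # share of new Ph.D.s shut out
    print(f"{field}: {shut_out:.1f} of every {r0:.1f} Ph.D.s shut out ({fraction:.0%})")
```

For biomedicine this reproduces the 84 percent figure (5.3 / 6.3); the same arithmetic puts environmental engineering at about 95 percent.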
11 comments:
Do not confuse Ph.D with academia-only. As a Ph.D. (computer science) in industry I function AS a Ph.D. My role is to provide unique and significant results that lead to products, services, and solutions to complex technical problems. And, the work is very well paid since it is not easy to find people who can and will grapple with complexity and difficulty, where the problems and potential solutions are not well understood.
Peter Raeth, Ph.D., perhaps you would like to read the page about me on Wikipedia and some of my publications to understand where I am coming from on this topic.
«Academia has a huge money problem»
That's not what you argue when you mention grants. Obviously academia *as it is* does not have a money problem in the commonly argued sense, because the total level of funding is adequate to fund current activity.
What you are describing is a problem with money delivery, that is, how the money is allocated and when it is allocated.
The same amount of current funding could be delivered in a different way that would help longer term projects etc.
But look at the "advantages" of the current system: in effect academics are turned into little startups that are constantly having to sell projects to raise discretionary funding. This gives them a small-businessman and careerist mindset, and helps reduce their temptation to have a politically "unaligned" mindset.
The switch to a per-academic per-project funding model, where universities have become in effect business parks and sometimes startup incubators for academic-run businesses, was motivated politically, not with regard to platitudes like "excellence of research".
«an R0 of one meaning each professor is replaced by one new Ph.D. The highest R0 is in environmental engineering, at 19.0. It is lower — 6.3 — in biological and medical sciences combined, but that still means that for every new Ph.D. who gets a tenure-track academic job, 5.3 will be shut out»
That is totally intentional, and the main reason has nothing to do with preparing the academics of tomorrow, or even the high qualified workers of tomorrow.
The story is that there is an alternative model of research based on "national laboratories" or "corporate laboratories" with permanent full-time staff who go through a research-based career.
But that model is expensive, and relies on tax funds or spending company profits on research.
So what has happened is that research work has been outsourced to universities. Why universities? Because their whole organization is based on temping and "volunteer" work, from students to adjuncts.
So large numbers of PhDs and postdocs are *required* to provide lots of easily discarded low-pay temps to do the tedious legwork of research (this varies by discipline, but in the biomedical ones, for example, there is a lot of tedious legwork to do).
This also works well with the contemporary model in which universities are business parks renting office and lab space to academics running small research businesses on short-term contracts: these small businesses need particularly cheap short-term casual workers.
This is particularly true of PhD/postdoc work in universities below the Ivy League level, because only Ivy League postgraduates have much of a chance of an academic career when the chances overall are 1-in-10. But many students, thanks to "positive thinking", have boundless optimism that they will be that 1-in-10.
Also, keeping young people out of the job market and in postgraduate programs as, in effect, casual workers is a relatively cheap way to keep official unemployment statistics down.
As a footnote to the casualization of work in academia, some amusing cartoons from Doonesbury from 1996:
http://images.UComics.com/comics/db/1996/db960909.gif
http://images.UComics.com/comics/db/1996/db960910.gif
http://images.UComics.com/comics/db/1996/db960911.gif
http://images.UComics.com/comics/db/1996/db960912.gif
http://images.UComics.com/comics/db/1996/db960913.gif
That's for adjuncts on the teaching side, but the research side is not much better.
Along the same lines as the Getty Museum, which has been caught claiming copyright on and charging for open access images (twice in two weeks), Mike the Mad Biologist catches NEJM arguing that open data sharing means charging for access:
"persons who were not involved in an investigator-initiated trial but want access to the data should financially compensate the original investigators for their efforts and investments in the trial and the costs of making the data available."
and preventing any studies:
"conducted with the aim of inappropriately undermining the original findings"
As Mike points out:
"If the data were generated with federal funding (e.g., NIH), then they are not your data."
Mark Humphries' How a happy moment for neuroscience is a sad moment for science is a must-read, contrasting the openness of the Allen Institute's release of:
"a landmark set of data in June. Entitled the “Allen Brain Observatory”, it contains a vast array of recordings from the bit of cortex that deals with vision, while the eyes attached to that bit of cortex were looking at patterns."
before any publications based on it with the typical behavior of university scientists:
"Research needs grants to fund it, and grants need papers. Promotion needs papers. Tenure needs papers. Postdoc positions need papers. Even PhD studentships need papers now, God help us all."
Humphries is:
"sad that an entirely private research institute can show up so starkly the issues of publicly-funded science.
But this also offers a case study in the solutions to science’s incentive problem. The Allen Institute have shown repeatedly that quality and rigor of science can be prioritised over quantity of output and money as measures of “success”. Others have also shown how dedicating many resources to long term projects can produce deep insights and highly beneficial tools for neuroscience. For example, Jeremy Freeman’s team producing a suite of neuroscience analysis tools for high performance computing platforms; or Christian Machens’ team developing their general neuron population analysis framework, and applying it [to] a vast range of datasets."
Apparently, Elsevier is trying to address the problem of poor quality reviewing by gamifying the process:
"To encourage efficient and timely reviewing and to recognize the appreciation for the important work of reviewers, Elsevier will publish on the journal's website a list of reviewers with their full names and their relative ranking and percentile in how quickly they submitted their report (computed as days between the invitation to review and the submission of a referee report). Referee anonymity will be preserved because authors are not aware of the dates in which a reviewer was invited and submitted his report. Moreover, Elsevier will not publish the number of days taken for the referee to complete the report, but only the relative ranking and percentile (e.g., a ranking of 120 among 300 reviewers and the 40th percentile). The reviewers' names, ranking, and percentile will be published only for the top 80% of reviewers in terms of days taken to review. The 20% of reviewers with the longest review times will not appear in the list."
Clearly, from Elsevier's point of view the problem with peer review is that the reviewers take too long. Gamification will motivate them to work faster. Problem solved. What you can measure, you can improve.
For the consumers of Elsevier's output (NB who are not the same as Elsevier's customers) the problem is, as the studies linked to above show, that peer review is not performing its task of improving the quality of publications. It can't detect fraud, it can't detect egregious errors, it hasn't prevented a wave of retractions, and it is biased against minorities. But it is effective at suppressing work deviating from the consensus.
Motivating more quick-and-dirty reviews will make every single one of these problems worse, but will help Elsevier's bottom line. Way to go!
PS - see Nature's attempt to solve the problem of slow reviews.
Christopher Ingraham at the Washington Post's Wonkblog reports that An alarming number of scientific papers contain Excel errors:
"A team of Australian researchers analyzed nearly 3,600 genetics papers published in a number of leading scientific journals — like Nature, Science and PLoS One. As is common practice in the field, these papers all came with supplementary files containing lists of genes used in the research.
The Australian researchers found that roughly 1 in 5 of these papers included errors in their gene lists that were due to Excel automatically converting gene names to things like calendar dates or random numbers."
Their paper is Gene name errors are widespread in the scientific literature.
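The failure mode is easy to reproduce: Excel silently converts gene symbols such as SEPT2 or MARCH1 into dates, which then round-trip through spreadsheets as strings like "2-Sep" or "1-Mar". Here is a minimal Python sketch of the kind of scan that would flag such mangled entries; the regular expression and the sample gene list are illustrative assumptions, not the authors' actual code:

```python
import re

# Date-like strings that Excel substitutes for gene symbols (e.g. SEPT2 -> "2-Sep").
DATE_LIKE = re.compile(r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$")

gene_list = ["TP53", "2-Sep", "BRCA1", "1-Mar", "EGFR"]  # sample data for illustration

corrupted = [gene for gene in gene_list if DATE_LIKE.match(gene)]
print("Possibly Excel-mangled entries:", corrupted)
# -> Possibly Excel-mangled entries: ['2-Sep', '1-Mar']
```

The "random numbers" in the quote are identifiers like "2310009E13" that Excel reads as scientific notation; a similar check for numeric strings could catch those.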
A more interesting idea for improving the peer review process than Elsevier's comes from BMC Psychology:
"‘Results free’ means that reviewers of research manuscripts submitted for publication will not be able to see the results or discussion sections until the end of the review process. It is thought that this could ensure the research is judged on the strength of a study’s methods, and the question it is addressing, rather than the results or outcome of the study.
...
It is well established that results deemed statistically significant are more likely to be published than null results – or those that fail to reach significance in a statistical test. However, these null results form an important part of the scientific record and are crucial to develop an accurate evidence base."
At The Conversation, Andrea Saltelli's Science in crisis: from the sugar scam to Brexit, our faith in experts is fading is a worthwhile and well-linked survey of the credibility attrition caused by misaligned incentives in science.