In a paper by Nicholas Bloom, Charles Jones and Michael Webb of Stanford University, and John Van Reenen of the Massachusetts Institute of Technology (MIT), the authors note that even as discovery has disappointed, real investment in new ideas has grown by more than 4% per year since the 1930s. Digging into particular targets of research—to increase computer processing power, crop yields and life expectancy—they find that in each case maintaining the pace of innovation takes ever more money and people.

Follow me below the fold for some commentary on a number of the other papers they cite.
Annoyingly, Free Exchange does not link to the works they cite. I have taken the liberty of inserting links to what I think are the works in question, flagged with an asterisk.
At first sight, the problem of falling research productivity is like the "high energy physics" problem: after a while all the experiments at a given energy level have been done, and getting to the next energy level is bound to be a lot more expensive and difficult each time.
But Free Exchange takes a different approach:
it is worth considering that it may be the motivation we provide our innovators, rather than a shortage of ideas, that is the problem.

Although they start with yet another version of the "high energy physics" problem:
Benjamin Jones of Northwestern University* has found that the average age at which great scientists and inventors produce their most important work rose by six years over the course of the 20th century, thanks to the need for more early-life investment in education. But although important thinkers begin their careers later than they used to, they are no more productive later in life. Education, while critical to discovery, shortens the working lives of great scientists and inventors.

They note the disincentives provided by the legacy academic publishing oligopoly:
Intellectual-property protections make it more difficult for others to make their own contributions by building on prior work. Barbara Biasi of Yale University and Petra Moser of New York University* studied the effects of an American wartime policy that allowed domestic publishers to freely print copies of German-owned science books. English-language citations of the newly abundant works subsequently rose by 67%.

Despite being easily manipulated, citation counts have become a nearly universal metric for research quality, but:
Such metrics probably push research in a more conservative direction. While novel research is more likely to be cited when published, it is also far more likely to prove a dead end—and thus to fail to be published at all. Career-oriented researchers thus have a strong incentive to work towards incremental advances rather than radical ones.

An example is the "least publishable unit" problem. There is also the related problem that in most cases, to get funding, a researcher needs to persuade their funder that the proposed research will be successful (i.e. be published and generate citations):
Pierre Azoulay of MIT, Gustavo Manso of the University of California, Berkeley, and Joshua Graff Zivin of the University of California, San Diego*, find that medical researchers funded by project-linked grants, like those offered by the National Institutes of Health, an American government research centre, often pursue less ambitious projects, and thus produce breakthrough innovations at a much lower rate, than researchers given open-ended funding.

In Identifiers: A Double-Edged Sword I wrote about a classic example of the rewards for open-ended funding:
The physicist G. I. Taylor (my great-uncle) started in 1909 with the classic experiment which showed that interference fringes were still observed at such low intensity that only a single photon at a time was in flight. The following year at age 23 he was elected a Fellow of Trinity College, and apart from a few years teaching, he was able to pursue research undisturbed by any assessment for the next 6 decades. Despite this absence of pressure, he was one of the 20th century's most productive scientists, with four huge volumes of collected papers over a 60-year career.

I should have noted that the reason there were only "a few years teaching" was that in 1923 he was awarded a lifetime research fellowship by the Royal Society. Of course, Taylor would have scored highly in an assessment, but the assessment wouldn't have motivated him to be more productive. The lack of formal research assessment at Cambridge in those days was probably a hold-over from the previous century's culture of innovation in Britain:
Some economic historians, such as Joel Mokyr of Northwestern University*, credit cultural change with invigorating the innovative climate in industrialising Britain. A “culture of progress” made intellectual collaborators of commercial rivals, who shared ideas and techniques even as they competed to develop practical innovations. Changing culture is no easy matter, of course. But treating innovation as a noble calling, and not simply something to be coaxed from self-interested drudges, may be a useful place to start.

It is noteworthy that the success of the Open Source movement has largely been driven by "treating innovation as a noble calling, and not simply something to be coaxed from self-interested drudges".
A related article is David Rotman's We’re not prepared for the end of Moore’s Law. Rotman writes:
“It’s over. This year that became really clear,” says Charles Leiserson, a computer scientist at MIT and a pioneer of parallel computing, in which multiple calculations are performed simultaneously. The newest Intel fabrication plant, meant to build chips with minimum feature sizes of 10 nanometers, was much delayed, delivering chips in 2019, five years after the previous generation of chips with 14-nanometer features. Moore’s Law, Leiserson says, was always about the rate of progress, and “we’re no longer on that rate.”

The R&D is expensive, but a bigger contribution to the cost of the product is the factory to make it. Rotman writes:
...
For years the chip industry managed to evade these physical roadblocks. New transistor designs were introduced to better corral the electrons. New lithography methods using extreme ultraviolet radiation were invented when the wavelengths of visible light were too thick to precisely carve out silicon features of only a few tens of nanometers. But progress grew ever more expensive. Economists at Stanford and MIT have calculated that the research effort going into upholding Moore’s Law has risen by a factor of 18 since 1971.
Likewise, the fabs that make the most advanced chips are becoming prohibitively pricey. The cost of a fab is rising at around 13% a year, and is expected to reach $16 billion or more by 2022. Not coincidentally, the number of companies with plans to make the next generation of chips has now shrunk to only three, down from eight in 2010 and 25 in 2002.

This isn't news. Back in 2014 I reported on the Krste Asanović Keynote at FAST14:
Krste said that, right now, we probably have the cheapest transistors we're ever going to have. Scaling down from here is possible, but getting to be expensive. Future architectures cannot count on solving their problems by throwing more, ever-cheaper transistors at them.

The point is that Moore's Law enabled the cost of a state-of-the-art chip to remain roughly constant while the transistor count increased exponentially; that is, the cost per transistor dropped exponentially. Although the three remaining fabs believe they can make the transistors smaller, unless the transistors continue to get cheaper the customers won't notice.
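To make that arithmetic concrete, here is a minimal sketch in Python, using round illustrative numbers of my own choosing (not figures from Rotman or the fabs): a roughly constant chip price combined with a transistor count that doubles every two years implies a cost per transistor that halves every two years.

    # Minimal sketch: if the chip price stays roughly constant while the
    # transistor count doubles every two years, the cost per transistor
    # halves every two years. All numbers below are illustrative assumptions.

    CHIP_PRICE = 2_000.0        # dollars per chip, assumed constant
    INITIAL_TRANSISTORS = 5e10  # transistors in the year-0 chip (assumption)
    DOUBLING_PERIOD = 2         # years per doubling of transistor count

    for year in range(0, 11, 2):
        transistors = INITIAL_TRANSISTORS * 2 ** (year / DOUBLING_PERIOD)
        cost_per_transistor = CHIP_PRICE / transistors
        print(f"year {year:2d}: {transistors:.1e} transistors, "
              f"${cost_per_transistor:.1e} per transistor")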
Rotman quotes Jim Keller of Intel:
He says Intel is on pace for the next 10 years, and he will happily do the math for you: 65 billion (number of transistors) times 32 (if chip density doubles every two years) is 2 trillion transistors. “That’s a 30 times improvement in performance,” he says, adding that if software developers are clever, we could get chips that are a hundred times faster in 10 years.

A top-end Intel CPU currently costs around $2e3 in 1e3 quantities. Say it has 5e10 transistors; each then costs around $4e-8. Now suppose transistors didn't get any cheaper for 10 years. The top-end chip with 2e12 transistors would cost $8e4, or $80K. A CPU at that price would really change the economics of computing.
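A back-of-the-envelope version of that calculation, using the round numbers above (all of them assumptions for the sake of illustration, not Intel figures):

    # Back-of-the-envelope version of the calculation in the text.
    # All inputs are round-number assumptions, not figures from Intel.

    chip_price_today = 2e3     # dollars for a top-end CPU in 1e3 quantities
    transistors_today = 5e10   # assumed transistor count of that CPU
    cost_per_transistor = chip_price_today / transistors_today  # $4e-8

    # Jim Keller's projection: 65 billion times 32, roughly 2e12 transistors
    transistors_in_10_years = 2e12

    # If the cost per transistor stopped falling, the future chip would cost:
    future_chip_price = transistors_in_10_years * cost_per_transistor

    print(f"cost per transistor today: ${cost_per_transistor:.0e}")
    print(f"projected chip price: ${future_chip_price:,.0f}")  # about $80,000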
1 comment:
The Economist returns to the problem of research productivity in How to escape scientific stagnation:
"In a working paper published last year, Chiara Franzoni of the polimi Graduate School of Management and Paula Stephan of Georgia State University look at a number of measures of risk, based on analyses of text and the variability of citations. These suggest science’s reward structure discourages academics from taking chances. The most common way research is funded, through peer review—in which academics in similar fields score proposals—deserves some blame. In 2017, using a data set of almost 100,000 nih grant applications, Danielle Li, then of Harvard University, found that reviewers seem to favour ideas similar to their own expertise. If a project must satisfy a committee, it is not surprising that unorthodox ideas struggle to make it through."
And:
"Another approach in vogue is to fund “people not projects”. Most conventional grants fund specific projects for a specific amount of time, usually a few years, which researchers worry prevents them from pivoting to new ideas when old ones do not work out and fails to allot enough time for risky ones to come to fruition. A study in 2011 compared researchers at the Howard Hughes Medical Institute, where they are granted considerable flexibility over their research agendas and lots of time to carry out investigations, with similarly accomplished ones funded by a standard nih programme. The study found that researchers at the institute took more risks. As a result, they produced nearly twice as much highly cited work, as well as a third more “flops” (articles with fewer citations than their previously least-cited work)."
My great-uncle G. I. Taylor, who is regarded as one of the most productive physicists, is an example. At 24 he was awarded a fellowship at Trinity, allowing him to work on pretty much whatever he wanted. At 37 he was awarded a lifetime Royal Society research professorship, which meant he didn't even have to teach. He continued to produce high-quality papers for another 46 years, ending 60 years from his first, groundbreaking paper Interference fringes with feeble light.