In a paper by Nicholas Bloom, Charles Jones and Michael Webb of Stanford University, and John Van Reenen of the Massachusetts Institute of Technology (MIT), the authors note that even as discovery has disappointed, real investment in new ideas has grown by more than 4% per year since the 1930s. Digging into particular targets of research—to increase computer processing power, crop yields and life expectancy—they find that in each case maintaining the pace of innovation takes ever more money and people.

Follow me below the fold for some commentary on a number of the other papers they cite.
Annoyingly, Free Exchange does not link to the works it cites. I have taken the liberty of inserting links to what I believe are the works in question, flagged with an asterisk.
At first sight, the problem of falling research productivity resembles the "high energy physics" problem: after a while all the experiments at a given energy level have been done, and getting to the next energy level is a lot more expensive and difficult each time.
But Free Exchange takes a different approach:
it is worth considering that it may be the motivation we provide our innovators, rather than a shortage of ideas, that is the problem.

Although they start with yet another version of the "high energy physics" problem:
Benjamin Jones of Northwestern University* has found that the average age at which great scientists and inventors produce their most important work rose by six years over the course of the 20th century, thanks to the need for more early-life investment in education. But although important thinkers begin their careers later than they used to, they are no more productive later in life. Education, while critical to discovery, shortens the working lives of great scientists and inventors.

They note the disincentives provided by the legacy academic publishing oligopoly:
Intellectual-property protections make it more difficult for others to make their own contributions by building on prior work. Barbara Biasi of Yale University and Petra Moser of New York University* studied the effects of an American wartime policy that allowed domestic publishers to freely print copies of German-owned science books. English-language citations of the newly abundant works subsequently rose by 67%.

Despite being easily manipulated, citation counts have become a nearly universal metric for research quality, but:
Such metrics probably push research in a more conservative direction. While novel research is more likely to be cited when published, it is also far more likely to prove a dead end—and thus to fail to be published at all. Career-oriented researchers thus have a strong incentive to work towards incremental advances rather than radical ones.

Consider, for example, the "least publishable unit" problem. There is also the related problem that, in most cases, to get funding a researcher needs to persuade their funder that the proposed research will be successful (i.e. be published and generate citations):
Pierre Azoulay of MIT, Gustavo Manso of the University of California, Berkeley, and Joshua Graff Zivin of the University of California, San Diego*, find that medical researchers funded by project-linked grants, like those offered by the National Institutes of Health, an American government research centre, often pursue less ambitious projects, and thus produce breakthrough innovations at a much lower rate, than researchers given open-ended funding.

In Identifiers: A Double-Edged Sword I wrote about a classic example of the rewards for open-ended funding:
The physicist G. I. Taylor (my great-uncle) started in 1909 with the classic experiment which showed that interference fringes were still observed at such low intensity that only a single photon at a time was in flight. The following year at age 23 he was elected a Fellow of Trinity College, and apart from a few years teaching, he was able to pursue research undisturbed by any assessment for the next 6 decades. Despite this absence of pressure, he was one of the 20th century's most productive scientists, with four huge volumes of collected papers over a 60-year career.

I should have noted that the reason there were only "a few years teaching" was that in 1923 he was awarded a lifetime research fellowship by the Royal Society. Of course, Taylor would have scored highly in an assessment, but the assessment wouldn't have motivated him to be more productive. The lack of formal research assessment at Cambridge in those days was probably a hold-over from the previous century's culture of innovation in Britain:
Some economic historians, such as Joel Mokyr of Northwestern University*, credit cultural change with invigorating the innovative climate in industrialising Britain. A “culture of progress” made intellectual collaborators of commercial rivals, who shared ideas and techniques even as they competed to develop practical innovations. Changing culture is no easy matter, of course. But treating innovation as a noble calling, and not simply something to be coaxed from self-interested drudges, may be a useful place to start.

It is noteworthy that the success of the Open Source movement has largely been driven by "treating innovation as a noble calling, and not simply something to be coaxed from self-interested drudges".
A related article is David Rotman's We’re not prepared for the end of Moore’s Law. Rotman writes:
“It’s over. This year that became really clear,” says Charles Leiserson, a computer scientist at MIT and a pioneer of parallel computing, in which multiple calculations are performed simultaneously. The newest Intel fabrication plant, meant to build chips with minimum feature sizes of 10 nanometers, was much delayed, delivering chips in 2019, five years after the previous generation of chips with 14-nanometer features. Moore’s Law, Leiserson says, was always about the rate of progress, and “we’re no longer on that rate.”

The R&D is expensive, but a bigger contribution to the cost of the product is the factory to make it. Rotman writes:
For years the chip industry managed to evade these physical roadblocks. New transistor designs were introduced to better corral the electrons. New lithography methods using extreme ultraviolet radiation were invented when the wavelengths of visible light were too thick to precisely carve out silicon features of only a few tens of nanometers. But progress grew ever more expensive. Economists at Stanford and MIT have calculated that the research effort going into upholding Moore’s Law has risen by a factor of 18 since 1971.
Likewise, the fabs that make the most advanced chips are becoming prohibitively pricey. The cost of a fab is rising at around 13% a year, and is expected to reach $16 billion or more by 2022. Not coincidentally, the number of companies with plans to make the next generation of chips has now shrunk to only three, down from eight in 2010 and 25 in 2002.

This isn't news. Back in 2014 I reported on the Krste Asanović Keynote at FAST14:
Krste said that, right now, we probably have the cheapest transistors we're ever going to have. Scaling down from here is possible, but getting to be expensive. Future architectures cannot count on solving their problems by throwing more, ever-cheaper transistors at them.

The point is that Moore's Law enabled the cost of a state-of-the-art chip to remain roughly constant while the transistor count increased exponentially; that is, the cost per transistor dropped exponentially. Although the three remaining fabs believe they can make the transistors smaller, unless the transistors also keep getting cheaper the customers won't notice.
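The economics here can be sketched in a few lines. This is a minimal illustration, not data: the chip price and starting transistor count below are hypothetical round figures, chosen only to show how a constant chip price plus a two-year doubling period yields an exponentially falling cost per transistor.

```python
# Sketch of Moore's-Law economics under illustrative assumptions:
# the chip price stays roughly constant while the transistor count
# doubles every two years, so the cost per transistor halves every
# two years. All numbers are hypothetical round figures.

CHIP_PRICE = 2_000.0       # dollars per chip, assumed constant
START_TRANSISTORS = 2e9    # assumed transistor count in year 0
DOUBLING_PERIOD = 2        # years per transistor-count doubling

def cost_per_transistor(year: float) -> float:
    """Dollars per transistor after `year` years of scaling."""
    transistors = START_TRANSISTORS * 2 ** (year / DOUBLING_PERIOD)
    return CHIP_PRICE / transistors

for year in (0, 10, 20):
    print(f"year {year:2d}: ${cost_per_transistor(year):.2e} per transistor")
```

Under these assumptions, ten years (five doublings) cuts the per-transistor cost by a factor of 32 even though the chip costs the same, which is why a stalled cost-per-transistor curve matters more to customers than feature size alone.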
Rotman quotes Jim Keller of Intel:
He says Intel is on pace for the next 10 years, and he will happily do the math for you: 65 billion (number of transistors) times 32 (if chip density doubles every two years) is 2 trillion transistors. “That’s a 30 times improvement in performance,” he says, adding that if software developers are clever, we could get chips that are a hundred times faster in 10 years.

A top-end Intel CPU currently costs around $2e3 in 1e3 quantities. Say it has 5e10 transistors; each then costs around $4e-8. Say transistors didn't get any cheaper for 10 years. The top-end chip with 2e12 transistors would cost $8e4, or $80K. That CPU cost would really change the economics of computing.
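The arithmetic can be checked directly. The figures below are the rough, illustrative numbers from the paragraph above (a $2e3 chip price and a 5e10 transistor count are assumptions, not measurements):

```python
# Check of the back-of-the-envelope estimate above, using the
# rough figures from the text (assumptions, not measured values).

chip_price = 2e3           # dollars for a top-end CPU in 1e3 quantities
transistors_now = 5e10     # assumed current transistor count

cost_per_transistor = chip_price / transistors_now
print(f"cost per transistor: ${cost_per_transistor:.0e}")  # $4e-08

# Keller's projection: roughly 2 trillion transistors in ten years.
transistors_future = 2e12

# If transistors stop getting cheaper, chip price scales linearly
# with transistor count:
future_price = transistors_future * cost_per_transistor
print(f"future chip price: ${future_price:,.0f}")          # $80,000
```

The same two lines, run with a per-transistor cost that keeps halving every two years, would instead leave the future chip near today's price, which is the scenario Moore's Law used to deliver.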