The Romer Model turns 25

25 years ago this month, Paul Romer’s paper “Endogenous Technological Change” was published in the Journal of Political Economy. With over 20,000 citations, it is one of the most influential economics papers of that period. The short version of what that paper did was to provide a fully specified model whereby technological change (i.e., the growth of productivity) was driven not by outside (or exogenous) forces but, instead, by the allocation of resources to knowledge creation and with a complete description of the incentives involved that provided for that allocation. Other papers had attempted this in the past (as outlined in David Warsh’s great book of 2006) and others provided alternatives at the same time (including Aghion, Howitt, Grossman, Helpman, Acemoglu and Weitzman), but Romer’s model became the primary engine that fuelled a decade-long re-examination of long-term growth in economics; a re-examination that I was involved in back in my student days.

More recently, Romer himself has taken on others who continue to provide models of endogenous economic growth (most notably Robert Lucas) for not building on the work, his own and others’, that grounded the new growth theory in imperfect competition, and for instead trying to formulate models based on perfect competition. I don’t want to revisit that issue here, but I do want to note that “The Romer Model” is decidedly non-mathy. As a work of theoretical scholarship, every equation and assumption is carefully justified. The paper is laid out with as much text as there is mathematics. And in the end, you know how the model works, why it works and what drives its conclusions. Little wonder that it was immediately testable, and economists such as Chad Jones put it to the test, arguing that what the data supported was not the Romer Model itself but an amended version of it with some potentially different policy implications.

In celebration of this anniversary, let me provide here my take on why the Romer Model was such a milestone and also on why the research agenda seemed to peter out somewhat thereafter. This is not to say that it can’t be revived, just that it seems to be on somewhat of a pause.

The Romer Model’s central premise is that the growth of knowledge is cumulative. New knowledge builds on past knowledge. This is what makes knowledge (or ideas) different from physical capital. The way in which knowledge provides a foundation for the production of future knowledge is inherently non-rival. This is in contrast to the way in which new knowledge impacts short-term production. While it is also, in principle, non-rival there, in the short run there are ways in which inventors of new ideas can exclude others from profiting from them. Indeed, that is a core assumption of the Romer model: an inventor can get an infinitely long-lived patent that prevents anyone from directly using their idea in production. The inventor then provides an exclusive license of that idea to a firm that pays a fee (literally an entry fee) to launch a new good into the market. Because that patent is iron-clad and the license is exclusive, that firm is the only provider of the good forever. This is a model in which the market for ideas works very well, because it is the prospect of the returns from that license fee that drives resources (in the Romer model, human capital) to allocate their energies to the production of new ideas.
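To give a flavour of that incentive (the notation here is mine, and this is a stylised, textbook-style sketch rather than a quote from the paper), the price an entrant will pay for an exclusive license on a new design equals the present discounted value of the monopoly profits it expects to earn from the associated good,

\[ P_A(t) = \int_t^{\infty} e^{-\bar{r}(\tau - t)}\, \pi(\tau)\, d\tau , \]

and human capital flows into research up to the point where the value of the designs an extra researcher can produce just matches what that researcher could earn producing final goods.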

But it is not all plain sailing. A new idea faces competition. For starters, there are goods based on older ideas still around (they never go away) and so there is imperfect competition there (specifically, monopolistic competition). That is a critical assumption because if that competition were perfect, no one would pay a license fee for a new patent. In addition, agents are far-sighted and so recognise that the economy is not done yet (new ideas will come in the future), so there is future competition as well. All the while, these new goods are not final goods but intermediate capital goods, and so with each new one, final-good productivity is rising. The economy is getting richer.
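For concreteness (again a sketch in my own notation rather than the paper’s exact statement), final output in the Romer model is produced from labour, human capital and the whole range of intermediate capital goods invented so far,

\[ Y = H_Y^{\alpha} L^{\beta} \int_0^{A} x(i)^{1-\alpha-\beta}\, di , \]

so that, holding the total quantity of intermediates fixed, spreading it across a larger range \( A \) of varieties raises output. Each new idea makes the final-goods sector more productive even without any increase in raw inputs.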

This looks like a recipe for a ‘heat death’ back to zero knowledge growth. After all, if nothing else changed, that future competition would seem to drive patent fees, in the limit, back towards what they would be under perfect competition. That does not happen because it gets easier to produce new ideas: each new idea can tap into all of the knowledge previously generated. An inventor cannot stop that process (an assumption I’ll come back to), and what Romer shows is that a virtuous cycle is possible, with productivity growth fuelling demand for new goods and hence the continual allocation of human capital to ideas production. Moreover, that cycle is (a) persistent, with income growing faster than the population itself, and (b) still socially suboptimal, because the private returns from idea generation are less than the social returns that take into account cumulativeness, the central premise the model began with.
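A one-line summary of that cycle (again in my notation, as a stylised sketch) is the ideas production function,

\[ \dot{A} = \delta H_A A , \]

where \( H_A \) is the human capital allocated to research. Because the existing stock of knowledge \( A \) multiplies researchers’ output, knowledge grows at a constant rate \( g = \delta H_A \) so long as a constant amount of human capital remains in research. Chad Jones’s amendment, mentioned above, in effect replaces \( A \) with \( A^{\phi} \) for \( \phi < 1 \), which weakens this feedback and is what generates the potentially different policy implications.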

All this meant that we had a model that provided a consistent and convincing story as to why growth in per capita income persists (in contrast to the Solow model) and also what the role for government would be in promoting long-run economic growth: namely, strengthening the private appropriability of knowledge production (most likely through strong intellectual property protection) and ensuring a sufficient supply of scientific and engineering talent (through education programs and immigration). While much subsequent work built on, twisted and pulled apart the Romer model, these central messages never changed, and they became a somewhat entrenched basis for such policies.

So why has work in this area somewhat petered out? First of all, while the Solow model allowed a new industry in the measurement of productivity to emerge, one that could inform policy on quantitative parameters, the Romer model did not generate a similar endeavour. This is possibly because, consistent with the macroeconomic intellectual requirements of the day, the Romer model had to be micro-founded. That specificity came at the cost of thinking about aggregate data and mapping them into an empirical framework. In sum, we got no new measures of the rates of return to policy and, with that, lacked the quantitative evidence to support the strong qualitative predictions.

Second, one thing the Romer model abstracted away from was the richness of the microeconomy that gives rise to new ideas and also to their dissemination. For at the heart of the Romer model is a contradiction: new ideas are created and carry a patent that allows them to be commercialised, yet at the same time each new idea becomes a foundation for future competition against that patent. How does that process happen? Romer did not say much, but suggested that because patents require disclosures this was how the cat got out of the bag. This, however, seems too simplistic, because idea creators have an incentive to ensure such disclosures are kept to a minimum. This means that the process by which ideas feed into a stock of disclosed knowledge that will, in turn, drive cumulativeness is important. In other words, to understand growth surely we need to understand what incentives there are to add to, and also to withhold from, the pool of knowledge that can help future idea creators. This is something that I have explored in a recent paper with Fiona Murray and Scott Stern, but it is also at the heart of empirical work by Jeff Furman and Scott Stern on biological resource centres and by Bhaven Sampat and Heidi Williams on patents. The point here is that the Romer model subsumed the study of institutions for knowledge creation and dissemination, and hence the focus drifted away from institutions that economic historians have long demonstrated are at the heart of economic growth. Interestingly, the framework of the Romer model surely could be amended to bring these factors back into the fold.

In summary, the Romer model was a milestone and led to much progress. It is a stunningly beautiful work of economic theory. But there is more to be done, and my hope is that we will see that happen in the future, as the cumulative process that drives new knowledge can drive new economic knowledge as well.

18 Replies to “The Romer Model turns 25”

  1. I’m not familiar with this literature, but in your description a lot seems to turn on the way incentives translate into technology change. I find it very difficult to assimilate most of the examples I know about to the kind of model you describe. The very next blog post I read contains a good example:

    Next, hopefully by Stan 2.10, we’ll have a stiff solver and maybe a way for users to supply analytic coupled-system gradients and Jacobians. Stay tuned. These new designs are largely being guided by Sebastian Weber at Alcon (a Novartis subsidiary) and Wenping Wang at Novartis. And of course, by Michael Betancourt working out all the math and Daniel, Michael, and I working out the code with Sebastian’s and Wenping’s input. (http://andrewgelman.com/2015/10/03/comparing-waic-or-loo-or-any-other-predictive-error-measure/)

    So various contributors are adding to openly available intellectual property. Novartis, which lives and dies by patents, has no interest in “owning” this particular innovation.

    There are incentives at work here to be sure, but at least the direct incentives are non-monetary, and quite generally they don’t depend on controlling who uses the innovation.

    I would very much appreciate pointers to economic models that incorporate this major aspect of the process of technology change.

    1. If you want a book-length, non-technical treatment of the ideas in the Romer paper, I would suggest David Warsh’s lovely book “Knowledge and the Wealth of Nations” from 2006. Paul Krugman, whose ideas on economic geography are discussed in the book, reviewed it for the NY Times.

  2. “The short version of what that paper did was to provide a fully specified model whereby technological change (i.e., the growth of productivity) was driven not by outside (or exogenous) forces but, instead, by the allocation of resources to knowledge creation and with a complete description of the incentives involved that provided for that allocation.”

    This is a good example of an interesting point in economics. Romer’s already won the John Bates Clark Medal, and there’s a good chance he’ll win the Nobel Prize, which I really hope he does. But say he does win the Nobel Prize. In undergrad econ courses and newspaper articles, professors and journalists are going to say Paul Romer won the Nobel Prize for discovering that the rate of technological change depends on how much we invest in knowledge creation, and on the incentives for knowledge creation. To which I’d reply: before I had any advanced economics, I knew that when I was four. He got a Nobel Prize for discovering something that obvious?

    But, no, not at all. It’s important to make clear that he got it for putting this into a really nice and useful mathematical model for understanding the important particulars of how this works, what the effects are across an economy, any feedback effects, what good ways there are to go about advancing technology, how some of the effects might be quantified, etc. (and, of course, there’s the mathematical check for internal consistency).

    This kind of good modelling is difficult, and takes a great deal of knowledge and skill that goes way beyond a basic idea that’s commonly known and not hard to understand.

  3. A patent is a communications mechanism. It preserves the ability to profit from an abstract idea, while insisting on the disclosure of one instance of the realized idea. This is where the leakage happens.
