Scholarly publishing

News from Management Science (Business Strategy)

I recently took over from Bruno Cassiman as the Department Editor for the Business Strategy section at Management Science. This seemed like a good opportunity to reflect on some changes being made — that is, implementing some of the ideas in Scholarly Publishing and its Discontents — as well as some things I have learned that may assist you if you are thinking of submitting here.

Expanded Editorial Board

At Management Science, referee selection and the main decision are handled by Associate Editors. I wanted to expand that board for Business Strategy and was fortunate to find many people willing to serve. Here is the new board.

Juan Alcácer, Harvard University
Kevin Boudreau, Northeastern University
Jennifer Brown, University of British Columbia
Meghan Busse, Northwestern University
Leemore Dafny, Harvard University
Andrea Fosfuri, Bocconi University
Maria Guadalupe, INSEAD
Neil Gandal, Tel Aviv University
Mara Lederman, University of Toronto
Hong Luo, Harvard University
Fiona Murray, Massachusetts Institute of Technology
Evan Rawley, University of Minnesota
Michael Ryall, University of Toronto
Rachelle Sampson, University of Maryland, College Park
Timothy Simcoe, Boston University
Catherine Thomas, London School of Economics
Rosemarie Ziedonis, Boston University

New Editorial Statement

There has been much concern about whether top journals are taking sufficient risks. So I wanted the editorial statement to weight ‘scientific impact’ more heavily than ‘immediate managerial application.’

Here is the new statement:

The Business Strategy department seeks papers with research questions that deepen our understanding of business performance in competitive contexts. The department is interested in rigorous analyses that show how managerial choices impact performance, broadly construed, with special attention paid to persistent differences amongst competitors. Because we define strategic choices as those with significant competitive implications, the department will eschew papers that focus primarily on internal functions (e.g., finance or marketing), but welcome studies that link firm organization to market performance.

The primary criterion for consideration for publication in Management Science is the potential for impact on future study. This means that the research must conform to rigorous standards of quality in both theory development and empirical methodology and execution. We are agnostic as to the disciplinary origins of analysis. Finally, as the criterion is the potential for impact, this means that significant contributions may very well raise more questions than they answer and generate challenging and controversial findings with respect to the existing literature.

Note also that it tries to be clearer about where business strategy ends and other areas (which are well represented at Management Science) begin. In particular, we want submissions to be business focused. I know that sometimes, for instance, economists submit to Management Science when they miss out at an economics journal. That can be fine, but I also know an economics paper when I see one (and I have written plenty) and won’t be looking for econ discards.

Efficient Decisions

These days, managing submissions and ensuring timely decisions is very challenging. Over the past five years, submissions to Management Science have increased by almost fifty percent. At the same time, we want review rounds to take under three months and also to tighten up the time from initial submission to publication. Sometimes that can be over four years! That is just unacceptable.

To that end, there are two things we will be aiming for. First, taking cues from the Quarterly Journal of Economics, I will be ensuring that desk rejections are swift. They will also likely be the majority of decisions. There is plenty of evidence that the error rate from such policies is low.

Your best bet for avoiding a desk rejection is to state in a cover letter what your contribution to business strategy is, taking into account what is written in the editorial statement. Also, don’t have titles with more than four buzzwords. Believe it or not, it happens a lot and is a tell for me.

Second, we are going to aim for one round of revisions. At present, papers can go through many rounds. The goal is for the second submission to face a straight up-or-down, accept/reject decision.

Authors Own Papers

Authors own papers. Not editors and certainly not referees. Too often this is not reflected in how revisions play out. You get the following …

[Image: peer review comic]

If you get a revise and resubmit from me, it will have two parts. It will list (a) the things you must change because they are needed to clarify the contribution, ensure it is correct, and properly represent the past literature, and (b) things that are a matter of taste, which you may choose to change but need not. In addition, my inbox will always be open for mid-revision clarifications. No more of this arms-length dealing.

What do I like?

Finally, let me point to some papers that have been published in Management Science that I would like to see more of. Each asked a significant question, and the answer led (or will lead) to more research on the topic.

Bruno Cassiman and Masako Ueda (2006), “Optimal project rejection and new firm start-ups”

Sharon Novak and Scott Stern (2008), “How does outsourcing affect performance dynamics? Evidence from the automobile industry”

Hong Luo (2014), “When to sell your idea: Theory and Evidence from the Movie Industry”

Eric van den Steen (2017), “A formal theory of strategy”

artificial intelligence

AI leads to reward function engineering

[co-authored with Ajay Agrawal and Avi Goldfarb; originally published on HBR.org on 26th July 2017]

With the recent explosion in AI, there has been the understandable concern about its potential impact on human work. Plenty of people have tried to predict which industries and jobs will be most affected, and which skills will be most in demand. (Should you learn to code? Or will AI replace coders too?)

Rather than trying to predict specifics, we suggest an alternative approach. Economic theory suggests that AI will substantially raise the value of human judgment. People who display good judgment will become more valuable, not less. But to understand what good judgment entails and why it will become more valuable, we have to be precise about what we mean.

What AI does and why it’s useful

Recent advances in AI are best thought of as a drop in the cost of prediction. By prediction, we don’t just mean the future—prediction is about using data that you have to generate data that you don’t have, often by translating large amounts of data into small, manageable amounts. For example, using images divided into parts to detect whether or not the image contains a human face is a classic prediction problem. Economic theory tells us that as the cost of machine prediction falls, machines will do more and more prediction.

Prediction is useful because it helps improve decisions. But it isn’t the only input into decision-making; the other key input is judgment. Consider the example of a credit card network deciding whether or not to approve each attempted transaction. The network wants to allow legitimate transactions and decline fraud. It uses AI to predict whether each attempted transaction is fraudulent. If such predictions were perfect, the network’s decision process would be easy: decline if and only if the transaction is fraudulent.

However, even the best AIs make mistakes, and that is unlikely to change anytime soon. The people who have run the credit card network know from experience that there is a trade-off between detecting every case of fraud and inconveniencing the user. (Have you ever had a card declined when you tried to use it while traveling?) And since convenience is the whole credit card business, that trade-off is not something to ignore.

This means that to decide whether to approve a transaction, the credit card network has to know the cost of mistakes. How bad would it be to decline a legitimate transaction? How bad would it be to allow a fraudulent transaction?

Someone at the credit card network needs to assess how the entire organization is affected when a legitimate transaction is denied. They need to trade that off against the effects of allowing a transaction that is fraudulent. And that trade-off may be different for high net worth individuals than for casual card users. No AI can make that call. Humans need to do so. This decision is what we call judgment.

What judgment entails

Judgment is the process of determining what the reward to a particular action is in a particular environment.

Judgment is how we work out the benefits and costs of different decisions in different situations.

Credit card fraud is an easy decision to explain in this regard. Judgment involves determining how much money is lost in a fraudulent transaction, how unhappy a legitimate customer will be when a transaction is declined, as well as the reward for doing the right thing and allowing good transactions and declining bad ones. In many other situations, the trade-offs are more complex, and the payoffs are not straightforward. Humans learn the payoffs to different outcomes by experience, making choices and observing their mistakes.
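To make that concrete, here is a minimal sketch, entirely my own illustration rather than anything from the article, of how payoffs assigned by human judgment combine with the AI’s predicted probability of fraud. The dollar figures are hypothetical.

```python
# A minimal sketch (hypothetical numbers): once judgment has priced the four
# possible outcomes, the approve/decline call reduces to an expected-value
# comparison using the AI's predicted probability of fraud.

PAYOFFS = {
    "approve_legit": 2.0,     # fee revenue on a good transaction
    "approve_fraud": -500.0,  # money lost to fraud
    "decline_legit": -20.0,   # annoyed customer, possibly a lost account
    "decline_fraud": 0.0,     # fraud correctly blocked
}

def decide(p_fraud: float, payoffs: dict = PAYOFFS) -> str:
    """Approve if the expected payoff of approving beats declining."""
    ev_approve = (1 - p_fraud) * payoffs["approve_legit"] + p_fraud * payoffs["approve_fraud"]
    ev_decline = (1 - p_fraud) * payoffs["decline_legit"] + p_fraud * payoffs["decline_fraud"]
    return "approve" if ev_approve >= ev_decline else "decline"

if __name__ == "__main__":
    for p in (0.001, 0.01, 0.05, 0.20):
        print(f"p(fraud)={p:.3f} -> {decide(p)}")
```

With these made-up payoffs the break-even point sits at roughly a four percent fraud probability; change the payoffs, say for a high net worth customer, and the threshold moves, which is exactly the judgment the prediction machine cannot supply.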

Getting the payoffs right is hard. It requires an understanding of what your organization cares about most, what it benefits from, and what could go wrong.

In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They’ll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions.

But couldn’t AI calculate costs and benefits itself? In the credit card example, couldn’t AI use customer data to consider the trade-off and optimize for profit? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.

Setting the right rewards

Like people, AIs can also learn from experience. One important technique in AI is reinforcement learning, whereby a computer is trained to take actions that maximize a certain reward function. For instance, DeepMind’s AlphaGo was trained this way to maximize its chances of winning the game of Go. This method of learning is often easy to apply to games because the reward can be easily described and programmed, shutting the human out of the loop.

But games can be cheated. As Wired reports, when AI researchers trained an AI to play the boat-racing game CoastRunners, the AI figured out how to maximize its score by going around in circles rather than completing the course as intended. One might consider this a kind of ingenuity, but when it comes to applications beyond games this sort of ingenuity can lead to perverse outcomes.
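To see how this happens, here is a toy sketch of reinforcement learning, my own construction rather than anything from the CoastRunners work, on a made-up ‘race’ where circling past a respawning target pays a small reward every step while finishing pays a one-off bonus. The environment, rewards, and parameters are all hypothetical.

```python
# Tabular Q-learning on a hypothetical 5-step race track. The reward function
# pays 1 per step for circling (hitting a respawning target) and 10 for
# finishing. Because the designer's reward differs from the true objective
# (finish the race), the learned policy circles instead of racing.
import random

TRACK_LEN, HORIZON = 5, 50
ACTIONS = ("forward", "circle")

def step(pos, action):
    if action == "circle":
        return pos, 1.0, False        # small reward, no progress
    pos += 1
    if pos >= TRACK_LEN:
        return pos, 10.0, True        # finish-line bonus ends the episode
    return pos, 0.0, False

def q_learning(episodes=5000, alpha=0.1, gamma=0.99, eps=0.1):
    q = {(p, a): 0.0 for p in range(TRACK_LEN + 1) for a in ACTIONS}
    for _ in range(episodes):
        pos, done, t = 0, False, 0
        while not done and t < HORIZON:
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda x: q[(pos, x)])
            nxt, reward, done = step(pos, a)
            best_next = 0.0 if done else max(q[(nxt, b)] for b in ACTIONS)
            q[(pos, a)] += alpha * (reward + gamma * best_next - q[(pos, a)])
            pos, t = nxt, t + 1
    return q

q = q_learning()
print("greedy action at the start line:", max(ACTIONS, key=lambda a: q[(0, a)]))
# Prints "circle": the agent maximizes the reward it was given, not the race.
```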

The key point from the CoastRunners example is that in most applications, the goal given to the AI differs from the true and difficult-to-measure objective of the organization. As long as that is the case, humans will play a central role in judgment, and therefore in organizational decision-making.

In fact, even if an organization is enabling AI to make certain decisions, getting the payoffs right for the organization as a whole requires an understanding of how the machines make those decisions. What types of prediction mistakes are likely? How might a machine learn the wrong message?

Enter Reward Function Engineering. As AIs serve up better and cheaper predictions, there is a need to think clearly and work out how to best use those predictions.

Reward Function Engineering is the job of determining the rewards to various actions, given the predictions made by the AI.  

Being great at it requires having an understanding of the needs of the organization and the capabilities of the machine. (And it is not the same as putting a human in the loop to help train the AI.)

Sometimes Reward Function Engineering involves programming the rewards in advance of the predictions so that actions can be automated. Self-driving vehicles are an example of such hard-coded rewards. Once the prediction is made, the action is instant. But as the CoastRunners example illustrates, getting the reward right isn’t trivial. Reward Function Engineering has to consider the possibility that the AI will over-optimize on one metric of success, and in doing so act in a way that’s inconsistent with the organization’s broader goals.

At other times, such hard-coding of the rewards is too difficult. There may be so many possible predictions that it is too costly for anyone to judge all the possible payoffs in advance. Instead, some human needs to wait for the prediction to arrive and then assess the payoff. This is closer to how most decision-making works today, whether or not it includes machine-generated predictions. Most of us already do some Reward Function Engineering, but for humans, not machines. Parents teach their children values. Mentors teach new workers how the system operates. Managers give objectives to their staff, and then tweak them to get better performance. Every day, we make decisions and judge the rewards. But when we do this for humans, prediction and judgment are grouped together, and the distinct role of Reward Function Engineering has not needed to be made explicit.

As machines get better at prediction, the distinct value of Reward Function Engineering will increase as the application of human judgment becomes central.

Overall, will machine prediction decrease or increase the amount of work available for humans in decision-making?  It is too early to tell.  On the one hand, machine prediction will substitute for human prediction in decision-making.  On the other hand, machine prediction is a complement to human judgment. And cheaper prediction will generate more demand for decision-making, so there will be more opportunities to exercise human judgment.  So, although it is too early to speculate on the overall impact on jobs, there is little doubt that we will soon be witness to a great flourishing of demand for human judgment in the form of Reward Function Engineering.

Entrepreneurship

The young entrepreneur myth?

Paul Graham famously said:

The cutoff in investors’ heads is 32 … after 32 they tend to be a little skeptical.

That apparently is the consensus view on what age you want the founders to be in order to generate successful returns.

What does the data say? According to a new paper by Pierre Azoulay, Ben Jones, Daniel Kim and Javier Miranda, the answer is nothing like that. (The paper isn’t available but the slides from the talk at the NBER Summer Institute have been posted).

Hold on a sec, my impression from Tech Crunch is that the award winners are pretty young.

Yep, around 31, or, if you look at Inc. and outlets like that, 29.

OK so what is it like for the US?

For new firms in the US between 2007 and 2014 (aka the Y-C years), the average founder age is 41.9.

OK but that is all firms. What about technology?

It is actually higher: 43.9, with VC-backed firms at 41.9 and patenting firms at 44.6.

Yeah but not in Silicon Valley surely?

Nope pretty much the same.

Alright but what about successful firms? That’s what the VCs care about.

In Silicon Valley, the ones with a successful exit have an average founding age of 47!

What are we missing then? 

Probably lots of stuff. These are just the basic data results; they haven’t yet been put through the wringer of modern econometrics. So it may be that we can work out what is at the bottom of this. But it sure is provocative at the moment and should give Silicon Valley investor types some food for thought. It should also cause us to take another look at whether encouraging 20-somethings with endless entrepreneurship programs is a good idea.

open source software

Y2K Everyday?

[This post appeared on HBR Online on 14th July 2017]

Almost 20 years have passed since the corporate world woke up to a long-standing problem in its computer code, which became known as Y2K. Over the previous decades, software developers had stored years as two digits, a convenient space-saving hack, and the shortcut was never taken out. So as 2000 loomed, there was a realization that, when the clocks hit midnight, software all over the world that could not tell 2000 from 1900 might simply stop running. Thankfully, at a cost of a few billion dollars, the software was audited and patched, and businesses went back to worrying about other things.

But at a recent workshop organized by the Ford and Sloan Foundations, I learned that Y2K-type concerns are far from over. And unlike Y2K itself, they are much harder to identify, let alone fix.

The base of all this is open-source code. Open source is where programmers share subroutines, protocols, and other software with each other and allow anyone to use it for virtually any purpose. It has arguably saved billions in development expenses by reducing duplicative effort and allowing complementary innovation without requiring permission or payment.

Open-source code resides everywhere. If you’ve hired a software developer, their work most likely contained code from the open-source community. The same goes for software programs. Let me take my favorite example, the Network Time Protocol, or NTP. Invented by David Mills of the University of Delaware, it is the protocol that has been keeping time on the internet for over 30 years. This is important because all computer systems require reliable time, even more so if they communicate with one another. This is how stock exchanges timestamp trades. In a world of high-frequency trading, imagine if there were no agreement as to what the time was. Chaos would reign.

You might think that time is a pretty stable thing. But it’s not. What we call “time” changes over time. Different countries set their clocks back or move them ahead, and every so often we have a leap second event that requires everyone to recognize an extra second at the same time. To add to that, time must be kept down to the millisecond, which means the server that houses time has to operate very precisely.
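For a sense of what NTP actually does, here is roughly what asking the public, volunteer-run NTP pool for the time looks like. The use of the third-party Python package ntplib and the pool.ntp.org host are my choices for illustration, not details from the workshop or from the protocol’s maintainers.

```python
# Query an NTP server and compare its time with the local clock.
# Requires the third-party package:  pip install ntplib
import ntplib
from datetime import datetime, timezone

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

# tx_time is the server's transmit timestamp; offset is the estimated gap
# between the local clock and the server's clock, in seconds.
print("server time:", datetime.fromtimestamp(response.tx_time, tz=timezone.utc))
print("local clock offset: %.3f seconds" % response.offset)
```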

Now for the scary part. What if I told you that the entire NTP relies on the sole effort of a 61-year-old who has pretty much volunteered his own time for the last 30 years? His name is Harlan Stenn, he lives in Oregon, in the United States, and he is so unknown that he does not even appear on the NTP Wikipedia page.

For a number of years Stenn has worked on a shoestring budget, putting in 100 hours a week patching code, including handling requests from big corporations like Apple. A look at the NTP homepage will give you a sense of the struggle; it looks like it comes from another era. The lack of resources has led to delays in fixing security issues, and to complaints. Not surprisingly, Stenn has become crankier:

“Yeah, we think these delays suck too.

“Reality bites – we remain severely underresourced for the work that needs to be done. You can yell at us about it, and/or you can work to help us, and/or you can work to get others to help us.”

NTP has had some donations, but its constant pleading for help is worrisome.

This is just one example. And in many ways, it is the easiest to understand and potentially fix. The fact that it hasn’t been is the bigger mystery.

Open-source code is embedded throughout all software. And since it interacts with other code and is constantly changing, it is not a set-it-and-forget-it deal. No software is static.

Last year we saw the consequences of this when a 28-year-old developer briefly “broke” the internet by deleting open-source code that he had made available. The drama occurred because the developer’s program shared a name with Kik, the popular Canadian messaging app, and there was a trademark dispute. The rest of the story is complicated but has an important takeaway: our digital infrastructure is very fragile.

There are people so important to maintaining code that the internet would break if they were hit by a bus. (Computer security folks literally call this the “bus factor.”) These people are well-meaning but tired and underfunded. And I haven’t even gotten to the fact that hard-to-maintain code is precisely where security vulnerabilities reside (just ask Ukraine).

All this makes Y2K look like a picnic, especially since the magnitude of these issues is unknown. Individual companies have no idea how vulnerable they might be. And the damage may be slow-moving: systems quietly being corrupted without producing visible crashes. Finally, since open-source platforms have been built by a community that has relished its independence, the problems won’t be easy to fix using traditional commercial or governmental approaches.

There are pioneers who are working on the problem. Open Collective is providing resources to aggregate the needs of groups of open-source projects to assist in financing, resourcing, and maintenance. Another organization, libraries.io, is doing a heroic job of indexing projects, including much-needed documentation and a map of relationships between projects. But none of these have support from the businesses most vulnerable to the issues.

When Y2K emerged, publicly listed companies were told to catalog their vulnerabilities and plans. The time has come again for markets (and perhaps regulators) to demand similar audits as a first step toward working out the magnitude of the problem. And maybe — just maybe — those corporations will find a way to support the infrastructure they are depending on, rather than taking it blindly as some unacknowledged gift. Every day is now Y2K.

Antitrust, facebook, Uncategorized

Is social graph portability workable?

In the New York Times, Luigi Zingales and Guy Rolnik propose to deal pre-emptively with market power issues arising from the likes of Google and Facebook by advocating social graph portability. Rather than use price regulation or antitrust, they propose a reallocation of property rights. As they note, this has happened before:

[I]n the mobile industry, most countries have established that a cellphone number belongs to a customer, not the mobile phone provider. This redefinition of property rights (in jargon called “number portability”) makes it easier to switch carriers, fostering competition by other carriers and reducing prices for consumers.

This got my attention because it was I (along with Stephen King and Graeme Woodbridge) who first proposed this solution to mobile network switching costs. Here is the original paper and here is a more formal treatment. Stephen King has gone on recently to propose a similar reallocation to handle issues associated with credit cards and I once proposed it as a solution in retail banking competition.

The basic premise of the argument is that network effects make it hard for consumers to switch to new entrants. Switching costs were the basis for mobile number portability, but there were no network effects issues there, since you could call anyone on any network. Facebook is a different story. If you were to switch to another social network, let’s call it Newbook, you could not read your friends’ posts on that network and your friends could not read your posts on Facebook. Not surprisingly, that is a big problem for Newbook’s ability to compete, and Newbook, even if it were far superior to Facebook for some (or all) consumers, would not get many (or any) customers.

Well, that is not quite true. As Zingales and Rolnik point out, Facebook have APIs that give third parties some ability to access a consumer’s “social graph”, that is, who they are linked with on Facebook. But that access can be cut off (something Twitter did in the past) or otherwise controlled by Facebook.

What Zingales and Rolnik want is for that social graph to be controlled by consumers who can choose where to re-route their communications on any network. So if I want my posts to go to you and to see your posts, so long as we have a link, it would not matter which network we were actually on.

In practice, what would this mean? It would mean that you might have a “social identity” (akin to a phone number). You would then form bilateral links with others, and those links would be recorded publicly. You would modify those links as need be. That said, the last time we did something similar we got marketing calls! The point is that one value of Facebook, in particular, is the way permissions work, and it is not readily obvious how portability would work in their absence.
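For concreteness, here is a hypothetical sketch, mine rather than Zingales and Rolnik’s, of what a consumer-controlled social graph record might hold: a portable identity, bilateral links, and per-link permissions, the piece that is hardest to get right.

```python
# A hypothetical, minimal data structure for a portable social graph.
# Identities, networks, and permission names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Link:
    peer_id: str                      # the other person's portable "social identity"
    can_read_posts: bool = True       # permissions travel with the link,
    can_send_messages: bool = False   # not with any one network

@dataclass
class SocialGraph:
    owner_id: str                     # akin to a portable phone number
    links: dict = field(default_factory=dict)

    def add_link(self, peer_id: str, **perms) -> None:
        self.links[peer_id] = Link(peer_id, **perms)

    def allows_post_from(self, author_id: str) -> bool:
        """Any network the owner uses could consult this to decide delivery."""
        link = self.links.get(author_id)
        return bool(link and link.can_read_posts)

graph = SocialGraph(owner_id="alice@portable-id.example")
graph.add_link("bob@another-network.example", can_send_messages=True)
print(graph.allows_post_from("bob@another-network.example"))  # True
```

Even in this stripped-down form the hard questions are visible: who hosts the record, who can read it, and how the permission vocabulary keeps up as networks change what a link means.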

It gets even more complex when we think about people’s private and public identities. Some of my Facebook posts are public, and I read many public posts from the media, fan groups and companies. That is all part of my social graph, but how would we handle all of that? There may be solutions there. The larger issue is that how these links work is constantly evolving, yet a consumer-controlled social graph may make it difficult to be responsive. After all, think about how you manage the social graph that is your pre-programmed speed dial numbers on a phone (if you even do such things). They quickly go out of date and you can’t be bothered updating them.

Zingales and Rolnik mention Google as well, referring to your search history as a critical piece of data. I don’t think that Google’s search market power comes from switching costs per se, but rather from the economies of scale and scope it can generate by having access to the entire history of search … for the world! That is the barrier to entry, although even that is not as high as barriers in the past. It didn’t stop Google from facing the consequences of owning a market in the EU last week. (As an aside, why would you spend over $2 billion to defend your right to do Google Shopping, which has to be your worst product in that nobody has heard of it, unless of course it is surprisingly lucrative, in which case the EU really have a point.)

That said, practical issues of porting an entire social graph aside, Zingales and Rolnik are on to something. In today’s world, there is a need to clarify what data a consumer owns. In terms of the social graph, consumers surely have a right to share with others the information they have provided to Facebook, and Facebook should probably make that easy even if it falls short of a full portability proposal. Google already allow the export of much data (including your entire Gmail history using Google Takeout). But in all of these cases, true consumer power would come if you could also restrict how your data is used. The porting proposal doesn’t even touch that … yet.

Uncategorized

Energy fuels the Star Trek Economy

Here is an article I wrote forecasting what Canada might be like in 150 years.

Canada +150: Energy fuels Star Trek economy

William Shatner as Star Trek’s Captain James T. Kirk is depicted on a commemorative stamp issued by Canada Post in 2016.
Handout/Canada Post

Joshua Gans, University of Toronto

Editor’s note: Canada Day 2017 marks the sesquicentennial of Confederation. While the anniversary is a chance to reflect on the past, The Conversation Canada asked some of our academic authors to look down the road a further 150 years. What will our world be like in 2167? Economist and innovation guru Joshua Gans looks at the intersection of technology and energy and what that could mean to Canada in the future.

Canada is poised to commence an age of abundance that will see its citizens living in a virtual utopia by the country’s tricentennial.

Over the past 150 years, the biggest impact on the economy has clearly come from technology. Predictions that technology would bring with it improved living standards have been staples of long-term economic forecasts. But most of those predictions have been accompanied by a belief that technological progress will decelerate and peter out, raising the question: What comes next?

Karl Marx and Joseph Schumpeter believed that when technological progress reached its limit, something else would replace capitalism. John Maynard Keynes believed that work would diminish and leisure would take over. And in the modern day, Northwestern economist Robert Gordon has carefully documented a case for progress in the next 150 years set to be a mere shadow of the previous 150.

If we are going to get dismal about it, we remove all of the fun. Moreover, I don’t want to pretend to forecast technological progress, otherwise I might end up saying stuff like Paul Krugman did in 1998: “By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”

New energy

Instead, let me imagine — without foundation and with a large dose of hope — a future 150 years from now that still has another technological revolution to come. And if I am going to inject a big hit to the economy, that revolution needs to occur in one area: Energy.

An energy revolution yet to come would transform the economy.
Shutterstock

Everything we have gained in the last 150 years has come because we discovered how to use new sources of energy. We can turn coal, oil, gas, water and nuclear materials into industrial-scale energy. That has powered everything physical (our machines) and intangible (our software). It has led to construction (giving rise to cities, potable water supplies and sanitation), transport (bringing the world together) and weather control. To be sure, we have learned to use our energy better but there is only so much you can pull out of existing natural resources. So, to be optimistic about technology is to believe we will discover a new energy source.

I’ll leave it to others to speculate about what that new source might be, but imagine that it’s plentiful and also clean. And just to make it simple, suppose that energy becomes essentially free and easy to distribute, whether it is through long lasting batteries, fantastic light or something even more grand.

This would solve the basic economic problem of not enough to go around. In the process, most of the things that we consider to be jobs today will make no sense. Here, I quickly fall into the same prediction that other economists have made: Without jobs as we imagine them today, then what?

More leisure

For Keynes, the likely scenario was a move towards a minimal amount of work each day and then leisure time. He wondered if people were educated enough to know how to fill their idle time. With the better part of a century of experience, we can confidently say that they worked it out. Let’s face it: There has been no entertainment device as powerful or as sticky as television no matter what other options have presented themselves.

The idea of increased leisure time dominates conclusions in a recent book of famous economists speculating on what the world would look like in 100 years. But in that book, Nobel Laureate Bob Solow noted that there was no evidence that leisure time would increase. For the last five decades there has been an increase in work hours, especially in North America. This is an enduring puzzle if only because casual observation suggests that a lot of people do not like many aspects of their jobs.


While I, as an academic, can rejoice in my relative job satisfaction, even hardworking professionals seem to wonder about their work-to-life balance. If we look to the super-rich, there are surprising numbers who do things that are leisure-averse, such as running to be a head of state or ending up in public service.

What does this mean? It means that we economists truly have no idea about what to predict regarding work when the economic problem is relegated to history. Some distract themselves worrying about inequality – that is, that the economic problem may be solved but powerful people prevent it from being widely distributed.

Other economists suggest there will be a shift in demand towards labour-intensive work, such as artisanal products which are valued because they are made by people.

One reliable prediction borne out by history is that unemployment has been surprisingly low relative to technological changes that have occurred. People seem to have always found something to do.

Star Trek

If I had to guess, for many people, solving the economic problem on Earth will draw them into space where the challenge of scarcity will reassert itself. Given the current laws of physics, they will do so in the knowledge that they are confining their own descendants to lower living standards rather than the current presumption that people work to make their children better off. In other words, I continue to place weight on a Star Trek-like economy of the future even if 150 years only takes us to the beginning of the Final Frontier — and I am far from alone in that view.

In reality, there is a good chance that there won’t be a solution to the energy problem that allows living standards to skyrocket. Even in that scenario, Canada is surprisingly well placed.

Climate change is likely to cause large populations to move north to escape the heat. And despite recent moves away from globalization, it’s entirely plausible that free trade will come about because ideas are all that need to move across borders when Star Trek-like ‘replicator’ on-demand fabrication becomes possible.

If I wanted to bet on providing good economic possibilities for my grandchildren (and in a way, I have), then placing them in Canada is a pretty good one.

Joshua Gans, Professor of Strategic Management, University of Toronto

This article was originally published on The Conversation. Read the original article.

Academic Research, Advertising, Essay, transitions

The Value of Free in GDP

Did the rise of free information technology improve GDP? It is commonly assumed that it did.

After all, the Internet has changed the way we work, play, and shop. Smartphones and free apps are ubiquitous. Many forms of advertising moved online quite a while ago and support gazillions of “free” services. Free apps changed leisure long ago—just ask any teenager or any parent of a teenager. Shouldn’t that add up to a lot?

Think again. The creation of the modern system of GDP accounting was among the greatest economic inventions of the 20th century. Initially created in the US and Britain, the system has been improved for decades and, for all intents and purposes, it is the system used by every modern government around the world today.

Although this system contains some flexibility, it also has its rigidities, especially when it comes to free services. I expect the answer to sit awkwardly with most readers. Nonetheless, a little disciplined thinking yields a few insights about the modern experience, and that is worth the effort.

Research

The millennial generation sometimes likes to adopt the attitude that they invented everything except sex (and, sure, some do try to take credit for that). As it turns out, that attitude will not go far when it comes to economic accounting for free services, where nothing is new under the sun. Television and newspapers faced similar issues decades ago.

That is bad news, sorry to say. Prior experience does not provide much reason for optimism. It has always been known that GDP accounting does not handle free services easily, and for one basic reason. If a service lacks a price, then there is no standard way to estimate its worth in plain dollars and cents. Something that has no price also produces no revenue and, by definition, contributes nothing to GDP.

This was an obvious problem when commercial television first spread using advertising as its primary revenue source. The consumption is free, and the only revenue comes from advertising. When the TV experience improved— say, as it moved from black and white to color—GDP recorded only the revenue for television sets and advertising, not the user experience.

There was hope that industry specialists would find some underlying proportional relationship between consumption of services and advertising revenue—for example, between the time watching TV and the value of watching commercials. Such proportionality would have been very handy for economic accounting, because it would yield an easy proxy for improvement in the quality of services. Accountants would merely have to examine improvement in advertising revenue.

Alas, no such relationship could be found. Just look at the history of television to understand why. Television has gotten much better over the last few decades, but—for many reasons—total advertising has not grown. The economics is just not that simple.

A similar problem has arisen today. Search engines attract users, and that generates tens of billions of dollars of revenue from advertisers, and that revenue contributes to GDP. However, the services delivered by search contribute no revenue and, by definition, contribute nothing to GDP. With so many free services today, this weakness in GDP accounting seems awkward. It does not matter how amazing the services are, nor how much they have improved over time. Any improvement in the quality of search services is not a contribution in GDP except insofar as it generates more advertising dollars.

Recently, three professional economists, Leonard Nakamura, Jon Samuels, and Rachel Soloveichik, waded into this topic again and tried something experimental and novel. They wondered how GDP would change if these free services were reconceived as a barter. (For more details, read their study, “Valuing Free Media in GDP: An Experimental Approach.” And just to be clear, I did not coordinate this column with them. This column reflects my views of their research and may or may not reflect their opinion.)

Think of it this way: Google Maps would be counted as output if used by a household to plan a vacation. It could be used as a business input, too, if used by a logistics company to plan deliveries. The study then asks: are we missing a big part of GDP by not counting the recent growth in online free services as a barter?

A Barter Approach

These three economists imagined the following thought experiment: what if the user is “paid” for watching advertising?

This is a production-oriented conception of accounting for economic activity. Every input counts insofar as it helps a user produce something they consume or helps a business produce output. It is also a conceptual straitjacket as a way to account for value, and that is the point of the exercise. It imposes discipline and consistency on the accounting and makes free services just like every other input in production.

To be clear, this method does not provide any conceptual shortcut to deal with businesses that aspire to be ubiquitous and promote free services to achieve that aspiration. In this conceptualization, such businesses are thought to have given away valuable inputs in order to gain revenue later. So, the giveaway accrues as intangible capital investments, which the economy eventually monetizes as output. (If you are curious, the economists have a long discussion about it, which I will skip for the sake of brevity.)

Applying these ideas took effort and care. The authors had to examine distinct parts of the economy in which advertising supports free services, including print newspapers and magazines, broadcast television and radio, cable and other non-broadcast media, and online media.

The bottom line is straightforward to state, albeit disappointing and surprising: GDP does not change very much. The authors find that between 1929 and 1998, their new measure of real GDP falls by a tiny percentage, and measured productivity growth rises by a tiny amount. Between 1998 and 2012, real GDP rises by a tiny percentage, and measured productivity rises by a tiny amount.

Figure 1 provides intuition for this answer. The figure shows total advertising revenues in the US economy, broken apart by sector. The four sectors have different experiences. Since the mid-1950s, broadcast radio and television ad revenue has risen and fallen, with cable television ad revenue rising enough to generate net growth in the combined total. Advertising online appeared in the late 90s and has grown ever since.

Only one form of advertising has experienced long-term declines, and that was advertising for print newspapers and magazines. It is obvious what has happened: Broadcast radio and television took some of the advertising, and then cable television took another bite, and then the Internet came along and took another chunk. Just as interesting, all this reallocation took place in the face of a near-constant level of total advertising across the entire economy.

That explains what is going on. Since the key input (advertising) did not grow much, there is simply no way a bartered approach (to account for advertising) can make much of an impact on overall GDP growth. The input is merely reallocated between sectors.
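Here is a stylized back-of-the-envelope version of that logic, my own illustration with entirely hypothetical numbers rather than the authors’ methodology: impute ‘free’ media output equal to the advertising that funds it, and, because advertising is roughly flat, measured growth barely moves.

```python
# Treat ad-supported "free" media as a barter: households are "paid" in content
# for their attention, so imputed media output equals advertising revenue.
# All figures are hypothetical (billions of dollars).

def gdp_with_barter(gdp_conventional: float, ad_revenue: float) -> float:
    return gdp_conventional + ad_revenue

years = {
    1998: {"gdp": 9_000, "ads": 200},
    2012: {"gdp": 16_000, "ads": 220},   # GDP grows a lot; ad spend barely moves
}

for label in ("conventional", "with barter imputation"):
    vals = []
    for y in (1998, 2012):
        gdp = years[y]["gdp"]
        if label != "conventional":
            gdp = gdp_with_barter(gdp, years[y]["ads"])
        vals.append(gdp)
    growth = (vals[1] / vals[0]) ** (1 / 14) - 1
    print(f"{label}: average annual growth = {growth:.2%}")
# The two growth rates differ by less than a tenth of a percentage point.
```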

Let me rephrase by focusing on the non-Internet economy. Many fundamental economic facts are just what they always have been, free Internet or not. Ride sharing apps made Manhattan more livable, but an auto purchase still costs the average household a high fraction of yearly income. Online sites track your health, but that did not change the need to eat fruits and vegetables. Weather apps let you track the weather, but you still must buy gloves and warm clothing in the winter. In short, the economic production that supports humanity has not changed much.

 

Here is my two cents: these three researchers may have just put the nail in the coffin of using production-side measures of the free economy, and that is not really all that bad. GDP is a measure of total production. It was never meant to be a measure of how well-off society has become.

More to the point, maybe it is time to focus on demand-side measures of free goods. In other words, you get a lot more for your Internet subscription, but nothing in GDP reflects that. The price index for Internet services, for example, should reflect qualitative improvements in user experience, and it needs to improve.

Copyright held by IEEE. To view the original essay click here.