In the New Yorker, Tim Wu argues that the answer is yes. His argument is simple: Facebook makes money because it collects data that its users freely give it.
For the most valuable innovation at the heart of Facebook was probably not the social network (Friendster thought of that) so much as the creation of a tool that convinced hundreds of millions of people to hand over so much personal data for so little in return. As such, Facebook is a company fundamentally driven by an arbitrage opportunity—namely, the difference between how much Facebook gets, and what it costs to simply provide people with a place to socialize. That’s an arbitrage system that might evaporate in a world of rational payments. If we were smart about the accounting, we’d be asking Facebook to pay us.
On its face, this is one of those arguments that seems to make plain sense. Facebook makes money because we give it data for free, so if we stopped giving that data away for free, Facebook would still make some money and we could make some too. But this is also one of those arguments that does not really stand up to closer scrutiny.
Let’s work backwards from “if we were smart about the accounting, we’d be asking Facebook to pay us.” Suppose we were smart about the accounting. Suppose, for instance, that we each calculated precisely what Facebook is earning from us as individuals. I am going to argue here that Facebook does not get much more from us than we actually get from it.
What does good data about a population give Facebook? To keep things simple, suppose there are two ads — A and B — that Facebook can choose to put in front of any given user. For any given user, only one is a good match, and advertisers are willing to pay $1 for a good match and $0 otherwise. Absent any information, Facebook might put A or B in front of me with equal odds. If it literally does not know which one is better, the expected value to advertisers is 1/2 × $1 + 1/2 × $0 = $0.50, and this is the most it can charge advertisers. Suppose, however, that Facebook has population data and can work out that more of its users are of the type suitably matched to ad A than to ad B. Call that proportion z > 1/2. Then Facebook knows it can earn z per user from advertiser A and (1 – z) from advertiser B. As z > 1/2, it will show its users ad A and so earn z. Its return, if you are doing the accounting, from that population-level data is z – 1/2; that is the value of population-level data.
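The arithmetic above can be sketched in a few lines. This is a minimal toy model, not anything about Facebook’s actual systems; the value z = 0.7 is an assumed illustration.

```python
# Toy model from the example above: two ads, A and B; a matched ad is
# worth $1 to an advertiser, a mismatch $0. z is the population share
# of users matched to ad A (z = 0.7 is assumed for illustration).

def revenue_no_data():
    # Pick an ad at random: half the time it matches and earns $1.
    return 0.5 * 1.0 + 0.5 * 0.0

def revenue_population_data(z):
    # Knowing z > 1/2, show everyone ad A and earn z per user on average.
    return z

z = 0.7
print(revenue_no_data())             # 0.5
print(revenue_population_data(z))    # 0.7
# The value of population-level data is the difference, z - 1/2.
print(revenue_population_data(z) - revenue_no_data())
```

With z = 0.7, population data is worth $0.20 per user over showing ads blindly.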
We should pause here and note that, to get a good estimate of z, Facebook doesn’t need accurate data from each and every one of its users. It just needs a sufficient sample of them. What that means is that the value of your personal data to Facebook in this equation is likely to be nothing. That is pretty clear in this example, but I should also note that the population inference problem can be more complex. For example, Facebook may be trying to develop algorithms that target ads based on less common customer characteristics. It is a much tougher job to work out the set of consumer characteristics that suggest an interest in underwater basket weaving than whether you like Coke versus Pepsi. If it turns out you are one of those unusual people — and chances are that you are for something — then Facebook might want your data as part of the sample. All of this is shorthand for saying that the returns to Facebook from grabbing the data of marginal users are diminishing, but the rate is harder to parse. (This, by the way, is why I said “Probably not” rather than “no.”)
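The sampling point can be illustrated with a quick simulation. This is a hedged sketch: the true z = 0.7 and the sample sizes are assumed for illustration only.

```python
import random

# Sketch of the sampling argument: to estimate z, Facebook only needs
# a sufficient sample, not everyone's data. TRUE_Z = 0.7 is assumed.

random.seed(42)
TRUE_Z = 0.7

def estimate_z(sample_size):
    # Each sampled user is of type a with probability TRUE_Z.
    hits = sum(random.random() < TRUE_Z for _ in range(sample_size))
    return hits / sample_size

# A modest sample already pins z down well; adding one more user's
# data barely moves the estimate, so the marginal user's data is
# worth roughly nothing for this inference.
print(estimate_z(10_000))
print(estimate_z(10_001))
```

The estimate from ten thousand users is already within a fraction of a percentage point of the truth, which is the sense in which any one user’s data contributes almost nothing.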
But we are not quite done yet. I have over-simplified what Facebook learns from user data, although I do not believe I have over-simplified where it can make its return based solely on what it learns. Facebook, in fact, learns that users who, say, appear to like a will be more appropriately shown ad A than ad B, but, up until now in my argument, it has only been able to exploit the fact that amongst its users a proportion, z, like a. What if it could identify and use the fact that a particular user liked a or b? If it knew that I liked b, it would show me B.
If we continue our smart accounting, then armed with personal data (rather than just population data), Facebook can perfectly target ads. Given our assumptions, Facebook will be able to match A and B to users perfectly and so will be able to charge advertisers $1. You might think, therefore, that the value of data to Facebook is 1 – 1/2 = $0.50 per user. This is what Wu is arguing. However, it is a conflation. The return to population data is z – 1/2. The return to having your personal data (i.e., whether in fact you like a or b) is 1 – z. That is the value you bring to Facebook at the margin.
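The decomposition in this paragraph can be written out directly. Again, z = 0.7 is an assumed illustration of the toy model above.

```python
# Decomposing the $0.50 that the conflated accounting credits to "data",
# using the toy model's numbers with z = 0.7 assumed for illustration.

z = 0.7
value_of_all_data = 1.0 - 0.5        # $0.50: perfect targeting vs. no data at all
value_of_population_data = z - 0.5   # $0.20: captured without your personal data
value_of_your_data = 1.0 - z         # $0.30: the true marginal value of your data

# The two pieces sum to the headline $0.50.
assert abs(value_of_population_data + value_of_your_data - value_of_all_data) < 1e-9
print(value_of_your_data)
```

The conflation is crediting the whole $0.50 to your personal data, when z – 1/2 of it was already earned from population data alone.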
Does this mean that Facebook should pay us some portion of 1 – z? Yes, in principle. From Facebook’s perspective, if our value from being on Facebook is v, it had better be sure that v is greater than zero. Otherwise, Facebook would have to pay us to get 1 – z.
This is where Wu is making an assumption regarding v. Here is what he says:
A different way of understanding the surrender of personal data is more personal. To pay with data is to make yourself more vulnerable to the outside world. Just as we all perk up if someone says our name, we are inherently more receptive to whoever knows more about us. The more data you give away, the more commercially customized your world becomes. It becomes harder to ignore advertisements or intrusions. Those willing to pay will be able to grab your attention and, in certain cases, exploit your weaknesses.
But that is far from clear. For starters, people like Facebook even if they ignore what this might mean for ads. Secondly, what is the counterfactual? If I had the power to shut off the flow of my personal data to Facebook yet could still use Facebook, Facebook would show me ads based on the population inference and not my own characteristics. That means that much of the time I would be getting ads that are useless to me and useless to advertisers. There is surely an argument to be made that when we get ads more suited to our own interests, we find them less annoying. This, by the way, is why magazines targeted at specific interests tend to carry far more ads than newspapers: people actually like seeing the ads. To be sure, those ads may be exploiting them, grabbing money and fuelling consumerism. I have no idea. But on the metric of pure annoyingness, it is far from clear that Facebook with my personal data is worse for me than Facebook without it.
In summary, Facebook makes profits. Does that mean it could pay us more and still be around? Yes. How would it do that? I have no idea. But when you step back, this is a pretty ordinary state of affairs in business. Chances are that the deal I am getting from Facebook is not dramatically unfair.
3 Replies to “Should Facebook be paying us?”
I wonder how many people are like me.
I have become very practiced at ignoring ads. I may click one or two ads per month intentionally and one more accidentally. The ads don’t annoy me because I tune them out. I don’t really care if Facebook is making money from my information though I don’t think they are getting all that much. I’m getting a service from Facebook and I’m paying very little for it.
Recently I filled in a survey from The New Yorker asking my views on several advertising campaigns they had recently run. I read The New Yorker carefully but I was completely unaware of any of the ads. I had ‘never heard’ of any of them.
Am I alone in this or almost alone?
It seems a big thing here, though, is also just economies of scale, including plummeting transaction costs with scale, and coordination problems.
You can’t nearly as practically go out and sell your data yourself to the many advertisers out there; the costs in time, trouble, and money are just too high. Facebook puts the data together for hundreds of millions of people and sells it at much lower transaction costs. And, of course, the great software we get in return has ridiculously high economies of scale, including positive network externalities.
And this is just so much the benefit and problem with modern high-tech economies. The Republican Party, to the extent they think at all (beyond simple-minded soundbite economic dogma), wants to take us back to 1810, the good old days of small gub’ment; you know, when the 99% were dirt poor and illiterate, and average lifespan was in the 30s. But in a modern high-tech economy, more and more of the value is in non-rival, little-rival, and other extremely high economies-of-scale goods.
And so you have this very serious natural monopoly power problem. And that means more and more a bigger government role can increase efficiency and total societal utility.
Facebook has a lot of monopoly power now, and so they should be watched carefully, and have some smart government regulation. The same with much of advertising, and so much else.