I was at a conference earlier this week hosted by the Technology Policy Institute. One of the keynote addresses was by the Chair of the FTC, who talked about big data and privacy. It struck me in listening to the speech that insufficient attention is being paid to what might actually be effective in this realm. Put simply, I can imagine regulations that may prevent big companies like Google and Facebook from doing certain things, but it also seems to me that so much data is being gathered that regulating one set of firms only raises the commercial opportunities for other firms outside the domain of regulators, legally or otherwise. Specifically, data about me could be gathered and then sold to firms to use in some fashion. Personally, it wouldn't worry me as much if it were used as an anonymous point in a big dataset, although I'd worry about the quality of the data in that case. What would worry me is it being used specifically in relation to me.
My suspicion is that there is no real protection against data being gathered about me and then used in relation to me. The only person with any hope of stopping that from happening would, in fact, be me. The question is: what tools would allow me to self-police inaccurate and misapplied data about myself?
I think Google already offers us a clue. Google allows us to see, in a limited but important way, the information it uses to throw ads at us. There is information based on my searches and information based on the websites I visit. As I use Chrome and permit this, Google has lots of information this way, and I personally don't mind it being used to get me more relevant ads. But when I looked at this recently, I saw that Google had decided I was interested in boxing, Brazilian music and Android apps. I have no idea how any of those happened, but they dominated my ad preferences. Fortunately, Google also provides a way to change these assumptions, which I did.
What if 'assumptions' data like this, data that a company uses in a formula to set the price or service it offers you, were required to be transparent? This wouldn't require a company to reveal the formula, but it would require it to reveal any assumptions it is making (based on data gathered, purchased or otherwise) and allow you to challenge them on grounds of accuracy. To be sure, this wouldn't prevent accurate information about you being used against you (e.g., a correct assumption that you are a smoker, in the case of health insurance), but it would be a first step in ensuring that, at the very least, the assumptions are accurate. Of course, at first glance, companies should have an incentive to be accurate anyway, but accuracy is costly and they may not invest sufficient resources to ensure it.
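To make the proposal concrete, here is a purely hypothetical sketch in Python of what such an "assumptions ledger" might look like. Everything here is invented for illustration: the class names, the `source` labels, and the challenge mechanism are assumptions of mine, not a description of any real firm's system. The point is only that the pricing formula stays private while its inputs are exposed and correctable.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One inference a firm holds about a person (e.g. 'interested in boxing')."""
    attribute: str
    value: str
    source: str            # e.g. "gathered", "purchased", "inferred"
    disputed: bool = False

class AssumptionLedger:
    """Hypothetical per-customer record a firm would be required to expose.

    The formula that consumes these assumptions remains secret; only the
    assumptions themselves are visible and challengeable on accuracy.
    """
    def __init__(self):
        self._assumptions = {}

    def record(self, attribute, value, source):
        # The firm logs each assumption it feeds into its pricing formula.
        self._assumptions[attribute] = Assumption(attribute, value, source)

    def view(self):
        # Transparency requirement: the customer can see every assumption.
        return list(self._assumptions.values())

    def challenge(self, attribute, corrected_value):
        # Accuracy challenge: the customer corrects a wrong assumption,
        # and the record is flagged as disputed.
        a = self._assumptions.get(attribute)
        if a is None:
            raise KeyError(f"no assumption recorded for {attribute!r}")
        a.value = corrected_value
        a.disputed = True
        return a

# Example: a firm records two assumptions; the customer fixes a wrong one.
ledger = AssumptionLedger()
ledger.record("interest", "boxing", "inferred from browsing")
ledger.record("smoker", "yes", "purchased data")
fixed = ledger.challenge("interest", "economics")
print(fixed.value, fixed.disputed)  # economics True
```

Note that, as in the smoker example above, `challenge` only corrects inaccuracy; a true assumption survives a challenge, so this sketch captures the first step (accuracy) and nothing more.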