Citizens in many countries increasingly voice concerns about the lack of online privacy. Despite these concerns, the vast majority of people share personal data widely on the internet and through connected devices.
This is the privacy paradox: we fail to take action to protect our privacy even when we claim to be worried about the lack of it.
To better understand the privacy paradox, Susan Athey from Stanford University and Christian Catalini and Catherine Tucker from MIT[1] ran an experiment offering undergraduate students a free pizza in exchange for disclosing the email addresses of their closest friends. Many students took the offer and gave away their friends' emails. Strikingly, students who said they were concerned about privacy were just as likely to disclose their close friends' emails as students who said they did not care much about privacy. This experiment confirmed that behavior towards privacy seems quite disconnected from stated concerns.
Failure to protect privacy is not universal, though. In a recent study written during her Ph.D. at HEC Paris, Huan Tang[2] analyzed a randomized experiment run at a large online lender in China. The online lender routinely asks loan applicants for a great deal of personal information. It ran an experiment in which certain questions were randomly omitted for certain applicants.
Huan Tang found that not asking some questions raised the chances that customers completed the application and eventually took the loan. For example, not asking for the loan applicant's identifier on the Tencent QQ instant messaging app increased the likelihood that the client took the loan, thereby increasing the lender's revenues. Not asking for the employer's contact information also had a significant impact, probably because applicants were concerned that the online lender would call their employer if they failed to repay the loan.
Huan Tang estimates that people are willing to pay the equivalent of half a day's salary, in the form of a higher loan fee, in order not to disclose personal data.
This research has two implications. First, it can be in online companies' interest not to collect certain information, in order to convince customers to adopt their products. Privacy can be a win-win outcome.
Second, the propensity of customers to protect their data seems to be context dependent. MIT undergraduate students happily disclose personal information whereas Chinese online borrowers are more cautious. A possible explanation is that customers in China feel less protected against malevolent use of their personal data compared to Europe for instance, so when they have the choice, they are more inclined to protect their data. (I would be curious to know what readers from China think of this hypothesis!)
What explains the privacy paradox? Privacy as a public good
It has been proposed that internet users are naive and do not understand how their data are used. This was perhaps true in the first years of social media, but awareness of how tech companies use data has greatly improved. Ignorance can therefore no longer be invoked to explain the privacy paradox.
A more useful idea has been proposed in a recent paper by Acemoglu, Makhdoumi, Malekian and Ozdaglar.[3] The idea is that privacy is a public good: when one person protects their private data, they protect other people's privacy as well.
Data sharing as an externality
When you share personal data with a company, you disclose information not only about you but also about other people who are similar to you. For instance, when Amazon keeps track of my behavior on its platform, it learns something not just about me but also about the tastes of finance academics (which I'm not going to share in this article; it's our secret!). This information is useful to forecast the behavior of other people with similar characteristics and show them targeted content.
Economists have a term for this type of phenomenon: the decision whether or not to protect your personal data creates an externality on other people, because your data are used to train algorithms that are then applied to them.
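To see the mechanism at work, here is a minimal sketch in Python (entirely hypothetical, simulated data, using scikit-learn for illustration): a model trained only on users who opted in to data sharing still makes a prediction about a similar user who shared nothing.

```python
# Minimal sketch of a data-sharing externality (hypothetical, simulated data).
# Users who opt in share (age, income) and whether they bought a product.
# A model trained only on opted-in users still predicts the behavior of
# a similar user who never shared anything.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated opted-in users: features and purchase decisions.
n = 1000
age = rng.uniform(20, 60, n)
income = rng.uniform(20, 100, n)  # in thousands, hypothetical units
# Assumed (unobserved) behavior: higher-income, younger users buy more often.
buys = (0.05 * income - 0.03 * age + rng.normal(0, 1, n)) > 0

model = LogisticRegression().fit(np.column_stack([age, income]), buys)

# An opted-out user who disclosed nothing, but resembles the opted-in users:
opted_out_user = np.array([[30.0, 80.0]])  # age 30, income 80k
prob = model.predict_proba(opted_out_user)[0, 1]
print(f"Predicted purchase probability for the non-sharer: {prob:.2f}")
```

The opted-out user never disclosed anything, yet the platform can infer their likely behavior from the data of people who resemble them. That inference is the externality.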
The notion that privacy generates an externality is useful because it forces us to think about whether the externality is positive or negative; that is, whether sharing personal data is beneficial or harmful to other people.
Indeed, sharing personal data can be beneficial. For example, if a biotech company developing artificial intelligence-based detection of lung cancer has access to a larger database of lung scans, it will be able to develop a better detection algorithm, which will benefit future patients. In this case, when people consent to the inclusion of their lung scans in databases used for medical research, they exert a positive externality on society.
But the data-sharing externality can also be negative. For example, when information about individuals is used for non-transparent political advertising, algorithms' greater ability to target such advertising is detrimental to democracy, as shown by the Facebook-Cambridge Analytica scandal.
An abundance of data in the hands of a few companies can also be detrimental to consumers. Privileged access to data for big tech companies can tilt the playing field against other companies and prevent the entry of new players, hurting consumers through higher prices and less innovation.
Data regulation
Understanding data externalities is key to determining the right way to regulate tech companies. Europe has taken a first step with the GDPR, acknowledging that the law should make it easier for individuals to protect their online privacy. Recently, Amazon received the largest GDPR fine to date over its targeted advertising practices.
Complying with privacy regulations can also be a competitive advantage for companies, which can be helped by an app being developed by Professor David Restrepo Amariles with AI researchers and data scientists. Other solutions are proposed by Professor Ruslan Momot and summarized in this article.
However, the GDPR framework does not address the fact that privacy is not only a private good but also a public good. The European Commission's clearance of Google's acquisition of Fitbit shows that competition authorities are still taking a soft stance on the risk of negative data externalities.
Sources:
Why do we share our personal data? The privacy paradox, in The Finance Blog by Johan Hombert.
Privacy as a public good, in The Finance Blog by Johan Hombert.
References:
[1] Susan Athey, Christian Catalini, and Catherine Tucker, 2017, The Digital Privacy Paradox: Small Money, Small Costs, Small Talk, NBER Working Paper [pdf].
[2] Huan Tang, 2020, The Value of Privacy: Evidence from Online Borrowers [pdf]. Find the summary on Knowledge@HEC. Huan is a graduate of the HEC Paris Ph.D. program, now Assistant Professor at the London School of Economics.
[3] Daron Acemoglu, Ali Makhdoumi, Azarakhsh Malekian, and Asuman Ozdaglar, 2019, Too Much Data: Prices and Inefficiencies in Data Markets, NBER Working Paper [pdf].