What Foucault Can Teach Us about Privacy Self-Management in a World of Facebook and Big Data

The privacy paradox describes the gap between attitude and behavior: people report being worried about their privacy on the internet, yet when it comes to keeping their data safe, they consistently choose options that make their information available to tech companies. The prevailing model of privacy self-management nonetheless expects each person to make privacy decisions individually, on their own behalf.

Hull argues (p. 91) that this model fails to protect user privacy and cites three reasons why. The first is a lack of transparency about how user data is used: an information asymmetry that renders consent meaningless. The asymmetry arises partly because privacy policies are inaccessible; no one can realistically comprehend the long lists of terms and conditions, so most users simply accept the default option. Big tech companies have no incentive to make these consent notices more accessible either. Hull's central example is Facebook (now Meta), a company whose users are themselves the product. Because its business model depends on selling user information to third parties, making consent genuinely informed would jeopardize that revenue by increasing the number of users who decline. Moreover, users never see the privacy policies of those third-party sites, so they can never know exactly what is done with their data. Companies hold a further advantage: only they know their cost structure and the technology they use, and their intentions are driven by factors such as stock-market valuation. This lets them extract more value from users than users would willingly give. Information asymmetry is therefore structural, inherent to the way online exchange is set up.

More fundamentally, the structural problem begins with the assumption that the user data tech companies hold is theirs to exploit. Unstructured data is largely harmless; the potential for harm arises once it is structured. Unstructured user data is generated as a by-product of online interaction, but the problem arises when companies go beyond what the online ecosystem actually requires: structuring that data, mining it for insights into human behavior, and sharing those insights with third parties. Consider an analogy: the houses on a street are visible to the whole neighborhood, yet civilized society assumes that no one may take another's property without consent, even though it sits out in the open. And even thefts that seem minor and harmless can have severely harmful consequences in aggregate. User data is similarly out in the open, and fair dealing would require that companies not use it without consent; in practice, they remain deliberately ambiguous about it. Unless we change this structure, in which tech companies assume user data is theirs to manage and exploit unless told otherwise, privacy will remain under-protected so long as users must make decisions in a self-managed manner.
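To make the structured-versus-unstructured distinction concrete, here is a minimal sketch in Python. The event log, keyword-to-category mapping, and inferred categories are entirely hypothetical; the point is only to show how raw interaction by-products, each innocuous on its own, can be aggregated into a behavioral profile, the kind of structuring that goes beyond what serving the original pages required:

```python
from collections import Counter
from datetime import datetime

# Hypothetical raw event log: the unstructured by-product of ordinary
# browsing. Each entry records only "this user loaded this page at this
# time" -- information needed transiently to serve the request itself.
raw_events = [
    {"user": "u42", "url": "/articles/mortgage-rates", "ts": "2023-05-01T09:14:00"},
    {"user": "u42", "url": "/articles/baby-strollers",  "ts": "2023-05-01T21:03:00"},
    {"user": "u42", "url": "/articles/baby-monitors",   "ts": "2023-05-02T20:47:00"},
    {"user": "u42", "url": "/articles/mortgage-rates",  "ts": "2023-05-03T08:55:00"},
]

# Hypothetical keyword-to-category mapping used to structure the log.
CATEGORIES = {"mortgage": "home-buyer", "baby": "expecting-parent"}

def build_profile(events):
    """Structure raw events into an inferred behavioral profile."""
    interests = Counter()
    for event in events:
        for keyword, category in CATEGORIES.items():
            if keyword in event["url"]:
                interests[category] += 1
    hours = [datetime.fromisoformat(e["ts"]).hour for e in events]
    return {
        "user": events[0]["user"],
        "inferred_interests": dict(interests),  # inferred life events
        "active_hours": sorted(set(hours)),     # daily routine
    }

print(build_profile(raw_events))
# {'user': 'u42',
#  'inferred_interests': {'home-buyer': 2, 'expecting-parent': 2},
#  'active_hours': [8, 9, 20, 21]}
```

No single page load here reveals much, but the structured profile infers life events and daily routines the user never agreed to disclose, which is where the paragraph above locates the potential for harm.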