Much of the Internet's economics rests on the structure of targeted advertising: advertising companies analyze user behavior and market different products based on the patterns users exhibit online. While targeted ads are usually discussed from an economic perspective, I am interested in the behavioral lens pertaining to the flow of information about the users themselves. User surveillance and privacy are well-known concerns, but there is a lesser-discussed aspect of this data-driven labeling: how do users perceive themselves when they are aware of the targeted nature of ads?

Users are much more likely to believe an inference made about them by an algorithm when it is targeted, since a targeted inference should be somewhat accurate to begin with. That is, the algorithm can present users with information about themselves that they otherwise would not have known, with the conclusion drawn from the internet habits recorded in their digital footprint. The glaring issue arises when such algorithms reproduce biases that have been (un)intentionally programmed into them. Given the trust users place in the algorithm, this misinforms users about who they are. Furthermore, users would not only come to believe themselves to be a certain kind of person but would buy products in line with this perception, even if it is somewhat incorrect. One interesting use case I observed was a post on Instagram detailing a "modern love story", where a girl uses her boyfriend's laptop to search for items she wanted him to gift her, counting on the resulting ads to nudge him toward them. While some level of playing with psychology has always been part of advertising, targeted ads are perpetuating the potential for personalized manipulation.

Another aspect of such targeting is its effect on trauma survivors. An ad system may successfully conclude that a person is interested in the ballpark of a certain product, but it cannot make causal inferences pertaining to "why".
Consider someone who searches for information about a disease they fear they have: given how the algorithm works, they will keep receiving recommendations about that same disease, furthering their anxieties. The same can be said for any situation a user feels deeply about and could be easily triggered by. We know that the Internet is a constant feedback loop of information, but targeted ads cut deeper than a generic social media feed because you know they are meant specifically for you. You therefore end up attempting to decode what they say about you rather than merely being sold a product.

Lastly, the unstructured user data used to draw these conclusions has been detrimental in cases that carry legal consequences, such as abortion in the US. Advertising data has already been used to identify who is likely to have had, or to be seeking, an abortion, which raises serious questions about the sharing of user data. In this climate, anyone who could face prosecution has reason to fear for their safety on the Internet and to distrust any information shown to them.

Thus, while targeted advertising came about as a business model for generating online revenue, it has serious sociological, psychological, and legal implications for users. It calls for the development of a new framework for navigating the digital space, one that did not exist before and that runs deeper than simple economics.

References:
https://insights.som.yale.edu/insights/now-its-personal-how-knowing-an-ad-is-targeted-changes-its-impact
https://www.vox.com/the-goods/2020/4/9/21204425/targeted-ads-fertility-eating-disorder-coronavirus
https://theconversation.com/online-ads-know-who-you-are-but-can-they-change-you-too-54983
https://edition.cnn.com/2022/06/24/tech/abortion-laws-data-privacy/index.html