
Shaped by the Feed: The Personal Costs of Data-Driven Marketing

INTRODUCTION: FROM CONSUMERS TO TARGETS
Data-driven digital advertising transforms how individuals are perceived and shaped online. Rather than neutral commerce, personalised ads leverage behaviour, demographics, and psychometrics to nudge decisions and reinforce identities (Nadler & McGuigan, 2021). This practice, often invisible, reinforces stereotypes, manipulates behaviour, and compromises autonomy. Users are no longer just audiences; they are algorithmically segmented subjects whose emotional vulnerabilities and cognitive shortcuts are monetised (Zuboff, 2019). This transformation reshapes not only how people shop but also how they construct their sense of self and social belonging. Platforms do not merely respond to user preferences; they predict and influence them. As online choices are fed back into machine learning systems, users become more predictable, individual agency shrinks, and feedback loops form that limit exploration.
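To make the feedback-loop claim concrete, here is a minimal illustrative sketch in Python, not any platform's actual code: a toy recommender that keeps serving whatever category a user has engaged with most, so each click further narrows what the user is shown. The categories, exploration rate, and click probability are invented for illustration.

import random
from collections import Counter

CATEGORIES = ["news", "fitness", "gaming", "fashion", "politics"]

def recommend(history: Counter) -> str:
    """Serve the category the user has engaged with most; explore only rarely."""
    if not history or random.random() < 0.05:   # tiny exploration budget
        return random.choice(CATEGORIES)
    return history.most_common(1)[0][0]          # otherwise exploit past behaviour

history = Counter()
for step in range(1000):
    shown = recommend(history)
    if random.random() < 0.7:                    # assume users tend to click what they are shown
        history[shown] += 1

print(history)  # one category quickly dominates: the loop narrows exposure

Run it a few times and the same pattern appears: whichever category happens to be clicked early on ends up dominating the profile, which is the sense in which exploration is curtailed.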

PERSONALISATION OR PROFILING?
Algorithmic personalisation promises relevance. However, these systems rely on tracking everything from likes to scrolling speed, turning users into predictable behavioural profiles (Zuboff, 2019). Eubanks (2018) notes that such profiling reinforces inequalities when it categorises people by gender, race, or income. This is particularly visible in credit, housing, and job recruitment platforms, where automated decision-making has led to systemic exclusions (Eubanks, 2018; Angwin et al., 2016). When personalisation crosses into predictive profiling, it risks determining which options individuals even see, removing their ability to choose freely.
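As a rough illustration of how mundane signals become a behavioural profile, the sketch below uses purely hypothetical feature names and hand-picked weights (not any vendor's model) to score users from tracked interactions and bucket them into segments that then decide which offers they see.

from dataclasses import dataclass

@dataclass
class Interaction:
    likes_per_day: float
    avg_scroll_speed: float   # screens per minute
    late_night_sessions: int  # sessions between midnight and 5am, per week

def impulsiveness_score(x: Interaction) -> float:
    # Hypothetical hand-tuned weights, for illustration only.
    return 0.5 * x.likes_per_day + 0.3 * x.avg_scroll_speed + 0.2 * x.late_night_sessions

def segment(score: float) -> str:
    # The segment label, not the person, is what the ad system "sees".
    if score > 20:
        return "high-impulse: show limited-time offers"
    if score > 10:
        return "medium: show retargeting ads"
    return "low: show brand-awareness ads"

user = Interaction(likes_per_day=30, avg_scroll_speed=12, late_night_sessions=5)
print(segment(impulsiveness_score(user)))

The point of the toy example is that none of these signals is sensitive on its own, yet combined they sort a person into a category that silently shapes what they are offered.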

STEREOTYPING THROUGH ALGORITHMS
Studies demonstrate that ad delivery replicates biases. Ali et al. (2019) found that job ads for higher-paid positions are more likely to be shown to men. Facial recognition-based targeting can deepen racial discrimination, amplifying existing inequalities (Noble, 2018). This reflects what Noble calls "algorithmic oppression", where the logic of optimisation embeds social prejudice into everyday digital interactions.

Such skewed delivery not only limits opportunity but also normalises discriminatory norms under the guise of efficiency.
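A back-of-the-envelope way to see such skew, sketched below with made-up impression counts rather than data from Ali et al., is simply to compare delivery rates across equally eligible groups for the same ad.

# Hypothetical impression log for one high-salary job ad, broken down by gender.
impressions = {"men": 8200, "women": 1800}
eligible    = {"men": 10000, "women": 10000}   # equally sized, equally eligible audiences

rates = {group: impressions[group] / eligible[group] for group in impressions}
print(rates)                                    # {'men': 0.82, 'women': 0.18}
print("delivery ratio:", rates["men"] / rates["women"])  # roughly 4.6x skew toward men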

MICROTARGETING AND POLITICAL INFLUENCE
Cambridge Analytica exemplified how psychological profiling can manipulate voters through emotional targeting (Cadwalladr, 2018). These techniques undermine democratic processes by encouraging emotional reactions instead of rational deliberation. The logic of microtargeting is inherently asymmetrical: it grants campaigners granular insight into individuals while denying the public a shared informational ground. As a result, citizens are offered fragmented, polarised versions of political reality, often shaped by behavioural triggers rather than civic engagement (Bennett & Segerberg, 2012).

EMOTIONAL EXPLOITATION AND FAKE NEWS
Emotionally charged ads spread misinformation because engagement, not truth, drives ranking algorithms. During COVID-19, anti-vaccine conspiracy theories proliferated precisely because of their provocative nature, causing public harm (Frenkel et al., 2020). Emotionally manipulative content, such as fear-based messages or fabricated claims, enjoys algorithmic preference because it goes viral (Pennycook & Rand, 2018). This erodes users' epistemic agency, the ability to judge what is credible, by constantly prioritising affect over evidence.
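The claim that engagement, not truth, drives ranking can be shown with a minimal sketch (a toy scoring function and invented numbers, not any platform's real formula): when items are ordered purely by predicted clicks and shares, a false but inflammatory post outranks an accurate one.

posts = [
    {"title": "Health agency publishes routine vaccine safety data",
     "accurate": True,  "predicted_clicks": 120, "predicted_shares": 10},
    {"title": "SHOCKING: what they don't want you to know about the vaccine",
     "accurate": False, "predicted_clicks": 900, "predicted_shares": 300},
]

def engagement_score(post: dict) -> float:
    # Purely engagement-based objective: accuracy never enters the score.
    return post["predicted_clicks"] + 5 * post["predicted_shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(f'{engagement_score(post):>6.0f}  accurate={post["accurate"]}  {post["title"]}')

Nothing in the objective rewards being right, so any correlation between outrage and engagement is inherited directly by the feed.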

CONSENT WITHOUT CONTROL
Consent becomes meaningless when dark patterns push users towards data sharing without informed understanding (Mathur et al., 2019). Power asymmetry prevents meaningful user control. Even when options exist, they are often buried in lengthy policies or nudged through misleading interface designs. "Accept All" buttons are prominently placed, while opt-out mechanisms are hidden or require extra effort. This architecture exploits cognitive fatigue, not informed choice. For instance, cookie banners often use misleading design to nudge users into accepting tracking, a phenomenon termed "privacy Zuckering" (Brignull, 2020). Furthermore, the complexity of privacy policies leaves users unable to comprehend what they are agreeing to (Nouwens et al., 2020).
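A small simulation, with invented drop-off probabilities chosen purely for illustration, makes the asymmetric-effort point: if accepting takes one click but declining takes three hidden screens, much of the recorded "consent" reflects friction rather than preference.

import random

random.seed(0)
PER_STEP_DROPOUT = 0.4   # hypothetical chance a user gives up at each extra screen

def completes(steps: int) -> bool:
    """Return True if the user gets through all extra screens instead of giving up."""
    return all(random.random() > PER_STEP_DROPOUT for _ in range(steps))

users = 10_000
tracked = 0
for _ in range(users):
    wants_tracking = random.random() < 0.2      # assume only 20% genuinely want tracking
    if wants_tracking:
        tracked += 1                            # "Accept All" is a single prominent click
    elif not completes(steps=3):                # declining requires three buried screens
        tracked += 1                            # gave up mid-way, so tracking stays on by default

print(f"{tracked / users:.0%} end up tracked, though only ~20% wanted it")

Under these assumed numbers roughly four in five users end up tracked, which is the gap between recorded consent and actual preference that the dark-pattern literature describes.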

RESISTANCE AND REGULATION
Despite regulatory frameworks such as the GDPR, enforcement remains weak. Structural reform alongside digital literacy is required for meaningful resistance. Beyond institutional change, user-led resistance is emerging in the form of ad blockers, privacy-focused browsers, and ethical tech movements. While these actions are limited in scope, they signal a growing awareness that behavioural data should not be exploited unchecked. As Gillespie (2018) argues, platforms act as "custodians of the internet", yet they operate under corporate imperatives that often conflict with public accountability. Without global coordination and transparent oversight, legal measures struggle to keep pace with technological innovation. Civil society and educators play a crucial role in equipping users to resist manipulation.

CONCLUSION: AUTONOMY IN THE AGE OF ALGORITHMS
Data-driven advertising shapes how we shop, vote, and think. Protecting autonomy demands critical examination of, and accountability from, those who profit from shaping digital identities. While technology itself is not inherently harmful, its current implementation prioritises commercial gain over human dignity. Reclaiming digital autonomy involves not just resisting invasive advertising but reimagining a digital economy in which users are citizens, not just data points. True reform requires not only corporate transparency but also participatory design approaches that foreground user agency. Until then, the digital marketplace remains an uneven battlefield, one where personal data is currency and attention is the product.

References

Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A. and Rieke, A., 2019. Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), pp.1–30.

Angwin, J., Larson, J., Mattu, S. and Kirchner, L., 2016. Machine bias. ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Bennett, W.L. and Segerberg, A., 2012. The logic of connective action. Information, Communication & Society, 15(5), pp.739–768.

Brignull, H., 2020. Deceptive by Design: The Dark Patterns in Modern UI. Darkpatterns.org.

Cadwalladr, C., 2018. The great British Brexit robbery: how our democracy was hijacked. The Guardian.

Eubanks, V., 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.

Frenkel, S., Alba, D. and Zhong, R., 2020. Surge of Virus Misinformation Stumps Facebook and Twitter. The New York Times.

Gillespie, T., 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.

Mathur, A., Acar, G., Friedman, M.G., Lucherini, E., Mayer, J., Chetty, M. and Narayanan, A., 2019. Dark patterns at scale: Findings from a crawl of 11K shopping websites. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), pp.1–32.

Nadler, A. and McGuigan, L., 2021. An impulse to exploit: The behavioral turn in data-driven marketing. New Media & Society, 23(8), pp.2129–2148.

Noble, S.U., 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.

Nouwens, M., Liccardi, I., Veale, M., Karger, D. and Sax, M., 2020. Dark patterns after the GDPR: Scraping consent pop-ups and demonstrating their influence. CHI Conference on Human Factors in Computing Systems, pp.1–13.

Pennycook, G. and Rand, D.G., 2018. The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Stories Increases Perceived Accuracy of Stories Without Warnings. Management Science, 66(11), pp.4944–4957.

Zuboff, S., 2019. The Age of Surveillance Capitalism. PublicAffairs.

