AI and nudging. Who has control?

May 5, 2026 at 10:43 PM

Sanjana Koushik ’22 NU and Jim Stellar

In our third blog, we discussed how individuals can build awareness of decision engineering and make more purposeful choices. This involves slowing down, as recommended by Nobel Prize–winning decision scientist Daniel Kahneman, engaging in introspection, and strengthening the brain’s capacity for deliberate thought. While this practice helps us navigate the nudges embedded in our environments, a new layer of complexity is emerging: one that is adaptive, personalized, and constantly changing. Artificial intelligence systems are increasingly used both by the average person and by choice architects to shape how decisions are presented and, in turn, how they are made. These systems learn from us in real time, continuously refining the structure, and therefore biasing the framing, of choices based on our behavior. This raises an important question: as nudging becomes personalized, predictive, and increasingly invisible, how do we make decisions independently of the systems shaping them?

From a business perspective, AI has fundamentally transformed how companies interact with consumers. Platforms leverage vast amounts of behavioral data (what we click, how long we pause on a screen, even when we abandon a cart) to tailor experiences at an individual level. But this doesn’t stop at observation: increasingly, these systems construct simulated versions of who we are from our aggregated data, anticipating a successful journey for that persona and nudging us accordingly. As someone working at a technology company, I (SK) have seen firsthand how organizations are investing in personalization engines that optimize experiences across the board. These systems are designed to increase engagement and conversion, but they also signal a broader transition from broad behavioral nudging to highly personalized and adaptive forms of decision influence.

A recent example that caught our attention was a report that Instacart had experimented with pricing variations and promotions based on user behavior and demographic signals, a tactic called “smart rounding.” While dynamic pricing is not new, the integration of AI allows for increasingly granular adjustments that learn and change over time. What may have begun as a strategy to maximize revenue has evolved into a system that subtly shapes how we perceive value itself. By tailoring costs to a shopper’s inferred willingness and ability to pay through small, often imperceptible adjustments, these systems can significantly influence purchasing behavior, extending beyond placement or framing to reshape how economic decisions are made.
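Instacart has not published how its rounding actually works, so the following is only a toy sketch of the general idea: pair each base price with a hypothetical, model-inferred price-sensitivity score and nudge the price to the nearest “charm” ending chosen for that shopper. The function name, the sensitivity score, and the .99/.49 endings are all our illustrative assumptions, not the real system.

```python
def smart_round(base_price: float, price_sensitivity: float) -> float:
    """Toy sketch of AI-assisted "smart rounding" (not Instacart's real
    algorithm). A hypothetical model supplies price_sensitivity in [0, 1];
    the price is nudged to a "charm" ending chosen for that shopper."""
    # More price-sensitive shoppers get the lower charm ending.
    ending = 0.49 if price_sensitivity > 0.5 else 0.99
    # Consider the charm price just below and just above the base price,
    # and keep whichever requires the smaller (least noticeable) change.
    candidates = [int(base_price) - 1 + ending, int(base_price) + ending]
    return round(min(candidates, key=lambda p: abs(p - base_price)), 2)
```

In this sketch, a $4.72 item becomes $4.99 for a shopper the model rates as price-insensitive and $4.49 for one it rates as price-sensitive: the same shelf, two different perceptions of value.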

Simultaneously, recent discussions around AI-driven mental health tools have highlighted a more concerning dimension. In one widely reported 2024 case, an AI chatbot designed to provide emotional support was implicated in reinforcing harmful thoughts expressed by a vulnerable user, at times validating, mirroring, or failing to challenge expressions of hopelessness and self-harm ideation. While these systems are intended to assist, it is easy to underestimate the risks of deploying adaptive technologies in emotionally sensitive domains, particularly among more impressionable populations. When AI systems learn and optimize using feedback loops from human input, they amplify not only our preferences but also our fears, biases, and distress, reinforcing harmful cognitive patterns over time.

From a neuroscience perspective, these developments intersect with the same brain systems we have been discussing throughout this series. The limbic system, which evaluates emotional significance, is something Kahneman (mentioned above) might have called the intuitive, heuristic-based System 1 (but he did not). The second system, more deliberative and analytical (Kahneman’s System 2), we think is based on the neocortex, a more recent evolutionary invention that gives us symbolic logic, abstract reasoning, and language. Recently, AI has done a surprisingly good job of imitating this information processing, and the fundamental structure of artificial neural networks reminds us of the network of interacting cortical columns in the human neocortex. Notice that we learned language without being programmed, or perhaps even taught, just as AI has learned to predict the next word in a sentence by “reading” everything on the internet. We now think the limbic system taps into emotional pathways to inform the neocortex of higher-order considerations, such as the value of its plans. But how does this balance between intuitive and deliberative processing work, and what happens when it is disrupted?

When time and cognitive resources are limited, this balance can shift toward faster, more automatic responses that bias how we process information. This is precisely where modern AI systems exert influence. As these systems learn what captures our attention and triggers engagement, they become increasingly effective at activating our fast, intuitive System 1 processes. Over time, this can reduce the likelihood that we engage in slower, more reflective reasoning. In this way, decision-making is no longer shaped solely by internal cognitive processes; it is increasingly guided by external systems that influence how, and how deeply, we think.
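This kind of engagement loop is easy to demonstrate in miniature. The sketch below is our own toy model, not any real platform’s code: a user finds five items equally appealing, the system mostly shows whichever item its learned scores currently favor, and every click reinforces that item. In runs like this, exposure typically concentrates on whichever item happened to get lucky clicks early, even though the user’s true preferences never differed.

```python
import random

def feedback_loop(n_rounds: int = 500, n_items: int = 5, seed: int = 0) -> list[float]:
    """Toy model of an engagement feedback loop: each round the system
    shows the item it currently scores as most engaging (with a little
    exploration), and every click raises that item's score."""
    rng = random.Random(seed)
    weights = [1.0] * n_items   # the system's learned engagement scores
    appeal = [0.5] * n_items    # the user's true preferences: all equal
    for _ in range(n_rounds):
        # Mostly exploit the current top-scoring item; occasionally explore.
        if rng.random() < 0.1:
            shown = rng.randrange(n_items)
        else:
            shown = max(range(n_items), key=lambda i: weights[i])
        if rng.random() < appeal[shown]:  # the user happens to click
            weights[shown] += 1.0         # the loop reinforces itself
    total = sum(weights)
    return [w / total for w in weights]  # share of learned score per item
```

The design choice doing the work here is that the system’s own past outputs (what it chose to show) feed its future inputs (what got clicked), which is the feedback structure the paragraph above describes.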

In considering our systems of thinking, it is clear that effective decision-making is not about eliminating emotion, but about integrating it with intentional reflection. However, when external systems, such as AI-driven applications designed to continuously stimulate emotional responses, come into play, this balance can be disrupted. The result is not just influence over individual choices, but a gradual reshaping of how decisions themselves are formed. In a society of increasingly adaptive and personalized nudging, the deeper question is not simply how we make isolated decisions, but how the very process of our decision-making is shaped, and what that means for our independence.

Across this series, we have explored how decision-making emerges from the interaction between intuition and deliberation, as well as biology and environment. From a neuroscience perspective, we have considered both the cognitive structures that form the foundation of judgment and the external systems that increasingly influence it. These interconnected systems, embedded within environments structured through choice architecture, continuously shape how we interpret and respond to the world through subtle framing and nudging at scale. As artificial intelligence becomes more adept at modeling and mimicking these human processes, the boundary between internal cognition and external design has become increasingly blurred. What was once a primarily internal negotiation between fast and slow thinking is now performed within the context of systems that are actively learning how to capture attention and guide behavior. Ultimately, the challenge is not to reject AI or decision engineering, but to evolve alongside them. As we’ve emphasized in previous blogs, introspection becomes even more critical in an AI-driven world. The ability to pause, reflect, and question the drivers of our choices alongside our values may be one of the most important cognitive tools we can develop to preserve agency in an environment of adaptive influence.
