In the era of AI-powered personalization, the prices we pay are no longer fixed. They fluctuate, sometimes minute by minute, based on who we are, what we browse, when we act, and how platforms interpret our behaviors. But what if, in the rush to optimize, we’ve forgotten to ask the most basic question: Is this fair?
Anmol Aggarwal, a product leader at Intuit and former engineer at Uber, has spent the last decade building the very algorithms that shape our digital economy. Lately, however, his focus has shifted: not just making systems smarter, but making them more just. With a background in machine learning, marketplace dynamics, and startup innovation, Aggarwal is part of a new generation of technologists asking hard questions about pricing: questions that go beyond conversion rates and revenue curves.
“Most people don’t realize how much pricing decisions are now happening in real-time, and how much they can affect consumer behavior, even when no one’s explicitly trying to be unfair,” Aggarwal says. “That’s the risk: optimization without awareness.”
The Hidden Complexity of Pricing Signals
One emerging challenge, Aggarwal notes, is hidden pricing variability—cases where users receive different prices due to factors they’re unaware of. For instance, browsing the same flight from different countries or devices can produce different fares.
“What starts as user-tailored optimization can sometimes drift into opaque outcomes,” he explains. “It’s important that users understand what drives the price they’re seeing.”
Aggarwal advocates for a framework where personalization and transparency go hand in hand, enabling businesses to tailor offerings while reinforcing user trust. “Treating fairness as a design constraint ensures systems stay user-aligned, even as they scale.”
From Uber to Intuit: A Career at the Frontlines of Pricing
Aggarwal’s journey began in engineering roles, most notably at Uber, where he helped develop dynamic pricing systems that powered thousands of markets globally.
Later, he transitioned into product roles centered on pricing systems and operational resilience, designing mechanisms that keep user experiences stable under dynamic conditions while supporting business and platform goals at scale.
At Intuit, where he now leads personalization strategy for TurboTax, he’s bringing the same technical rigor to user experience and ethical AI. “We’re designing systems that personalize experiences, guide decision-making, and help build lasting trust with users.”
He’s also helped revive a blue-collar job platform in Southeast Asia, Jobshine.sg, where pricing isn’t about software features but lives and livelihoods. “When you’re building for the workers who keep things running, every design decision around value and access becomes magnified.”
Reinforcement Learning Meets the Real World
In recent research, Aggarwal applied Multi-Agent Reinforcement Learning (MARL) to one of the messiest real-world pricing problems: balancing profit and waste in perishable inventory management. Instead of hard-coding rules, his system lets agents learn how to discount or restock milk, lettuce, and berries based on freshness, demand, and price sensitivity.
What stood out? The AI agents learned to prevent spoilage without being explicitly told to. They developed behaviors resembling first-expired, first-out (FEFO) retail strategies and used timely discounts to keep inventory fresh, validating that complex, cooperative strategies can emerge from simple incentives if the system is designed right.
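To make the idea concrete, here is a minimal sketch of the setup Aggarwal describes, not his actual system: one independent Q-learning agent per perishable SKU chooses a discount depth each day, and its reward is revenue minus a spoilage penalty. The SKUs, shelf lives, prices, and demand model are all illustrative assumptions; the point is that FEFO-like discounting can emerge from the incentive alone.

```python
import random
from collections import defaultdict

# Illustrative sketch: independent learners, one per perishable SKU.
SKUS = {"milk": 5, "lettuce": 4, "berries": 3}   # assumed shelf life in days
DISCOUNTS = [0.0, 0.2, 0.4]                      # action space: discount depth
BASE_PRICE, UNIT_COST, SPOIL_PENALTY = 4.0, 1.5, 2.0

def demand(discount):
    """Assumed price-sensitive demand: deeper discounts sell more units."""
    return random.randint(0, 3) + int(10 * discount)

def run(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    # One Q-table per SKU agent, keyed by (remaining shelf life, stock bucket).
    q = {sku: defaultdict(lambda: [0.0] * len(DISCOUNTS)) for sku in SKUS}
    for _ in range(episodes):
        stock = {sku: 20 for sku in SKUS}
        for day in range(max(SKUS.values())):
            for sku, life in SKUS.items():
                remaining = life - day
                if remaining <= 0:
                    continue  # SKU already expired
                state = (remaining, min(stock[sku] // 5, 4))
                a = (random.randrange(len(DISCOUNTS)) if random.random() < eps
                     else max(range(len(DISCOUNTS)), key=lambda i: q[sku][state][i]))
                sold = min(stock[sku], demand(DISCOUNTS[a]))
                stock[sku] -= sold
                reward = sold * BASE_PRICE * (1 - DISCOUNTS[a]) - sold * UNIT_COST
                if remaining == 1:                        # expires tonight
                    reward -= stock[sku] * SPOIL_PENALTY  # penalize spoiled units
                    stock[sku] = 0
                next_state = (remaining - 1, min(stock[sku] // 5, 4))
                q[sku][state][a] += alpha * (reward + gamma * max(q[sku][next_state])
                                             - q[sku][state][a])
    return q

if __name__ == "__main__":
    q = run()
    # Inspect the learned policy for berries on their last sellable day with high stock.
    state = (1, 4)
    best = max(range(len(DISCOUNTS)), key=lambda i: q["berries"][state][i])
    print("berries, last day, high stock -> discount", DISCOUNTS[best])
```

In a setup like this, agents facing imminent expiry tend to learn deeper discounts on ageing stock, the same intuition behind FEFO, without that rule ever being written down.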
“That’s the beauty of MARL in operations,” Aggarwal explains. “You get scalable, interpretable policies that align with business intuition, but evolve from data.”
His takeaway: fairness and profitability aren’t always at odds. With the right architecture, systems can learn to balance the two proactively.
A Practical Definition of Fairness
So, what does fairness in pricing mean? For Aggarwal, it’s refreshingly simple.
“It means not conditioning price on personal behavioral signals, especially those unrelated to value delivery. Price should respond to demand and supply, not to someone’s urgency or past willingness to pay.”
This contrasts with black-box optimization approaches, where price tuning is often opaque. “Clarity builds trust,” he notes, “and trust is what sustains long-term engagement.”
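One way to read that definition is as an interface constraint. The sketch below is purely illustrative (the signal names and bounds are assumptions, not Aggarwal's implementation): the price function only accepts market-level supply and demand inputs, so behavioral signals simply cannot influence the quote.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MarketSignals:
    base_price: float      # list price for the product
    demand_index: float    # e.g. recent orders vs. capacity, 1.0 = balanced
    supply_index: float    # e.g. available inventory vs. typical levels, 1.0 = balanced

# Behavioral signals (device, browsing history, inferred urgency, past willingness
# to pay) are deliberately absent from the signature, so price cannot be
# conditioned on them: the constraint lives in the interface, not in policy docs.
def quote_price(m: MarketSignals, floor: float = 0.8, cap: float = 1.5) -> float:
    """Illustrative demand/supply adjustment with explainable bounds."""
    multiplier = m.demand_index / max(m.supply_index, 1e-6)
    multiplier = min(max(multiplier, floor), cap)  # clamp so swings stay explainable
    return round(m.base_price * multiplier, 2)

print(quote_price(MarketSignals(base_price=100.0, demand_index=1.2, supply_index=0.9)))
# -> 133.33: high demand and tight supply raise the price, within a stated cap
```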
Advice for Builders: Clarity Over Complexity
For companies designing pricing infrastructure today, Aggarwal’s advice is direct: don’t over-optimize. “Start with value-based pricing,” he says. “For SaaS companies, that often means usage-based models. They’re transparent. They scale. And they avoid creeping complexity that breaks down over time.”
He cites the recent wave of AI infrastructure companies using usage-based tiers as an example of getting it right. “It removes ambiguity. It respects user agency. And it aligns incentives.”
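A minimal sketch of what such a usage-based tier might look like (the tier boundaries and rates here are hypothetical, not any specific company's pricing): every unit is billed at the rate of the tier it falls into, so the bill is a transparent, reproducible function of usage alone.

```python
# Hypothetical graduated tiers: first 1M tokens free, next 9M at $0.50/M, rest at $0.40/M.
TIERS = [
    (1_000_000, 0.0),
    (10_000_000, 0.50),
    (float("inf"), 0.40),
]

def monthly_bill(tokens_used: int) -> float:
    bill, prev_ceiling = 0.0, 0
    for ceiling, rate_per_million in TIERS:
        in_tier = max(0, min(tokens_used, ceiling) - prev_ceiling)
        bill += (in_tier / 1_000_000) * rate_per_million
        prev_ceiling = ceiling
    return round(bill, 2)

# Same usage always produces the same bill -- no per-user price tuning.
print(monthly_bill(12_500_000))  # 1M free + 9M * $0.50 + 2.5M * $0.40 = $5.50
```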
Aggarwal emphasizes that personalization, when done thoughtfully, can enhance user experience and trust, especially when it’s transparent, explainable, and grounded in clear value exchange.
The Future: Pricing as a Trust Layer
As AI personalization becomes more pervasive, Aggarwal argues that pricing will increasingly act as a trust barometer. “People won’t just ask, ‘Is this product good?’ but ‘Am I being treated fairly?’ The companies that win long-term will be the ones who can confidently say yes—and prove it.”
“Fairness doesn’t mean the same price for everyone. It means prices that are explainable, consistent, and not exploitative. It’s about giving people confidence in the systems they interact with.”
As more businesses embed AI deeper into their monetization models, the need for that confidence will only grow.
