Saloni Pasad is a Senior UX Designer and Strategist whose work sits at the intersection of design, technology, and business strategy. She has led digital product initiatives for global clients, shaping platforms used by millions, where her decisions influence both user satisfaction and organizational outcomes. With an M.S. in Human-Centered Design and a Bachelor’s in Communication Design, she combines formal training with hands-on experience to make AI tools practical, usable, and impactful.
AI is being framed as a transformative superpower, an all-knowing force that will reinvent products, teams, and industries. Yet after countless pilot programs, a different truth has emerged: most AI projects struggle not because the technology is flawed, but because design and decision-making don’t match what the technology can reliably deliver. To understand how teams can move past hype and create AI people actually use, we spoke with Saloni about a practical framework for AI adoption. Her message is clear: stop asking “What can AI do for us?” and start asking “What can AI do today that actually matters?”
- Why do so many AI projects collapse after initial excitement?
- The pattern is predictable. Teams start with a wish, an idea that feels cutting-edge internally, and then try to retrofit the organization and the product around it. That’s backwards. AI projects go wrong when expectations exceed the technology’s capabilities, or when the real business problem or user need isn’t clearly defined.
Models are designed to optimize metrics, but products succeed only when they deliver real value for users and organizations. If the metric and the value don’t align, a feature might look clever, but it won’t be adopted or scaled.
And that’s where reframing becomes essential.
- How should teams reframe the way they approach AI design?
- Many organizations fall into the trap of deploying AI in high-risk contexts where even a small error could be costly. They chase perfect precision in unforgiving situations. The reality is simple: if your product needs 100 percent accuracy to be useful, AI probably isn’t the right tool.
The sweet spot lies in features where moderate accuracy is sufficient, the stakes are low, and the value is high.
In other words, don’t build AI tools for critical diagnoses. Build them for inbox sorting, lead prioritization, or customer tagging.
- You’ve proposed a three-part framework. Can you walk us through it?
- Sure. Once you move past wishful thinking, you need a structure to identify practical opportunities. My framework has three layers:
- Human-centered design: Understand recurring friction for real users, not hypothetical personas.
- Service design: Map the organizational processes and incentives so the AI feature supports revenue, efficiency, or measurable insight.
- Matchmaking: Match concrete AI capabilities to specific tasks in that service flow.
Most failures happen at this third layer. Teams either overestimate what AI can do or use generic models without customizing them to their context. That’s why projects can look brilliant in the lab but struggle in the real world.
- Can you give an example of this “matchmaking” approach?
- Start by listing what AI can do today, like summarizing 200 words into 30, classifying requests by urgency, or extracting key entities from documents. Then ask where these capabilities could reduce friction or cost in your product.
For example, if AI can classify items by urgency, you might use it to sort customer requests, triage support tickets, or highlight top leads for your sales team. This is how you move from abstract innovation to real value, mapping tech to task.
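To illustrate the shape of that mapping, here is a minimal Python sketch of one capability (urgency classification) matched to one task (triaging support tickets). The keyword heuristic stands in for a real model, and the `Ticket` and `classify_urgency` names are illustrative assumptions, not drawn from Saloni’s work:

```python
# Matchmaking sketch: a known AI capability (classify text by urgency)
# mapped to a specific task in the service flow (triage support tickets).
from dataclasses import dataclass

@dataclass
class Ticket:
    id: int
    text: str
    urgency: str = "unclassified"

def classify_urgency(text: str) -> str:
    """Placeholder classifier; swap in a real model in production."""
    high_signals = ("outage", "down", "cannot log in", "data loss")
    lowered = text.lower()
    if any(signal in lowered for signal in high_signals):
        return "high"
    if "?" in text:
        return "medium"
    return "low"

def triage(tickets: list[Ticket]) -> list[Ticket]:
    """Surface the most urgent tickets first; humans still decide."""
    order = {"high": 0, "medium": 1, "low": 2}
    for ticket in tickets:
        ticket.urgency = classify_urgency(ticket.text)
    return sorted(tickets, key=lambda t: order[t.urgency])

if __name__ == "__main__":
    queue = [
        Ticket(1, "How do I export my invoices?"),
        Ticket(2, "Production is down, customers cannot log in!"),
        Ticket(3, "Feature request: dark mode."),
    ]
    for ticket in triage(queue):
        print(ticket.urgency, "-", ticket.text)
```

The same triage shape carries over to the other tasks she mentions: sorting customer requests or ranking sales leads only changes the labels and the ordering, not the capability-to-task mapping.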
- How do you avoid wasting time on bad ideas?
- Once you’ve mapped ideas, you run them through four checks:
- User value: Does it solve a real need?
- Organizational value: Will it support a business goal?
- Technical feasibility: Can it be built now?
- Risk: What happens if it’s wrong, and is that acceptable?
Be decisive. If an idea fails any of these checks, pause it. If it passes, prototype quickly. The goal isn’t to use AI; it’s to create value.
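As one way to make that gate explicit, here is a minimal sketch; the `IdeaCheck` structure and the pass/pause rule are assumptions for the example, not a prescribed rubric from the interview:

```python
# Four-check gate: a single failed check pauses the idea.
from dataclasses import dataclass

@dataclass
class IdeaCheck:
    name: str
    user_value: bool       # Does it solve a real need?
    org_value: bool        # Will it support a business goal?
    feasible_now: bool     # Can it be built now?
    acceptable_risk: bool  # If it's wrong, is that acceptable?

    def verdict(self) -> str:
        checks = (self.user_value, self.org_value,
                  self.feasible_now, self.acceptable_risk)
        return "prototype" if all(checks) else "pause"

ideas = [
    IdeaCheck("Auto-triage support tickets", True, True, True, True),
    IdeaCheck("AI-written legal contracts", True, True, True, False),
]
for idea in ideas:
    print(f"{idea.name}: {idea.verdict()}")
```

Writing the rule down keeps “be decisive” from becoming subjective: one failed check, one pause.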
- You talk about “designing for failure.” What does that mean in practice?
- Even the best AI systems make mistakes, so success depends on how you handle failure:
- Outputs should be suggestions, not commands.
- Users should be able to override or correct the AI.
- Be transparent about what the AI can and can’t do.
- Build feedback loops so the system improves over time.
When you stop pretending AI is perfect, users can trust it, and that’s where adoption happens.
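The pattern she describes, suggestion plus override plus feedback loop, can be sketched in a few lines. The `Suggestion` and `resolve` names here are hypothetical, not a specific product API:

```python
# Failure-tolerant pattern: the model proposes, the user disposes,
# and every correction is logged as feedback for later improvement.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    label: str
    confidence: float  # 0.0-1.0; shown to the user, never hidden

feedback_log: list[dict] = []

def resolve(suggestion: Suggestion, user_choice: Optional[str]) -> str:
    """The user's choice always wins; the AI only suggests."""
    final = user_choice if user_choice is not None else suggestion.label
    if final != suggestion.label:
        # Record disagreements so the model can be retrained or tuned.
        feedback_log.append({"suggested": suggestion.label, "corrected": final})
    return final

# Usage: the AI tags a lead as "low priority"; the rep overrides to "hot".
print(resolve(Suggestion("low priority", 0.62), "hot"))
print(feedback_log)  # [{'suggested': 'low priority', 'corrected': 'hot'}]
```

The key design choice is that disagreement is data: every override becomes a labeled example for the next iteration of the system, which is exactly the feedback loop she calls for.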
- Finally, what does this mean for where AI product design is headed?
- The next wave of impactful AI won’t come from flashy breakthroughs. It’ll come from quiet, useful tools that reduce friction, guide attention, and help people do their jobs a little better. The most successful tools will enhance human capability rather than replace it.
