
Beyond usability: Why AI requires a new UX mindset

April 2025 | Article

By Marianne van Ooij

Designing AI experiences calls for a deeper understanding of uncertainty and trust, and an evolving design vocabulary.

The traditional job of UX was to help people get things done—efficiently, clearly, and without confusion. That meant removing friction, optimizing flows, and ensuring user goals aligned with what the system could do.

But AI systems behave differently. They are dynamic, probabilistic, and sometimes opaque. This changes not only how they operate—but how we design for them.

While our Four Shifts framework addresses organizational transformation for AI UX, this article focuses on the mindset changes needed for individual designers and researchers working with AI systems. It's about evolving our core assumptions about what makes a good experience when that experience is powered by AI.

Designing with uncertainty

AI systems don't provide the same answer every time. That's by design. Their strength lies in their ability to adapt, infer, and personalize. But with that comes variability—and uncertainty.

Traditional UX practices are built around consistency. In AI, inconsistency is not always a flaw—it's often a signal of how the system is learning or adapting. This can feel counterintuitive for both users and designers.

The challenge is not to eliminate this uncertainty, but to help users navigate it. That includes:

  • Designing feedback loops that reflect confidence levels

  • Making system reasoning visible through explanations or previews

  • Offering controls for user input, correction, or override

  • Communicating clearly what the system can't do—not just what it can

Rather than hiding the AI, UX should reveal just enough of it to build confidence and encourage learning.
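
To make this concrete, here is a minimal sketch in TypeScript of a response shape that carries confidence and reasoning alongside the answer, so the interface can decide how much of the system to reveal. The type, fields, and thresholds are hypothetical illustrations, not taken from any particular product.

  // Hypothetical shape for an AI response that exposes uncertainty to the UI.
  interface AIResponse {
    answer: string;
    confidence: number; // 0..1, the model's own estimate
    reasoning?: string; // short explanation of why, when available
    canAnswer: boolean; // false when the request is out of scope
  }

  // Decide how to present the response based on what the system knows.
  function present(response: AIResponse): string {
    if (!response.canAnswer) {
      // Say what the system can't do, not just what it can.
      return "I can't help with this yet. Try rephrasing, or ask a person.";
    }
    if (response.confidence < 0.5) {
      // Low confidence: hedge the answer and invite correction.
      return `I'm not sure, but here's my best guess: ${response.answer}`;
    }
    // High confidence: answer directly, keeping the reasoning one step away.
    return response.reasoning
      ? `${response.answer} (Why? ${response.reasoning})`
      : response.answer;
  }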

The myth of the invisible interface

For years, design leaders promoted the ideal of the invisible interface—one where the user barely noticed the UI at all. In AI systems, that goal can be misleading.

Users need guidance, context, and signals about what's happening behind the scenes. They need to know when the system is guessing, when it's sure, and when it's stuck.

In this sense, AI UX is not about disappearing—it's about strategic visibility. The interface becomes a communication layer that helps users build accurate mental models of the system's capabilities and limitations.

Spotify's recommendations

Consider how Spotify handles music recommendations. Rather than simply presenting a playlist, it provides context about why songs were selected ("Because you listened to..."), confidence indicators through playlist naming ("Discover Weekly" vs. "Radio"), and feedback mechanisms to improve future recommendations. These elements make the AI more visible, not less, but in ways that build understanding and trust.

Designing for trust, not just usability

In traditional UX, usability meant users could find and complete a task. In AI, we're designing for trust—that the system is acting in the user's best interest, and that its behavior makes sense over time.

 

Trust is built when:

  • The system communicates clearly and predictably

  • Users feel they can influence outcomes

  • Feedback is acknowledged and reflected in future behavior

  • Errors are surfaced transparently, not hidden

  • Users understand how to recover when things go wrong

 

This moves UX beyond button placement and into system behavior, error handling, and long-term adaptation.

GitHub Copilot

GitHub Copilot illustrates this approach by showing multiple suggestions, indicating where code comes from, and giving developers control over when and how to use AI assistance. The system builds trust not by being perfect, but by being transparent about its limitations and giving users agency in the process.

Experience is the product

In AI, the experience is the product. Models and algorithms are essential, but without thoughtful interaction design, their value won't reach the user.

  • A well-performing model can fail if users don't understand its output

  • A mediocre model can succeed if it's paired with meaningful controls and explanations

  • Product adoption hinges on user perception, not just system performance

This reframes UX from polishing the interface to shaping the core experience.

ChatGPT

 

When ChatGPT first launched, its rapid adoption wasn't due to the underlying model quality alone: the conversational interface made complex AI capabilities accessible and intuitive for millions of users. The experience was inseparable from the product value.

Bringing UX and ML together

Designers and researchers need to work more closely with ML teams—not just to translate outputs into UI, but to help define what the system should do in the first place.

 

Key opportunities include:

  • Co-creating model goals that reflect user needs

  • Identifying feedback signals that help systems learn

  • Anticipating where users might get confused or misinterpret results

  • Designing for gradual trust-building, not one-shot conversion

 

This partnership helps avoid mismatches between system logic and user logic—and surfaces valuable new signals for model refinement.

Google Search

 

At Google, UX researchers and data scientists collaborate to define what makes a "helpful" answer in search results. This collaboration ensures that technical metrics like precision and recall are balanced with human-centered considerations like clarity, relevance to intent, and appropriate confidence.

A new design vocabulary

As we shift our mindset, we need to develop new patterns and principles for AI interactions. Some emerging elements of this vocabulary include:

Confidence visualization

Creating visual or textual cues that communicate the system's certainty about its output, helping users calibrate their own trust appropriately.
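
For example, a raw model score can be mapped to a textual cue the user can calibrate against. A sketch, with illustrative cutoffs that would need to be tuned with the model team:

  // Map a raw model score (0..1) to a cue the user can calibrate against.
  type ConfidenceCue = "High confidence" | "Moderate confidence" | "Low confidence";

  function confidenceCue(score: number): ConfidenceCue {
    // Thresholds are illustrative, not calibrated values.
    if (score >= 0.85) return "High confidence";
    if (score >= 0.6) return "Moderate confidence";
    return "Low confidence";
  }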

 

Feedback collection

Designing lightweight ways for users to correct, refine, or validate AI outputs that feed directly into improving the system.
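
One lightweight, browser-side shape this can take, sketched with hypothetical field names and endpoint:

  // A minimal feedback event the UI can emit on any AI output.
  interface FeedbackEvent {
    outputId: string; // which AI output this refers to
    verdict: "accept" | "reject" | "edit"; // what the user did with it
    correction?: string; // the user's edited version, if any
    timestamp: number;
  }

  // Fire-and-forget logging so giving feedback never blocks the user's flow.
  function recordFeedback(event: FeedbackEvent): void {
    navigator.sendBeacon("/api/feedback", JSON.stringify(event)); // hypothetical endpoint
  }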

Progressive disclosure

Revealing AI capabilities gradually as users build familiarity, rather than overwhelming them with complex options from the start.
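
A simple version of that gating, where the five-session threshold is an assumption for illustration rather than a researched value:

  // Reveal advanced AI controls only after a user has built some familiarity.
  function visibleControls(sessionCount: number): string[] {
    const basic = ["regenerate", "rate this answer"];
    const advanced = ["edit system prompt", "adjust creativity", "pin sources"];
    // A threshold of five sessions is illustrative only.
    return sessionCount >= 5 ? [...basic, ...advanced] : basic;
  }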

Transparent reasoning

Showing not just what the AI concluded, but key factors that influenced its decision, making the black box more translucent.
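
Sketched in code, assuming the system can attach weighted factors to its output (not every model exposes these):

  // A conclusion plus the factors that most influenced it.
  interface ExplainedResult {
    conclusion: string;
    factors: { label: string; weight: number }[]; // weight in 0..1
  }

  // Surface only the top few factors so the explanation stays scannable.
  function explain(result: ExplainedResult, topN = 3): string {
    const top = [...result.factors]
      .sort((a, b) => b.weight - a.weight)
      .slice(0, topN)
      .map((f) => f.label);
    return `${result.conclusion} (based mainly on: ${top.join(", ")})`;
  }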

Graceful degradation

Designing thoughtful fallbacks when AI confidence is low or when the system encounters edge cases.
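
In an interface, this often amounts to a guard in front of the AI path. The cutoff and fallback copy below are hypothetical:

  // Fall back to a safe, predictable path when the model is unsure.
  function respond(answer: string, confidence: number): string {
    const MIN_CONFIDENCE = 0.4; // illustrative cutoff
    if (confidence < MIN_CONFIDENCE) {
      // Degrade gracefully: offer a non-AI route instead of a bad guess.
      return "I don't have a confident answer for this. Here are related help articles instead.";
    }
    return answer;
  }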

These patterns form building blocks for a new approach to experience design—one that embraces the dynamic, learning nature of AI rather than trying to force it into traditional interaction models.

Moving forward

Designing for AI isn't about abandoning UX fundamentals. It's about evolving them to meet new demands.

The next generation of UX will be defined not just by how smooth an experience feels—but by how well it helps users navigate ambiguity, build trust, and shape their interaction with intelligent systems.

This evolution challenges us to:

  • Reconsider what "good" design means when systems are probabilistic

  • Develop research methods that capture changing user behavior over time

  • Create new patterns that visualize uncertainty and build appropriate trust

  • Balance automation with meaningful human control and oversight

By embracing these challenges, we can create AI experiences that don't just perform tasks efficiently, but help users feel confident, informed, and in control as they collaborate with increasingly capable AI systems.

ABOUT THE AUTHOR

Marianne van Ooij is the founder of AI UX Navigator.
