
AI UX in practice: 20 workflow changes that power transformation

By Marianne van Ooij

AI changes more than what we build—it changes how we work.

Designing for intelligent systems means rethinking the workflows that support them. These systems are adaptive, dynamic, and probabilistic. To support them, teams must evolve how they make decisions, share feedback, structure reviews, and define success.

While understanding the Four Shifts of AI UX is essential, operationalizing them requires changes at the workflow level. This article maps 20 practical transformations that enable teams to move from concept to capability.


Beyond theory: Workflow changes that make AI UX work

These workflows aren't theoretical. They emerge as a natural consequence of the shifts introduced in our broader transformation model:

  • The Four Shifts of AI UX: Strategy, Research, Design, and Collaboration

  • The Organizational readiness layer: Leadership, governance, and infrastructure

  • The Continuous evolution layer: Monitoring, feedback, and long-term refinement

Together, these workflows create the conditions where adaptive, human-centered AI can thrive.

The Four Shifts of AI UX: A framework for strategic transformation
Supported by an Organizational Foundation and sustained through Continuous Transformation

Organizational foundation: Setting the stage

These workflows lay the groundwork for transformation—establishing the structures, roles, and fluency needed to support AI UX. Before diving into the specific shifts, three foundational workflow changes enable all subsequent progress:

1. AI governance integration

AI governance traditionally focuses on technical aspects like data quality and model accuracy. In mature AI UX organizations, governance expands to include user experience concerns—with formal channels for UX leaders to influence model selection, confidence thresholds, and feature releases. 

[From]

Separate technical and UX governance​

[To]

Integrated governance where UX has explicit input into model decisions

2. Leadership fluency development

Effective AI UX requires leaders who understand both the possibilities and limitations of AI. Organizations must establish ongoing learning programs that build AI literacy among executives and managers—enabling informed decisions about investments, timelines, and priorities.

[From]

Executive sponsorship without AI literacy​​

[To]

Leadership development that builds AI fluency across decision-makers

3. Cross-functional resource allocation

AI UX projects suffer when UX and AI teams have separate budgets, timelines, and success metrics. Organizations need resource allocation processes that recognize these interdependencies—creating joint funding models that support collaboration rather than competition.

[From]

Separate budget silos for UX and AI initiatives​

[To]

Unified resource allocation that recognizes interdependencies

Shift 1: Strategic Shift—Leadership transformation for AI UX

This shift transforms how leaders guide AI initiatives—establishing vision, literacy, and governance. These workflows create the strategic foundation and organizational readiness needed for successful AI UX implementation.

The strategic shift requires four key workflow changes that reshape how teams approach AI UX initiatives:

4. AI UX vision development

Many organizations develop AI strategies focused primarily on technical capabilities, with UX considerations added later. Effective AI UX requires a fundamental shift in strategic planning—creating roadmaps where user experience principles shape AI initiatives from the beginning, aligning stakeholders around a coherent vision that balances technical possibilities with human needs.

[From]

Technology-first AI strategy with UX as an afterthought​​

[To]

Human-centered AI roadmap with integrated UX principles

5. AI literacy training integration

When AI literacy is optional, UX teams create designs that don't account for AI's probabilistic nature. Organizations need workflow changes that make AI education a standard part of UX onboarding, with structured learning paths that build competence in understanding confidence levels, machine learning concepts, and data considerations.

[From]

Optional technical knowledge for UX teams​

[To]

Structured AI literacy programs embedded in UX onboarding and development

6. AI ethics evaluation

AI ethics can't be separated from user experience. Organizations need evaluation workflows that assess both technical performance and user impact—considering factors like transparency, control, and potential for confusion or frustration, not just accuracy metrics.

[From]

Technical risk assessment focused on system performance​

[To]

Holistic assessment incorporating user impact and experience risks

7. Scenario planning for AI variability

Requirements typically focus on the "happy path" and a few error states. AI UX requires an expanded approach to scenario planning—creating workflows that systematically identify the range of potential system behaviors and designing appropriate responses for each. A minimal sketch of this enumeration follows below.

[From]

Requirements focused on ideal paths

[To]

Planning processes that explicitly account for system variability
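To make this concrete, here is a minimal TypeScript sketch of one way a team might enumerate system behaviors as an explicit type. All names and states are illustrative assumptions, not part of the framework; the point is that an exhaustive switch turns "what states can this feature be in?" into a checklist the compiler enforces.

```typescript
// Illustrative only: possible behavior states for a hypothetical AI feature.
// Modeling them as a discriminated union makes the full range of outcomes
// explicit, so each one gets a designed response rather than a surprise.
type AIOutcome =
  | { kind: "confident"; result: string; confidence: number } // strong result
  | { kind: "uncertain"; result: string; confidence: number } // hedged result
  | { kind: "noResult" }                                      // nothing usable
  | { kind: "error"; message: string };                       // system failure

// The exhaustive switch is the planning artifact: adding a new state will
// not compile until a response has been designed for it.
function planResponse(outcome: AIOutcome): string {
  switch (outcome.kind) {
    case "confident":
      return `Show result directly: ${outcome.result}`;
    case "uncertain":
      return `Show "${outcome.result}" with a caveat and offer alternatives`;
    case "noResult":
      return "Fall back to the manual flow and explain why nothing appeared";
    case "error":
      return `Offer a recovery path (${outcome.message})`;
  }
}
```

The specific states will differ per feature; what matters is that the enumeration exists and is reviewed, not the exact code.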

Shift 2: Research Shift—From static to adaptive research

This shift transforms how teams learn from users—moving from one-time validation to ongoing insight. These workflows support trust measurement, signal collection, and direct UX-to-ML feedback loops.

Research workflows must evolve to capture how users adapt to AI over time:

8. Longitudinal research integration

Traditional research validates interfaces at a single point in time. AI UX requires research workflows that stretch across weeks or months—capturing how user perception, trust, and behavior evolve as they interact with adaptive systems.

[From]

Point-in-time research​​

[To]

Extended research timelines that track trust and adaptation

9. Behavioral signal collection

Qualitative research alone doesn't reveal how people actually interact with AI. Organizations need data collection workflows that systematically capture behavioral signals like corrections, overrides, and avoidance patterns—revealing how users really feel about AI recommendations. An instrumentation sketch follows this item.

[From]

Explicit user feedback (surveys, interviews)

[To]

Systems that capture implicit behavioral signals (corrections, overrides)
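As a sketch of what such instrumentation might look like, the snippet below logs a hypothetical signal each time a user acts on an AI suggestion. The event shape, field names, and feature IDs are all invented for illustration.

```typescript
// Illustrative schema for implicit behavioral signals around AI suggestions.
type SignalType = "accepted" | "corrected" | "overridden" | "ignored";

interface BehaviorSignal {
  userId: string;
  featureId: string;       // which AI feature produced the suggestion
  suggestionId: string;
  signal: SignalType;
  modelConfidence: number; // confidence the model reported at the time
  timestamp: number;
}

const signalLog: BehaviorSignal[] = [];

// Called from the UI wherever the user acts on (or around) a suggestion;
// in practice this would feed an analytics pipeline, not an in-memory array.
function recordSignal(event: BehaviorSignal): void {
  signalLog.push(event);
}

// Example: the user rewrites an AI-generated draft instead of accepting it.
recordSignal({
  userId: "u-123",
  featureId: "draft-reply",
  suggestionId: "s-789",
  signal: "corrected",
  modelConfidence: 0.62,
  timestamp: Date.now(),
});
```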

10. Trust metric integration

Traditional metrics don't capture the unique aspects of AI interactions. Research workflows must incorporate new metrics that assess trust formation, appropriate reliance, and perceived system intelligence—moving beyond standard usability measures. One way to compute such measures is sketched after this item.

[From]

Standard usability measures applied to AI interactions

[To]

Metrics that capture trust formation, appropriate reliance, and perceived intelligence
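The sketch below shows one possible computation, building on the behavioral-signal idea from the previous item. The signal shape, the confidence threshold, and the definition of "appropriate reliance" are all assumptions made for illustration.

```typescript
// Illustrative trust proxies derived from implicit behavioral signals.
type Signal = {
  signal: "accepted" | "corrected" | "overridden" | "ignored";
  modelConfidence: number; // confidence the model reported for the suggestion
};

interface TrustMetrics {
  acceptanceRate: number;      // accepted / total
  overrideRate: number;        // corrected or overridden / total
  appropriateReliance: number; // decisions aligned with model confidence / total
}

function computeTrustMetrics(signals: Signal[], threshold = 0.7): TrustMetrics {
  const total = signals.length || 1; // avoid division by zero
  const accepted = signals.filter((s) => s.signal === "accepted").length;
  const overridden = signals.filter(
    (s) => s.signal === "corrected" || s.signal === "overridden"
  ).length;
  // "Appropriate reliance": accepting high-confidence output and pushing
  // back on low-confidence output both count as well-calibrated trust.
  const aligned = signals.filter(
    (s) =>
      (s.modelConfidence >= threshold && s.signal === "accepted") ||
      (s.modelConfidence < threshold && s.signal !== "accepted")
  ).length;
  return {
    acceptanceRate: accepted / total,
    overrideRate: overridden / total,
    appropriateReliance: aligned / total,
  };
}
```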

11. ML-UX insight integration

Research insights often get diluted when shared through reports. Organizations need workflows that connect UX research directly to model development—creating feedback loops where user behavior and preferences directly inform how models evolve.

[From]

Research insights shared through reports and presentations​

[To]

Direct integration of UX findings into model development​

Shift 3: Design Shift—From rigid to adaptive UX

This shift expands how teams create experiences—designing not just for interaction, but for variability, transparency, and recovery. These workflows enable flexible patterns and responsive systems.

Design workflows transform to support flexible, adaptive experiences:

12. AI component systems

Traditional design systems assume consistent behavior across instances. AI UX requires component systems that accommodate variability—with versions for different confidence levels, fallback states, and explanatory elements. A sketch of a confidence-aware component follows this item.

[From]

UI design systems with fixed components​​

[To]

Adaptive component libraries with confidence variations
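Here is a minimal sketch of the idea, assuming an invented Suggestion shape and arbitrary thresholds; a real design system would encode these variants as proper UI components rather than strings.

```typescript
// Illustrative adaptive component: one logical component, several rendered
// variants selected by model confidence, plus an explicit fallback state.
type Suggestion = { text: string; confidence: number } | null;

function renderSuggestion(s: Suggestion): string {
  if (s === null) {
    return "No suggestion available. Continue manually."; // fallback state
  }
  if (s.confidence >= 0.9) {
    return `Suggested: ${s.text}`;                        // assertive variant
  }
  if (s.confidence >= 0.6) {
    return `Possibly: ${s.text} (tap to confirm)`;        // hedged variant
  }
  return `Not sure. Did you mean "${s.text}"?`;           // exploratory variant
}
```

The thresholds (0.9, 0.6) are placeholders; in practice they would be set jointly with the ML team and revisited as the model changes.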

13. User feedback mechanism design

Feedback is typically collected through separate surveys or feedback forms. AI UX needs design workflows that integrate feedback collection into the core experience—creating mechanisms for users to correct, refine, or validate AI outputs during normal interaction, as sketched after this item.

[From]

Feedback collected outside core experience

[To]

Integrated feedback mechanisms that inform system learning
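The snippet below sketches the idea under assumed names: the user's ordinary edit action doubles as a structured feedback event, so no separate survey is needed.

```typescript
// Illustrative inline feedback: the correction itself is the feedback event,
// captured during normal interaction rather than through a separate form.
interface FeedbackEvent {
  suggestionId: string;
  action: "accept" | "edit" | "reject";
  editedText?: string; // the user's correction, a direct learning signal
}

function onUserEdit(
  suggestionId: string,
  original: string,
  edited: string
): FeedbackEvent {
  const event: FeedbackEvent = { suggestionId, action: "edit", editedText: edited };
  // In practice, queue the event for the ML team's evaluation or retraining
  // pipeline instead of just logging it.
  console.log(`Correction on ${suggestionId}: "${original}" -> "${edited}"`);
  return event;
}
```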

14. Uncertainty & recovery design reviews

Traditional design reviews assess interfaces against usability principles and brand guidelines, often focusing on happy paths. AI UX requires expanded review processes that evaluate how designs handle uncertainty, communicate confidence, and provide appropriate user control when systems are less certain. These reviews must also prioritize recovery paths—treating AI limitations and misunderstandings as core design challenges rather than edge cases.

[From]

Design critiques focused on ideal paths and visual details

[To]

Review processes that evaluate uncertainty handling and error recovery as core considerations

15. Explainability integration

Traditional interfaces rarely explain why something happened. AI UX requires design workflows that systematically identify where transparency is needed—creating appropriate mechanisms to reveal system reasoning without overwhelming users. One possible shape for such a mechanism is sketched below.

[From]

System logic hidden from users  

[To]

Transparency mechanisms that reveal appropriate system reasoning
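One possible shape, with invented fields: the model returns its top contributing factors alongside the recommendation, and the UI reveals only the strongest by default.

```typescript
// Illustrative explanation payload delivered alongside a recommendation,
// letting the UI reveal reasoning progressively instead of all at once.
interface ExplainedRecommendation {
  item: string;
  confidence: number;
  topFactors: { factor: string; weight: number }[]; // strongest first
}

// Surface only the single strongest factor by default; keep the full list
// behind a "Why am I seeing this?" affordance to avoid overwhelming users.
function shortExplanation(rec: ExplainedRecommendation): string {
  const top = rec.topFactors[0];
  if (!top) return "Suggested based on your overall activity";
  const pct = Math.round(rec.confidence * 100);
  return `Suggested because of ${top.factor} (${pct}% confident)`;
}
```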

Shift 4: Collaboration Shift—From silos to cross-functional teams

This shift redefines how teams collaborate—bridging language gaps, aligning goals, and co-owning success. These workflows formalize shared decision-making and integrated execution.

Collaboration transforms through workflows that bridge disciplinary boundaries:

16. Shared vocabulary development

Teams often speak different languages, with terms like "confidence" meaning different things to UX and ML teams. Organizations need structured processes for developing shared vocabulary—creating glossaries and alignment sessions that build common understanding.

[From]

Discipline-specific terminology

[To]

Shared glossaries and alignment sessions that build common understanding

17. Cross-functional prototyping​

Traditional prototypes show the interface but not the underlying logic. AI UX requires workflows that enable cross-functional prototyping—where UX designs incorporate actual model outputs and behavior, not just simulations. A sketch of such a harness follows this item.

[From]

UX prototypes focused on interface

[To]

Collaborative prototypes that incorporate real model behavior
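A sketch of one way to structure this, with a placeholder endpoint URL: the prototype accepts any "model source", so designers can start with canned outputs and swap in the live model for joint reviews without rebuilding the UI.

```typescript
// Illustrative prototype harness: the same UI can be driven by mocked
// outputs (early design) or a real model endpoint (joint reviews), so
// designers and ML engineers critique identical behavior.
type ModelSource = (prompt: string) => Promise<string>;

const mockModel: ModelSource = async () => "Canned suggestion for layout work";

// The URL below is a placeholder for a team's internal model service.
const liveModel: ModelSource = async (prompt) => {
  const res = await fetch("https://models.example.internal/suggest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const data = await res.json();
  return data.output;
};

async function runPrototype(source: ModelSource): Promise<void> {
  console.log(await source("Summarize this support ticket"));
}

runPrototype(mockModel); // swap in liveModel for cross-functional reviews
```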

18. Joint decision frameworks

Decisions about AI features often happen in silos, with technical teams making model choices and UX teams designing interfaces separately. Organizations need decision frameworks that bring these perspectives together—creating shared criteria that balance technical feasibility with user experience goals.

[From]

Sequential approvals across teams​

[To]

Integrated decision processes with shared criteria

19. Unified success metrics

Technical and UX teams often measure success differently. AI UX requires workflows that establish unified success metrics—creating dashboards that show both technical performance and user experience indicators side by side. A sketch of such a shared scorecard follows this item.

[From]

Separate KPIs for technical and UX performance

[To]

Integrated metrics that balance system and user success
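As an illustration, the scorecard below puts assumed technical and experience indicators in one structure with shared launch criteria; every field name and threshold is a placeholder, not a recommended target.

```typescript
// Illustrative unified scorecard: model metrics and experience metrics
// live side by side, so neither team can declare success alone.
interface UnifiedMetrics {
  // Technical indicators
  modelAccuracy: number;   // e.g., offline evaluation accuracy
  latencyP95Ms: number;    // 95th-percentile response time
  // Experience indicators
  suggestionAcceptanceRate: number;
  taskCompletionRate: number;
  reportedTrustScore: number; // e.g., 1-5 from an in-product pulse survey
}

// Shared release criteria: both halves of the scorecard must clear the bar.
function releaseReady(m: UnifiedMetrics): boolean {
  return (
    m.modelAccuracy >= 0.9 &&
    m.latencyP95Ms <= 800 &&
    m.suggestionAcceptanceRate >= 0.5 &&
    m.taskCompletionRate >= 0.8 &&
    m.reportedTrustScore >= 4.0
  );
}
```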

Evolution layer: Sustaining AI quality

AI UX doesn’t end at launch. This final workflow ensures teams can monitor, refine, and adapt experiences as systems learn and user needs evolve—turning AI into a living part of the product, not a one-time release.

20. AI monitoring system

Traditional UX often ends at launch, with limited post-release assessment. AI UX requires ongoing monitoring workflows—creating systems that track not just technical performance but experience quality as models evolve and user behavior changes. A drift-detection sketch follows this item.

[From]

UX assessment at launch only

[To]

Continuous monitoring of AI experience quality
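Here is a minimal sketch of experience-quality drift detection, using invented weekly snapshots: alongside standard model telemetry, the team watches for week-over-week shifts in how users respond to the AI.

```typescript
// Illustrative experience-quality monitor: track week-over-week drift in
// user response to the AI, not just model accuracy or uptime.
interface WeeklySnapshot {
  week: string;           // e.g., "2025-W14"
  acceptanceRate: number; // share of suggestions accepted
  correctionRate: number; // share of suggestions edited or overridden
}

function detectExperienceDrift(
  history: WeeklySnapshot[],
  tolerance = 0.1 // arbitrary alerting threshold, tuned per product
): string[] {
  const alerts: string[] = [];
  for (let i = 1; i < history.length; i++) {
    const prev = history[i - 1];
    const cur = history[i];
    if (prev.acceptanceRate - cur.acceptanceRate > tolerance) {
      alerts.push(
        `${cur.week}: acceptance fell from ${prev.acceptanceRate} to ${cur.acceptanceRate}`
      );
    }
    if (cur.correctionRate - prev.correctionRate > tolerance) {
      alerts.push(
        `${cur.week}: corrections rose from ${prev.correctionRate} to ${cur.correctionRate}`
      );
    }
  }
  return alerts;
}
```

A drop in acceptance or a rise in corrections can flag experience degradation even when technical metrics look healthy, which is exactly the gap this workflow is meant to close.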

ABOUT THE AUTHOR(S)

Marianne van Ooij is the founder of AI UX Navigator.
