
AI UX Landscape

Established AI industry frameworks

The AI field has evolved through significant contributions from industry leaders, research institutions, and collaborative initiatives. These frameworks offer critical insights into principles like fairness, explainability, and human-AI interaction—providing the groundwork for responsible and effective AI systems. 

Industry resources

The following curated list highlights some of the most influential frameworks that inform and guide the AI community.

Industry frameworks

Google’s PAIR (People + AI Research) guidelines
Focus: Human-centered approach to AI development
Key components: Explainability, user control, fairness, transparency
Value for UX leaders: Practical design guidance for creating user-first AI systems grounded in transparency and trust

Microsoft's Guidelines for Human-AI Interaction
Focus: 18 research-backed guidelines for human-AI interaction
Key components: Initially available, continuously helpful, contextually aware, respectful
Value for UX leaders: Actionable principles for designing intuitive and supportive AI behavior

IBM’s AI Design Principles
Focus: Enterprise-focused responsible AI design
Key components: Fairness, explainability, robustness, transparency, privacy
Value for UX leaders: Emphasis on building trustworthy systems at scale

Apple’s Human Interface Guidelines for Machine Learning
Focus: Integrating ML into consumer-facing Apple products
Key components: Accuracy, privacy, transparency, user feedback
Value for UX leaders: Platform-specific guidance that balances performance with usability and safety

Intel’s Human-AI Interaction Framework
Focus: Human-machine collaboration and control
Key components: Trust calibration, appropriate delegation, human-system cooperation
Value for UX leaders: Useful for designing systems requiring adaptive handoffs between human and machine

Academic & research frameworks

HAX (Human-AI Experience) Framework
Focus: Understanding how people perceive and interact with AI
Key components: Social perception, trust dynamics, collaboration models
Value for UX leaders: Insight into the psychological and experiential dimensions of AI use

FATE (Fairness, Accountability, Transparency, Ethics)
Focus: Ethical challenges in AI, studied by Microsoft's FATE research group
Key components: Bias mitigation, governance, transparency practices
Value for UX leaders: Groundwork for integrating responsible AI into product development

HCXAI (Human-Centered Explainable AI)
Focus: Making AI logic understandable and trustworthy
Key components: Mental models, explanation interfaces, adaptive trust
Value for UX leaders: Tools and strategies for building interpretable AI experiences

Industry collaborative initiatives

Partnership on AI Design Guidelines
Focus: Cross-industry principles for AI development
Key components: Human well-being, fairness, explainability, safety
Value for UX leaders: Shared values to align cross-functional teams on responsible AI design

IEEE Ethically Aligned Design
Focus: Global standards for ethical technology
Key components: Human rights, data agency, transparency, social impact
Value for UX leaders: A comprehensive set of standards for aligning AI with human values

Specialized frameworks

AI Explainability 360 (AIX360)
Focus: Technical and UX strategies for AI transparency
Key components: Diverse explanation techniques, evaluation metrics
Value for UX leaders: Toolkit for selecting and implementing explainability methods

Responsible AI Practices Framework (e.g., Google, Microsoft)
Focus: Operationalizing responsible AI practices
Key components: Risk assessment, fairness testing, impact reviews, deployment checks
Value for UX leaders: Process-driven roadmap for embedding responsible AI throughout the product lifecycle

Nielsen Norman Group (NN/g) on Artificial Intelligence
Focus: Human-centered design principles for AI and machine learning
Key components: UX roles in AI projects, mental models, trust-building patterns, usability of AI interfaces
Value for UX leaders: Provides research-backed guidance for designing usable, trustworthy, and explainable AI experiences

Mapping the AI UX landscape

The AI UX field includes a rich range of models—from theoretical ethics and human-AI interaction principles to product-level explainability and interaction design. Many of these offer essential guidance on what to consider when designing AI systems.

This chart positions these frameworks along two axes:

  • From AI optimization to AI adoption, indicating focus on systems vs. users

  • From theoretical to practical, indicating conceptual guidance vs. operational application


The Four Shifts of AI UX sits in the top-right quadrant: a practical, adoption-focused framework designed to support cross-functional teams through the full transformation process. 

Rather than replacing other models, it builds on their foundations—helping organizations connect principle to practice and move from inspiration to implementation.

Positioning frameworks from foundational guidance to operational adoption. The Four Shifts model emphasizes practical integration at scale, building on and extending the strengths of established tools.
Acknowledgment

The work shared on this site builds on the foundations laid by leading organizations and research communities. We acknowledge the contributions of Google, Microsoft, IBM, Apple, and academic institutions whose public frameworks, tools, and insights have shaped the evolution of AI and informed this perspective on AI UX transformation.  

Their efforts have helped move the field from experimentation toward implementation—and continue to inspire more adaptive, responsible, and human-centered approaches.
