Understanding User Mental Models Around AI
When using intelligent, automated solutions, users develop mental models that shape how they understand and perceive the value and quality of an AI system.
“Human-centered design has expanded from the design of objects to the design of algorithms that determine the behavior of automated or intelligent systems.”
Harry West, CEO of frog
What are the mental models that users form around a digital solution?
A mental model is an individual’s cognitive representation of how a particular system, concept, or phenomenon works. Mental models are formed by a person’s interpretation of their experiences, beliefs, and expectations, and they influence how that person perceives, reasons, and solves problems in a given situation. They can be conscious or unconscious, and they vary in complexity with the individual’s expertise, familiarity with the subject matter, and ability to process and organize information. Mental models constantly evolve as people adjust them in response to new information and experiences. They apply across domains, including decision-making, problem-solving, and learning, and they have a significant impact on an individual’s behavior and performance.
Users’ mental models of an AI system revolve around three outcomes:
- How the system works — How to use it? — Users may develop an understanding of how the system works and how it arrives at its recommendations. This mental model is influenced by the transparency of the system.
- System’s impact on user work — When to use it? — Users may develop an understanding of how the system impacts their work and how they can best utilize the system to improve their productivity. This mental model is influenced by the level of control users have over the system.
- System’s limitations — What can it do? — Users may develop an understanding of the system’s limitations, including when it may not be accurate or when it may not provide the best recommendations. This mental model is influenced by the trustworthiness and value of the system.
Novelty
PROBLEM
When the context of use is novel to users, their judgments of the system’s reliability and trustworthiness are prone to bias.
SOLUTION
Make the entire service as transparent as possible to build trustworthy solutions that operate in people’s best interests.
Why?
- Sustains trust in the brand/organization
- Avoids confusion and disappointment
- Helps users access the system’s value
- Lowers the drop-out rate
- Establishes trust in the system
- Makes interactions easier
Design for Trust & Transparency
When confronting users with novel systems, it is our job to help them understand how the system works (Explainable AI), to be transparent about its abilities and help construct useful mental models (Managing Expectations), and to make users feel comfortable in their interactions (Failure + Accountability): assume failure, design graceful recoveries, take accountability for mistakes, and minimize the cost of errors for your user.
Transparency is key to building trust in the system and to honoring the trust users place in your solution.
What to keep in mind — User Trust & Transparency
Give accurate and reliable information
Give users suggestions that help them make decisions by leveraging multiple data sources. The information must be objective and serve users’ best interests. To build user trust and comfort, communicate what the user’s collected data is used for.
- Explainability — How will we help our users understand certain outcomes? (see the sketch after this list)
- Managing Expectations — How will we establish realistic expectations?
- Graceful Failure & Accountability — How will we design for trust in case of failure?
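To make these questions concrete, here is a minimal TypeScript sketch of one possible approach; every name, threshold, and message below is an illustrative assumption, not a prescribed API. A suggestion carries plain-language reasons and its data sources (Explainability), and the system declines to suggest anything when its confidence is too low (Graceful Failure).

```typescript
// Hypothetical shape of an explainable suggestion; field names are assumptions.
interface Suggestion {
  value: string;      // the recommendation itself
  confidence: number; // model confidence in [0, 1]
  reasons: string[];  // plain-language answers to "why am I seeing this?"
  sources: string[];  // data sources the suggestion draws on
}

// Assumed product-specific threshold below which we stop pretending to know.
const CONFIDENCE_FLOOR = 0.7;

// Decide how (and whether) to present a suggestion to the user.
function present(s: Suggestion): string {
  if (s.confidence < CONFIDENCE_FLOOR) {
    // Graceful failure: admit uncertainty instead of guessing,
    // and keep the user in the decision loop.
    return (
      "We are not confident enough to make a suggestion here yet. " +
      "You can proceed manually or give us more context."
    );
  }
  // Transparency: pair every recommendation with its reasons
  // and the data it is based on.
  return (
    `${s.value}\n` +
    `Why: ${s.reasons.join("; ")}\n` +
    `Based on: ${s.sources.join(", ")}`
  );
}
```

The point is the contract, not the code: every suggestion ships with its “why,” and low confidence produces an honest fallback rather than a confident wrong answer.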
Agency
PROBLEM
When there are many new elements to learn, or when the AI has a high level of automation, make sure to balance user autonomy and control over the system.
SOLUTION
Design mechanisms to give people control over the level of automation they are comfortable with.
Why?
- Respects consent
- Avoids the feeling of being out of control
- Avoids the feeling of being surveilled
- Creates more user value through customization
- Reveals user needs through feedback
Design for User Autonomy & Control
When a system challenges the agency or autonomy of our users, they need to feel like they’re in charge of it.
People have justified concerns about giving up their agency to (semi-)autonomous systems and about sharing the personal data required to make them work well (Data Privacy + Security). We have to respect the human need for autonomy (User Control + Customization) and create mechanisms to control the level of system automation (Machine Teaching + User Feedback): users need a way to exercise consent and control over the system and their data based on their individual and contextual needs.
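One way such a mechanism can look in practice is sketched below in TypeScript; the three tiers, names, and messages are assumptions for illustration, not a standard taxonomy. The user chooses an automation level, and every automated action is gated through that choice.

```typescript
// Illustrative automation tiers the user can switch between at any time.
type AutomationLevel = "suggest" | "confirm" | "auto";

interface UserPreferences {
  automation: AutomationLevel;
}

// Gate every automated action through the user's chosen level.
async function applyAction(
  describe: string,                          // human-readable description of the action
  action: () => Promise<void>,               // the action itself
  prefs: UserPreferences,
  askUser: (msg: string) => Promise<boolean> // UI hook for explicit consent
): Promise<void> {
  switch (prefs.automation) {
    case "suggest":
      // Lowest automation: surface the suggestion, never act on it.
      console.log(`Suggestion: ${describe} (tap to apply)`);
      return;
    case "confirm":
      // Medium automation: act only with explicit consent.
      if (await askUser(`Apply "${describe}"?`)) {
        await action();
      }
      return;
    case "auto":
      // Full automation: act immediately, but keep an undo path visible.
      await action();
      console.log(`Applied "${describe}". Undo available.`);
      return;
  }
}
```

The design choice that matters here is that the default tier, and every change to it, belongs to the user: the system never silently escalates its own level of automation.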
What to keep in mind — User Autonomy & Control
Set expectations, and engage users as collaborators
Communicate expectations to users immediately, so they know what your AI solution can and cannot do. Collaborate with users so they feel in charge of the decision-making and want to explore your AI solution. This builds trust and creates a more successful user experience.
- User Feedback — How will we help our users provide feedback to the system? (see the sketch after this list)
- User Autonomy — How will the user be able to customize their experience?
- Data Privacy — How will you collect, store, and handle user data and communicate that with them?
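As a hypothetical illustration of the feedback question above, the sketch below records an explicit thumbs-up/down rating with an optional free-text reason; the identifiers and shapes are assumptions, and the learning step that consumes this log lives elsewhere.

```typescript
// A minimal feedback record; field names are hypothetical.
interface FeedbackEvent {
  suggestionId: string;
  rating: "up" | "down";
  reason?: string;   // optional free-text "why"
  timestamp: number;
}

const feedbackLog: FeedbackEvent[] = [];

// Record explicit feedback so it can later be used to re-rank
// or retrain future suggestions.
function recordFeedback(
  suggestionId: string,
  rating: "up" | "down",
  reason?: string
): void {
  feedbackLog.push({ suggestionId, rating, reason, timestamp: Date.now() });
}

// Example: a user rejects a suggestion and explains why.
recordFeedback("sugg-42", "down", "Not relevant to my project");
```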
Capabilities
PROBLEM
Does the problem even need AI? The most important thing is to consider the problem being solved, and only then ask whether AI is uniquely positioned to solve it. Benchmark usefulness against the use case rather than the solution.
SOLUTION
Evaluate whether AI will improve or degrade the problem-solution fit. Instead of asking “Can we use AI to ______?”, ask “How might we solve ______?” and “Can AI uniquely solve this problem?”. Valuing the problem instead of the technology surfaces the appropriate exchange value: convenience, expectations, and relevance.
Why?
- Ethics
- Accountability
- Impact
- Fairness
- Inclusiveness
- Prevention of harmful bias
Design for Value Alignment
Deploying AI systems across layers of society will affect the lives of individuals and groups across the globe in different and sometimes unexpected ways.
Operating at this unprecedented scale and complexity, we must be mindful of biases, risks, system dynamics, and consequences (Accountability) so we can make thoughtful trade-offs in our AI applications (Fairness + Inclusiveness). To shape this technology to help humanity, strive for value alignment between human and machine (and those operating the machine!) by integrating ethics at the core of your projects (Ethics).
What to keep in mind — Value Alignment
Explain the benefit, not the technology
Help users understand your product’s capabilities rather than what’s under the hood. No matter how novel your use of AI, when explaining your AI-powered product to your users, focus on conveying how it makes part of the experience better or delivers new value, rather than on how the underlying technology works.
- Accountability — How will you turn needs into parameters?
- Fairness + Inclusiveness — How will you prevent bias and guard inclusivity? (see the sketch after this list)
- Ethics — Does the cost of errors or negative impacts outweigh the benefits that AI provides?
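Teams sometimes operationalize the fairness question with simple screening checks such as the “four-fifths rule” for disparate impact: the positive-outcome rate for any group should be at least 80% of the best-off group’s rate. The TypeScript sketch below shows such a check; the data shapes and group labels are illustrative assumptions, and a real audit needs far more than one metric.

```typescript
// One model decision for one person; shapes are illustrative.
interface Outcome {
  group: string;     // e.g., a demographic attribute being audited
  selected: boolean; // did the model produce a positive outcome?
}

// Positive-outcome rate per group.
function selectionRates(outcomes: Outcome[]): Map<string, number> {
  const totals = new Map<string, { selected: number; all: number }>();
  for (const o of outcomes) {
    const t = totals.get(o.group) ?? { selected: 0, all: 0 };
    t.all += 1;
    if (o.selected) t.selected += 1;
    totals.set(o.group, t);
  }
  const rates = new Map<string, number>();
  for (const [group, t] of totals) {
    rates.set(group, t.selected / t.all);
  }
  return rates;
}

// Flag groups whose rate falls below 80% of the highest group's rate.
function flagDisparateImpact(outcomes: Outcome[]): string[] {
  const rates = selectionRates(outcomes);
  const best = Math.max(...rates.values());
  const flagged: string[] = [];
  for (const [group, rate] of rates) {
    if (rate < 0.8 * best) flagged.push(group);
  }
  return flagged;
}
```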
TAKEAWAYS
The mental models that users develop around AI systems shape how they understand and perceive the system’s value and quality. To build user trust and transparency, design for explainability, manage expectations, and plan for graceful failure and accountability. To give users autonomy and control, design mechanisms that let people choose the level of automation they are comfortable with. Finally, before implementing AI, ask whether AI is necessary to solve the problem at hand.