Introduction to Artificial Intelligence for UX Designers

Dano Qualls
Published in UX IRL
Aug 16, 2016 · 10 min read

If you work in the world of product design, you’re probably hearing a lot about how artificial intelligence (AI) is going to change everything in the next few years. If you don’t have a grasp of what AI is and how it works, you may worry that you’re behind. This article should help you understand the basics of AI and feel that you, as a designer, are not actually that far behind (I can’t say the same for developers). It takes a little bit of work to get through this article, but after reading it you should feel that you are capable of designing for new developments in AI.

Although the title of this article is “Introduction to Artificial Intelligence for UX Designers,” we can’t get to the “for UX Designers” part until we discuss what AI is and how it works, at least at a basic level (the “designing for AI” courses at MIT and Bentley follow the same structure). UX designers who understand how HTML/CSS/JS work are better at designing web products than designers who don’t. They don’t have to write perfect code, but if they understand how a developer will turn their design into working code, they can design products that are realistic to build, as well as more usable. For that reason, this article comes in three parts: 1. What is AI?, 2. AI Techniques, 3. Designing for AI.

Part 1. What is Artificial Intelligence?

We often think of artificial intelligence (AI) as something that’s coming in the future (for good or bad), but we are already surrounded by it. Once an AI breakthrough becomes commonplace, we stop seeing it as AI. Here are a few examples already in everyday use:

  • Netflix suggestions of movies and TV shows based on your viewing history
  • credit card fraud detection
  • computers in hospitals helping people interpret medical images or heart sounds
  • Siri or Google responding to your voice requests

AI is all about computer systems able to perform tasks that normally require human intelligence.

The key there is “human” intelligence, and the key to human intelligence is handling ambiguity. Computers are great at precision and bad at ambiguity. Humans are great at ambiguity and bad at precision. AI is about creating computers that can handle ambiguity more like a human.

What does that mean? Examples of precise tasks include:

  • crunching complex arithmetic
  • identifying the specific color of a pixel
  • searching for a specific word in a 1,000-page novel

These are tasks that would be impressive for a person to do, but computers do them with ease. That’s because these aren’t examples of intelligence, but examples of computing — that’s why they’re called “computers”.

Where computers have trouble is solving vague problems with an unclear path to the answer, especially in changing environments. Examples include recognizing a face or making a decision. We can divide these ambiguous tasks that require intelligence into mundane tasks and expert tasks.

Mundane and Expert Tasks

Mundane tasks are actions that most of us do without giving them much thought, but they require real intelligence. Going shopping is a great example. We plan what we need, drive to the store, navigate around the store, pick things up, and interact with people. No AI system exists today that can do all of these things. Alison Cawsey (whose book informed much of this article) lists several mundane tasks that would contribute to this grocery-shopping robot, including:

  • Planning: the ability to decide on a good sequence of actions to achieve our goals
  • Vision: the ability to make sense of what we see
  • Robotics: the ability to move and act in the world, possibly responding to new perceptions
  • Natural Language: the ability to communicate with others in English or another human language

Unlike mundane tasks, which most people can easily do, expert tasks require specialized training that few people have. Examples of this include:

  • medical diagnosis
  • computer configuration
  • financial planning
  • playing chess or Go

Because expert tasks focus on solving a specific problem with well-defined rules, they’re actually easier to achieve with AI than the mundane tasks are. IBM’s Watson and Google’s DeepMind may be Jeopardy and Go champions, but no existing AI system can navigate around a messy house and communicate with people as well as a two-year-old.

Building Blocks of AI

So how does AI work? Let’s start by asking how human intelligence works.

  • We receive raw data through our senses
  • We extract workable information from that raw data
  • We apply reason to the information and make decisions
  • We act on those decisions, observe the result, and learn from the process

AI can do this, too, if we turn the vast, ambiguous information in the world into discrete pieces of relevant data. Computers can then apply AI techniques to this data and create useful outcomes.

Part 2. AI Techniques

The many different ways of imitating human intelligence are known as AI techniques. Certain techniques are great at solving certain kinds of problems, but completely useless at solving others. None of the AI techniques that exist today are capable of bringing to life the mind-blowing AI you see in sci-fi movies. If that “strong AI” ever comes into existence, it will be after completely new AI techniques are invented.

For the sake of simplicity let’s talk about just two broad approaches to AI: rule-based systems and machine learning systems.

The big difference is that a human being has to program every single rule into a rule-based system, whereas a machine learning system works out the rules for itself from examples.

Rule-Based Systems

Software developers can program a system to solve a specific problem by giving it rules. Examples include basic chatbots, medical diagnosis tools, and even a fire suppression sprinkler system. A simple example, like the sprinkler system, might only have a few rules, such as “If ‘hot’ and ‘smoky,’ then turn on sprinkler.” A medical diagnosis tool would have many more rules about specific health measurements and what diagnosis they likely point to. If the team of doctors building medical diagnosis software wants to refine a diagnosis or add a new one, they have to add more rules.
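
To make that concrete, here is a minimal sketch of the sprinkler rule in Python. The sensor names and the temperature threshold are invented for illustration; a real system would be driven by actual sensor hardware.

    # A minimal rule-based system: every behavior is an explicit, hand-written rule.

    def should_activate_sprinkler(temperature_f: float, smoke_detected: bool) -> bool:
        """Rule: if 'hot' and 'smoky', then turn on the sprinkler."""
        is_hot = temperature_f > 135  # "hot" is a hand-picked threshold (hypothetical)
        return is_hot and smoke_detected

    print(should_activate_sprinkler(150, True))   # True: hot and smoky
    print(should_activate_sprinkler(150, False))  # False: hot, but no smoke

Adding a new behavior means a human writing a new rule; nothing here is learned from data.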

Another form of rule-based AI is a search program. This is a good technique for solving planning problems with many possible answers, but one best answer. Two examples of search problems are solving a sudoku puzzle and planning a driving route.

  • You start with an initial state, such as a blank Sudoku puzzle or a car parked in Boston. Next, you provide the goal state, such as a completed Sudoku puzzle or New York City. Finally, you provide the rules — in this case the rules of Sudoku or of driving only in the right direction on actual roads.
  • If the problem is a small one, the search algorithm can use brute force to systematically and exhaustively check every possible action and see if it reaches the goal. For large problem spaces, this would be too time-consuming, so the AI prioritizes the most promising actions and tries those first. The program will examine thousands of possible Sudoku moves or thousands of possible driving routes, and then provide you with the best one. You get to define what “best” means, whether that is least time, shortest distance, or least time on two-lane highways. (A minimal sketch of this kind of search follows below.)
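
Here is a brute-force search sketch in Python, assuming a tiny hand-made road graph. The cities and connections are invented for illustration; a real planner would search an enormous map and prioritize promising routes instead of checking everything.

    # Brute-force search: explore states outward from the start until we hit the goal.
    from collections import deque

    roads = {  # hypothetical adjacency list of cities
        "Boston": ["Worcester", "Providence"],
        "Worcester": ["Hartford"],
        "Providence": ["New Haven"],
        "Hartford": ["New Haven"],
        "New Haven": ["New York City"],
        "New York City": [],
    }

    def best_route(start, goal):
        """Breadth-first search: returns the route with the fewest stops."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            route = frontier.popleft()
            if route[-1] == goal:
                return route
            for next_city in roads[route[-1]]:
                if next_city not in visited:
                    visited.add(next_city)
                    frontier.append(route + [next_city])
        return None  # no route exists

    print(best_route("Boston", "New York City"))
    # ['Boston', 'Providence', 'New Haven', 'New York City']

Here “best” is defined as fewest stops; swapping in distances or travel times would change which route wins.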

Machine Learning

Machine learning takes an entirely different approach from rule-based systems. Rather than telling a computer “If A and B, then C,” the system creators feed the computer lots of examples and let the computer learn to identify that “If A and B, then C.” One of the most frequent uses of machine learning is classification: show the computer something and the computer tells you what it is.

Here’s how it works. Imagine I show you pictures of different animals. Can you identify which ones are dogs? Of course. Now imagine that I have never seen any animal in my entire life. Can you tell me, using just words, how to identify dogs? Is your definition good enough to tell a cat and a Chihuahua apart? Or a Great Dane from a horse? A computer using machine learning should be able to come up with a model that can do this.

The creator of the dog-spotting algorithm would likely use “supervised learning.” That’s when the programmer shows a picture of a dog and says, “this is a dog.” They would then show a picture that is not a dog and say, “this is not a dog.” So how does a computer “learn”? The key is breaking down the attributes of “dogginess” into discrete traits. A training table like the one below (reconstructed here as an illustration; the animals and values are invented) shows how a computer would start to recognize dogs, although it would probably use binary “yes/no” questions instead of these analog questions.

    Animal       Size     Fur      Sound    Wags tail?   Dog?
    Labrador     Medium   Furry    Bark     Yes          Yes
    Chihuahua    Tiny     Short    Bark     Yes          Yes
    Great Dane   Huge     Short    Bark     Yes          Yes
    Cat          Small    Furry    Meow     No           No
    Horse        Huge     Short    Neigh    No           No

The computer will find the statistical relationship between traits and “dogginess,” and then start to predict the likelihood that an animal is a dog. Of course, it will take a lot more than four traits and five examples, but that’s the idea. So if the computer is shown another picture and not told whether or not it’s a dog, the computer can compare it to known dogs, calculate a “dogginess” score, and give a prediction.
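
A minimal sketch of that idea in Python, using the classic perceptron learning rule on invented binary traits (a real classifier would use far more data and far richer features than these):

    # A toy supervised learner: nudge trait weights until the "dogginess"
    # score separates the labeled dogs from the labeled non-dogs.

    examples = [
        # (barks, wags_tail, has_whiskers, four_legs) -> is_dog (all invented)
        ((1, 1, 0, 1), 1),  # Labrador
        ((1, 1, 0, 1), 1),  # Chihuahua
        ((0, 0, 1, 1), 0),  # cat
        ((0, 0, 0, 1), 0),  # horse
        ((1, 1, 0, 1), 1),  # Great Dane
    ]

    weights = [0.0, 0.0, 0.0, 0.0]
    bias = 0.0

    def predict(traits):
        score = bias + sum(w * t for w, t in zip(weights, traits))
        return 1 if score > 0 else 0  # the "dogginess" threshold

    # Perceptron rule: whenever a prediction is wrong, adjust the weights.
    for _ in range(10):
        for traits, label in examples:
            error = label - predict(traits)
            weights = [w + error * t for w, t in zip(weights, traits)]
            bias += error

    # A new, unlabeled animal: barks, wags its tail, no whiskers, four legs.
    print(predict((1, 1, 0, 1)))  # 1 -> predicted "dog"

No one wrote a rule saying “barking means dog”; the weights that make barking matter were learned from the labeled examples.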

Deep Learning

Rule-based systems and a form of machine learning known as “neural networks” have been around since the 1950s. So why have we seen an explosion of AI interest and breakthroughs in the past few years? Researchers today have access to more computing power than ever before. Neural networks in the 1960s had a few layers of neurons, but neural networks today can have hundreds of layers and over a billion connections. “Deep learning” gets its name from this jump from a few layers to hundreds of layers of neurons. Deep learning is basically massively powerful neural networks doing machine learning.
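
As a rough illustration of what “layers” means, here is a toy forward pass through a three-layer network (the layer sizes and random weights are arbitrary; a deep network stacks many more such layers and learns its weights from data rather than leaving them random):

    # A toy neural network forward pass: each layer is a weight matrix
    # followed by a nonlinearity. Everything here is untrained and arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    layers = [
        rng.standard_normal((4, 8)),  # 4 input traits -> 8 neurons
        rng.standard_normal((8, 8)),  # 8 neurons -> 8 neurons
        rng.standard_normal((8, 1)),  # 8 neurons -> one "dogginess" score
    ]

    def forward(x):
        for weights in layers:
            x = np.maximum(0, x @ weights)  # ReLU: keep only positive signals
        return x

    traits = np.array([1.0, 1.0, 0.0, 1.0])  # barks, wags tail, whiskers, four legs
    print(forward(traits))  # a meaningless score until the weights are trained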

Bonus Section: Syntax and Semantics (Working with Knowledge)

If you get lost reading this section, just skip it and go on to Part 3. But if you can digest it, this is really useful stuff. Let’s start by thinking about what happens when you talk to Siri or Google Now. You’re not just using “artificial intelligence,” you’re using four distinct stages of natural language processing:

  • Speech recognition: analyzing the raw sounds made by your voice and turning them into symbols in a database
  • Syntactic analysis: using the grammar and structure of the English language to understand the syntax of what is said
  • Semantic analysis: piecing together the meaning of words to find the meaning of the whole sentence
  • Pragmatic analysis: analyzing your meaning in context of everything else the computer knows about you, your situation, or anything else it knows about the outside world

The gap between syntax and semantics is one of the hardest things to solve. Syntax is made up of grammar and rules, which computers are great at. Semantics is the intent behind what you say, and even two people who grew up speaking the same language can misinterpret what the other one means. The knowledge representation language that a system uses must:

  • accurately represent facts in a clear and precise way
  • allow the user or system to deduce new facts from existing knowledge

The ability to take a leap and deduce new knowledge from existing facts is one of the great benefits of AI. For example, if the system knows that all elephants have trunks, and that Dumbo is an elephant, then the system should know that Dumbo has a trunk, even though nobody specifically told the system that fact. When a system can make new connections and create new facts on its own, it appears to imitate human intelligence.
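
Here is a minimal sketch of that kind of deduction in Python, using a hand-rolled fact store and a single rule (real knowledge representation languages are far richer than these invented tuples):

    # A toy inference engine: derive new facts from stored facts and rules.

    facts = {("is_a", "Dumbo", "elephant")}
    rules = [
        # If X is an elephant, then X has a trunk.
        (("is_a", "elephant"), ("has", "trunk")),
    ]

    def deduce(facts, rules):
        """Forward chaining: keep applying rules until no new facts appear."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for (rel, category), (new_rel, attribute) in rules:
                for fact in list(known):
                    if fact[0] == rel and fact[2] == category:
                        derived = (new_rel, fact[1], attribute)
                        if derived not in known:
                            known.add(derived)
                            changed = True
        return known

    print(("has", "Dumbo", "trunk") in deduce(facts, rules))  # True

Nobody stored the fact that Dumbo has a trunk; the system derived it from the general rule and the specific fact.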

If you want to learn more about the syntax-semantics gap and whether or not computers will ever be able to “really think,” read up on the Chinese room thought experiment.

Part 3. Designing for AI

Now we’re getting past the intro to AI and bringing it back to the world of UX design. And this article is basically over because the two most important things to know before designing intelligent interfaces are:

  • the basics of how the AI works
  • good old-fashioned principles of usability and human factors

Digital product designers, and especially web designers, are kind of spoiled. Most of the products we make are built on well-understood interactions. People know what common software is capable of and they know how to interact with it. People know how to fill out forms. We’ve done usability testing on everything and made the web easy everywhere, often by making it fit common design practices. That’s why your friend with an idea can sketch something on a napkin that looks like a real app. It’s crude, but it’s not unrealistic. The time-tested elements are all there.

So what does that have to do with artificial intelligence?

We don’t yet have established patterns for interacting with AI, so we, as designers, must make that path clear.

More importantly, we must make the interactions clear to humans. Remember what we said about humans in the first section: “Computers are great at precision and bad at ambiguity. Humans are great at ambiguity and bad at precision.”

Help people understand what the system can do and how they can use it. Give people flexible ways to express their intent while you convert that into the proper syntax behind the scenes. This is nothing new for trained designers, but it is more critical when the users don’t know what to expect.

Users will bring mental models of what they already know to your product. Find out what those models are and build on them. Use scaffolding to get them moving in the right direction. Your ambiguity-prone users need an interface that can handle that ambiguity, or at least onboarding that helps them provide more precise inputs.

In an article about designing chat interfaces, Matt Mariansky offers specific steps to onboard users. He says to begin by having the system suggest something users can ask for, providing feedback on what’s happening, suggesting next steps, and unlocking achievements as people spend more time with it.

Don’t expect your users to remember information or steps as they move through the program. Use feedback to let users know their status. Prevent user errors, and help users recover from errors. You probably recognize these from Nielsen’s usability heuristics. You may not have looked at these heuristics since your UX design training, so go revisit them and think about how they can improve your AI product.

You’ve probably heard that universal design is good design. The same is true of AI — good design for AI is just good design.
