The importance of UX in AI

Yel Legaspi
6 min read · Feb 24, 2021


A black and white photo of a robot's back. Photo by Jesse Chan on Unsplash.

Artificial Intelligence has captured our imagination since the invention of computers. From its early days, we've had high expectations of what it can do for us (think Jetsons) and to us (think Terminator 2). However, the effort to make AI (and technology in general) genuinely usable and useful started late: HCI, Don Norman, and other pioneering UX drives came long after the technology itself, and the result is an unbalanced expectation of what AI can do.

But before we go further, I think it is important to first define what AI is, at least in the scope of this post. A definition from Britannica states:

Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

That definition, I think, is too loose, and loose definitions feed the inflated expectations this post will stress. Instead, I would borrow the definition of AI from the book AI & UX:

“… artificial intelligence is any technology that appears to adapt its knowledge or learns from experiences in a way that would be considered intelligent.”

This definition would cover:

  • Your smartphone OS automatically adjusting your schedule when a timezone change happens
  • Siri giving you the answer to 13 * 476
  • Amazon factory robots
  • …and everything in between that can learn, or even "seem to learn", and respond based on that learning (a toy sketch of this follows below).
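To make that "seems to learn" clause concrete, here is a minimal, hypothetical sketch (my own illustration, not something from AI & UX): a few lines of Python that tally a user's past choices per context and suggest the most frequent one. Even something this trivial can feel adaptive to the person using it.

```python
from collections import Counter, defaultdict

class ToySuggester:
    """A toy illustration of 'appearing to learn': tally a user's past
    choices per context and suggest the most frequent one."""

    def __init__(self):
        # Maps a context (e.g. "friday_evening") to a Counter of choices.
        self.history = defaultdict(Counter)

    def record(self, context, choice):
        """Remember one choice the user made in a given context."""
        self.history[context][choice] += 1

    def suggest(self, context):
        """Return the most frequent past choice for this context,
        or None if we have no history for it yet."""
        if not self.history[context]:
            return None
        return self.history[context].most_common(1)[0][0]

# After a handful of observations, the suggester "adapts" to the user.
s = ToySuggester()
s.record("friday_evening", "pizza")
s.record("friday_evening", "pizza")
s.record("friday_evening", "sushi")
print(s.suggest("friday_evening"))   # pizza
print(s.suggest("monday_morning"))   # None: no history yet
```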

The current state of AI

AI has been all the rage these past couple of years, and for good reason. The technologies behind AI have progressed exponentially (as most technologies do), and its applications have grown not just vertically but across the board. It is now common to see or hear AI in its many forms in the financial and health industries, and watching Boston Dynamics' robots work through obstacles in popular media gives us the feeling of "we're almost there". AI is becoming as ubiquitous as the internet itself, constantly intertwined with the apps and products we use, whether we notice it or not. We even carry an AI in our pockets at all times.

However, that is not the whole state of AI. How do we, the end users, perceive our experiences with it? Personally, yes, I am impressed with some of my experiences with AI, but when I compare them against what the technology can actually do (and its exponential growth), why does the experience seem to lag? Worse, at times it not only fails to serve its purpose, it degrades the usefulness of the product (e.g. badly implemented chatbots).

As mentioned, my personal experiences with AI have been things that either wow me or leave me frustrated. Here's something simple that I value, where AI helps: fraud detection. Banks and credit institutions have technology, specifically AI technology, that helps them detect whether a purchase was made fraudulently. That's incredible, and the value goes both ways: to the product owners and to their users. Here's one that frustrates me. Up until last week, I had to cancel several orders in Uber Eats because I have a habit of not checking the delivery address, and I kept mistakenly ordering food to my house while I was at a friend's house. The app had learned what I was likely to order, but until recently it hadn't learned that I was in a different place.
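As a rough, hypothetical illustration (my own toy example, not how any real bank implements it), the spirit of fraud detection can be reduced to flagging transactions that deviate sharply from a customer's learned spending pattern:

```python
import statistics

def is_suspicious(past_amounts, amount, z_threshold=3.0):
    """Toy anomaly check: flag a transaction whose amount sits more than
    z_threshold standard deviations from the customer's historical mean.
    Real systems use far richer signals (merchant, location, time, ...)."""
    if len(past_amounts) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(past_amounts)
    stdev = statistics.stdev(past_amounts)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

history = [12.50, 9.99, 15.00, 11.25, 13.40]
print(is_suspicious(history, 14.00))   # False: in line with past spending
print(is_suspicious(history, 950.00))  # True: a sharp deviation
```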

We've all had similar experiences, good and bad. But it seems that AI isn't living up to its hype, which can be confusing given all the amazing things it can do. The thing is, this has happened several times before. The authors of AI & UX detail the history of AI from its genesis after World War II, when the focus was machine translation (translating one language into another using a computer program). Companies and universities received great support (funding) from governments, but that support soon dwindled when careful analysis of what AI could deliver did not meet expectations. This bred a mistrust between the producers and consumers of AI that lasted quite some time, until the hype started building again in the late 80s and early 90s, with sci-fi movies pushing AI into another "over-promise and under-deliver" scenario. And this is key to how our impressions of AI are shaped.

Personally, I don't think this hype alone is necessarily a bad thing. Expectations put pressure on AI developers to move the technology needle, and they feed ideas about where we can apply AI. It becomes a problem, however, when we expect a great experience of AI as a whole without having improved it as a whole. Interactions with AI started when AI started (basic input, learned output), but the study of interactions with computers started long after that. And even then, HCI focused only on basic human-computer use (basic input, basic output).

Another problem I see is the affordances AI has, or at least the ones it implies it has. Take Siri, the virtual assistant on some of Apple's products. Siri is a technological achievement. I appreciate it because I have a morsel of knowledge about technology and can badly approximate the cost in time and money that went into producing it. Being able to understand what someone says, accounting for a user's intonation, inflection, background noise, and so on, is a major step for tech. Incredibly, it can also "converse" with us in a manner that isn't off-putting. All of this and more should make Siri an AI for the ages. But it isn't. Why is that?

Part of it, I think, is the affordance Siri implies. All the great things I've itemized are double-edged for Siri, and maybe for AI in general. Because it can understand me, someone without a native English accent, maybe it can do X. Because it still understood me while I was standing in the shower, maybe it can do Y. Since it called me by my preferred name, maybe it can do Z. It hypes itself up (at least in our minds) and builds expectations in our heads. So when we ask something trivial, something at the level we now expect it to handle, and it merely opens a Google page with our question (which is still useful), the experience deflates. The perception of AI seems to live at the extremes of imagination and experience. We need to ground these expectations so that we can efficiently utilize these technological marvels and appreciate their actual value.

UX can help

It should be expected that the companies and teams working on AI are on top of their game, so to speak: using the most proven methodologies and processes to produce their tech, with, I'm sure, a lot of resources going into usability research. What I think would help more is making smaller, leaner iterations when releasing the tech, proving there is actual value to its intended users. This is where the Lean UX methodology comes in. Going back to my Siri example: what if Apple had released it in a leaner form, with a scoped-down set of features accompanied by a limited press release of what it can do? Say Apple had announced only this: a voice assistant that can query Google through your voice and give you the top three results. That's it. A small scope, a small set of intended user scenarios, and grounded expectations of what it can do. That's small enough to invest in, big enough to show the value and work out the kinks, and useful enough to a set of people that the team ends up with a version it can build on, pointed in the right direction.

(I am just using Siri as an example; I do know how much time and resources Apple puts into the usability and design of its products. It was simply an easy reach.)

More important, in reference to usability, is the value AI actually brings to its users. The following quote from Martin Kohn, CEO and Chief Scientist at MedPredixAI, encompasses this thought:

"Merely proving that you have powerful technology is not sufficient," he says. "Prove to me that it will actually do something useful — that it will make my life better, and patients' lives better."

As with other products, AI-powered or not, if we do not put users at the center of our tech, and validate that tech with them, we will keep overshooting everyone's expectations of what AI can deliver to its end users.
