AI · 26 min read

Beyond the Text Field: Louise Macfadyen, Author of Designing AI Interfaces

Managing the conceptual distance between what AI actually does and what people imagine it can do, with Designing AI Interfaces author Louise Macfadyen

This episode, I got to sit down with my friend, podcast co-host on Away from Keyboard, and former coworker Louise Macfadyen as she prepares to launch her new book Designing AI Interfaces with O'Reilly this spring.

We had a lot to talk about, from managing the conceptual distance between what AI actually does and what people imagine it can do, to the new skills designers, engineers, and PMs need to work with models, the potential harms of new technology, and some of the patterns designers can use to keep ethics and safety in the loop.

Liam Spradlin: Hi Louise, welcome to Design Notes.

Louise Macfadyen: Hi Liam, thanks for having me.

Liam: It is about time that you came on this show. For any of the listeners who somehow don’t know, Louise and I recorded an award-winning, best-selling podcast called Away from Keyboard. The advertisers were banging the door down to get to us.

Louise: Oh man. Overrun.

Liam: And on that show, we covered mostly kind of internet archaeology, like cultural studies of the historic internet. And today, Louise is joining me because she is tackling a much more future-facing topic, I think. So, Louise, why don’t you introduce yourself and tell us a little bit about the journey that has brought you to this project.

Louise: Yeah, hi. I am so happy to be here. It is so nice that our background comes from not just working together, but like going over our shared love of the history of the internet. I think our founding episode was about Neopets? Or maybe the little dolls that used to show up in people’s email signatures...

Liam: I think it was dolls, yeah.

Louise: ...might have been dolls. And behind all of that sort of fun narrative, we used to work together at Google. And now I work at Microsoft on AI products, and I’m bringing out a little old book with O’Reilly called Designing AI Interfaces.

But it feels super appropriate to talk to you because so much of the book is rooted in internet and computing history. One of the first chapters I wrote was talking about like medical systems from the 70s where they were trying to do like diagnostic technique through Boolean systems. And I just wrote the first chapter—doing them out of order—where I was trying to find a consumer analogy for Large Language Models. And I’m so certain you’re gonna know what I’m talking about. Do you remember T9? The texting interface?

Liam: Of course.

Louise: So it was a really good analogy—it almost is itself a language model because it was predictive. And so I open talking about how you would have to use like so many keystrokes to ask someone if you wanted to like download directions off MapQuest, you know.

Liam: (Laughs)

Louise: So we’re bringing it full circle, embracing, you know, the uniqueness of the early internet and that early phase of computing. But I find it really useful when I am talking about AI and designing for it because there’s, you know, so much change occurring in the industry, yet I find it’s very rooted in some of the early questions that were being posed and some of those early solutions. And so I try to use that as a basis to give people a bit of stable ground to work from that will be consistent in this changing world.

Liam: Yeah, I think first of all, I’m excited to read it because of our shared love of the early internet. There are, I think, not many people who could like successfully make that synthesis and who I would trust to handle that. But yeah, I think also something that I’ve been noticing recently from the design side is like this thread that has gone through like pretty much the whole history of computing of like, you know, in the future, we will be able to make an interface that is like so smart and so ultimately predictive. I’m wondering like what other kind of through-lines you’ve identified and like how much more developed you think we are in that direction compared to before?

Louise: I mean, you’re calling out the big one. Because to set up discussions about AI, you have to acknowledge that you’re not building a product in a neutral space. And going back to even like 1950s conceptions of what an intelligent technology would be, like it’s really obvious when you start reading some of the old documentation, like people are bringing the same behavioral assumptions then that they’re bringing now. And so there’s a period of time called the AI Winter, which is, I think it’s sort of like through the mid-80s until, you know, kind of somewhat recently, like just a little bit before transformer architecture came out. And it occurred because so many people were obsessed in like the 60s and 70s with the idea of a smart technology being built, but then it just wasn’t actually meeting any consumer expectations.

And so I try to point out in the book pretty regularly that this is not like an intelligent technology. We are not actually dealing with some profound knowledge; we’re dealing with a really sophisticated language model. And the thing that we as designers have to be contending with continually is just two elements. One is what the user is bringing—what they’re assuming about this technology, because we love as humans to project and anthropomorphize onto anything. We just want a friend. We want like a smart thing to be talking to and to get to book our flights for us. And then the second part is educating consumers on what the product actually can do, you know, and allowing them to interact with it in a meaningful way. Because we’re dealing with the full extent of user expression, where within the same period of time someone might be using an AI product to research what they’re gonna eat for dinner that evening, you know, and then also for a very complicated work task where they need to interact with like a database and a different model agent, you know.

Liam: Yeah.

Louise: And the same interface needs to cope with both of those. And so, you know, fencing in the expectations is challenging to balance with like surfacing the capabilities. So I feel like that ends up being some of the cognitive distortion you’re dealing with across the board.

Liam: Yeah. It’s... I think that’s so important, that, like, mental models are not based on reality. And I think that the kind of centering of the text box as the interface for, like, interacting with language models has really set up a mental model that must be grappled with. And I’m curious about going into a little more depth on how you grapple with that as a designer?

Louise: I mean, it’s interesting. I think earlier on in our mutual LLM journey, people were a little skeptical, you know, of the text box. And I had a great conversation I think a few months ago now in SF about this with another Googler who was sort of pointing out, you know, shells—the CLI—was the origin of a lot of computing, you know. And so I make that comparison in the book—with credit, I gave this person credit. But it’s important to, I think, distinguish when we are using purely a chat interface and the output stays in chat—like you’re communicating with a model extensively—versus using a model to create something that you then go on to use, right?

And I also think that... there’s a lot to cover in terms of like grasping intent across the board there and what can actually be done. Right now, my work at Microsoft is on agents—I lead a little team designing agent patterns at Fluent. One of the opportunity spaces I’m starting to see is: we like chat. Chat’s a good interface across the board; we understand it very clearly. But sometimes we want live text in chat, and sometimes we want it to be elevated to UI in chat as well. So think about like deep reasoning patterns and chain of thought and things like that. These are all patterns we work on. And I’m very curious in the next few years for us to start to see established patterns for when something does elevate into being UI, you know. When do we want to package away information in a dropdown so someone can go and interrogate it further if they want to, versus all of it being shown to the user all the time? So there’s a few different threads I think we’re still gonna see unravel there, you know?

Liam: Yeah, that also leads me to a question about like interaction patterns with AI, both as they exist now and in the immediate future. What are some of the patterns or considerations that you’re seeing emerging?

Louise: A lot of affordances to educate users—those are the real challenge. A text box: very intuitive, right? You just type into it, you see what you get. So you’re kind of just able to play with the interface pretty freely. Metering is a huge challenge though. And I think we see this internally at Microsoft, but I think everyone who’s working on AI has the same problem where they want users to get an extremely good experience up front, but they also don’t want them to be sifting through loads and loads of very expensive, time-consuming queries that incur lots of latency. They want them to use the tool well from the first time. And so there’s patterns like default prompts and prompt input examples that are, you know, becoming very popular, but I think they’re a little unexplored in terms of opportunity.

I did some feedback recently for an AI group that’s not at Microsoft that just came to me for some feedback and, you know, their little prompt suggestions were so ambiguous that I don’t think they gave the product a chance to differentiate itself. They had such a cool feature that wasn’t actually being showcased in any of their example prompts. And I think with that, there’s just the classic problem that we have as designers, which is like: here’s what we think would optimize this user’s journey. You know? Like: here’s how we know a task is being accomplished and here’s where we think AI could slot in to help. And that’s quite discrete, you know, and requires a lot of knowledge of your user.
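To make the point about prompt suggestions concrete, here is a minimal sketch of the contrast between generic starters and ones that showcase a differentiating capability. The suggestion text and the idea of a "differentiator" flag are invented for illustration, not taken from the book.

```python
# Illustrative only: invented suggestion text contrasting vague default
# prompts with ones that showcase what a product is actually good at.
VAGUE_SUGGESTIONS = [
    "Ask me anything",
    "Help me write something",
]

SHOWCASE_SUGGESTIONS = [
    "Summarize this 40-page contract and flag any unusual clauses",
    "Turn yesterday's meeting transcript into a task list with owners",
]

def starter_prompts(showcase_differentiator: bool = True) -> list[str]:
    """Pick default prompts that demonstrate the product's differentiating feature."""
    return SHOWCASE_SUGGESTIONS if showcase_differentiator else VAGUE_SUGGESTIONS
```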

Then there’s the business incentive, which is, you know, to put the capabilities front and center. And we see a lot of resistance from users. This is again to go back to: you don’t publish an AI product into a neutral space. People feel very strongly about having the products they know and love just getting AI crowbarred in on top. And yet you see the vast investment organizations are making in AI; they want to see that, you know, come full circle. And so I think there’s an additional burden on us as interface and interaction designers to balance that, right? To make sure that the product is surfaced—you know, yet another little sparkle emoji—but, you know, balance that with it actually feeling useful and consequential to the user and helping them like answer their question, solve their problem.

Liam: Yeah. And with that, I want to move into kind of the considerations that, yeah again, we as designers have to make when we are tasked with either implementing these things into a product or, you know, working on features for the product itself. I think it’s become clear that, despite the intuitiveness of the text box, working with this technology is a lot more nuanced than just using normal language, in terms of understanding what its capabilities actually are and how you interact with those. And I want to get your take on that from the perspective of a maker who’s like creating a product.

Louise: The distance between the design discipline and web technologies or like mobile technologies, mobile platforms was so much smaller than the distance between designers and Large Language Model technologies. I think a lot of designers just through their day-to-day work picked up a pretty good working knowledge of CSS or, you know, some animation libraries like, you know... what am I thinking of... GreenSock. You know, that’s different now. It is very challenging to go and work out how a Large Language Model is actually working. People who have PhDs do not know how every part of the animal works, right?

So, honestly part of the reason I arrived at writing this book was to... I had a discussion with my partner who is an AI engineer—former VP of Data—and he... I was like: "What is the right amount that I should go and learn about this technology to make me, you know, slightly smarter than your average bear in this?" And he was like: "You’re talking about going getting like a PhD-level, like, you know, understanding. You don’t need that." And so it’s been important for me to pick and choose, and in the book to share, like what is actually useful to understand. And it’s some of the technology—like it’s a little bit... like clarifying that this is a text-predictive model. Like it wants to give you something that sounds right, but it is not objectively right. And we’re probably never gonna solve for hallucinations, right? And it explains why hallucinations happen, things like that.

But then there’s also like how Large Language Models are understood and put into products and how some of the harms are prevented. So one of the things that I loved learning about in this sort of pre-book era when I was first coming back to AI was how evals had become more sophisticated. And evals are a method for taking a model and testing it to see what it’s capable of and where it’s failing. We’ve heard of this before in a really informal way, probably with "how many Rs are in raspberry" or something like that. Like, that’s a kind of gestural example of an eval. But these are tools that basically allow PMs and engineers to understand how a model is working and thinking. And designers can benefit from this. And designers are capable of forming their own evals to be able to poke and prod to see what a query will give them. And so just giving people even an understanding of what that is. Or red teaming, which is a way that, you know, models are tested for problematic or, you know, harmful outputs. These are all things that are really useful for a designer to know about, and it’s just so distant from even the more technologically advanced stuff of the web era, where we might learn about an API, we might have to deal with some latency, but it’s just nowhere near this level of complexity.
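As a rough illustration of the kind of eval Louise is describing, here is a minimal sketch: a handful of prompts paired with expected answers, run against a model and scored. The `ask_model` function is a hypothetical stand-in for whatever client a team actually uses.

```python
# A minimal eval sketch: prompts with expected answers, scored in bulk.
# `ask_model` is a hypothetical placeholder, not a real library call.
EVAL_CASES = [
    {"prompt": "How many times does the letter r appear in 'strawberry'?", "expect": "3"},
    {"prompt": "What is 17 * 24? Answer with just the number.", "expect": "408"},
]

def ask_model(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your model of choice")

def run_eval(cases: list[dict]) -> float:
    """Return the fraction of cases whose expected answer appears in the output."""
    passed = sum(1 for case in cases if case["expect"] in ask_model(case["prompt"]))
    return passed / len(cases)
```

Real eval suites are far more elaborate (rubrics, model graders, safety categories), but the shape is the same: known inputs, expected behavior, a score.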

I just wrote out like the three things that I feel like designers most need to learn to adapt in this era. And one of them is like communication. You know, like how much we need to be able to distill information back and forth. But I also think that if you are someone who’s pursuing AI as part of your design practice, continuous relearning is going to end up being part of it. You know, unfortunately... or fortunately maybe... like this is something I personally really enjoy about our work is that we get to stay so close to this cutting edge stuff and not be in the camp of being very scared of it. Like I think there’s a lot of fear in the design community—and honestly like lots of people who are just workers or they have kids who are in college—about what this technology is capable of. And working close to it is nice and being able to understand it. My gastroenterologist was giving me an exam a couple days ago and asked me if his kid should still go and study computer science because of AI. And I was like, "This is an inappropriate time..." Honestly actually, I thought it was really sweet that he was asking me this. But it just highlights how much fear is out there in the world about it. And working closely with it and being able to demystify it even for myself and then share that is so valuable. And I think that yeah, it’s probably going to be part of the practice for a while to come.

Liam: Yeah. I want to talk about that more because I know in my own work with AI, the ground is shifting constantly. And so I wonder... and you know, not only that, but it feels like the stakes are particularly high for the industry and also kind of for society in some ways. Or they feel so. So I wonder what it’s like to write a book about this right now, in this kind of landscape?

Louise: Yeah. I mean it’s interesting. We have benefited I think from being in an industry that doesn’t have high risk. You know, like how badly can a tool go wrong? Unless you’re working in like fintech or something where someone can lose all their money. But generally speaking like it’s pretty few and far between that you’re working on a product where there is an emergency attached to it. And that’s changed with AI. You know, we have a little bit of a different world now where there’s actual harms that we need to be really cautious of. And so there’s like a big ethical component to the work that we do. And I think that that also provokes a little bit of fear for people who are working in this field or associated with it.

I’ve kind of felt that my obligation in this changing world of design is just to provide some structure and some knowledge. And more than anything, that doesn’t derive from me being curious about being like a guy in this world—like a figure. It is much more from the fact that I see a big absence of something that I think we really like to do as designers, which is to meet and to discuss and to find ways to build with a complicated technology. You know, it’s definitely no secret that design has contracted and tech in general is going through a bit of a rightsizing. And so... rightsizing might be the wrong word there... but I am sort of curious about the obligation we have as designers to meet and discuss and form cohesion around this subject because it can be a really hairy one. And you know, we’ve somewhat lost, I think with this technology change, our clear mandate to be the voice of the user. Because we can’t see into and change the behavior of models. You know, hallucinations are terrible; AI psychosis was really, really horrible. Designers don’t actually have many tools to be able to solve those things, you know? We can give affordances and we can give content warnings, but the actual meat of the experience is owned by engineering. And so we have to change the way that we work in some ways to be able to keep being that consistent voice of the user.

Liam: How do you recommend we do that?

Louise: Well, like I was saying earlier, the third leg of being in the AI design world is communication. And so fostering shared language—like I was saying earlier with evals and red teaming—but there’s a lot more that goes into the lexicon of AI. I think indicating to PMs and engineers that you are someone who wants to be influential in this discussion is a good place to start. And just making it really clear that you, you know, understand the lexicon and the rubric and you are participating in sophisticated AI discussion, as opposed to being kind of a neutral party who just wants to influence and inform. Because it’s such a technical field, I think that you want to go one degree further. It’s like if you and I were speaking German, I would need to have a few more words to be able to actually meaningfully express myself.

And then I think this is a little bit of like organizational structure as well that needs to change in the AI world where the triad I think needs a little bit of an update where just expectations are really clearly set across Eng, PM, and Design. And I see PM changing a little bit as someone who translates capabilities—like someone who continually owns model capabilities and transfers them back and forth—so that your designer, maybe working with a researcher conducting research themselves, is able to pick apart and prove or disprove, you know, theoretical user experience elements that are coming out of model capabilities. And then push them back to the engineer and say, "You know, this is or isn't meeting the user expectation or this isn't fulfilling what we think we want to provide." And so it just makes for a little bit more of a challenging environment, but I think that there are definitely ways to update the structure of like your team to support it.

Liam: Yeah. So going from kind of model capabilities back to the user side: given that, as you said, the experience that the user has can be so open-ended... I’m wondering if you talk in the book about kind of the conversational nature of working with LLMs as a user, in terms of what might be happening there socially, to the extent that we could call that interaction a social one? It certainly feels like one for the user. So I wonder what you think is going on there.

Louise: It’s so interesting. Are you asking like if the user’s brain is perceiving the interaction differently and if that affects the UI? Or like how they interpret the UI?

Liam: Yeah, basically.

Louise: Yeah. I think that... I mean, this is a little bit of a challenging area for me to be totally honest, because I am honestly really astonished by a lot of the stories I’ve just been seeing recently about like AI psychosis. And you know, I don’t mean to make this super negative, but I do think that there are big ethical implications to letting a UI make a model appear intelligent. Something that I think is really powerful in... let’s take the example of Claude Code... is that when you’re communicating with it—and these are probably users who know what they’re doing a little more in general, because you’re usually communicating with Claude Code in the terminal and you’re writing code with it, so these users might just be more sophisticated in their understanding of LLMs—but something that’s happening throughout your communication with it is it’s showing you the context window. And it’s telling you how full that context window is, as a percentage. It has a great little UI within the terminal where, after a period of time when your context window is full, it’ll just compact it down, and that’s great.

But when you’re showing the user the context window, you’re reminding them continually that they are speaking with a Large Language Model. That language model has a limited amount that it can take in. You know, you’re showing them the constraints of the technology. There are other consumer-available models that don’t do this, and I think they try to make an experience feel magical, you know, and make it feel like the technology is actually intelligent. And I think that is where we get into a really difficult place. It’s deep water there, because if users think they’re having a magical experience and they think that they are building a relationship and that there’s maybe a sentient being on the other side there, you know, we are doing harm as designers to not remind them continually that that’s not the case. Yeah.
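For a sense of what that pattern might look like in practice, here is a minimal sketch of a context-window meter. It assumes the tiktoken tokenizer and a made-up 200,000-token limit; a real product would use its own model's tokenizer and actual limit.

```python
# A sketch of the "show the context window" pattern: estimate how full the
# conversation is and surface that to the user. The limit is an assumption
# for illustration; use your model's real limit and tokenizer.
import tiktoken

CONTEXT_LIMIT_TOKENS = 200_000  # hypothetical context size
enc = tiktoken.get_encoding("cl100k_base")

def context_fill(messages: list[str]) -> float:
    """Fraction of the context window used by the conversation so far."""
    used = sum(len(enc.encode(m)) for m in messages)
    return min(used / CONTEXT_LIMIT_TOKENS, 1.0)

def meter_line(messages: list[str]) -> str:
    """A small status line, similar in spirit to Claude Code's percentage meter."""
    pct = context_fill(messages) * 100
    suffix = "; consider compacting the conversation" if pct > 80 else ""
    return f"Context window: {pct:.0f}% full{suffix}"
```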

Liam: Right. It reminds me of my conversation with Judith Donath, who’s done so much work on sociable computing and the social aspects of communicating online. One comparison she made is that in real life—in physical, tangible life—there’s a whole constellation of subtle cues that you get from your communication partner about their intent, maybe the veracity of what they’re saying, whether they mean it or not, what is said and what is unsaid. And that in online spaces, there are various other ways to try to approximate that, or try to build a richness into the conversation, that end up developing their own sort of social meaning. And I wonder, like, the Claude Code example is great, but I wonder if there are other kinds of touches or patterns that you’ve noticed that help add some of that contextual information for the user, to get a sense of what they’re really interacting with?

Louise: I mean, it’s interesting. There’s new little patterns all the time that I’m seeing. I get to go and do some very dumb research, as I bet you can imagine, for this book. I was playing with an app yesterday that allows users—we should specify these are like Gen Z users—to make videos of themselves and like the guy, like a hamster, and it’s like a fake voice. And at first I was quite alarmed by this. And I was like, "Oh, this is a really odd thing." I do begin to realize though that as AI visuals become more sophisticated, users also bring greater scrutiny. Like there is certainly the risk of... I don’t know if you ever go on the Scams subreddit, but I do keep an eye on that subreddit because I notice that AI crops up pretty frequently in there, and I try and see whether there are trends about certain products being used. But like, of course there are harms and risks there.

But I don't think we're actually at tremendous risk of like a speaking hamster video. And it was actually one of the first times that I felt really charmed by like a community of... these are like, you know, younger kids... not like super young, but they’re sending charming little videos back and forth to each other about like going to the store. And I was like, "This is actually just another way of packaging the stuff we were doing when we were teenagers." And you know, I think that that’s just a novel form of communication that a younger generation is going to inevitably discover that isn’t dreadfully problematic. We’ll certainly see harms associated with it that are difficult to perceive now, but yeah. I mean at the end of the day, with any really sophisticated technology, we’re going to be unable to anticipate... what is it... the shipwreck? You know, the invention of the ship comes with the invention of the shipwreck. But I also do see with this technology, because it’s such a novel one, a little bit of sort of shooting ourselves down before we’ve really achieved anything. And I do sometimes try and caution people that there’s always going to be ethical implications, but don’t let that prevent you from exploring and playing with this if you feel that you want to, you know?

Liam: I am going to go back and add a little music stinger when you say Gen Z users.

Louise: Yeah! We used to have a noise we did! I forget what it was on the other podcast.

Liam: But I also want to talk about like the more purely textual interface that people have with this, and what your thoughts are on how to better communicate the... I guess I want to say the intentionality, but here I am anthropomorphizing the machine as well... the kind of subtle indicators of how the machine is interacting with you.

Louise: Yeah. Something that’s super interesting—I try and mark down a lot of the hallucinations I get. Because writing a book is great to talk to a model about. And I’ll go and chat to it about the book, and it’s a great way for me to test out how big a context window is ‘cause sometimes I just chuck the whole book in there and see what it does. And I’ll be like, "Okay, let’s rewrite this chapter, but give me three really good facts that I’ve never heard before." Which is a really good way to test out if it’s going to be hallucinating. I got a great... like I’ve had some really funny ones, but this one was so mild but very amusing. You know the famous paper "Attention Is All You Need"? Like the foundational transformer paper? The fact that it gave me recently was that, when they were first writing that paper, they wanted to have an exclamation point at the end. So it would have been "Attention Is All You Need!" but then they weren’t allowed to at publication. And I was like, "What?" And I really went hard on researching that, ‘cause it would be so funny if it was true. But, no. Totally imagined. Very sweet, somewhat benign hallucination there.

But yeah, sorry. So you were asking about what text interfaces can do to prevent and to inform about hallucinations and things like that in general. I do come back again and again to saying designers are not obliged to solve hallucinations. And I do think it’s important for us as a community to instill the understanding in our users that AI is not the place to go for facts. We probably need to find ways to communicate that more clearly, but in general, human-in-the-loop is the best practice for nearly any type of AI interaction. Even just using it as a personal diary or form of therapy can yield some pretty problematic results, and you can find that it can become distorted quite quickly.

And so what is actually happening in the body of the text itself, I think, is sometimes very beyond our control. There’s smaller things. I think that there’s opportunities to leverage the way a model works. I’m doing a vibe coding event next week... this week... next week... with Rhizome to help artists do some creative coding. And just in the time I’ve spent exploring that side of the tool, it’s interesting to examine how much, if you ask a model to work in a certain way, you’ll get a better output. And so there’s user education as part of UI that can go in there. For example, if you ask a model that’s going to do some coding to explain what it will do to you before it does it, you actually get much better performance. We also know that one-shot, much longer initial inputs tend to yield better outcomes. We know that if you have a very long conversation, you tend to get worse outputs. You know, there’s certain things we can do to educate the user about the type of experience they’re having.
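A minimal sketch of that "explain before you act" idea, assuming a hypothetical `call_model` function in place of a real client:

```python
# Sketch of the pattern Louise describes: ask the model to lay out its plan
# before it writes any code. `call_model` is a hypothetical placeholder.
def call_model(prompt: str) -> str:
    raise NotImplementedError("replace with your model client of choice")

def plan_then_code(task: str) -> tuple[str, str]:
    """Request a step-by-step plan first, then code that follows the plan."""
    plan = call_model(
        "Before writing any code, explain step by step how you will approach "
        f"this task:\n\n{task}"
    )
    code = call_model(
        f"Task:\n{task}\n\nFollow this plan exactly and write the code:\n\n{plan}"
    )
    return plan, code
```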

And then there’s like protective UI or preventative UI, which is: if you have been using this product for 10 queries in a row, 100 queries in a row, six hours, we know that we probably want to be showing you something that indicates you’ve maybe been using it for a long time. That said, gaming has tried these interfaces; they haven’t been extraordinarily successful. Netflix has done a similar thing. And so, yeah. I just come back to the fact that if this is something that you really care about and you’re seeing harms come out of your product, that is something that needs to get run up the chain, through the communication structure that maybe your organization should be working to establish.
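As a rough sketch of the protective UI idea, something like the following could track a session and surface a gentle reminder; the thresholds are invented for illustration, not recommendations from the book.

```python
# Sketch of protective / preventative UI: count queries and session time,
# and return a reminder once a threshold is crossed. Thresholds are
# illustrative assumptions only.
import time
from dataclasses import dataclass, field

MAX_QUERIES = 100               # hypothetical per-session query threshold
MAX_SESSION_SECONDS = 6 * 3600  # hypothetical six-hour threshold

@dataclass
class Session:
    started_at: float = field(default_factory=time.time)
    query_count: int = 0

    def record_query(self) -> str | None:
        """Count a query; return a reminder message if a threshold is reached."""
        self.query_count += 1
        elapsed = time.time() - self.started_at
        if self.query_count >= MAX_QUERIES or elapsed >= MAX_SESSION_SECONDS:
            return "You've been at this for a while. Maybe take a break?"
        return None
```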

Liam: Yeah. I do definitely remember at some point in my youth my Animal Crossing: New Leaf villager telling me that I looked tired and should take a break.

Louise: Is this is this... how far back is that though? Is that the Switch game or is that all the way back on the GameCube?

Liam: It’s we’re talking Nintendo 3DS days.

Louise: That’s nice to hear actually because that’s a while back.

Liam: Yeah. Yeah. One big thing that I still want to talk about, that I think underpins all of this, is the tension in this space. Like given the way that models talk to us, there is a constant tension between the capabilities of the models and what we imagine they are capable of. Going back to the thread from the beginning of our conversation. I’m curious, both in an organizational sense and also within yourself, how you manage that.

Louise: Mmm. Working out a model’s capabilities.

Liam: I particularly see that there’s like a strong pull towards what we imagine the model can do, in thinking about building these products, versus what they are capable of now. And it’s hard to manage.

Louise: I keep a little list for myself of prompts to try out with different tools ‘cause it’s useful to have some consistency to see how they show up. I was recently playing around with GPT Image 1, which is OpenAI’s new image model that’s an update to DALL-E. And so running similar queries between those two showed me, you know, it’s much better at doing X, Y, and Z. Like it follows instructions much better for instance.
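Louise's habit of keeping a consistent prompt list is easy to sketch: the same probes run against each model, so differences stand out. The prompts are examples, and `generate` is a hypothetical stand-in for whatever client each model actually uses.

```python
# Sketch of a personal probe set run across models for comparison.
PROBE_PROMPTS = [
    "Summarize the plot of Hamlet in exactly three sentences.",
    "Write a regex that matches ISO 8601 dates and explain each part.",
    "List three verifiable facts about the transformer architecture.",
]

def generate(model: str, prompt: str) -> str:
    raise NotImplementedError("swap in the real client for each model")

def compare_models(models: list[str]) -> dict[str, list[str]]:
    """Run the same probes against every model so differences are easy to spot."""
    return {model: [generate(model, p) for p in PROBE_PROMPTS] for model in models}
```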

So in terms of like just really testing out the model and understanding it, I think I’m in a little bit of an unusual position because this is my day-to-day work and it’s the book. So I am way more under the hood than I think the average person should be, to be honest. But you know, working out a model’s capabilities is to me exactly the same thing as arriving at a website where I’m going to check the weather and seeing how much information about the weather it’s going to give me. Can I get it on an hourly basis? Can I get it for a week? Can I get it in Sardinia? But with AI, I think there’s the additional challenge of: if you are enabling it to do other tasks, there could be implications. So that looks a little bit like: I’ve given access to my email, or I’ve given access to my documents in my drive, or something like that. If I’m gonna go ham and just try out like a random prompt, there’s like a risk in my mind as a user—real or not—that something that matters to me could get damaged.

Um, there’s a funny little interface query that pops into my mind all the time when I’m using Gmail. When I’ll respond to an email—I love this affordance on mobile, don’t get me wrong—where I’ll go to reply to something and Gmail will pop up with three suggested responses and I can pick one of them. But what’s not clear in the boxes that show the suggested responses is whether the email will get sent without me reviewing it. And so I guess I’m talking about like how to make experimentation feel good for users because that is a big part of working with AI that we haven’t had to deal with as much in other products. Because I’m taking a bit of a leap of faith if I am speaking to someone that I’m working with and I’m like, "Okay, I’m gonna press the button that says 'write the email' and I’m gonna really hope that you don’t send them some AI slop." You know?

Liam: Yeah.

Louise: And I think that that’s probably something that we’ll come to have more language about. And honestly, I think I talk about this a bit in the book... I’ve gotten so deep into the book, by the way, that I have forgotten what I’ve written and I have to go back and remind myself sometimes. And sometimes I’m really impressed and I’m like, "Oh my god, I was spitting fire today." And sometimes I’m like, "What was I on about?" This is a really good example of what it’s like to design in an emerging field. You know, where we may well see in a short period of time that users develop an expectation that nothing will be done without their approval. That’s just gonna become a table-stakes mental model. Or we might find that there are patterns where users are just like, "Yep, oops, now I’ve emailed the entire company." And so we can do education within our individual products, but trends will also emerge that we have to all be observing and be, you know, extremely mindful of as well. And we can’t anticipate those things, unfortunately; we have to just be attentive to them.

Liam: Yeah. So part of the vagueness of the mental model is just built into the fact that it’s new.

Louise: I mean, if you are making AI products right now, you are influencing the field. And that’s very important to consider. If you make a really good solution... this is kind of like Jakob’s Law, you know: your users spend most of their time on other products, so your product should work like other products. This breaks that a little bit, because we’re in a frontier experience where we have to develop new solutions that we think are better for our individual users.

It’s also important to remember like this is the hardest kind of design because it’s for everyone. Like we are used to designing for personas and designing for user journeys and being really like... you know, the opposite of the fear of the blank page. We want to know what people are trying to accomplish and help them solve their problem. Opening that up to literally any kind of problem, anywhere, any time... like that is so difficult to design for. And thankfully we’re already seeing like more differentiated AI products develop and like even between the main AI models, I’m seeing differentiation in who they know that they’re working for. And that is going to probably continue to be a trend. So the products themselves will also begin to improve. But yeah, getting to know the users and how they’re thinking about the product is just gonna be the continued challenge of the next 10 years.

Liam: Louise, thank you again for coming on Design Notes. Can you tell us when the book is coming out?

Louise: I believe the book is coming out in March. I don’t actually know. Maybe I can dial you up, Liam, and you can put in like an edit... she was wrong and this is when the book is coming out.

Liam: I’m going to... I’ll drop in the official date here.

Liam (Voiceover): Editing Liam here. The book is coming out April 21st, 2026.

Louise: I am super excited for people to read it and to really hear about whether it’s useful in gaining an understanding of the field. And hopefully it’ll help solve a couple of these problems. Even if it helps a little bit, it will be a win for me.

Liam: Nice. Well, thanks again.

Louise: Thank you, Liam. Lovely to see you.
