☕️ Doubleshot • The Sparkle Emojis are Here

Welcome to the 5th Doubleshot newsletter! Thanks for subscribing, and hello to everyone who's joined since last time :)

🧑‍💻 Consider your tools

Soon, I’ll be publishing an episode of Design Notes in which I talk to Meta Design Director Bethany Fong about design leadership and her role in the early days of Material Design.

One thing we touch on in the conversation is how, in the early days of Google’s design system, designers carried around notebooks with them everywhere. The tactile, quick, and messy sketching process was deeply embedded in the origin of the system. You’ll also hear us cover whether and how this kind of tactile exploration is still important in a world of pre-baked design systems available immediately within high-fidelity design tools.

I’ve been thinking a lot about this lately: how the tools we use in design impact the things we make. In my conversation with Matías Duarte, we talked about how digital tools bake in certain assumptions while simultaneously letting us dig deep into the problems specific to our design projects.

There’s an interesting history of research into tool use demonstrating how the human brain relates tools to the body: when we’re grasping a tool, we perceive our arms as longer than they are, and the distance to a target as shorter than it is. And I wonder, particularly at a time when many folks are reconsidering the tools they use, what beliefs or assumptions the tools I’m used to have baked into my conception of the design process.

My tip this time: consider this idea, and maybe reintroduce an older tool or try a new one. I’d like to see whether imposing new constraints (or removing old ones) can spur new ideas.


📚 The Sparkle Emojis are Here

Last week, Figma added an important button to its interface, labeled “Make designs.” The company announced other AI features, too—like automatically naming layers—but those features were largely overshadowed in community discussions by the idea of generating screens based on a text prompt, and what it might mean for our work.

The response I’ve seen from the design community has been varied, but a few broad categories of reaction stand out.

One thing I’ve noticed about all these responses is an underlying agreement that this feature represents a fundamental change coming for our discipline. I think the real meaning of this development is a synthesis of all of them, one that points back to foundational questions we need to tackle in the discipline rather than to a new seismic change we must prepare for.

One such issue represented in these responses is the idea that design has now (just now) been commodified, and that as soon as business leadership realizes it, it’s all over. So it makes sense that the discussion largely turned toward what differentiates human designers in a “make designs” world. One answer I noticed coming up a lot was that designers (and “real” design) are defined by “taste”: a special factor, the secret sauce of being a designer. People posting about AI in design over the past week have suggested that human designers have “taste,” that they are “tastemakers,” that they are defined by “craft,” and so on.

But we will need a stronger explanation than this if the AI generation of screens takes hold (more on that later). After all, taste, says sociologist Steven Shapin, is “among the most private, arbitrary, and least-discussable of all subjective modes.” There’s no accounting for it, as the saying goes. And if there’s no accounting for it, we probably shouldn’t be resting our profession on it.

A more robust argument must rest on what design is and does.

The Material Design guidelines say that “label text is the most important element of a button. It describes the action that will occur if a user taps the button.” The phrase “make designs,” I would argue, does not describe what occurs when the button is pressed because design is not merely the production of an image based on patterns. Design, as an object, also does more than just present an image.

As I’ve written before, design is neither an object nor a process. Design is a system: one that collects, synthesizes, and represents inputs from the other systems around it, capturing and transferring meaning. Back then, I also wrote:

...Tools, for example, that place new screens into Figma files based on a statement of intent (a “prompt”), like all generative products, create ... a mass-object: a series of representations of an idea that lack exact origin or subjective meaning. We cannot know the subjective decisions that brought [a generated screen] to us, so we cannot know if it actually “works.” Ironically, at the exact point in which we’ve removed humanity from the production of an object, it loses all meaningful function.

We need to zoom in on the lack of subjective origin: design being a system means that it’s inflected by all the social, economic, and political systems that shape the lives of those who create it. The object itself is just one outcome of that process. The other outcomes belong to users, culture, and the world.

In that essay, I wrote that we would need to essentially inject subjective meaning back into any generated design, making the distance from starting point to usable artifact pretty wide.

As I wrote last year (and have written for the past eight years), generative AI will likely have a place in the interface soon. However, placing it at the beginning of the process could be costly: for designers, for execs who believe in certain definitions of efficiency, and for users.

That’s precisely why I don’t think the wholesale generation of screens is as portentous as it feels right now. I believe it is the visible, somewhat distracting face of the idea that language models (let’s be more specific than “AI”) can integrate into our workflows meaningfully.

Renaming layers, adding descriptions to things, finding and addressing a11y bugs—things that have a tangible, immediate impact on the quality of the final product—will take hold long before a language model can successfully describe an interface that connects with humans, if it ever can.

In the meantime, we have time to step back, breathe, and think about the framing and situation of our practice as the people creating the interface.

If we can recognize, acknowledge, deal with, and build intention around all the subjective and systemic influences that shape how we decide the interface works, and if we adopt our tools in full recognition of their limitations, then we’ll be irreplaceable.


That’s it for now—see you in the next one! If you enjoyed this issue, reply with some feedback and forward it to a friend :)
