Doubleshot · 4 min read

☕️ Design Space and the Generative Spectrum

Untangling some of the abstract and ambiguous possibilities of designing with AI.

Welcome to the Doubleshot newsletter! Every two weeks (or so), I send out a bit of practical advice and a bit of theory. This time, we’ve got a somewhat AI-focused issue, and I’m using the newsletter—as ever—to work out some of my thoughts.

Want to get the latest Doubleshot a week before it goes up on the site? Sign up!

If you enjoy Doubleshot, pass it on to a friend.


🧑‍💻 Practice: The AI Spectrum

“Ambiguous” and “abstract” are two of the words I hear most often in day-to-day conversations about how AI is expected to affect the practice of interface design. As with all AI discourse, it’s easy to get caught between what AI is doing now and what AI is predicted (or hoped) to do later. As practitioners in the tech space, we’re of course concerned with both, and I get the sense that we’re collectively eager to get concrete about what’s likely to happen—not just what we hope.

I’ve written about a few possibilities already, but I wanted to sit down and plot out the different roles I think AI is taking, or could take in the future, without respect to any technical details or to how research and development of AI might progress. In my opinion, the possibilities for AI’s involvement in the design process, ordered from least to most involved, look something like this:

Ideation ⇢ Sketching ⇢ Speccing ⇢ Composing ⇢ Creating ⇢ Running

Ideation, the least-involved step on this axis, is what we’d think of as normal interaction with an LLM. It might even just mean “talking” about the design with the model, or generating some simple assets.

Sketching would represent something like wireframing or establishing early structure. Lots of variants might be desirable here. The designer would be testing out ideas, looking for a rough hierarchy.

Speccing is where we start getting into high-fidelity drawing. The model might create individual high-quality components or entire screens that actually cohere with the design system. Here, being able to reliably replicate system features is crucial. The model may deliver final specs or create other assets needed for implementation.

Composing is where we’d move onto the device. At this stage, the model is assembling pre-fabricated elements, components, or layouts on the fly, and may be adjusting or recomposing them as needed.

Creating is, conceptually, where the model would begin to decide not only what to assemble, but what the available pieces even are. It would define those pieces and then assemble them on screen.

Running, finally, is where the model is not just defining or putting together the pieces of the visual interface—it is deciding what the interface should even be. At this end of the axis, the machine has complete control—I won’t say “agency”—over the entire process.
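To make the middle of this axis a little more concrete, here is a minimal TypeScript sketch of the difference between composing and creating, assuming a “composing” model is constrained to a fixed registry of prefabricated components while a “creating” model can also define new ones. Every type and name here is hypothetical, purely for illustration:

```typescript
// Hypothetical registry of prefabricated components a "composing" model may use.
type ComponentName = "Header" | "Card" | "List" | "Button";

interface ComponentInstance {
  component: ComponentName;         // must come from the registry
  props: Record<string, unknown>;   // e.g. { title: "Inbox" }
  children?: ComponentInstance[];
}

// Composing: the model only arranges prefabricated pieces on the fly.
interface ComposedScreen {
  kind: "composed";
  root: ComponentInstance;
}

// Creating: the model may also define new pieces before arranging them.
interface CreatedScreen {
  kind: "created";
  newComponents: { name: string; definition: unknown }[]; // structure, styles, behavior
  root: ComponentInstance;
}

// A composed screen can be checked against the design system mechanically;
// a created one can't, which is part of why it feels further off.
function usesOnlyRegistryPieces(instance: ComponentInstance): boolean {
  const registry: string[] = ["Header", "Card", "List", "Button"];
  return (
    registry.includes(instance.component) &&
    (instance.children ?? []).every(usesOnlyRegistryPieces)
  );
}
```

The point of the sketch is the constraint: composing keeps the model inside a validated vocabulary, which is exactly what makes it more predictable than creating or running.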

So, where do I see the industry landing? My philosophical perspective on generated designs doesn’t need rehashing here (though I’ll expand on it soon), but in general, I don’t see the first three steps on this axis (Ideation, Sketching, and Speccing) as the final—or only—landing place. Ideation and sketching can be useful, especially for small teams or new projects, but relying on machines for speccing, as I’ve written before, can be costly. Creating and Running feel unlikely to me in the near term, because of the need for predictability, insight into implementation, and reliable production of designs that accomplish specific goals. The future may yet bring these capabilities, but as Tom Boellstorff once wrote, “the goal of foreseeing an unknowable future leads to misunderstandings of the present.”

Ultimately, I see “composing” as the most likely landing place once practitioners and researchers have a better handle on the fundamental capabilities and limitations of the technology at hand today.

In that world, like the one I described almost ten years ago with Project Phoebe, the design is a conversation between designer, machine, and end-user, kicked off by finding one most-ideal point in the design space, plotting the relevant axes, and moving from there. The designer actually has an increased scope of work in that reality: instead of plotting out a single set of designs, we’ll need to discover the extremes of the design system and of the specific interface we’re building, then likely generate interpolated or extrapolated variants (that is, points between the variants we’ve defined, or points that extend beyond the ends of the axes) to test how the designs hold up.
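To give a rough sense of what interpolated or extrapolated points could look like in practice, here is a minimal sketch, assuming (purely for illustration) that a design axis can be reduced to two extreme sets of numeric parameters; every name here is hypothetical:

```typescript
// A design axis reduced to two extreme parameter sets (a deliberately crude model).
type DesignParams = Record<string, number>;

// t = 0 yields one extreme, t = 1 the other; 0 < t < 1 interpolates between them,
// while t outside that range extrapolates beyond the ends of the axis.
function pointOnAxis(a: DesignParams, b: DesignParams, t: number): DesignParams {
  const out: DesignParams = {};
  for (const key of Object.keys(a)) {
    out[key] = a[key] + (b[key] - a[key]) * t;
  }
  return out;
}

const compact = { fontSize: 14, spacing: 8 };
const spacious = { fontSize: 22, spacing: 16 };

pointOnAxis(compact, spacious, 0.5);  // interpolated: { fontSize: 18, spacing: 12 }
pointOnAxis(compact, spacious, 1.25); // extrapolated: { fontSize: 24, spacing: 18 }
```

Real design axes are rarely this tidy, but even the crude version shows where the designer’s work shifts: toward defining the extremes and then checking how the generated in-between (and beyond-the-ends) points hold up.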

In the end, I still see the vision of Project Phoebe as a likely direction for our practice, and still believe we should keep seeking out concrete steps in that direction. Maybe this framework will help us organize our thoughts toward that vision.


📚 Theory: Design Space

Speaking of Design Space, in this Doubleshot I want to direct you to my post from last weekend unpacking the concept of design space and why I believe it’s crucial to the future of our practice. Check it out here.

That’s it for now—see you in the next one!
