In 1979, Steve Jobs was pissed. The Apple II had sold over 100K units, but he was hungry for something that would make a larger dent in the universe. Desperate for new technology, he roamed around Silicon Valley, trying to find something that got him excited. Walter Isaacson described in his biography of Jobs the scene where he attended an internal demonstration of a touch screen:
“At first he flirted with the idea of touchscreens, but he found himself frustrated. At one demonstration of the technology, he arrived late, fidgeted awhile, then abruptly cut off the engineers in the middle of their presentation with a brusque 'Thank you.' They were confused. 'Would you like us to leave?' one asked. Jobs said yes, then berated his colleagues for wasting his time.”
Instead, he found salvation at Xerox PARC in December of 1979. The researchers at Xerox had simultaneously invented three hugely important pieces of technology: networked computers, object-oriented programming, and (importantly for today) a graphical user interface navigated with a mouse to point and click.¹ Jobs was flabbergasted. “You’re sitting on a gold mine,” he shouted at the hapless scientists, “I can’t believe Xerox is not taking advantage of this.” Apple happily stole everything they saw and released the Lisa personal computer in 1983. The Lisa itself flopped commercially, but its point-and-click interface paved the way for the hugely successful Macintosh a year later.
Up until the Lisa, most computers relied on command-line interfaces. Users had to type an exacting, painfully precise command to make a computer do anything. Any spelling mistake or input error meant nothing would happen.
Our industry has spent the last 40 years trying to escape that command line. From the 1990s through the 2000s, pointing and clicking was the dominant mode of interaction with technology. In the 2010s, Jobs made it even simpler by popularizing the tap of a touchscreen.
And now in the 2020s, we’ve circled back to a similar form factor, the command bar, but with a completely new engagement paradigm: prompts.
Prompts are commands given via text or voice that tell software what to do.
We’ve had over 20 years of Google’s influence teaching us to rely on a search box for the answers to our queries. But we sense a shift: people are no longer satisfied with software giving them page upon page of loosely related results. Instead, they want command bars that take direct action, using text to prompt AI to generate outputs for them.
Whole products are now built with prompts as the primary form of engagement, and we see a growing number of software startups putting prompts at the center of the experience.
We’re labeling this form of interface prompt-driven design. Prompt-driven design is where software uses an AI-powered command bar as its primary means of navigation, output generation, or both. We find it so exciting because it can make apps more accessible, more powerful, and almost universally applicable.
There are three key tailwinds contributing to an increase in prompt-driven products:
1. Infrastructure. We’re now seeing founders tackle the fundamental nature of search, language, and text in exciting ways: from making it easier to handle prompts on customer service platforms (such as Forethought) to developer APIs that can integrate AI into different types of applications. OpenAI is a great example of this: even though they build their own prompt-driven experiences, most of their revenue comes from their APIs (see the code sketch after this list for what such an API call can look like).
2. User Experience (UX). There is something euphoric about inputting a request and instantly getting the output. Thanks to products like Superhuman or Glean, users are already conditioned to navigate products via a command bar or keyboard shortcuts. For the last few years, it has felt like you need a Command + K hotkey to show you’re part of the cool kids club of software design. While text will continue to be the primary way many users interact with prompt-driven design, we see it as the precursor to voice-based prompts for many use cases, which platforms like Voiceflow (a Felicis investment in 2021) stand to capitalize on. Some geographies, like India, already use voice search at a much higher rate than other countries. Just as QR codes took a while to spread to more Western countries, we see voice prompts following a similar path of adoption.
The fact that prompt-driven products will become accessible through multiple modes of engagement is part of what makes the experience so compelling: entire cohorts of people who were previously not able to create with or use technology will now be able to.
To escape the error-prone command-line interfaces of the ’80s, natural language processing (NLP) and AI are key.
3. AI. User-friendliness becomes possible when prompts are fused with AI. NLP makes it possible for software to understand your intent rather than requiring you to type an exact command. But when NLP is combined with generative AI, real magic can happen: vague input commands can be turned into outputs that are better than what you had imagined.
For example, Runway is a fully generative creative suite, where you can type almost any imagined image or video request into a box and watch it come to life. It’s why we were so excited to lead their Series C in 2022: they’re building a future where you will soon be able to prompt their AI for any kind of creative asset or effect you want. Poly (a Felicis company since 2022) is another example, where a prompt generates textures for 3D designs. We are already used to this “prompt and you get it” style of interaction, but AI makes what you get so, so much better. The simplicity and delight of these experiences cannot be overstated.
As people get more comfortable with AI, working with these models will become second nature, just as using touchscreens feels natural today, whether AI is embedded into platforms like Notion or Canva (both Felicis companies), powering a search chatbot like Andi, or surfacing your personal spoken history with Rewind. Once you get to the point of using AI for specific tasks, asking a program like Adept to perform a detailed or more complicated task (e.g., “book a family-friendly hotel for three nights in Los Angeles”) isn’t much of a stretch.
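To make the infrastructure point above concrete, here is a minimal sketch of what a prompt-driven API call can look like, using OpenAI’s public HTTP API. It is an illustration, not a definitive integration: the model name, the helper function, and the example prompt are assumptions chosen for this sketch.

```python
# A minimal sketch of a prompt-driven interaction via OpenAI's HTTP API.
# Assumptions: the OPENAI_API_KEY environment variable is set, and the
# model name below is a placeholder; use whichever model fits your product.
import os
import requests

def run_prompt(prompt: str) -> str:
    """Send a natural-language prompt and return the model's text output."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# The same kind of vague, human-level instruction a command bar might accept:
print(run_prompt("Suggest a family-friendly hotel for three nights in Los Angeles."))
```

Seen this way, a product’s command bar is essentially a thin layer of UI over a call like this, plus whatever product-specific actions are taken on the result.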
The spectrum of prompt-driven design
It’s important to note that prompt-driven design is a spectrum: it can be employed as a navigational UX, or it can be core to the output, as it often is with AI-powered products that generate results. You can think of these two ends of the spectrum (navigational vs. core UX) as a way to evaluate which companies are dipping their toe into prompt-driven design and which ones are making it an essential part of their offering.
Then there are the companies that build the infrastructure to support prompt-driven design at every level of the spectrum. These are the companies building the APIs that will allow founders to spin up a prompt-driven product, much as cloud and data infrastructure made building SaaS companies far easier.
The spectrum below is meant to be illustrative, not exhaustive, of the types of applications, products, and companies that stand to offer substantial user benefits by adopting prompt-driven design. The range includes products that primarily use a prompt interface for navigation (Superhuman, Glean), those where navigation and some generative output are distinctive parts of the offering (Canva, Replit, Notion), and platforms like Runway or Adept where prompts create outputs as a core part of the experience. Also included is a smattering of infrastructure companies that will facilitate the building of prompt-driven products through their tech.
We expect to see more and more companies make prompts a key part of their product strategy. This change has already started. For example, DoNotPay (another Felicis company) has received loads of attention for its AI consumer advocate that negotiates bills on behalf of customers. This implementation, in particular, takes advantage of large language models and voice APIs to deliver unique value, all based on the simple prompt of “lower my monthly bill, but keep my current plan.”
Founders should recognize this game-changing form of interaction is here to stay for the rest of the decade, if not longer. We are already starting to see organizations hire “prompt designers” or “prompt engineers” because they recognize the massive efficiency gains that prompt-driven products and in-house prompt experts can provide. Until now, we have treated computers as input/output machines: we tell them what to do, and they do what we say with frustrating exactness. Prompt-driven design lets us throw away the crutches of clicking or tapping and takes us to the best place possible: intent.
This post was written by Mischa Vaughn and Aydin Senkut with contributions from James Detweiler, Dan Bartus, and Evan Armstrong.
1. It is important to point out that the first mouse was invented by Douglas Engelbart at the Stanford Research Institute in 1963, but Xerox was the first to really grasp the power of pairing a GUI with a mouse.