In our conversations with developers, it's clear that many companies still struggle to manage their APIs. As reliance on LLMs grows, these challenges will compound along with AI ecosystems and capabilities, and so will the volume of data and parameters those APIs must carry. Maintaining hygiene across APIs is a conversation we should be having now, before a new kind of tech debt sabotages AI's future. To avoid that fate, we should learn from our past and from how our reliance on APIs grew.

Many household-name web companies grew by using APIs to integrate easily with outside platforms. Salesforce ensured its CRM data could interoperate with platforms large and small, yet managing each integration by hand would have been cost-prohibitive. Instead, Salesforce built primarily for self-service, offering a myriad of clear, functional APIs that fit each customer's needs, then watched the ecosystem grow around it.

[Figure: Felicis's API timeline, 2000–2020. 2000s, pioneers: eBay, Salesforce. ~2005, social developer communities: Facebook, Shopify, Twitter; API economies: Apigee, AWS, Microsoft Azure, Google Cloud. 2010s, API-first companies: Twilio, Stripe; SDK as a service: Postman, Swagger. ~2015, API platforms and standardization: Kong, the OpenAPI Initiative. 2020s, APIs in the AI era: Stainless, Speakeasy, Fern, Unkey, Liblab.]

Stripe followed a similar path a decade later. Payments, like CRM, require a great deal of information to work properly. Stripe was the first to make exchanging payment data simple. Suddenly, web companies had a payment platform at their fingertips that, like Salesforce, they could deploy with little technical assistance. Talk to a developer who used Stripe in those early days, and they will likely tell you the company's documentation and API design were enormously helpful, and that they made Stripe's meteoric rise possible.

If the 2000s gave us the first APIs, and the 2010s cemented their importance, we are now in an era where every tech company is really an API company. Selling to developers (or at least with their approval) is the norm, and API partnerships are a significant growth vector. Top SaaS companies average over 350 integrations each.

[Figure: Bar chart, "Number of Integrations at Top SaaS Companies." Shopify leads with over 7,000; Okta and Zapier exceed 6,000; Google Workspace and Atlassian each offer around 5,000; Slack, Salesforce, and Zoom fall between roughly 3,000 and 4,000; Box, Zendesk, ChatGPT, and BambooHR between 2,000 and 3,000; HubSpot, Coinbase, Domo, Xero, Typeform, Intuit, Datadog, DocuSign, and Pipedrive between roughly 1,000 and 2,000.]

OpenAI, Anthropic, and every other AI-native company looking to build an LLM ecosystem is already thinking in these terms. Of course, like Salesforce, it is cost-prohibitive for them to manage every integration; if these companies want to see real growth, they need to keep their APIs simple and straightforward. Otherwise, no matter how capable the underlying LLM is, CTOs will integrate it only selectively.

Keeping the train on track: secure the endpoints

APIs will become increasingly difficult to manage in the age of AI. We are seeing the rise of software features that use LLMs to carry out autonomous tasks: AI agents.

Yet, for an agent to take action, the underlying software must know how to speak the language of each connecting piece of software. If I tell an agent to "Schedule my meeting in Chicago," the agent must be able to exchange myriad pieces of data with APIs from travel sites, my calendar, and other applications. Maintaining the plumbing between these connections, and keeping it secure, will only get more complex.
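To make this concrete, here is a minimal Python sketch of that translation burden. Every name and field below is invented for illustration: a single intent ("schedule my meeting in Chicago") has to be reshaped into a different payload for a hypothetical travel API and a hypothetical calendar API, each with its own vocabulary and date conventions.

```python
# One intent, two hypothetical downstream APIs, two different payload shapes.
# All field names here are invented; real services define their own schemas.

def to_travel_request(intent: dict) -> dict:
    # A travel API might expect airport-style city codes and bare ISO dates.
    return {
        "destination": intent["city_code"],
        "depart_date": intent["date"],
        "passengers": 1,
    }

def to_calendar_request(intent: dict) -> dict:
    # A calendar API might expect an event summary and a full timestamp.
    return {
        "summary": intent["title"],
        "start": {"dateTime": f'{intent["date"]}T09:00:00-05:00'},
        "location": intent["city_name"],
    }

intent = {
    "title": "Quarterly review",
    "city_name": "Chicago",
    "city_code": "ORD",
    "date": "2025-03-14",
}

travel_payload = to_travel_request(intent)
calendar_payload = to_calendar_request(intent)
```

Two integrations already means two mappings to maintain; an agent juggling dozens of services multiplies that bookkeeping accordingly.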

[Figure: Flowchart of a typical API integration, broken into seven steps, with arrows highlighting the cyclical nature of testing and refinement:]
1. Read the API documentation to identify the correct call or combination of calls.
2. Understand the expected request and response data formats for the relevant endpoints.
3. Write code to form and send HTTP requests as the documentation specifies.
4. Write code to parse API responses into the appropriate data structures.
5. Anticipate errors that may occur during calls and write code to handle them.
6. Handle authentication, retries, and rate limiting so the calls stay robust and secure.
7. Test and refine the code for all relevant calls, cycling back as needed.
Source: Speakeasy.
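The steps above can be sketched in a few dozen lines of Python. This is a hedged illustration, not a real integration: the endpoint, token, and response shape are invented, and a stub transport stands in for an HTTP client so the sketch runs offline.

```python
import json
import time

# Hypothetical endpoint and token; a real integration takes these from
# the provider's API documentation (step 1) and its auth scheme.
API_URL = "https://api.example.com/v1/meetings"
API_TOKEN = "sk-example"

def fake_transport(url, headers, body):
    # Stand-in for an HTTP client, so the sketch runs without a network.
    return 200, json.dumps({"id": "mtg_1", "status": "confirmed"})

def call_api(payload, transport=fake_transport, retries=3):
    headers = {"Authorization": f"Bearer {API_TOKEN}"}  # step 6: auth
    body = json.dumps(payload)                          # steps 2-3: request format
    for attempt in range(retries):                      # step 6: retries
        status, raw = transport(API_URL, headers, body)
        if status == 429:                               # step 6: rate limiting
            time.sleep(2 ** attempt)                    # back off, then retry
            continue
        if status >= 400:                               # step 5: error handling
            raise RuntimeError(f"API error {status}")
        return json.loads(raw)                          # step 4: parse the response
    raise RuntimeError("retries exhausted")

result = call_api({"city": "Chicago"})
```

Even this toy version touches auth, serialization, retries, rate limits, and error handling, and all of it must be retested (step 7) each time the provider changes anything.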

One of my go-to analogies when explaining APIs has been trains. APIs work something like tracks, ensuring each data point, each passenger, arrives at the proper station. But imagine if we used different track designs for various terrains and trains, so that as trains moved around the country, they had to change their wheels to fit each new gauge. It would be chaos.

This is what will likely happen as the number of APIs increases: the languages and formats they speak will multiply too. For developers, this creates significant maintenance and security challenges. If an LLM's API offers clients in C++ and Python, but the AI agent consuming its outputs needs SQL, developers must build a translation layer in between. That alone is simple enough, but if the API provider ships an update, the developer must update the translator, retest the agent, recheck security, and so on. If their own company ships an update, which companies do at a high rate these days, they must repeat the process again to ensure a secure exchange. Managing this at scale is costly in time and resources and can, in turn, get deprioritized.
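Here is what such a translation layer can look like at its smallest: a hypothetical JSON response from an API, rewritten into a SQL table an agent can query. The response shape and table schema are invented, which is exactly the problem: whenever either side changes, this glue code must change with it.

```python
import json
import sqlite3

# A thin "translator" between a JSON-speaking API and a SQL-consuming agent.
# Both the response shape and the table schema are invented for illustration.
api_response = json.dumps({"user": {"id": 7, "plan": "pro"}})

def translate_to_sql(raw: str, conn: sqlite3.Connection) -> None:
    # If the API renames "user" or "plan" in an update, this breaks silently
    # or loudly, and must be rewritten and retested either way.
    record = json.loads(raw)["user"]
    conn.execute(
        "INSERT INTO users (id, plan) VALUES (?, ?)",
        (record["id"], record["plan"]),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, plan TEXT)")
translate_to_sql(api_response, conn)
rows = conn.execute("SELECT id, plan FROM users").fetchall()
```

Multiply this ten-line shim by hundreds of integrations, each on its own update cadence, and the maintenance burden described above comes into focus.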

AI is evolving quickly, and data won't stay in consistent formats either. Text has expanded into unstructured forms such as audio, images, and video. Data standards will come and go as companies and movements within the development world update—and fight over—their preferences. To beat the analogy into the ground, managing APIs in the current AI environment is like trying to build train tracks during an earthquake: the ground is still shifting, and the dust is obscuring the path forward.

Then, add in the small open-source projects where developers deploy niche software to patch or augment a feature. Volunteers usually manage these projects. Updates are sporadic, and the code is accessible to anyone. As even Apple learned recently, tiny integrations can create huge security vulnerabilities.

Software as a management issue

The final but critical three-letter acronym is the software development kit, or SDK. These are the tools an API provider gives developers to streamline their builds. As The API Economy points out in a recent post, a good SDK automatically handles technical details such as authentication, serialization, retries, and boilerplate, and applies updates across all applications without requiring integrations to be rebuilt. A good SDK maintains compatibility with each new version of the API.
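The difference an SDK makes can be sketched in a few lines. In this hedged example, the client class, endpoint, and response shape are all invented: the point is only that auth, serialization, and retries live inside the SDK, so application code shrinks to a single method call, and an API update means shipping a new SDK version rather than rewriting every caller.

```python
import json

class ExampleClient:
    """A minimal sketch of an SDK client; the name and API are invented."""

    def __init__(self, token, transport):
        self._token = token
        self._transport = transport  # injected so the sketch runs offline

    def create_meeting(self, city: str) -> dict:
        # Auth, serialization, and retries are handled once, in here,
        # instead of in every application that calls the API.
        headers = {"Authorization": f"Bearer {self._token}"}
        body = json.dumps({"city": city})
        for _ in range(3):  # simple retry on server errors
            status, raw = self._transport(headers, body)
            if status < 500:
                return json.loads(raw)
        raise RuntimeError("service unavailable")

def stub_transport(headers, body):
    # Stand-in for the real HTTP layer, echoing the requested city back.
    return 200, json.dumps({"id": "mtg_1", "city": json.loads(body)["city"]})

client = ExampleClient("sk-example", stub_transport)
meeting = client.create_meeting("Chicago")
```

From the application's side, the entire integration is `client.create_meeting("Chicago")`; everything else is the SDK's job to keep compatible.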

SDKs are, to extract one final gasp from the train analogy, the rail workers: the crews that need the latest schedules and technology embedded in their work. When companies cannot keep their SDKs up to date, the API falls into disrepair and disuse. We already see fragmentation today, with GraphQL, gRPC, and REST each used for different applications. This is problematic, to say the least: CTOs would ideally use one consistent interface style across all their API integrations.
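That fragmentation is easy to see side by side. The sketch below expresses the same lookup in the shape each style uses; no real service is targeted, and the gRPC version is shown as a plain dict rather than actual protobuf, purely for illustration.

```python
# The same "fetch user 7" request, phrased in three API styles.
# None of these payloads targets a real service.

rest_request = {"method": "GET", "path": "/users/7"}  # REST: resource URLs

graphql_request = {  # GraphQL: a query string selecting specific fields
    "query": "query { user(id: 7) { id plan } }",
}

grpc_request = {  # gRPC really uses protobuf; a dict stands in here
    "service": "UserService",
    "method": "GetUser",
    "message": {"id": 7},
}

styles = [rest_request, graphql_request, grpc_request]
```

One logical operation, three wire formats: every SDK, translator, and test suite has to account for whichever styles its integrations happen to use.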

AI will bring a lot of advancement to APIs. LLMs will provide better API discovery (how could a developer possibly sift through them all today?) and more precise distillation of API documentation. We will see AI optimize performance by routing requests and building protocols in ways engineers hadn't imagined. AI will grow into both a super-powered assistant to developers and the gateway for each API.

Yet adding intelligence doesn't mean reducing complexity. The rise of AI will create more failure points in the creation of new features, which will, in turn, create more work for their human developer counterparts. Today, AI is still in its Wild West phase. Companies are still struggling to integrate it and to figure out the best technical and business paths forward. But when the dust settles, those who took a calmer, smarter, broader view of AI's capabilities will end up the winners.

If you or your company have an eye on that view, please get in touch.