In our conversations with developers, it's clear that many companies still struggle to manage their APIs. With the growing reliance on LLMs, these challenges will only compound as AI ecosystems and capabilities expand; so, too, will the volume of data and parameters those APIs must exchange. Maintaining hygiene between APIs is a conversation we should be having now, before a new kind of tech debt sabotages AI's future. To avoid that fate, we should look back at how our reliance on APIs grew in the first place.
Many household-name web companies grew by using APIs to integrate easily with outside platforms. Salesforce, for example, ensured its CRM data could interoperate with platforms large and small. Managing each integration by hand would have been cost-prohibitive, so Salesforce instead built its product as a self-service platform, with a myriad of clear, functional APIs that fit each customer's needs. Then, it watched the ecosystem grow around it.
Stripe followed a similar path a decade later. Payments, like CRM, require a great deal of information to work properly. Stripe was the first to make the exchange of payment data simple. Suddenly, web companies had a payment platform at their fingertips that, like Salesforce, they could deploy with little technical assistance. Talk to a developer who used Stripe in those early days, and they will likely tell you that the clarity of the company's documentation and API design was a major help, and a big part of what made Stripe's meteoric rise possible.
If the 2000s gave us the first APIs, and the 2010s cemented their importance, we are now in an era where every tech company is, in effect, an API company. Selling to developers (or at least with their approval) is the norm, and API partnerships are a significant growth vector: top SaaS companies average over 350 different integrations each.
OpenAI, Anthropic, and other AI-native companies looking to build their LLM ecosystems are already thinking in these terms. Of course, like Salesforce, they cannot afford to manage every integration themselves; if these companies want to see real growth, they need to keep their APIs simple and straightforward. Otherwise, no matter how powerful the underlying LLM is, CTOs will integrate it only selectively.
Keeping the train on track: secure the endpoints
APIs will become increasingly difficult to manage in the age of AI. We are seeing the rise of software features that use LLMs to carry out autonomous tasks: AI agents.
Yet, for an agent to take action, the underlying software must know how to speak the language of each connecting piece of software. If I tell an agent to "schedule my meeting in Chicago," it must exchange myriad pieces of data with APIs from travel sites, my calendar, and other applications. Maintaining the plumbing between these connections, and keeping it secure, will only get more complex.
One of my go-to analogies when explaining APIs has been trains. APIs work something like tracks, ensuring each data point, like a passenger, arrives at the proper station. But imagine if we used different track designs for various terrains, and as trains crossed the country, they had to change their wheels to fit each new track gauge. It would be chaos.
This is what will likely happen as the number of APIs increases: the formats and protocols they expose will proliferate too. For developers, this creates significant maintenance and security challenges. If an LLM's API is built in C++ and Python but the AI agent consuming its output needs SQL, developers must build a translation layer in between. That alone is simple enough, but if the API pushes an update, the developer must update the translator, retest the agent, recheck security, and so on. If their own company ships an update (which companies do at a high rate these days), they must repeat the process again to ensure a secure exchange. Managing this at scale is a costly proposition in time and resources, and can, in turn, become deprioritized.
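A minimal sketch of why such translation layers are brittle, assuming an invented travel API: every field name below ("fare_usd", "depart_iso", "destination") is hypothetical, and each mapped line is a point that silently breaks whenever the provider renames or restructures its payload.

```python
def translate_trip(api_response: dict) -> dict:
    """Map a hypothetical travel API's payload onto the schema an agent expects."""
    return {
        "price": api_response["fare_usd"],            # breaks if the provider renames this field
        "departure": api_response["depart_iso"],      # ditto: must be re-verified on every
        "city": api_response["destination"]["city"],  # upstream update
    }

# A sample upstream payload, using the invented field names above.
upstream = {
    "fare_usd": 412.50,
    "depart_iso": "2025-06-01T08:30:00",
    "destination": {"city": "Chicago"},
}
print(translate_trip(upstream))
```

Three mapped fields means three things to retest after every release on either side; multiply that across hundreds of integrations and the maintenance cost described above becomes concrete.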
AI is evolving quickly, and data won't stay in consistent formats either. Text has expanded into unstructured forms such as audio, images, and video. Data standards will come and go as companies and movements within the development world update—and fight over—their preferences. To beat the analogy into the ground, managing APIs in the current AI environment is like trying to build train tracks during an earthquake: the ground is still shifting, and the dust is obscuring the path forward.
Then, add in the small open-source projects where developers deploy niche software to patch or augment a feature. Volunteers usually manage these projects. Updates are sporadic, and the code is accessible to anyone. As even Apple learned recently, tiny integrations can create huge security vulnerabilities.
Software as a management issue
The final but critical three-letter acronym is the software development kit, or SDK: the tooling an API provider gives developers to streamline their builds. As The API Economy points out in a recent post, a good SDK automatically handles technical chores such as authentication, serialization, retries, and boilerplate, and applies updates across all applications without requiring integrations to be rebuilt. A good SDK maintains compatibility with each new version of the API.
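Those chores can be sketched in a few lines. The client class, endpoint, and method below are all invented for illustration; the point is simply that authentication, serialization, and retries live in one place instead of being repeated at every call site.

```python
import json
import time
import urllib.request


class ExampleClient:
    """A toy SDK-style client; the class name and endpoint are hypothetical."""

    def __init__(self, api_key: str, base_url: str = "https://api.example.com"):
        self.api_key = api_key
        self.base_url = base_url

    def _headers(self) -> dict:
        # Authentication handled once, here, rather than in every call site.
        return {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json",
        }

    def _backoff(self, attempt: int) -> float:
        # Exponential backoff between retries: 1s, 2s, 4s, ...
        return float(2 ** attempt)

    def post(self, path: str, payload: dict, retries: int = 3) -> dict:
        body = json.dumps(payload).encode()  # serialization handled by the SDK
        for attempt in range(retries):
            try:
                req = urllib.request.Request(
                    self.base_url + path, data=body, headers=self._headers()
                )
                with urllib.request.urlopen(req) as resp:
                    return json.load(resp)
            except OSError:  # retry transient network failures
                if attempt == retries - 1:
                    raise
                time.sleep(self._backoff(attempt))
```

When the provider changes its auth scheme or wire format, only this one class changes; application code calling `client.post(...)` is untouched, which is exactly the compatibility guarantee a good SDK sells.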
SDKs are, to extract one final gasp from the train analogy, the rail workers: the pieces that need the latest schedules and technology embedded in their work. When companies cannot keep their SDKs updated, the API falls into disrepair and disuse. We also see fragmentation today, with API styles like GraphQL, gRPC, and REST used for different applications. This is problematic, to say the least: CTOs would ideally use one consistent interface style across all their API integrations.
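To make that fragmentation concrete, here is the same hypothetical "fetch user 42" request sketched in each of the three styles named above. The endpoints, fields, and type names are invented; only the shape of each dialect is the point.

```python
# One request, three API dialects. All identifiers below are hypothetical.

rest_request = "GET /users/42"                             # REST: a resource URL

graphql_request = "{ user(id: 42) { name email } }"        # GraphQL: one query language

grpc_method = "rpc GetUser (UserRequest) returns (User);"  # gRPC: a typed stub from a .proto

for dialect, text in [("REST", rest_request),
                      ("GraphQL", graphql_request),
                      ("gRPC", grpc_method)]:
    print(f"{dialect:8} {text}")
```

A team supporting all three must maintain three sets of tooling, documentation habits, and security reviews for what is, semantically, a single operation.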
AI will bring a lot of advancement to APIs. LLMs will provide better API discovery (how could a developer possibly sift through them all today?) and more precise distillation of API documentation. We will see AI optimize performance by routing requests and building protocols in ways engineers hadn't imagined. AI will grow into both a super-powered assistant to developers and the gateway to each API.
Yet adding intelligence doesn't mean reducing complexity. The rise of AI will create more failure points in the creation of new features, which will, in turn, create more work for the human developers behind them. Today, AI is still in its Wild West phase: companies are struggling to integrate it and to figure out the best technical and business paths forward. But when the dust starts to settle, those who took a calmer, smarter, broader view of AI's capabilities will end up the winners.
If you or your company have an eye on that view, please get in touch.