Far World Labs

Making Space for AI


The Power of Generated Data

One of the most surprising features of generative AI is its ability to generate structured data. Most applications contain multitudes of data. Applications largely exist to usher data in and out of databases, APIs move data back and forth to issue commands, and UIs are just mashups of data from many places. Generative AIs can produce this data artificially, driving intelligent adaptations via state changes at many levels. The new AI tools dramatically change what’s possible for software engineers and end-users alike, opening up exciting possibilities in support of more ambitious user goals and greater software craftsmanship. We’re able to rethink everything.

Opportunities and Challenges in UIs and APIs

Automorphing Design

Take the responsiveness of applications. Today, our UIs adapt to factors such as screen size and network conditions, controlling details like page layout and selectively hiding or resizing components. UIs are not content-aware and not usually aware of user intent. The user has to explicitly click around and use information scent to forage through a static world of pages and forms. Future UIs can instead shapeshift toward user needs. This might take the form of introducing navigation elements like menu items and form inputs in response to context. Background AIs can use gathered context to generate screens or micro-apps on demand. This is far beyond what we think of as responsive design today. Will responsive design take a back seat to automorphing design? We’ll see.
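To make this concrete, here’s a minimal sketch of what a context-driven navigation element might look like. The generateJson callback stands in for whatever LLM client an app happens to use, and the shape check is what keeps generated suggestions from escaping the expected form; all of the names here are illustrative rather than taken from any particular framework.

```ts
type NavSuggestion = { label: string; route: string };

interface SessionContext {
  recentSearches: string[];
  currentRoute: string;
}

// Type guard: only data matching the expected shape is allowed into the UI.
function isNavSuggestionList(value: unknown): value is NavSuggestion[] {
  return (
    Array.isArray(value) &&
    value.every(
      (v) =>
        typeof v === "object" &&
        v !== null &&
        typeof (v as NavSuggestion).label === "string" &&
        typeof (v as NavSuggestion).route === "string"
    )
  );
}

// generateJson stands in for whatever LLM client the app uses; it should return parsed JSON.
export async function suggestNavigation(
  ctx: SessionContext,
  generateJson: (prompt: string) => Promise<unknown>
): Promise<NavSuggestion[]> {
  const raw = await generateJson(
    `Given recent searches ${JSON.stringify(ctx.recentSearches)} and the current ` +
      `route "${ctx.currentRoute}", propose up to 3 menu items as JSON ` +
      `[{"label": string, "route": string}].`
  );
  // Anything that doesn't match the expected shape is dropped; the static menu is the fallback.
  return isNavSuggestionList(raw) ? raw.slice(0, 3) : [];
}
```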

API Intelligence

APIs and their corresponding architectural elements now have similar opportunities for adaptive, generative behavior. An endpoint might vary its behavior based on user context queried from an LLM-powered analytics service and use that context to parameterize database queries, say, when searching a product catalog. Imagine content negotiation in the HTTP sense that actually negotiated. Imagine if the objects of that negotiation could be shaped and created through intelligent interactions between AIs. Nothing remotely like this was possible a few years ago. Interfaces in the abstract sense have almost literally come alive with possibility.
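A rough sketch of the product-catalog case might look like the following. The inferContext dependency represents a hypothetical LLM-powered analytics service; the key move is that its output only ever parameterizes a query within known bounds, and none of these names come from a real API.

```ts
interface ShopperContext {
  preferredCategories: string[];
  priceSensitivity: "low" | "medium" | "high";
}

interface CatalogQuery {
  text: string;
  categories: string[];
  maxPrice?: number;
}

// Hypothetical dependency: an LLM-powered service that infers shopping context for a user.
type ContextService = (userId: string) => Promise<ShopperContext>;

const KNOWN_CATEGORIES = new Set(["audio", "kitchen", "outdoors"]);

export async function buildSearchQuery(
  userId: string,
  searchText: string,
  inferContext: ContextService
): Promise<CatalogQuery> {
  try {
    const ctx = await inferContext(userId);
    return {
      text: searchText,
      // Only pass through categories we already know; free-form output never reaches the database.
      categories: ctx.preferredCategories.filter((c) => KNOWN_CATEGORIES.has(c)),
      // Illustrative rule: price-sensitive shoppers get a capped query.
      maxPrice: ctx.priceSensitivity === "high" ? 50 : undefined,
    };
  } catch {
    // If the intelligent layer fails or times out, fall back to a plain query.
    return { text: searchText, categories: [] };
  }
}
```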

These examples present several challenges, however:

  • How do we make data-driven systems in a way that supports AI control?
  • How do we avoid changing state in confusing ways?
  • How do we make these changes visible?
  • How do we manage the latency of generative operations?
  • How do we contain the errors produced in generated data?

Configuration as State

Today’s configuration-oriented APIs like Kubernetes and Redux are some of the better places to apply changes created by generative AI. Unlike procedural code, generated data can be validated against expected shapes and datatypes with schema and contract validators. Our standard approaches to testing can be applied to the more complicated system behaviors. AIs can be given the reins while developers rest assured their systems will continue working.
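As a small illustration, here’s one way a generated configuration change could be checked against an expected shape before it’s applied. The ScalingChange type and the bounds are invented for the example; any schema or contract validator would serve the same purpose.

```ts
interface ScalingChange {
  deployment: string;
  replicas: number;
}

// Only deployments we already manage may be touched by generated config.
const MANAGED_DEPLOYMENTS = new Set(["web", "worker"]);

function parseScalingChange(raw: unknown): ScalingChange | null {
  if (typeof raw !== "object" || raw === null) return null;
  const { deployment, replicas } = raw as Record<string, unknown>;
  if (typeof deployment !== "string" || !MANAGED_DEPLOYMENTS.has(deployment)) return null;
  if (typeof replicas !== "number" || !Number.isInteger(replicas) || replicas < 1 || replicas > 20) {
    return null;
  }
  return { deployment, replicas };
}

export function applyGeneratedConfig(
  raw: unknown,
  apply: (change: ScalingChange) => void
): boolean {
  const change = parseScalingChange(raw);
  if (!change) return false; // reject malformed or out-of-bounds proposals outright
  apply(change);
  return true;
}
```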

Master Integration Complexity

AIs can also check their work and verify they have done everything correctly. They can be used almost everywhere that has an interface, after all, but is that really a good idea? API-based LLM operations add to latency and cost, and they can substantially increase the volume of internal changes to deal with. All the data an AI generates is variability that the old and fussy parts of our systems have to deal with. Good luck with that. In the case of new AI techniques, it’s certainly true that with great power comes great responsibility.

Enforce Data Integrity

There’s actually a better way to ensure the formal correctness of data produced by LLMs, however. The idea is this: at their core, LLMs are simply sequence generators, but if the program constrains which tokens are allowed next, the generation can remain syntactically valid at all times. If the task requires producing valid JSON or valid Python, for example, it’s possible to maintain correct syntax with every token produced. Candidate tokens can be sampled and invalid ones rejected in favor of lower-ranked alternatives until a valid token is found. This is a massive guarantee for integrations where data integrity is crucial.
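Here’s a toy sketch of that constraint loop, with the model and the validity check left as parameters, since real constrained decoding happens at the token level inside the decoder. It only illustrates the filtering idea, not a production technique.

```ts
interface Candidate {
  token: string;
  probability: number;
}

// Hypothetical model interface: proposes ranked next-token candidates for a given prefix.
type ProposeTokens = (prefix: string) => Candidate[];

export function generateConstrained(
  propose: ProposeTokens,
  isValidPrefix: (text: string) => boolean,
  maxTokens = 256
): string {
  let output = "";
  for (let i = 0; i < maxTokens; i++) {
    // Keep only candidates that leave the output a valid prefix of the target format.
    const candidates = propose(output)
      .filter((c) => isValidPrefix(output + c.token))
      .sort((a, b) => b.probability - a.probability);
    if (candidates.length === 0) break; // no legal continuation; stop rather than emit invalid output
    output += candidates[0].token;
  }
  return output;
}
```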

Human-Centered Design

Perhaps the greatest area of difficulty in including LLM-generated data is the user interface, where amazing AI features are closest to the user and can generate the most tangible business value. As UI developers, we must ensure our data and data-derived elements are well-behaved no matter what user context might emerge in the output of internal LLMs. More things can go wrong in more ways if we insert our context-blind assistants behind arbitrary software interfaces. If the AI is augmenting the behavior of Photoshop, let’s say, we wouldn’t want it irreversibly adding shapes to hidden layers. We would prefer visible and non-destructive changes, with the user informed of when and where those changes could happen. With AI it’s important to uphold the principle of least surprise, keeping the user in the loop for the important parts of any new workflows involving AI collaboration.
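As a hypothetical illustration of the non-destructive approach, AI contributions could land on their own clearly labeled, visible layer rather than mutating existing ones. The document and layer model below is invented for the example.

```ts
interface Layer {
  name: string;
  visible: boolean;
  shapes: string[];
}

interface Doc {
  layers: Layer[];
}

export function addAiShapes(doc: Doc, shapes: string[]): Doc {
  // Never touch existing layers, hidden or not; append a visible, clearly labeled AI layer instead.
  const aiLayer: Layer = { name: "AI suggestions", visible: true, shapes };
  return { ...doc, layers: [...doc.layers, aiLayer] };
}
```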

Giving AI Space to Collaborate

Trust and Transparency

This is what I mean by giving AI space. Our intelligent systems won’t interface via mouse and cursor in a way that lets us treat them as collaborative human agents. We often won’t see the changes they’re making, and they may change things broadly or at a frequency we need help understanding. They’ll operate not just at the top-level user interface but potentially at every level of an application, some of which are low-level places well beyond the user’s mental model. To the extent we can, systems should support fine-grained moderation of AI contributions, where the user can approve or reject AI-generated data and operations taken by AI. AI-focused anti-corruption adapters can queue changes for approval and log changes once made. These patterns will give the user a greater sense of safety when working with applications backed by intelligent agents and will support scaling these integrations much further.
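A minimal sketch of such an anti-corruption adapter might look like this: proposed changes sit in a queue until a person approves or rejects them, and every applied change is logged. The shapes and names here are assumptions made for illustration.

```ts
interface Change<T> {
  id: string;
  description: string;
  payload: T;
}

export class AiChangeQueue<T> {
  private pending = new Map<string, Change<T>>();
  private log: Array<{ change: Change<T>; appliedAt: Date }> = [];

  constructor(private apply: (payload: T) => void) {}

  propose(change: Change<T>): void {
    this.pending.set(change.id, change); // nothing happens until a human approves
  }

  listPending(): Change<T>[] {
    return [...this.pending.values()];
  }

  approve(id: string): void {
    const change = this.pending.get(id);
    if (!change) return;
    this.pending.delete(id);
    this.apply(change.payload);
    this.log.push({ change, appliedAt: new Date() }); // audit trail of what the AI changed
  }

  reject(id: string): void {
    this.pending.delete(id);
  }

  history(): ReadonlyArray<{ change: Change<T>; appliedAt: Date }> {
    return this.log;
  }
}
```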

Human Agency

To the extent an interface implements direct manipulation of system objects, the user can correspondingly tell whether something changed because of their actions or because of something AI has done. If the LLM extracts intent in a parallel thread and performs actions at a distance from what a user is currently attending to, things could very quickly become frustrating for users and developers. If the AIs make visible changes to system objects, cue the user so they can anticipate what’s about to change. Use conventions for collaborative interactions when possible. The system should set clear expectations and logical boundaries around what might change from moment to moment.

Guarantee Safe Recovery

Making AI actions reversible is just as crucial as making user-initiated operations undoable. The model for state change in systems should account for this, whether dealing with UIs or developer-facing state changes.
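One simple way to model this, sketched below, is to record every state change as a command with an inverse, regardless of who initiated it. The names are illustrative rather than taken from any particular state library.

```ts
interface Command<S> {
  initiator: "user" | "ai";
  apply: (state: S) => S;
  undo: (state: S) => S;
}

export class ReversibleStore<S> {
  private history: Command<S>[] = [];

  constructor(private state: S) {}

  dispatch(cmd: Command<S>): void {
    this.state = cmd.apply(this.state);
    this.history.push(cmd);
  }

  // Undo the most recent change, whether a person or an AI made it.
  undoLast(): void {
    const cmd = this.history.pop();
    if (cmd) this.state = cmd.undo(this.state);
  }

  current(): S {
    return this.state;
  }
}
```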

Concluding Thoughts

LLMs are taking center stage as new elements in our software development toolkits for good reason. The problem-solving power encapsulated behind a simple API call is like nothing we’ve seen before. The downside is that including them in our systems introduces enormous degrees of data variability. It’s primarily this detail that I think most developers will neglect to manage, unwittingly adding debt and breakage across applications. Applications will get smarter but also become more complex. This is one of a few reasons why I don’t see AI completely automating software development anytime soon. Our jobs will become both easier and harder. We would be wise to integrate these elements carefully in order to avoid making our systems unstable and frustrating. Principles of design and engineering remain ever-critical.