In the early 1990s, as part of the Advanced Local Loop group at a major telecoms research lab, I worked on the “last mile” problem: delivering services to homes. In particular, I explored what would become possible once the network’s transition from analog to digital services was complete.
Immersing myself in the lab’s library, I contemplated the future of computing in a world of universal bandwidth. One concept that captivated me was ubiquitous computing: a vision of computers fading seamlessly into the background while software agents act as our intermediaries, engaging with network services on our behalf. That vision sparked initiatives at Apple, IBM, General Magic, and other prominent tech companies.
In the early 1990s, the notion of software agents, particularly intelligent and autonomous agents, emerged as a groundbreaking concept. Pioneered by MIT professor Pattie Maes, these agents were envisioned as adaptive programs capable of extracting information for users and modifying their behavior dynamically. This forward-looking research laid the groundwork for the evolution of software agents, although it took over 30 years for the industry to fully embrace these ambitious ideas.
One notable initiative driving the realization of intelligent agents is Microsoft’s Semantic Kernel team. Building on OpenAI’s Assistants API, they are developing intelligent agents along with a suite of tools for defining and managing the functions those agents can call. Semantic Kernel is evolving into a runtime for contextual conversations, streamlining the orchestration model and managing prompt history.
The emergence of an AI stack, exemplified by Microsoft’s Copilot model, signifies a contemporary implementation of an agent stack. This extends from AI-ready infrastructure and foundational models to plugin support, fostering a cohesive ecosystem across Microsoft and OpenAI platforms.
Semantic Kernel introduces plugins to build LLM user interfaces without requiring developers to manage conversation histories explicitly. By handling conversation state itself, Semantic Kernel becomes the contextual agent, interacting with external tools through plugins. These plugins attach LLM-friendly descriptions to methods, so users can trigger actions with flexible, intuitive language rather than exact commands.
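To make the idea concrete, here is a minimal plain-Python sketch, not the actual Semantic Kernel SDK: a hypothetical `llm_describe` helper attaches a description to each plugin method, a stand-in matcher plays the role of the LLM in selecting a method from loose user phrasing, and a small kernel object keeps the conversation history so the caller never manages it.

```python
from dataclasses import dataclass, field


def llm_describe(description: str):
    """Hypothetical helper: attach an LLM-friendly description to a method."""
    def wrap(fn):
        fn.llm_description = description
        return fn
    return wrap


class LightsPlugin:
    """Toy plugin: the descriptions, not the method names, drive selection."""
    def __init__(self):
        self.on = False

    @llm_describe("Turn the lights on")
    def switch_on(self):
        self.on = True
        return "lights on"

    @llm_describe("Turn the lights off")
    def switch_off(self):
        self.on = False
        return "lights off"


@dataclass
class MiniKernel:
    """Holds conversation state so callers never track history themselves."""
    history: list = field(default_factory=list)

    def invoke(self, plugin, user_text: str):
        self.history.append(("user", user_text))
        # Stand-in for the LLM: pick the method whose description
        # shares the most words with the user's request.
        best = max(
            (m for m in vars(type(plugin)).values()
             if hasattr(m, "llm_description")),
            key=lambda m: len(set(m.llm_description.lower().split())
                             & set(user_text.lower().split())),
        )
        result = best(plugin)
        self.history.append(("assistant", result))
        return result


kernel = MiniKernel()
plugin = LightsPlugin()
print(kernel.invoke(plugin, "please turn the lights on"))  # lights on
```

In the real framework an LLM performs the matching step, which is what lets a phrase like “it’s too dark in here” reach the same method as “turn the lights on.”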
The integration of OpenAI plugins further extends Semantic Kernel’s capabilities, allowing access to external APIs through semantic descriptions. Semantic Kernel acts as the orchestrator, managing context and chaining calls to various APIs, creating a powerful tool for constructing intelligent agents.
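The chaining pattern can be sketched as follows. This is an illustrative stand-in, not the real plugin mechanism: actual OpenAI plugins are described by a manifest and an OpenAPI spec, whereas here two stub functions impersonate remote APIs, keyed by their semantic descriptions, and a tiny orchestrator feeds each call’s result into the next.

```python
# Stubs standing in for remote APIs (hypothetical, for illustration only).
def get_weather(city: str) -> str:
    """Stub for a weather API."""
    return f"sunny in {city}"


def book_table(context: str) -> str:
    """Stub for a restaurant-booking API."""
    return f"outdoor table booked ({context})"


# The orchestrator sees only semantic descriptions, never function names.
REGISTRY = {
    "look up the current weather for a city": get_weather,
    "reserve a restaurant table given some context": book_table,
}


def orchestrate(steps, initial_input):
    """Chain the registered calls, feeding each result into the next."""
    value = initial_input
    for description in steps:
        value = REGISTRY[description](value)
    return value


plan = [
    "look up the current weather for a city",
    "reserve a restaurant table given some context",
]
print(orchestrate(plan, "Seattle"))  # outdoor table booked (sunny in Seattle)
```

The design point is that the orchestrator holds the context (the running `value`) between calls, so each API stays stateless while the agent layer supplies the memory.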
Semantic Kernel’s move towards autonomy introduces planners that enable dynamic workflow generation, including loops and conditional statements. The Handlebars planner, for instance, creates orchestrations based on user instructions, utilizing defined plugins for task completion. The testing framework, facilitated by Azure AI Studio’s Prompt Flow tool, ensures accuracy and reliability of planners and plugins.
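A toy version of a planner-produced workflow might look like this. In the real Handlebars planner the model emits a Handlebars template over the registered plugins; here, as a simplified assumption, the “plan” is a small Python structure containing a loop and a conditional, executed step by step against a plugin table.

```python
# Hypothetical plugin table: names map to callables, as a planner would see them.
PLUGINS = {
    "math.is_even": lambda n: n % 2 == 0,
    "math.double": lambda n: n * 2,
}

# A generated "plan" with a loop and a conditional gate, standing in for
# the Handlebars template a real planner would produce.
plan = {
    "loop_over": [1, 2, 3, 4],
    "if": "math.is_even",   # conditional: which items to act on
    "then": "math.double",  # action applied when the gate passes
}


def run_plan(plan):
    """Execute the plan: iterate, test the condition, apply the action."""
    results = []
    for item in plan["loop_over"]:
        if PLUGINS[plan["if"]](item):
            results.append(PLUGINS[plan["then"]](item))
    return results


print(run_plan(plan))  # [4, 8]
```

Because the plan is data rather than code, it can be inspected and tested before execution, which is exactly where a harness like Prompt Flow fits: scoring whether generated plans call the right plugins in the right order.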
While Semantic Kernel’s agent model deviates from the original concept of sending intelligent code to remote platforms, it aligns with modern distributed application development. The approach leverages APIs, cloud resources, and microservices, allowing agents to orchestrate workflows intelligently across distributed systems.
In retrospect, the vision of software agents has transformed from running arbitrary code on remote servers to orchestrating workflows that span distributed systems. The contemporary model, exemplified by Semantic Kernel, harnesses APIs, vector searches, and plugins within a simplified programmatic framework to construct a modern alternative to the original agent premise. The evolution also underscores how much the threat landscape has changed: the largely benign network of the mid-1990s has given way to one with actively malicious actors, and today’s security considerations are vastly different as a result.