Concepts

At the heart of Sagentic lies the concept of autonomous agents. Let's break down what we mean by this and why our approach is both unique and powerful.

What Are Autonomous Agents?

Autonomous agents are software programs designed to operate on their own, making decisions and taking actions based on their environment and internal state. They're not just scripts that run in response to a command; they're more like independent entities that can understand and interact with the world around them.

These agents can be deployed across a variety of computing platforms, from cloud servers to your personal laptop. They're versatile, interacting with everything the host system offers—files, databases, network connections—and can even call upon other programs and APIs to accomplish their tasks.

Sagentic Vision of Agents

In Sagentic, agents are not sprawling, complex systems but rather discrete, specialized modules. They are simply TypeScript classes that implement a straightforward interface, so the sky's the limit when it comes to what you can run inside them: calling APIs, using any npm library, working with the file system, or even spinning up a headless browser.

This design choice makes agents in Sagentic akin to React components. They have a lifecycle, with defined stages for initialization, execution, and cleanup. This familiar pattern for developers means that creating and managing the behavior of agents becomes as intuitive as building UI components, but instead of rendering visuals, they're performing tasks and making decisions autonomously.
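As a sketch of this lifecycle idea (the `Agent` interface, its method names, and `SummarizeFileAgent` below are illustrative assumptions, not the actual Sagentic API), an agent class might look like this:

```ts
import { readFile } from "node:fs/promises";

// Illustrative only: this interface and its lifecycle hooks are hypothetical,
// not the real Sagentic Agent API.
interface Agent<Input, Output> {
  initialize(input: Input): Promise<void>; // set up state, open resources
  run(): Promise<Output>;                  // perform the task, possibly over many steps
  finalize(): Promise<void>;               // clean up
}

// A tiny agent that "summarizes" a file; the LLM call is stubbed out.
class SummarizeFileAgent implements Agent<{ path: string }, { summary: string }> {
  private text = "";

  async initialize(input: { path: string }): Promise<void> {
    this.text = await readFile(input.path, "utf-8");
  }

  async run(): Promise<{ summary: string }> {
    // A real agent would send this.text through a Thread to an LLM here.
    return { summary: this.text.slice(0, 200) };
  }

  async finalize(): Promise<void> {
    this.text = "";
  }
}
```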

We have coined the term "skilled autonomous agents" to describe this approach. Each agent is designed to perform a specific, well-defined task and operates within strict yet easy-to-understand boundaries. This ensures that agents are reliable and predictable, which is essential for building complex systems.

Agent Autonomy

While some may believe that an agent's autonomy stems from its prompts and the reasoning provided by Large Language Models (LLMs), Sagentic recognizes that true autonomy encompasses more. Each Sagentic agent can maintain one or more conversations with an LLM, referred to as Threads. These Threads organize the agent's interactions with LLMs, chaining prompts and answers into immutable data structures.

Reliance on LLMs for decision-making can often yield unpredictable results, a situation exacerbated when multiple agents are cooperating. To address this, Sagentic agents are designed as state machines with clearly defined types for inputs, outputs, and internal states. This design choice frames their operations — putting the 'framework' in Sagentic, if you will.

There are numerous benefits to this approach. First, it allows agents to be deterministic, which is essential for reliable and predictable outcomes. Second, it enables agents to be stopped, paused and resumed at any moment. Third, it allows agents to be introspected, which is crucial for debugging and auditing. Finally, it enables agents to be scaled across multiple machines, which is essential for large deployments.
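For illustration, here is a minimal sketch of that idea: an agent whose input, output, and internal state are explicit types, and whose work is a sequence of state transitions. The type and function names below are invented for this example and are not part of the Sagentic API.

```ts
// Hypothetical typed state machine for an agent.
type Input = { question: string };
type Output = { answer: string };

type State =
  | { kind: "start"; question: string }
  | { kind: "asked"; question: string; draft: string }
  | { kind: "done"; answer: string };

// Each step is a single transition, so the machine can be paused,
// persisted for introspection, and resumed between steps.
async function step(state: State): Promise<State> {
  switch (state.kind) {
    case "start":
      // A real agent would append a prompt to a Thread and call the LLM here.
      return { kind: "asked", question: state.question, draft: `Draft for: ${state.question}` };
    case "asked":
      return { kind: "done", answer: state.draft };
    case "done":
      return state;
  }
}

async function run(input: Input): Promise<Output> {
  let state: State = { kind: "start", question: input.question };
  while (state.kind !== "done") {
    state = await step(state); // state could be serialized here for auditing
  }
  return { answer: state.answer };
}
```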

Agent Orchestration

Sagentic agents are designed to be orchestrated by a runtime engine, referred to as the Sagentic Runtime. This runtime is responsible for managing the execution of agents, ensuring that they are running at the right time and in the right order. It also handles the communication between agents, allowing them to exchange data and coordinate their actions.

Our approach to agent orchestration is inspired by serverless computing. Each agent is a self-contained unit that can be executed independently, and the runtime is responsible for managing the execution of these units. This opens up a practical way to operate swarms of independent agents in a uniform and scalable manner.

Thanks to well-defined interfaces and strict typing, swarms can include agents coming from different developers and organizations. The Sagentic Runtime ensures that each agent in the swarm has the minimal set of permissions required to perform its task, and that the swarm as a whole operates within the context of permissions and data given by the user who invokes it.
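As a hedged illustration of this serverless-style model (the endpoint, payload shape, and helper below are hypothetical and not the documented Sagentic Runtime API), invoking an agent could look like asking the runtime to spawn it and awaiting its typed result:

```ts
// Hypothetical client helper: the /spawn endpoint and payload shape are
// invented for this sketch, not the actual Sagentic Runtime API.
async function spawnAgent<I, O>(runtimeUrl: string, agentType: string, input: I): Promise<O> {
  const response = await fetch(`${runtimeUrl}/spawn`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ type: agentType, options: input }),
  });
  if (!response.ok) {
    throw new Error(`Runtime rejected spawn: ${response.status}`);
  }
  return (await response.json()) as O;
}

// Usage: run a (hypothetical) summarizer agent and wait for its typed result.
// const { summary } = await spawnAgent<{ path: string }, { summary: string }>(
//   "http://localhost:3000",
//   "example/SummarizeFileAgent",
//   { path: "./README.md" },
// );
```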

Custom Tools

In Sagentic, developers can define Tools using straightforward TypeScript. These Tools are typed functions that can execute any code required by the agent. Each agent can be equipped with these Tools, and their schemas are accessible to the LLM. This setup allows the LLM to utilize the Tools autonomously at each step of a Thread, expanding its local decision-making capabilities.

We see Tools as appendages of the agent, allowing it to interact with an environment consisting of multiple services and APIs at different levels: from the local file system to SaaS APIs. Thus, instead of building a set of reusable tools for all agents, we believe that each agent should be equipped with the Tools it needs to perform its task. By giving developers the ability to turn any TypeScript function into a Tool, we enable them to interface agents with their existing codebases and services.
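A hedged sketch of what such a Tool could look like, using zod for the argument schema. The Tool object shape here is illustrative and not the actual Sagentic Tool API, and the weather URL is a placeholder:

```ts
import { z } from "zod";

// Argument schema the LLM can see when deciding whether to call the Tool.
const getWeatherArgs = z.object({
  city: z.string().describe("City to look up"),
});

// Hypothetical Tool definition: a typed function the agent runs when the LLM asks for it.
const getWeather = {
  name: "get_weather",
  description: "Return the current temperature for a city",
  args: getWeatherArgs,
  // Any TypeScript is allowed here: call an API, read a file, query a database...
  invoke: async (args: z.infer<typeof getWeatherArgs>): Promise<{ celsius: number }> => {
    const response = await fetch(
      `https://example.invalid/weather?city=${encodeURIComponent(args.city)}`,
    );
    return (await response.json()) as { celsius: number };
  },
};
```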

Crucially, even though the LLM can invoke these Tools, control is always returned to the agent, ensuring that the agent's logic directs the overall process. This maintains the agent's autonomy while preserving the orderly execution of its tasks.

Additionally, because agents and Tools are designed with similar interfaces, agents can also be used as Tools by other agents. This interoperability allows for complex task execution and enhances the collaborative potential within a swarm of agents.

TIP

Learn more about making Tools.

Strong Typing

Agents in Sagentic transform LLM interactions and Tool calls into structured exchanges, similar to filling out forms, rather than engaging in open-ended conversations. This structured approach is essential for precise and reliable outcomes. While LLMs are not inherently designed to provide structured information, the strong type system in Sagentic ensures that the agents' operations are predictable and consistent, harnessing the LLM's capabilities within a clear framework.

This allows agents and Tools to be developed and tested independently. It ensures that the LLM can be notified about invalid outputs, allowing it to self-correct. Finally, it gives developers the confidence that their agents will perform according to the contract defined in their code.
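As an illustration of that contract (a sketch only: the `Answer` schema, the retry loop, and the `ask` callback are invented here and stand in for a Sagentic Thread), an agent can validate the LLM's reply against a type and feed the validation error back so the model can self-correct:

```ts
import { z } from "zod";

// Expected shape of the LLM's reply; this schema is the "form" the model must fill out.
const Answer = z.object({ title: z.string(), tags: z.array(z.string()) });
type Answer = z.infer<typeof Answer>;

// `ask` stands in for a Thread interaction with an LLM; it is not a Sagentic API.
async function askForAnswer(
  ask: (prompt: string) => Promise<string>,
  prompt: string,
  retries = 2,
): Promise<Answer> {
  let lastError = "";
  for (let attempt = 0; attempt <= retries; attempt++) {
    const fullPrompt = lastError
      ? `${prompt}\n\nYour previous reply was invalid: ${lastError}. Reply with valid JSON.`
      : prompt;
    const raw = await ask(fullPrompt);
    try {
      return Answer.parse(JSON.parse(raw)); // enforce the contract defined in code
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err);
    }
  }
  throw new Error(`LLM failed to produce a valid Answer: ${lastError}`);
}
```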