Your First Agent
Welcome to your first foray into creating an autonomous agent with Sagentic! Let's walk through the code snippet, which is a simple example of an agent that interacts with a Large Language Model (LLM) once and then stops. This example is included in the new project scaffolding to get you started.
We will be discussing HelloAgent, which is located in the agents directory of the starter project, in a file named hello.ts. This agent serves as a template and a practical starting point for understanding how to build your own agents with Sagentic. Open this file to follow along as we examine how the agent processes its input and output.
Understanding the Code
```ts
import { OneShotAgent, AgentOptions, ModelType } from "sagentic";

// Define input type for the agent
interface HelloAgentOptions extends AgentOptions {
  person: string;
}

// Export the agent class
export default class HelloAgent extends OneShotAgent<
  HelloAgentOptions, // Input type for the agent
  string // Output type for the agent
> {
  // Set the model used by the agent
  model: ModelType = ModelType.GPT35;

  // Set the system prompt
  systemPrompt: string =
    "Your task is to explain why a specific person is based. " +
    "Speculate, limit your response to a sentence.";

  // Prepare the input for the LLM call
  async input(): Promise<string> {
    return `Why is ${this.options.person} based?`;
  }

  // Process the output from the LLM call
  async output(answer: string): Promise<string> {
    return answer;
  }
}
```
The OneShotAgent Generic
In this example, we're using OneShotAgent, which is a subclass of BaseAgent. OneShotAgent makes a single call to the LLM and returns the result. It's the simplest agent type in Sagentic, perfect for getting started.
Input and Output Types
Notice the use of generics in the class definition: OneShotAgent<HelloAgentOptions, string>. This is where we define the input and output types for our agent. The HelloAgentOptions interface extends AgentOptions and includes a person property, which is a string. The output type is simply a string in this case.
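To illustrate the pattern, here is a hedged sketch of what an options shape for a different agent could look like, together with the kind of prompt-building logic an input() method would perform. The ComparisonOptions interface and buildPrompt function below are hypothetical examples, not part of Sagentic's API:

```typescript
// Hypothetical options shape for an agent that compares two people.
// In a real agent this would extend AgentOptions from "sagentic".
interface ComparisonOptions {
  personA: string;
  personB: string;
}

// Hypothetical prompt builder, mirroring what input() does in HelloAgent:
// turn the typed options into a single prompt string for the LLM.
function buildPrompt(options: ComparisonOptions): string {
  return `Who is more based: ${options.personA} or ${options.personB}?`;
}
```

The key idea is that the first generic parameter gives you a typed this.options inside the agent, so prompt construction is checked by the compiler.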
Properties
The systemPrompt property sets the context for the LLM. It's a fixed instruction that tells the LLM what the task is. In this case, it's asking the LLM to explain why the specified person is "based", and to keep the response to a single sentence.
The model property specifies which Large Language Model (LLM) the agent should use for processing requests, with ModelType.GPT35 indicating that this agent is configured to interact with the GPT-3.5 model.
Preprocessing and Postprocessing Data
The input() method is where we prepare the data that will be sent to the LLM. In this example, we're constructing a prompt asking why a specific person is "based", using the person property from our agent's options.
The output() method is where we handle the data returned from the LLM. Here, we're simply returning the answer provided by the LLM, but this method can be used to process or format the response as needed.
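As a sketch of the kind of post-processing you might do in output(), here is a hypothetical helper that tidies up an LLM answer. The formatAnswer function is an illustration only, not part of Sagentic:

```typescript
// Hypothetical post-processing helper: trim stray whitespace and make
// sure the answer ends with a period before returning it to the caller.
function formatAnswer(answer: string): string {
  const trimmed = answer.trim();
  return trimmed.endsWith(".") ? trimmed : trimmed + ".";
}
```

Inside an agent, output() could simply return formatAnswer(answer) instead of the raw string; because output() is async, it is also a natural place for follow-up work such as parsing or validation.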
Exporting the Agent
Finally, notice the export default statement. This is how we export the agent class so that the Sagentic runtime can recognize and use it. For the Sagentic runtime to pick up and manage the agent, it must also be re-exported in the index.ts file located in the root directory of the project. This step ensures that all agents intended for use are properly registered and available for the runtime to execute.
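A minimal sketch of what that re-export could look like, assuming the agent lives in agents/hello.ts as described above (the exact path and export style may differ in your scaffolding):

```typescript
// index.ts (project root) — hypothetical re-export of the agent so the
// Sagentic runtime can discover it.
export { default as HelloAgent } from "./agents/hello";
```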
Testing Your HelloAgent
To test the HelloAgent you've created, follow these steps:
1. Start the Dev Server: Ensure your local development server is running by executing npm run dev or yarn dev in your project's root directory.

2. Spawn the Agent with curl: Use curl to send a POST request to the /spawn endpoint with the necessary data. Here's the curl command:

```bash
curl -X POST http://localhost:3000/spawn \
  -H "Content-Type: application/json" \
  -d '{ "type": "<your-project>/HelloAgent", "options": { "person": "Joe" } }'
```

TIP

Replace <your-project> with the name of your project. You can find the name of your project in the package.json file. Agents are namespaced because they are unique to each project and user. In the future this will allow you to seamlessly call other agents from different projects without any conflicts.

3. Review the Response: If the agent is working correctly, you should receive a response similar to the following:

```json
{
  "success": true,
  "result": "Joe is based because they have a strong sense of self-awareness and are constantly seeking personal growth and improvement.",
  "session": {
    "cost": 0.000083,
    "tokens": { "gpt-3.5-turbo-16k": 0.000083 },
    "elapsed": 1.003
  }
}
```
The session field in the response provides useful metrics about the agent's execution, including the cost associated with the LLM call, the number of tokens used, and the time taken to process the request in seconds. This information can help you understand the efficiency of your agent and the resources it consumes during operation.
Conclusion
This example agent is a great starting point for understanding how agents are structured in Sagentic. It showcases the simplicity of creating agents that can perform tasks using LLMs. In future articles, we'll delve deeper into the BaseAgent class and explore more complex agent behaviors.
For now, try running this agent on your local Sagentic development server, change the code, and see what kind of responses you get!