Memory Feature
The `gollm` package now includes a memory feature that allows Language Models (LLMs) to maintain context across multiple interactions. This is particularly useful for building chatbots, maintaining conversation history, and any other application where context retention is important.
How It Works
The memory feature works by:

1. Storing previous interactions (both user inputs and LLM responses).
2. Including this stored context when generating new responses.
3. Automatically managing the token count to stay within specified limits.
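Conceptually, you can picture the memory as a token-bounded message buffer. The sketch below only illustrates that idea and is not gollm's actual implementation: the `message` and `buffer` types and the `estimateTokens` helper are hypothetical, and the real trimming strategy may differ.

```go
// Illustrative sketch of a token-bounded conversation buffer.
// NOTE: hypothetical types and helpers, not gollm's internals.
package memory

type message struct {
	Role    string // "user" or "assistant"
	Content string
}

type buffer struct {
	messages   []message
	tokenLimit int
}

// estimateTokens is a hypothetical stand-in for a real tokenizer;
// it crudely approximates one token per four characters.
func estimateTokens(m message) int {
	return len(m.Content)/4 + 1
}

// Add appends a message, then evicts the oldest messages until the
// buffer fits within the token limit again (always keeping the
// newest message).
func (b *buffer) Add(m message) {
	b.messages = append(b.messages, m)
	total := 0
	for _, msg := range b.messages {
		total += estimateTokens(msg)
	}
	for total > b.tokenLimit && len(b.messages) > 1 {
		total -= estimateTokens(b.messages[0])
		b.messages = b.messages[1:]
	}
}
```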
Usage
To use the memory feature:

1. When creating a new LLM instance, use the `SetMemory` option:
```go
llm, err := gollm.NewLLM(
	gollm.SetProvider("openai"),
	gollm.SetModel("gpt-4o-mini"),
	gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
	gollm.SetMemory(4000), // Enable memory with a 4000-token limit
)
```
2. Use the LLM as usual. The memory will automatically be maintained:
```go
response, err := llm.Generate(ctx, gollm.NewPrompt("Hello, who are you?"))
fmt.Println(response)

// The next generation will include the context of the previous interaction.
response, err = llm.Generate(ctx, gollm.NewPrompt("What did I just ask you?"))
fmt.Println(response)
```
3. If needed, you can clear the memory:
```go
if memoryLLM, ok := llm.(interface{ ClearMemory() }); ok {
	memoryLLM.ClearMemory()
}
```
Example: Simple Chatbot
Here's a basic example of how to use the memory feature to create a simple chatbot:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/teilomillet/gollm"
)

func main() {
	llm, err := gollm.NewLLM(
		gollm.SetProvider("openai"),
		gollm.SetModel("gpt-3.5-turbo"),
		gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
		gollm.SetMemory(4000),
	)
	if err != nil {
		log.Fatalf("Failed to create LLM: %v", err)
	}

	ctx := context.Background()

	// First interaction
	response, err := llm.Generate(ctx, gollm.NewPrompt("Hello, who are you?"))
	if err != nil {
		log.Fatalf("Error: %v", err)
	}
	fmt.Printf("Chatbot: %s\n", response)

	// Second interaction (with context)
	response, err = llm.Generate(ctx, gollm.NewPrompt("What was my first question?"))
	if err != nil {
		log.Fatalf("Error: %v", err)
	}
	fmt.Printf("Chatbot: %s\n", response)

	// Clear memory
	if memoryLLM, ok := llm.(interface{ ClearMemory() }); ok {
		memoryLLM.ClearMemory()
	}

	// Interaction after clearing memory
	response, err = llm.Generate(ctx, gollm.NewPrompt("What was my first question?"))
	if err != nil {
		log.Fatalf("Error: %v", err)
	}
	fmt.Printf("Chatbot: %s\n", response)
}
```
This example demonstrates:

- Creating an LLM with memory
- Maintaining context across multiple interactions
- Clearing the memory and starting a new conversation
Considerations
- Token Limit: The `SetMemory` option sets a maximum token limit. Once this limit is reached, older messages will be removed to make room for new ones (see the sketch after this list).
- Performance: Using memory increases token usage, which may affect API costs and response times.
- Privacy: Be mindful of storing sensitive information in memory, especially in long-running applications.
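To make the eviction behavior concrete, here is a hedged sketch (reusing the setup and imports from the chatbot example above) that deliberately sets a small limit so older turns get trimmed. The 500-token limit and the prompts are illustrative, and the exact cutoff depends on the provider and its tokenizer.

```go
llm, err := gollm.NewLLM(
	gollm.SetProvider("openai"),
	gollm.SetModel("gpt-4o-mini"),
	gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
	gollm.SetMemory(500), // deliberately small limit to force eviction
)
if err != nil {
	log.Fatalf("Failed to create LLM: %v", err)
}

ctx := context.Background()

// Fill the memory with several interactions so the buffer
// overflows its 500-token budget.
for i := 0; i < 5; i++ {
	if _, err := llm.Generate(ctx, gollm.NewPrompt("Tell me a short story about a lighthouse.")); err != nil {
		log.Fatalf("Error: %v", err)
	}
}

// The earliest turns have likely been evicted by now, so the model
// may no longer be able to answer this.
response, err := llm.Generate(ctx, gollm.NewPrompt("What was the very first thing I asked you?"))
if err != nil {
	log.Fatalf("Error: %v", err)
}
fmt.Println(response)
```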
Advanced Usage
For more advanced usage, including a full-fledged chatbot implementation, refer to the `examples/memory_chatbot.go` file in the repository.