Memory Feature
The gollm package now includes a memory feature that allows Language Models (LLMs) to maintain context across multiple interactions. This feature is particularly useful for building chatbots, maintaining conversation history, or any application where context retention is important.
How It Works
The memory feature works by:
Storing previous interactions (both user inputs and LLM responses).
Including this stored context when generating new responses.
Automatically managing the token count to stay within specified limits.
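Conceptually, the mechanism resembles a token-bounded message buffer. The sketch below is illustrative only (it is not gollm's internal implementation) and uses a crude characters-per-token heuristic:

```go
// Illustrative sketch only; gollm's actual memory implementation may differ.
// A token-bounded buffer that evicts the oldest messages when over budget.
type memoryBuffer struct {
	messages  []string
	maxTokens int
}

// add appends a message, then evicts the oldest entries until the
// estimated token count fits within maxTokens.
func (m *memoryBuffer) add(msg string) {
	m.messages = append(m.messages, msg)
	for m.estimateTokens() > m.maxTokens && len(m.messages) > 1 {
		m.messages = m.messages[1:]
	}
}

// estimateTokens uses a rough ~4-characters-per-token heuristic.
func (m *memoryBuffer) estimateTokens() int {
	n := 0
	for _, msg := range m.messages {
		n += len(msg) / 4
	}
	return n
}
```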
Usage
To use the memory feature:
When creating a new LLM instance, use the SetMemory option:
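A minimal sketch (the provider, model, and token budget here are illustrative; adjust them for your setup):

```go
package main

import (
	"log"
	"os"

	"github.com/teilomillet/gollm"
)

func main() {
	// SetMemory enables conversation memory; its argument is the
	// maximum number of tokens to retain across interactions.
	llm, err := gollm.NewLLM(
		gollm.SetProvider("openai"),   // illustrative provider
		gollm.SetModel("gpt-4o-mini"), // illustrative model
		gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
		gollm.SetMemory(4096), // keep up to ~4096 tokens of history
	)
	if err != nil {
		log.Fatalf("failed to create LLM: %v", err)
	}
	_ = llm // used in the following steps
}
```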
Use the LLM as usual; the memory is maintained automatically:
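For example, continuing from the instance above (this fragment assumes context, fmt, and log are imported; the prompts are illustrative):

```go
ctx := context.Background()

// First turn: the exchange is stored in memory.
resp, err := llm.Generate(ctx, gollm.NewPrompt("Hi! My name is Ada."))
if err != nil {
	log.Fatal(err)
}
fmt.Println(resp)

// Second turn: the stored context lets the model recall "Ada".
resp, err = llm.Generate(ctx, gollm.NewPrompt("What's my name?"))
if err != nil {
	log.Fatal(err)
}
fmt.Println(resp)
```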
If needed, you can clear the memory:
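This sketch assumes the ClearMemory method used in the repository's memory example; the type assertion is a defensive way to call it without depending on the concrete type:

```go
// Drop the stored conversation so the next Generate call starts fresh.
// ClearMemory is assumed here, following the repository's memory example.
if memLLM, ok := llm.(interface{ ClearMemory() }); ok {
	memLLM.ClearMemory()
}
```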
Example: Simple Chatbot
Here's a basic example of how to use the memory feature to create a simple chatbot:
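Below is a minimal sketch of such a chatbot. The provider, model, and token budget are illustrative, and the ClearMemory call follows the same assumption noted above; see examples/memory_chatbot.go for the complete version.

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"log"
	"os"
	"strings"

	"github.com/teilomillet/gollm"
)

func main() {
	// Create an LLM with memory enabled (values here are illustrative).
	llm, err := gollm.NewLLM(
		gollm.SetProvider("openai"),
		gollm.SetModel("gpt-4o-mini"),
		gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
		gollm.SetMemory(4096),
	)
	if err != nil {
		log.Fatalf("failed to create LLM: %v", err)
	}

	ctx := context.Background()
	scanner := bufio.NewScanner(os.Stdin)
	fmt.Println("Chatbot ready. Type 'clear' to reset the conversation, 'exit' to quit.")

	for {
		fmt.Print("You: ")
		if !scanner.Scan() {
			break
		}
		input := strings.TrimSpace(scanner.Text())

		switch strings.ToLower(input) {
		case "exit":
			return
		case "clear":
			// Drop the stored context and start a new conversation.
			if memLLM, ok := llm.(interface{ ClearMemory() }); ok {
				memLLM.ClearMemory()
			}
			fmt.Println("Memory cleared. Starting a new conversation.")
			continue
		}

		// Memory is included automatically, so follow-up questions
		// can refer back to earlier turns.
		reply, err := llm.Generate(ctx, gollm.NewPrompt(input))
		if err != nil {
			log.Printf("generation error: %v", err)
			continue
		}
		fmt.Println("Bot:", reply)
	}
}
```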
This example demonstrates:
Creating an LLM with memory
Maintaining context across multiple interactions
Clearing the memory and starting a new conversation
Considerations
Token Limit: The SetMemory option sets a maximum token limit. Once this limit is reached, older messages are removed to make room for new ones.
Performance: Using memory increases token usage, which may affect API costs and response times.
Privacy: Be mindful of storing sensitive information in memory, especially in long-running applications.
Advanced Usage
For more advanced usage, including a full-fledged chatbot implementation, refer to the examples/memory_chatbot.go file in the repository.