4. Custom config
Setting up the environment: The example starts by checking for the OPENAI_API_KEY environment variable:
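The code snippets below are illustrative fragments of a single main function rather than the exact example source; they assume the standard library packages os, log, fmt, context, and time are imported alongside the gollm package. A minimal sketch of the environment check:

```go
// Read the API key up front and abort if it is missing.
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
	log.Fatal("OPENAI_API_KEY environment variable is not set")
}
```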
Creating a custom configuration: The example defines a custom configuration using various ConfigOption functions:
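A sketch of such a configuration, collected into a reusable slice of ConfigOption values. The option names and the concrete model, token, and timeout values shown here are assumptions and may differ between gollm versions; check them against the version you are using.

```go
// Collect the options in a slice so the configuration can be reused or
// inspected before the LLM instance is created.
customConfig := []gollm.ConfigOption{
	gollm.SetProvider("openai"),
	gollm.SetModel("gpt-4o-mini"),         // model name is a placeholder
	gollm.SetTemperature(0.7),
	gollm.SetMaxTokens(500),
	gollm.SetTimeout(30 * time.Second),
	gollm.SetMaxRetries(3),
	gollm.SetRetryDelay(2 * time.Second),
	gollm.SetLogLevel(gollm.LogLevelInfo), // debug/log level option name may vary by version
	gollm.SetAPIKey(apiKey),
}
```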
This configuration sets various parameters like the provider, model, temperature, max tokens, timeout, retry settings, debug level, and API key.
Creating an LLM instance with custom configuration: The example creates a new LLM instance using the custom configuration:
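Assuming the option slice from the previous snippet, the instance is created by passing the options to gollm.NewLLM:

```go
// Expand the custom configuration into NewLLM's variadic options.
llm, err := gollm.NewLLM(customConfig...)
if err != nil {
	log.Fatalf("failed to create LLM: %v", err)
}
```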
Defining a custom prompt template: The example creates a custom prompt template for topic analysis:
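A sketch of such a template, assuming gollm.NewPromptTemplate together with the WithPromptOptions, WithDirectives, and WithOutput options; the template name, directives, and output instruction are placeholders, not the example's exact wording.

```go
// A reusable template: {{.Topic}} is filled in at execution time.
analysisTemplate := gollm.NewPromptTemplate(
	"TopicAnalysis",                        // template name (placeholder)
	"Analyze a topic from multiple angles", // template description (placeholder)
	"Analyze the following topic: {{.Topic}}",
	gollm.WithPromptOptions(
		gollm.WithDirectives(
			"Consider technological, economic, and social implications",
			"Provide at least one potential positive and one potential negative outcome",
			"Conclude with a balanced summary",
		),
		gollm.WithOutput("Return the analysis as JSON with keys: topic, implications, outcomes, summary"),
	),
)
```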
This template includes directives for the analysis and specifies an output format.
Analyzing multiple topics: The example defines a list of topics and analyzes each one using the custom prompt template:
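A sketch of that loop; the topic list is a placeholder, and Execute is assumed to fill the {{.Topic}} field and return a prompt that llm.Generate accepts:

```go
topics := []string{"artificial intelligence", "renewable energy", "space exploration"}
ctx := context.Background()

for _, topic := range topics {
	// Fill the template with the current topic.
	prompt, err := analysisTemplate.Execute(map[string]interface{}{
		"Topic": topic,
	})
	if err != nil {
		log.Printf("failed to execute template for %q: %v", topic, err)
		continue
	}

	// Generate the analysis, validating the response against the expected JSON schema.
	analysis, err := llm.Generate(ctx, prompt, gollm.WithJSONSchemaValidation())
	if err != nil {
		log.Printf("failed to generate analysis for %q: %v", topic, err)
		continue
	}

	fmt.Printf("Analysis of %s:\n%s\n\n", topic, analysis)
}
```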
For each topic, it executes the prompt template, generates an analysis using the LLM, and prints the result. Note the use of gollm.WithJSONSchemaValidation() to ensure the response matches the expected schema.
Demonstrating dynamic configuration changes: After analyzing the topics, the example shows how to dynamically change configuration options:
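Assuming the LLM instance exposes a SetOption method keyed by option name:

```go
// Adjust generation settings on the existing instance without recreating it.
llm.SetOption("temperature", 0.5)
llm.SetOption("max_tokens", 100)
```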
These lines change the temperature and max tokens settings of the LLM instance.
Getting current provider and model: Finally, the example demonstrates how to retrieve the current provider and model of the LLM instance:
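Assuming GetProvider and GetModel accessors on the LLM instance:

```go
fmt.Printf("Current provider: %s\n", llm.GetProvider())
fmt.Printf("Current model: %s\n", llm.GetModel())
```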
In summary, this example showcases:
How to create a custom configuration for an LLM instance
How to use a custom prompt template with specific directives and output format
How to analyze multiple topics using the same template
How to use JSON schema validation for responses
How to dynamically change configuration options after the LLM instance is created
How to retrieve current provider and model information
This example is particularly useful for developers who need fine-grained control over their LLM configurations and want to understand how to create and use custom prompt templates for consistent analyses across multiple topics.