4. Custom config

  1. Setting up the environment: The example starts by checking for the OPENAI_API_KEY environment variable:

    apiKey := os.Getenv("OPENAI_API_KEY")
    if apiKey == "" {
        log.Fatalf("OPENAI_API_KEY environment variable is not set")
    }
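
    For reference, the snippets in this walkthrough assume roughly the following imports (the gollm import path shown is the project's published module path; adjust it to match your go.mod if yours differs):

    import (
        "context"
        "fmt"
        "log"
        "os"
        "time"

        "github.com/teilomillet/gollm"
    )
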
  2. Creating a custom configuration: The example defines a custom configuration using various ConfigOption functions:

    customConfig := []gollm.ConfigOption{
        gollm.SetProvider("openai"),
        gollm.SetModel("gpt-4o-mini"),
        gollm.SetTemperature(0.7),
        gollm.SetMaxTokens(150),
        gollm.SetTimeout(30 * time.Second),
        gollm.SetMaxRetries(3),
        gollm.SetRetryDelay(2 * time.Second),
        gollm.SetDebugLevel(gollm.LogLevelInfo),
        gollm.SetAPIKey(apiKey),
    }

    This configuration sets the provider, model, sampling temperature, maximum output tokens, request timeout, retry count and delay, debug log level, and API key.
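
    Each ConfigOption is a function that mutates a shared configuration value, following Go's functional-options pattern. As a rough sketch of how such options compose (the types, fields, and defaults below are illustrative, not gollm's actual internals):

    // Illustrative functional-options pattern, in the style of gollm's
    // ConfigOption functions. Names and defaults here are hypothetical.
    type config struct {
        provider    string
        temperature float64
    }

    type option func(*config)

    func withProvider(p string) option {
        return func(c *config) { c.provider = p }
    }

    func withTemperature(t float64) option {
        return func(c *config) { c.temperature = t }
    }

    func newConfig(opts ...option) *config {
        c := &config{provider: "openai", temperature: 0.7} // defaults
        for _, opt := range opts {
            opt(c) // each option overrides one setting
        }
        return c
    }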

  3. Creating an LLM instance with custom configuration: The example creates a new LLM instance using the custom configuration:

    llm, err := gollm.NewLLM(customConfig...)
    if err != nil {
        log.Fatalf("Failed to create LLM client: %v", err)
    }
  4. Defining a custom prompt template: The example creates a custom prompt template for topic analysis:

    analysisPrompt := gollm.NewPromptTemplate(
        "CustomAnalysis",
        "Analyze a given topic",
        "Analyze the following topic: {{.Topic}}",
        gollm.WithPromptOptions(
            gollm.WithDirectives(
                "Consider technological, economic, and social implications",
                "Provide at least one potential positive and one potential negative outcome",
                "Conclude with a balanced summary",
            ),
            gollm.WithOutput("Analysis:"),
        ),
    )

    This template includes directives for the analysis and specifies an output format.
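
    To inspect what the template actually produces, you can execute it once and print the result before sending anything to the model. This sketch assumes the returned Prompt renders its directives and output section via a String() method; if it does not, simply pass the prompt straight to Generate as in the next step:

    // Sketch: render the template once to preview the final prompt text.
    // Assumes gollm's Prompt type exposes a String() method.
    preview, err := analysisPrompt.Execute(map[string]interface{}{
        "Topic": "The widespread adoption of artificial intelligence",
    })
    if err != nil {
        log.Fatalf("Failed to execute prompt template: %v", err)
    }
    fmt.Println(preview.String())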

  5. Analyzing multiple topics: The example defines a list of topics and analyzes each one using the custom prompt template:

    topics := []string{
        "The widespread adoption of artificial intelligence",
        "The implementation of a four-day work week",
        "The transition to renewable energy sources",
    }
    
    ctx := context.Background() // context passed to the Generate calls below
    for _, topic := range topics {
        prompt, err := analysisPrompt.Execute(map[string]interface{}{
            "Topic": topic,
        })
        if err != nil {
            log.Printf("Failed to execute prompt template for topic '%s': %v\n", topic, err)
            continue
        }
    
        analysis, err := llm.Generate(ctx, prompt, gollm.WithJSONSchemaValidation())
        if err != nil {
            log.Printf("Failed to generate analysis for topic '%s': %v\n", topic, err)
            continue
        }
    
        fmt.Printf("Topic: %s\nAnalysis:\n%s\n\n", topic, analysis)
    }

    For each topic, the loop executes the prompt template, generates an analysis with the LLM, and prints the result. Note the use of gollm.WithJSONSchemaValidation(), which asks Generate to validate the response against the prompt's JSON schema when one is defined.
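
    Since the topics are independent, the same loop can be parallelized with goroutines. A minimal sketch using only the standard library (add "sync" to the imports; this assumes llm.Generate is safe to call concurrently, so serialize the calls if your provider client is not):

    var wg sync.WaitGroup
    for _, topic := range topics {
        wg.Add(1)
        go func(topic string) {
            defer wg.Done()
            prompt, err := analysisPrompt.Execute(map[string]interface{}{"Topic": topic})
            if err != nil {
                log.Printf("Failed to execute prompt template for topic '%s': %v", topic, err)
                return
            }
            analysis, err := llm.Generate(ctx, prompt)
            if err != nil {
                log.Printf("Failed to generate analysis for topic '%s': %v", topic, err)
                return
            }
            fmt.Printf("Topic: %s\nAnalysis:\n%s\n\n", topic, analysis)
        }(topic)
    }
    wg.Wait()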

  6. Demonstrating dynamic configuration changes: After analyzing the topics, the example shows how to dynamically change configuration options:

    llm.SetOption("temperature", 0.9)
    llm.SetOption("max_tokens", 200)

    These lines change the temperature and max tokens settings of the LLM instance.
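
    Because these options apply to subsequent calls, you can re-run an analysis under the new settings, for example to get a longer and more varied response:

    // Re-run one analysis with the updated settings; the higher temperature
    // and token limit should yield a longer, more varied response.
    prompt, err := analysisPrompt.Execute(map[string]interface{}{
        "Topic": "The widespread adoption of artificial intelligence",
    })
    if err != nil {
        log.Fatalf("Failed to execute prompt template: %v", err)
    }
    analysis, err := llm.Generate(ctx, prompt)
    if err != nil {
        log.Fatalf("Failed to generate analysis: %v", err)
    }
    fmt.Printf("Re-analysis with updated settings:\n%s\n", analysis)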

  7. Getting current provider and model: Finally, the example demonstrates how to retrieve the current provider and model of the LLM instance:

    fmt.Printf("Current Provider: %s\n", llm.GetProvider())
    fmt.Printf("Current Model: %s\n", llm.GetModel())

In summary, this example showcases:

  • How to create a custom configuration for an LLM instance

  • How to use a custom prompt template with specific directives and output format

  • How to analyze multiple topics using the same template

  • How to use JSON schema validation for responses

  • How to dynamically change configuration options after the LLM instance is created

  • How to retrieve current provider and model information

This example is particularly useful for developers who need fine-grained control over their LLM configurations and want to understand how to create and use custom prompt templates for consistent analyses across multiple topics.
