3. Compare providers
Setting up the environment: The example starts by checking for the OPENAI_API_KEY environment variable, which is crucial for authentication with the OpenAI API.
```go
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
	log.Fatalf("OPENAI_API_KEY environment variable is not set")
}
```
Creating LLM clients: The example creates two LLM clients, one for GPT-4o-mini and another for GPT-4o, using a helper function `createLLM`:

```go
llmGPT3, err := createLLM("openai", "gpt-4o-mini", apiKey)
llmGPT4, err := createLLM("openai", "gpt-4o", apiKey)
```

The `createLLM` function sets up each LLM with specific configurations:

```go
func createLLM(provider, model, apiKey string) (gollm.LLM, error) {
	return gollm.NewLLM(
		gollm.SetProvider(provider),
		gollm.SetModel(model),
		gollm.SetAPIKey(apiKey),
		gollm.SetMaxTokens(300),
		gollm.SetMaxRetries(3),
		gollm.SetDebugLevel(gollm.LogLevelInfo),
	)
}
```
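The comparison calls below also pass a `ctx` value whose creation isn't shown in these snippets. A minimal sketch, assuming the example simply creates a background context in `main`, would be:

```go
// Assumed setup: the Generate calls below need a context.
// A background context is the simplest choice; the real example may add a timeout.
ctx := context.Background()
```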
Example 1: Basic Prompt with Comparison

This example demonstrates comparing responses to a simple prompt:

```go
basicPrompt := gollm.NewPrompt("Explain the concept of machine learning in simple terms.")
compareBasicPrompt(ctx, basicPrompt, llmGPT3, llmGPT4)
```
The `compareBasicPrompt` function generates responses from both models and prints them:

```go
func compareBasicPrompt(ctx context.Context, prompt *gollm.Prompt, llm1, llm2 gollm.LLM) {
	response1, err := llm1.Generate(ctx, prompt)
	// Error handling...
	response2, err := llm2.Generate(ctx, prompt)
	// Error handling...

	fmt.Printf("%s %s Response:\n%s\n\n", llm1.GetProvider(), llm1.GetModel(), response1)
	fmt.Printf("%s %s Response:\n%s\n", llm2.GetProvider(), llm2.GetModel(), response2)
}
```
Example 2: Prompt with Directives and Output

This example compares responses to a more complex prompt with directives and a specified output format:

```go
directivePrompt := gollm.NewPrompt("Explain the concept of blockchain technology",
	gollm.WithDirectives(
		"Use a simple analogy to illustrate",
		"Highlight key features",
		"Mention potential applications",
	),
	gollm.WithOutput("Explanation of blockchain:"),
)
compareDirectivePrompt(ctx, directivePrompt, llmGPT3, llmGPT4)
```
The `compareDirectivePrompt` function works similarly to `compareBasicPrompt`.

Example 3: Prompt Template and JSON Schema

This example demonstrates the use of a prompt template and JSON schema generation:
```go
templatePrompt := gollm.NewPromptTemplate(
	"CustomAnalysis",
	"Analyze a given topic",
	"Analyze the following topic from multiple perspectives: {{.Topic}}",
	gollm.WithPromptOptions(
		gollm.WithDirectives(
			"Consider economic, social, and environmental impacts",
			"Provide pros and cons",
			"Conclude with a balanced summary",
		),
		gollm.WithOutput("Analysis:"),
	),
)
```
It also generates a JSON schema for prompts:
```go
schemaBytes, err := llmGPT3.GetPromptJSONSchema()
if err != nil {
	log.Fatalf("Failed to generate JSON schema: %v", err)
}
fmt.Printf("JSON Schema for Prompts:\n%s\n", string(schemaBytes))
```
The template is then executed with a specific topic:
```go
prompt, err := templatePrompt.Execute(map[string]interface{}{
	"Topic": "The adoption of autonomous vehicles",
})
```
Finally, it compares the responses from both models using the `compareTemplatePrompt` function.

Comparison Functions: The example includes three comparison functions (`compareBasicPrompt`, `compareDirectivePrompt`, and `compareTemplatePrompt`) that follow the same pattern (a sketch appears after this list):

Generate a response from each LLM
Handle any errors
Print the responses, clearly labeling which model produced each response
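The section doesn't show `compareTemplatePrompt` itself, so here is a minimal sketch of that shared pattern, assuming it mirrors `compareBasicPrompt` and uses the same imports (`context`, `fmt`, `log`, `gollm`) as the snippets above; the actual example may format output and handle errors differently:

```go
// Hedged sketch of the shared comparison pattern; not the library's exact code.
func compareTemplatePrompt(ctx context.Context, prompt *gollm.Prompt, llm1, llm2 gollm.LLM) {
	for _, llm := range []gollm.LLM{llm1, llm2} {
		// Generate a response from each LLM.
		response, err := llm.Generate(ctx, prompt)
		if err != nil {
			// Handle any errors.
			log.Printf("%s %s error: %v", llm.GetProvider(), llm.GetModel(), err)
			continue
		}
		// Print the response, labeling which model produced it.
		fmt.Printf("%s %s Response:\n%s\n\n", llm.GetProvider(), llm.GetModel(), response)
	}
}
```

With the executed template prompt from Example 3, the call would then be roughly `compareTemplatePrompt(ctx, prompt, llmGPT3, llmGPT4)`.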
In summary, this example demonstrates how to:
Set up multiple LLM clients with different models
Create and use basic prompts, prompts with directives, and prompt templates
Generate JSON schemas for prompts
Compare responses from different models for the same prompts
Handle errors and debug information
This example is particularly useful for developers who want to compare the performance and output of different LLM models, helping them choose the most suitable model for a specific use case or understand how responses differ between models.