3. Compare providers
Setting up the environment: The example starts by checking for the `OPENAI_API_KEY` environment variable, which is required to authenticate with the OpenAI API.
Creating LLM clients: The example creates two LLM clients, one for GPT-4o-mini and another for GPT-4o, using a helper function `createLLM`. The `createLLM` function sets up each LLM with its specific configuration.
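A minimal sketch of this setup, assuming a hypothetical `LLM` interface and a stub `openAIClient` (both invented here; the real library's types, options, and constructor will differ):

```go
package main

import (
	"context"
	"fmt"
	"os"
)

// LLM is a hypothetical minimal client interface used in these sketches;
// the real library's client type will differ.
type LLM interface {
	// Generate sends a prompt and returns the model's response.
	Generate(ctx context.Context, prompt string) (string, error)
	// Model reports which model this client is configured for.
	Model() string
}

// openAIClient is a stub so the sketch compiles without a provider SDK;
// a real implementation would call the OpenAI API in Generate.
type openAIClient struct {
	apiKey string
	model  string
}

func (c *openAIClient) Model() string { return c.model }

func (c *openAIClient) Generate(ctx context.Context, prompt string) (string, error) {
	return "", fmt.Errorf("stub client: no SDK wired up for %s", c.model)
}

// createLLM mirrors the example's helper: it checks the environment for
// the API key and configures a client for the requested model.
func createLLM(model string) (LLM, error) {
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		return nil, fmt.Errorf("OPENAI_API_KEY environment variable is not set")
	}
	return &openAIClient{apiKey: apiKey, model: model}, nil
}
```

With this helper, the two clients are created as `createLLM("gpt-4o-mini")` and `createLLM("gpt-4o")`.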
Example 1: Basic Prompt Comparison
This example demonstrates comparing responses to a simple prompt. The `compareBasicPrompt` function generates a response from both models and prints them.
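A sketch of this comparison, reusing the hypothetical `LLM` interface from the setup sketch (add `log` to the imports); the prompt text is illustrative, not the example's actual prompt:

```go
// compareBasicPrompt sends the same simple prompt to every client and
// prints each response, labeled with the model that produced it.
func compareBasicPrompt(ctx context.Context, llms ...LLM) {
	prompt := "Explain the concept of recursion in one paragraph." // illustrative

	for _, llm := range llms {
		response, err := llm.Generate(ctx, prompt)
		if err != nil {
			log.Printf("%s: generation failed: %v", llm.Model(), err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n\n", llm.Model(), response)
	}
}
```

Called as, e.g., `compareBasicPrompt(ctx, gpt4oMini, gpt4o)`, it prints the two responses back to back for side-by-side reading.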
Example 2: Prompt with Directives and Output
This example compares responses to a more complex prompt with directives and a specified output format. The `compareDirectivePrompt` function works similarly to `compareBasicPrompt`.
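Under the same assumptions, the directive-style prompt might be assembled by hand as below (add `strings` to the imports); the real example presumably builds it through the library's own prompt options, and the directives and output format shown here are invented for illustration:

```go
// compareDirectivePrompt follows the same pattern as compareBasicPrompt,
// but the prompt bundles directives and a required output format.
func compareDirectivePrompt(ctx context.Context, llms ...LLM) {
	// Directives and output specification are illustrative.
	prompt := strings.Join([]string{
		"Explain how garbage collection works in Go.",
		"Directive: Use a concrete analogy.",
		"Directive: Keep the answer under 150 words.",
		"Output: A bulleted list of key points.",
	}, "\n")

	for _, llm := range llms {
		response, err := llm.Generate(ctx, prompt)
		if err != nil {
			log.Printf("%s: generation failed: %v", llm.Model(), err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n\n", llm.Model(), response)
	}
}
```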
Example 3: Prompt Template and JSON Schema
This example demonstrates the use of a prompt template and JSON schema generation: it builds a prompt template, generates a JSON schema for the prompt, and then executes the template with a specific topic.
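A rough sketch of this flow, folding template construction, schema output, and the comparison into one function for brevity. The real example performs these as separate steps and generates the schema from the prompt definition; here Go's `text/template` stands in for the library's templates and the schema is written by hand (add `bytes`, `encoding/json`, and `text/template` to the imports):

```go
// compareTemplatePrompt fills a prompt template with a topic, prints a
// JSON schema describing the expected output, and compares responses.
func compareTemplatePrompt(ctx context.Context, topic string, llms ...LLM) error {
	// text/template stands in for the library's prompt templates.
	tmpl := template.Must(template.New("analysis").Parse(
		"Analyze the following topic and answer strictly as JSON: {{.Topic}}"))

	var prompt bytes.Buffer
	if err := tmpl.Execute(&prompt, map[string]string{"Topic": topic}); err != nil {
		return err
	}

	// Hand-written, illustrative schema for the expected JSON output; the
	// real example generates this rather than spelling it out.
	schema := map[string]any{
		"type": "object",
		"properties": map[string]any{
			"summary":    map[string]any{"type": "string"},
			"key_points": map[string]any{"type": "array", "items": map[string]any{"type": "string"}},
		},
		"required": []string{"summary", "key_points"},
	}
	schemaJSON, err := json.MarshalIndent(schema, "", "  ")
	if err != nil {
		return err
	}
	fmt.Printf("JSON schema for the prompt:\n%s\n\n", schemaJSON)

	for _, llm := range llms {
		response, err := llm.Generate(ctx, prompt.String())
		if err != nil {
			log.Printf("%s: generation failed: %v", llm.Model(), err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n\n", llm.Model(), response)
	}
	return nil
}
```

It might then be invoked with a concrete topic, e.g. `compareTemplatePrompt(ctx, "the history of the Go programming language", gpt4oMini, gpt4o)` (topic invented for illustration).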
Finally, it compares the responses from both models using the `compareTemplatePrompt` function.

Comparison Functions: The example includes three comparison functions (`compareBasicPrompt`, `compareDirectivePrompt`, and `compareTemplatePrompt`) that follow the same pattern, shown in the `compareBasicPrompt` sketch above:
- Generate a response from each LLM
- Handle any errors
- Print the responses, clearly labeling which model produced each response
In summary, this example demonstrates how to:
- Set up multiple LLM clients with different models
- Create and use basic prompts, prompts with directives, and prompt templates
- Generate JSON schemas for prompts
- Compare responses from different models for the same prompts
- Handle errors and debug information
This example is particularly useful for developers who want to compare the performance and output of different LLM models, whether to choose the most suitable model for a specific use case or to understand how responses differ between models.