1. Basic usage
Setting up the environment: The example starts by checking for the OPENAI_API_KEY environment variable, which is required to authenticate with the OpenAI API.
Creating an LLM instance: The example demonstrates how to create a new LLM instance with custom configuration using the `gollm.NewLLM()` function and various configuration options. This configuration:
Sets the provider to OpenAI
Uses the GPT-3.5-turbo model
Sets the API key
Limits the response to 200 tokens
Configures retry behavior (3 retries with a 2-second delay)
Sets the debug level to Info
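Putting the options above together, the setup might look like the following sketch. The option names (`SetProvider`, `SetModel`, `SetAPIKey`, `SetMaxTokens`, `SetMaxRetries`, `SetRetryDelay`, `SetLogLevel`) follow gollm's functional-options style but are assumptions here; only `gollm.NewLLM()` is confirmed by the text above.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"github.com/teilomillet/gollm"
)

func main() {
	// The API key is read from the environment, as described above.
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		log.Fatal("OPENAI_API_KEY environment variable is not set")
	}

	// Sketch of the configuration described above; option names are
	// assumptions based on gollm's functional-options style.
	llm, err := gollm.NewLLM(
		gollm.SetProvider("openai"),         // use the OpenAI provider
		gollm.SetModel("gpt-3.5-turbo"),     // GPT-3.5-turbo model
		gollm.SetAPIKey(apiKey),             // authenticate with the key above
		gollm.SetMaxTokens(200),             // limit responses to 200 tokens
		gollm.SetMaxRetries(3),              // retry up to 3 times
		gollm.SetRetryDelay(2*time.Second),  // with a 2-second delay
		gollm.SetLogLevel(gollm.LogLevelInfo),
	)
	if err != nil {
		log.Fatalf("failed to create LLM: %v", err)
	}
	fmt.Printf("LLM ready: %T\n", llm)
}
```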
Basic Prompt: The example demonstrates creating a simple prompt and generating a response.
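A basic prompt-and-generate round trip might be sketched as follows, assuming gollm exposes a `NewPrompt` constructor and a `Generate(ctx, prompt)` method on the LLM (both names are assumptions; the prompt text is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/teilomillet/gollm"
)

func main() {
	// Minimal setup; see the full configuration walkthrough above.
	llm, err := gollm.NewLLM(
		gollm.SetProvider("openai"),
		gollm.SetModel("gpt-3.5-turbo"),
		gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	// A simple prompt with no directives, context, or output constraints.
	prompt := gollm.NewPrompt("Explain the concept of recursion in one paragraph.")
	response, err := llm.Generate(context.Background(), prompt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(response)
}
```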
Example 2: Advanced Prompt
This example showcases creating a more complex prompt with directives and output specifications.
This advanced prompt:
Asks for a comparison of programming paradigms
Provides specific directives for the AI to follow
Sets an output prefix
Limits the response length to 300 tokens
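The four properties listed above could be expressed with prompt options roughly like this. `WithDirectives`, `WithOutput`, and `WithMaxLength` are assumed option names in gollm's style, and the directive and prefix strings are illustrative placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/teilomillet/gollm"
)

func main() {
	llm, err := gollm.NewLLM(
		gollm.SetProvider("openai"),
		gollm.SetModel("gpt-3.5-turbo"),
		gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Advanced prompt: directives steer the answer, an output prefix
	// shapes the response, and the length is capped at 300 tokens.
	prompt := gollm.NewPrompt(
		"Compare functional and object-oriented programming paradigms.",
		gollm.WithDirectives(
			"Discuss at least three key differences", // illustrative directive
			"Give a short example for each paradigm", // illustrative directive
		),
		gollm.WithOutput("Comparison:"), // illustrative output prefix
		gollm.WithMaxLength(300),        // limit to 300 tokens
	)
	response, err := llm.Generate(context.Background(), prompt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(response)
}
```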
Example 3: Prompt with Context
This example demonstrates how to provide context to a prompt:
This prompt:
Asks for a summary
Provides context about IoT
Limits the response to 100 tokens
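Assuming a `WithContext` prompt option in the same style, the context-carrying prompt above might be sketched as follows (the IoT context string is an illustrative stand-in for whatever the real example supplies):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/teilomillet/gollm"
)

func main() {
	llm, err := gollm.NewLLM(
		gollm.SetProvider("openai"),
		gollm.SetModel("gpt-3.5-turbo"),
		gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	// The context is supplied alongside the question, so the model
	// summarizes the provided text rather than answering from scratch.
	prompt := gollm.NewPrompt(
		"Summarize the main points of the following text.",
		gollm.WithContext( // illustrative placeholder context about IoT
			"The Internet of Things (IoT) describes networks of physical "+
				"devices that exchange data over the internet."),
		gollm.WithMaxLength(100), // limit the summary to 100 tokens
	)
	response, err := llm.Generate(context.Background(), prompt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(response)
}
```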
Example 4: JSON Schema Generation and Validation
This part of the example shows how to generate a JSON schema for a prompt and validate prompts:
It also demonstrates validation by creating an invalid prompt:
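Schema generation and validation might look roughly like this. Both method names, `GetPromptJSONSchema` on the LLM and `Validate` on the prompt, are assumptions, as is the rule that an empty prompt input fails validation:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/teilomillet/gollm"
)

func main() {
	llm, err := gollm.NewLLM(
		gollm.SetProvider("openai"),
		gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Generate the JSON schema describing the prompt structure
	// (method name assumed).
	schema, err := llm.GetPromptJSONSchema()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Prompt JSON schema:\n%s\n", schema)

	// An invalid prompt: an empty input is assumed to violate the
	// prompt's validation rules, so Validate should return an error.
	invalid := gollm.NewPrompt("")
	if err := invalid.Validate(); err != nil {
		fmt.Println("validation failed as expected:", err)
	}
}
```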
Example 5: Using Chain of Thought
The final example demonstrates the use of the Chain of Thought feature:
This feature guides the model to produce step-by-step explanations for complex topics.
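Assuming gollm exposes a top-level `ChainOfThought` helper that wraps the LLM and a question (the function name, signature, and question text are all assumptions), its use might be sketched as:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/teilomillet/gollm"
)

func main() {
	llm, err := gollm.NewLLM(
		gollm.SetProvider("openai"),
		gollm.SetModel("gpt-3.5-turbo"),
		gollm.SetAPIKey(os.Getenv("OPENAI_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	// ChainOfThought (name assumed) prompts the model to reason
	// step by step before giving its final answer.
	response, err := gollm.ChainOfThought(context.Background(), llm,
		"How does a compiler turn source code into an executable?")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(response)
}
```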
In summary, this example file provides a comprehensive overview of gollm's basic usage, showcasing various prompt types, configuration options, and advanced features like JSON schema validation and Chain of Thought reasoning.