Ollama Example

Using Ollama with gollm

This guide describes how to use Ollama with the gollm library.

Usage Example

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/teilomillet/gollm"
)

func main() {
    // Create a new LLM instance with Ollama provider
    llm, err := gollm.NewLLM(
        gollm.SetProvider("ollama"),
        gollm.SetModel("llama3.1"),
        gollm.SetDebugLevel(gollm.LogLevelWarn),
    )
    if err != nil {
        log.Fatalf("Failed to create LLM: %v", err)
    }

    // Create a prompt using NewPrompt function
    prompt := gollm.NewPrompt("Who was the first person to walk on the moon?")

    // Generate a response
    ctx := context.Background()
    response, err := llm.Generate(ctx, prompt)
    if err != nil {
        log.Fatalf("Failed to generate response: %v", err)
    }

    fmt.Printf("Response: %s\n", response)
}
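
To run the example, save it to a file (main.go is used here as a conventional name) and, with the Ollama server running, execute:

    go run main.go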

Configuration Options

When creating a new LLM instance with Ollama, you can use the following configuration options; a combined sketch follows the list:

  • gollm.SetProvider("ollama"): Specifies Ollama as the provider.

  • gollm.SetModel(modelName): Sets the Ollama model to use (e.g., "llama3.1").

  • gollm.SetDebugLevel(level): Sets the debug level for logging.

  • gollm.SetOllamaEndpoint(endpoint): Sets a custom Ollama API endpoint (optional).
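
Combining these options, a configuration with a custom endpoint might look like the sketch below (the URL shown is Ollama's default; substitute your server's address if it runs elsewhere). It replaces the NewLLM call in the usage example above:

// Create the LLM instance, pointing gollm at an explicit Ollama endpoint.
llm, err := gollm.NewLLM(
    gollm.SetProvider("ollama"),
    gollm.SetModel("llama3.1"),
    gollm.SetOllamaEndpoint("http://localhost:11434"),
    gollm.SetDebugLevel(gollm.LogLevelWarn),
)
if err != nil {
    log.Fatalf("Failed to create LLM: %v", err)
}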

Important Notes

  1. Server Requirement: Ensure that the Ollama server is running before executing your Go program. Start it with:

    ollama serve

  2. Model Availability: Make sure you've pulled the model with Ollama before referencing it in your program:

    ollama pull llama3.1

  3. Custom Endpoint: Use gollm.SetOllamaEndpoint() to specify a custom Ollama API endpoint if you're not using the default http://localhost:11434.

  4. Error Handling: Always check the errors returned when creating the LLM instance and when generating responses (see the sketch below).
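
A minimal sketch of point 4, using the standard context package to also bound the call with a timeout (the two-minute duration is an arbitrary example; the llm and prompt variables come from the usage example above, and "time" must be added to its imports):

// Bound the request: if the Ollama server hangs or the generation runs long,
// Generate returns once the context deadline expires.
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
defer cancel()

response, err := llm.Generate(ctx, prompt)
if err != nil {
    // Covers both generation failures and an exceeded context deadline.
    log.Fatalf("Failed to generate response: %v", err)
}
fmt.Printf("Response: %s\n", response)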

By following this guide, you can easily use Ollama with gollm in your Go programs. The configuration options allow you to customize the Ollama setup according to your needs.
