# llm
A provider-agnostic LLM chat API client for CHICKEN Scheme with tool calling support.
## Quick Start

```scheme
(import llm)

;; Create a conversation
(define conv (llm/chat system: "You are a helpful assistant."))

;; Send a message
(let-values ([(conv ok?) (llm/send conv "What is 2 + 2?")])
  (when ok?
    (print (llm/get-last-response conv))))
```
## Usage

### Basic Conversation

```scheme
(import llm)

;; Create a conversation with optional parameters
(define conv (llm/chat
              system: "You are a helpful assistant."
              model: "gpt-4o"
              temperature: 0.7
              max-tokens: 1000))

;; Send messages (returns multiple values)
(let-values ([(conv ok?) (llm/send conv "Hello!")])
  (if ok?
      (print (llm/get-last-response conv))
      (print "Request failed")))
```
### File Attachments

Attach images, PDFs, or text files to messages:

```scheme
;; Image analysis
(let-values ([(conv ok?) (llm/send conv "What's in this image?" file: "photo.jpg")])
  (when ok?
    (print (llm/get-last-response conv))))

;; PDF analysis
(let-values ([(conv ok?) (llm/send conv "Summarize this document" file: "report.pdf")])
  (when ok?
    (print (llm/get-last-response conv))))

;; Text file (inlined into the message)
(let-values ([(conv ok?) (llm/send conv "Review this code" file: "main.scm")])
  (when ok?
    (print (llm/get-last-response conv))))
```
### Tool Calling

Register tools that the LLM can invoke:

```scheme
(import llm)

;; Register a tool
(llm/register-tool!
 'get-weather                          ;; kebab-case name
 '((type . "function")
   (function . ((name . "get_weather") ;; snake_case in schema
                (description . "Get current weather for a location")
                (parameters . ((type . "object")
                               (properties . ((location . ((type . "string")
                                                           (description . "City name")))))
                               (required . #("location")))))))
 (lambda (params)
   (let ((location (alist-ref 'location params)))
     `((success . #t)
       (temperature . 72)
       (conditions . "sunny")
       (location . ,location)))))

;; Create a conversation with tools enabled
(define conv (llm/chat
              system: "You can check the weather."
              tools: '(get-weather)))

;; The LLM will automatically call tools when needed
(let-values ([(conv ok?) (llm/send conv "What's the weather in Paris?")])
  (when ok?
    (print (llm/get-last-response conv))))
```
### Hooks

Monitor responses and tool execution:

```scheme
(define conv (llm/chat
              system: "You are helpful."
              tools: '(get-weather)
              on-response-received: (lambda (msg)
                                      (print "Got response: " msg))
              on-tool-executed: (lambda (name args result)
                                  (print "Tool " name " called with " args)
                                  (print "Result: " result))))
```
### Cost Tracking

Track token usage and costs:

```scheme
(let-values ([(conv ok?) (llm/send conv "Hello!")])
  (when ok?
    (let ((tokens (llm/get-tokens conv))
          (cost (llm/get-cost conv)))
      (print "Input tokens: " (car tokens))
      (print "Output tokens: " (cdr tokens))
      (print "Total cost: $" cost))))
```
## Providers
The library uses a provider abstraction so different LLM backends can be plugged in. OpenAI is the default provider.
### Switching Providers

```scheme
(import llm llm-provider llm-openai)

;; Switch the default provider globally
(llm/use-provider some-other-provider)

;; Or per conversation
(define conv (llm/chat
              system: "Hello"
              provider: some-other-provider))
```
### Provider Interface

Each provider implements these procedures:

| Procedure | Purpose |
|---|---|
| `prepare-message` | Convert an internal message to the API format |
| `build-payload` | Construct the API request body |
| `call-api` | Make the HTTP request to the API |
| `parse-response` | Extract content/tools from the API response |
| `extract-tool-calls` | Normalize tool calls from a response |
| `format-tool-result` | Format a tool result for the API |
| `get-model-pricing` | Return pricing for a model |
### Creating a New Provider

```scheme
(import llm-provider llm-common)

(define my-provider
  (make-llm-provider
   'my-provider            ;; name (symbol)
   my-prepare-message      ;; (msg include-file?) -> API-format msg
   my-build-payload        ;; (messages tools model temp max-tokens) -> payload
   my-call-api             ;; (endpoint payload) -> response
   my-parse-response       ;; (response) -> normalized alist
   my-format-tool-result   ;; (tool-call-id result) -> tool msg
   my-get-model-pricing    ;; (model-name) -> pricing alist
   my-extract-tool-calls)) ;; (message) -> list of (id name arguments)
```
The `parse-response` procedure should return an alist with the following keys (a sketch follows the list):

- `success` - `#t` or `#f`
- `message` - the raw message object (for history)
- `content` - text content string
- `tool-calls` - vector of tool calls (or `#f`)
- `finish-reason` - `"stop"`, `"tool_calls"`, etc.
- `input-tokens` - prompt token count
- `output-tokens` - completion token count
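For example, a minimal `parse-response` might look like the sketch below. This is not the shipped OpenAI provider: it assumes `call-api` has already parsed the response body into alists and vectors (e.g. with the medea egg), and that the response follows the OpenAI chat-completions shape (`choices`, `usage`, and so on).

```scheme
(import (chicken base)) ;; alist-ref

;; Sketch only: field names assume an OpenAI-style chat-completions
;; response already parsed into alists/vectors by call-api.
(define (my-parse-response response)
  (let* ((choice  (vector-ref (alist-ref 'choices response) 0))
         (message (alist-ref 'message choice))
         (usage   (alist-ref 'usage response)))
    `((success . #t)
      (message . ,message)
      (content . ,(alist-ref 'content message))
      ;; alist-ref yields #f when the key is absent, which matches
      ;; the "vector of tool calls (or #f)" contract above
      (tool-calls . ,(alist-ref 'tool_calls message))
      (finish-reason . ,(alist-ref 'finish_reason choice))
      (input-tokens . ,(alist-ref 'prompt_tokens usage))
      (output-tokens . ,(alist-ref 'completion_tokens usage)))))
```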
The `extract-tool-calls` procedure should return a list of alists with the following keys (sketch below):

- `id` - tool call ID
- `name` - function name
- `arguments` - JSON string of arguments
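A matching sketch under the same assumed response shape, where the message's `tool_calls` entry is a vector of OpenAI-style tool-call alists (or absent):

```scheme
;; Sketch only: normalize each assumed tool-call alist into the
;; (id name arguments) form documented above.
(define (my-extract-tool-calls message)
  (let ((calls (alist-ref 'tool_calls message)))
    (if calls
        (map (lambda (tc)
               (let ((fn (alist-ref 'function tc)))
                 `((id . ,(alist-ref 'id tc))
                   (name . ,(alist-ref 'name fn))
                   (arguments . ,(alist-ref 'arguments fn)))))
             (vector->list calls))
        '())))
```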
The `get-model-pricing` procedure should return an alist with the following keys (sketch below):

- `input-price-per-1m` - cost per 1M input tokens
- `output-price-per-1m` - cost per 1M output tokens
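For instance (the model name and prices below are illustrative, not real rates):

```scheme
;; Sketch only: illustrative USD prices per 1M tokens.
(define (my-get-model-pricing model-name)
  (cond ((string=? model-name "my-model-large")
         '((input-price-per-1m . 2.50)
           (output-price-per-1m . 10.00)))
        (else ;; treat unknown models as free rather than erroring
         '((input-price-per-1m . 0)
           (output-price-per-1m . 0)))))
```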
## API Reference

### Conversation Management

- `(llm/chat #!key system tools history model temperature max-tokens on-response-received on-tool-executed provider)` - Create a new conversation
- `(llm/send conversation message #!key file)` - Send a message; returns `(values conversation success)`
- `(llm/get-last-response conversation)` - Get the last assistant response text
### Tool Registration

- `(llm/register-tool! name schema implementation)` - Register a tool
- `(llm/get-registered-tools [tool-names])` - Get tool schemas as a vector
### Cost and Usage

- `(llm/get-cost conversation)` - Get total cost in USD
- `(llm/get-tokens conversation)` - Get `(input . output)` token counts
### Provider Control

- `(llm/use-provider [provider])` - Get or set the current provider (a parameter)
- `(current-provider [provider])` - Same as above (from the `llm-provider` module)
## Environment Variables

- `OPENAI_API_KEY` - Required for the OpenAI provider
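A quick way to fail fast when the key is missing (a sketch; the provider itself reads the variable, this check just produces a friendlier startup error):

```scheme
(import (chicken process-context))

;; Abort early with a clear message instead of a failed API call later.
(unless (get-environment-variable "OPENAI_API_KEY")
  (error "OPENAI_API_KEY is not set"))
```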
## License
BSD-3-Clause