# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

LLM Action is a GitHub Action that enables interaction with OpenAI-compatible LLM services. It supports any OpenAI-compatible API endpoint, including OpenAI, Azure OpenAI, Ollama, LocalAI, LM Studio, vLLM, and other self-hosted services.
## Development Commands

### Testing

```bash
# Run all tests with race detection and coverage
go test -race -cover -coverprofile=coverage.out ./...

# Run a specific test
go test -v -run TestName ./...
```

### Linting

```bash
# Run golangci-lint (requires golangci-lint v2.6)
golangci-lint run --verbose

# Check Dockerfile
hadolint Dockerfile

# Fix Go formatting
golangci-lint fmt
```

### Building

```bash
# Build the binary locally
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o llm-action .

# Build Docker image
docker build -t llm-action .
```

### Running Locally

The action reads configuration from environment variables with the `INPUT_` prefix:

```bash
export INPUT_API_KEY="your-api-key"
export INPUT_INPUT_PROMPT="your prompt"
export INPUT_MODEL="gpt-4o"                       # optional
export INPUT_BASE_URL="https://api.openai.com/v1" # optional
export INPUT_CA_CERT="/path/to/ca-cert.pem"       # optional, for self-signed certs
export INPUT_HEADERS="X-Custom-Header:value"      # optional, custom HTTP headers
export INPUT_DEBUG="true"                         # optional

go run .
```

## Architecture

### Core Components

**main.go** - Entry point and orchestration
- `run()` function orchestrates the entire flow: config loading → client creation → message building → API call → output handling
- `maskAPIKey()` securely masks API keys in debug output (shows first/last 4 chars)
- Uses `github.com/appleboy/com/gh.SetOutput()` to set GitHub Actions outputs
**config.go** - Configuration management
- `LoadConfig()` reads all inputs from environment variables (GitHub Actions sets these with the `INPUT_` prefix)
- Required: `api_key`, `input_prompt`
- Optional with defaults: `base_url`, `model`, `temperature`, `max_tokens`, `skip_ssl_verify`, `ca_cert`, `system_prompt`, `tool_schema`, `headers`, `debug`
- Each optional parameter has a dedicated parse method (`parseTemperature`, `parseMaxTokens`, `parseSkipSSL`, `parseDebug`)
- Prompts (`system_prompt`, `input_prompt`, `tool_schema`) are loaded via `LoadPrompt()` with Go template rendering
- CA certificates are loaded via `LoadContent()` without template rendering
**client.go** - OpenAI client initialization
- `NewClient()` creates a configured OpenAI client from `github.com/sashabaranov/go-openai`
- `createInsecureHTTPClient()` creates an HTTP client with SSL verification disabled (marked with `#nosec G402` for gosec)
- SSL skip is intentionally configurable for local/self-hosted LLM services
**message.go** - Message construction
- `BuildMessages()` constructs the OpenAI chat completion message array
- The system prompt is prepended if provided, followed by the user input prompt
- Returns a slice of `openai.ChatCompletionMessage`
**tool_schema.go** - Structured output via function calling
- `ToolMeta` struct represents the function schema for structured output
- `ParseToolSchema()` parses a JSON schema string into `ToolMeta`
- `ToOpenAITool()` converts `ToolMeta` to the OpenAI tool format
- `ParseFunctionArguments()` parses the function call response into `map[string]string` for GitHub Actions outputs
- `BuildOutputMap()` combines the raw response with parsed tool arguments, handling the reserved `response` field
**prompt_loader.go** - Flexible content loading
- `LoadPrompt()` loads content from plain text, a file path, or a URL, then renders it as a Go template
- `LoadContent()` loads content from plain text, a file path, or a URL without template rendering
- Supports the `file://` prefix and automatic file detection via `os.Stat()`
- URL fetching includes a 30-second timeout and a User-Agent header
**template.go** - Go template rendering
- `RenderTemplate()` processes Go templates with environment variables as data
- `buildTemplateData()` creates the template data map from all environment variables
- Variables with the `INPUT_` prefix are accessible both with and without the prefix (e.g., `INPUT_MODEL` → `{{.MODEL}}` or `{{.INPUT_MODEL}}`)
### Data Flow

1. Environment variables (`INPUT_*`) → `LoadConfig()` → `Config` struct
   - Prompts: `LoadPrompt()` → `loadFromFile()` / `loadFromURL()` → `RenderTemplate()`
   - CA cert: `LoadContent()` → `loadFromFile()` / `loadFromURL()` (no template rendering)
2. `Config` → `NewClient()` → OpenAI client with custom base URL, SSL settings, and CA cert
3. `Config` → `BuildMessages()` → OpenAI message format
4. If `tool_schema` is provided: `ParseToolSchema()` → `ToOpenAITool()` → add tools to the request
5. Client + Messages (+ Tools) → `CreateChatCompletion()` → API call to the LLM service
6. API response → extract content (or function call arguments) → `BuildOutputMap()` → `gh.SetOutput()` → GitHub Actions outputs
### GitHub Action Integration

- Defined in `action.yml` with inputs/outputs specification
- Runs using Docker (multi-stage build in `Dockerfile`)
- Docker image uses a non-root user (`appuser:1000`) for security
- Go binary is statically compiled (`CGO_ENABLED=0`) for the Alpine Linux base image
## Code Quality

- Go version: 1.25
- Linter configuration in `.golangci.yml` includes: gosec, govet, staticcheck, errcheck, and formatting tools (gofmt, gofumpt, goimports, golines)
- Security scanning via gosec is enabled; intentional security exceptions are marked with `#nosec` comments
- All tests should include race detection (`-race` flag)
## Testing Patterns

- Table-driven tests for configuration parsing (see `config_test.go`)
- Mock/test implementations for message building (see `message_test.go`)
- Test coverage is uploaded to Codecov via CI
## CI/CD

- GitHub Actions workflows in `.github/workflows/`:
  - `testing.yml` - Runs tests and linting on push/PR
  - `trivy.yml` - Security scanning
  - `docker.yml` - Docker image building
  - `goreleaser.yml` - Release automation
  - `codeql.yml` - Code security analysis
## Key Dependencies

- `github.com/sashabaranov/go-openai` - OpenAI API client library
- `github.com/appleboy/com` - GitHub Actions helper utilities for output handling
- `github.com/yassinebenaid/godump` - Pretty printing for debug mode