## Supported providers
| Provider | LLM_PROVIDER value | Notes |
|---|---|---|
| OpenAI | openai | GPT-4o and other OpenAI models |
| Anthropic | anthropic | Claude Opus, Sonnet, and Haiku |
| Azure OpenAI | azure-openai | Azure-hosted OpenAI models |
| Google Gemini | gemini | Gemini Pro and other Gemini models |
| Amazon Bedrock | amazon-bedrock | Claude models via AWS Bedrock |
## How VectorLint loads credentials
VectorLint resolves credentials in this order. Later sources override earlier ones:

- Built-in defaults
- Global config (`~/.vectorlint/config.toml`)
- Local `.env` file in your project root
- Shell environment variables

This lets you keep shared defaults in `config.toml` and override them per project with a `.env` file, or override both with environment variables in Continuous Integration/Continuous Deployment (CI/CD).
## Generate a configuration file
Run `vectorlint init` to generate `~/.vectorlint/config.toml` with placeholder values for all supported providers. Fill in the key for the provider you want to use and leave the others blank.
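For example:

```bash
# Generates the global config with placeholder entries for every supported provider
vectorlint init
```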
### OpenAI
Get your API key at platform.openai.com/api-keys.

**Global config (`~/.vectorlint/config.toml`)**
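A minimal sketch; the key names (`llm_provider`, `openai_api_key`) are illustrative assumptions, not VectorLint's confirmed schema:

```toml
# Illustrative key names; check the file generated by `vectorlint init` for the exact schema
llm_provider = "openai"
openai_api_key = "sk-..."
```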
**`.env` file**
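The `.env` equivalent; `LLM_PROVIDER` comes from the table above, while `OPENAI_API_KEY` is an assumed variable name:

```bash
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...   # variable name is illustrative
```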
### Anthropic
Get your API key at console.anthropic.com.

**Global config (`~/.vectorlint/config.toml`)**
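A sketch along the same lines, with assumed key names:

```toml
llm_provider = "anthropic"
anthropic_api_key = "sk-ant-..."   # key name is illustrative
```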
**`.env` file**
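And in `.env` form (`ANTHROPIC_API_KEY` is an assumed name):

```bash
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
```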
### Azure OpenAI
In addition to the API key, Azure OpenAI requires your resource endpoint, deployment name, and API version. Find these in the Azure portal under your Azure OpenAI resource.

**Global config (`~/.vectorlint/config.toml`)**
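A sketch covering the four values listed above; the key names and placeholder values are assumptions:

```toml
llm_provider = "azure-openai"
azure_openai_api_key = "..."
azure_openai_endpoint = "https://your-resource.openai.azure.com"
azure_openai_deployment = "your-deployment-name"
azure_openai_api_version = "2024-06-01"   # use the API version your resource supports
```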
**`.env` file**
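The `.env` equivalent, with the same caveat that the variable names are illustrative:

```bash
LLM_PROVIDER=azure-openai
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_DEPLOYMENT=your-deployment-name
AZURE_OPENAI_API_VERSION=2024-06-01
```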
### Google Gemini
Get your API key at aistudio.google.com/app/apikey.

**Global config (`~/.vectorlint/config.toml`)**
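Again a sketch with an assumed key name:

```toml
llm_provider = "gemini"
gemini_api_key = "AIza..."   # key name is illustrative
```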
**`.env` file**
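And in `.env` form (`GEMINI_API_KEY` is an assumed name):

```bash
LLM_PROVIDER=gemini
GEMINI_API_KEY=AIza...
```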
### Amazon Bedrock
VectorLint accesses Claude models through Amazon Bedrock. You can omit AWS credentials if your environment has an Identity and Access Management (IAM) role or if you’ve configured a credential profile in `~/.aws/credentials`.
**Global config (`~/.vectorlint/config.toml`)**
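A sketch; the access key and secret key correspond to the variables named in the note below, and the region key is an additional assumption:

```toml
llm_provider = "amazon-bedrock"
aws_access_key_id = "AKIA..."
aws_secret_access_key = "..."
aws_region = "us-east-1"   # region key name is an assumption
```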
**`.env` file**
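The `.env` equivalent; `AWS_REGION` is illustrative:

```bash
LLM_PROVIDER=amazon-bedrock
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1
```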
If your environment already provides AWS credentials through an IAM role or `~/.aws/credentials`, you can omit `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.

## Search provider (optional)
The `technical-accuracy` evaluator verifies factual claims against live web search. You need a separate search provider credential. VectorLint currently supports Perplexity for this purpose.
**Global config (`~/.vectorlint/config.toml`)**
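A sketch with an assumed key name:

```toml
perplexity_api_key = "pplx-..."   # key name is illustrative
```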
**`.env` file**
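The `.env` equivalent, again with an assumed variable name:

```bash
PERPLEXITY_API_KEY=pplx-...
```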
If you don’t configure a search provider, the `technical-accuracy` evaluator falls back to its base evaluator rules.
## Tracking LLM costs
You can configure per-token pricing so VectorLint calculates and reports estimated costs after each run. Check your provider’s pricing page for current values.

**Global config (`~/.vectorlint/config.toml`)**
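A sketch; the key names and numbers are placeholders, not real prices:

```toml
# Placeholder figures; substitute your provider's published per-token rates
input_cost_per_million_tokens = 2.50
output_cost_per_million_tokens = 10.00
```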
**`.env` file**
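The `.env` equivalent, with the same caveat about illustrative names and values:

```bash
INPUT_COST_PER_MILLION_TOKENS=2.50
OUTPUT_COST_PER_MILLION_TOKENS=10.00
```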
## Using VectorLint in CI/CD
In GitHub Actions, store credentials as repository secrets and pass them as environment variables. Never hard-code API keys in workflow files.
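A hypothetical workflow step, assuming OpenAI as the provider; the secret name and the `vectorlint .` invocation are illustrative, not confirmed CLI usage:

```yaml
# Hypothetical GitHub Actions step; adapt the secret name and command to your setup
- name: Run VectorLint
  run: vectorlint .
  env:
    LLM_PROVIDER: openai
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```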
## Security practices

Keep credentials out of version control. Add `.env` and any local config files to `.gitignore`:
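For example (only `.env` is named above; add whatever local config files your project uses):

```gitignore
# Never commit credential files
.env
```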
## Troubleshooting
- **Authentication failed or Invalid API key**: Verify the key value is correct, confirm it hasn’t expired, and check that your account has available credits.
- **`Unknown provider: xyz`**: Check that `LLM_PROVIDER` matches one of the supported values exactly: `openai`, `anthropic`, `azure-openai`, `gemini`, `amazon-bedrock`. The value is case-sensitive.
- **Configuration not loading**: Verify the global config is at `~/.vectorlint/config.toml` and the file is readable. Confirm the TOML syntax is valid. As a quick test, try passing the credentials as environment variables directly.
- **Rate limit exceeded**: Reduce the `Concurrency` setting in the `.vectorlint.ini` file in your project (see the sketch after this list), upgrade your API tier with the provider, or switch to a provider with higher rate limits.
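A sketch of lowering `Concurrency`; whether the setting sits under a section, and its exact casing, are assumptions:

```ini
; Illustrative .vectorlint.ini snippet; the section name is an assumption
[general]
Concurrency = 2
```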