Start with VECTORLINT.md, not rule pack files
Start with VECTORLINT.md. It requires no configuration, activates immediately, and captures your most important standards in plain language before you commit to structured prompts.
After you run VectorLint with VECTORLINT.md against your content library and complete your first assessments, you can begin tuning the rules in VECTORLINT.md based on your initial findings. This way, you'll have a sense of whether you have styling gaps that warrant a dedicated rule pack file.
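As a starting point, a minimal VECTORLINT.md can be nothing more than plain-language standards. The headings and wording below are illustrative, not a required schema:

```markdown
# Style standards

- Write for practitioners, not executives: prefer concrete steps
  over positioning language.
- Use second person ("you"), active voice, and present tense.
- Every how-to section states the outcome before the steps.
- Avoid unexplained jargon; define each term on first use.
```

Because VECTORLINT.md is read as plain language, you can refine these lines after your first assessments without touching any structured configuration.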
Keep VECTORLINT.md under 800 tokens
VectorLint emits a warning at 4,000 tokens, but a practical target is much lower. Under 800 tokens leaves headroom for rule-specific prompts to add context without the combined system prompt becoming unwieldy. Long context degrades LLM precision and increases API costs on every evaluation. If your VECTORLINT.md is growing, that's usually a sign that some rules are specific enough to belong in a dedicated rule pack file instead.
Write specific prompts, not general ones
The quality of VectorLint's findings is directly proportional to the specificity of your prompts. Vague prompts produce vague findings, or worse, inconsistent findings that vary by run. Instead of "Check that the writing is clear," write something like "Flag sentences longer than 35 words and paragraphs that do not state their main point in the first two sentences."

Give rules domain context
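For illustration, an explicit context block at the top of a rule prompt might read as follows. The wording and structure here are hypothetical, not a fixed format VectorLint requires:

```text
Context: This is reference documentation for backend engineers
integrating our payments API. Readers value precision and
copy-pasteable examples, and they skim under time pressure.
"Good" means every claim is testable and every step names the
exact endpoint or field involved.

Task: Flag passages that describe behavior without naming the
endpoint, field, or error code they refer to.
```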
LLMs evaluate content relative to an implied standard. Make that standard explicit with a context block in your rule prompt. Tell the model who the audience is, what they value, and what good looks like for your specific content type.

Use weights that reflect real priorities
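As a sketch of what weighted criteria in a judge rule could look like (the key names and file format here are assumptions; check your rule pack syntax):

```yaml
# Hypothetical judge rule: the weights state what the team values.
criteria:
  technical_accuracy: 0.5   # wrong facts cost the most trust
  actionability: 0.3        # readers come here to complete a task
  tone_consistency: 0.15
  formatting: 0.05          # cosmetic issues rarely gate quality
```

A 0.05 weight on formatting means a formatting miss can barely move the final score, which is exactly the point: the numbers encode priorities.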
In judge rules, weights are the single most important configuration decision. They determine which criteria actually control the final score. Treat them as a statement of your content team's values, not arbitrary numbers.

Tier strictness by content type
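One way to express tiers, assuming per-path sections are supported in .vectorlint.ini (an assumption; adapt to your actual config format):

```ini
; Hypothetical tiering: strictness tracks the cost of a failure.
[docs/api/**]
strictness = strict      ; production reference docs, high trust cost

[blog/**]
strictness = standard    ; public but lower stakes

[internal/**]
strictness = lenient     ; internal notes, cheap to fix later
```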
Not all content deserves the same quality bar. Apply strictness in proportion to how much a failure costs, measured in user trust, support load, or brand impact.

Start permissive, tighten over time
When rolling VectorLint out to a team for the first time, resist the urge to apply strict settings immediately. A workflow that generates too many findings on day one loses the team's trust before it earns it.

- Start with CONFIDENCE_THRESHOLD=0.75 and standard strictness
- Run against your existing content library and review findings as a team
- Identify which findings are consistently useful vs. consistently dismissed
- Raise strictness on your highest-stakes content first
- Raise CONFIDENCE_THRESHOLD once your rules are stable
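The permissive starting point above might look like this in .vectorlint.ini. CONFIDENCE_THRESHOLD appears throughout this guide, but the strictness key name is an assumption:

```ini
; Day-one settings: tighten only after the team trusts the findings.
CONFIDENCE_THRESHOLD = 0.75
strictness = standard
```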
Set a higher confidence threshold in CI than locally
In CI, a false positive blocks a merge. Set CONFIDENCE_THRESHOLD higher in your CI environment than in local development so only the highest-confidence findings gate a merge. Lower-confidence candidates still surface locally, where a writer can evaluate them in context.
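If your CI system supports environment overrides, the split can be expressed directly in the job definition. This sketch assumes GitHub Actions and an environment-variable override; the step wiring and CLI invocation are assumptions beyond the CONFIDENCE_THRESHOLD name itself:

```yaml
# CI uses a stricter threshold than the local default, so only
# high-confidence findings can block a merge.
jobs:
  content-lint:
    runs-on: ubuntu-latest
    env:
      CONFIDENCE_THRESHOLD: "0.9"   # local default stays at 0.75
    steps:
      - uses: actions/checkout@v4
      - run: vectorlint .           # hypothetical CLI invocation
```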
Gate CI only on production-bound content
Limit your CI workflow's paths filter to directories that actually ship to users. Checking drafts or work-in-progress in CI creates unnecessary friction and noise.
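With GitHub Actions (an assumption about your CI system), the paths filter keeps drafts out of the gate:

```yaml
# Only production-bound directories trigger the content quality gate.
on:
  pull_request:
    paths:
      - "docs/**"
      - "blog/published/**"
      # drafts/** intentionally omitted: WIP never blocks a merge
```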
You can also leave draft directories out of RunRules= in .vectorlint.ini; VectorLint skips them entirely and they never reach CI.
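Assuming RunRules= accepts a list of content paths (the glob syntax here is a guess; check your VectorLint version), excluding drafts might look like:

```ini
; Only shipped content is evaluated; drafts never produce findings.
RunRules = docs/**/*.md, blog/published/**/*.md
```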
Validate new rules before raising strictness
When you write a new rule, run it at lenient strictness and a low CONFIDENCE_THRESHOLD first. Review everything it flags. Once you're confident the rule's coverage is correct and its false positive rate is acceptable, raise both settings to production levels.
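A validation pass for a new rule could start from deliberately loose settings (key names assumed, as elsewhere in this guide):

```ini
; Shake-out settings for a brand-new rule: surface everything,
; review by hand, then tighten.
CONFIDENCE_THRESHOLD = 0.5
strictness = lenient
```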
Skipping this step leads to rules that look correct on paper but produce noise in practice — which erodes team confidence in the entire workflow.
Next steps
- Tuning evaluation precision — detailed guidance on CONFIDENCE_THRESHOLD and strictness
- CI Integration — set up content quality gates in your pipeline
- Customize style rules — write effective prompts for rule pack files