SKILL · AI Optimization

Context Budgeting

Optimizes context window usage and reduces token costs by intelligently managing how much conversation history is sent to the AI model per request.

/context-budgeting

KEY FEATURES

1. Automatic conversation history trimming based on relevance
2. Configurable token budget per request
3. Priority-based message retention (system > recent > old; sketched below)
4. Token usage analytics and cost projections
5. Compatible with all AI providers (Claude, GPT, Gemini)
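
To make the budget and retention behaviour concrete, here is a minimal TypeScript sketch of priority-based trimming under a token budget. The Message shape and the 4-characters-per-token estimate are assumptions for illustration, not the actual OpenClaw internals, and the relevance scoring mentioned in feature 1 is omitted.

// Minimal sketch of budget-constrained, priority-based trimming.
type Role = "system" | "user" | "assistant";

interface Message {
  role: Role;
  content: string;
}

// Rough estimate; real providers expose exact tokenizers.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function trimToBudget(history: Message[], budgetTokens: number): Message[] {
  // Priority 1: system messages are always retained.
  const system = history.filter(m => m.role === "system");
  const rest = history.filter(m => m.role !== "system");

  let used = system.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  const kept: Message[] = [];

  // Priority 2: walk from most recent to oldest, keeping turns while the budget allows.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (used + cost > budgetTokens) break;
    kept.unshift(rest[i]);
    used += cost;
  }

  return [...system, ...kept];
}

With a 3,000-token budget, older turns fall off first while system prompts and the latest exchange stay in the request.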

CONFIGURATION EXAMPLE

openclaw.json
"skills": {
  "entries": {
    "context-budgeting": { "enabled": true }
  }
}
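
If the skill exposes tunables beyond enabled, they would live in the same entry. The keys below, budgetTokens and retainMessages, are hypothetical placeholders added only to show the shape; check the dashboard for the actual option names.

openclaw.json (budgetTokens and retainMessages are hypothetical placeholder keys)
{
  "skills": {
    "entries": {
      "context-budgeting": {
        "enabled": true,
        "budgetTokens": 4000,
        "retainMessages": 15
      }
    }
  }
}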

EXAMPLE CONVERSATION

User:

How much am I spending on tokens?

Bot:

Context Budget Report:
- Average context size: 2,400 tokens/request (down from 4,100)
- Estimated monthly savings: €3.20
- Compression ratio: 41% reduction
- Messages retained: Last 15 (was 30)

Your budget is set to 3,000 tokens/request. Want to adjust?
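
The 41% figure follows directly from the before and after averages; the savings line additionally depends on request volume and the provider's token price. A quick sketch of the arithmetic, where the volume and price are made-up placeholders rather than the inputs behind the €3.20 figure:

// Deriving the report metrics (illustrative; volume and price are placeholders).
const beforeAvg = 4100;  // average tokens/request before trimming
const afterAvg = 2400;   // average tokens/request after trimming

// (4100 - 2400) / 4100 ≈ 0.41, reported as "41% reduction".
const reduction = (beforeAvg - afterAvg) / beforeAvg;

// Savings scale with monthly request volume and the provider's token price.
const requestsPerMonth = 5000;    // hypothetical volume
const eurPerToken = 0.0000004;    // hypothetical price (€0.40 per 1M tokens)
const monthlySavings = (beforeAvg - afterAvg) * requestsPerMonth * eurPerToken; // ≈ €3.40 with these placeholders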

TIPS & BEST PRACTICES

Start with a 4,000-token budget and reduce gradually

Important system messages are never trimmed

Combine with Memory Tiering to offload old conversations to storage (see the sketch after these tips)
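
A rough sketch of that combination, reusing the trimToBudget function from the key-features sketch above: turns dropped from the context window are archived rather than discarded. archiveMessages is a hypothetical helper standing in for whatever backend Memory Tiering uses.

// Illustrative only; archiveMessages is a stand-in for a real storage tier.
async function archiveMessages(messages: Message[]): Promise<void> {
  console.log(`archiving ${messages.length} messages to long-term storage`);
}

async function prepareContext(history: Message[], budget: number): Promise<Message[]> {
  const kept = trimToBudget(history, budget);              // from the sketch above
  const dropped = history.filter(m => !kept.includes(m));  // everything that fell out of the window
  await archiveMessages(dropped);                          // offload instead of discarding
  return kept;
}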

