Docs/Agent
Context Management
The context window is the Agent's working memory. It determines how much information the Agent can "see" during a single reasoning step.
Every AI model has a context length limit — Claude supports up to 200K tokens, GPT-4o supports 128K. As a conversation grows, its full history can exceed this window. XClaw handles this through message compaction: it automatically condenses earlier exchanges into shorter summaries, preserving key decisions and context while freeing token space for new interactions.
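To make the idea concrete, here is a minimal sketch of how compaction might work. This is an illustration only, not XClaw's actual implementation: the `summarize` helper and the crude token estimate are assumptions (a real compactor would use the model's tokenizer and ask the model itself to write the summary).

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    text: str

def count_tokens(text: str) -> int:
    # Rough stand-in; real systems use the model's own tokenizer.
    return max(1, len(text) // 4)

def summarize(messages: list[Message]) -> str:
    # Placeholder: in practice the model condenses these messages,
    # preserving key decisions and context.
    return "Summary of %d earlier messages" % len(messages)

def compact(history: list[Message], budget: int) -> list[Message]:
    """Fold the oldest messages into one summary until the
    conversation fits within the token budget."""
    total = sum(count_tokens(m.text) for m in history)
    if total <= budget:
        return history
    kept = list(history)
    dropped: list[Message] = []
    while kept and total > budget:
        oldest = kept.pop(0)          # drop from the front (oldest first)
        total -= count_tokens(oldest.text)
        dropped.append(oldest)
    return [Message("system", summarize(dropped))] + kept
```

The key design point is that compaction trades precision for space: the oldest exchanges lose detail first, while recent turns stay verbatim.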
Context also comes from other sources. Workspace Skill instructions, persisted knowledge from the memory system, and file contents the Agent has read during the current session are all injected into context. XClaw's context engine orchestrates these sources, ensuring the most relevant information gets priority within the limited token budget.
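One way to picture this orchestration is greedy priority packing: higher-priority sources claim budget first, and anything that no longer fits is left out. The source names, priority ordering, and token estimate below are assumptions for illustration, not XClaw's real API.

```python
def pack_context(sources: list[tuple[int, str]], budget: int) -> list[str]:
    """Fill a token budget with the highest-priority sources first.
    `sources` holds (priority, text) pairs; lower number = higher priority."""
    chosen: list[str] = []
    used = 0
    for _, text in sorted(sources, key=lambda s: s[0]):
        cost = max(1, len(text) // 4)  # rough token estimate
        if used + cost <= budget:      # skip sources that would overflow
            chosen.append(text)
            used += cost
    return chosen
```

With a tight budget, Skill instructions (highest priority here) always make it in, while a large file body may be skipped in favor of smaller, more relevant snippets.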
The @mention mechanism lets you precisely reference files and resources in conversation. The Agent understands these references and pulls the corresponding content into context, saving you from manually pasting large code blocks.
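Under the hood, resolving a mention amounts to replacing each `@path` token with the referenced file's contents before the prompt reaches the model. The sketch below shows one hypothetical way to do that; the function name, the mention syntax details, and the delimiter format are assumptions, not XClaw's actual code.

```python
import re
from pathlib import Path

def expand_mentions(prompt: str, workspace: Path) -> str:
    """Replace each @path token with the referenced file's contents."""
    def inject(match: re.Match) -> str:
        rel = match.group(1)
        path = workspace / rel
        if path.is_file():
            # Wrap the file body so the model can tell where it came from.
            return f"\n--- {rel} ---\n{path.read_text()}\n"
        return match.group(0)  # leave unresolvable references untouched
    return re.sub(r"@([\w./-]+)", inject, prompt)
```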
Conversation branching is another useful mechanism. When you want to explore different reasoning paths, you can fork a new branch from any point in the conversation without losing the original progress.
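Branching is naturally modeled as a tree of messages: each fork adds a sibling under the same parent, and a branch's context is simply the path from the root to its tip. The field and function names below are illustrative, not XClaw's data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    text: str
    parent: Optional["Node"] = None
    children: list["Node"] = field(default_factory=list)

def append(parent: Optional[Node], text: str) -> Node:
    """Add a message under `parent`; forking = appending a second child."""
    node = Node(text, parent)
    if parent:
        parent.children.append(node)
    return node

def path_to_root(node: Node) -> list[str]:
    """A branch's context is the message path from the root to this node."""
    out = []
    while node:
        out.append(node.text)
        node = node.parent
    return list(reversed(out))
```

Because branches share every node above the fork point, switching between them never loses the original progress: the shared prefix is stored once.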
How to
Context management is mostly automatic — XClaw handles message compaction and token budgeting for you. But you can actively control what enters the context.
Type @ in the input box and a file/resource picker pops up. Select a file and its content is injected directly into the current conversation's context for the Agent to use right away, which is much easier than copying and pasting code manually.
Want to explore different solutions? Next to any message in the conversation, you'll find a "Branch" button. Click it to fork a new conversation path from that point — the original conversation stays intact. You can freely switch between branches to compare approaches.
If you notice the Agent's response quality declining (usually in very long conversations), consider starting a fresh session and using @mention to bring in key files. This gives the Agent a clean context to work with.