Security & Privacy
XClaw's security design revolves around one principle: your data doesn't leave your machine unless you explicitly choose to use a cloud service.
All session data, configuration, and credentials are stored locally. XClaw itself doesn't operate any cloud services, doesn't collect usage data, and doesn't track user behavior. We cannot see your conversations.
When you connect to cloud AI models, conversations are sent to the respective model provider (Anthropic, OpenAI, etc.). This is an inherent requirement of using cloud models and cannot be avoided. If you want no data to leave your machine, use local models via Ollama.
Credential management uses OS-level encryption (macOS Keychain, Windows Credential Manager). OAuth tokens are encrypted at rest, and the automatic token refresh transmits them only over TLS-encrypted channels.
Permission modes provide a security boundary for Agent behavior. In Explore mode the Agent can only read; in Confirm mode it must ask for your approval before modifying files or executing commands. Even in Auto mode, XClaw performs additional safety checks on dangerous operations (such as file deletion or rm commands).
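To make the Auto-mode safety check concrete, here is a minimal sketch of how such a gate might work. The pattern list and function name are illustrative assumptions, not XClaw's internal rules:

```python
# Hypothetical sketch of an Auto-mode safety gate. XClaw's actual rule
# set is internal; these patterns are assumptions for illustration only.
import re

DANGEROUS_PATTERNS = [
    r"\brm\b",         # file deletion via rm
    r"\bmkfs\b",       # formatting a filesystem
    r"\bdd\b\s+if=",   # raw disk copies
    r">\s*/dev/sd",    # redirecting output onto a block device
]

def requires_confirmation(command: str) -> bool:
    """Return True if a command should be confirmed even in Auto mode."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

A tool following this design would pause and ask the user whenever `requires_confirmation` fires, regardless of the active permission mode.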
XClaw's codebase contains no telemetry, analytics, or advertising SDKs.
How to
Security and privacy protections are mostly enabled by default in XClaw — no extra configuration needed. But knowing a few key settings helps:
If you have strict data privacy requirements, configure only local models (like Ollama) in Settings > AI Models. This ensures your conversations never leave your machine.
Permission mode is your most important security control. When tackling an unfamiliar task, use "Explore" mode so the Agent can only read, not write. Once you've confirmed the Agent's plan is sound, switch to "Confirm" mode to execute. Only use "Auto" mode for repetitive tasks you fully trust.
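The three modes above form a simple decision rule: reads are always allowed, writes and commands depend on the mode. A sketch of that rule, with hypothetical names not drawn from XClaw's actual API:

```python
# Illustrative model of the three permission modes. The enum values and
# gate function are assumptions for explanation, not XClaw's real API.
from enum import Enum

class Mode(Enum):
    EXPLORE = "explore"   # read-only: no writes, no command execution
    CONFIRM = "confirm"   # writes and commands require user approval
    AUTO = "auto"         # writes and commands run without prompting

def is_allowed(mode: Mode, action: str, user_approved: bool = False) -> bool:
    """Decide whether an Agent action may proceed under a given mode."""
    if action == "read":
        return True              # reads are permitted in every mode
    if mode is Mode.EXPLORE:
        return False             # Explore never writes or executes
    if mode is Mode.CONFIRM:
        return user_approved     # Confirm waits for explicit approval
    return True                  # Auto proceeds (safety checks aside)
```

Note how the rule matches the recommended workflow: start in Explore, where every mutating action is denied, then move to Confirm, where each one requires your sign-off.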
Want to audit what the Agent has done? Every session's complete operation history is recorded in the session details — you can see which files the Agent read, which commands it ran, and what it changed.
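If you export or script over that history, a small summary pass makes audits quicker. The JSON shape below (an `operations` list with `type` and `target` fields) is an assumption for illustration; check your own session details for the real format:

```python
# Hypothetical audit over a session's operation history. The field names
# "operations", "type", and "target" are assumptions, not XClaw's schema.
import json

sample_session = json.loads("""
{
  "operations": [
    {"type": "read",    "target": "src/main.rs"},
    {"type": "command", "target": "cargo test"},
    {"type": "write",   "target": "src/main.rs"}
  ]
}
""")

def summarize(session: dict) -> dict:
    """Group operation targets by type for a quick at-a-glance audit."""
    summary: dict[str, list[str]] = {}
    for op in session["operations"]:
        summary.setdefault(op["type"], []).append(op["target"])
    return summary
```

Running `summarize(sample_session)` groups the reads, commands, and writes so you can scan what the Agent touched without reading every entry.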
If you suspect an API key has been compromised, update it immediately in Settings > AI Models. XClaw uses OS-level encryption for credential management, but rotating keys is still the safest practice.