ClewdR (Xerxes-2)
winget install --id=Xerxes-2.ClewdR -e
High-performance LLM proxy purpose-built for Claude (Claude.ai) and Gemini (Google AI Studio, Google Vertex AI). Rust static compilation yields a single-binary deployment with single-digit-MB memory usage, roughly one-tenth the resources and ten times the throughput of script-language implementations, handling thousands of requests per second. Built on Tokio and Axum with an event-driven, decoupled design, hot reloading, Moka-powered response caching, and fingerprint-level Chrome emulation via the Rquest HTTP client. Additional features include a built-in proxy server (no TUN required), fine-grained cookie polling with automatic account-status classification, Gemini support (AI Studio, Vertex AI, OpenAI-compatible and native formats, HTTP keep-alive), and Claude support (OpenAI-compatible and native formats, Extended Thinking, proxy-side stop sequences, image attachments, web search, Claude Max).
README
🎯 What is ClewdR?
ClewdR is a production-grade, high-performance proxy server engineered specifically for Claude (Claude.ai, Claude Code) and Google Gemini (AI Studio, Vertex AI). Built with Rust for maximum performance and minimal resource usage, it provides enterprise-level reliability with consumer-friendly simplicity.
🏆 Why ClewdR?
- 🚄 10x Performance: Outperforms script-language implementations
- 💾 1/10th Memory: Uses only single-digit MB in production
- 🔧 Production Ready: Handles thousands of requests per second
- 🌐 Multi-Platform: Native support for Windows, macOS, Linux, Android
✨ Core Features
🎨 Full-Featured Web Interface
- React-powered dashboard with real-time monitoring
- Multi-language support (English/Chinese)
- Secure authentication with auto-generated passwords
- Hot configuration reload without service interruption
- Visual cookie & key management
🏗️ Enterprise Architecture
- Tokio + Axum async runtime for maximum throughput
- Event-driven design with decoupled components
- Moka-powered caching with intelligent invalidation
- Chrome-level fingerprinting for seamless API access
- Multi-threaded processing with optimal resource usage
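ClewdR's cache itself is the Moka crate in Rust; as a rough illustration of the response-caching idea only, a minimal TTL cache with lazy invalidation might look like this Python sketch (all names hypothetical):

```python
import time

class TTLCache:
    """Minimal time-to-live response cache, illustrating the caching idea.

    ClewdR uses the Moka crate in Rust; this sketch is only an analogy.
    """

    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock, useful for testing
        self._store = {}            # key -> (expiry_time, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if self.clock() >= expiry:  # expired: invalidate lazily on read
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        self._store[key] = (self.clock() + self.ttl, value)
```

Moka adds concurrency-safe eviction and size bounds on top of this basic expiry logic.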
🧠 Intelligent Resource Management
- Smart cookie rotation with status classification
- API key health monitoring and automatic failover
- Rate limiting protection with exponential backoff
- Connection pooling with keep-alive optimization
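The rotation described above can be sketched roughly as follows (a hypothetical illustration, not ClewdR's actual Rust implementation): cookies are classified by status after each request, and the pool round-robins over the healthy ones.

```python
from collections import deque

class CookieRotator:
    """Round-robin over healthy cookies; problem cookies are parked.

    Hypothetical sketch of status-classified rotation, not ClewdR's code.
    """

    def __init__(self, cookies):
        self.healthy = deque(cookies)
        self.rate_limited = set()
        self.invalid = set()

    def next_cookie(self):
        if not self.healthy:
            raise RuntimeError("no healthy cookies available")
        cookie = self.healthy[0]
        self.healthy.rotate(-1)  # fine-grained polling: move to back of queue
        return cookie

    def report(self, cookie, status):
        """Classify a cookie after a request: 'ok', 'rate_limited', 'invalid'."""
        if status == "ok":
            return
        if cookie in self.healthy:
            self.healthy.remove(cookie)
        (self.rate_limited if status == "rate_limited" else self.invalid).add(cookie)
```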
🌍 Universal Compatibility
- Static compilation - single binary, zero dependencies
- Cross-platform native - Windows, macOS, Linux, Android
- Docker ready with optimized images
- Reverse proxy friendly with custom endpoint support
🚀 Protocol Support
Claude Integration
- ✅ Claude.ai web interface
- ✅ Claude Code specialized support
- ✅ System prompt caching for efficiency
- ✅ Extended Thinking mode
- ✅ Image attachments & web search
- ✅ Custom stop sequences
Google Gemini Integration
- ✅ AI Studio & Vertex AI
- ✅ OAuth2 authentication for Vertex
- ✅ HTTP Keep-Alive optimization
- ✅ Model switching with automatic detection
API Compatibility
- ✅ OpenAI format - drop-in replacement
- ✅ Native formats - Claude & Gemini
- ✅ Streaming responses with real-time processing
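Because the proxy accepts the standard OpenAI request schema, any OpenAI-style client payload should work unchanged; a minimal chat request body (model name is a placeholder) looks like:

```python
import json

def build_chat_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-format chat completion body, as accepted by the
    OpenAI-compatible endpoints (standard OpenAI schema)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

body = build_chat_request("claude-3-sonnet-20240229", "Hello!")
print(json.dumps(body))
```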
📊 Performance Metrics
| Metric | ClewdR | Traditional Proxies |
|---|---|---|
| Memory Usage | <10 MB | 100-500 MB |
| Requests/sec | 1000+ | 100-200 |
| Startup Time | <1 second | 5-15 seconds |
| Binary Size | ~15 MB | 50-200 MB |
| Dependencies | Zero | Node.js/Python + libs |
🚀 Quick Start Guide
Step 1: Download & Run
# Download the latest release for your platform
wget https://github.com/Xerxes-2/clewdr/releases/latest/download/clewdr-[platform].tar.gz
# Extract the binary (if necessary)
tar -xzf clewdr-[platform].tar.gz
# Navigate to the extracted directory (if the archive contains one)
cd clewdr-[platform]
# Make executable (Linux/macOS)
chmod +x clewdr
# Run ClewdR
./clewdr
📦 Platform Downloads
| Platform | Architecture | Download Link |
|---|---|---|
| 🪟 Windows | x64 | clewdr-windows-x64.exe |
| 🐧 Linux | x64 | clewdr-linux-x64 |
| 🐧 Linux | ARM64 | clewdr-linux-arm64 |
| 🍎 macOS | x64 | clewdr-macos-x64 |
| 🍎 macOS | ARM64 (M1/M2) | clewdr-macos-arm64 |
| 🤖 Android | ARM64 | clewdr-android-arm64 |
Step 2: Access Web Interface
- 🌐 Open your browser to http://127.0.0.1:8484
- 🔐 Use the Web Admin Password displayed in the console
- 🎉 Welcome to ClewdR's management interface!
> 💡 Pro Tips:
>
> - Forgot password? Delete clewdr.toml and restart
> - Docker users: Password appears in container logs
> - Change password: Use the web interface settings
Step 3: Configure Your Services
🍃 Claude Setup
- Add Cookies: Paste your Claude.ai session cookies
- Configure Proxy: Set upstream proxy if needed
- Test Connection: Verify cookie status in dashboard
🔹 Gemini Setup
- Add API Keys: Input your Google AI Studio keys
- Vertex AI (Optional): Configure OAuth2 for enterprise
- Model Selection: Choose your preferred models
Step 4: Connect Your Applications
ClewdR provides multiple API endpoints. Check the console output for available endpoints:
🔗 API Endpoints
# Claude Endpoints
Claude Web: http://127.0.0.1:8484/v1/messages # Native format
Claude OpenAI: http://127.0.0.1:8484/v1/chat/completions # OpenAI compatible
Claude Code: http://127.0.0.1:8484/code/v1/messages # Claude Code
# Gemini Endpoints
Gemini Native: http://127.0.0.1:8484/v1/v1beta/generateContent # Native format
Gemini OpenAI: http://127.0.0.1:8484/gemini/chat/completions # OpenAI compatible
Vertex AI: http://127.0.0.1:8484/v1/vertex/v1beta/ # Vertex AI
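Streaming responses on the OpenAI-compatible endpoints follow the standard SSE `data:` framing; a minimal parser for the delta chunks could look like the following (standard OpenAI stream schema, shown on canned input rather than a live instance):

```python
import json

def collect_stream_text(sse_lines):
    """Accumulate assistant text from OpenAI-style SSE 'data:' lines."""
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue               # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":    # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)
```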
⚙️ Application Configuration Examples
SillyTavern Configuration
{
"api_url": "http://127.0.0.1:8484/v1/chat/completions",
"api_key": "your-api-password-from-console",
"model": "claude-3-sonnet-20240229"
}
Continue VSCode Extension
{
"models": [
{
"title": "Claude via ClewdR",
"provider": "openai",
"model": "claude-3-sonnet-20240229",
"apiBase": "http://127.0.0.1:8484/v1/",
"apiKey": "your-api-password-from-console"
}
]
}
Cursor IDE Configuration
{
"openaiApiBase": "http://127.0.0.1:8484/v1/",
"openaiApiKey": "your-api-password-from-console"
}
Step 5: Verify & Monitor
- ✅ Check cookie/key status in the web dashboard
- ✅ Monitor request logs for successful connections
- ✅ Test with a simple chat request
- ✅ Enjoy blazing-fast LLM proxy performance!
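The "simple chat request" in the checklist above can be issued against a running instance roughly like this (the password comes from your console, the model name is a placeholder; a sketch, not an official client):

```python
import json
import urllib.request

def chat_once(base_url: str, api_key: str, prompt: str,
              model: str = "claude-3-sonnet-20240229") -> urllib.request.Request:
    """Build a POST to the OpenAI-compatible endpoint; pass to urlopen() to send."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = chat_once("http://127.0.0.1:8484", "your-api-password-from-console", "ping")
# with urllib.request.urlopen(req) as resp:  # uncomment with ClewdR running
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```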
Community Resources
GitHub Aggregated Wiki
Acknowledgements
- Clewd Modified Version - A modified version of the original Clewd that provided much of the inspiration and foundational features.
- Clove - Provides the support logic for Claude Code.