# SII Agent SDK for Python

Python SDK for SII-CLI, an AI agent framework.
## Overview

SII Agent SDK is the Python interface for SII-CLI: it lets Python developers programmatically access the AI agent capabilities of the CLI.

The SDK borrows the ergonomics of claude-agent-sdk-python, so existing Claude SDK users will feel at home while still benefiting from the SII-specific integrations.
## Key Features
- ✅ Claude-style API: APIs mirror claude-agent-sdk-python to reduce the learning curve
- ✅ YOLO autonomous execution: One prompt is enough to let the agent finish the entire job (powered by SimpleYoloAgent)
- ✅ Multiple auth modes: Support for USE_SII, USE_GEMINI, USE_OPENAI, and more
- ✅ SII-exclusive tools: Deep integrations with `sii_cognitions`, `sii_deep_research`, `sii_hybrid_search`, and more
- ✅ Streaming responses: JSONL-based streaming with real-time progress updates
- ✅ Type-safe: Complete type hints and IDE support
- ✅ Process isolation: Node.js Bridge keeps Python and the SII-CLI core in separate processes
## Latest Release

- PyPI: `sii-agent-sdk` 0.1.9
- Release date: 2025-11-14
- Status: Alpha (the API may still evolve)

Since 0.1.2 the Python SDK has been available on PyPI. It still relies on the globally installed `@gair/sii-cli` package to provide the Node.js Bridge.
## Architecture

The following diagram (see `docs/src/fig1.png`) shows how the Python SDK, the Node.js Bridge, and the SII-CLI core exchange data.
### Component Overview

**Python SDK** (`sii_agent_sdk/`)

- Provides a Pythonic API surface
- Manages the Bridge process lifecycle
- Parses JSONL event streams
- Wraps events in type-safe message objects

**Node.js Bridge** (`bridge/`)

- Connects Python to the Node.js ecosystem
- Handles JSONL protocol communication
- Manages SII-CLI core instances
- Offers process isolation and failure recovery

**SII-CLI Core** (`../core/`)

- Reuses all existing CLI capabilities
- AgentService: conversational interactions
- SimpleYoloAgent: autonomous execution
- ToolRegistry: tool discovery and management
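To make the data flow between these components concrete, here is a minimal sketch of JSONL framing: one JSON event per line on the Bridge's output stream. This is an illustration only; the SDK's actual parser lives in `sii_agent_sdk/_internal/protocol.py` and may handle additional cases.

```python
import json
from typing import Iterator


def parse_jsonl_stream(lines: Iterator[str]) -> Iterator[dict]:
    """Decode a JSONL event stream: one JSON object per non-empty line."""
    for line in lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank keep-alive lines
        yield json.loads(line)  # each event carries a "type" field


# Example: two events framed as JSONL text
raw = ['{"type": "status", "status": "ready"}', "", '{"type": "completed"}']
events = list(parse_jsonl_stream(iter(raw)))
```

Because each event is a self-contained line, the Python side can surface messages as soon as they arrive instead of waiting for the whole response.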
## Installation

```bash
pip install sii-agent-sdk
```

### Prerequisites

- Python: >= 3.10
- Node.js: >= 20.0.0
- SII-CLI: the built `@gair/sii-cli` package; install with `npm install -g @gair/sii-cli`. See the sii-cli installation guide for details.
## Authentication Methods

SII Agent SDK supports three authentication flows, each targeting a different use case:

| Auth Type | Description | Environment Variables | Available Tools | Best Fit |
|---|---|---|---|---|
| `USE_SII` (default) | Sign in with an SII platform account | `SII_USERNAME`, `SII_PASSWORD` | All tools + SII exclusives | Full feature set |
| `USE_OPENAI` | Use an OpenAI-compatible API key | `SII_OPENAI_BASE_URL`, `OPENAI_API_KEY` or `SII_OPENAI_API_KEY` | Core tools | Cost control |
| `USE_OPENAI_WITH_SII_TOOLS` ✨ | Hybrid mode: OpenAI generation + SII tools + upload | `SII_OPENAI_BASE_URL`, `SII_OPENAI_API_KEY`, `SII_USERNAME`, `SII_PASSWORD` | All tools + SII exclusives | Best of both worlds 🚀 |
### 1. SII Authentication (full feature set)

Use your SII platform account to unlock every feature:

```bash
export SII_USERNAME="your-username"
export SII_PASSWORD="your-password"
export SII_BASE_URL="https://www.opensii.ai/backend"  # optional
```

```python
from sii_agent_sdk import query, SiiAgentOptions

async for msg in query(
    prompt="...",
    options=SiiAgentOptions(auth_type="USE_SII"),
):
    print(msg)
```

**Advantages**

- ✅ Access every SII-exclusive tool (cognition DB, deep research, etc.)
- ✅ Full feature parity with the CLI
- ✅ Trajectory uploads happen automatically
### 2. OpenAI Authentication

Use an OpenAI (or compatible) API key when you want the lowest cost:

```bash
export SII_OPENAI_BASE_URL="https://api.openai.com/v1"  # Required base URL
export SII_OPENAI_API_KEY="sk-..."                      # Required OpenAI API key
```

ℹ️ Note: `SII_OPENAI_BASE_URL` is mandatory so the Bridge knows where to send OpenAI requests.

- For the official OpenAI service, keep `https://api.openai.com/v1`.
- For Azure OpenAI, Together, Fireworks, or other compatible services, switch to the provider-specific base URL, e.g. `https://your-resource.openai.azure.com/openai`.

```python
async for msg in query(
    prompt="...",
    options=SiiAgentOptions(
        auth_type="USE_OPENAI",
        model="gpt-4o",  # Pick an OpenAI model
    ),
):
    print(msg)
```

**Advantages**

- ✅ Lower API cost
- ✅ Faster responses
- ✅ Access to the newest OpenAI models

**Limitations**

- ❌ Cannot call SII-exclusive tools
- ❌ No trajectory uploads
### 3. Hybrid Authentication (recommended! ✨)

Hybrid mode combines the strengths of both auth types: OpenAI handles content generation, while SII provides tools and uploads.

```bash
# Provide both sets of credentials
export SII_OPENAI_BASE_URL="https://api.openai.com/v1"
export SII_OPENAI_API_KEY="sk-..."
export SII_USERNAME="your-username"
export SII_PASSWORD="your-password"
```

ℹ️ Note: `SII_OPENAI_BASE_URL` is still required in hybrid mode. Replace it with your third-party base URL if you use a custom OpenAI-compatible service.

```python
async for msg in query(
    prompt="Search for the latest AI trends and summarize them",
    options=SiiAgentOptions(
        auth_type="USE_OPENAI_WITH_SII_TOOLS",  # Hybrid mode
        model="gpt-4o",
    ),
):
    print(msg)
```

**How it works**

- 🤖 Generation: OpenAI models produce the content (cheaper and faster)
- 🔧 Tool calls: SII backends handle cognition DB, deep research, etc.
- 📊 Data uploads: trajectories automatically sync to the DC side

**Benefits**

- ✅ OpenAI cost and latency
- ✅ Full SII toolset
- ✅ Automatic data uploads
- ✅ Truly the best of both worlds 🎯
See `examples/hybrid_auth_example.py` for a complete workflow.
### Automatic Authentication Detection

If you do not set `auth_type`, the SDK inspects environment variables and selects the best option for you:

```python
# Let the SDK decide which auth mode fits the current env vars
async for msg in query(prompt="Hello"):
    print(msg)
```

**Detection priority**

1. Hybrid mode if OpenAI credentials, `SII_OPENAI_BASE_URL`, and SII credentials are all present
2. OpenAI mode if only OpenAI credentials + `SII_OPENAI_BASE_URL` are found
3. SII mode if only SII credentials exist
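The priority above can be sketched as a small pure function. This is an illustrative reimplementation of the documented rules, not the SDK's actual code; the function name and the error behavior when no credentials are found are assumptions.

```python
def detect_auth_type(env: dict) -> str:
    """Pick an auth mode from environment variables (illustrative sketch)."""
    has_openai = "SII_OPENAI_BASE_URL" in env and (
        "SII_OPENAI_API_KEY" in env or "OPENAI_API_KEY" in env
    )
    has_sii = "SII_USERNAME" in env and "SII_PASSWORD" in env

    if has_openai and has_sii:
        return "USE_OPENAI_WITH_SII_TOOLS"  # hybrid wins when everything is set
    if has_openai:
        return "USE_OPENAI"
    if has_sii:
        return "USE_SII"
    raise RuntimeError("No usable credentials found in the environment")
```

In other words, setting both credential pairs silently upgrades you to hybrid mode, which is why it is the recommended configuration.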
### Quick Reference for Environment Variables

```bash
# Hybrid mode (recommended)
export SII_OPENAI_BASE_URL="https://api.openai.com/v1"
export SII_OPENAI_API_KEY="sk-..."
export SII_USERNAME="your-username"
export SII_PASSWORD="your-password"

# OpenAI mode
export SII_OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_API_KEY="sk-..."  # or SII_OPENAI_API_KEY

# SII mode
export SII_USERNAME="your-username"
export SII_PASSWORD="your-password"

# Optional tweaks
export SII_BASE_URL="https://www.opensii.ai/backend"
export SII_OPENAI_MODEL="gpt-4o"
```

## Reuse MCP Servers Installed Through the CLI
Running `/mcp install` writes server definitions to `./.sii/config.json` (project level) or `~/.sii/config.json` (global). The Python SDK reads the same configuration at startup, registers the discovered tools in the ToolRegistry, and therefore mirrors the MCP experience provided by the CLI.
Workflow:

1. Inside SII CLI, run `/mcp install github github_token="ghp_xxxx"` (or pick any other preset).
2. Make sure the working directory where your Python code runs can read the same `.sii` folder (or rely on the global config).
3. Run the SDK: MCP tools installed through the CLI will be available automatically, with no extra `SiiAgentOptions` tweaks.

Validate with the example `packages/python-sdk/examples/mcp_cli_reuse.py`:

```bash
python packages/python-sdk/examples/mcp_cli_reuse.py
```

The script calls the GitHub MCP server installed through the CLI and prints the newest issues in your repo.
## Control Trajectory Upload (enable_data_upload)

By default the SDK matches the CLI: in `USE_SII` or hybrid mode, every trajectory is uploaded to opensii.ai. To override that behavior per invocation without changing global shell variables, set `SiiAgentOptions(enable_data_upload=...)`:

```python
async for msg in query(
    prompt="Run locally only; do not upload",
    options=SiiAgentOptions(
        auth_type="USE_SII",
        enable_data_upload=False,  # Overrides SII_ENABLE_DATA_UPLOAD
    ),
):
    print(msg)
```

- `enable_data_upload=False`: force-disable uploads even if `SII_ENABLE_DATA_UPLOAD=true`.
- `enable_data_upload=True`: force-enable uploads even if the env var is false.
- `None` (default): follow `SII_ENABLE_DATA_UPLOAD`; if unset, fall back to the CLI-compatible default.
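The precedence between the option and the environment variable can be expressed as a tiny resolver. This is an illustration of the rules above, not the SDK's internal code; the set of truthy strings and the final default (uploads on, matching the CLI behavior described earlier) are assumptions.

```python
from typing import Optional


def resolve_data_upload(option: Optional[bool], env_value: Optional[str]) -> bool:
    """Resolve the effective upload flag: explicit option > env var > default."""
    if option is not None:
        return option  # SiiAgentOptions(enable_data_upload=...) always wins
    if env_value is not None:
        return env_value.strip().lower() in ("1", "true", "yes")  # assumed truthy set
    return True  # assumed CLI-compatible default: uploads enabled
```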
## Quick Start

### 1. Basic Query

The simplest usage relies on the default configuration:

```python
import anyio

from sii_agent_sdk import query


async def main():
    async for message in query(prompt="Who are you?"):
        print(message)


anyio.run(main)
```

### 2. YOLO Autonomous Execution Mode
Let the agent handle every step with zero manual confirmation:

```python
import traceback
from pathlib import Path

import anyio

from sii_agent_sdk import query, SiiAgentOptions


def build_prompt() -> str:
    return """
Please build a two-player Gomoku web UI with the following requirements:

1. **Tech stack**: Plain HTML + CSS + JavaScript (no frameworks)
2. **Game features**:
   - 15x15 board
   - Local two-player mode with alternating black/white stones
   - Display whose turn it is
   - Automatically detect wins (five in a row)
   - Button to restart the game
   - Undo move support
3. **UI design**:
   - Polished board UI with grid lines
   - Clear rendering of black and white stones
   - Visible game status (turn, winner, etc.)
   - Responsive layout that scales to various screens
4. **File layout**:
   - `gomoku.html` as the single HTML file containing everything
   - CSS and JS can be embedded directly inside the HTML for convenience
5. **Bonus ideas** (optional):
   - Celebration animation when someone wins
   - Optional sound effects
   - Board coordinates

Please produce runnable code and save it to `gomoku.html`.
"""


async def main():
    print("🎮 Start test: build a two-player Gomoku front-end")
    print("=" * 60)

    output_dir = Path(__file__).parent / "gomoku_game_output"
    output_dir.mkdir(exist_ok=True)
    print(f"📁 Output directory: {output_dir}")
    print("🔍 Trajectories will be stored in .sii/conversation_logs/")
    print("=" * 60)

    prompt = build_prompt()
    print("\n📤 Sending request to the AI...")
    print("-" * 60)

    try:
        message_count = 0
        async for msg in query(
            prompt=prompt,
            options=SiiAgentOptions(
                yolo=True,  # Enable YOLO mode for autonomous tool calls
                max_turns=15,
                cwd=str(output_dir),
                auth_type="USE_SII",
                system_prompt="You are a helpful front-end engineer. Build beautiful and functional web apps.",
            ),
        ):
            message_count += 1
            print(f"\n💬 Message #{message_count}:")
            print(msg)
            print("-" * 60)
        print(f"\n✅ Finished! Received {message_count} messages.")
    except Exception as e:
        print(f"\n❌ Error: {e}")
        traceback.print_exc()


if __name__ == "__main__":
    anyio.run(main)
```

### 3. Multi-round YOLO Sessions (SiiAgentSession)
Some tasks require multiple rounds inside the same working directory (plan → implement → polish). `SiiAgentSession` provides a lightweight persistent session wrapper so you can reuse the same `session_id`, execution context, and history across YOLO rounds.

Example script: `examples/multi_round_yolo_session.py`
```python
import anyio

from sii_agent_sdk import SiiAgentOptions, SiiAgentSession


def build_prompts() -> list[str]:
    return [
        "Hello",
        "Please continue and summarize the next steps based on the previous result.",
        "Add a README.md that explains how to run the generated project.",
    ]


async def main() -> None:
    prompts = build_prompts()
    options = SiiAgentOptions(
        yolo=True,
        max_turns=80,
        auth_type="USE_SII",
        model="GLM-4.6",
        cwd="./multi_round_project",
        timeout_ms=240000,
    )
    session = SiiAgentSession(options)

    for round_index, prompt in enumerate(prompts, start=1):
        print(f"\n===== Round {round_index}: {prompt} =====")
        async for message in session.run(prompt):
            print(message)

    print(f"\nSession ID: {session.session_id}")
    print(f"History length: {len(session.history)} messages")


anyio.run(main)
```

**Key parameter tips**
| Option | Purpose | Recommendation |
|---|---|---|
| `SiiAgentOptions.yolo` | Toggle YOLO mode | `True` (required for autonomous multi-round runs) |
| `max_turns` | Max turns per round | Adjust based on complexity (80 in the example) |
| `cwd` | Working directory | Keep a shared directory for output reuse |
| `timeout_ms` | Timeout per round (ms) | Default 120000; extend if needed |
| `auth_type` / `model` | Auth + model choice | Same as single-run YOLO; pick what you need |
`SiiAgentSession` requests a session ID from the Bridge on the first `run()` call and reuses it afterward. It also accumulates `session.history`, which you can inspect to inject manual feedback or custom logging.
**Differences vs. single-run `query()`**
| Scenario | Single query() | SiiAgentSession.run() |
|---|---|---|
| Session ID | Always re-created | Shared across rounds |
| History | Not retained by default | Automatically appended to session.history |
| Environment | Rebuilt every time | First-round context is reused |
| Use cases | One-off tasks | Chains like plan → execute → verify |
Stick with `async for msg in query(...)` for single YOLO runs. Switch to `SiiAgentSession` when a project spans multiple rounds, so you avoid reinitialization and multiple trajectory files.
### 4. Using SII-exclusive Tools

Leverage the cognition capabilities available only on SII:

```python
import anyio

from sii_agent_sdk import query, SiiAgentOptions


async def main():
    async for message in query(
        prompt="Research the latest advances in large language model training",
        options=SiiAgentOptions(
            auth_type="USE_SII",
            allowed_tools=[
                "sii_cognitions",
                "sii_deep_research",
                "sii_hybrid_search",
            ],
            max_turns=5,
        ),
    ):
        if message.type == "assistant_message":
            print(message.content[0].text)


anyio.run(main)
```

### 5. Custom Working Directory and Model
Tune the execution environment:

```python
import anyio

from sii_agent_sdk import query, SiiAgentOptions


async def main():
    async for message in query(
        prompt="Analyze the code structure under src/",
        options=SiiAgentOptions(
            cwd="/path/to/your/project",
            model="gemini-2.0-flash-exp",
            temperature=0.7,
            max_turns=8,
            allowed_tools=["read_file", "list_dir", "grep_search"],
        ),
    ):
        print(message)


anyio.run(main)
```

## API Reference
### query() function

A simple one-off query interface that fits most cases.

```python
async def query(
    prompt: str,
    *,
    options: SiiAgentOptions | None = None,
) -> AsyncIterator[Message]
```

**Parameters**

- `prompt`: Task description from the user
- `options`: Optional configuration (see below)

**Returns**: an async iterator that yields `Message` objects
### SiiAgentOptions

```python
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Literal, Optional, Union


@dataclass
class SiiAgentOptions:
    # Authentication
    auth_type: Optional[str] = None  # "USE_SII", "USE_OPENAI", "USE_OPENAI_WITH_SII_TOOLS";
                                     # leave empty to auto-detect

    # Execution
    yolo: bool = False        # YOLO autonomous execution
    max_turns: int = 10       # Max turns per run
    timeout_ms: int = 120000  # Timeout in milliseconds
    log_events: bool = True   # Toggle telemetry events

    # Tool control
    allowed_tools: Optional[List[str]] = None  # Allow-list of tools

    # Model configuration
    model: Optional[str] = None          # Model name such as "gpt-4o" or "claude-sonnet-4.5"
    temperature: Optional[float] = None  # Sampling temperature (0.0-2.0)
    system_prompt: Optional[str] = None  # Custom system prompt
    run_model: Literal["agent", "ask"] = "agent"  # agent = full CLI prompt, ask = lightweight Q&A
    enable_data_upload: Optional[bool] = None     # Override trajectory uploads

    # Environment
    cwd: Optional[Union[Path, str]] = None             # Working directory
    env: Dict[str, str] = field(default_factory=dict)  # Extra env vars to pass to the Bridge
```

**Switching run models (run_model)**

- `agent` (default): reuses the full SII-CLI system prompt, environment context, and tool injection. Ideal for complex tasks.
- `ask`: only injects the `system_prompt` you specify (defaults to the Ask prompt if omitted) and disables CLI-level tool hints for lightweight Q&A.

```python
options = SiiAgentOptions(
    run_model="ask",
    system_prompt="You are a patient teaching assistant. Respond in English.",
)
```

If you omit `system_prompt` in Ask mode, the SDK injects the default text:

> You are Ask Model running inside the SII CLI environment. Answer the user's question directly and concisely using only the provided context. Do not assume tool access unless explicitly mentioned.
### Message Types

All messages inherit from the `Message` base class:

```python
# Status updates
class StatusMessage(Message):
    type: Literal["status"]
    status: str  # "initializing", "authenticating", "ready", "running"
    message: str
    auth_type: Optional[str]
    available_tools: Optional[List[str]]


# Assistant replies
class AssistantMessage(Message):
    type: Literal["assistant_message"]
    role: Literal["assistant"]
    content: List[ContentBlock]


# Tool call
class ToolCallMessage(Message):
    type: Literal["tool_call"]
    tool_name: str
    tool_call_id: str
    args: Dict[str, Any]


# Tool result
class ToolResultMessage(Message):
    type: Literal["tool_result"]
    tool_call_id: str
    result: Dict[str, Any]


# Completion summary
class CompletedMessage(Message):
    type: Literal["completed"]
    metadata: Dict[str, Any]  # turns_used, time_elapsed, tokens_used


# Error payload
class ErrorMessage(Message):
    type: Literal["error"]
    error: Dict[str, Any]  # code, message, details
```

## Examples
### Full Example List

See the `examples/` directory for more demos:

```
examples/
├── quick_start.py            # Quick start (auto auth detection)
├── hybrid_auth_example.py    # Hybrid auth demo ✨
├── gomoku_openai_example.py  # OpenAI mode example
└── debug_quick_start.py      # Debug helpers
```

Run them with:

```bash
cd examples

# Quick start
python quick_start.py

# Hybrid auth
python hybrid_auth_example.py

# OpenAI mode
python gomoku_openai_example.py
```

## Development
### Project Structure

```
packages/python-sdk/
├── sii_agent_sdk/           # Python SDK source
│   ├── __init__.py
│   ├── query.py             # query() implementation
│   ├── types.py             # Type definitions
│   ├── errors.py            # Exceptions
│   ├── bridge.py            # Bridge process manager
│   └── _internal/
│       ├── protocol.py      # JSONL protocol parsing
│       └── process.py       # Process management
├── bridge/                  # Node.js Bridge
│   ├── src/
│   │   ├── index.ts         # Bridge entry point
│   │   ├── protocol.ts      # Protocol handling
│   │   ├── executor.ts      # Executor
│   │   ├── stream.ts        # Event stream helpers
│   │   └── types.ts         # Shared types
│   ├── package.json
│   └── tsconfig.json
├── tests/
│   ├── test_query.py
│   ├── test_bridge.py
│   └── test_types.py
├── examples/
├── docs/
├── pyproject.toml
└── README.md
```

### Contributing
We welcome contributions! Please:

1. Fork the repo
2. Create a feature branch: `git checkout -b feature/your-feature`
3. Commit your changes: `git commit -am "Add new feature"`
4. Push the branch: `git push origin feature/your-feature`
5. Open a Pull Request

See CONTRIBUTING.md for more details.
### Development Guidelines
- Code style: Black (line-length=100)
- Type checking: mypy in strict mode
- Test coverage: aim for >= 80%
- Commit messages: follow Conventional Commits
## License
Apache License 2.0 — see LICENSE
## Acknowledgements
- Inspired by claude-agent-sdk-python
- Thanks to the Anthropic team for the excellent SDK design
- Thanks to the SII-CLI core team for the underlying infrastructure
## Related Links
Heads up: This project is still evolving quickly, so APIs may change. Watch CHANGELOG.md for updates.
