Merged
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -7,6 +7,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### Added
- **Automatic Binary Download**: SDK now automatically downloads capiscio-core binary if not found
- Downloads from GitHub releases (pinned to v2.4.0 via `CORE_VERSION`)
- Platform detection for macOS (arm64/x86_64), Linux (arm64/x86_64), and Windows
- Binary caching in `~/.capiscio/bin/` directory
- Automatic executable permissions for Unix-like systems
- Fallback search order: `CAPISCIO_BINARY` env var → local development path → system PATH → cached binary → auto-download

### Changed
- **Improved Process Management**: Enhanced error logging and binary discovery

## [2.4.1] - 2026-02-08

### Added
11 changes: 11 additions & 0 deletions README.md
@@ -547,10 +547,21 @@ if pm.is_running():

**Auto-Start Behavior:**
- ✅ Automatically downloads `capiscio-core` binary if not found
- Downloads from GitHub releases (capiscio/capiscio-core)
- Supports macOS (arm64/x86_64), Linux (arm64/x86_64), and Windows
- Caches binary in `~/.capiscio/bin/` for reuse
- Sets executable permissions automatically on Unix-like systems
- ✅ Starts on Unix socket by default (`~/.capiscio/rpc.sock`)
- ✅ Handles server crashes and restarts
- ✅ Cleans up on process exit

**Binary Search Order:**
1. `CAPISCIO_BINARY` environment variable (if set)
2. `capiscio-core/bin/capiscio` relative to SDK (development mode)
3. System PATH (`capiscio-core` command)
4. Previously downloaded binary in `~/.capiscio/bin/`
Comment on lines +560 to +562

Copilot AI Feb 24, 2026


This search-order item says the PATH lookup is for a capiscio-core command, but the SDK looks for capiscio (see ProcessManager.find_binary() using shutil.which("capiscio")). Update the README to reflect the actual executable name to avoid confusing installation/debugging.

5. Auto-download from GitHub releases (latest compatible version)

Copilot AI Feb 24, 2026


README mentions auto-download uses the "latest compatible version", but the implementation is pinned to CORE_VERSION = "2.4.0". Consider clarifying the README to reflect that the SDK downloads a fixed version (or update the code to actually resolve a compatible version dynamically).

Suggested change
5. Auto-download from GitHub releases (latest compatible version)
5. Auto-download from GitHub releases (SDK-pinned `capiscio-core` version)


## How It Works

### 1. The Handshake
102 changes: 94 additions & 8 deletions capiscio_sdk/_rpc/process.py
@@ -1,17 +1,29 @@
"""Process manager for the capiscio-core gRPC server."""

import atexit
import logging
import os
import platform
import shutil
import stat
import subprocess
import time
from pathlib import Path
from typing import Optional
from typing import Optional, Tuple

import httpx

logger = logging.getLogger(__name__)

# Default socket path
DEFAULT_SOCKET_DIR = Path.home() / ".capiscio"
DEFAULT_SOCKET_PATH = DEFAULT_SOCKET_DIR / "rpc.sock"

# Binary download configuration
CORE_VERSION = "2.4.0"
GITHUB_REPO = "capiscio/capiscio-core"
CACHE_DIR = DEFAULT_SOCKET_DIR / "bin"


class ProcessManager:
"""Manages the capiscio-core gRPC server process.
@@ -72,8 +84,9 @@ def find_binary(self) -> Optional[Path]:

Search order:
1. CAPISCIO_BINARY environment variable
2. capiscio-core/bin/capiscio relative to SDK
2. capiscio-core/bin/capiscio relative to SDK (development)
3. System PATH
4. Downloaded binary in ~/.capiscio/bin/
"""
# Check environment variable
env_path = os.environ.get("CAPISCIO_BINARY")
@@ -96,7 +109,85 @@ def find_binary(self) -> Optional[Path]:
if which_result:
return Path(which_result)

# Check previously downloaded binary
cached = self._get_cached_binary_path()
if cached.exists():
return cached

return None

@staticmethod
def _get_platform_info() -> Tuple[str, str]:
"""Determine OS and architecture for binary download."""
system = platform.system().lower()
machine = platform.machine().lower()

if system == "darwin":
os_name = "darwin"
elif system == "linux":
os_name = "linux"
elif system == "windows":
os_name = "windows"
else:
raise RuntimeError(f"Unsupported operating system: {system}")

if machine in ("x86_64", "amd64"):
arch_name = "amd64"
elif machine in ("arm64", "aarch64"):
arch_name = "arm64"
else:
raise RuntimeError(f"Unsupported architecture: {machine}")

return os_name, arch_name

@staticmethod
def _get_cached_binary_path() -> Path:
"""Get the path where the downloaded binary would be cached."""
os_name, arch_name = ProcessManager._get_platform_info()
ext = ".exe" if os_name == "windows" else ""
filename = f"capiscio-{os_name}-{arch_name}{ext}"
return CACHE_DIR / CORE_VERSION / filename

def _download_binary(self) -> Path:
"""Download the capiscio-core binary for the current platform.

Downloads from GitHub releases to ~/.capiscio/bin/<version>/.
Returns the path to the executable.
"""
os_name, arch_name = self._get_platform_info()
target_path = self._get_cached_binary_path()

if target_path.exists():
return target_path

ext = ".exe" if os_name == "windows" else ""
filename = f"capiscio-{os_name}-{arch_name}{ext}"
url = f"https://github.com/{GITHUB_REPO}/releases/download/v{CORE_VERSION}/{filename}"

logger.info("Downloading capiscio-core v%s for %s/%s...", CORE_VERSION, os_name, arch_name)

target_path.parent.mkdir(parents=True, exist_ok=True)
try:
with httpx.stream("GET", url, follow_redirects=True, timeout=60.0) as resp:
resp.raise_for_status()
with open(target_path, "wb") as f:
for chunk in resp.iter_bytes(chunk_size=8192):
f.write(chunk)

Comment on lines +151 to +176

Copilot AI Feb 24, 2026


Auto-downloading and executing a binary from GitHub without any integrity/authenticity verification is a significant supply-chain risk. Consider verifying a published SHA256 (or signature) for the exact asset before marking it executable/using it, and/or require an explicit opt-in env var for auto-download.

# Make executable
st = os.stat(target_path)
os.chmod(target_path, st.st_mode | stat.S_IEXEC)

Comment on lines +165 to +180

Copilot AI Feb 24, 2026


The auto-download installs and executes a platform binary fetched from GitHub releases without any integrity verification (checksum/signature). This is a supply-chain risk: a compromised release asset or MITM could lead to arbitrary code execution. Consider publishing/embedding expected SHA256 sums (or verifying a signed provenance file) and validating the download before chmod/execute, and fail closed with a clear error if verification fails.

logger.info("Installed capiscio-core v%s at %s", CORE_VERSION, target_path)
return target_path

except Exception as e:
if target_path.exists():
target_path.unlink()
raise RuntimeError(
f"Failed to download capiscio-core from {url}: {e}\n"
"You can also set CAPISCIO_BINARY to point to an existing binary."
) from e

def ensure_running(
self,
@@ -129,12 +220,7 @@ def ensure_running(
# Find binary
binary = self.find_binary()
if binary is None:
raise RuntimeError(
"capiscio binary not found. Please either:\n"
" 1. Set CAPISCIO_BINARY environment variable\n"
" 2. Install capiscio-core and add to PATH\n"
" 3. Build capiscio-core locally"
)
binary = self._download_binary()

Copilot AI Feb 24, 2026


New behavior auto-downloads a binary when none is found (network call + filesystem write). There are no unit tests covering the selection order and download path, or failure modes (HTTP errors, partial downloads, permission issues). Add tests that mock httpx streaming + filesystem to assert: cached binary wins, download happens only when needed, and failures clean up the partial file and surface a helpful error.

Suggested change
binary = self._download_binary()
binary = self._download_binary()
if binary is None:
raise RuntimeError(
"capiscio-core binary not found and automatic download failed; "
"ensure the binary is installed or that the SDK can download it."
)

self._binary_path = binary

# Set up socket path
6 changes: 5 additions & 1 deletion capiscio_sdk/badge_keeper.py
@@ -201,9 +201,13 @@ def _run_keeper(self) -> None:
"""Background thread that runs the keeper loop."""
try:
# Initialize RPC client
# When rpc_address is None, CapiscioRPCClient auto-starts capiscio-core
# via ProcessManager (socket at ~/.capiscio/rpc.sock).
# Only pass an explicit address if one was configured.
self._rpc_client = CapiscioRPCClient(
address=self.config.rpc_address or "unix:///tmp/capiscio.sock"
address=self.config.rpc_address,
)
self._rpc_client.connect()

logger.debug("BadgeKeeper thread started, streaming events from core...")

86 changes: 78 additions & 8 deletions capiscio_sdk/connect.py
@@ -109,7 +109,9 @@ def _ensure_did_registered(
}
payload = {"did": did}
if public_key_jwk:
payload["publicKey"] = public_key_jwk
# Server expects publicKey as a JSON string (Go *string), not a raw object.
# The string must contain a valid Ed25519 JWK per RFC-003.
payload["publicKey"] = json.dumps(public_key_jwk) if isinstance(public_key_jwk, dict) else public_key_jwk

try:
resp = httpx.patch(url, headers=headers, json=payload, timeout=30.0)
@@ -220,6 +222,8 @@ def connect(
keys_dir: Optional[Path] = None,
auto_badge: bool = True,
dev_mode: bool = False,
domain: Optional[str] = None,
agent_card: Optional[dict] = None,
) -> AgentIdentity:
"""
Connect to CapiscIO and get a fully-configured agent identity.
@@ -239,6 +243,8 @@
keys_dir: Directory for keys (default: ~/.capiscio/keys/{agent_id}/)
auto_badge: Whether to automatically request a badge
dev_mode: Use self-signed badges (Trust Level 0)
domain: Agent domain for badge issuance (default: derived from server_url host)

Copilot AI Feb 24, 2026


domain is documented as “for badge issuance”, but it is never used in the badge request/keeper setup. Either wire domain into the badge issuance flow, or update the docstring to reflect what the parameter actually does.

Suggested change
domain: Agent domain for badge issuance (default: derived from server_url host)
domain: Optional agent domain metadata (currently does not affect badge issuance)

agent_card: A2A Agent Card dict to store in the registry (displayed in dashboard)

Returns:
AgentIdentity with full credentials and methods
@@ -256,6 +262,8 @@
keys_dir=keys_dir,
auto_badge=auto_badge,
dev_mode=dev_mode,
domain=domain,
agent_card=agent_card,
)
return connector.connect()

@@ -300,6 +308,8 @@ def __init__(
keys_dir: Optional[Path],
auto_badge: bool,
dev_mode: bool,
domain: Optional[str] = None,
agent_card: Optional[dict] = None,
):
self.api_key = api_key
self.name = name
@@ -308,6 +318,13 @@ def __init__(
self.keys_dir = keys_dir
self.auto_badge = auto_badge
self.dev_mode = dev_mode
self.agent_card = agent_card
# Derive domain: explicit > hostname from server_url
if domain:
self.domain = domain
else:
from urllib.parse import urlparse
self.domain = urlparse(self.server_url).hostname or "localhost"

# HTTP client for registry API
self._client = httpx.Client(
@@ -346,6 +363,11 @@ def connect(self) -> AgentIdentity:
did = self._init_identity()
logger.info(f"DID: {did}")

# Step 3.5: Activate agent on server
# The DB defaults agents to "inactive" — we need to explicitly set "active"
# after successful identity initialization.
self._activate_agent()

Copilot AI Feb 24, 2026


New behavior: connect() now always calls _activate_agent() after identity initialization, but there are no unit tests covering the activation request/response handling. Since connect.py already has unit tests, add tests asserting the expected GET+PUT calls and that non-200 responses remain non-fatal.

Suggested change
self._activate_agent()
try:
self._activate_agent()
except Exception as exc:
# Activation failures should be non-fatal: log and continue.
logger.warning("Agent activation failed (non-fatal): %s", exc)


Comment on lines 365 to +370

Copilot AI Feb 24, 2026


The PR description is focused on automatic capiscio-core binary download, but this change also alters registry API behavior (agent activation step, new parameters domain/agent_card, and new /v1/sdk/agents endpoints). Please update the PR description (or split into a separate PR) so reviewers can assess these API/behavior changes explicitly.

# Step 4: Set up badge (if auto_badge)
badge = None
badge_expires_at = None
@@ -393,7 +415,7 @@ def _ensure_agent(self) -> Dict[str, Any]:
try:
if self.agent_id:
# Fetch specific agent
resp = self._client.get(f"/v1/agents/{self.agent_id}")
resp = self._client.get(f"/v1/sdk/agents/{self.agent_id}")
if resp.status_code == 200:
data = resp.json()
return data.get("data", data)
@@ -409,7 +431,7 @@ def _ensure_agent(self) -> Dict[str, Any]:
return local_agent

# List agents and find by name or use first one
resp = self._client.get("/v1/agents")
resp = self._client.get("/v1/sdk/agents")
if resp.status_code != 200:
raise RuntimeError(f"Failed to list agents (status {resp.status_code})")
except httpx.RequestError as e:
@@ -457,7 +479,7 @@ def _find_agent_from_local_keys(self) -> Optional[Dict[str, Any]]:
if local_did:
agent_id = user_keys_dir.name
try:
resp = self._client.get(f"/v1/agents/{agent_id}")
resp = self._client.get(f"/v1/sdk/agents/{agent_id}")
if resp.status_code == 200:
agent_data = resp.json().get("data", resp.json())
server_did = agent_data.get("did")
@@ -498,7 +520,7 @@ def _find_agent_from_local_keys(self) -> Optional[Dict[str, Any]]:

# Verify agent exists on server with matching DID
try:
resp = self._client.get(f"/v1/agents/{agent_id}")
resp = self._client.get(f"/v1/sdk/agents/{agent_id}")
if resp.status_code == 200:
agent_data = resp.json().get("data", resp.json())
server_did = agent_data.get("did")
@@ -521,7 +543,7 @@ def _create_agent(self) -> Dict[str, Any]:
name = self.name or f"Agent-{os.urandom(4).hex()}"

try:
resp = self._client.post("/v1/agents", json={
resp = self._client.post("/v1/sdk/agents", json={
"name": name,
"protocol": "a2a",
})
@@ -616,7 +638,7 @@ def _ensure_did_registered(self, did: str, public_jwk: dict) -> Optional[str]:
"""
try:
# Check if server already has a DID for this agent
resp = self._client.get(f"/v1/agents/{self.agent_id}")
resp = self._client.get(f"/v1/sdk/agents/{self.agent_id}")
if resp.status_code != 200:
logger.warning(f"Failed to check agent DID status: {resp.status_code}")
return None
@@ -634,9 +656,12 @@ def _ensure_did_registered(self, did: str, public_jwk: dict) -> Optional[str]:
# Server has no DID - try to register using PATCH (partial update)
logger.info("Registering DID with server...")

# Server expects publicKey as a JSON string (Go *string), not a raw object.
# The string must contain a valid Ed25519 JWK per RFC-003.
pk_str = json.dumps(public_jwk) if isinstance(public_jwk, dict) else public_jwk
resp = self._client.patch(
f"/v1/sdk/agents/{self.agent_id}/identity",
json={"did": did, "publicKey": public_jwk},
json={"did": did, "publicKey": pk_str},
)

if resp.status_code == 200:
@@ -654,6 +679,51 @@ def _ensure_did_registered(self, did: str, public_jwk: dict) -> Optional[str]:

return None
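The publicKey serialization rule, which now appears at both `connect.py` call sites, could be isolated as a small helper (hypothetical name, sketch only):

```python
import json

def serialize_public_key(public_jwk):
    """Server expects publicKey as a JSON string (Go *string), not a raw object.

    Accepts either a JWK dict (serialized here) or an already-serialized string.
    """
    if isinstance(public_jwk, dict):
        return json.dumps(public_jwk)
    return public_jwk  # assume it is already a JSON string
```

Centralizing this keeps the two PATCH paths from drifting if the server's expected encoding changes.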

def _activate_agent(self):
"""Set agent status to 'active' on the server.

The DB defaults agents to 'inactive'. After successful identity
initialization, we activate the agent so the dashboard shows
the correct status and badge flow can proceed.

Uses GET-then-PUT to avoid overwriting existing fields with zero values,
since the server's UpdateAgent writes all fields from the map.
"""
try:
# First, fetch the current agent data to preserve existing fields
resp = self._client.get(f"/v1/sdk/agents/{self.agent_id}")
if resp.status_code != 200:
logger.debug(f"Could not fetch agent for activation: {resp.status_code}")
return

agent_data = resp.json().get("data", resp.json())

Copilot AI Feb 24, 2026


resp.json() is called twice in agent_data = resp.json().get("data", resp.json()), which reparses the body and can be surprisingly expensive. Store the parsed JSON in a local variable once and reuse it.

Suggested change
agent_data = resp.json().get("data", resp.json())
resp_json = resp.json()
agent_data = resp_json.get("data", resp_json)


# Merge: keep all existing fields, update status, name, domain, and agent card
agent_data["status"] = "active"
if self.name:
agent_data["name"] = self.name
if self.domain:

Copilot AI Feb 24, 2026


self.domain is always set (derived from server_url when not explicitly provided), so _activate_agent() will always overwrite the server-side domain field on every connect. Consider only sending domain when the caller explicitly provided it (or when the server field is empty).

Suggested change
if self.domain:
# Only set domain if the server doesn't already have one to avoid overwriting
if self.domain and not agent_data.get("domain"):

agent_data["domain"] = self.domain
if self.agent_card:
agent_data["agentCard"] = self.agent_card

# Remove server-managed fields that shouldn't be sent back
for field in ("created_at", "updated_at", "user_id", "org_id", "trust_level"):
agent_data.pop(field, None)

resp = self._client.put(
f"/v1/sdk/agents/{self.agent_id}",
json=agent_data,
)

if resp.status_code == 200:
logger.info("Agent activated on server")
else:
logger.debug(f"Agent activation returned {resp.status_code} - non-critical")
except Exception as e:
# Don't fail connection just because activation failed
logger.debug(f"Agent activation failed: {e} - non-critical")
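The GET-then-PUT merge described in the docstring, combined with the reviewer's suggestion not to clobber an existing server-side domain, might be factored as a pure function (hypothetical `merge_for_update`; sketch only):

```python
def merge_for_update(current: dict, *, name=None, domain=None, agent_card=None) -> dict:
    """GET-then-PUT merge: start from the server's copy, overlay the fields
    we own, and strip server-managed fields before sending the PUT."""
    merged = dict(current)          # never mutate the caller's copy
    merged["status"] = "active"
    if name:
        merged["name"] = name
    # Per the review note: only send domain when the server has none, so a
    # default derived from server_url never overwrites a configured value.
    if domain and not merged.get("domain"):
        merged["domain"] = domain
    if agent_card:
        merged["agentCard"] = agent_card
    for field in ("created_at", "updated_at", "user_id", "org_id", "trust_level"):
        merged.pop(field, None)
    return merged
```

A pure merge function like this is also trivially unit-testable, which addresses the missing-test comments without mocking HTTP.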

def _setup_badge(self):
"""Set up BadgeKeeper for automatic badge management."""
try:
1 change: 1 addition & 0 deletions docs/getting-started/installation.md
@@ -11,6 +11,7 @@ keywords: A2A Security installation, Python middleware, agent protection, pip in
- **Python:** 3.10 or higher
- **Operating System:** Linux, macOS, or Windows
- **Dependencies:** Automatically installed via pip
- **capiscio-core Binary:** Automatically downloaded if not found (no manual installation needed)

## Install from PyPI
