Python API Reference
Installation
```shell
pip install protomcp
```

Imports
```python
from protomcp import tool, ToolResult, tool_manager
from protomcp.context import ToolContext
from protomcp.log import ServerLogger
from protomcp.group import tool_group, action
from protomcp.local_middleware import local_middleware
from protomcp.server_context import server_context
from protomcp.telemetry import telemetry_sink, ToolCallEvent
from protomcp.sidecar import sidecar
from protomcp.workflow import workflow, step, StepResult, get_registered_workflows, clear_workflow_registry
from protomcp.discovery import configure
```

@tool(...)
Registers a function as an MCP tool.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| description | str | required | Human-readable description of what the tool does |
| output_type | dataclass type | None | Dataclass type used to generate the structured output JSON Schema |
| title | str | "" | Display name shown in the MCP host UI |
| destructive | bool | False | Hint: the tool has destructive side effects |
| idempotent | bool | False | Hint: calling the tool multiple times has the same effect as once |
| read_only | bool | False | Hint: the tool does not modify state |
| open_world | bool | False | Hint: the tool may access resources outside the current context |
| task_support | bool | False | Hint: the tool supports long-running async task semantics |
| hidden | bool | False | If True, the tool is registered but hidden from the initial tool list |
Returns: The original function (unmodified).
The function name becomes the tool name. Type hints on parameters are used to generate the JSON Schema for tool inputs.
```python
from protomcp import tool, ToolResult

@tool("Multiply two numbers", title="Multiply", read_only=True)
def multiply(a: float, b: float) -> ToolResult:
    return ToolResult(result=str(a * b))
```

Supported parameter types
| Python type | JSON Schema |
|---|---|
| str | {"type": "string"} |
| int | {"type": "integer"} |
| float | {"type": "number"} |
| bool | {"type": "boolean"} |
| list | {"type": "array"} |
| dict | {"type": "object"} |
| list[T] | {"type": "array", "items": <T schema>} |
| dict[K, V] | {"type": "object", "additionalProperties": <V schema>} |
| str \| int / Union[str, int] | {"anyOf": [{"type": "string"}, {"type": "integer"}]} |
| Optional[T] | <T schema>; the parameter is not required |
| Literal["a", "b"] | {"type": "string", "enum": ["a", "b"]} |
Parameters without defaults and without Optional are added to the required array. Parameters with Optional[T] or a default value are optional.
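The rules above can be sketched as a small mapping function. This is an illustrative model of the documented behavior, not protomcp's actual schema generator:

```python
from typing import Optional, Union, get_args, get_origin

# Schemas for the basic types from the table above.
BASIC = {
    str: {"type": "string"},
    int: {"type": "integer"},
    float: {"type": "number"},
    bool: {"type": "boolean"},
}

def schema_for(hint):
    # Illustrative model of the documented rules, not protomcp's source.
    if get_origin(hint) is Union:
        args = [a for a in get_args(hint) if a is not type(None)]
        if len(args) == 1:
            return schema_for(args[0])      # Optional[T] uses T's schema
        return {"anyOf": [schema_for(a) for a in args]}
    if get_origin(hint) is list:
        return {"type": "array", "items": schema_for(get_args(hint)[0])}
    return BASIC[hint]

print(schema_for(Optional[int]))   # {'type': 'integer'}
print(schema_for(list[str]))       # {'type': 'array', 'items': {'type': 'string'}}
```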
Skipped parameters
Parameters named self, cls, or ctx are skipped during schema generation. Parameters with type ToolContext (if present) are also skipped.
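The name-based part of this rule can be sketched with inspect (illustrative only; the type-based ToolContext check is omitted):

```python
import inspect

SKIPPED = {"self", "cls", "ctx"}

def schema_param_names(fn):
    # Keep only the parameters that would appear in the generated input schema.
    return [name for name in inspect.signature(fn).parameters if name not in SKIPPED]

def read_file(self, ctx, path: str, limit: int = 100):
    ...

print(schema_param_names(read_file))   # ['path', 'limit']
```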
ToolResult
Returned by every tool handler.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolResult:
    result: str = ""
    is_error: bool = False
    enable_tools: Optional[list[str]] = None
    disable_tools: Optional[list[str]] = None
    error_code: Optional[str] = None
    message: Optional[str] = None
    suggestion: Optional[str] = None
    retryable: bool = False
```

Fields
| Field | Type | Default | Description |
|---|---|---|---|
| result | str | "" | The result string returned to the MCP host |
| is_error | bool | False | Set to True to indicate an error |
| enable_tools | list[str] \| None | None | Tool names to enable after this call |
| disable_tools | list[str] \| None | None | Tool names to disable after this call |
| error_code | str \| None | None | Machine-readable error code |
| message | str \| None | None | Human-readable error message |
| suggestion | str \| None | None | Recovery suggestion for the AI or user |
| retryable | bool | False | Whether retrying this call might succeed |
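Putting the error fields together, a handler might surface a failure like this. The sketch re-declares the dataclass so it runs standalone; the handler and its error code are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolResult:  # mirror of the definition above, for a standalone example
    result: str = ""
    is_error: bool = False
    enable_tools: Optional[list[str]] = None
    disable_tools: Optional[list[str]] = None
    error_code: Optional[str] = None
    message: Optional[str] = None
    suggestion: Optional[str] = None
    retryable: bool = False

def read_file(path: str) -> ToolResult:
    # Hypothetical handler: surface a missing file as a structured error.
    return ToolResult(
        is_error=True,
        error_code="file_not_found",
        message=f"No such file: {path}",
        suggestion="Check the path or call a listing tool first.",
        retryable=False,
    )

print(read_file("missing.txt").error_code)   # file_not_found
```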
ToolContext
Injected by protomcp into tool handlers that declare a ctx: ToolContext parameter. Provides progress reporting and cancellation detection.
```python
from protomcp.context import ToolContext
```

Constructor
```python
ToolContext(progress_token: str, send_fn: Callable)
```

Not constructed directly — protomcp creates and injects it.
Methods
report_progress(progress, total=0, message="")
Send a progress notification to the MCP host.
| Parameter | Type | Default | Description |
|---|---|---|---|
| progress | int | required | Current progress value |
| total | int | 0 | Total expected value (0 means unknown) |
| message | str | "" | Optional human-readable status message |
No-op if the host did not supply a progress_token for this call.
```python
ctx.report_progress(50, 100, "Halfway done")
```

is_cancelled() -> bool
Returns True if the MCP host has sent a cancellation for this call. Thread-safe.
```python
if ctx.is_cancelled():
    return ToolResult(is_error=True, message="Cancelled")
```

sample(messages, max_tokens, **kwargs)
Request an LLM completion from the MCP client (sampling). Returns a dict with the response content.
| Parameter | Type | Default | Description |
|---|---|---|---|
| messages | list[dict] | required | List of message dicts with role and content keys |
| max_tokens | int | required | Maximum tokens in the response |
```python
response = ctx.sample(
    messages=[{"role": "user", "content": "Summarize this text"}],
    max_tokens=500,
)
```

ServerLogger
Sends structured log messages to the MCP host over the protomcp protocol.
```python
from protomcp.log import ServerLogger
```

Constructor
```python
ServerLogger(send_fn: Callable, name: str = "")
```

Not constructed directly — protomcp creates and injects it. The name field identifies the logger source in log messages.
Methods
All log methods have the same signature:
```python
method(message: str, *, data=None)
```

data is an optional value that is serialized to JSON and included in the log envelope. If data is None, the message string is used as the data payload.
| Method | MCP log level |
|---|---|
| debug(message, *, data=None) | debug |
| info(message, *, data=None) | info |
| notice(message, *, data=None) | notice |
| warning(message, *, data=None) | warning |
| error(message, *, data=None) | error |
| critical(message, *, data=None) | critical |
| alert(message, *, data=None) | alert |
| emergency(message, *, data=None) | emergency |
```python
logger.info("Starting job", data={"job_id": "abc123"})
logger.error("Job failed", data={"error": "timeout"})
```

tool_manager
Module-level object for programmatic tool list control. Only available when running under protomcp (raises RuntimeError if called outside protomcp).
```python
from protomcp import tool_manager
```

tool_manager.enable(tool_names)
Enable the specified tools. Returns the updated list of active tool names.
```python
active: list[str] = tool_manager.enable(["write_file", "delete_file"])
```

tool_manager.disable(tool_names)
Disable the specified tools. Returns the updated list of active tool names.
```python
active: list[str] = tool_manager.disable(["write_file", "delete_file"])
```

tool_manager.set_allowed(tool_names)
Switch to allowlist mode. Only the specified tools are active. Returns the updated list of active tool names.
```python
active: list[str] = tool_manager.set_allowed(["read_file", "search"])
```

tool_manager.set_blocked(tool_names)
Switch to blocklist mode. All tools except the specified ones are active. Returns the updated list of active tool names.
```python
active: list[str] = tool_manager.set_blocked(["delete_database"])
```

tool_manager.get_active_tools()
Get the current list of active tool names.
```python
active: list[str] = tool_manager.get_active_tools()
```

tool_manager.batch(enable, disable, allow, block)
Perform multiple operations atomically. All parameters are optional lists of tool names.
```python
active: list[str] = tool_manager.batch(
    enable=["write_file"],
    disable=["read_only_mode"],
    allow=None,
    block=None,
)
```

Internal API (for testing)
These are not part of the public API but are useful in tests:
get_registered_tools()
Returns a copy of the current tool registry.
```python
from protomcp.tool import get_registered_tools, ToolDef

tools: list[ToolDef] = get_registered_tools()
```

clear_registry()
Clears the tool registry. Call this in test setup to avoid cross-test contamination.
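The registry is module-level state, so tools registered in one test stay visible in the next unless cleared. A minimal plain-Python model of the pattern (not protomcp internals):

```python
_REGISTRY: list[str] = []

def register(name: str) -> None:
    _REGISTRY.append(name)

def get_registered() -> list[str]:
    return list(_REGISTRY)   # return a copy, like get_registered_tools()

def clear() -> None:
    _REGISTRY.clear()        # like clear_registry() in test setup

register("tool_from_test_one")
clear()                      # fresh state before the next test runs
register("tool_from_test_two")
print(get_registered())      # ['tool_from_test_two']
```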
```python
from protomcp.tool import clear_registry

clear_registry()
```

ToolDef
```python
@dataclass
class ToolDef:
    name: str
    description: str
    input_schema_json: str          # JSON string
    handler: Callable
    output_schema_json: str = ""    # JSON string, empty if no output_type
    title: str = ""
    destructive_hint: bool = False
    idempotent_hint: bool = False
    read_only_hint: bool = False
    open_world_hint: bool = False
    task_support: bool = False
    hidden: bool = False
```

@tool_group(...)
Class decorator that registers a group of related actions as one or more MCP tools.
```python
from protomcp.group import tool_group
```

Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | required | Tool group name |
| description | str | "" | Human-readable description |
| strategy | str | "union" | "union" (single tool with oneOf schema) or "separate" (one tool per action, namespaced as group.action) |
| title | str | "" | Display name |
| destructive | bool | False | Hint: destructive side effects |
| idempotent | bool | False | Hint: idempotent |
| read_only | bool | False | Hint: read-only |
| open_world | bool | False | Hint: open world access |
| task_support | bool | False | Hint: task support |
| hidden | bool | False | Hide from tool list |
Returns: The original class (unmodified).
```python
@tool_group("files", description="File operations", strategy="union")
class FileTools:
    @action("read", description="Read a file")
    def read(self, path: str) -> ToolResult:
        with open(path) as f:
            return ToolResult(result=f.read())
```

@action(...)
Method decorator that marks a method as a group action.
```python
from protomcp.group import action
```

Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | required | Action name |
| description | str | "" | Human-readable description |
| requires | list[str] | [] | Required field names — validation fails if missing or empty |
| enum_fields | dict[str, list] | {} | Map of field name to valid values — invalid values trigger “Did you mean?” suggestions |
| cross_rules | list[tuple[Callable, str]] | [] | List of (condition_fn, error_message) tuples — if condition returns True, validation fails |
| hints | dict[str, dict] | {} | Map of hint name to {"condition": Callable, "message": str} — non-blocking advisory messages |
Returns: The original function (unmodified).
```python
@action("deploy", requires=["env"], enum_fields={"env": ["dev", "staging", "prod"]})
def deploy(self, env: str, version: str) -> ToolResult:
    return ToolResult(result=f"Deployed {version} to {env}")
```

ActionDef
```python
@dataclass
class ActionDef:
    name: str
    description: str
    handler: Callable
    input_schema: dict
    requires: list[str] = field(default_factory=list)
    enum_fields: dict[str, list] = field(default_factory=dict)
    cross_rules: list[tuple[Callable, str]] = field(default_factory=list)
    hints: dict[str, dict] = field(default_factory=dict)
```

GroupDef
```python
@dataclass
class GroupDef:
    name: str
    description: str
    actions: list[ActionDef]
    instance: Any
    strategy: str = "union"
    title: str = ""
    destructive_hint: bool = False
    idempotent_hint: bool = False
    read_only_hint: bool = False
    open_world_hint: bool = False
    task_support: bool = False
    hidden: bool = False
```

get_registered_groups()
Returns a copy of the current group registry.
```python
from protomcp.group import get_registered_groups

groups: list[GroupDef] = get_registered_groups()
```

clear_group_registry()
Clears the group registry.
```python
from protomcp.group import clear_group_registry

clear_group_registry()
```

@server_context(...)
Registers a context resolver that injects a value into tool handlers.
```python
from protomcp.server_context import server_context
```

Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| param_name | str | required | Name of the parameter to inject into handlers |
| expose | bool | True | If False, the parameter is hidden from the tool JSON Schema |
Returns: The original function (unmodified).
The decorated function receives the full args dict and returns the resolved value.
```python
import os

@server_context("project_dir", expose=False)
def resolve_project_dir(args: dict) -> str:
    return os.getcwd()
```

ContextDef
```python
@dataclass
class ContextDef:
    param_name: str
    resolver: Callable[[dict], Any]
    expose: bool
```

resolve_contexts(args)
Runs all registered context resolvers against args. Returns a dict[str, Any] of resolved values keyed by param_name.
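The resolution step can be modeled in a few lines (a sketch of the documented behavior, not protomcp's source): each resolver receives the full args dict and its return value is keyed by param_name.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ContextDef:   # mirrors the definition above
    param_name: str
    resolver: Callable[[dict], Any]
    expose: bool

_contexts: list[ContextDef] = []

def resolve_contexts(args: dict) -> dict:
    # Run every registered resolver against the raw call args.
    return {c.param_name: c.resolver(args) for c in _contexts}

# Hypothetical resolvers for illustration.
_contexts.append(ContextDef("project_dir", lambda args: "/srv/app", expose=False))
_contexts.append(ContextDef("user", lambda args: args.get("user", "anon"), expose=True))

print(resolve_contexts({"user": "alice"}))
# {'project_dir': '/srv/app', 'user': 'alice'}
```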
get_hidden_context_params()
Returns a set[str] of parameter names where expose=False.
get_registered_contexts()
Returns a copy of the context registry.
clear_context_registry()
Clears the context registry.
@local_middleware(...)
Registers an in-process middleware that wraps tool handlers.
```python
from protomcp.local_middleware import local_middleware
```

Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| priority | int | 100 | Sort order — lowest priority runs outermost |
Returns: The original function (unmodified).
The decorated function signature is (ctx, tool_name: str, args: dict, next_handler) -> ToolResult. Call next_handler(ctx, args) to continue the chain, or return a ToolResult directly to short-circuit.
```python
import time

@local_middleware(priority=10)
def timing_mw(ctx, tool_name, args, next_handler):
    start = time.monotonic()
    result = next_handler(ctx, args)
    elapsed = time.monotonic() - start
    print(f"{tool_name} took {elapsed:.3f}s")
    return result
```

LocalMiddlewareDef
```python
@dataclass
class LocalMiddlewareDef:
    priority: int
    handler: Callable  # (ctx, tool_name, args, next_handler) -> ToolResult
```

build_middleware_chain(tool_name, handler)
Builds a composed callable that wraps handler with all registered middleware. Returns (ctx, args_dict) -> ToolResult.
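Composition can be sketched as a fold over the priority-sorted list (illustrative, not protomcp's source): the lowest priority wraps outermost, so it runs first on the way in and last on the way out.

```python
def build_chain(middlewares, handler):
    # middlewares: list of (priority, fn); fn(ctx, tool_name, args, next_handler).
    chain = handler                                  # innermost: the tool handler
    for _prio, mw in sorted(middlewares, key=lambda m: m[0], reverse=True):
        # Wrap highest priority first so the lowest ends up outermost.
        chain = (lambda m, nxt: lambda ctx, args: m(ctx, "demo", args, nxt))(mw, chain)
    return chain

calls = []

def tracer(tag):
    def mw(ctx, tool_name, args, next_handler):
        calls.append(f"{tag}:in")
        result = next_handler(ctx, args)
        calls.append(f"{tag}:out")
        return result
    return mw

chain = build_chain([(100, tracer("b")), (10, tracer("a"))], lambda ctx, args: "done")
chain(None, {})
print(calls)   # ['a:in', 'b:in', 'b:out', 'a:out'], priority 10 runs outermost
```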
get_local_middleware()
Returns middleware sorted by priority (lowest first).
clear_local_middleware()
Clears the middleware registry.
@telemetry_sink
Registers an observe-only telemetry sink. Sinks receive ToolCallEvent instances but cannot affect tool execution. Exceptions in sinks are silently swallowed.
```python
from protomcp.telemetry import telemetry_sink
```

Returns: The original function (unmodified).
```python
@telemetry_sink
def log_events(event: ToolCallEvent):
    print(f"[{event.phase}] {event.tool_name}")
```

ToolCallEvent
```python
@dataclass
class ToolCallEvent:
    tool_name: str
    phase: str          # "start", "success", "error", "progress"
    args: dict
    action: str = ""
    result: str = ""
    error: Optional[Exception] = None
    duration_ms: int = 0
    progress: int = 0
    total: int = 0
    message: str = ""
```

| Field | Type | Description |
|---|---|---|
| tool_name | str | Name of the tool being called |
| phase | str | One of "start", "success", "error", "progress" |
| args | dict | Arguments passed to the tool |
| action | str | Action name (for tool groups) |
| result | str | Result string (on success) |
| error | Exception \| None | Exception (on error) |
| duration_ms | int | Elapsed time in milliseconds |
| progress | int | Current progress value (on progress) |
| total | int | Total progress value (on progress) |
| message | str | Human-readable status message |
emit_telemetry(event)
Sends a ToolCallEvent to all registered sinks. Called internally by protomcp.
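The observe-only contract can be modeled as follows (an illustrative sketch): delivery iterates the sinks and swallows any exception, so a broken sink never affects the tool call or the other sinks.

```python
_sinks = []
seen = []

def emit(event):
    # Deliver to every sink; a failing sink is skipped, never propagated.
    for sink in _sinks:
        try:
            sink(event)
        except Exception:
            pass

def broken_sink(event):
    raise RuntimeError("boom")

def recording_sink(event):
    seen.append(event)

_sinks.extend([broken_sink, recording_sink])
emit({"tool_name": "multiply", "phase": "start"})
print(seen)   # [{'tool_name': 'multiply', 'phase': 'start'}]
```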
get_telemetry_sinks()
Returns a copy of the sink list.
clear_telemetry_sinks()
Clears the telemetry sink registry.
@sidecar(...)
Declares a companion process that protomcp manages alongside the server.
```python
from protomcp.sidecar import sidecar
```

Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | required | Unique sidecar identifier |
| command | list[str] | required | Process command and arguments |
| health_check | str | "" | URL to poll for health (HTTP 200 = healthy) |
| start_on | str | "first_tool_call" | "server_start" or "first_tool_call" |
| restart_on_version_mismatch | bool | False | Restart if version changes |
| health_timeout | float | 30.0 | Seconds to wait for health check |
Returns: The original function (unmodified).
```python
@sidecar(name="redis", command=["redis-server"], start_on="server_start")
def redis_sidecar():
    pass
```

SidecarDef
```python
@dataclass
class SidecarDef:
    name: str
    command: list[str]
    health_check: str = ""
    start_on: str = "first_tool_call"
    restart_on_version_mismatch: bool = False
    health_timeout: float = 30.0
    health_interval: float = 1.0
    shutdown_timeout: float = 3.0
```

| Property | Description |
|---|---|
| pid_file_path | ~/.protomcp/sidecars/{name}.pid |
start_sidecars(trigger)
Starts all sidecars matching the given trigger ("server_start" or "first_tool_call").
stop_all_sidecars()
Stops all running sidecars. Sends SIGTERM, then SIGKILL after shutdown_timeout. Registered with atexit automatically.
get_registered_sidecars()
Returns a copy of the sidecar registry.
clear_sidecar_registry()
Clears the sidecar registry.
@workflow(...)
Class decorator that registers a workflow — a server-defined state machine composed of steps.
```python
from protomcp import workflow
```

Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | required | Unique workflow identifier. Steps are registered as name.step_name tools |
| description | str | "" | Human-readable description |
| allow_during | list[str] \| None | None | Glob patterns for external tools visible during the workflow |
| block_during | list[str] \| None | None | Glob patterns for external tools hidden during the workflow |
Returns: The original class (unmodified).
The class may define on_cancel(self, current_step, history) and on_complete(self, history) lifecycle methods.
```python
@workflow("deploy", allow_during=["status"])
class DeployWorkflow:
    @step(initial=True, next=["approve"], description="Review changes")
    def review(self, pr_url: str) -> StepResult:
        return StepResult(result=f"Reviewing {pr_url}")

    @step(terminal=True, description="Approve changes")
    def approve(self, reason: str) -> StepResult:
        return StepResult(result=f"Approved: {reason}")
```

@step(...)
Method decorator that marks a method as a workflow step.
```python
from protomcp import step
```

Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str \| None | method name | Step name. Defaults to the decorated method’s name if not provided |
| description | str | "" | Human-readable description |
| initial | bool | False | Mark as the workflow entry point. Exactly one step must be initial |
| next | list[str] \| None | None | Valid next step names. Required for non-terminal steps |
| terminal | bool | False | Mark as a workflow exit point. Terminal steps must not have next |
| no_cancel | bool | False | Prevent cancellation while at this step |
| allow_during | list[str] \| None | None | Step-level visibility override (replaces workflow-level, does not merge) |
| block_during | list[str] \| None | None | Step-level block override (replaces workflow-level, does not merge) |
| on_error | dict[type, str] \| None | None | Map exception types to target step names for error-driven transitions |
| requires | list[str] \| None | None | Required field names — validation fails if missing or empty |
| enum_fields | dict[str, list] \| None | None | Map of field name to valid values |
Returns: The original function (unmodified).
```python
@step(initial=True, next=["approve", "reject"], description="Review changes")
def review(self, pr_url: str) -> StepResult:
    return StepResult(result=f"Reviewing {pr_url}")
```

StepResult
Returned by step handlers to provide the result and optionally narrow the next steps.
```python
from protomcp import StepResult

@dataclass
class StepResult:
    result: str = ""
    next: list[str] | None = None
```

Fields
| Field | Type | Default | Description |
|---|---|---|---|
| result | str | "" | The result string returned to the agent |
| next | list[str] \| None | None | Narrow the valid next steps at runtime. Must be a subset of the @step decorator’s next list. If None, uses the full declared next list |
```python
return StepResult(result="Tests passed", next=["promote"])
```

WorkflowDef
```python
@dataclass
class WorkflowDef:
    name: str
    description: str
    steps: list[StepDef]
    instance: Any
    allow_during: list[str] | None = None
    block_during: list[str] | None = None
    on_cancel: Callable | None = None
    on_complete: Callable | None = None
```

get_registered_workflows()
Returns a copy of the current workflow registry.
```python
from protomcp import get_registered_workflows

workflows: list[WorkflowDef] = get_registered_workflows()
```

clear_workflow_registry()
Clears the workflow registry and any active workflow state.
```python
from protomcp import clear_workflow_registry

clear_workflow_registry()
```

configure(...)
Configures handler auto-discovery.
```python
from protomcp.discovery import configure
```

Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| handlers_dir | str | "" | Path to the directory containing handler files |
| hot_reload | bool | False | Re-import handlers on each discovery pass |
```python
configure(handlers_dir="./handlers", hot_reload=True)
```

discover_handlers()
Imports all .py files in the configured handlers_dir. Files prefixed with _ are skipped. With hot_reload=True, previously loaded modules are cleared first.
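The file-selection rule can be sketched as follows (illustrative; real discovery imports the modules rather than just listing them):

```python
import pathlib
import tempfile

def discover(handlers_dir: str) -> list[str]:
    # Sketch of the documented scan: every .py file is eligible
    # except those whose name starts with an underscore.
    return sorted(
        p.stem for p in pathlib.Path(handlers_dir).glob("*.py")
        if not p.name.startswith("_")
    )

with tempfile.TemporaryDirectory() as d:
    for name in ("tools.py", "_private.py", "extra.py", "notes.txt"):
        (pathlib.Path(d) / name).write_text("")
    print(discover(d))   # ['extra', 'tools']
```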
get_config()
Returns a copy of the current configuration dict.
reset_config()
Clears the configuration and loaded module cache.