AgentDbg’s OpenAI Agents SDK integration registers a tracing processor that listens to the SDK’s native span events and converts them into AgentDbg trace records. Import the module once, wrap your entrypoint with @trace, and every LLM call, tool call, and agent handoff appears in your local timeline—no API key required for the example.

What gets captured

The adapter translates three OpenAI Agents SDK span types:
  • Generation spans (GenerationSpanData) — records model name, prompt input, response output, token usage, and model config as an LLM_CALL event.
  • Function spans (FunctionSpanData) — records tool name, input arguments, result, and error status as a TOOL_CALL event.
  • Handoff spans (HandoffSpanData) — records a TOOL_CALL named "handoff" with from_agent and to_agent stored under meta.openai_agents.handoff.
Framework-specific span details (trace ID, span ID, parent ID, timestamps, model config) are stored in meta.openai_agents.* and do not pollute the main event payload.
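As an illustration of that layout, a handoff might be recorded roughly like this. Only the event type, the `"handoff"` name, and the `meta.openai_agents.handoff` location come from the mapping above; the surrounding field names and ID values are assumptions for the sketch:

```python
# Hypothetical sketch of an AgentDbg TOOL_CALL record for a handoff span.
# The exact record schema is illustrative: only the event type, the
# "handoff" name, and meta.openai_agents.handoff reflect the mapping
# described above. IDs and other keys are made-up example values.
handoff_event = {
    "type": "TOOL_CALL",
    "name": "handoff",
    "meta": {
        "openai_agents": {
            "trace_id": "trace_abc123",  # assumed example IDs
            "span_id": "span_def456",
            "handoff": {
                "from_agent": "router_agent",
                "to_agent": "docs_agent",
            },
        }
    },
}

# Framework-specific detail stays under meta.openai_agents.*,
# keeping the main payload clean.
print(handoff_event["meta"]["openai_agents"]["handoff"]["to_agent"])
```

The key point is that SDK-specific detail is namespaced under `meta.openai_agents.*`, so tools that read the main event payload never see framework internals.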

Installation

Install AgentDbg with the OpenAI extra:
pip install "agentdbg[openai]"
This installs openai-agents alongside AgentDbg. If you import the integration without openai-agents present, you get a clear ImportError with install instructions.
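If you want to fail fast with the same kind of message in your own startup code, a guard like the following mirrors that behavior. The `require_module` helper is hypothetical (not part of AgentDbg); it only illustrates the check-and-raise pattern:

```python
import importlib.util


def require_module(name: str, install_hint: str) -> None:
    # Hypothetical helper (not part of AgentDbg): fail fast with a clear
    # message when an optional dependency is missing, mirroring the
    # ImportError the integration raises when openai-agents is absent.
    if importlib.util.find_spec(name) is None:
        raise ImportError(
            f"{name} is not installed. Install it with: {install_hint}"
        )


# The integration depends on the "agents" package shipped by openai-agents:
# require_module("agents", 'pip install "agentdbg[openai]"')
```

Checking with `importlib.util.find_spec` avoids actually importing the package, so the guard is cheap to run at startup.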

Setting up the adapter

1. Install the package

pip install "agentdbg[openai]"
2. Import the integration module

Importing agentdbg.integrations.openai_agents registers the tracing processor automatically. You only need to import it once, anywhere before your agent runs:
from agentdbg import trace
from agentdbg.integrations import openai_agents  # registers the processor
3. Wrap your entrypoint with @trace

The adapter only records events while an active AgentDbg run is open. Wrap your entrypoint:
@trace(name="my openai agents run")
def run_agent():
    # Runner comes from the openai-agents SDK (from agents import Runner);
    # agent and input are your own agent definition and prompt.
    result = Runner.run_sync(agent, input)
    return result

Full example

This example emits deterministic fake spans without making any model or network calls, so it works with no API key:
from agentdbg import trace
from agentdbg.integrations import openai_agents
from agents.tracing import (
    function_span,
    generation_span,
    handoff_span,
    set_trace_processors,
    trace as agents_trace,
)


@trace(name="OpenAI Agents minimal example")
def run_agent():
    # Use only the AgentDbg processor for this local example.
    set_trace_processors([openai_agents.PROCESSOR])

    with agents_trace("AgentDbg OpenAI Agents example"):
        with generation_span(
            input=[{"role": "user", "content": "Summarize AgentDbg in one sentence."}],
            output=[
                {
                    "role": "assistant",
                    "content": "AgentDbg is a local-first timeline debugger for AI agents.",
                }
            ],
            model="gpt-4o-mini",
            model_config={"temperature": 0.0},
            usage={"prompt_tokens": 10, "completion_tokens": 12, "total_tokens": 22},
        ):
            pass

        with function_span(
            name="lookup_docs",
            input={"query": "AgentDbg integrations"},
            output={"hits": 2},
        ):
            pass

        with handoff_span(from_agent="router_agent", to_agent="docs_agent"):
            pass


if __name__ == "__main__":
    run_agent()
    print("Run complete. View with: agentdbg view")
Run it and open the timeline:
uv run --extra openai python examples/openai_agents/minimal.py
agentdbg view

Guardrails with the OpenAI Agents SDK

All AgentDbg guardrails work with the tracing processor. When a guardrail fires, the processor stops the run immediately: it raises in a way that bypasses the SDK’s broad except Exception handler, so no further model calls are made.
from agentdbg import trace, AgentDbgLoopAbort
from agentdbg.integrations import openai_agents


@trace(stop_on_loop=True)
def run_agent():
    result = Runner.run_sync(agent, input)
    return result


try:
    run_agent()
except AgentDbgLoopAbort as exc:
    print(f"Loop detected: {exc}")

Checking for aborts after run

As a defensive fallback, the exception is also stored on PROCESSOR.abort_exception. Use PROCESSOR.raise_if_aborted() to re-raise it if you need to check after a run completes:
from agentdbg.integrations.openai_agents import PROCESSOR

result = Runner.run_sync(agent, input)
PROCESSOR.raise_if_aborted()  # raises AgentDbgGuardrailExceeded if a guardrail fired
The adapter records events only while an explicit AgentDbg run is active. Wrap your entrypoint with @trace or traced_run(...).
The minimal example uses low-level SDK tracing spans with deterministic fake data—no API key needed and no model calls made. It’s a safe way to verify your setup before connecting to a real model.