AgentDbg’s LangChain integration gives you full observability into your LangChain and LangGraph agents without manually wrapping each call. Add the callback handler once, and every LLM invocation and tool execution is automatically recorded to the active AgentDbg run—ready to inspect in the timeline viewer.

What gets captured

The AgentDbgLangChainCallbackHandler hooks into LangChain’s built-in callback system and records two event types:
  • LLM calls — triggered by on_llm_start / on_chat_model_start and on_llm_end. Records model name, prompt, response text, and token usage.
  • Tool calls — triggered by on_tool_start / on_tool_end / on_tool_error. Records tool name, input arguments, result, and error status.
LLM errors are recorded as LLM_CALL events with status="error". Tool errors are recorded as TOOL_CALL events with status="error" and include the error message.
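The recording model can be sketched with a minimal stand-in. Everything below (Event, RunRecorder, their fields) is illustrative only, not the real AgentDbg event schema; it just shows the shape of the two event types and how errors are recorded with status="error" rather than swallowed:

```python
# Illustrative stand-in for the handler's event recording; the real
# AgentDbg event schema may differ.
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str            # "LLM_CALL" or "TOOL_CALL"
    name: str            # model or tool name
    status: str = "ok"   # "ok" or "error"
    detail: str = ""     # response text, result, or error message

@dataclass
class RunRecorder:
    events: list = field(default_factory=list)

    def record_llm(self, model, response=None, error=None):
        # An LLM failure still produces an LLM_CALL event, with status="error".
        status = "error" if error else "ok"
        self.events.append(Event("LLM_CALL", model, status, error or response or ""))

    def record_tool(self, tool, result=None, error=None):
        # Likewise, a failing tool produces a TOOL_CALL event carrying the message.
        status = "error" if error else "ok"
        self.events.append(Event("TOOL_CALL", tool, status, error or result or ""))

recorder = RunRecorder()
recorder.record_llm("some-model", response="hello")
recorder.record_tool("lookup", error="timeout")
print([(e.kind, e.status) for e in recorder.events])
# [('LLM_CALL', 'ok'), ('TOOL_CALL', 'error')]
```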

Installation

Install AgentDbg with the LangChain extra:
pip install "agentdbg[langchain]"
This installs langchain-core alongside AgentDbg. If you import the integration without langchain-core present, you get a clear ImportError with install instructions.
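That guard typically follows the standard optional-dependency pattern. This sketch is not the actual AgentDbg source; the function name and message are illustrative:

```python
# Sketch of an optional-dependency guard (illustrative, not AgentDbg's code).
def load_langchain_integration():
    try:
        import langchain_core  # noqa: F401
    except ImportError as exc:
        # Re-raise with actionable install instructions instead of a bare failure.
        raise ImportError(
            "The LangChain integration requires langchain-core. "
            'Install it with: pip install "agentdbg[langchain]"'
        ) from exc
    # ... with the dependency present, return the real handler class here
```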

Setting up the handler

1

Install the package

pip install "agentdbg[langchain]"
2

Wrap your entrypoint with @trace

The handler requires an active AgentDbg run. Use the @trace decorator on the function that calls your chain:
from agentdbg import trace
from agentdbg.integrations import AgentDbgLangChainCallbackHandler

@trace(name="my langchain agent")
def run_agent():
    ...
Alternatively, set AGENTDBG_IMPLICIT_RUN=1 in your environment to start a run automatically without the decorator.
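An environment-flag fallback of this kind usually looks like the following sketch. The ensure_run function and the run placeholder are hypothetical, not AgentDbg's actual implementation:

```python
import os

def ensure_run(active_run=None):
    # Use the active run if one exists; otherwise start one implicitly
    # when AGENTDBG_IMPLICIT_RUN=1 is set, else fail loudly.
    if active_run is not None:
        return active_run
    if os.environ.get("AGENTDBG_IMPLICIT_RUN") == "1":
        return {"implicit": True}  # placeholder for a real run object
    raise RuntimeError("No active run; wrap your entrypoint with @trace")

os.environ["AGENTDBG_IMPLICIT_RUN"] = "1"
run = ensure_run()  # implicit run, no decorator needed
```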
3

Create the handler

Inside your traced function, instantiate the handler:
handler = AgentDbgLangChainCallbackHandler()
config = {"callbacks": [handler]}
4

Pass the config to your chain

Pass config to any LangChain chain, LLM, or tool invocation:
result = my_chain.invoke(input_data, config=config)

Full example

This example uses a fake LLM so it runs without any API key or network calls:
from agentdbg import trace
from agentdbg.integrations import AgentDbgLangChainCallbackHandler

from langchain_core.language_models.fake import FakeListLLM
from langchain_core.tools import tool


@tool
def lookup(query: str) -> str:
    """Look up something (stub tool for demo)."""
    return f"result for: {query}"


@trace(name="langchain minimal example")
def run_agent():
    handler = AgentDbgLangChainCallbackHandler()
    config = {"callbacks": [handler]}

    llm = FakeListLLM(responses=["Traced LLM response."])
    result = lookup.invoke({"query": "demo"}, config=config)
    _ = llm.invoke("Summarize.", config=config)
    return result


if __name__ == "__main__":
    run_agent()
    print("Run complete. View with: agentdbg view")
Run it and open the timeline:
uv run --extra langchain python examples/langchain/minimal.py
agentdbg view

Guardrails with LangChain

All AgentDbg guardrails work with the callback handler. When a guardrail fires (for example, stop_on_loop detecting a repeated pattern), the handler immediately stops the run—bypassing LangChain’s except Exception error handling and LangGraph’s graph executor—so no further token-spending calls are made.
from agentdbg import AgentDbgLoopAbort, trace
from agentdbg.integrations import AgentDbgLangChainCallbackHandler


@trace(stop_on_loop=True, stop_on_loop_min_repetitions=3)
def run_agent():
    handler = AgentDbgLangChainCallbackHandler()
    return graph.invoke(state, config={"callbacks": [handler]})


try:
    run_agent()
except AgentDbgLoopAbort as exc:
    print(f"Stopped the loop: {exc}")
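One plausible way to bypass broad except Exception handlers, as described above, is to derive the abort from BaseException, which such handlers do not catch. This stand-in (LoopAbort here is illustrative, not the real AgentDbgLoopAbort) demonstrates the escape:

```python
# Illustrative only: a BaseException-derived abort escapes `except Exception`,
# which is one way a guardrail can halt a framework's own error handling.
class LoopAbort(BaseException):
    pass

def framework_step():
    try:
        raise LoopAbort("repeated tool call detected")
    except Exception:
        # Broad handlers like this do NOT catch BaseException subclasses,
        # so the abort propagates past the framework's retry/recovery logic.
        return "swallowed"

try:
    framework_step()
    caught = False
except LoopAbort:
    caught = True
print(caught)  # True: the abort escaped the `except Exception` block
```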

Reusing the handler across runs

If you call your traced function multiple times with the same handler instance, call handler.reset() between runs to clear the abort state:
handler = AgentDbgLangChainCallbackHandler()

for input_data in dataset:
    handler.reset()
    run_agent_with_handler(handler, input_data)

Checking for aborts after invoke

As a defensive fallback, the handler stores any guardrail exception on handler.abort_exception. Call handler.raise_if_aborted() after invoke() returns to re-raise it:
result = my_chain.invoke(input_data, config={"callbacks": [handler]})
handler.raise_if_aborted()  # raises AgentDbgGuardrailExceeded if a guardrail fired
The handler requires an active AgentDbg run. Wrap your entrypoint with @trace, use traced_run(...), or set AGENTDBG_IMPLICIT_RUN=1 in your environment.
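The store-then-re-raise fallback, together with reset(), can be sketched as follows. This Handler class is a stand-in for illustration, not the real AgentDbgLangChainCallbackHandler:

```python
# Stand-in showing the abort_exception / raise_if_aborted / reset pattern.
class Handler:
    def __init__(self):
        self.abort_exception = None

    def on_event(self, exc=None):
        # If a guardrail fires mid-run, remember the exception...
        if exc is not None:
            self.abort_exception = exc

    def raise_if_aborted(self):
        # ...so callers can surface it after invoke() has returned.
        if self.abort_exception is not None:
            raise self.abort_exception

    def reset(self):
        # Clear abort state so the handler can be reused across runs.
        self.abort_exception = None

h = Handler()
h.on_event(RuntimeError("guardrail fired"))
try:
    h.raise_if_aborted()
    aborted = False
except RuntimeError:
    aborted = True
h.reset()
print(aborted, h.abort_exception)  # True None
```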