# IR and Backends
The Intermediate Representation (IR) decouples the fluent builder API from execution engines. Builders compile to a tree of frozen dataclasses, which backends then compile to engine-specific runnables.
> **Multiple execution backends**
>
> adk-fluent supports multiple execution backends. The IR is the universal contract — every backend compiles from the same IR tree. See Execution Backends for the full comparison.
| Backend | Status | Durability |
|---|---|---|
| ADK | Stable (default) | No |
| Temporal | In Development | Yes — crash recovery, replay |
| asyncio | In Development | No |
| DBOS / Prefect | Conceptual — not yet implemented | — |
## Why IR?
- **Inspection:** Walk and analyze the agent tree without building ADK objects
- **Testing:** Use mock backends for deterministic testing without LLM calls
- **Visualization:** Generate Mermaid diagrams from the IR tree
- **Portability:** The same IR compiles to ADK, Temporal, or asyncio backends
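The inspection point can be sketched with a small tree walk. The node classes below are illustrative stand-ins, not adk_fluent's real implementations — only the names `AgentNode`, `SequenceNode`, `name`, and `children` are taken from this page:

```python
from dataclasses import dataclass

# Toy stand-ins for IR nodes; the real classes live in adk_fluent.
@dataclass(frozen=True)
class AgentNode:
    name: str

@dataclass(frozen=True)
class SequenceNode:
    children: tuple = ()

def agent_names(node):
    """Recursively collect agent names from an IR tree."""
    if isinstance(node, AgentNode):
        return [node.name]
    names = []
    for child in getattr(node, "children", ()):
        names.extend(agent_names(child))
    return names

ir = SequenceNode(children=(AgentNode("classifier"), AgentNode("resolver")))
print(agent_names(ir))  # ['classifier', 'resolver']
```

Because the walk only touches `name` and `children`, the same traversal works on any composite node without constructing ADK objects.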
## IR Nodes
Every builder maps to an IR node type:
| Builder | IR Node | ADK Type |
|---|---|---|
| `Agent` | `AgentNode` | `LlmAgent` |
| `>>` (sequence) | `SequenceNode` | `SequentialAgent` |
IR nodes are frozen dataclasses – immutable and safe to inspect:
```python
from adk_fluent import Agent

pipeline = Agent("a").instruct("Step 1.") >> Agent("b").instruct("Step 2.")
ir = pipeline.to_ir()

# SequenceNode with two AgentNode children
print(type(ir).__name__)    # SequenceNode
print(len(ir.children))     # 2
print(ir.children[0].name)  # a
print(ir.children[1].name)  # b
```
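The immutability guarantee is the standard frozen-dataclass behavior: assignment raises `FrozenInstanceError`. A minimal sketch with a stand-in node class (the real IR classes are adk_fluent's):

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class AgentNode:  # stand-in for an adk_fluent IR node
    name: str

node = AgentNode("a")
try:
    node.name = "b"  # frozen dataclasses reject attribute assignment
except FrozenInstanceError:
    print("IR nodes are immutable")
```

This is why handing an IR tree to multiple backends, or inspecting it concurrently, is safe: no consumer can mutate it.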
## Backend Protocol
A backend compiles IR nodes into runnable objects:
```python
from typing import Any, AsyncIterator, Protocol

class Backend(Protocol):
    name: str

    def compile(self, node, config=None) -> Any: ...
    async def run(self, compiled, prompt, **kwargs) -> list[AgentEvent]: ...
    async def stream(self, compiled, prompt, **kwargs) -> AsyncIterator[AgentEvent]: ...

    @property
    def capabilities(self) -> EngineCapabilities: ...
```
Every backend declares its capabilities (streaming, durability, parallelism, etc.) via EngineCapabilities. See Execution Backends for the full capability matrix.
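A deterministic mock backend — the testing use case from "Why IR?" — only needs to satisfy the protocol's shape. The sketch below is a hypothetical implementation, not part of adk_fluent; the event dict shape is an assumption:

```python
import asyncio
from typing import Any, AsyncIterator

class MockBackend:
    """Deterministic backend for tests: no LLM calls, echoes the prompt."""
    name = "mock"

    def compile(self, node, config=None) -> Any:
        return node  # nothing to compile; hand the IR straight back

    async def run(self, compiled, prompt, **kwargs) -> list:
        # run() is just stream() collected into a list
        return [event async for event in self.stream(compiled, prompt, **kwargs)]

    async def stream(self, compiled, prompt, **kwargs) -> AsyncIterator:
        yield {"author": "mock", "text": f"echo: {prompt}"}

backend = MockBackend()
events = asyncio.run(backend.run(None, "hello"))
print(events)  # [{'author': 'mock', 'text': 'echo: hello'}]
```

Defining `run` in terms of `stream` keeps the two entry points consistent, which is the usual pattern when a protocol offers both batch and streaming variants.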
## Backend Implementations
### ADK

**Status:** Stable — production-ready

Compiles IR to native ADK objects. `.to_app()` is shorthand for this.
```python
from adk_fluent.backends.adk import ADKBackend
from adk_fluent import Agent, ExecutionConfig

backend = ADKBackend()
ir = (Agent("a") >> Agent("b")).to_ir()
app = backend.compile(ir, config=ExecutionConfig(app_name="demo"))
```
Or via builder shorthand:
```python
agent = Agent("helper", "gemini-2.5-flash").instruct("Help.")
# .engine("adk") is implicit — no need to specify
```
### Temporal

**Status:** In Development — API may change

Compiles IR to Temporal workflows and activities for durable execution.
```python
from temporalio.client import Client
from adk_fluent import Agent
from adk_fluent.backends.temporal import TemporalBackend

client = await Client.connect("localhost:7233")
backend = TemporalBackend(client=client, task_queue="agents")
ir = (Agent("a") >> Agent("b")).to_ir()
plan = backend.compile(ir)
```
Or via builder shorthand:
```python
agent = Agent("helper").instruct("Help.").engine("temporal", client=client)
```
See Temporal Guide for detailed usage.
### asyncio

**Status:** In Development — reference implementation

Pure-Python IR interpreter with no external dependencies.
```python
from adk_fluent import Agent
from adk_fluent.backends.asyncio_backend import AsyncioBackend

backend = AsyncioBackend()
ir = (Agent("a") >> Agent("b")).to_ir()
result = backend.compile(ir)
```
Or via builder shorthand:
```python
agent = Agent("helper").instruct("Help.").engine("asyncio")
```
## ExecutionConfig
Configuration for the compiled `App`:
```python
from adk_fluent import ExecutionConfig, CompactionConfig

config = ExecutionConfig(
    app_name="my_app",               # App name (default: "adk_fluent_app")
    resumable=True,                  # Enable session resumability
    compaction=CompactionConfig(     # Event compaction settings
        interval=10,                 # Compact every N events
        overlap=2,                   # Keep N events of overlap
    ),
    middlewares=(retry_mw, log_mw),  # Middleware stack
)
```
| Field | Type | Default | Description |
|---|---|---|---|
| `app_name` | `str` | `"adk_fluent_app"` | Application name |
| `resumable` | `bool` | | Enable session resumability |
| `compaction` | `CompactionConfig` | | Event compaction settings |
| `middlewares` | `tuple` | | Middleware stack |
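The interplay of `interval` and `overlap` can be sketched numerically. The semantics below — compaction fires every `interval` events and the last `overlap` events stay live to carry context into the next window — are inferred from the field comments, not confirmed by adk_fluent's implementation:

```python
def compaction_windows(n_events, interval=10, overlap=2):
    """Sketch: which event ranges get compacted under interval/overlap.
    Semantics assumed: compact every `interval` events, keeping the last
    `overlap` events of each window uncompacted."""
    windows = []
    start = 0
    for end in range(interval, n_events + 1, interval):
        windows.append((start, end - overlap))  # compacted span [start, end-overlap)
        start = end - overlap                   # the overlap carries forward
    return windows

print(compaction_windows(25))  # [(0, 8), (8, 18)]
```

With 25 events, `interval=10`, `overlap=2`: the first compaction summarizes events 0–7, the second 8–17, and events 18–24 remain uncompacted until the next threshold.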
## Visualization
`.to_mermaid()` generates a Mermaid graph from the IR:
```python
from adk_fluent import Agent

pipeline = Agent("classifier") >> Agent("resolver") >> Agent("responder")
print(pipeline.to_mermaid())
```
Output:
```mermaid
graph TD
    n1[["classifier_then_resolver_then_responder (sequence)"]]
    n2["classifier"]
    n3["resolver"]
    n4["responder"]
    n2 --> n3
    n3 --> n4
```
When agents have `.produces()` or `.consumes()` annotations, the diagram includes data-flow edges.
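The diagram generation itself is a straightforward walk over the IR. A hypothetical re-creation of the pipeline portion of the output above (node ids `n2`, `n3`, ... mirror the sample; `to_mermaid`'s real logic in adk_fluent may differ):

```python
def pipeline_mermaid(names):
    """Emit a Mermaid `graph TD` body for a linear pipeline of agent names."""
    lines = ["graph TD"]
    ids = {}
    for i, name in enumerate(names, start=2):  # n1 is the sequence wrapper node
        ids[name] = f"n{i}"
        lines.append(f'    {ids[name]}["{name}"]')
    for a, b in zip(names, names[1:]):          # chain consecutive stages
        lines.append(f"    {ids[a]} --> {ids[b]}")
    return "\n".join(lines)

print(pipeline_mermaid(["classifier", "resolver", "responder"]))
```

Because the IR is a plain tree, custom renderers like this can be layered on without touching any backend.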
**See also**

- Execution Backends — full backend comparison, capability matrix, and selection guide
- Temporal Guide — Temporal-specific patterns, determinism rules, and crash recovery
- Execution — `.ask()`, `.stream()`, `.session()` and how they interact with backends