Getting Started
Note
This documentation is for adk-fluent v0.13.5 (PyPI).
This page gets you from zero to a working agent in 5 minutes. By the end, you’ll understand the builder pattern, the expression operators, and when to use each.
Install
pip install adk-fluent
Autocomplete works immediately – the package ships with .pyi type stubs for every builder. Type Agent("name"). and your IDE shows all available methods with type hints.
IDE Setup
VS Code – install the Pylance extension (included in the Python extension pack). Autocomplete and type checking work out of the box.
PyCharm – works automatically. The .pyi stubs are bundled in the package and PyCharm discovers them on install.
Neovim (LSP) – use pyright as your language server. Stubs are picked up automatically.
Tip
AI Coding Agents – adk-fluent ships pre-configured rules for Claude Code, Cursor, Copilot, Windsurf, Cline, and Zed. See Editor & AI Agent Setup for details.
Discover the API
The builder pattern catches mistakes at definition time, not runtime:
from adk_fluent import Agent
agent = Agent("demo")
agent. # <- autocomplete shows: .model(), .instruct(), .tool(), .build(), ...
# Typos are caught immediately:
agent.instuction("oops") # -> AttributeError: 'instuction' is not a recognized field.
# Did you mean: 'instruction'?
# Inspect any builder's current state:
print(agent.model("gemini-2.5-flash").instruct("Help.").explain())
# Agent: demo
# Config fields: model, instruction
# See everything available:
print(dir(agent)) # All methods including forwarded ADK fields
Why this matters
In native ADK, LlmAgent(instuction="...") silently ignores the misspelled keyword. The agent runs with no instruction and you debug for an hour wondering why it produces garbage. adk-fluent raises immediately.
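The fail-fast behavior above can be modeled in a few lines of plain Python. A minimal sketch (not adk-fluent's actual implementation) of rejecting unknown fields with `__getattr__` and suggesting corrections with `difflib`:

```python
import difflib

class StrictBuilder:
    """Toy builder: unknown attributes fail fast with a suggestion (illustrative only)."""
    _FIELDS = ("model", "instruction", "tools")

    def __init__(self, name):
        self.name = name
        self.config = {}

    def instruction(self, text):
        self.config["instruction"] = text
        return self  # returning self is what makes chaining work

    def __getattr__(self, attr):
        # Only called for attributes that don't exist -- i.e. typos.
        hint = difflib.get_close_matches(attr, self._FIELDS, n=1)
        msg = f"{attr!r} is not a recognized field."
        if hint:
            msg += f" Did you mean: {hint[0]!r}?"
        raise AttributeError(msg)

try:
    StrictBuilder("demo").instuction("oops")
except AttributeError as e:
    print(e)  # 'instuction' is not a recognized field. Did you mean: 'instruction'?
```

A Pydantic model with `extra="forbid"` achieves a similar effect for keyword arguments; the point is that the misspelling surfaces at definition time, not after a silent no-op run.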
Your First Agent
from adk_fluent import Agent
agent = Agent("helper", "gemini-2.5-flash").instruct("You are a helpful assistant.").build()
That’s it. agent is a real google.adk.agents.llm_agent.LlmAgent – use it with adk web, adk run, or pass it to any ADK API.
Your First Pipeline
Chain agents sequentially with .step() or the >> operator:
from adk_fluent import Agent, Pipeline
# Builder style -- explicit, great for complex configurations
pipeline = (
Pipeline("research")
.step(Agent("searcher", "gemini-2.5-flash").instruct("Search for information."))
.step(Agent("writer", "gemini-2.5-flash").instruct("Write a summary."))
.build()
)
# Operator style -- concise, great for composing reusable parts
pipeline = (
Agent("searcher", "gemini-2.5-flash").instruct("Search for information.")
>> Agent("writer", "gemini-2.5-flash").instruct("Write a summary.")
).build()
Both produce an identical SequentialAgent; the trade-offs between the two styles are covered below in Two Styles, Same Result.
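Operator composition like `>>` is ordinary Python operator overloading. A toy sketch (not adk-fluent's real classes) of how `__rshift__` can fold agents into an ordered pipeline:

```python
class Step:
    """Toy agent stand-in; `>>` chains steps in order (illustrative only)."""
    def __init__(self, name):
        self.name = name

    def __rshift__(self, other):
        return Seq([self]) >> other

class Seq:
    def __init__(self, steps):
        self.steps = steps

    def __rshift__(self, other):
        # Flatten so (a >> b) >> c and a >> (b >> c) give the same step order.
        more = other.steps if isinstance(other, Seq) else [other]
        return Seq([*self.steps, *more])

p = Step("searcher") >> Step("writer") >> Step("editor")
print([s.name for s in p.steps])  # ['searcher', 'writer', 'editor']
```

Because each operator returns a new composite object, partial expressions stay reusable values you can name and recombine.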
Parallel Execution
Run agents concurrently with .branch() or the | operator:
from adk_fluent import Agent, FanOut
fanout = (
FanOut("parallel_research")
.branch(Agent("web", "gemini-2.5-flash").instruct("Search the web."))
.branch(Agent("papers", "gemini-2.5-flash").instruct("Search papers."))
.build()
)
# Or with operators:
fanout = (
Agent("web", "gemini-2.5-flash").instruct("Search the web.")
| Agent("papers", "gemini-2.5-flash").instruct("Search papers.")
).build()
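Conceptually, a fan-out starts every branch at once and collects results in declaration order. A sketch of that semantics with asyncio (the real FanOut internals may differ):

```python
import asyncio

async def branch(name, delay):
    # Stand-in for one branch agent doing independent work.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def fan_out(branches):
    # gather() starts all coroutines concurrently and preserves input order,
    # even when a later-listed branch finishes first.
    return await asyncio.gather(*(branch(n, d) for n, d in branches))

results = asyncio.run(fan_out([("web", 0.02), ("papers", 0.01)]))
print(results)  # ['web: done', 'papers: done']
```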
Loops
Iterate until a condition is met:
from adk_fluent import Agent, Loop
loop = (
Loop("refine")
.step(Agent("writer", "gemini-2.5-flash").instruct("Write draft."))
.step(Agent("critic", "gemini-2.5-flash").instruct("Critique."))
.max_iterations(3)
.build()
)
# Or with operators:
loop = (
Agent("writer", "gemini-2.5-flash").instruct("Write draft.")
>> Agent("critic", "gemini-2.5-flash").instruct("Critique.")
) * 3
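The loop's semantics are "repeat the steps until an exit signal or the iteration cap." A toy model of the write-critique cycle (illustrative, not the Loop internals):

```python
def refine(drafts, max_iterations=3):
    """Run writer/critic rounds; stop early when the critic approves."""
    draft = ""
    for i in range(max_iterations):
        draft = drafts[i]          # "writer" produces the next draft
        if "APPROVED" in draft:    # "critic" signals the loop to exit
            return draft, i + 1
    return draft, max_iterations   # cap reached without approval

final, rounds = refine(["v1", "v2 APPROVED", "v3"])
print(final, rounds)  # v2 APPROVED 2
```

The cap matters: without it, a critic that never approves would loop forever.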
Two Styles, Same Result
Every workflow can be expressed two ways. Both produce identical ADK objects:
pipeline = (
Pipeline("research")
.step(Agent("web", "gemini-2.5-flash").instruct("Search web.").writes("web_data"))
.step(Agent("analyst", "gemini-2.5-flash").instruct("Analyze {web_data}."))
.build()
)
pipeline = (
Agent("web", "gemini-2.5-flash").instruct("Search web.").writes("web_data")
>> Agent("analyst", "gemini-2.5-flash").instruct("Analyze {web_data}.")
).build()
The builder style shines for complex multi-step workflows where each step is configured with callbacks, tools, and context. The operator style excels at composing reusable sub-expressions:
# Reusable sub-expressions with operators
classify = Agent("classifier", "gemini-2.5-flash").instruct("Classify intent.").writes("intent")
resolve = Agent("resolver", "gemini-2.5-flash").instruct("Resolve {intent}.").tool(lookup_customer)
respond = Agent("responder", "gemini-2.5-flash").instruct("Draft response.")
support_pipeline = classify >> resolve >> respond
# Reuse classify in a different pipeline
escalation_pipeline = classify >> Agent("escalate", "gemini-2.5-flash").instruct("Escalate.")
Putting It Together
Here’s a real-world pipeline combining sequential steps, state flow, and context isolation:
from adk_fluent import Agent, S, C
MODEL = "gemini-2.5-flash"
# Capture the user's message into state
# Classify with no history (C.none() = sees only the current message)
# Route to specialist agents
# Each agent writes to a named state key for explicit data contracts
support = (
S.capture("customer_message")
>> Agent("classifier", MODEL)
.instruct("Classify intent: billing, technical, or general.")
.context(C.none())
.writes("intent")
>> Agent("resolver", MODEL)
.instruct("Resolve the {intent} issue for: {customer_message}")
.tool(lookup_customer)
.tool(create_ticket)
.writes("resolution")
>> Agent("responder", MODEL)
.instruct("Draft a response summarizing: {resolution}")
)
This pipeline:
Captures the user message into state with S.capture() (State Transforms)
Isolates context with C.none() so the classifier sees only the current message (Context Engineering)
Flows data through named keys with .writes() (Data Flow)
Attaches tools with .tool() (Builders)
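The data contract here is just a shared state dict: each step writes its output under the key it declared, and later instructions interpolate those keys. A hypothetical sketch of that flow (the step functions are stand-ins, not real agents):

```python
state = {"customer_message": "My invoice is wrong"}

def run_step(state, writes, fn):
    # Each step reads the shared state and writes its result under a named key.
    state[writes] = fn(state)

run_step(state, "intent", lambda s: "billing")
run_step(state, "resolution", lambda s: f"resolved the {s['intent']} issue")

# The responder's instruction template interpolates the named keys:
reply = "Draft a response summarizing: {resolution}".format(**state)
print(reply)  # Draft a response summarizing: resolved the billing issue
```

Named keys make the pipeline's data dependencies explicit: the resolver cannot quietly depend on anything the classifier didn't declare.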
Test Without an API Key
You don’t need a Gemini API key to verify your agent logic works. .mock() replaces the LLM with canned responses:
from adk_fluent import Agent
agent = (
Agent("helper", "gemini-2.5-flash")
.instruct("You are a helpful assistant.")
.mock(["Hello! I'm here to help."])
)
# Runs instantly, no API call, no cost
print(agent.ask("Hi there"))
# => Hello! I'm here to help.
This is how all 68 cookbook examples run in CI — every example uses .mock() so tests pass without credentials. Use .mock() during development, remove it when you’re ready for real LLM calls.
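The canned-response idea is simple to picture: a queue of scripted replies consumed in order, with no network involved. A toy sketch (not the real .mock() machinery):

```python
from collections import deque

class MockAgent:
    """Toy stand-in: returns scripted responses instead of calling an LLM."""
    def __init__(self, responses):
        self._responses = deque(responses)

    def ask(self, prompt):
        # Pop the next canned reply; a real mock might also record the prompt
        # so tests can assert on what the agent was asked.
        return self._responses.popleft()

agent = MockAgent(["Hello! I'm here to help."])
print(agent.ask("Hi there"))  # Hello! I'm here to help.
```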
Tip
.test() — inline smoke tests
For quick validation, chain .test() directly:
agent.test("What's 2+2?", contains="4") # passes silently or raises AssertionError
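An inline smoke test is just "ask, then assert the reply contains a substring." A minimal sketch of the idea (not .test()'s actual implementation):

```python
def smoke_test(ask, prompt, contains):
    """Pass silently or raise AssertionError, like the inline .test() above."""
    reply = ask(prompt)
    assert contains in reply, f"expected {contains!r} in reply {reply!r}"

smoke_test(lambda p: "2 + 2 = 4", "What's 2+2?", contains="4")  # passes silently
```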
See What the LLM Sees
One of the most powerful debugging tools: .llm_anatomy() shows the exact prompt, context, and tools the LLM receives.
from adk_fluent import Agent, C
agent = (
Agent("classifier", "gemini-2.5-flash")
.instruct("Classify the customer's intent.")
.context(C.none())
.writes("intent")
)
agent.llm_anatomy()
# System prompt: Classify the customer's intent.
# Context: none (C.none() — current turn only)
# Output key: intent
# Tools: (none)
This prevents the #1 debugging nightmare: “why is my agent producing garbage?” The answer is always in what the LLM sees.
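You can think of the report as a straight dump of the builder's configured fields. A hypothetical rendering (the field names here are assumptions for illustration, not the real internals):

```python
def anatomy(config):
    # Render what the LLM would receive from a builder's configured fields.
    tools = ", ".join(config.get("tools", [])) or "(none)"
    return "\n".join([
        f"System prompt: {config.get('instruction', '(none)')}",
        f"Context: {config.get('context', 'full history')}",
        f"Output key: {config.get('writes', '(none)')}",
        f"Tools: {tools}",
    ])

report = anatomy({
    "instruction": "Classify the customer's intent.",
    "context": "none (current turn only)",
    "writes": "intent",
})
print(report)
```

Defaulting absent fields to explicit placeholders like "(none)" is what makes the report useful: a missing instruction is visible instead of silently blank.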
Validate and Debug
# Catch config errors before runtime
agent = Agent("x", "gemini-2.5-flash").instruct("Help.").validate()
# See what the builder has configured
agent.explain()
# Generate a visual topology diagram
pipeline.to_mermaid()
# Full diagnostic report
pipeline.doctor()
See Error Reference for every error type with fix-it examples.
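A topology diagram is cheap to derive from a pipeline's ordered steps. A hypothetical sketch of how a sequential chain could render to Mermaid (the real to_mermaid() handles parallel and loop nodes too):

```python
def to_mermaid(steps):
    # Emit a left-to-right Mermaid flowchart: one edge per consecutive pair.
    lines = ["flowchart LR"]
    for a, b in zip(steps, steps[1:]):
        lines.append(f"    {a} --> {b}")
    return "\n".join(lines)

print(to_mermaid(["searcher", "writer", "editor"]))
# flowchart LR
#     searcher --> writer
#     writer --> editor
```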
Async Environments (Jupyter, FastAPI)
Warning
.ask() and .map() are sync methods. They will raise RuntimeError if called inside an async event loop (Jupyter notebooks, FastAPI endpoints, etc.).
Use the async variants instead:
# In Jupyter or FastAPI:
result = await agent.ask_async("What is the capital of France?")
# Streaming:
async for chunk in agent.stream("Tell me a story"):
    print(chunk, end="")

# Multi-turn conversation:
async with agent.session() as chat:
    print(await chat.send("Hi"))
    print(await chat.send("Tell me more"))
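The RuntimeError exists because a sync call cannot block inside an already-running event loop. The detection works roughly like this sketch (adk-fluent's actual check may differ):

```python
import asyncio

def guard_sync_call():
    # Inside a running loop (Jupyter cell, FastAPI handler), blocking here
    # would stall the loop itself, so the safe move is to refuse.
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return "no event loop: safe to run synchronously"
    raise RuntimeError("called inside an event loop; use the async variant")

print(guard_sync_call())  # no event loop: safe to run synchronously
```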
Choose Your Path
Now that you know the basics, adk-fluent offers three distinct pathways for agent building. All produce native ADK objects – they solve different problems at different abstraction levels.
Fluent API – Full Python control with expression operators (>> | * @ //) and 9 namespace modules. Build any topology with type-checked, IDE-friendly builders.
Best for: Custom workflows, complex routing, dynamic topologies, callback-heavy agents.
Skills – Turn YAML + Markdown into executable agent graphs. Domain experts write prompts and topology; engineers inject tools and deploy. One file is docs AND runtime.
Best for: Stable topologies, reusable capability libraries, teams with non-Python domain experts.
Harness – Build Claude-Code-class autonomous agents with the H namespace. Five composable layers: intelligence, tools, safety, observability, and runtime.
Best for: Agents that need file/shell access, permissions, sandboxing, token budgets, multi-turn REPL.
Not sure which? See the Decision Guide for a flowchart. All three compose together – a harness can load skills for domain expertise, and skills wire agents as pipelines internally.
What’s Next
Deep dive into builders, operators, callbacks, context engineering, and all 9 namespace modules.
68 recipes from simple agents to hero workflows like deep research and customer support triage.
Complete reference for all 135 builders with type signatures, ADK mappings, and examples.
Side-by-side with LangGraph, CrewAI, and native ADK – see exactly where adk-fluent wins.
See also
Expression Language – all 9 operators with composition rules
Patterns – higher-order constructors (review_loop, map_reduce, cascade, fan_out_merge)
Testing – .mock(), .test(), and check_contracts() for testing without API calls
Migration Guide – migrate existing native ADK code incrementally