# Module: `context`

```python
from adk_fluent import C
```

Context-engineering namespace. Each method returns a frozen `CTransform` descriptor.
## Quick Reference

| Method | Returns | Description |
|---|---|---|
| `C.none()` | `CTransform` | Suppress all conversation history |
| `C.default()` | `CTransform` | Keep default conversation history (pass-through) |
| `C.user_only()` | `CUserOnly` | Include only user messages |
| `C.from_agents(*names)` | `CFromAgents` | Include user messages + outputs from named agents |
| `C.exclude_agents(*names)` | `CExcludeAgents` | Exclude outputs from named agents |
| `C.window(n=5)` | `CWindow` | Include the last N turn-pairs from conversation history |
| `C.last_n_turns(n)` | `CWindow` | Alias for `C.window(n=n)` |
| `C.from_state(*keys)` | `CFromState` | Read named keys from session state as context |
| `C.template(text)` | `CTemplate` | Render a template string with `{key}` and `{key?}` state placeholders |
| `C.when(predicate, block)` | `CWhen` | Include a block only if a predicate is truthy at runtime |
| — | — | Filter events by metadata: author, type, and/or tag |
| `C.recent(...)` | `CRecent` | Importance-weighted selection based on recency with exponential decay |
| `C.compact(...)` | `CCompact` | Structural compaction: merge sequential same-author messages or tool calls |
| `C.dedup(...)` | `CDedup` | Remove duplicate or redundant events |
| `C.truncate(...)` | `CTruncate` | Hard limit on context by turn count or estimated tokens |
| `C.project(*fields)` | `CProject` | Keep only specified fields from event content |
| `C.budget(...)` | `CBudget` | Set a token-budget constraint for context |
| `C.priority(tier=2)` | `CPriority` | Set a priority tier for context ordering |
| `C.fit(...)` | `CFit` | Aggressive pruning to fit a hard token limit |
| `C.fresh(...)` | `CFresh` | Prune stale context based on event timestamp |
| `C.redact(*patterns)` | `CRedact` | Remove PII or sensitive patterns from context via regex |
| `C.summarize(...)` | `CSummarize` | Summarize context via LLM |
| `C.relevant(...)` | `CRelevant` | Select events by semantic relevance to a query via LLM scoring |
| `C.extract(...)` | `CExtract` | Extract structured data from conversation via LLM |
| `C.distill(...)` | `CDistill` | Extract atomic facts from conversation via LLM |
| `C.validate(*checks)` | `CValidate` | Validate context quality |
| `C.notes(key)` | `CNotes` | Read structured notes from a scratchpad in session state |
| `C.write_notes(key, ...)` | `CWriteNotes` | Write to a scratchpad after agent execution |
| `C.rolling(n, ...)` | `CRolling` | Rolling window with optional summarization of older turns |
| `C.from_agents_windowed(...)` | `CFromAgentsWindowed` | Per-agent selective windowing |
| `C.user(strategy="all")` | `CUser` | Select user messages with a strategy |
| `C.manus_cascade(...)` | `CManusCascade` | Manus-inspired progressive compression cascade |
| `C.pipeline_aware(*keys)` | `CPipelineAware` | Topology-aware context: user messages + named state keys |
| — | — | Shared conversational context for multi-agent loops |
| `C.with_ui(surface_id)` | `CTransform` | Include current UI surface state in agent context |
## Methods
### `C.none() -> CTransform`

Suppress all conversation history.
### `C.default() -> CTransform`

Keep default conversation history (pass-through).
### `C.user_only() -> CUserOnly`

Include only user messages.
### `C.from_agents(*names: str) -> CFromAgents`

Include user messages + outputs from named agents.

**Parameters:**

- `*names` (`str`)
### `C.exclude_agents(*names: str) -> CExcludeAgents`

Exclude outputs from named agents.

**Parameters:**

- `*names` (`str`)
### `C.window(*, n: int = 5) -> CWindow`

Include the last N turn-pairs from conversation history.

**Parameters:**

- `n` (`int`) — default: `5`
### `C.last_n_turns(n: int) -> CWindow`

Alias for `C.window(n=n)`.

**Parameters:**

- `n` (`int`)
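The turn-pair semantics can be illustrated with a standalone sketch (not the library's implementation; the helper name and event shape are invented for illustration), assuming events are dicts with a `role` field and each user message starts a new turn-pair:

```python
def last_n_turn_pairs(events, n):
    """Return the events belonging to the last n user/assistant
    turn-pairs. A new turn-pair starts at each user message; this is a
    simplified model of the windowing described for C.window(n=...)."""
    starts = [i for i, e in enumerate(events) if e["role"] == "user"]
    if not starts:
        return events
    # Cut at the user message that opens the n-th pair from the end.
    cut = starts[-n] if n <= len(starts) else 0
    return events[cut:]

history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
    {"role": "user", "content": "weather?"},
    {"role": "assistant", "content": "sunny"},
    {"role": "user", "content": "thanks"},
    {"role": "assistant", "content": "np"},
]
recent = last_n_turn_pairs(history, 2)  # keeps the last two pairs
```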
### `C.from_state(*keys: str) -> CFromState`

Read named keys from session state as context.

**Parameters:**

- `*keys` (`str`)
### `C.template(text: str) -> CTemplate`

Render a template string with `{key}` and `{key?}` state placeholders.

**Parameters:**

- `text` (`str`)
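A minimal sketch of the placeholder semantics as documented: `{key}` is required and `{key?}` is optional. The renderer below is an illustration only; the library's real rendering and error behaviour may differ.

```python
import re

def render_template(text, state):
    """Render {key} and {key?} placeholders from a state dict.

    {key}  is required: raises KeyError when absent.
    {key?} is optional: renders as '' when absent.
    """
    def sub(match):
        key, optional = match.group(1), match.group(2) == "?"
        if key in state:
            return str(state[key])
        if optional:
            return ""
        raise KeyError(key)
    return re.sub(r"\{(\w+)(\??)\}", sub, text)

out = render_template("User: {name}. Notes: {notes?}", {"name": "Ada"})
# → "User: Ada. Notes: "
```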
### `C.when(predicate: Callable | str, block: CTransform) -> CWhen`

Include a block only if the predicate is truthy at runtime. A string predicate is a shortcut for a state-key check:

```python
C.when("has_history", C.rolling("conversation"))
C.when(lambda s: s.get("debug"), C.notes("debug_scratchpad"))
```

**Parameters:**

- `predicate` (`Callable | str`)
- `block` (`CTransform`)
### `C.recent(*, decay: str = "exponential", half_life: int = 10, min_weight: float = 0.1) -> CRecent`

Importance-weighted selection based on recency with exponential decay.

**Parameters:**

- `decay` (`str`) — default: `"exponential"`
- `half_life` (`int`) — default: `10`
- `min_weight` (`float`) — default: `0.1`
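One plausible reading of the parameters, as an illustration only: the newest event has weight 1.0, weight halves every `half_life` events, and is floored at `min_weight`. The exact formula is an assumption, not taken from the library.

```python
def recency_weights(num_events, half_life=10, min_weight=0.1):
    """Exponential-decay weights, oldest event first: 0.5 ** (age /
    half_life), floored at min_weight. A sketch of the idea behind
    C.recent(), not its actual implementation."""
    weights = []
    for age in range(num_events - 1, -1, -1):  # oldest first
        weights.append(max(0.5 ** (age / half_life), min_weight))
    return weights

w = recency_weights(3, half_life=1, min_weight=0.1)  # → [0.25, 0.5, 1.0]
```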
### `C.compact(*, strategy: str = "tool_calls") -> CCompact`

Structural compaction — merge sequential same-author messages or tool calls.

**Parameters:**

- `strategy` (`str`) — default: `"tool_calls"`
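The same-author merge can be sketched as follows (a simplified, standalone model; the helper name and event shape are invented, and the real strategies may merge differently):

```python
def compact_same_author(events):
    """Merge consecutive events from the same author into one event,
    joining their content with newlines."""
    merged = []
    for e in events:
        if merged and merged[-1]["author"] == e["author"]:
            merged[-1] = {
                "author": e["author"],
                "content": merged[-1]["content"] + "\n" + e["content"],
            }
        else:
            merged.append(dict(e))
    return merged

compacted = compact_same_author([
    {"author": "tool", "content": "result A"},
    {"author": "tool", "content": "result B"},
    {"author": "user", "content": "next question"},
])
```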
### `C.dedup(*, strategy: str = "exact", model: str = "gemini-2.5-flash") -> CDedup`

Remove duplicate or redundant events. `strategy="semantic"` uses an LLM.

**Parameters:**

- `strategy` (`str`) — default: `"exact"`
- `model` (`str`) — default: `"gemini-2.5-flash"`
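The `strategy="exact"` case reduces to keeping the first occurrence of each content string; a minimal sketch (the `strategy="semantic"` case would instead compare events via an LLM and is not modelled here):

```python
def dedup_exact(events):
    """Drop events whose content exactly matches an earlier event,
    preserving order and keeping first occurrences."""
    seen, unique = set(), []
    for e in events:
        if e["content"] not in seen:
            seen.add(e["content"])
            unique.append(e)
    return unique

unique = dedup_exact([
    {"content": "a"},
    {"content": "b"},
    {"content": "a"},  # exact duplicate, dropped
])
```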
### `C.truncate(*, max_turns: int | None = None, max_tokens: int | None = None, strategy: str = "tail") -> CTruncate`

Hard limit on context by turn count or estimated tokens.

**Parameters:**

- `max_turns` (`int | None`) — default: `None`
- `max_tokens` (`int | None`) — default: `None`
- `strategy` (`str`) — default: `"tail"`
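The default `"tail"` strategy keeps the newest events that fit. A sketch of token-based tail truncation, assuming the common `len // 4` character heuristic for token estimation (the library's estimator is not documented here):

```python
def truncate_tail(events, max_tokens):
    """Keep the newest events whose estimated token total fits within
    max_tokens, preserving chronological order of the kept events."""
    kept, total = [], 0
    for e in reversed(events):  # walk newest-first
        cost = len(e["content"]) // 4  # rough token estimate
        if total + cost > max_tokens:
            break
        kept.append(e)
        total += cost
    return list(reversed(kept))

events = [{"content": "x" * 40} for _ in range(5)]  # ~10 tokens each
tail = truncate_tail(events, max_tokens=25)         # room for 2 events
```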
### `C.project(*fields: str) -> CProject`

Keep only specified fields from event content.

**Parameters:**

- `*fields` (`str`)
### `C.budget(*, max_tokens: int = 8000, overflow: str = "truncate_oldest") -> CBudget`

Set a token-budget constraint for context.

**Parameters:**

- `max_tokens` (`int`) — default: `8000`
- `overflow` (`str`) — default: `"truncate_oldest"`
### `C.priority(*, tier: int = 2) -> CPriority`

Set a priority tier for context ordering (lower = higher priority).

**Parameters:**

- `tier` (`int`) — default: `2`
### `C.fit(*, max_tokens: int = 4000, strategy: str = "strict", model: str = "gemini-2.5-flash") -> CFit`

Aggressive pruning to fit a hard token limit. `strategy="cascade"` uses an LLM.

**Parameters:**

- `max_tokens` (`int`) — default: `4000`
- `strategy` (`str`) — default: `"strict"`
- `model` (`str`) — default: `"gemini-2.5-flash"`
### `C.fresh(*, max_age: float = 3600.0, stale_action: str = "drop") -> CFresh`

Prune stale context based on event timestamp.

**Parameters:**

- `max_age` (`float`) — default: `3600.0`
- `stale_action` (`str`) — default: `"drop"`
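A sketch of the `stale_action="drop"` case, assuming events carry a Unix `timestamp` field (the field name is an assumption; `now` is injectable so the behaviour is testable):

```python
import time

def prune_stale(events, max_age=3600.0, now=None):
    """Drop events older than max_age seconds relative to `now`."""
    now = time.time() if now is None else now
    return [e for e in events if now - e["timestamp"] <= max_age]

events = [{"timestamp": 100.0}, {"timestamp": 4000.0}]
fresh_events = prune_stale(events, max_age=3600.0, now=4100.0)
# → only the event from t=4000.0 survives
```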
### `C.redact(*patterns: str, replacement: str = "[REDACTED]") -> CRedact`

Remove PII or sensitive patterns from context via regex.

**Parameters:**

- `*patterns` (`str`)
- `replacement` (`str`) — default: `"[REDACTED]"`
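At its core this is regex substitution over event content; a minimal standalone model (the helper name and example pattern are illustrative, not the library's):

```python
import re

def redact(text, patterns, replacement="[REDACTED]"):
    """Replace every match of each regex pattern with the replacement
    token, applying the patterns in order."""
    for pattern in patterns:
        text = re.sub(pattern, replacement, text)
    return text

clean = redact("mail me at ada@example.com", [r"[\w.]+@[\w.]+"])
# → "mail me at [REDACTED]"
```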
## LLM-powered methods
### `C.summarize(*, scope: str = "all", model: str = "gemini-2.5-flash", prompt: str | None = None, schema: dict | None = None) -> CSummarize`

Summarize context via LLM. Scope: `"all"`, `"before_window"`, or `"tool_results"`.

**Parameters:**

- `scope` (`str`) — default: `"all"`
- `model` (`str`) — default: `"gemini-2.5-flash"`
- `prompt` (`str | None`) — default: `None`
- `schema` (`dict | None`) — default: `None`
### `C.relevant(*, query_key: str | None = None, query: str | None = None, top_k: int = 5, model: str = "gemini-2.5-flash") -> CRelevant`

Select events by semantic relevance to a query via LLM scoring.

**Parameters:**

- `query_key` (`str | None`) — default: `None`
- `query` (`str | None`) — default: `None`
- `top_k` (`int`) — default: `5`
- `model` (`str`) — default: `"gemini-2.5-flash"`
### `C.extract(*, schema: dict | None = None, key: str = "extracted", model: str = "gemini-2.5-flash") -> CExtract`

Extract structured data from conversation via LLM.

**Parameters:**

- `schema` (`dict | None`) — default: `None`
- `key` (`str`) — default: `"extracted"`
- `model` (`str`) — default: `"gemini-2.5-flash"`
### `C.distill(*, key: str = "facts", model: str = "gemini-2.5-flash") -> CDistill`

Extract atomic facts from conversation via LLM.

**Parameters:**

- `key` (`str`) — default: `"facts"`
- `model` (`str`) — default: `"gemini-2.5-flash"`
### `C.validate(*checks: str, model: str = "gemini-2.5-flash") -> CValidate`

Validate context quality. Checks: `"contradictions"`, `"completeness"`, `"freshness"`, `"token_efficiency"`.

**Parameters:**

- `*checks` (`str`)
- `model` (`str`) — default: `"gemini-2.5-flash"`
## Scratchpads + Sugar
### `C.notes(key: str = "default", *, format: str = "plain") -> CNotes`

Read structured notes from the scratchpad at `state["_notes_{key}"]`.

**Parameters:**

- `key` (`str`) — default: `"default"`
- `format` (`str`) — default: `"plain"`
### `C.write_notes(key: str = "default", *, strategy: str = "append", source_key: str | None = None) -> CWriteNotes`

Write to the scratchpad after agent execution. Strategies: `"append"`, `"replace"`, `"merge"`, `"prepend"`.

**Parameters:**

- `key` (`str`) — default: `"default"`
- `strategy` (`str`) — default: `"append"`
- `source_key` (`str | None`) — default: `None`
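The four strategies can be sketched against the documented `state["_notes_{key}"]` slot. This is an illustrative model: the `"merge"` behaviour here naively keeps unique lines, whereas the library's merge semantics may be richer.

```python
def write_notes(state, key, text, strategy="append"):
    """Apply a scratchpad write strategy to state['_notes_' + key]."""
    slot = "_notes_" + key
    old = state.get(slot, "")
    if strategy == "replace":
        state[slot] = text
    elif strategy == "prepend":
        state[slot] = text + "\n" + old if old else text
    elif strategy == "merge":
        lines = old.splitlines()
        new = [l for l in text.splitlines() if l not in lines]
        state[slot] = "\n".join(lines + new)
    else:  # "append" (default)
        state[slot] = old + "\n" + text if old else text
    return state

state = {}
write_notes(state, "default", "step 1 done")
write_notes(state, "default", "step 2 done")  # appends on a new line
```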
### `C.rolling(n: int = 5, *, summarize: bool = False, model: str = "gemini-2.5-flash") -> CRolling`

Rolling window with optional summarization of older turns. When `summarize=True`, events before the window are summarized via LLM.

**Parameters:**

- `n` (`int`) — default: `5`
- `summarize` (`bool`) — default: `False`
- `model` (`str`) — default: `"gemini-2.5-flash"`
### `C.from_agents_windowed(**agent_windows: int) -> CFromAgentsWindowed`

Per-agent selective windowing.

Example: `C.from_agents_windowed(researcher=1, critic=3)`

**Parameters:**

- `**agent_windows` (`int`)
### `C.user(*, strategy: str = "all") -> CUser`

Select user messages with a strategy. Strategies: `"all"`, `"first"`, `"last"`, `"bookend"`.

**Parameters:**

- `strategy` (`str`) — default: `"all"`
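A standalone sketch of the four documented strategies, assuming `"bookend"` means first message plus last message (a reasonable but unconfirmed reading):

```python
def select_user(messages, strategy="all"):
    """Select user messages by strategy: 'all', 'first', 'last', or
    'bookend' (first + last, without duplicating a lone message)."""
    if strategy == "first":
        return messages[:1]
    if strategy == "last":
        return messages[-1:]
    if strategy == "bookend":
        return messages[:1] + messages[-1:] if len(messages) > 1 else messages[:1]
    return messages  # "all"

msgs = ["intro", "clarification", "final ask"]
ends = select_user(msgs, "bookend")  # → ["intro", "final ask"]
```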
### `C.manus_cascade(*, budget: int = 8000, model: str = "gemini-2.5-flash") -> CManusCascade`

Manus-inspired progressive compression cascade. Applies: compact → dedup → summarize → truncate.

**Parameters:**

- `budget` (`int`) — default: `8000`
- `model` (`str`) — default: `"gemini-2.5-flash"`
### `C.pipeline_aware(*keys: str) -> CPipelineAware`

Topology-aware context: user messages + named state keys.

Designed for pipeline agents that need the user's original message plus structured data from upstream agents, but should NOT see raw intermediate agent conversation history. Equivalent to `C.user_only() + C.from_state(*keys)`, but with clearer intent and better contract-checker support.

```python
# classifier writes intent, handler sees user msg + intent
classifier = Agent("classify").writes("intent")
handler = Agent("handle").context(C.pipeline_aware("intent"))
pipeline = classifier >> handler
```

**Parameters:**

- `*keys` (`str`) — state key names to include alongside user messages.
### `C.with_ui(surface_id: str | None = None) -> CTransform`

Include current UI surface state in agent context. Injects the A2UI data model for the given surface (or all surfaces) into the agent's context as a `<ui_state>` block.

```python
Agent("renderer").context(C.with_ui("dashboard"))
Agent("updater").context(C.from_state("total") + C.with_ui())
```

**Parameters:**

- `surface_id` (`str | None`) — default: `None`. Optional surface to include; if `None`, includes all surfaces.
## Composition Operators

### `+` (union, `CComposite`)

Combine context transforms.

### `|` (pipe, `CPipe`)

Chain context processing.
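A toy model of how frozen descriptors can compose under `+` and `|` (the class here is invented for illustration; the real `CComposite` and `CPipe` descriptors carry more structure than a name):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class T:
    """A frozen descriptor that records how it was composed."""
    name: str

    def __add__(self, other):   # union, like CComposite
        return T(f"union({self.name}, {other.name})")

    def __or__(self, other):    # pipe, like CPipe
        return T(f"pipe({self.name} -> {other.name})")

combo = T("user_only") + T("from_state")   # combine blocks
piped = T("window") | T("summarize")       # feed one into the next
```

Freezing the dataclass mirrors the module's "frozen descriptor" contract: composition never mutates an operand, it always returns a new descriptor.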
## Types

| Type | Description |
|---|---|
| `CTransform` | Base context transform descriptor |
| `CComposite` | Union of multiple context blocks (via the `+` operator) |
| `CPipe` | Pipe transform: source feeds into transform (via the `\|` operator) |
| `CFromState` | Read named keys from session state and format as context |
| `CWindow` | Include only the last N turn-pairs from conversation history |
| `CUserOnly` | Include only user messages from conversation history |
| `CFromAgents` | Include user messages + outputs from named agents |
| `CExcludeAgents` | Exclude outputs from named agents |
| `CTemplate` | Render a template string with `{key}` and `{key?}` placeholders from state |
| — | Filter events by metadata: author, type, and/or tag |
| `CRecent` | Importance-weighted selection based on recency with exponential decay |
| `CCompact` | Structural compaction: merge sequential same-author messages or tool calls |
| `CDedup` | Remove duplicate or redundant events |
| `CTruncate` | Hard limit on context size by turn count or estimated tokens |
| `CProject` | Keep only specific fields from event content |
| `CBudget` | Token budget constraint for context |
| `CPriority` | Priority tier for context ordering (lower = higher priority) |
| `CFit` | Aggressive pruning to fit a hard token limit |
| `CFresh` | Prune stale context based on event timestamp |
| `CRedact` | Remove PII or sensitive patterns from context via regex |
| `CSummarize` | Lossy compression via LLM summarization |
| `CRelevant` | Semantic relevance selection via LLM scoring |
| `CExtract` | Structured extraction from conversation via LLM |
| `CDistill` | Fact distillation from conversation via LLM |
| `CValidate` | Context quality validation |
| `CNotes` | Read from an agent's structured scratchpad stored in session state |
| `CWriteNotes` | Write to an agent's structured scratchpad after agent execution |
| `CRolling` | Rolling window with optional summarization of older turns |
| `CFromAgentsWindowed` | Per-agent selective windowing |
| `CUser` | User message strategies |
| `CManusCascade` | Manus-inspired progressive compression cascade |
| `CWhen` | Conditional context inclusion |
| `CPipelineAware` | Topology-aware context for pipeline agents |
| — | Shared conversational thread for multi-agent loops |