API Reference
Core authoring and execution
FunctorFlow.Diagram — Type
Diagram(name)
A FunctorFlow diagram: the primary user-facing artifact. Diagrams declare objects, morphisms, composed paths, Kan operators, and structural losses. This is the level at which users design architectures.
FunctorFlow.add_object! — Function
add_object!(D, name; kind=:object, shape=nothing, description="", metadata=Dict())
Add an object to the diagram. If a placeholder with the same name exists, it is replaced with the fully specified object.
FunctorFlow.add_morphism! — Function
add_morphism!(D, name, source, target; implementation=nothing, ...)
Add a morphism (typed arrow) between two objects. Source and target objects are auto-created as placeholders if they don't yet exist.
FunctorFlow.add_left_kan! — Function
add_left_kan!(D, name; source, along, target=nothing, reducer=:sum, ...)
Add a left Kan extension (universal aggregation). Covers attention, pooling, neighborhood message passing, context fusion, and plan-fragment integration.
FunctorFlow.add_right_kan! — Function
add_right_kan!(D, name; source, along, target=nothing, reducer=:first_non_null, ...)
Add a right Kan extension (universal completion / repair). Covers denoising, masked completion, plan repair, and partial-view reconciliation.
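Taken together with add_object! and add_morphism!, the two Kan helpers can be used as in the following sketch. The object names and the choice of reducer are illustrative, not part of the API:

```julia
using FunctorFlow

D = Diagram(:KanDemo)
add_object!(D, :Tokens; kind=:messages)
add_object!(D, :Nbrs; kind=:relation)

# Left Kan: universal aggregation of Tokens along the Nbrs relation
add_left_kan!(D, :pool; source=:Tokens, along=:Nbrs, reducer=:sum)

# Right Kan: universal completion, e.g. filling masked Tokens from neighbors
add_right_kan!(D, :repair; source=:Tokens, along=:Nbrs)
```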
FunctorFlow.add_obstruction_loss! — Function
add_obstruction_loss!(D, name; paths, comparator=:l2, weight=1.0, ...)
Add an obstruction loss measuring non-commutativity between diagram paths. This is the native home for Diagrammatic Backpropagation (DB).
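A minimal DB-style sketch, assuming morphisms :f and :g already exist in D and that each path is written as a tuple of morphism names (as in the @functorflow obstruction syntax below):

```julia
# Compare the two composite paths f∘g and g∘f; their disagreement
# becomes a trainable obstruction loss under the :l2 comparator.
add_obstruction_loss!(D, :commutes; paths=[(:f, :g), (:g, :f)],
                      comparator=:l2, weight=0.5)
```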
FunctorFlow.compile_to_callable — Function
compile_to_callable(D; morphisms=nothing, reducers=nothing, comparators=nothing)
Compile a diagram to a callable executor. This is the backend-neutral execution target.
FunctorFlow.run — Function
run(compiled, inputs; morphisms=nothing, reducers=nothing, comparators=nothing)
Execute a compiled diagram with the given inputs. Returns an ExecutionResult with all computed values and losses.
Operations are executed in insertion order. Each operation's result is stored in the environment under its name, available to subsequent operations.
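An end-to-end sketch of the backend-neutral path. The Dict-shaped inputs and the :double morphism binding are illustrative assumptions:

```julia
using FunctorFlow

D = Diagram(:Tiny)
add_object!(D, :In)
add_object!(D, :Out)
add_morphism!(D, :double, :In, :Out)

compiled = compile_to_callable(D)

# Bind the morphism implementation at run time
result = run(compiled, Dict(:In => [1.0, 2.0]);
             morphisms=Dict(:double => x -> 2 .* x))
# result is an ExecutionResult holding all computed values and losses
```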
FunctorFlow.to_ir — Function
to_ir(D::Diagram) -> DiagramIR
Convert a diagram to its intermediate representation.
DSL and composition
FunctorFlow.@functorflow — Macro
@functorflow name body
Construct a FunctorFlow diagram using mathematical notation.
Syntax
Name::kind — declare an object with a semantic kind
name = Source → Target — declare a morphism
name = Σ(source; along=rel, reducer=:sum) — left Kan extension
name = Δ(source; along=rel) — right Kan extension
name = compose(f, g) — named composition
obstruction(name; paths=[(a,b)], ...) — obstruction loss
port(name, ref; direction=:input, type=:kind) — expose port
Example
D = @functorflow MyKET begin
Tokens::messages
Nbrs::relation
Ctx::contextualized_messages
embed = Tokens → Ctx
aggregate = Σ(:Tokens; along=:Nbrs, reducer=:sum)
end
FunctorFlow.@diagram — Macro
@diagram name body
Legacy macro — use @functorflow instead. Preserved for backward compatibility with block builders.
FunctorFlow.include! — Function
include!(parent, child; namespace, object_aliases=nothing)
Include a child diagram into a parent diagram under the given namespace. All objects and operations from the child are prefixed with namespace__. Object aliases allow wiring external objects into sub-diagram slots.
Returns an IncludedDiagram for accessing namespaced references.
FunctorFlow.object_ref — Function
Get the namespaced name of an object from an included diagram.
FunctorFlow.operation_ref — Function
Get the namespaced name of an operation from an included diagram.
FunctorFlow.port_spec — Function
Get the port spec from an included diagram.
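A sketch of diagram composition with the namespacing helpers. The :encoder namespace is illustrative; the :aggregate operation name follows the ket_block example used later in this reference:

```julia
using FunctorFlow

parent = Diagram(:Parent)
child  = ket_block(; name=:Child)   # any sub-diagram works here

inc = include!(parent, child; namespace=:encoder)

# Child names are now prefixed with encoder__; the ref helpers
# resolve them without string manipulation.
agg = operation_ref(inc, :aggregate)
```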
Block builders
FunctorFlow.ket_block — Function
ket_block(; config=KETBlockConfig(), kwargs...) -> Diagram
Build a KET (Kan Extension Template) block: left-Kan aggregation over an incidence relation. The fundamental aggregation pattern covering attention, pooling, neighborhood message passing, and context fusion.
FunctorFlow.db_square — Function
db_square(; config=DBSquareConfig(), kwargs...) -> Diagram
Build a DB (Diagrammatic Backpropagation) square: measures obstruction to commutativity via ||f∘g - g∘f||. The native FunctorFlow pattern for consistency-aware learning.
FunctorFlow.gt_neighborhood_block — Function
gt_neighborhood_block(; config=GTNeighborhoodConfig(), kwargs...) -> Diagram
Build a GT (Graph Transformer) neighborhood block: lifts tokens to edge messages, then aggregates via left-Kan over the incidence geometry.
FunctorFlow.completion_block — Function
completion_block(; config=CompletionBlockConfig(), kwargs...) -> Diagram
Build a generic right-Kan completion block for denoising, masked completion, plan repair, or partial-view reconciliation.
FunctorFlow.basket_workflow_block — Function
basket_workflow_block(; config=BASKETWorkflowConfig(), kwargs...) -> Diagram
Build a BASKET workflow block: left-Kan with concat reducer to compose local plan fragments into a composed plan.
FunctorFlow.rocket_repair_block — Function
rocket_repair_block(; config=ROCKETRepairConfig(), kwargs...) -> Diagram
Build a ROCKET repair block: right-Kan completion to repair candidates using edit neighborhoods.
FunctorFlow.basket_rocket_pipeline — Function
basket_rocket_pipeline(; config=BasketRocketPipelineConfig(), kwargs...) -> Diagram
Build a two-stage BASKET→ROCKET pipeline: drafts a plan via left-Kan aggregation, then repairs it via right-Kan completion.
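The block builders all return ordinary Diagrams, so they plug into the execution API from the previous section; a hedged sketch:

```julia
using FunctorFlow

# Draft-then-repair planner as a single diagram
D = basket_rocket_pipeline(; config=BasketRocketPipelineConfig())
compiled = compile_to_callable(D)
# Bind inputs and morphism implementations as for any diagram,
# then execute with run(compiled, inputs; ...).
```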
FunctorFlow.democritus_assembly_pipeline — Function
democritus_assembly_pipeline(; config=DemocritusAssemblyConfig(), restrict_impl=nothing, kwargs...) -> Diagram
Build a Democritus local-to-global pipeline with an explicit regrounding map back to local claims. The resulting obstruction loss measures whether the global section remains compatible with the original fragments.
FunctorFlow.topocoend_block — Function
topocoend_block(; config=TopoCoendConfig(), infer_neighborhood_impl=nothing, lift_impl=nothing, kwargs...) -> Diagram
Build a TopoCoend-style learned neighborhood block. A morphism first infers a relation from tokens, another morphism lifts tokens into local contexts, and a left-Kan aggregates those local contexts into a global contextualized state.
FunctorFlow.horn_fill_block — Function
horn_fill_block(; config=HornObstructionConfig(), first_face_impl=nothing, second_face_impl=nothing, filler_impl=nothing, kwargs...) -> Diagram
Build a 2-simplex horn filling block. The composed boundary path d12 ∘ d01 is compared against the direct filler d02, turning simplicial coherence into a first-class obstruction loss.
FunctorFlow.bisimulation_quotient_block — Function
bisimulation_quotient_block(; config=BisimulationQuotientConfig(), kwargs...) -> Diagram
Build a behavioral quotient block by composing a bisimulation relation with two observation maps and then taking their coequalizer. This turns behavioral equivalence witnesses into an explicit quotient object.
Lux backend
FunctorFlow.compile_to_lux — Function
compile_to_lux(D::Diagram; morphism_layers=Dict(), reducer_layers=Dict(),
comparator_layers=Dict(), morphisms=nothing, reducers=nothing,
comparators=nothing) -> LuxDiagramModel
Compile a FunctorFlow Diagram to a Lux model for differentiable execution.
Lux layers override callable implementations: any morphism or reducer bound as a LuxCore.AbstractLuxLayer will participate in the autograd graph, while non-neural operations pass through unchanged.
Arguments
D: The FunctorFlow diagram to compile
morphism_layers: Dict mapping morphism names to Lux layers
reducer_layers: Dict mapping reducer names to Lux layers (e.g., KETAttentionLayer)
comparator_layers: Dict mapping comparator names to Lux layers
morphisms, reducers, comparators: Non-neural callable overrides
Example
using FunctorFlow, Lux, Random
# Build a KET block with learned attention
D = ket_block(; name=:MyKET, reducer=:ket_attention)
model = compile_to_lux(D;
reducer_layers=Dict(:ket_attention => KETAttentionLayer(64; n_heads=4)))
# Initialize parameters
rng = Random.default_rng()
ps, st = Lux.setup(rng, model)
# Forward pass
inputs = Dict(:Values => randn(Float32, 64, 10),
:Incidence => Float32.(ones(10, 10)))
result, st = model(inputs, ps, st)
contextualized = result[:values][:aggregate]
FunctorFlow.KETAttentionLayer — Type
KETAttentionLayer(d_model; n_heads=1, dropout=0.0f0, name=:ket_attention)
Learnable Kan Extension Transformer (KET) reducer as a Lux layer. Implements scaled multi-head dot-product attention:
Attention(Q, K, V) = softmax(QKᵀ / √d_k ⊙ mask) V
where:
- Q, K, V are learned linear projections of the source values
- mask comes from the along relation (incidence geometry)
- The output is projected back to d_model dimensions
This is the neural implementation of a left-Kan extension: universal aggregation via attention over an incidence relation.
FunctorFlow.DiagramDenseLayer — Type
DiagramDenseLayer(in_dims, out_dims; activation=identity, name=:dense)
A dense (fully-connected) morphism layer for use inside FunctorFlow diagrams. Wraps Lux.Dense with FunctorFlow metadata.
FunctorFlow.DiagramChainLayer — Type
DiagramChainLayer(layers...; name=:chain)
A sequential composition of Lux layers, corresponding to a FunctorFlow Composition. Each layer's output feeds into the next.
FunctorFlow.LuxDiagramModel — Type
LuxDiagramModel(diagram; morphism_layers=Dict(), reducer_layers=Dict(),
comparator_layers=Dict())
A Lux model compiled from a FunctorFlow Diagram. This is the Lux equivalent of TorchCompiledDiagram in the Python implementation.
How it works
- Morphisms bound as Lux layers participate in the autograd graph
- KET reducers bound as KETAttentionLayer provide learnable attention
- Obstruction losses use neural comparators for differentiable constraints
- Non-neural operations (symbolic reducers, etc.) pass through unchanged
Example
using FunctorFlow, Lux, Random
D = ket_block(; name=:MyKET, reducer=:ket_attention)
model = compile_to_lux(D;
reducer_layers=Dict(:ket_attention => KETAttentionLayer(64)))
rng = Random.default_rng()
ps, st = Lux.setup(rng, model)
# source: (d_model, seq_len), mask: (seq_len, seq_len)
inputs = Dict(
:Values => randn(Float32, 64, 10),
:Incidence => Float32.(ones(10, 10))
)
output, st = model(inputs, ps, st)
FunctorFlow.build_ket_lux_model — Function
build_ket_lux_model(d_model; n_heads=4, reducer=:ket_attention, kwargs...)
Build a KET block with a learned attention reducer as a Lux model. This is the standard pattern for a Kan Extension Transformer head.
Returns (model, diagram) where model is a LuxDiagramModel.
FunctorFlow.build_db_lux_model — Function
build_db_lux_model(d_model; comparator=:l2, kwargs...)
Build a DB square with neural morphisms as a Lux model. The two morphisms f and g are DiagramDenseLayers, and the obstruction loss measures non-commutativity of f∘g vs g∘f using a neural comparator.
Returns (model, diagram).
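Usage mirrors the LuxDiagramModel example above; a sketch with an illustrative d_model of 8:

```julia
using FunctorFlow, Lux, Random

model, diagram = build_db_lux_model(8; comparator=:l2)
ps, st = Lux.setup(Random.default_rng(), model)
# A forward pass returns computed values together with the
# differentiable obstruction loss for f∘g vs g∘f.
```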
FunctorFlow.build_gt_lux_model — Function
build_gt_lux_model(d_model; n_heads=4, kwargs...)
Build a GT (Graph Transformer) neighborhood block as a Lux model. The lift morphism is a DiagramDenseLayer and the aggregation uses KETAttentionLayer.
Returns (model, diagram).
FunctorFlow.build_basket_rocket_lux_model — Function
build_basket_rocket_lux_model(d_model; n_heads=4, kwargs...)
Build a Lux-backed BASKET → ROCKET planner. Both the drafting and repair stages are instantiated as learnable attention reducers, and the draft/repair consistency loss is switched to a differentiable comparator.
Returns (model, diagram).
FunctorFlow.predict_detach_source — Function
predict_detach_source(logits, embedding_weights; position_bias=nothing)
Project logits back into embedding space while stopping gradients through the prediction path. This is the reusable helper behind the "predict-detach" pattern used in the vignettes.
- logits has shape (vocab, seq_len[, batch])
- embedding_weights has shape (d_model, vocab)
- position_bias, when provided, is added after the detach boundary so it remains differentiable
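A shape-level sketch of the documented contract; the sizes (vocab = 100, d_model = 32, seq_len = 16) are illustrative:

```julia
using FunctorFlow

logits = randn(Float32, 100, 16)   # (vocab, seq_len)
E      = randn(Float32, 32, 100)   # (d_model, vocab)
bias   = zeros(Float32, 32, 16)    # added after the detach boundary

detached = predict_detach_source(logits, E; position_bias=bias)
# detached lives in embedding space (d_model, seq_len);
# gradients do not flow back through logits.
```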
Categorical extensions
FunctorFlow.pullback — Function
pullback(D1, D2; over, name=:Pullback) -> PullbackResult
Construct the pullback of two diagrams over a shared interface. The resulting diagram contains both sub-diagrams plus interface morphisms (projections) into a shared base object. The pullback is the universal cone: any other diagram mapping compatibly into D1 and D2 factors uniquely through it.
FunctorFlow.pushout — Function
pushout(D1, D2; along, name=:Pushout) -> PushoutResult
Construct the pushout of two diagrams along a shared sub-object. Creates injection morphisms from each factor into the merged result.
FunctorFlow.product — Function
product(diagrams...; name=:Product) -> ProductResult
Construct the product of multiple diagrams. Combines independent models with projection morphisms into each factor.
FunctorFlow.coproduct — Function
coproduct(diagrams...; name=:Coproduct) -> CoproductResult
Construct the coproduct of multiple diagrams. Ensemble / hypothesis aggregation.
FunctorFlow.equalizer — Function
equalizer(D, f_name, g_name; name=:Equalizer) -> EqualizerResult
Construct the equalizer of two morphisms in a diagram. Enforces f = g. This is closely related to the DB obstruction loss: the equalizer is the sub-object where the two paths agree exactly.
FunctorFlow.coequalizer — Function
coequalizer(D, f_name, g_name; name=:Coequalizer) -> CoequalizerResult
Construct the coequalizer of two parallel morphisms f, g : A → B in a diagram.
The coequalizer is the universal quotient object Q with a map q : B → Q such that q ∘ f = q ∘ g. It identifies elements of B that are related through f and g.
Where an equalizer finds the subobject of A where f and g agree (a limit), the coequalizer quotients B by forcing f and g to agree (a colimit).
AI interpretation
- Equivalence classes: collapse representations that the two maps indicate should be identified
- Symmetry quotienting: remove redundant structure by identifying symmetric states
- Consensus merging: merge outputs that multiple processing paths declare equivalent
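A hedged sketch, assuming :f and :g name parallel morphisms A → B already present in D:

```julia
# Quotient B by forcing the two observation maps to agree
result = coequalizer(D, :f, :g; name=:Consensus)
# result is a CoequalizerResult for the universal quotient Q
# with q : B → Q satisfying q∘f = q∘g
```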
FunctorFlow.build_causal_diagram — Function
build_causal_diagram(name; context=CausalContext(:default), kwargs...) -> CausalDiagram
Build a diagram with explicit causal Kan semantics.
FunctorFlow.interventional_expectation — Function
interventional_expectation(cd::CausalDiagram, obs_data::Dict; density_ratio_fn=nothing) -> Dict
Compute E_do[Y] from observational data using importance weighting. If density_ratio_fn is provided, it computes ρ(y) = p_do(y)/p_obs(y) for each observation.
FunctorFlow.check_coherence — Function
check_coherence(check::SheafCoherenceCheck) -> NamedTuple
Run the sheaf gluing axiom checks on sections:
- Locality: each section has non-empty domain
- Gluing: overlapping sections agree on their intersection
- Stability: total coherence penalty is bounded
Returns (passed=Bool, locality=Bool, gluing=Bool, stability=Bool, details=Dict).
FunctorFlow.classify_subobject — Function
classify_subobject(classifier::SubobjectClassifier, inclusion_fn, data) -> Dict
Compute the characteristic morphism for a subobject inclusion. For each element in data, determines whether it belongs to the subobject. Returns a Dict mapping elements to truth values.