API Reference

Core authoring and execution

FunctorFlow.Diagram (Type)
Diagram(name)

A FunctorFlow diagram: the primary user-facing artifact. Diagrams declare objects, morphisms, composed paths, Kan operators, and structural losses. This is the level at which users design architectures.

FunctorFlow.add_object! (Function)
add_object!(D, name; kind=:object, shape=nothing, description="", metadata=Dict())

Add an object to the diagram. If a placeholder with the same name exists, it is replaced with the fully specified object.

FunctorFlow.add_morphism! (Function)
add_morphism!(D, name, source, target; implementation=nothing, ...)

Add a morphism (typed arrow) between two objects. Source and target objects are auto-created as placeholders if they don't yet exist.

FunctorFlow.add_left_kan! (Function)
add_left_kan!(D, name; source, along, target=nothing, reducer=:sum, ...)

Add a left Kan extension (universal aggregation). Covers attention, pooling, neighborhood message passing, context fusion, plan-fragment integration.

FunctorFlow.add_right_kan! (Function)
add_right_kan!(D, name; source, along, target=nothing, reducer=:first_non_null, ...)

Add a right Kan extension (universal completion / repair). Covers denoising, masked completion, plan repair, partial-view reconciliation.

FunctorFlow.add_obstruction_loss! (Function)
add_obstruction_loss!(D, name; paths, comparator=:l2, weight=1.0, ...)

Add an obstruction loss measuring non-commutativity between diagram paths. This is the native home for Diagrammatic Backpropagation (DB).

FunctorFlow.compile_to_callable (Function)
compile_to_callable(D; morphisms=nothing, reducers=nothing, comparators=nothing)

Compile a diagram to a callable executor. This is the backend-neutral execution target.

FunctorFlow.run (Function)
run(compiled, inputs; morphisms=nothing, reducers=nothing, comparators=nothing)

Execute a compiled diagram with the given inputs. Returns an ExecutionResult with all computed values and losses.

Operations are executed in insertion order. Each operation's result is stored in the environment under its name, available to subsequent operations.
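The compile-and-run cycle can be sketched as follows. This is a minimal sketch built only from the signatures documented above; the diagram, the object and morphism names, and the doubling implementation are illustrative, not part of the API.

```julia
using FunctorFlow

# Illustrative one-morphism diagram; names and implementation are made up.
D = Diagram(:Demo)
add_object!(D, :X)
add_object!(D, :Y)
add_morphism!(D, :f, :X, :Y; implementation = x -> 2 .* x)

compiled = compile_to_callable(D)
result = FunctorFlow.run(compiled, Dict(:X => [1.0, 2.0]))
# :f's result is stored in the environment under its name,
# available to any subsequent operation.
```

Note the qualified call `FunctorFlow.run`, which avoids shadowing `Base.run`.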

FunctorFlow.to_ir (Function)
to_ir(D::Diagram) -> DiagramIR

Convert a diagram to its intermediate representation.


DSL and composition

FunctorFlow.@functorflow (Macro)
@functorflow name body

Construct a FunctorFlow diagram using mathematical notation.

Syntax

  • Name::kind — declare an object with a semantic kind
  • name = Source → Target — declare a morphism
  • name = Σ(source; along=rel, reducer=:sum) — left Kan extension
  • name = Δ(source; along=rel) — right Kan extension
  • name = compose(f, g) — named composition
  • obstruction(name; paths=[(a,b)], ...) — obstruction loss
  • port(name, ref; direction=:input, type=:kind) — expose port

Example

D = @functorflow MyKET begin
    Tokens::messages
    Nbrs::relation
    Ctx::contextualized_messages
    embed = Tokens → Ctx
    aggregate = Σ(:Tokens; along=:Nbrs, reducer=:sum)
end
FunctorFlow.@diagram (Macro)
@diagram name body

Legacy macro — use @functorflow instead. Preserved for backward compatibility with block builders.

FunctorFlow.include! (Function)
include!(parent, child; namespace, object_aliases=nothing)

Include a child diagram into a parent diagram under the given namespace. All objects and operations from the child are prefixed with namespace__. Object aliases allow wiring external objects into sub-diagram slots.

Returns an IncludedDiagram for accessing namespaced references.
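A minimal namespacing sketch, assuming a `ket_block` child; the `:enc` namespace and diagram names are illustrative:

```julia
using FunctorFlow

parent = Diagram(:Parent)
child  = ket_block(; name = :Inner)

# Child objects and operations are re-registered under the enc__ prefix;
# the returned IncludedDiagram gives access to the namespaced references.
inc = include!(parent, child; namespace = :enc)
```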


Block builders

FunctorFlow.ket_block (Function)
ket_block(; config=KETBlockConfig(), kwargs...) -> Diagram

Build a KET (Kan Extension Template) block: left-Kan aggregation over an incidence relation. The fundamental aggregation pattern covering attention, pooling, neighborhood message passing, and context fusion.

FunctorFlow.db_square (Function)
db_square(; config=DBSquareConfig(), kwargs...) -> Diagram

Build a DB (Diagrammatic Backpropagation) square: measures obstruction to commutativity via ||f∘g - g∘f||. The native FunctorFlow pattern for consistency-aware learning.
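A sketch of binding plain callables into the square at compile time. This assumes the square exposes its two morphisms under the names `:f` and `:g` (as described for build_db_lux_model below); adjust to the names in your configuration.

```julia
using FunctorFlow

D = db_square()
compiled = compile_to_callable(D;
    morphisms = Dict(:f => x -> x .+ 1.0,
                     :g => x -> 2.0 .* x))
# The obstruction loss now measures ||f∘g - g∘f|| under the :l2 comparator.
```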

FunctorFlow.gt_neighborhood_block (Function)
gt_neighborhood_block(; config=GTNeighborhoodConfig(), kwargs...) -> Diagram

Build a GT (Graph Transformer) neighborhood block: lifts tokens to edge messages, then aggregates via left-Kan over the incidence geometry.

FunctorFlow.completion_block (Function)
completion_block(; config=CompletionBlockConfig(), kwargs...) -> Diagram

Build a generic right-Kan completion block for denoising, masked completion, plan repair, or partial-view reconciliation.

FunctorFlow.basket_workflow_block (Function)
basket_workflow_block(; config=BASKETWorkflowConfig(), kwargs...) -> Diagram

Build a BASKET workflow block: left-Kan with concat reducer to compose local plan fragments into a composed plan.

FunctorFlow.rocket_repair_block (Function)
rocket_repair_block(; config=ROCKETRepairConfig(), kwargs...) -> Diagram

Build a ROCKET repair block: right-Kan completion to repair candidates using edit neighborhoods.

FunctorFlow.basket_rocket_pipeline (Function)
basket_rocket_pipeline(; config=BasketRocketPipelineConfig(), kwargs...) -> Diagram

Build a two-stage BASKET→ROCKET pipeline: drafts a plan via left-Kan aggregation, then repairs it via right-Kan completion.

FunctorFlow.democritus_assembly_pipeline (Function)
democritus_assembly_pipeline(; config=DemocritusAssemblyConfig(), restrict_impl=nothing, kwargs...) -> Diagram

Build a Democritus local-to-global pipeline with an explicit regrounding map back to local claims. The resulting obstruction loss measures whether the global section remains compatible with the original fragments.

FunctorFlow.topocoend_block (Function)
topocoend_block(; config=TopoCoendConfig(), infer_neighborhood_impl=nothing, lift_impl=nothing, kwargs...) -> Diagram

Build a TopoCoend-style learned neighborhood block. A morphism first infers a relation from tokens, another morphism lifts tokens into local contexts, and a left-Kan aggregates those local contexts into a global contextualized state.

FunctorFlow.horn_fill_block (Function)
horn_fill_block(; config=HornObstructionConfig(), first_face_impl=nothing, second_face_impl=nothing, filler_impl=nothing, kwargs...) -> Diagram

Build a 2-simplex horn filling block. The composed boundary path d12 ∘ d01 is compared against the direct filler d02, turning simplicial coherence into a first-class obstruction loss.

FunctorFlow.bisimulation_quotient_block (Function)
bisimulation_quotient_block(; config=BisimulationQuotientConfig(), kwargs...) -> Diagram

Build a behavioral quotient block by composing a bisimulation relation with two observation maps and then taking their coequalizer. This turns behavioral equivalence witnesses into an explicit quotient object.


Lux backend

FunctorFlow.compile_to_lux (Function)
compile_to_lux(D::Diagram; morphism_layers=Dict(), reducer_layers=Dict(),
               comparator_layers=Dict(), morphisms=nothing, reducers=nothing,
               comparators=nothing) -> LuxDiagramModel

Compile a FunctorFlow Diagram to a Lux model for differentiable execution.

Lux layers override callable implementations: any morphism or reducer bound as a LuxCore.AbstractLuxLayer will participate in the autograd graph, while non-neural operations pass through unchanged.

Arguments

  • D: The FunctorFlow diagram to compile
  • morphism_layers: Dict mapping morphism names to Lux layers
  • reducer_layers: Dict mapping reducer names to Lux layers (e.g., KETAttentionLayer)
  • comparator_layers: Dict mapping comparator names to Lux layers
  • morphisms, reducers, comparators: Non-neural callable overrides

Example

using FunctorFlow, Lux, Random

# Build a KET block with learned attention
D = ket_block(; name=:MyKET, reducer=:ket_attention)
model = compile_to_lux(D;
    reducer_layers=Dict(:ket_attention => KETAttentionLayer(64; n_heads=4)))

# Initialize parameters
rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

# Forward pass
inputs = Dict(:Values => randn(Float32, 64, 10),
              :Incidence => Float32.(ones(10, 10)))
result, st = model(inputs, ps, st)
contextualized = result[:values][:aggregate]
FunctorFlow.KETAttentionLayer (Type)
KETAttentionLayer(d_model; n_heads=1, dropout=0.0f0, name=:ket_attention)

Learnable Kan Extension Transformer (KET) reducer as a Lux layer. Implements scaled multi-head dot-product attention:

Attention(Q, K, V) = softmax(QKᵀ / √d_k ⊙ mask) V

where:

  • Q, K, V are learned linear projections of the source values
  • mask comes from the along relation (incidence geometry)
  • The output is projected back to d_model dimensions

This is the neural implementation of a left-Kan extension: universal aggregation via attention over an incidence relation.

FunctorFlow.DiagramDenseLayer (Type)
DiagramDenseLayer(in_dims, out_dims; activation=identity, name=:dense)

A dense (fully-connected) morphism layer for use inside FunctorFlow diagrams. Wraps Lux.Dense with FunctorFlow metadata.

FunctorFlow.DiagramChainLayer (Type)
DiagramChainLayer(layers...; name=:chain)

A sequential composition of Lux layers, corresponding to a FunctorFlow Composition. Each layer's output feeds into the next.

FunctorFlow.LuxDiagramModel (Type)
LuxDiagramModel(diagram; morphism_layers=Dict(), reducer_layers=Dict(),
                comparator_layers=Dict())

A Lux model compiled from a FunctorFlow Diagram. This is the Lux equivalent of TorchCompiledDiagram in the Python implementation.

How it works

  1. Morphisms bound as Lux layers participate in the autograd graph
  2. KET reducers bound as KETAttentionLayer provide learnable attention
  3. Obstruction losses use neural comparators for differentiable constraints
  4. Non-neural operations (symbolic reducers, etc.) pass through unchanged

Example

using FunctorFlow, Lux, Random

D = ket_block(; name=:MyKET, reducer=:ket_attention)
model = compile_to_lux(D;
    reducer_layers=Dict(:ket_attention => KETAttentionLayer(64)))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

# source: (d_model, seq_len), mask: (seq_len, seq_len)
inputs = Dict(
    :Values => randn(Float32, 64, 10),
    :Incidence => Float32.(ones(10, 10))
)
output, st = model(inputs, ps, st)
FunctorFlow.build_ket_lux_model (Function)
build_ket_lux_model(d_model; n_heads=4, reducer=:ket_attention, kwargs...)

Build a KET block with a learned attention reducer as a Lux model. This is the standard pattern for a Kan Extension Transformer head.

Returns (model, diagram) where model is a LuxDiagramModel.
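The returned model is used like any compiled Lux diagram. A sketch with d_model = 64 (illustrative), reusing the input names from the compile_to_lux example above:

```julia
using FunctorFlow, Lux, Random

model, diagram = build_ket_lux_model(64; n_heads = 4)

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

# Values: (d_model, seq_len), Incidence: (seq_len, seq_len)
inputs = Dict(:Values => randn(Float32, 64, 10),
              :Incidence => Float32.(ones(10, 10)))
output, st = model(inputs, ps, st)
```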

FunctorFlow.build_db_lux_model (Function)
build_db_lux_model(d_model; comparator=:l2, kwargs...)

Build a DB square with neural morphisms as a Lux model. The two morphisms f and g are DiagramDenseLayers, and the obstruction loss measures non-commutativity of f∘g vs g∘f using a neural comparator.

Returns (model, diagram).

FunctorFlow.build_gt_lux_model (Function)
build_gt_lux_model(d_model; n_heads=4, kwargs...)

Build a GT (Graph Transformer) neighborhood block as a Lux model. The lift morphism is a DiagramDenseLayer and the aggregation uses KETAttentionLayer.

Returns (model, diagram).

FunctorFlow.build_basket_rocket_lux_model (Function)
build_basket_rocket_lux_model(d_model; n_heads=4, kwargs...)

Build a Lux-backed BASKET → ROCKET planner. Both the drafting and repair stages are instantiated as learnable attention reducers, and the draft/repair consistency loss is switched to a differentiable comparator.

Returns (model, diagram).

FunctorFlow.predict_detach_source (Function)
predict_detach_source(logits, embedding_weights; position_bias=nothing)

Project logits back into embedding space while stopping gradients through the prediction path. This is the reusable helper behind the "predict-detach" pattern used in the vignettes.

  • logits has shape (vocab, seq_len[, batch])
  • embedding_weights has shape (d_model, vocab)
  • position_bias, when provided, is added after the detach boundary so it remains differentiable
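The shape contract above can be sketched with toy dimensions (vocab = 100, d_model = 64, seq_len = 10, all illustrative):

```julia
using FunctorFlow

logits = randn(Float32, 100, 10)   # (vocab, seq_len)
W      = randn(Float32, 64, 100)   # (d_model, vocab)
bias   = zeros(Float32, 64, 10)    # added after the detach boundary

emb = predict_detach_source(logits, W; position_bias = bias)
# emb lives in embedding space: (d_model, seq_len); gradients flow
# through bias but not through the prediction path.
```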

Categorical extensions

FunctorFlow.pullback (Function)
pullback(D1, D2; over, name=:Pullback) -> PullbackResult

Construct the pullback of two diagrams over a shared interface. The resulting diagram contains both sub-diagrams plus interface morphisms (projections) into a shared base object. The pullback is the universal cone: any other diagram mapping compatibly into D1 and D2 factors uniquely through it.

FunctorFlow.pushout (Function)
pushout(D1, D2; along, name=:Pushout) -> PushoutResult

Construct the pushout of two diagrams along a shared sub-object. Creates injection morphisms from each factor into the merged result.

FunctorFlow.product (Function)
product(diagrams...; name=:Product) -> ProductResult

Construct the product of multiple diagrams. Combines independent models with projection morphisms into each factor.

FunctorFlow.coproduct (Function)
coproduct(diagrams...; name=:Coproduct) -> CoproductResult

Construct the coproduct of multiple diagrams. Ensemble / hypothesis aggregation.

FunctorFlow.equalizer (Function)
equalizer(D, f_name, g_name; name=:Equalizer) -> EqualizerResult

Construct the equalizer of two morphisms in a diagram. Enforces f = g. This is closely related to the DB obstruction loss: the equalizer is the sub-object where the two paths agree exactly.

FunctorFlow.coequalizer (Function)
coequalizer(D, f_name, g_name; name=:Coequalizer) -> CoequalizerResult

Construct the coequalizer of two parallel morphisms f, g : A → B in a diagram.

The coequalizer is the universal quotient object Q with a map q : B → Q such that q ∘ f = q ∘ g. It identifies elements of B that are related through f and g.

Where an equalizer finds the subobject of A where f and g agree (a limit), the coequalizer quotients B by forcing f and g to agree (a colimit).

AI interpretation

  • Equivalence classes: collapse representations that the two maps declare equivalent
  • Symmetry quotienting: remove redundant structure by identifying symmetric states
  • Consensus merging: merge outputs that multiple processing paths declare equivalent
FunctorFlow.interventional_expectation (Function)
interventional_expectation(cd::CausalDiagram, obs_data::Dict;
                           density_ratio_fn=nothing) -> Dict

Compute E_do[Y] from observational data using importance weighting. If density_ratio_fn is provided, it is used to compute the ratio ρ(y) = p_do(y) / p_obs(y) for each observation.
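The underlying estimator is self-normalized importance weighting. A toy sketch in plain Julia with a hypothetical density ratio (no FunctorFlow required; the numbers are purely illustrative):

```julia
# E_do[Y] ≈ Σᵢ ρ(yᵢ) yᵢ / Σᵢ ρ(yᵢ), with ρ(y) = p_do(y) / p_obs(y).
ys = [1.0, 2.0, 3.0]
ρ  = y -> y <= 2.0 ? 2.0 : 0.5     # hypothetical density ratio
w  = ρ.(ys)                        # importance weights: [2.0, 2.0, 0.5]
e_do = sum(w .* ys) / sum(w)       # (2 + 4 + 1.5) / 4.5 ≈ 1.67
```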

FunctorFlow.check_coherence (Function)
check_coherence(check::SheafCoherenceCheck) -> NamedTuple

Run the sheaf gluing axiom checks on sections:

  • Locality: each section has non-empty domain
  • Gluing: overlapping sections agree on their intersection
  • Stability: total coherence penalty is bounded

Returns (passed=Bool, locality=Bool, gluing=Bool, stability=Bool, details=Dict).

FunctorFlow.classify_subobject (Function)
classify_subobject(classifier::SubobjectClassifier, inclusion_fn, data) -> Dict

Compute the characteristic morphism for a subobject inclusion. For each element in data, determines whether it belongs to the subobject. Returns a Dict mapping elements to truth values.
