In Julia, FunctorFlow.jl uses Lux.jl as its differentiable neural backend. Diagrams are compiled into Lux models via compile_to_lux, producing standard Lux layers with extractable parameters and automatic differentiation.
In Python, FunctorFlow uses PyTorch via compile_to_torch. The compiled diagram becomes a torch.nn.Module whose morphisms can be arbitrary nn.Module layers. If PyTorch is not installed, you can still bind NumPy implementations to morphisms and use compile_to_callable for non-differentiable forward passes.
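Conceptually, the callable produced by compile_to_callable just chains the bound implementations in composition order. The sketch below is a plain-NumPy analogue of that forward pass, not the FunctorFlow API itself; the encode/activate/decode stand-ins are hypothetical placeholders.

```python
import numpy as np

# Hypothetical stand-ins for implementations one might bind to the morphisms;
# compile_to_callable would chain them in composition order.
encode = lambda x: x @ np.ones((4, 4)) * 0.5   # placeholder "dense" map
activate = lambda x: np.maximum(x, 0.0)        # ReLU
decode = lambda x: x + 1.0                     # placeholder affine map

def pipeline(x):
    # Equivalent of the compiled callable's forward pass: no gradients,
    # just plain function composition.
    return decode(activate(encode(x)))

x = np.random.randn(3, 4)
print(pipeline(x).shape)   # (3, 4)
```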
When PyTorch is available, compile_to_torch turns a diagram into a torch.nn.Module. Morphisms can be nn.Module layers with learnable parameters, and gradients flow through the entire diagram.
Dense morphisms with nn.Linear
```python
if HAS_TORCH:
    torch.manual_seed(42)

    D_torch = Diagram("TorchPipeline")
    D_torch.object("S", kind="value")
    D_torch.morphism("encode", "S", "S")
    D_torch.morphism("activate", "S", "S")
    D_torch.morphism("decode", "S", "S")
    D_torch.compose("encode", "activate", "decode", name="pipeline")

    D_torch.bind_morphism("encode", nn.Linear(4, 4))
    D_torch.bind_morphism("activate", nn.ReLU())
    D_torch.bind_morphism("decode", nn.Linear(4, 4))

    model = compile_to_torch(D_torch)
    print("Model type:", type(model).__name__)
    print("Learnable parameters:", sum(p.numel() for p in model.parameters()))
else:
    print("Skipping — PyTorch not available")
```
```
Model type: TorchCompiledDiagram
Learnable parameters: 40
```
Forward pass
The compiled model accepts a dict mapping object names to tensors and returns a dict of all computed values.
```python
if HAS_TORCH:
    x = torch.randn(3, 4)
    out = model({"S": x})
    print("Output keys:", list(out.keys()))
    for name, val in out.items():
        if isinstance(val, torch.Tensor):
            print(f"  {name}: shape={tuple(val.shape)}, requires_grad={val.requires_grad}")
else:
    print("Skipping — PyTorch not available")
```
Gradients propagate through all neural morphisms, so the pipeline output carries a grad_fn for back-propagation.
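The same behaviour can be seen in plain PyTorch, without FunctorFlow: any tensor produced downstream of learnable parameters records a grad_fn. A minimal sketch (assuming PyTorch is installed; the layer names mirror the diagram above but are otherwise arbitrary):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
encode = nn.Linear(4, 4)
decode = nn.Linear(4, 4)

x = torch.randn(3, 4)                   # leaf input, requires_grad=False
y = decode(torch.relu(encode(x)))       # chain of "morphisms"

# The output of the chain records its computation history:
print(y.requires_grad)                  # True — learnable parameters upstream
print(y.grad_fn is not None)            # True — back-propagation can start here
```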
Inspecting intermediate values
Every morphism’s output is available in the result dict — not just the final composed pipeline. This makes it easy to inspect or visualise intermediate representations.
```python
if HAS_TORCH:
    print("Encoded (first row):", out["encode"][0].detach().numpy())
    print("Activated (first row):", out["activate"][0].detach().numpy())
    print("Decoded / pipeline (first row):", out["pipeline"][0].detach().numpy())
else:
    print("Skipping — PyTorch not available")
```
Since the model is a standard torch.nn.Module, you can compute gradients with the usual PyTorch API.
```python
if HAS_TORCH:
    x = torch.randn(3, 4)
    out = model({"S": x})
    loss = out["pipeline"].sum()
    loss.backward()
    for name, param in model.named_parameters():
        if param.grad is not None:
            print(f"  {name}: grad norm = {param.grad.norm().item():.4f}")
else:
    print("Skipping — PyTorch not available")
```
```
  lowered_morphisms.encode.weight: grad norm = 1.7252
  lowered_morphisms.encode.bias: grad norm = 2.1916
  lowered_morphisms.decode.weight: grad norm = 4.5244
  lowered_morphisms.decode.bias: grad norm = 6.0000
```
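The decode bias norm is easy to verify by hand: with loss = out["pipeline"].sum() over a batch of 3 rows, each of the 4 bias units of the final linear layer receives gradient equal to the batch size, so the norm is √(4 · 3²) = 6. A quick sanity check:

```python
import math

batch, units = 3, 4
grad_per_unit = batch * 1.0   # d(sum)/d(bias_j) = batch size for every unit
print(math.sqrt(units * grad_per_unit ** 2))   # 6.0
```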
Composed Neural Morphisms
FunctorFlow’s compose wires morphisms together so the output of one feeds into the next. This mirrors Julia’s compose! call.
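In plain Python terms, this wiring is ordinary left-to-right function composition. A minimal framework-free sketch of the idea (the compose helper here is illustrative, not FunctorFlow's implementation):

```python
from functools import reduce

def compose(*morphisms):
    """Chain callables left-to-right: compose(f, g, h)(x) == h(g(f(x)))."""
    return reduce(lambda f, g: (lambda x: g(f(x))), morphisms)

pipeline = compose(lambda x: x + 1, lambda x: x * 2, lambda x: x - 3)
print(pipeline(5))   # ((5 + 1) * 2) - 3 = 9
```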
A key strength of the FunctorFlow backend is mixing neural morphisms (learnable nn.Module layers) and symbolic morphisms (plain Python functions) in the same diagram. The compiler handles the routing automatically.
In Julia, compile_to_lux treats unbound morphisms as neural layers and bound morphisms as symbolic; the same pattern applies in Python with compile_to_torch.
```python
if HAS_TORCH:
    torch.manual_seed(42)

    D_mix = Diagram("MixedModel")
    D_mix.object("S", kind="value")

    # Neural: learned encoder
    D_mix.morphism("encode", "S", "S")

    # Symbolic: deterministic L2 normalisation
    D_mix.morphism("normalize", "S", "S")
    D_mix.bind_morphism(
        "normalize",
        lambda x: x / (torch.sqrt(torch.sum(x ** 2, dim=-1, keepdim=True)) + 1e-8),
    )

    # Neural: learned decoder
    D_mix.morphism("decode", "S", "S")

    D_mix.compose("encode", "normalize", "decode", name="pipeline")
    D_mix.bind_morphism("encode", nn.Linear(4, 4))
    D_mix.bind_morphism("decode", nn.Linear(4, 4))

    mixed_model = compile_to_torch(D_mix)
    x = torch.randn(3, 4)
    out = mixed_model({"S": x})
    print("Pipeline output shape:", tuple(out["pipeline"].shape))
    print("Normalised (first row, should be unit norm):")
    normed = out["normalize"][0].detach()
    print(f"  values: {normed.numpy()}")
    print(f"  L2 norm: {torch.norm(normed).item():.6f}")
else:
    print("Skipping — PyTorch not available")
```
```
Pipeline output shape: (3, 4)
Normalised (first row, should be unit norm):
  values: [ 0.77576184  0.39364403 -0.18798792  0.45595905]
  L2 norm: 1.000000
```
The normalize morphism uses a plain lambda (no parameters), while encode and decode use learnable nn.Linear instances. Gradients flow through the symbolic normalisation via standard PyTorch autograd.
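The same point can be checked in isolation with plain PyTorch (assuming it is installed): a parameter-free normalisation function is still fully differentiable under autograd.

```python
import torch

# L2 normalisation as a plain function — no parameters, but differentiable
def normalize(x):
    return x / (torch.sqrt(torch.sum(x ** 2, dim=-1, keepdim=True)) + 1e-8)

x = torch.randn(3, 4, requires_grad=True)
y = normalize(x)
y.sum().backward()

print(torch.norm(y[0]).item())   # ≈ 1.0 — each row has unit norm
print(x.grad is not None)        # True — gradients flow through the function
```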
Julia ↔︎ Python Comparison
| Concept | Julia (Lux) | Python (PyTorch) |
|---|---|---|
| Compile | `compile_to_lux(D)` → `LuxDiagramModel` | `compile_to_torch(D)` → `nn.Module` |
| Dense layer | `DiagramDenseLayer(in, out)` | `nn.Linear(in, out)` |
| Activation | Bound Julia function | `nn.ReLU()` or lambda |
| Parameters | `Lux.setup(rng, model)` | `model.parameters()` |
| Forward pass | `model(inputs, ps, st)` | `model({"obj": tensor})` |
| Gradient | Zygote / Enzyme AD | `loss.backward()` |
| Fallback | `compile_to_callable` | `compile_to_callable` (NumPy) |
Summary
- compile_to_callable works with any Python callable (NumPy, plain math, etc.) — no framework dependency.
- compile_to_torch wraps the diagram as a torch.nn.Module with learnable parameters and full autograd support.
- Morphisms can be nn.Module layers, plain functions, or a mix of both.
- compose chains morphisms so that intermediate values flow correctly through the pipeline.
- All intermediate values are accessible in the output dict, making inspection and debugging straightforward.