# LLMWiki.jl
An LLM-maintained, incrementally-compiled knowledge base that compounds over time.
LLMWiki.jl is a Julia implementation of Karpathy's LLM Wiki pattern. Feed raw sources — markdown files, PDFs, or web pages — into the wiki and an LLM automatically extracts concepts, generates encyclopedia-style articles, cross-links them with [[wikilinks]], and keeps everything up to date as sources change.
Built on AgentFramework.jl for LLM orchestration.
## Key Features
- Incremental compilation — only recompiles sources that changed (SHA-256 change detection)
- Cross-source dependencies — when two sources share a concept, changing one triggers recompilation of both
- Two-phase pipeline — Phase 1 extracts all concepts, Phase 2 generates pages (eliminates order-dependence)
- Bidirectional wikilinks — automatic `[[wikilink]]` insertion with fuzzy title matching
- BM25 search — full-text search over generated wiki pages
- Lint engine — detects broken links, orphaned pages, empty content, stale references
- Multiple providers — Ollama, OpenAI, Azure AI via AgentFramework.jl
- Interactive agent — chat with your wiki using `create_wiki_agent`
- Extensions — optional Mem0.jl (semantic search), SQLite (state backend), RDFLib.jl (knowledge graph)
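The incremental-compilation idea above (SHA-256 change detection) can be sketched in a few lines of plain Julia. This is an illustration only: `source_digest`, `needs_recompile`, and the manifest `Dict` are hypothetical names, not LLMWiki.jl's actual API.

```julia
# Minimal sketch of SHA-256 change detection for incremental compilation.
# A manifest maps each source path to the digest recorded at its last
# compile; a source needs recompiling when its current digest differs.
using SHA

# Hash the raw bytes of a source file to a hex digest string.
source_digest(path::AbstractString) = bytes2hex(sha256(read(path)))

# `manifest` is a Dict of path => previously recorded digest.
# An unknown path (not in the manifest) always needs compiling.
function needs_recompile(manifest::Dict{String,String}, path::AbstractString)
    return get(manifest, path, "") != source_digest(path)
end
```

A real pipeline would also persist the manifest between runs and propagate recompilation across the cross-source dependency graph described above.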
## Quick Start
```julia
using LLMWiki

# Initialize a new wiki
config = default_config("my-wiki")
config.model = "qwen3:8b"
init_wiki(config)

# Add sources and compile
ingest!(config, "path/to/article.md")
compile!(config)

# Search and query
results = search_wiki(config, "memory safety"; method=:bm25)
answer = query_wiki(config, "How does Rust handle memory safety?")
```

See the Getting Started guide for a full walkthrough.
## Documentation Overview
- Getting Started
- Configuration
- Compilation Pipeline
- Search & Query
- Extensions
- Architecture
- Types
- Operations
- Utilities
## License
MIT — Copyright 2026 Simon Frost