LLM Wiki Implementation V1
Ticket #252: LLM Wiki Implementation - Structured AI Knowledge Base
Type: Architecture / Governance / AI Tooling
Affected Component: docs/llm-wiki/, .github/agents/copilot-instructions.md, .github/workflows/wiki-guard.yml, tests/test_wiki_structure.py, mkdocs.yml, specs/006-llm-wiki/
1. Context and Objective
The decision to implement an LLM Wiki came from a simple observation: without an organized knowledge base, the AI starts too far from the answer in every session. It scans a large number of files, consumes tokens unnecessarily, and can produce uneven answers from one conversation to another.
The objective of this initiative is therefore clear: provide the AI with a reliable, structured entry point so it can answer faster, more accurately, and more consistently.
Before this implementation, a documentation-noise cleanup was completed to reduce low-value information. This preparatory work is documented in Ticket #245 - Token Noise Cleanup. This report covers the next phase: implementation of the LLM Wiki.
2. Starting Point and Framing Decisions
The session began with a strategic question: should we add Obsidian (and, by extension, more advanced approaches like GraphRAG) to manage the LLM Wiki effectively?
- Decision made: not for now.
- Why:
  - the current scope of the wiki is not yet large enough;
  - adding another tool now would increase maintenance complexity;
  - the business priority was to deliver a useful knowledge base quickly, then iterate.
- Chosen direction:
  - V1 = simple structure, documentation discipline, automatic guardrails;
  - V2 = content enrichment based on observed real needs.
3. Implemented Solution
The work was delivered in six blocks, following a progressive sequence.
- Structure setup:
  - creation of the wiki sections;
  - integration into the documentation build;
  - explicit marking of sections planned for V2.
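For illustration, the integration into the documentation build could take the shape of an mkdocs.yml navigation fragment like the one below; the page names are hypothetical, since this report does not reproduce the actual navigation tree.

```yaml
# Hypothetical mkdocs.yml nav fragment; page names are illustrative.
nav:
  - Home: index.md
  - LLM Wiki:
      - Entry point: llm-wiki/README.md
      - Architecture decisions: llm-wiki/decisions.md
      - Patterns: llm-wiki/patterns/index.md  # V2 sections marked as stubs
```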
- AI behavior framing:
  - update of the Copilot instructions so the wiki is read first;
  - creation of a clear entry point (the wiki README) to guide information lookup.
- Architecture decision consolidation:
  - centralization of major decisions in one place;
  - objective: avoid re-debating already validated choices and speed up onboarding.
- Recurring pattern setup:
  - formalization of simple practical guides for frequent cases (retries, errors, security, tests);
  - objective: reduce variability in responses and implementations.
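To make the idea concrete, here is a minimal sketch of the kind of snippet a wiki "retries" pattern page could standardize; the helper name, parameters, and backoff policy are illustrative, not the project's actual API.

```python
import time


def call_with_retry(fn, attempts=3, delay=0.1):
    """Call fn(), retrying on any exception up to `attempts` times,
    with simple linear backoff between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # retries exhausted: surface the last error
            time.sleep(delay * attempt)
```

Codifying one such pattern per frequent case is what lets the AI (and new contributors) produce the same shape of code each time instead of improvising.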
- Task-oriented index:
  - creation of a usage-oriented index;
  - objective: find the right page quickly without knowing the technical tree.
- Quality and governance:
  - addition of automatic tests on the wiki structure;
  - addition of a CI guard to prevent code evolution without corresponding knowledge updates.
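The core logic of such a guard can be sketched as follows; the real .github/workflows/wiki-guard.yml is not reproduced in this report, and the guarded path patterns below are assumptions.

```python
from fnmatch import fnmatch

GUARDED_PATTERNS = ["src/*"]    # assumed code paths that require wiki updates
WIKI_PREFIX = "docs/llm-wiki/"  # where the knowledge base lives


def wiki_update_required(changed_files):
    """Return True when guarded code changed without any wiki update,
    i.e. when the CI guard should fail the pull request."""
    touches_code = any(
        fnmatch(path, pattern)
        for path in changed_files
        for pattern in GUARDED_PATTERNS
    )
    touches_wiki = any(path.startswith(WIKI_PREFIX) for path in changed_files)
    return touches_code and not touches_wiki
```

In a workflow, the changed-file list would come from the pull-request diff, and a True result would fail the check until a wiki page is touched alongside the code.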
4. Validation, Evidence, and Tests
The initiative was validated along three axes: stability, publication, and operational usefulness.
- Stability: the test suite stayed green, with the new wiki checks included.
- Publication: the documentation build generates wiki pages correctly.
- Usefulness: before/after timing measurements were run on five standard LLM requests.
Before/After Measurement (SC-009)
- Observations:
  - Baseline T007 (without wiki): 105 seconds on average
  - Post-deployment T024 (with wiki): 81 seconds on average
  - Average observed gain: -23%
The gain is real but below the initial 40% target. Two of the five test requests reached the expected level, while the other three remained limited because their corresponding sections are still V2 stubs.
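The reported average gain follows directly from the two measured averages:

```python
# Recomputing the reported average gain from the two measurements.
baseline_s = 105  # T007 average response time without the wiki, in seconds
post_s = 81       # T024 average response time with the wiki, in seconds

gain = (post_s - baseline_s) / baseline_s  # relative change vs. baseline
print(f"{gain:.0%}")  # prints -23%
```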
- Validation conclusion:
  - V1 is validated and useful;
  - full target performance depends on V2 completion.
5. Business Impact
Positive effects already visible:
- less dispersion: key decisions are no longer scattered;
- more consistency: the AI starts from a stable source, not a wide and variable scan;
- better execution speed: measurable time gains on best-covered questions;
- stronger governance: the CI guard limits documentation drift over time.
In short: we transformed an implicit dependency on project history into an explicit, maintainable, reusable asset.
6. Acknowledged V1 Limits
The observed limits are known and under control:
- V1 does not yet cover some areas in depth (data, workflows, operations, lessons learned);
- the baseline measurement was reconstructed in a post-deployment context, so it must be interpreted with care;
- SC-009 is therefore only partially met at this stage.
These limits do not call the value of V1 into question; they clearly define the V2 scope.
7. Trajectory and Next Step (V2)
Next step: execute the V2 plan.
V2 priorities:
- complete the sections still pending;
- enrich the runbooks for the three underperforming questions;
- rerun the before/after measurement against the full 40% performance target.
Objective: move from an already useful wiki to a fully performant wiki across all reference use cases.
8. Conclusion
V1 of the LLM Wiki is a pragmatic success: we prioritized clarity, speed of impact, and maintenance discipline.
The system is live in documentation production, the evidence is available, the limits are explicit, and the V2 path is clear.