
AI Markmap Agent

Status: Canonical Reference
Scope: AI Markmap Agent system in tools/mindmaps/ai-markmap-agent/
Last Updated: January 4, 2026 15:47:39
Created: December 13, 2025 13:40:17
Note: All filenames in this documentation follow the kebab-case naming convention.

A multi-expert refinement system for Markmap improvement using LangGraph.


Overview

This system refines existing high-quality Markmaps through multi-expert review and consensus-based discussion. Instead of generating from scratch, it starts with a baseline Markmap and improves it through domain-specific expert analysis.

Key Features

| Feature | Description |
|---|---|
| Refinement Mode | Start from a high-quality baseline, not from scratch |
| Domain Experts | Architect, Professor, and Engineer perspectives |
| Consensus Voting | Programmatic majority voting (⅔ required) |
| Natural Language | Suggestions in natural language, not rigid formats |
| Efficient API Calls | Only 2N + 1 calls (N = number of experts) |

Core Philosophy

"Refinement, Not Creation"

| Old Approach | New Approach |
|---|---|
| Create structure from data | Start from a high-quality baseline |
| YAML intermediate format | Work directly with Markmap |
| Generic strategist roles | Domain-specific experts |
| AI-based integration | Programmatic consensus |

Why Refinement is Better

  1. Quality Preservation - Don't reinvent what already works well
  2. Focused Discussion - Experts discuss "what to improve", not "what to create"
  3. Natural Language - AI excels at understanding and generating natural text
  4. Efficient - Fewer API calls, faster iteration

Architecture

┌─────────────────────────────────────────────────────────────────────────────┐
│                              AI Markmap Agent                               │
│                    Refinement Mode — 2-Round Discussion                     │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ═════════════════════════════════════════════════════════════════════════  │
│  Phase 0: Load Baseline                                                     │
│  ═════════════════════════════════════════════════════════════════════════  │
│                                                                             │
│  ┌─────────────────────────────────┐                                        │
│  │ Baseline Markmap                │                                        │
│  │ (e.g., neetcode-ontology-ai.md) │                                        │
│  └────────────┬────────────────────┘                                        │
│               │                                                             │
│  ═════════════════════════════════════════════════════════════════════════  │
│  Phase 1: Independent Review (N parallel API calls)                         │
│  ═════════════════════════════════════════════════════════════════════════  │
│               │                                                             │
│   ┌───────────┴───────┬───────────────────┐                                 │
│   ▼                   ▼                   ▼                                 │
│   ┌──────────────┐    ┌──────────────┐    ┌──────────────┐                  │
│   │ 🏗️ Architect │    │ 📚 Professor │    │ ⚙️ Engineer  │                  │
│   │ 5-10 ideas   │    │ 5-10 ideas   │    │ 5-10 ideas   │                  │
│   └──────┬───────┘    └──────┬───────┘    └──────┬───────┘                  │
│          │                   │                   │                          │
│          └───────────────────┼───────────────────┘                          │
│                              │                                              │
│  ═════════════════════════════════════════════════════════════════════════  │
│  Phase 2: Full Discussion (N parallel API calls)                            │
│  ═════════════════════════════════════════════════════════════════════════  │
│                              │                                              │
│   Each expert sees ALL suggestions, votes: ✅ / ⚠️ / ❌                      │
│   Each expert outputs their Final Adoption List                             │
│                              │                                              │
│  ═════════════════════════════════════════════════════════════════════════  │
│  Phase 3: Consensus Calculation (Code, not AI)                              │
│  ═════════════════════════════════════════════════════════════════════════  │
│                              │                                              │
│   Majority voting: 2/3 (≥67%) agreement required                            │
│   ✅ Adopted: A1, A3, P1, E1, E4                                             │
│   ❌ Rejected: A2, P2, P3, E2, E3                                            │
│                              │                                              │
│  ═════════════════════════════════════════════════════════════════════════  │
│  Phase 4: Writer (1 API call)                                               │
│  ═════════════════════════════════════════════════════════════════════════  │
│                              │                                              │
│   Apply adopted improvements to baseline → raw markdown (saved to debug)    │
│                              │                                              │
│  ═════════════════════════════════════════════════════════════════════════  │
│  Phase 5: Translation                                                       │
│  ═════════════════════════════════════════════════════════════════════════  │
│                              │                                              │
│   Translate raw markdown → translated raw markdown (saved to debug)         │
│                              │                                              │
│  ═════════════════════════════════════════════════════════════════════════  │
│  Phase 6: Post-Processing                                                   │
│  ═════════════════════════════════════════════════════════════════════════  │
│                              │                                              │
│   Normalize links for BOTH English and translated outputs                   │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

API Call Efficiency

| Experts (N) | API Calls | Sequential Batches |
|---|---|---|
| 3 (default) | 7 | 3 (fixed) |
| 5 | 11 | 3 (fixed) |
| 7 | 15 | 3 (fixed) |

Workflow

Phase 0: Load Baseline

Load an existing high-quality Markmap as the starting point.

Phase 1: Independent Review

Each expert independently reviews the baseline and suggests 5-10 improvements:

  • No group influence
  • Natural language suggestions
  • Focus on their domain expertise

Phase 2: Full Discussion

Each expert:

  1. Sees all suggestions from all experts
  2. Votes on each suggestion (✅ Agree / ⚠️ Modify / ❌ Disagree)
  3. Outputs their final adoption list

Phase 3: Consensus Calculation

Programmatic, not AI:

  • Count votes for each suggestion
  • Adopt if ≥67% (⅔) agreement
  • Reject otherwise
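
For intuition, here is a minimal Python sketch of that programmatic step (the vote labels and data shapes are illustrative assumptions; the actual logic lives in src/consensus.py):

```python
from collections import defaultdict

CONSENSUS_THRESHOLD = 2 / 3  # "2/3 required" (config.yaml uses consensus_threshold: 0.67)

def calculate_consensus(votes_by_expert: dict) -> tuple:
    """Adopt a suggestion when at least 2/3 of experts voted 'agree'.

    votes_by_expert maps expert name -> {suggestion_id: "agree" | "modify" | "disagree"}.
    """
    agree_counts = defaultdict(int)
    all_ids = set()
    for votes in votes_by_expert.values():
        for suggestion_id, vote in votes.items():
            all_ids.add(suggestion_id)
            if vote == "agree":
                agree_counts[suggestion_id] += 1

    num_experts = len(votes_by_expert)
    adopted, rejected = [], []
    for suggestion_id in sorted(all_ids):
        ratio = agree_counts[suggestion_id] / num_experts
        (adopted if ratio >= CONSENSUS_THRESHOLD else rejected).append(suggestion_id)
    return adopted, rejected

# 3 experts voting on two suggestions: A1 reaches 2/3 agreement, A2 does not.
votes = {
    "architect": {"A1": "agree", "A2": "agree"},
    "professor": {"A1": "agree", "A2": "disagree"},
    "engineer":  {"A1": "modify", "A2": "disagree"},
}
print(calculate_consensus(votes))  # (['A1'], ['A2'])
```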

Phase 4: Writer

Applies adopted improvements surgically to the baseline (1 API call):

  • Minimal changes
  • Preserves existing quality
  • Verifies links and formatting
  • Outputs raw markdown (no post-processing)
  • Saves to debug output for inspection

Phase 5: Translation

  • Translates writer outputs (raw markdown)
  • Both original and translated outputs saved to debug
  • Outputs raw markdown (no post-processing)

Phase 6: Post-Processing

  • Processes both English and translated outputs
  • Link validation and normalization
  • Automatic LeetCode URL generation for all languages
  • GitHub solution link addition
  • Comparison file generation (before/after for each language)

Installation

# Create virtual environment
python -m venv venv

# Activate (Windows)
.\venv\Scripts\activate

# Activate (Unix/macOS)
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Install main project packages (required for post-processing)
# This makes leetcode_datasource available for link normalization
pip install -e ../../../

!!! warning "Post-processing requires main project packages"

    The post-processing phase uses leetcode_datasource to normalize LeetCode problem links. If you see ModuleNotFoundError: No module named 'leetcode_datasource', run:

    ```bash
    # From ai-markmap-agent directory, with venv activated
    pip install -e ../../../
    ```

Usage

Basic Usage

# Run with default baseline
python main.py

# Specify a baseline file
python main.py --baseline path/to/markmap.md

# Dry run (load data only)
python main.py --dry-run

Translation Only

Translate an existing Markmap without running the full pipeline:

# Translate latest English output to zh-TW (Traditional Chinese - Taiwan)
python translate_only.py

# Translate specific file
python translate_only.py --input path/to/file_en.md

# Custom source/target languages
python translate_only.py --source en --target zh-TW  # Language code uses zh-TW, but filenames use zh-tw

# Also generate HTML
python translate_only.py --html

How to Output neetcode-ontology-agent-evolved-zh-tw.md

The translate_only.py script automatically generates the translated file with the correct naming convention. Here's how to use it:

Method 1: Translate Latest English Output (Recommended)

If you have already generated the English version (neetcode-ontology-agent-evolved-en.md), the script will automatically find it and translate it:

cd tools/mindmaps/ai-markmap-agent

# This will:
# 1. Find the latest English output (from version history or final output)
# 2. Translate it to zh-TW using gpt-5.2
# 3. Save as neetcode-ontology-agent-evolved-zh-tw.md in the final output directory
python translate_only.py

The output will be saved to:

  • Markdown: ../../docs/mindmaps/neetcode-ontology-agent-evolved-zh-tw.md
  • HTML (if using --html): ../../docs/pages/mindmaps/neetcode-ontology-agent-evolved-zh-tw.html

Method 2: Translate a Specific File

If you want to translate a specific English file:

Important:

  • CLI Tool: Can be run from any directory; relative paths are resolved relative to the script directory
  • Default output location: When using --output, files are saved to the specified path
  • Auto-detection: Without --output, files are saved to docs/mindmaps/ (configured in config.yaml)
  • Version history: The main pipeline also saves to outputs/versions/v1/ for tracking, but translate_only.py only saves to the final location

Working with docs/mindmaps/ directory:

Option 1: From Script Directory (Recommended)

Unix/macOS:

# Navigate to the script directory
cd tools/mindmaps/ai-markmap-agent

# Translate a specific file (auto-detects output to docs/mindmaps/)
python translate_only.py --input ../../docs/mindmaps/neetcode-ontology-agent-evolved-en.md

# Or with explicit output path to docs/mindmaps/
python translate_only.py \
    --input ../../docs/mindmaps/neetcode-ontology-agent-evolved-en.md \
    --output ../../docs/mindmaps/neetcode-ontology-agent-evolved-zh-tw.md

# Translate and generate HTML in one step
python translate_only.py \
    --input ../../docs/mindmaps/neetcode-ontology-agent-evolved-en.md \
    --output ../../docs/mindmaps/neetcode-ontology-agent-evolved-zh-tw.md \
    --html

Windows PowerShell:

# Navigate to the script directory
cd tools\mindmaps\ai-markmap-agent

# Translate a specific file (auto-detects output to docs\mindmaps\)
python translate_only.py --input ..\..\docs\mindmaps\neetcode-ontology-agent-evolved-en.md

# Or with explicit output path to docs\mindmaps\
python translate_only.py `
    --input ..\..\docs\mindmaps\neetcode-ontology-agent-evolved-en.md `
    --output ..\..\docs\mindmaps\neetcode-ontology-agent-evolved-zh-tw.md

# Translate and generate HTML in one step
python translate_only.py `
    --input ..\..\docs\mindmaps\neetcode-ontology-agent-evolved-en.md `
    --output ..\..\docs\mindmaps\neetcode-ontology-agent-evolved-zh-tw.md `
    --html

Option 2: From Project Root (CLI Tool Mode)

Unix/macOS:

# From project root - relative paths resolved relative to script directory
python tools/mindmaps/ai-markmap-agent/translate_only.py \
    --input docs/mindmaps/neetcode-ontology-agent-evolved-en.md \
    --output docs/mindmaps/neetcode-ontology-agent-evolved-zh-tw.md

# With HTML generation
python tools/mindmaps/ai-markmap-agent/translate_only.py \
    --input docs/mindmaps/neetcode-ontology-agent-evolved-en.md \
    --output docs/mindmaps/neetcode-ontology-agent-evolved-zh-tw.md \
    --html

Windows PowerShell:

# From project root - relative paths resolved relative to script directory
python tools\mindmaps\ai-markmap-agent\translate_only.py `
    --input docs\mindmaps\neetcode-ontology-agent-evolved-en.md `
    --output docs\mindmaps\neetcode-ontology-agent-evolved-zh-tw.md

# With HTML generation
python tools\mindmaps\ai-markmap-agent\translate_only.py `
    --input docs\mindmaps\neetcode-ontology-agent-evolved-en.md `
    --output docs\mindmaps\neetcode-ontology-agent-evolved-zh-tw.md `
    --html

Option 3: Using Absolute Paths

Unix/macOS:

# From any directory using absolute paths
python /path/to/neetcode/tools/mindmaps/ai-markmap-agent/translate_only.py \
    --input /path/to/neetcode/docs/mindmaps/neetcode-ontology-agent-evolved-en.md \
    --output /path/to/neetcode/docs/mindmaps/neetcode-ontology-agent-evolved-zh-tw.md

Windows PowerShell:

# From any directory using absolute paths
python C:\Developer\program\python\neetcode\tools\mindmaps\ai-markmap-agent\translate_only.py `
    --input C:\Developer\program\python\neetcode\docs\mindmaps\neetcode-ontology-agent-evolved-en.md `
    --output C:\Developer\program\python\neetcode\docs\mindmaps\neetcode-ontology-agent-evolved-zh-tw.md

Note on Output Locations:

  • Version History (main pipeline only): tools/mindmaps/ai-markmap-agent/outputs/versions/v1/ (for tracking changes)
  • Final Output (translate_only.py): docs/mindmaps/ (for actual use)
  • When using --output, the file is saved to the specified path
  • When not using --output, it auto-detects based on config (saves to docs/mindmaps/)
  • Path Resolution: Relative paths are always resolved relative to the script directory (tools/mindmaps/ai-markmap-agent/), not the current working directory

Method 3: Translate and Generate HTML in One Step

# Translate and generate HTML output
python translate_only.py --html

This will:

  1. Translate the English markdown to Traditional Chinese (Taiwan)
  2. Generate the HTML file with the same name

Configuration

The translation uses settings from config/config.yaml:

  • Model: gpt-5.2 (configured in output.naming.languages.zh-tw.translator_model)
  • Source Language: en (default)
  • Target Language: zh-TW (default, language code format; filenames use zh-tw)
  • Max Tokens: translator_max_tokens (configured in output.naming.languages.zh-tw.translator_max_tokens)
      • gpt-5.2: Recommended 128000 (max output capacity: 128,000 tokens; context window: 400,000 tokens)
      • gpt-4o: Recommended 16384 (max output typically 16,384 tokens)
      • gpt-4: Recommended 8192 (max output typically 8,192 tokens)
      • Default: 8192 if not specified
  • Prompt: Uses prompts/translator/zh-tw-translator-behavior.md with comprehensive Taiwan DSA terminology rules

Important: Set translator_max_tokens appropriately for your model. If the value is too small, the API may return empty responses for large translations.

Custom Model Override

You can override the model for a single translation:

python translate_only.py --model gpt-4o

Error Handling

All translation errors include detailed request information for debugging:

  • Model name and configuration
  • Prompt size (chars and estimated tokens)
  • Content size
  • Max output tokens setting
  • Full request/response saved to debug output files

Notes

  • The script reads from config/config.yaml for output directory settings
  • Translation prompt enforces Taiwan-specific terminology (not Mainland China terms)
  • API keys are requested at runtime and cleared after execution
  • The output filename automatically replaces the language suffix (e.g., -en → -zh-tw in kebab-case)
  • Always check debug output files when errors occur - they contain the full API request/response

Standalone HTML Converter

For converting Markdown files to HTML without running the full pipeline:

Important:

  • CLI Tool: Can be run from any directory; relative paths are resolved relative to the script directory
  • Supports both relative and absolute paths
  • Template paths are resolved relative to the script directory
  • Configuration file support: Use convert_to_html.toml for preset conversions and meta description mapping

Configuration File (convert_to_html.toml):

The converter supports a configuration file for:

  • Preset conversions: Define common conversion tasks with input/output/title
  • Meta description mapping: Map markdown files to meta description files
  • Auto-detection: Automatically finds meta description files in the tools/mindmaps/meta/ directory

Basic Usage:

Option 1: From Script Directory

Unix/macOS:

# Navigate to script directory
cd tools/mindmaps/ai-markmap-agent

# Basic conversion (output: input.html in same directory)
python convert_to_html.py input.md

# Specify output file
python convert_to_html.py input.md -o output.html

# Custom title
python convert_to_html.py input.md -t "My Mind Map"

# Use custom template
python convert_to_html.py input.md --template templates/custom.html

Windows PowerShell:

# Navigate to script directory
cd tools\mindmaps\ai-markmap-agent

# Basic conversion (output: input.html in same directory)
python convert_to_html.py input.md

# Specify output file
python convert_to_html.py input.md -o output.html

# Custom title
python convert_to_html.py input.md -t "My Mind Map"

# Use custom template
python convert_to_html.py input.md --template templates\custom.html

Using Configuration Presets:

You can define preset conversion tasks in convert_to_html.toml:

# List available presets
python convert_to_html.py --list-presets

# Use a single preset
python convert_to_html.py --preset neetcode-ontology-agent-evolved-en
python convert_to_html.py --preset neetcode-ontology-agent-evolved-zh-tw

# Process all presets at once (recommended for batch conversion)
python convert_to_html.py --all

The --all option processes all presets defined in the configuration file sequentially:

  • Automatically loads input/output/title from each preset
  • Loads corresponding meta description files if configured
  • Generates all HTML files in one command
  • Shows a summary of successful and failed conversions

Meta Description Support:

The converter automatically loads meta descriptions from the meta_description_file field in each preset:

  • File path is constructed as: {meta_descriptions_dir}/{meta_description_file}
  • Default directory: tools/mindmaps/core/meta/
  • If meta_description_file is not specified or empty, no meta description tag will be added to the HTML
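
As a rough illustration, a preset entry in convert_to_html.toml might look like the sketch below. The key names mirror the fields described above, but the exact schema (and the meta description filename) is an assumption; consult the shipped convert_to_html.toml for the authoritative layout.

```toml
# Illustrative sketch of convert_to_html.toml; not the authoritative schema.

# Directory holding meta description files (see "Meta Description Support")
meta_descriptions_dir = "tools/mindmaps/core/meta"

# One table per preset, selectable via --preset <name> or processed together via --all
[presets.neetcode-ontology-agent-evolved-en]
input = "../../docs/mindmaps/neetcode-ontology-agent-evolved-en.md"
output = "../../docs/pages/mindmaps/neetcode-ontology-agent-evolved-en.html"
title = "NeetCode Agent Evolved Mindmap (EN)"
meta_description_file = "neetcode-ontology-agent-evolved-en-meta.md"  # hypothetical filename
```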

Option 2: From Project Root (CLI Tool Mode)

Unix/macOS:

# From project root - relative paths resolved relative to script directory
python tools/mindmaps/ai-markmap-agent/convert_to_html.py \
    docs/mindmaps/neetcode-ontology-agent-evolved-en.md \
    -o docs/pages/mindmaps/neetcode-ontology-agent-evolved-en.html \
    -t "NeetCode Agent Evolved Mindmap (EN)"

Windows PowerShell:

# From project root - relative paths resolved relative to script directory
python tools\mindmaps\ai-markmap-agent\convert_to_html.py `
    docs\mindmaps\neetcode-ontology-agent-evolved-en.md `
    -o docs\pages\mindmaps\neetcode-ontology-agent-evolved-en.html `
    -t "NeetCode Agent Evolved Mindmap (EN)"

Working with docs/mindmaps/ directory:

Option 1: From Script Directory

Unix/macOS:

# Navigate to the tool directory
cd tools/mindmaps/ai-markmap-agent

# Convert English version
python convert_to_html.py \
    ../../docs/mindmaps/neetcode-ontology-agent-evolved-en.md \
    -o ../../docs/pages/mindmaps/neetcode-ontology-agent-evolved-en.html \
    -t "NeetCode Agent Evolved Mindmap (EN)"

# Convert Traditional Chinese version
python convert_to_html.py \
    ../../docs/mindmaps/neetcode-ontology-agent-evolved-zh-tw.md \
    -o ../../docs/pages/mindmaps/neetcode-ontology-agent-evolved-zh-tw.html \
    -t "NeetCode Agent Evolved Mindmap (繁體中文)"

Windows PowerShell:

# Navigate to the tool directory
cd tools\mindmaps\ai-markmap-agent

# Convert English version
python convert_to_html.py `
    ..\..\docs\mindmaps\neetcode-ontology-agent-evolved-en.md `
    -o ..\..\docs\pages\mindmaps\neetcode-ontology-agent-evolved-en.html `
    -t "NeetCode Agent Evolved Mindmap (EN)"

# Convert Traditional Chinese version
python convert_to_html.py `
    ..\..\docs\mindmaps\neetcode-ontology-agent-evolved-zh-tw.md `
    -o ..\..\docs\pages\mindmaps\neetcode-ontology-agent-evolved-zh-tw.html `
    -t "NeetCode Agent Evolved Mindmap (繁體中文)"

Option 2: From Project Root (CLI Tool Mode)

Unix/macOS:

# From project root - relative paths resolved relative to script directory
python tools/mindmaps/ai-markmap-agent/convert_to_html.py \
    docs/mindmaps/neetcode-ontology-agent-evolved-en.md \
    -o docs/pages/mindmaps/neetcode-ontology-agent-evolved-en.html \
    -t "NeetCode Agent Evolved Mindmap (EN)"

python tools/mindmaps/ai-markmap-agent/convert_to_html.py \
    docs/mindmaps/neetcode-ontology-agent-evolved-zh-tw.md \
    -o docs/pages/mindmaps/neetcode-ontology-agent-evolved-zh-tw.html \
    -t "NeetCode Agent Evolved Mindmap (繁體中文)"

Windows PowerShell:

# From project root - relative paths resolved relative to script directory
python tools\mindmaps\ai-markmap-agent\convert_to_html.py `
    docs\mindmaps\neetcode-ontology-agent-evolved-en.md `
    -o docs\pages\mindmaps\neetcode-ontology-agent-evolved-en.html `
    -t "NeetCode Agent Evolved Mindmap (EN)"

python tools\mindmaps\ai-markmap-agent\convert_to_html.py `
    docs\mindmaps\neetcode-ontology-agent-evolved-zh-tw.md `
    -o docs\pages\mindmaps\neetcode-ontology-agent-evolved-zh-tw.html `
    -t "NeetCode Agent Evolved Mindmap (繁體中文)"

This tool is completely independent of the main pipeline and only requires:

  • Python 3.10+
  • The jinja2 package

It can be used to convert any Markmap Markdown file to interactive HTML.

Note on Path Resolution:

  • Relative paths are always resolved relative to the script directory (tools/mindmaps/ai-markmap-agent/), not the current working directory
  • This allows the tools to work as CLI tools from any directory
  • Absolute paths work as expected


Batch conversion script (save as convert_all_mindmaps.sh or .bat):

Unix/macOS:

#!/bin/bash
cd "$(dirname "$0")"

# Ensure output directory exists
mkdir -p ../../docs/pages/mindmaps

# Convert English
python convert_to_html.py \
    ../../docs/mindmaps/neetcode-ontology-agent-evolved-en.md \
    -o ../../docs/pages/mindmaps/neetcode-ontology-agent-evolved-en.html \
    -t "NeetCode Agent Evolved Mindmap (EN)"

# Convert Traditional Chinese
python convert_to_html.py \
    ../../docs/mindmaps/neetcode-ontology-agent-evolved-zh-tw.md \
    -o ../../docs/pages/mindmaps/neetcode-ontology-agent-evolved-zh-tw.html \
    -t "NeetCode Agent Evolved Mindmap (繁體中文)"

echo "✅ All conversions complete!"

Windows:

@echo off
cd /d "%~dp0"

REM Ensure output directory exists
if not exist "..\..\docs\pages\mindmaps" mkdir "..\..\docs\pages\mindmaps"

REM Convert English
python convert_to_html.py ^
    ..\..\docs\mindmaps\neetcode-ontology-agent-evolved-en.md ^
    -o ..\..\docs\pages\mindmaps\neetcode-ontology-agent-evolved-en.html ^
    -t "NeetCode Agent Evolved Mindmap (EN)"

REM Convert Traditional Chinese
python convert_to_html.py ^
    ..\..\docs\mindmaps\neetcode-ontology-agent-evolved-zh-tw.md ^
    -o ..\..\docs\pages\mindmaps\neetcode-ontology-agent-evolved-zh-tw.html ^
    -t "NeetCode Agent Evolved Mindmap (繁體中文)"

echo ✅ All conversions complete!

Integration with Pipeline

The standalone tool is automatically used by the main pipeline. The pipeline calls convert_to_html.py programmatically to generate HTML files. This maintains decoupling (the tool can run independently) while enabling integration (the pipeline calls it automatically).

The HTML template is stored in templates/markmap.html and can be customized without modifying code.
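
To illustrate the decoupling, a pipeline step could invoke the converter roughly as follows (a sketch using the documented CLI flags; the actual call inside the pipeline code may differ):

```python
import subprocess
from pathlib import Path

SCRIPT = Path("tools/mindmaps/ai-markmap-agent/convert_to_html.py")

def render_html(markdown_path: str, html_path: str, title: str) -> None:
    """Run the standalone converter as a subprocess with the documented flags."""
    subprocess.run(
        ["python", str(SCRIPT), markdown_path, "-o", html_path, "-t", title],
        check=True,  # raise CalledProcessError if the conversion fails
    )

render_html(
    "docs/mindmaps/neetcode-ontology-agent-evolved-en.md",
    "docs/pages/mindmaps/neetcode-ontology-agent-evolved-en.html",
    "NeetCode Agent Evolved Mindmap (EN)",
)
```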

API Keys

API keys are entered at runtime and never stored:

python main.py

# You'll be prompted:
# Enter OPENAI API Key: ********
#   ✓ OPENAI API key accepted

Skip API key prompts:

python main.py --no-openai
python main.py --no-anthropic

Configuration

All settings in config/config.yaml.

Expert Configuration

experts:
  enabled:
    - "architect"
    - "professor"
    - "engineer"

  suggestions:
    min_per_expert: 5
    max_per_expert: 10

  definitions:
    architect:
      name: "Top Software Architect"
      emoji: "🏗️"
      model: "gpt-4o"
      focus_areas:
        - "API Kernel abstraction"
        - "Pattern relationships"
        - "Code template reusability"

Refinement Scope

Control what can be changed:

refinement_scope:
  allowed_changes:
    structure:
      enabled: true
      max_depth_change: 1
    content:
      add_content: true
      remove_content: true
      modify_content: true
    problems:
      add_problems: true
      remove_problems: false  # Conservative
      reorder_problems: true

Workflow Settings

workflow:
  discussion_rounds: 2
  parallel_execution: true
  consensus_threshold: 0.67  # 2/3 required

Expert Roles

🏗️ Top Software Architect

Focus: API design, modularity, system mapping

Reviews for:

  • Clean API Kernel abstractions
  • Pattern composability
  • Code template reusability
  • System design connections

📚 Distinguished Algorithm Professor

Focus: Correctness, pedagogy, theory

Reviews for:

  • Concept accuracy
  • Learning progression
  • Complexity analysis
  • Invariant descriptions

⚙️ Senior Principal Engineer

Focus: Practical value, interviews, trade-offs

Reviews for:

  • Interview frequency
  • Real-world applications
  • Trade-off explanations
  • Knowledge discoverability


Project Structure

tools/mindmaps/ai-markmap-agent/
├── config/
│   └── config.yaml              # Main configuration
├── prompts/
│   ├── experts/                 # Expert prompts
│   │   ├── architect-persona.md
│   │   ├── architect-behavior.md
│   │   ├── professor-persona.md
│   │   ├── professor-behavior.md
│   │   ├── engineer-persona.md
│   │   ├── engineer-behavior.md
│   │   └── discussion-behavior.md
│   └── writer/
│       ├── writer-persona.md
│       ├── writer-behavior.md
│       └── markmap-format-guide.md
├── src/
│   ├── agents/
│   │   ├── base_agent.py        # Base agent class
│   │   ├── expert.py            # Expert agents
│   │   ├── writer.py            # Writer agent
│   │   └── translator.py        # Translator agent
│   ├── consensus.py             # Consensus calculation (code)
│   ├── graph.py                 # LangGraph workflow
│   ├── config_loader.py         # Configuration loading
│   └── ...
├── main.py                      # Entry point
├── outputs/
│   ├── versions/                # Version history (v1, v2, ...)
│   ├── debug/                   # Debug logs per run
│   └── intermediate/            # Intermediate outputs
└── README.md

Output

Output Files

Final Markmaps are saved to:

  • Markdown: docs/mindmaps/
  • HTML: docs/pages/mindmaps/

Filename format: neetcode-ontology-agent-evolved-{lang}.{ext} (kebab-case)

Examples:

  • neetcode-ontology-agent-evolved-en.md
  • neetcode-ontology-agent-evolved-zh-tw.html

Version History

Each run saves a versioned copy to outputs/versions/:

outputs/versions/
├── v1/
│   ├── neetcode-ontology-agent-evolved-en.md
│   └── neetcode-ontology-agent-evolved-zh-tw.md
├── v2/
│   └── ...
└── v3/
    └── ...

Note: All filenames follow kebab-case naming convention as specified in Documentation Naming Convention.

Version numbers auto-increment: v1, v2, v3, ...
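
A minimal sketch of how that auto-increment can be computed (an illustration only; the pipeline's own versioning code may differ):

```python
import re
from pathlib import Path

def next_version_dir(versions_root: Path) -> Path:
    """Scan the versions directory for vN subdirectories and return vN+1."""
    highest = 0
    if versions_root.exists():
        for entry in versions_root.iterdir():
            match = re.fullmatch(r"v(\d+)", entry.name)
            if entry.is_dir() and match:
                highest = max(highest, int(match.group(1)))
    return versions_root / f"v{highest + 1}"

# With outputs/versions/v1 and v2 already present, this yields outputs/versions/v3
print(next_version_dir(Path("outputs/versions")))
```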

Versioning Modes

Configure in config/config.yaml:

output:
  versioning:
    enabled: true
    directory: "outputs/versions"
    mode: "continue"      # continue | reset
    prompt_on_reset: true

| Mode | Behavior |
|---|---|
| continue | Load from the latest version (vN), produce vN+1 |
| reset | Start fresh from input.baseline.path, produce v1 |

Reset mode prompts for confirmation. Old versions are deleted only after the pipeline completes successfully (safe: if the pipeline fails, old versions are preserved).

Resume Mode

Resume mode allows you to continue execution from a previous pipeline run, supporting:

  • Reusing completed stage outputs (saves tokens and time)
  • Re-running from a specific stage (debug-friendly)
  • Not overwriting original run data (generates a new regen run)

Resume vs Reset

Important distinction:

  • --resume: Reuses debug outputs from previous runs to save API calls. This is about pipeline execution (whether to run new API calls or reuse existing results).
  • versioning.mode: reset: Deletes old version directories and starts fresh. This is about version management (how to organize final output versions).

These two features are independent:

  • You can use --resume even when versioning.mode: reset is set
  • Resume mode reuses debug outputs regardless of versioning mode
  • Versioning reset only affects final output directories, not debug outputs
  • When resuming, versioning reset prompts are skipped (reset applies to final output only)

Usage

Method 1: Interactive Resume Mode

python main.py --resume

After startup, it will:

  1. Scan all previous runs under outputs/debug/
  2. Display them sorted by time (newest first)
  3. Let you select the run to resume
  4. Ask whether to reuse each stage's output, one by one

Method 2: Start from a Specific Stage

python main.py --resume --from-stage writer

This will automatically:

  • Select the latest run
  • Reuse outputs from expert_review, full_discussion, and consensus
  • Re-run from the writer stage

Supported stages: expert_review, full_discussion, consensus, writer, translate, post_process

Run Naming Rules

  • Original run: run_YYYYMMDD_HHMMSS/
  • Resume from original run: run_YYYYMMDD_HHMMSS_regen_1/
  • Resume again: run_YYYYMMDD_HHMMSS_regen_2/

Important: Original run data is never overwritten, all new outputs are in regen directories.
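
A small sketch of that naming rule (illustrative; whether the real implementation derives repeated resumes from the original run name exactly this way is an assumption):

```python
import re
from pathlib import Path

def next_regen_dir(run_name: str, debug_root: Path) -> Path:
    """Return run_..._regen_N for the first unused N; the original run is never touched."""
    base = re.sub(r"_regen_\d+$", "", run_name)  # resuming a regen still derives from the original
    n = 1
    while (debug_root / f"{base}_regen_{n}").exists():
        n += 1
    return debug_root / f"{base}_regen_{n}"

# First resume of run_20251213_134017 -> outputs/debug/run_20251213_134017_regen_1
print(next_regen_dir("run_20251213_134017", Path("outputs/debug")))
```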

State Loading

The system automatically loads:

  • ✅ Consensus data: Loaded from a JSON file (if reusing the consensus stage)
  • ✅ Writer output: Loaded from the writer output file (if reusing the writer stage)
  • ⚠️ Expert responses: Currently only marked as reused; recovery is incomplete (needs improvement)

Notes

  1. Ensure debug_output.enabled = true: Resume mode depends on debug output
  2. API Keys: Still need to provide API keys (even when reusing stages)
  3. Configuration consistency: Resume uses current config, which may differ from original run
  4. Partial state recovery: Currently only partial state recovery is supported, some stages may need to be re-run

Post-Processing Link Generation

Post-processing automatically converts LeetCode problem references to standardized links:

Format:

[LeetCode 11 - Container With Most Water](leetcode_url) · [Solution](github_url)

Features:

  • Includes problem number and title for clarity
  • Handles multiple AI-generated formats (plain text, links, with/without Solution)
  • Auto-generates LeetCode URLs from the API cache
  • Adds GitHub solution links when available

Processing Steps (3 Steps)

  1. Text Replacement: LC 11 → LeetCode 11
  2. Remove Artifacts: Stray plain-text · Solution fragments are removed
  3. Convert to Complete Links: All references → [LeetCode N - Title](url) · [Solution](github_url)
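
A rough Python sketch of the final conversion step (the helper names and data shapes are illustrative assumptions; the real logic lives in post_processing.py and handles more input formats):

```python
import re

# Problem metadata as the post-processor might see it after merging its data sources.
# The GitHub URL below is a placeholder, not a real solution link.
PROBLEMS = {
    11: {
        "title": "Container With Most Water",
        "leetcode_url": "https://leetcode.com/problems/container-with-most-water/",
        "github_url": "https://github.com/example/solutions/blob/main/0011.py",
    },
}

def normalize_reference(match: re.Match) -> str:
    """Expand 'LeetCode 11' into a full link, appending a Solution link when one exists."""
    number = int(match.group(1))
    meta = PROBLEMS.get(number)
    if meta is None:
        return match.group(0)  # unknown references are left untouched
    link = f"[LeetCode {number} - {meta['title']}]({meta['leetcode_url']})"
    if meta.get("github_url"):
        link += f" · [Solution]({meta['github_url']})"
    return link

text = "Two-pointer classic: LeetCode 11"
print(re.sub(r"LeetCode (\d+)", normalize_reference, text))
```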

Data Sources

  1. Local TOML files (meta/problems/) - Primary source (with solution files)
  2. LeetCode API cache (tools/leetcode-api/crawler/.cache/leetcode_problems.json) - Auto-supplement

Priority: Local TOML with solution > API cache
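
That priority rule can be expressed roughly like this (a sketch; the field names are assumptions):

```python
def merge_problem_sources(local_toml: dict, api_cache: dict) -> dict:
    """Start from the API cache, then let local TOML entries (which carry solution
    files) override it, implementing 'Local TOML with solution > API cache'."""
    merged = dict(api_cache)   # auto-supplement: everything known to the API cache
    merged.update(local_toml)  # primary source wins on conflicts
    return merged

# Example: the API cache only knows the URL; the local TOML also knows the solution file.
api_cache = {11: {"leetcode_url": "https://leetcode.com/problems/container-with-most-water/"}}
local_toml = {11: {"leetcode_url": "https://leetcode.com/problems/container-with-most-water/",
                   "solution_file": "solutions/0011.py"}}  # hypothetical path
print(merge_problem_sources(local_toml, api_cache)[11]["solution_file"])
```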

Workflow Integration

  1. Writer Phase: Generates raw markdown (no post-processing)
      • Saved to debug output: llm-output-writer-write.md (kebab-case)
      • Used as input for translation
  2. Translation Phase: Translates the raw markdown (no post-processing)
      • Debug output: translation-before-*.md and translation-after-*.md (kebab-case)
      • Outputs translated raw markdown
  3. Post-Processing Phase: Processes both English and translated outputs
      • Input: Raw markdown from the writer (English) + translations (e.g., Chinese)
      • Output: Post-processed markdown with normalized links for all languages
      • Stats output: ✓ Converted X LeetCode references, Y with Solution links

Comparison Files

After each post-processing run, a comparison file is automatically generated:

Location: outputs/final/post-processing-comparison-{timestamp}.md (kebab-case)

Contents:

  • Before/After comparison for each language (English, Chinese, etc.)
  • Before: Raw content from Writer/Translation (no post-processing)
  • After: Post-processed content with normalized links

Usage:

  • Verify link generation correctness for all languages
  • Check format compliance
  • Identify improvements needed

LeetCode API Integration

The system automatically syncs with LeetCode API:

# Sync LeetCode problem data (7-day cache)
python tools/leetcode-api/crawler/sync_leetcode_data.py

# Check cache status
python tools/leetcode-api/crawler/sync_leetcode_data.py --check

Integration:

  • PostProcessor automatically loads and merges API cache data
  • Missing URLs are auto-generated from API data
  • No configuration required

See Post-Processing Links Documentation for details.


Module Responsibilities

| Module | Responsibility |
|---|---|
| expert.py | Domain-specific expert agents |
| consensus.py | Programmatic majority voting |
| writer.py | Refinement-mode writer |
| graph.py | LangGraph workflow orchestration |
| post_processing.py | Link normalization and generation |
| leetcode_api.py | LeetCode API data loading |
| config_loader.py | Configuration management |

License

MIT License - See LICENSE for details.