Day 12/2021 - Passage Pathing

Day twelve of AoC 2021 (available at https://adventofcode.com/2021/day/12) drops us into a cave system. The passages branch underground like roots, and we need to trace every possible path from start to end.

The input is a list of connections: start-A, A-b, b-end, etc.

Cave Rules

Two types of caves:

  • Big caves (uppercase): can be visited any number of times
  • Small caves (lowercase): can be visited at most once per path

The special caves start and end are visited exactly once per path: you can never return to start, and reaching end finishes the path.
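
These rules reduce to a classification on the cave's name alone. As a quick sketch (the helper name here is mine, not part of the solution):

```typescript
// Hypothetical helper: classify a cave name according to the rules above.
type CaveKind = 'start' | 'end' | 'big' | 'small';

const caveKind = (name: string): CaveKind => {
    if (name === 'start' || name === 'end') return name;
    // A name that equals its uppercased form is a big cave.
    return name === name.toUpperCase() ? 'big' : 'small';
};
```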

Building the Graph

First, parse the connections into a graph structure:

interface Input {
    start: string;
    end: string;
}

type Caves = {[key: string]: Node};
type NodeType = 'start' | 'end' | 'big' | 'small';

interface Node {
    type: NodeType;
    name: string;
    connections: string[];
}

const getCaves = (input: Input[]) => {
    const caves: Caves = {};
    const nodeNames = [...new Set(input.flatMap(item => [item.start, item.end]))];

    for (const nodeName of nodeNames) {
        // 'start' and 'end' are lowercase, so they fall into the 'small'
        // bucket, which is exactly what we want: small caves get removed
        // after their first visit.
        let type: NodeType = 'small';
        if (nodeName === nodeName.toUpperCase()) {
            type = 'big';
        }
        caves[nodeName] = {
            type,
            name: nodeName,
            connections: [],
        };
    }

    for (const link of input) {
        caves[link.start].connections.push(link.end);
        caves[link.end].connections.push(link.start);
    }
    return caves;
};

Note: I use a plain object {[key: string]: Node} instead of a Map here — it makes the copy operation simpler.

Connections are bidirectional: A-b means A connects to b AND b connects to A.
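
For completeness, turning the raw puzzle text into Input[] might look like this (parseInput is my name for it; the Input interface from above is repeated so the snippet stands alone):

```typescript
interface Input {
    start: string;
    end: string;
}

// Hypothetical parser: each line "a-b" becomes one edge record.
const parseInput = (raw: string): Input[] =>
    raw.trim()
        .split('\n')
        .map(line => {
            const [start, end] = line.split('-');
            return { start, end };
        });
```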

Part One: Finding All Paths

Recursive depth-first search. When leaving a small cave, remove it from the graph so we don't revisit:

const copyCaves = (input: Caves): Caves => {
    const result: Caves = {};
    for (const node in input) {
        result[node] = {
            type: input[node].type,
            name: input[node].name,
            connections: [...input[node].connections],
        };
    }
    return result;
};

const getPathsOne = (caves: Caves, start: string, end: string): string[][] => {
    const startNode = caves[start];
    const nexts = startNode.connections;
    const result: string[][] = [];
    const nextCaves = copyCaves(caves);
    
    if (startNode.type === 'small') {
        delete nextCaves[start];
        for (const key in nextCaves) {
            nextCaves[key].connections = nextCaves[key].connections.filter(item => item !== start);
        }
    }
    
    for (const next of nexts) {
        if (next === end) {
            result.push([start, end]);
            continue;
        }
        const nodePaths = getPathsOne(nextCaves, next, end);
        for (const nodePath of nodePaths) {
            result.push([start, ...nodePath]);
        }
    }
    return result;
};

The copyCaves helper clones two levels: each node object and its connections array. That is deep enough for this structure, so mutating one copy never leaks into the original.
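
The distinction matters: a plain object spread alone would share the connections array between copies. A tiny illustration:

```typescript
const original = { connections: ['A', 'b'] };

const shallow = { ...original };                                         // shares the same array
const copied = { ...original, connections: [...original.connections] }; // gets its own array

shallow.connections.push('c'); // also visible through original.connections
// copied.connections still has its original two entries
```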

Part Two: One Bonus Visit

Now we can visit one small cave twice. This adds a "bonus" flag to the recursion:

const processTwo = (caves: Caves, start: Node, end: string, usedBonus = false) => {
    const result: string[][] = [];
    for (const next of start.connections) {
        if (next === end) {
            result.push([start.name, end]);
            continue;
        }
        if (next === "start") continue;
        
        const nodePaths = getPathsTwo(caves, next, end, usedBonus);
        for (const nodePath of nodePaths) {
            result.push([start.name, ...nodePath]);
        }
    }
    return result;
};

const getPathsTwo = (caves: Caves, start: string, end: string, usedBonus = false): string[][] => {
    const startNode = caves[start];
    const nextCaves = copyCaves(caves);
    
    if (startNode.type === 'small') {
        if (usedBonus) {
            delete nextCaves[start];
            for (const key in nextCaves) {
                nextCaves[key].connections = nextCaves[key].connections.filter(item => item !== start);
            }
            return processTwo(nextCaves, startNode, end, true);
        } else {
            // Option 1: Use bonus now (don't remove cave)
            const pathsOne = processTwo(nextCaves, startNode, end, true);

            // Option 2: No bonus (remove cave as usual)
            delete nextCaves[start];
            for (const key in nextCaves) {
                nextCaves[key].connections = nextCaves[key].connections.filter(item => item !== start);
            }
            const pathsTwo = processTwo(nextCaves, startNode, end, false);

            return [...pathsOne, ...pathsTwo];
        }
    } else {
        return processTwo(nextCaves, startNode, end, usedBonus);
    }
};

The branching creates duplicate paths, so the final answer joins each path into a string and deduplicates them through a Set.
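
That dedup step might look like the following sketch (uniquePaths is my name for it). Joining on ',' is a safe key because cave names never contain commas:

```typescript
// Hypothetical dedup: use each joined path as a Set key, keep one per key.
const uniquePaths = (paths: string[][]): string[][] => {
    const seen = new Set(paths.map(path => path.join(',')));
    return [...seen].map(key => key.split(','));
};
```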

Performance: Not Great

Part     Time         Paths found
Part 1   ~300ms       3,292
Part 2   ~13,000ms    89,000+

Part two takes 13 seconds — 96% of the total runtime across all puzzles so far. That's... not good.

The problems:

  1. Excessive copying: Deep-copying the entire cave graph on every recursive call
  2. Duplicate work: The branching logic explores the same paths multiple times
  3. No memoization: We don't cache results for equivalent states

I could optimize by:

  • Tracking visited nodes in a Set instead of modifying the graph
  • Using an iterative approach with an explicit stack
  • Memoizing based on (current position, visited set, bonus used)
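
As a sketch of the first option: track visited small caves in a Set and count paths instead of materializing them, assuming a bare adjacency map is enough (the names here are mine). Against the small worked example from the puzzle statement this yields 10 paths for part one and 36 for part two:

```typescript
type Adjacency = { [name: string]: string[] };

// The small example from the puzzle statement.
const exampleEdges = ['start-A', 'start-b', 'A-c', 'A-b', 'b-d', 'A-end', 'b-end'];
const adjacency: Adjacency = {};
for (const edge of exampleEdges) {
    const [a, b] = edge.split('-');
    if (!adjacency[a]) adjacency[a] = [];
    if (!adjacency[b]) adjacency[b] = [];
    adjacency[a].push(b);
    adjacency[b].push(a);
}

// Count paths without copying the graph: visited small caves go in a Set.
const countPaths = (
    adj: Adjacency,
    current: string,
    visited: Set<string>,
    bonusUsed: boolean,
): number => {
    if (current === 'end') return 1;
    let total = 0;
    for (const next of adj[current]) {
        if (next === 'start') continue; // never re-enter start
        const isSmall = next === next.toLowerCase();
        if (!isSmall || !visited.has(next)) {
            // Big caves are never marked visited; small ones are.
            const nextVisited = isSmall ? new Set(visited).add(next) : visited;
            total += countPaths(adj, next, nextVisited, bonusUsed);
        } else if (!bonusUsed) {
            // Spend the single bonus visit on this already-seen small cave.
            total += countPaths(adj, next, visited, true);
        }
    }
    return total;
};

// Part one behaves like "bonus already used"; part two starts with it available.
```

With no graph copying, the hot loop only allocates one small Set per small-cave step, which should cut the part-two runtime dramatically.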

But the solution is correct, and 13 seconds is tolerable for a one-shot puzzle. The first rule of optimization: measure first. I know part two is slow. Whether it's worth optimizing depends on context.

Takeaways

  1. Graphs from edge lists: Parse connections, extract unique nodes, build adjacency lists. Standard pattern.

  2. Recursion for path enumeration: DFS naturally explores all paths. Return lists of paths, prepend current node.

  3. State modification vs. state tracking: I modified the graph (removing visited caves). Tracking visited nodes in a separate Set would be cleaner and faster.

  4. Deep copy is expensive: JSON.parse(JSON.stringify()) works but is slow. For hot paths, use targeted copying or immutable updates.

  5. Correct first, fast later: A 13-second solution that gives the right answer beats a fast solution that doesn't. Optimize when it matters.

  6. Deduplicate at the end: When branching logic creates duplicates, sometimes it's easier to just dedupe the results than to prevent duplicates in the first place.

The full solution is available in the repository.