# Content Mapping: Fiction

A template for producing structured annotation data from fiction and historically-situated narrative writing. Output is a `.md` content map file consumed by a build agent. Do not write HTML here.

---

## Step 0 — Classify the Source

Before mapping, determine:

- **Historically situated?** Story uses real events, places, dates, institutions → accuracy-checking is required alongside literary analysis.
- **Pure fiction?** No historical anchors → analytical lenses only.
- **Challenge/constraint fiction?** Written to a brief → note the constraints; they affect how you read intentionality.

---

## Step 1 — Read the Source Completely

Do not begin mapping until you have read the full text. Identify:

1. **The surface story** — what literally happens
2. **The underneath story** — what the text is actually about (often different)
3. **The narrator's position** — first person? Reliable? What do they not notice about themselves?
4. **Historical claims** — any named events, dates, places, institutions, objects, honours, or technologies that can be fact-checked
5. **Language patterns** — naming conventions, who gets interiority, who gets appearance, whose grief is legitimate

---

## Step 2 — Choose Analytical Lenses

Fiction supports multiple valid readings. Choose 2–4 lenses that the text genuinely rewards. Do not force a lens onto text that doesn't earn it.

**Available lenses (not exhaustive):**

| Lens | Apply when... |
|---|---|
| Unreliable narrator | First-person narration; gap between what narrator claims and what text shows |
| Male gaze / naming | Women defined by appearance or social function; asymmetric interiority |
| Class position | Character claims outsider status but participates in the system they critique |
| Historical accuracy | Text makes factual claims about real events, objects, or honours |
| Epistolary / form | Story told through letters, documents — what the form conceals matters |
| The unseen character | A character who drives the plot but never speaks or is never named |
| Constraint analysis | Challenge fiction — how well are required elements absorbed vs engineered |

For each chosen lens, note: *what specific passages earn this reading?* If you can't answer, drop the lens.

---

## Step 3 — Map Annotation Components

### 3a. Decoders

Inline interactive elements. Applied to a specific phrase in the prose. Appear on click as a floating panel.

**Apply a decoder when:**

- A phrase needs factual unpacking (historical event, named place, military honour)
- A phrase is a pivot point in the narrative that the reader might miss
- A character classification is introduced (naming systems, taxonomy)
- A contradiction opens between what the narrator says and what they show
- A spelling or factual error exists in the source text

**Decoder metadata — use TOON for the structured fields:**

```toon
decoders[N]{id,phrase,color,tag,label}:
  dec-[slug],"exact phrase from text",[default|pink|cyan|amber|red],Tag Text,Panel Heading
```

**Decoder bodies — use YAML for body text (contains commas, complex content):**

```yaml
decoder_bodies:
  dec-[slug]: >
    2–4 sentences. State the fact, the contradiction, or the lens observation.
    Be direct. No hedging within the panel itself — hedging belongs in bias notes.
  dec-[slug]-link: https://...
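  # Hypothetical worked example — the slug, phrase, and body text below are
  # invented, shown only to illustrate the shape of a filled-in entry:
  dec-telegram-date: >
    The narrator dates the telegram to June, but the engagement it reports
    ended in May. The error belongs to the source text, so it is flagged
    rather than silently corrected.
  dec-telegram-date-link: https://example.org/placeholder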
```

**Color convention** (establish per-project, document here):

- Assign one color per analytical lens or content type
- Be consistent: if pink = unreliable narrator catches, every narrator catch is pink
- Red = factual error or significant historical inaccuracy

---

### 3b. Lightboxes

Full-screen overlay panels. Triggered from inline text. Use when a topic is too large for a decoder (needs multiple sections, a timeline, or extended context).

**Apply a lightbox when:**

- A decoder body would exceed 5 sentences
- The topic has meaningful sub-sections (what it is / what the text gets right / what came after)
- The text assumes reader knowledge of something substantial (a war, a legal concept, a cultural tradition)

```yaml
lightboxes:
  lb-[slug]:
    eyebrow: Category label (e.g. "Historical Context")
    title: Lightbox heading
    color: cyan | amber | default
    sections:
      - heading: Section heading
        body: >
          Paragraph text.
      - heading: Section heading
        body: >
          Paragraph text.
    source_url: https://...
    source_label: Link label text
```

---

### 3c. Accordions

Expandable sections within educational tabs. One open at a time.

```yaml
accordions:
  tab-[tab-id]:
    - heading: Question or section title
      body: >
        Extended prose. Can be multiple paragraphs. Separate with blank lines.
        **Bold** for emphasis. No decoders inside accordions.
    - heading: ...
      body: >
        ...
```

---

### 3d. Tab Architecture

Four tabs is the default. Rename to fit the content.

```toon
tabs[4]{id,label,color,purpose}:
  story,"The Story",white,Full source text with inline decoders and lightbox triggers
  [lens-1-id],[Lens 1 Name],[color],Educational deep-dive on first analytical lens
  [lens-2-id],[Lens 2 Name],[color],Educational deep-dive on second analytical lens
  further,"Further Reading",white,Curated external links
```

Add a fifth tab only if the text genuinely requires it. Three lenses in three tabs is acceptable if all three are earned.

---

### 3e. Further Reading

```toon
further_reading[N]{group,title,url,desc,color}:
  "Lens 1","Link Title","https://...","One-line description",default
  "Lens 1","Link Title","https://...","One-line description",default
  "Lens 2","Link Title","https://...","One-line description",cyan
```

Groups correspond to tabs. Use the same color as the tab they belong to.

---

## Step 4 — Bias Notes

Every analytical tab requires one bias note. Placed at the top of the tab, before the accordions.

Rules:

- One sentence acknowledging the limitation. Then stop.
- State the specific bias, not a generic disclaimer.
- If you have a preference for the narrator or a character, say so — it shapes the analysis.
- Do not write "I am an AI" — write what the actual bias is.

```yaml
bias_notes:
  tab-[lens-1-id]: >
    One sentence. Specific bias. What it shapes in the analysis below.
  tab-[lens-2-id]: >
    One sentence. Specific bias.
```

---

## Step 5 — Historical Accuracy (if applicable)

If the source text makes historical claims, produce an accuracy table before the decoder map.

```toon
historical_claims[N]{claim,verdict,detail}:
  "Claim as stated in text",accurate,"Supporting detail or correction"
  "Claim as stated in text",inaccurate,"What is actually true; what error was made"
  "Claim as stated in text",plausible,"Consistent with the period; unverifiable at this level of detail"
  "Claim as stated in text",anachronism,"Object/concept/institution did not exist at the stated time"
```

Inaccurate claims must become decoder annotations in the source text. Use `color: red`.

---

## Step 6 — The Story Section

Format the source text for the build agent. The text is sacred — reproduce it exactly, including original spelling errors. Mark decoder and lightbox trigger points.
Use this notation inline in the prose block:

```
[DECODER:dec-slug] exact phrase in text [/DECODER]
[LIGHTBOX:lb-slug] phrase that triggers lightbox [/LIGHTBOX]
[ORDER] "Dialogue that functions as a section break or order" [/ORDER]
[LETTER-START:muriel|marcus|neutral] [/LETTER-START]
[LETTER-END] [/LETTER-END]
```

---

## Output Format

The completed content map is a single `.md` file structured as:

```
# Content Map: [Story Title]

## Source Classification
## Chosen Lenses
## Historical Accuracy Table (if applicable)
## Tab Definitions (TOON)
## Decoders (TOON metadata + YAML bodies)
## Lightboxes (YAML)
## Accordions (YAML)
## Bias Notes (YAML)
## Further Reading (TOON)
## Source Text (annotated)
```

This file is the complete specification for the build agent. The build agent needs nothing else except `annotated-writing-build.md`.
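As a minimal sketch, the inline notation applied to an invented passage might look like the following. All slugs and prose here are hypothetical placeholders, not drawn from any real source:

```
[LETTER-START:muriel] [/LETTER-START]
They have pinned a [DECODER:dec-ribbon] crimson ribbon [/DECODER] to his
coat, though the [LIGHTBOX:lb-spring-push] spring offensive [/LIGHTBOX]
ended weeks before his letter claims.
[ORDER] "Read it again. Aloud this time." [/ORDER]
[LETTER-END] [/LETTER-END]
```

Every `dec-` and `lb-` slug used in the annotated text must match an entry defined in the Decoders and Lightboxes sections of the same content map.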