# Content Mapping: Non-Fiction

A template for producing structured annotation data from essays, journalism, historical documents, speeches, and analytical writing. Output is a `.md` content map file consumed by a build agent. Do not write HTML here.

---

## Step 0 — Classify the Source

Determine the non-fiction subtype. It affects which decoders and tabs are appropriate.

| Subtype | Primary annotation task |
|---|---|
| Historical document | Contextualise; decode named people, events, institutions; note anachronisms |
| Political speech / propaganda | Decode rhetoric; note what is claimed vs what is true; note the audience |
| Scientific or technical writing | Decode concepts; assess claims against current consensus |
| Journalism / reportage | Verify facts; note sourcing; note what is and isn't attributed |
| Personal essay / memoir | Note the unreliability of memory; note what the author can't know about themselves |
| Legal or policy document | Decode jargon; note what the language does vs what it says |

A single source may span subtypes. Identify the dominant one and note the secondary.
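
In the content map itself, the classification can be recorded as a single line. A hypothetical example (invented source) might read:

```
## Source Classification
Political speech / propaganda (primary); historical document (secondary)
```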

---

## Step 1 — Read the Source Completely

Before mapping, identify:

1. **The argument** — what position is the text advancing?
2. **The evidence** — what does it cite, quote, or invoke in support?
3. **The gaps** — what would a critical reader notice is absent, unattributed, or assumed?
4. **The moment** — when was this written, and what was happening? What has happened since?
5. **Named entities** — every person, place, institution, date, law, statistic, or event that can be verified

---

## Step 2 — Choose Analytical Lenses

Non-fiction lenses differ from fiction lenses. Choose 2–3 that the text genuinely rewards.

| Lens | Apply when... |
|---|---|
| Rhetorical analysis | The text is designed to persuade; identify the moves (ethos, pathos, logos, omission) |
| Historical accuracy | The text makes factual claims; some may be wrong, outdated, or contested |
| Source and attribution | Who is cited, who isn't, what is stated as fact without a source |
| The argument vs the evidence | Does the evidence actually support the conclusion drawn? |
| Who is speaking / who is silent | Whose perspective structures the account? Who is absent? |
| Language and framing | Word choices that carry implicit assumptions (e.g. "developed" vs "developing") |
| What changed after | The text was written at a specific moment; what do we know now that the author didn't? |

---

## Step 3 — Map Annotation Components

### 3a. Decoders

Apply a decoder when:

- A technical term, acronym, or proper noun requires unpacking
- A named person, institution, or event is invoked without explanation
- A claim is contested or has been updated since publication
- The text uses language that carries implicit assumptions (framing, euphemism, loaded terms)
- A rhetorical move is worth naming (strawman, appeal to authority, etc.)

```toon
decoders[N]{id,phrase,color,tag,label}:
  dec-[slug],"exact phrase from text",[default|pink|cyan|amber|red],Tag Text,Panel Heading
```

```yaml
decoder_bodies:
  dec-[slug]: >
    What it is, what it means in context, why it matters here.
    For contested claims: state what the text says, what the evidence shows.
    For rhetorical moves: name the move, describe its effect.
  dec-[slug]-link: https://...
```
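
A filled-in pair of entries, for illustration only (the phrase is quoted from Churchill's 1946 Fulton speech; the slug, colors, and body wording are invented):

```toon
decoders[1]{id,phrase,color,tag,label}:
  dec-iron-curtain,"an iron curtain has descended",pink,Rhetoric,The Iron Curtain Metaphor
```

```yaml
decoder_bodies:
  dec-iron-curtain: >
    Names the metaphor and describes its effect: a single physical image
    that recasts a political division as an impassable barrier.
  dec-iron-curtain-link: https://...
```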

**Color convention** (establish per-project):

- Assign colors by lens or content type, and document the scheme here
- Red = factual error, outdated claim, or significantly misleading statement


---

### 3b. Lightboxes

Apply a lightbox when:

- An event, institution, or concept requires substantial background
- A claim is embedded in a debate too large for a tooltip
- The "what came after" context fundamentally changes the reading

```yaml
lightboxes:
  lb-[slug]:
    eyebrow: Category label
    title: Lightbox heading
    color: cyan | amber | default
    sections:
      - heading: What it was
        body: >
          Background.
      - heading: What the text claims
        body: >
          Specific claim and its accuracy.
      - heading: What came after
        body: >
          Subsequent developments that reframe the text.
    source_url: https://...
    source_label: Link label
```
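
A hypothetical filled-in entry for a 1940s political speech (slug, eyebrow, and body wording are invented; the Marshall Plan itself was a US-funded European recovery programme announced in 1947):

```yaml
lightboxes:
  lb-marshall-plan:
    eyebrow: Aftermath
    title: The Marshall Plan
    color: amber
    sections:
      - heading: What it was
        body: >
          A US-funded programme of European economic reconstruction,
          announced in 1947.
      - heading: What came after
        body: >
          Its scale reframed the speech's warnings as the opening move
          in a longer policy argument.
    source_url: https://...
    source_label: Background reading
```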

---

### 3c. Accordions

For non-fiction, accordions structure the analytical tabs. Common patterns:

**For the "Historical Moment" tab:**

- What was happening when this was written
- Who the intended audience was
- How the text was received
- What has changed since

**For "The Argument" tab:**

- What the text claims
- What evidence it uses
- Where the evidence is strong
- Where the argument has gaps or has been contested

```yaml
accordions:
  tab-[tab-id]:
    - heading: Section heading
      body: >
        Prose. Direct. No hedging within accordion bodies.
```
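
A filled-in sketch showing several items under one tab (the tab id and wording are invented):

```yaml
accordions:
  tab-argument:
    - heading: What the text claims
      body: >
        The central claim, stated plainly.
    - heading: Where the evidence is strong
      body: >
        The strongest supporting material, named specifically.
```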

---

### 3d. Tab Architecture

```toon
tabs[4]{id,label,color,purpose}:
  text,"The Text",white,Full source with inline decoders and lightbox triggers
  [analysis-id],[Analysis Tab Name],[color],Core argument and evidence
  [context-id],[Context Tab Name],[color],Historical moment; what came after
  further,"Further Reading",white,Curated external links
```

For non-fiction, the "context" tab is almost always warranted as a standalone tab. The text's moment is not decorative — it is part of what you are teaching.
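
For a political speech, the two placeholder rows might be filled in like this (the ids, labels, and colors are invented):

```toon
tabs[4]{id,label,color,purpose}:
  text,"The Text",white,Full source with inline decoders and lightbox triggers
  rhetoric,"The Rhetoric",pink,Core argument and evidence
  moment,"The Moment",amber,Historical moment; what came after
  further,"Further Reading",white,Curated external links
```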

---

### 3e. Further Reading

```toon
further_reading[N]{group,title,url,desc,color}:
  "Group Name","Title","https://...","Description",default
```

Groups should correspond to tabs. For non-fiction, a "Primary Sources" group is often useful — direct the reader to the original documents, not just secondary commentary.


---

## Step 4 — Bias Notes

Every analytical tab requires one bias note.

Non-fiction bias notes are especially important because the text being analysed often has its own point of view, and the analyst's agreement or disagreement with it shapes everything.

State: what tradition your analysis comes from, whether you find the text's argument credible, and what that means for the reading below.

```yaml
bias_notes:
  tab-[analysis-id]: >
    One sentence. What tradition this analysis comes from, or what credence
    I give the argument, and how that shapes what follows.
  tab-[context-id]: >
    One sentence. What sources my historical framing relies on, and whose
    perspective they represent.
```
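
An example of the expected register (the tab id and analysis are invented):

```yaml
bias_notes:
  tab-rhetoric: >
    This reading draws on classical rhetorical analysis and treats the
    speech's factual claims as secondary to its persuasive design.
```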

---

## Step 5 — Fact-Checking Protocol

Non-fiction requires an accuracy table. All named facts — statistics, dates, attributions, historical claims — should be checked.

```toon
fact_checks[N]{claim,verdict,detail}:
  "Exact claim as stated",accurate,"Source"
  "Exact claim as stated",inaccurate,"What is actually true"
  "Exact claim as stated",contested,"Summarise the debate"
  "Exact claim as stated",outdated,"What has changed since publication"
  "Exact claim as stated",unverifiable,"Cannot be confirmed or refuted with available sources"
```

Inaccurate and contested claims become red decoder annotations in the text.

Outdated claims become amber decoder annotations — the original claim was accurate at the time of writing but has since changed.


---

## Step 6 — Source Text Section

Reproduce the text exactly. Mark annotation trigger points:

```
[DECODER:dec-slug] exact phrase [/DECODER]
[LIGHTBOX:lb-slug] trigger phrase [/LIGHTBOX]
[PULL-QUOTE] sentence worth isolating [/PULL-QUOTE]
[EDITORIAL] (Ed.) note or aside [/EDITORIAL]
[SECTION-BREAK] --- [/SECTION-BREAK]
```
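
Applied to a sentence, the markers nest inline with the prose. A sketch using Churchill's 1946 Fulton speech as a sample source (the slug is invented):

```
From [DECODER:dec-stettin] Stettin in the Baltic [/DECODER] to Trieste in the Adriatic, [PULL-QUOTE] an iron curtain has descended across the Continent [/PULL-QUOTE].
```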

---

## Output Format

```
# Content Map: [Text Title]

## Source Classification
## Chosen Lenses
## Fact-Check Table (TOON)
## Tab Definitions (TOON)
## Decoders (TOON metadata + YAML bodies)
## Lightboxes (YAML)
## Accordions (YAML)
## Bias Notes (YAML)
## Further Reading (TOON)
## Source Text (annotated)
```