From 5e944e8a1798c17ebb95781a7467941b59ac80de Mon Sep 17 00:00:00 2001 From: sudacode Date: Sat, 14 Mar 2026 22:17:26 -0700 Subject: [PATCH] chore: remove implementation plan documents --- .../2026-03-12-stats-subcommand-design.md | 81 -- docs/plans/2026-03-12-stats-subcommand.md | 173 --- .../2026-03-12-stats-v2-implementation.md | 1260 ----------------- docs/plans/2026-03-12-stats-v2-redesign.md | 152 -- .../2026-03-13-docs-kb-restructure-design.md | 88 -- docs/plans/2026-03-13-docs-kb-restructure.md | 138 -- .../2026-03-13-imm-words-cleanup-plan.md | 69 - ...6-03-13-immersion-anime-metadata-design.md | 110 -- .../2026-03-13-immersion-anime-metadata.md | 370 ----- ...6-03-14-episode-detail-anki-link-design.md | 56 - .../2026-03-14-episode-detail-anki-link.md | 402 ------ ...14-immersion-occurrence-tracking-design.md | 115 -- ...026-03-14-immersion-occurrence-tracking.md | 71 - .../plans/2026-03-14-stats-redesign-design.md | 137 -- ...026-03-14-stats-redesign-implementation.md | 1092 -------------- 15 files changed, 4314 deletions(-) delete mode 100644 docs/plans/2026-03-12-stats-subcommand-design.md delete mode 100644 docs/plans/2026-03-12-stats-subcommand.md delete mode 100644 docs/plans/2026-03-12-stats-v2-implementation.md delete mode 100644 docs/plans/2026-03-12-stats-v2-redesign.md delete mode 100644 docs/plans/2026-03-13-docs-kb-restructure-design.md delete mode 100644 docs/plans/2026-03-13-docs-kb-restructure.md delete mode 100644 docs/plans/2026-03-13-imm-words-cleanup-plan.md delete mode 100644 docs/plans/2026-03-13-immersion-anime-metadata-design.md delete mode 100644 docs/plans/2026-03-13-immersion-anime-metadata.md delete mode 100644 docs/plans/2026-03-14-episode-detail-anki-link-design.md delete mode 100644 docs/plans/2026-03-14-episode-detail-anki-link.md delete mode 100644 docs/plans/2026-03-14-immersion-occurrence-tracking-design.md delete mode 100644 docs/plans/2026-03-14-immersion-occurrence-tracking.md delete mode 100644 
docs/plans/2026-03-14-stats-redesign-design.md delete mode 100644 docs/plans/2026-03-14-stats-redesign-implementation.md diff --git a/docs/plans/2026-03-12-stats-subcommand-design.md b/docs/plans/2026-03-12-stats-subcommand-design.md deleted file mode 100644 index 55b918d..0000000 --- a/docs/plans/2026-03-12-stats-subcommand-design.md +++ /dev/null @@ -1,81 +0,0 @@ -# Stats Subcommand Design - -**Problem:** Add a launcher command and matching app command that run only the stats dashboard stack: start the local stats server, initialize the data source it needs, and open the browser to the stats page. - -**Constraints:** -- Public entrypoint is a launcher subcommand: `subminer stats` -- Reuse the existing app instance when one is already running -- Explicit `stats` launch overrides `stats.autoStartServer` -- If `immersionTracking.enabled` is `false`, fail with an error instead of opening an empty dashboard -- Scope limited to stats server + browser page; no overlay/mpv startup requirements - -## Recommended Approach - -Add a dedicated app CLI flag, `--stats`, and let the launcher subcommand forward into that path. Use the existing Electron single-instance flow so a second `subminer stats` invocation can be handled by the primary app instance. Add a small response-file handshake so the launcher can still return success or failure when work is delegated to an already-running primary instance. - -## Runtime Flow - -1. `subminer stats` runs in the launcher. -2. Launcher resolves the app binary and forwards: - - `--stats` - - `--log-level ` when provided - - internal `--stats-response-path ` -3. Electron startup parses `--stats` as an app-starting command. -4. 
If this process becomes the primary instance, it runs a stats-only startup path: - - load config - - fail if `immersionTracking.enabled === false` - - initialize immersion tracker - - start stats server, forcing startup regardless of `stats.autoStartServer` - - open `http://127.0.0.1:` - - write success/error to the response path -5. If the process is a secondary instance, Electron forwards argv to the primary instance through the existing single-instance event. The primary instance runs the same stats command handler and writes the response result to the temp file. The secondary process waits for that file and exits with the same status. - -## Code Shape - -### Launcher - -- Add `stats` top-level subcommand in `launcher/config/cli-parser-builder.ts` -- Normalize that invocation in launcher arg parsing -- Add `launcher/commands/stats-command.ts` -- Dispatch it from `launcher/main.ts` -- Reuse existing app passthrough spawn helpers -- Add launcher tests for routing and forwarded argv - -### Electron app - -- Extend `src/cli/args.ts` with: - - `stats: boolean` - - `statsResponsePath?: string` -- Update app start gating so `--stats` starts the app -- Add a focused stats CLI runtime service instead of burying stats launch logic inside `main.ts` -- Reuse existing immersion tracker startup and stats server helpers where possible -- Add a single function that: - - validates immersion tracking enabled - - ensures tracker exists - - ensures stats server exists - - opens browser - - reports completion/failure to optional response file - -## Error Handling - -- `immersionTracking.enabled === false`: hard failure, clear message -- tracker init failure: hard failure, clear message -- server start failure: hard failure, clear message -- browser open failure: hard failure, clear message -- response-path write failure: log warning; primary runtime behavior still follows command result - -## Testing - -- Launcher parser/routing tests for `subminer stats` -- Launcher forwarding test 
verifies `--stats` and `--stats-response-path` -- App CLI arg tests for `--stats` -- App runtime tests for: - - stats command starts tracker/server/browser - - stats command forces server start even when auto-start is off - - stats command fails when immersion tracking is disabled - - second-instance command path can surface failure via response file plumbing - -## Docs - -- Update CLI help text -- Update user docs where launcher/browser stats access is described diff --git a/docs/plans/2026-03-12-stats-subcommand.md b/docs/plans/2026-03-12-stats-subcommand.md deleted file mode 100644 index 7444b3c..0000000 --- a/docs/plans/2026-03-12-stats-subcommand.md +++ /dev/null @@ -1,173 +0,0 @@ -# Stats Subcommand Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Add `subminer stats` plus app-side `--stats` support so SubMiner can launch only the stats dashboard stack, reuse an existing app instance, and fail when immersion tracking is disabled. - -**Architecture:** The launcher gets a new `stats` subcommand that forwards into a dedicated Electron CLI flag. The app handles that flag through a focused stats command service that validates immersion tracking, ensures tracker/server startup, opens the browser, and optionally reports success/failure through a response file so second-instance reuse preserves shell exit status. - -**Tech Stack:** TypeScript, Bun test runner, Electron single-instance lifecycle, Commander-based launcher CLI. - ---- - -### Task 1: Record launcher command coverage - -**Files:** -- Modify: `launcher/main.test.ts` -- Modify: `launcher/commands/command-modules.test.ts` - -**Step 1: Write the failing test** - -Add coverage that `subminer stats` routes through a dedicated launcher command and forwards `--stats` into the app process. 
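The forwarding half of that coverage can be sketched as below. This is illustrative only — `buildStatsArgv` and `StatsCommandOptions` are hypothetical names, not actual SubMiner launcher APIs; only the flag names come from the design above.

```typescript
// Hypothetical helper: assemble the argv the launcher forwards to the app binary.
interface StatsCommandOptions {
  logLevel?: string;      // forwarded as --log-level when provided
  responsePath?: string;  // internal response-file handshake path
}

function buildStatsArgv(opts: StatsCommandOptions): string[] {
  const argv = ['--stats'];
  if (opts.logLevel) argv.push('--log-level', opts.logLevel);
  if (opts.responsePath) argv.push('--stats-response-path', opts.responsePath);
  return argv;
}
```

A routing test would then assert that `subminer stats --log-level debug` produces exactly this argv for the spawned app process.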
- -**Step 2: Run test to verify it fails** - -Run: `bun test launcher/main.test.ts launcher/commands/command-modules.test.ts` -Expected: FAIL because the `stats` command does not exist yet. - -**Step 3: Write minimal implementation** - -Add the launcher parser, command module, and dispatch wiring needed for the tests to pass. - -**Step 4: Run test to verify it passes** - -Run: `bun test launcher/main.test.ts launcher/commands/command-modules.test.ts` -Expected: PASS - -**Step 5: Commit** - -```bash -git add launcher/main.test.ts launcher/commands/command-modules.test.ts launcher/config/cli-parser-builder.ts launcher/main.ts launcher/commands/stats-command.ts -git commit -m "feat: add launcher stats command" -``` - -### Task 2: Add failing app CLI parsing/runtime tests - -**Files:** -- Modify: `src/cli/args.test.ts` -- Modify: `src/main/runtime/cli-command-runtime-handler.test.ts` -- Modify: `src/main/runtime/cli-command-context.test.ts` - -**Step 1: Write the failing test** - -Add tests for: -- parsing `--stats` -- starting the app for `--stats` -- stats command behavior when immersion tracking is disabled -- stats command behavior when startup succeeds - -**Step 2: Run test to verify it fails** - -Run: `bun test src/cli/args.test.ts src/main/runtime/cli-command-runtime-handler.test.ts src/main/runtime/cli-command-context.test.ts` -Expected: FAIL because the new flag/service is not implemented. - -**Step 3: Write minimal implementation** - -Extend CLI args and add the smallest stats command runtime surface required by the tests. 
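The new parsing surface might look like the following sketch. `parseStatsArgs` is a stand-in, not the real `src/cli/args.ts` parser — only the flag names and the two new fields (`stats`, `statsResponsePath`) come from the design.

```typescript
// Minimal sketch of parsing the two new flags from a raw argv array.
interface StatsCliArgs {
  stats: boolean;
  statsResponsePath?: string;
}

function parseStatsArgs(argv: string[]): StatsCliArgs {
  const parsed: StatsCliArgs = { stats: false };
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    if (arg === '--stats') {
      parsed.stats = true;
    } else if (arg === '--stats-response-path' && i + 1 < argv.length) {
      parsed.statsResponsePath = argv[++i]; // consume the path value
    }
  }
  return parsed;
}
```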
- -**Step 4: Run test to verify it passes** - -Run: `bun test src/cli/args.test.ts src/main/runtime/cli-command-runtime-handler.test.ts src/main/runtime/cli-command-context.test.ts` -Expected: PASS - -**Step 5: Commit** - -```bash -git add src/cli/args.test.ts src/main/runtime/cli-command-runtime-handler.test.ts src/main/runtime/cli-command-context.test.ts src/cli/args.ts src/main/cli-runtime.ts src/main/runtime/cli-command-context.ts src/main/runtime/cli-command-context-deps.ts -git commit -m "feat: add app stats cli command" -``` - -### Task 3: Add stats-only startup plumbing - -**Files:** -- Modify: `src/main.ts` -- Modify: `src/core/services/startup.ts` -- Modify: `src/core/services/cli-command.ts` -- Modify: `src/main/runtime/immersion-startup.ts` -- Test: existing runtime tests above plus any new focused stats tests - -**Step 1: Write the failing test** - -Add focused tests around the stats startup service and response-path reporting before touching production logic. - -**Step 2: Run test to verify it fails** - -Run: `bun test src/core/services/cli-command.test.ts` -Expected: FAIL because stats startup/reporting is missing. - -**Step 3: Write minimal implementation** - -Implement: -- stats-only startup gating -- tracker/server/browser startup orchestration -- response-file success/failure reporting for reused primary-instance handling - -**Step 4: Run test to verify it passes** - -Run: `bun test src/core/services/cli-command.test.ts` -Expected: PASS - -**Step 5: Commit** - -```bash -git add src/main.ts src/core/services/startup.ts src/core/services/cli-command.ts src/main/runtime/immersion-startup.ts src/core/services/cli-command.test.ts -git commit -m "feat: add stats-only startup flow" -``` - -### Task 4: Update docs/help - -**Files:** -- Modify: `src/cli/help.ts` -- Modify: `docs-site/immersion-tracking.md` -- Modify: `docs-site/mining-workflow.md` - -**Step 1: Write the failing test** - -Add/update any help-text assertions first. 
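One such assertion could take this shape. `renderHelp` is a hypothetical stand-in for the generated help output of `src/cli/help.ts`; the usage text shown is assumed, not copied from the codebase.

```typescript
// Stand-in for the CLI help renderer; a help.test.ts case would assert the
// stats subcommand appears in the real output.
function renderHelp(): string {
  return [
    'Usage: subminer <command>',
    '  stats    Start the stats dashboard server and open it in the browser',
  ].join('\n');
}

if (!renderHelp().includes('stats')) {
  throw new Error('help text is missing the stats subcommand');
}
```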
- -**Step 2: Run test to verify it fails** - -Run: `bun test src/cli/help.test.ts` -Expected: FAIL until help text includes stats usage. - -**Step 3: Write minimal implementation** - -Document `subminer stats` and clarify that explicit invocation forces the local dashboard server to start. - -**Step 4: Run test to verify it passes** - -Run: `bun test src/cli/help.test.ts` -Expected: PASS - -**Step 5: Commit** - -```bash -git add src/cli/help.ts src/cli/help.test.ts docs-site/immersion-tracking.md docs-site/mining-workflow.md -git commit -m "docs: document stats subcommand" -``` - -### Task 5: Verify integrated behavior - -**Files:** -- No new files required - -**Step 1: Run targeted unit/integration lanes** - -Run: -- `bun run typecheck` -- `bun run test:launcher` -- `bun test src/cli/args.test.ts src/core/services/cli-command.test.ts src/main/runtime/cli-command-runtime-handler.test.ts src/main/runtime/cli-command-context.test.ts src/cli/help.test.ts` - -Expected: PASS - -**Step 2: Run broader maintained lane if the targeted slice is clean** - -Run: `bun run test:fast` -Expected: PASS or surface unrelated pre-existing failures. - -**Step 3: Commit verification fixes if needed** - -```bash -git add -A -git commit -m "test: stabilize stats command coverage" -``` diff --git a/docs/plans/2026-03-12-stats-v2-implementation.md b/docs/plans/2026-03-12-stats-v2-implementation.md deleted file mode 100644 index c40d55e..0000000 --- a/docs/plans/2026-03-12-stats-v2-implementation.md +++ /dev/null @@ -1,1260 +0,0 @@ -# Stats Dashboard v2 Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Redesign the stats dashboard to focus on session/media history with an activity feed, cover art library, and per-anime drill-down — while fixing the watch time inflation bug and relative date formatting. 
- -**Architecture:** Activity feed as the default Overview tab, dedicated Library tab with Anilist cover art grid, per-anime detail view navigated from library cards. Bug fixes first, then backend (queries, API, rate limiter), then frontend (tabs, components, hooks). - -**Tech Stack:** React 19, Recharts, Tailwind CSS (Catppuccin Macchiato), Hono server, SQLite, Anilist GraphQL API, Electron IPC - ---- - -### Task 1: Fix Watch Time Inflation — Session Summaries Query - -**Files:** -- Modify: `src/core/services/immersion-tracker/query.ts:11-34` - -**Step 1: Fix `getSessionSummaries` to use MAX instead of SUM** - -The telemetry values are cumulative snapshots. Each row stores the running total. Using `SUM()` across all telemetry rows for a session inflates values massively. Since the query already groups by `s.session_id`, change every `SUM(t.*)` to `MAX(t.*)`: - -```typescript -export function getSessionSummaries(db: DatabaseSync, limit = 50): SessionSummaryQueryRow[] { - const prepared = db.prepare(` - SELECT - s.session_id AS sessionId, - s.video_id AS videoId, - v.canonical_title AS canonicalTitle, - s.started_at_ms AS startedAtMs, - s.ended_at_ms AS endedAtMs, - COALESCE(MAX(t.total_watched_ms), 0) AS totalWatchedMs, - COALESCE(MAX(t.active_watched_ms), 0) AS activeWatchedMs, - COALESCE(MAX(t.lines_seen), 0) AS linesSeen, - COALESCE(MAX(t.words_seen), 0) AS wordsSeen, - COALESCE(MAX(t.tokens_seen), 0) AS tokensSeen, - COALESCE(MAX(t.cards_mined), 0) AS cardsMined, - COALESCE(MAX(t.lookup_count), 0) AS lookupCount, - COALESCE(MAX(t.lookup_hits), 0) AS lookupHits - FROM imm_sessions s - LEFT JOIN imm_session_telemetry t ON t.session_id = s.session_id - LEFT JOIN imm_videos v ON v.video_id = s.video_id - GROUP BY s.session_id - ORDER BY s.started_at_ms DESC - LIMIT ? 
- `); - return prepared.all(limit) as unknown as SessionSummaryQueryRow[]; -} -``` - -**Step 2: Verify build compiles** - -Run: `cd /home/sudacode/projects/japanese/SubMiner && npx tsc --noEmit` - -**Step 3: Commit** - -```bash -git add src/core/services/immersion-tracker/query.ts -git commit -m "fix(stats): use MAX instead of SUM for cumulative telemetry in session summaries" -``` - ---- - -### Task 2: Fix Watch Time Inflation — Daily & Monthly Rollups - -**Files:** -- Modify: `src/core/services/immersion-tracker/maintenance.ts:99-208` - -**Step 1: Fix `upsertDailyRollupsForGroups` to use MAX-per-session subquery** - -The rollup query must first get `MAX()` per session, then `SUM()` across sessions for that day+video combo: - -```typescript -function upsertDailyRollupsForGroups( - db: DatabaseSync, - groups: Array<{ rollupDay: number; videoId: number }>, - rollupNowMs: number, -): void { - if (groups.length === 0) { - return; - } - - const upsertStmt = db.prepare(` - INSERT INTO imm_daily_rollups ( - rollup_day, video_id, total_sessions, total_active_min, total_lines_seen, - total_words_seen, total_tokens_seen, total_cards, cards_per_hour, - words_per_min, lookup_hit_rate, CREATED_DATE, LAST_UPDATE_DATE - ) - SELECT - CAST(s.started_at_ms / 86400000 AS INTEGER) AS rollup_day, - s.video_id AS video_id, - COUNT(DISTINCT s.session_id) AS total_sessions, - COALESCE(SUM(sm.max_active_ms), 0) / 60000.0 AS total_active_min, - COALESCE(SUM(sm.max_lines), 0) AS total_lines_seen, - COALESCE(SUM(sm.max_words), 0) AS total_words_seen, - COALESCE(SUM(sm.max_tokens), 0) AS total_tokens_seen, - COALESCE(SUM(sm.max_cards), 0) AS total_cards, - CASE - WHEN COALESCE(SUM(sm.max_active_ms), 0) > 0 - THEN (COALESCE(SUM(sm.max_cards), 0) * 60.0) / (COALESCE(SUM(sm.max_active_ms), 0) / 60000.0) - ELSE NULL - END AS cards_per_hour, - CASE - WHEN COALESCE(SUM(sm.max_active_ms), 0) > 0 - THEN COALESCE(SUM(sm.max_words), 0) / (COALESCE(SUM(sm.max_active_ms), 0) / 60000.0) - ELSE NULL - END 
AS words_per_min, - CASE - WHEN COALESCE(SUM(sm.max_lookups), 0) > 0 - THEN CAST(COALESCE(SUM(sm.max_hits), 0) AS REAL) / CAST(SUM(sm.max_lookups) AS REAL) - ELSE NULL - END AS lookup_hit_rate, - ? AS CREATED_DATE, - ? AS LAST_UPDATE_DATE - FROM ( - SELECT - t.session_id, - MAX(t.active_watched_ms) AS max_active_ms, - MAX(t.lines_seen) AS max_lines, - MAX(t.words_seen) AS max_words, - MAX(t.tokens_seen) AS max_tokens, - MAX(t.cards_mined) AS max_cards, - MAX(t.lookup_count) AS max_lookups, - MAX(t.lookup_hits) AS max_hits - FROM imm_session_telemetry t - GROUP BY t.session_id - ) sm - JOIN imm_sessions s ON s.session_id = sm.session_id - WHERE CAST(s.started_at_ms / 86400000 AS INTEGER) = ? AND s.video_id = ? - GROUP BY rollup_day, s.video_id - ON CONFLICT (rollup_day, video_id) DO UPDATE SET - total_sessions = excluded.total_sessions, - total_active_min = excluded.total_active_min, - total_lines_seen = excluded.total_lines_seen, - total_words_seen = excluded.total_words_seen, - total_tokens_seen = excluded.total_tokens_seen, - total_cards = excluded.total_cards, - cards_per_hour = excluded.cards_per_hour, - words_per_min = excluded.words_per_min, - lookup_hit_rate = excluded.lookup_hit_rate, - CREATED_DATE = COALESCE(imm_daily_rollups.CREATED_DATE, excluded.CREATED_DATE), - LAST_UPDATE_DATE = excluded.LAST_UPDATE_DATE - `); - - for (const { rollupDay, videoId } of groups) { - upsertStmt.run(rollupNowMs, rollupNowMs, rollupDay, videoId); - } -} -``` - -**Step 2: Apply the same fix to `upsertMonthlyRollupsForGroups`** - -Same subquery pattern — replace the direct `SUM(t.*)` with `SUM(sm.max_*)` via a `MAX`-per-session subquery: - -```typescript -function upsertMonthlyRollupsForGroups( - db: DatabaseSync, - groups: Array<{ rollupMonth: number; videoId: number }>, - rollupNowMs: number, -): void { - if (groups.length === 0) { - return; - } - - const upsertStmt = db.prepare(` - INSERT INTO imm_monthly_rollups ( - rollup_month, video_id, total_sessions, 
total_active_min, total_lines_seen, - total_words_seen, total_tokens_seen, total_cards, CREATED_DATE, LAST_UPDATE_DATE - ) - SELECT - CAST(strftime('%Y%m', s.started_at_ms / 1000, 'unixepoch') AS INTEGER) AS rollup_month, - s.video_id AS video_id, - COUNT(DISTINCT s.session_id) AS total_sessions, - COALESCE(SUM(sm.max_active_ms), 0) / 60000.0 AS total_active_min, - COALESCE(SUM(sm.max_lines), 0) AS total_lines_seen, - COALESCE(SUM(sm.max_words), 0) AS total_words_seen, - COALESCE(SUM(sm.max_tokens), 0) AS total_tokens_seen, - COALESCE(SUM(sm.max_cards), 0) AS total_cards, - ? AS CREATED_DATE, - ? AS LAST_UPDATE_DATE - FROM ( - SELECT - t.session_id, - MAX(t.active_watched_ms) AS max_active_ms, - MAX(t.lines_seen) AS max_lines, - MAX(t.words_seen) AS max_words, - MAX(t.tokens_seen) AS max_tokens, - MAX(t.cards_mined) AS max_cards - FROM imm_session_telemetry t - GROUP BY t.session_id - ) sm - JOIN imm_sessions s ON s.session_id = sm.session_id - WHERE CAST(strftime('%Y%m', s.started_at_ms / 1000, 'unixepoch') AS INTEGER) = ? AND s.video_id = ? 
- GROUP BY rollup_month, s.video_id - ON CONFLICT (rollup_month, video_id) DO UPDATE SET - total_sessions = excluded.total_sessions, - total_active_min = excluded.total_active_min, - total_lines_seen = excluded.total_lines_seen, - total_words_seen = excluded.total_words_seen, - total_tokens_seen = excluded.total_tokens_seen, - total_cards = excluded.total_cards, - CREATED_DATE = COALESCE(imm_monthly_rollups.CREATED_DATE, excluded.CREATED_DATE), - LAST_UPDATE_DATE = excluded.LAST_UPDATE_DATE - `); - - for (const { rollupMonth, videoId } of groups) { - upsertStmt.run(rollupNowMs, rollupNowMs, rollupMonth, videoId); - } -} -``` - -**Step 3: Verify build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner && npx tsc --noEmit` - -**Step 4: Commit** - -```bash -git add src/core/services/immersion-tracker/maintenance.ts -git commit -m "fix(stats): use MAX-per-session subquery in daily and monthly rollup aggregation" -``` - ---- - -### Task 3: Force-Rebuild Rollups on Schema Upgrade - -**Files:** -- Modify: `src/core/services/immersion-tracker/storage.ts` -- Modify: `src/core/services/immersion-tracker/types.ts:1` - -**Step 1: Bump schema version to trigger rebuild** - -In `types.ts`, change line 1: -```typescript -export const SCHEMA_VERSION = 4; -``` - -**Step 2: Add rollup rebuild to schema migration in `storage.ts`** - -At the end of `ensureSchema()`, before the `INSERT INTO imm_schema_version`, add a rollup wipe so that `runRollupMaintenance(db, true)` will recompute from scratch on next maintenance run: - -```typescript - // Wipe stale rollups so they get recomputed with corrected MAX-per-session logic - if (currentVersion?.schema_version && currentVersion.schema_version < SCHEMA_VERSION) { - db.exec('DELETE FROM imm_daily_rollups'); - db.exec('DELETE FROM imm_monthly_rollups'); - db.exec(`UPDATE imm_rollup_state SET state_value = 0 WHERE state_key = 'last_rollup_sample_ms'`); - } -``` - -Add this block just before the final `INSERT INTO imm_schema_version` 
statement (before line 302). - -**Step 3: Verify build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner && npx tsc --noEmit` - -**Step 4: Commit** - -```bash -git add src/core/services/immersion-tracker/types.ts src/core/services/immersion-tracker/storage.ts -git commit -m "fix(stats): bump schema to v4 and wipe rollups for recomputation" -``` - ---- - -### Task 4: Fix Relative Date Formatting - -**Files:** -- Modify: `stats/src/lib/formatters.ts:18-26` -- Modify: `stats/src/lib/formatters.test.ts` - -**Step 1: Update tests first** - -Replace `stats/src/lib/formatters.test.ts` with comprehensive tests: - -```typescript -import assert from 'node:assert/strict'; -import test from 'node:test'; - -import { formatRelativeDate } from './formatters'; - -test('formatRelativeDate: future timestamps return "just now"', () => { - assert.equal(formatRelativeDate(Date.now() + 60_000), 'just now'); -}); - -test('formatRelativeDate: 0ms ago returns "just now"', () => { - assert.equal(formatRelativeDate(Date.now()), 'just now'); -}); - -test('formatRelativeDate: 30s ago returns "just now"', () => { - assert.equal(formatRelativeDate(Date.now() - 30_000), 'just now'); -}); - -test('formatRelativeDate: 5 minutes ago returns "5m ago"', () => { - assert.equal(formatRelativeDate(Date.now() - 5 * 60_000), '5m ago'); -}); - -test('formatRelativeDate: 59 minutes ago returns "59m ago"', () => { - assert.equal(formatRelativeDate(Date.now() - 59 * 60_000), '59m ago'); -}); - -test('formatRelativeDate: 2 hours ago returns "2h ago"', () => { - assert.equal(formatRelativeDate(Date.now() - 2 * 3_600_000), '2h ago'); -}); - -test('formatRelativeDate: 23 hours ago returns "23h ago"', () => { - assert.equal(formatRelativeDate(Date.now() - 23 * 3_600_000), '23h ago'); -}); - -test('formatRelativeDate: 36 hours ago returns "Yesterday"', () => { - assert.equal(formatRelativeDate(Date.now() - 36 * 3_600_000), 'Yesterday'); -}); - -test('formatRelativeDate: 5 days ago returns "5d ago"', () => { - 
assert.equal(formatRelativeDate(Date.now() - 5 * 86_400_000), '5d ago'); -}); - -test('formatRelativeDate: 10 days ago returns locale date string', () => { - const ts = Date.now() - 10 * 86_400_000; - assert.equal(formatRelativeDate(ts), new Date(ts).toLocaleDateString()); -}); -``` - -**Step 2: Run tests to verify they fail** - -Run: `cd /home/sudacode/projects/japanese/SubMiner/stats && bun test src/lib/formatters.test.ts` -Expected: Several failures (current implementation lacks minute/hour granularity) - -**Step 3: Implement the new formatter** - -Replace `formatRelativeDate` in `stats/src/lib/formatters.ts`: - -```typescript -export function formatRelativeDate(ms: number): string { - const now = Date.now(); - const diffMs = now - ms; - if (diffMs < 60_000) return 'just now'; - const diffMin = Math.floor(diffMs / 60_000); - if (diffMin < 60) return `${diffMin}m ago`; - const diffHours = Math.floor(diffMs / 3_600_000); - if (diffHours < 24) return `${diffHours}h ago`; - const diffDays = Math.floor(diffMs / 86_400_000); - if (diffDays < 2) return 'Yesterday'; - if (diffDays < 7) return `${diffDays}d ago`; - return new Date(ms).toLocaleDateString(); -} -``` - -**Step 4: Run tests to verify they pass** - -Run: `cd /home/sudacode/projects/japanese/SubMiner/stats && bun test src/lib/formatters.test.ts` -Expected: All pass - -**Step 5: Commit** - -```bash -git add stats/src/lib/formatters.ts stats/src/lib/formatters.test.ts -git commit -m "fix(stats): add minute and hour granularity to relative date formatting" -``` - ---- - -### Task 5: Add `imm_media_art` Table and Cover Art Queries - -**Files:** -- Modify: `src/core/services/immersion-tracker/storage.ts` (add table in `ensureSchema`) -- Modify: `src/core/services/immersion-tracker/query.ts` (add new query functions) -- Modify: `src/core/services/immersion-tracker/types.ts` (add new row types) - -**Step 1: Add types** - -Append to `src/core/services/immersion-tracker/types.ts`: - -```typescript -export interface 
MediaArtRow { - videoId: number; - anilistId: number | null; - coverUrl: string | null; - coverBlob: Buffer | null; - titleRomaji: string | null; - titleEnglish: string | null; - episodesTotal: number | null; - fetchedAtMs: number; -} - -export interface MediaLibraryRow { - videoId: number; - canonicalTitle: string; - totalSessions: number; - totalActiveMs: number; - totalCards: number; - totalWordsSeen: number; - lastWatchedMs: number; - hasCoverArt: number; -} - -export interface MediaDetailRow { - videoId: number; - canonicalTitle: string; - totalSessions: number; - totalActiveMs: number; - totalCards: number; - totalWordsSeen: number; - totalLinesSeen: number; - totalLookupCount: number; - totalLookupHits: number; -} -``` - -**Step 2: Add table creation in `ensureSchema`** - -Add after the `imm_kanji` table creation block (after line 191 in storage.ts): - -```typescript - db.exec(` - CREATE TABLE IF NOT EXISTS imm_media_art( - video_id INTEGER PRIMARY KEY, - anilist_id INTEGER, - cover_url TEXT, - cover_blob BLOB, - title_romaji TEXT, - title_english TEXT, - episodes_total INTEGER, - fetched_at_ms INTEGER NOT NULL, - CREATED_DATE INTEGER, - LAST_UPDATE_DATE INTEGER, - FOREIGN KEY(video_id) REFERENCES imm_videos(video_id) ON DELETE CASCADE - ); - `); -``` - -**Step 3: Add query functions** - -Append to `src/core/services/immersion-tracker/query.ts`: - -```typescript -import type { MediaArtRow, MediaLibraryRow, MediaDetailRow } from './types'; - -export function getMediaLibrary(db: DatabaseSync): MediaLibraryRow[] { - return db.prepare(` - SELECT - v.video_id AS videoId, - v.canonical_title AS canonicalTitle, - COUNT(DISTINCT s.session_id) AS totalSessions, - COALESCE(SUM(sm.max_active_ms), 0) AS totalActiveMs, - COALESCE(SUM(sm.max_cards), 0) AS totalCards, - COALESCE(SUM(sm.max_words), 0) AS totalWordsSeen, - MAX(s.started_at_ms) AS lastWatchedMs, - CASE WHEN ma.cover_blob IS NOT NULL THEN 1 ELSE 0 END AS hasCoverArt - FROM imm_videos v - JOIN imm_sessions s ON 
s.video_id = v.video_id - LEFT JOIN ( - SELECT - t.session_id, - MAX(t.active_watched_ms) AS max_active_ms, - MAX(t.cards_mined) AS max_cards, - MAX(t.words_seen) AS max_words - FROM imm_session_telemetry t - GROUP BY t.session_id - ) sm ON sm.session_id = s.session_id - LEFT JOIN imm_media_art ma ON ma.video_id = v.video_id - GROUP BY v.video_id - ORDER BY lastWatchedMs DESC - `).all() as unknown as MediaLibraryRow[]; -} - -export function getMediaDetail(db: DatabaseSync, videoId: number): MediaDetailRow | null { - return db.prepare(` - SELECT - v.video_id AS videoId, - v.canonical_title AS canonicalTitle, - COUNT(DISTINCT s.session_id) AS totalSessions, - COALESCE(SUM(sm.max_active_ms), 0) AS totalActiveMs, - COALESCE(SUM(sm.max_cards), 0) AS totalCards, - COALESCE(SUM(sm.max_words), 0) AS totalWordsSeen, - COALESCE(SUM(sm.max_lines), 0) AS totalLinesSeen, - COALESCE(SUM(sm.max_lookups), 0) AS totalLookupCount, - COALESCE(SUM(sm.max_hits), 0) AS totalLookupHits - FROM imm_videos v - JOIN imm_sessions s ON s.video_id = v.video_id - LEFT JOIN ( - SELECT - t.session_id, - MAX(t.active_watched_ms) AS max_active_ms, - MAX(t.cards_mined) AS max_cards, - MAX(t.words_seen) AS max_words, - MAX(t.lines_seen) AS max_lines, - MAX(t.lookup_count) AS max_lookups, - MAX(t.lookup_hits) AS max_hits - FROM imm_session_telemetry t - GROUP BY t.session_id - ) sm ON sm.session_id = s.session_id - WHERE v.video_id = ? 
- GROUP BY v.video_id - `).get(videoId) as unknown as MediaDetailRow | null; -} - -export function getMediaSessions(db: DatabaseSync, videoId: number, limit = 100): SessionSummaryQueryRow[] { - return db.prepare(` - SELECT - s.session_id AS sessionId, - s.video_id AS videoId, - v.canonical_title AS canonicalTitle, - s.started_at_ms AS startedAtMs, - s.ended_at_ms AS endedAtMs, - COALESCE(MAX(t.total_watched_ms), 0) AS totalWatchedMs, - COALESCE(MAX(t.active_watched_ms), 0) AS activeWatchedMs, - COALESCE(MAX(t.lines_seen), 0) AS linesSeen, - COALESCE(MAX(t.words_seen), 0) AS wordsSeen, - COALESCE(MAX(t.tokens_seen), 0) AS tokensSeen, - COALESCE(MAX(t.cards_mined), 0) AS cardsMined, - COALESCE(MAX(t.lookup_count), 0) AS lookupCount, - COALESCE(MAX(t.lookup_hits), 0) AS lookupHits - FROM imm_sessions s - LEFT JOIN imm_session_telemetry t ON t.session_id = s.session_id - LEFT JOIN imm_videos v ON v.video_id = s.video_id - WHERE s.video_id = ? - GROUP BY s.session_id - ORDER BY s.started_at_ms DESC - LIMIT ? - `).all(videoId, limit) as unknown as SessionSummaryQueryRow[]; -} - -export function getMediaDailyRollups(db: DatabaseSync, videoId: number, limit = 90): ImmersionSessionRollupRow[] { - return db.prepare(` - SELECT - rollup_day AS rollupDayOrMonth, - video_id AS videoId, - total_sessions AS totalSessions, - total_active_min AS totalActiveMin, - total_lines_seen AS totalLinesSeen, - total_words_seen AS totalWordsSeen, - total_tokens_seen AS totalTokensSeen, - total_cards AS totalCards, - cards_per_hour AS cardsPerHour, - words_per_min AS wordsPerMin, - lookup_hit_rate AS lookupHitRate - FROM imm_daily_rollups - WHERE video_id = ? - ORDER BY rollup_day DESC - LIMIT ? 
- `).all(videoId, limit) as unknown as ImmersionSessionRollupRow[]; -} - -export function getCoverArt(db: DatabaseSync, videoId: number): MediaArtRow | null { - return db.prepare(` - SELECT - video_id AS videoId, - anilist_id AS anilistId, - cover_url AS coverUrl, - cover_blob AS coverBlob, - title_romaji AS titleRomaji, - title_english AS titleEnglish, - episodes_total AS episodesTotal, - fetched_at_ms AS fetchedAtMs - FROM imm_media_art - WHERE video_id = ? - `).get(videoId) as unknown as MediaArtRow | null; -} - -export function upsertCoverArt( - db: DatabaseSync, - videoId: number, - art: { - anilistId: number | null; - coverUrl: string | null; - coverBlob: Buffer | null; - titleRomaji: string | null; - titleEnglish: string | null; - episodesTotal: number | null; - }, -): void { - const nowMs = Date.now(); - db.prepare(` - INSERT INTO imm_media_art ( - video_id, anilist_id, cover_url, cover_blob, - title_romaji, title_english, episodes_total, - fetched_at_ms, CREATED_DATE, LAST_UPDATE_DATE - ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) 
- ON CONFLICT(video_id) DO UPDATE SET - anilist_id = excluded.anilist_id, - cover_url = excluded.cover_url, - cover_blob = excluded.cover_blob, - title_romaji = excluded.title_romaji, - title_english = excluded.title_english, - episodes_total = excluded.episodes_total, - fetched_at_ms = excluded.fetched_at_ms, - LAST_UPDATE_DATE = excluded.LAST_UPDATE_DATE - `).run( - videoId, art.anilistId, art.coverUrl, art.coverBlob, - art.titleRomaji, art.titleEnglish, art.episodesTotal, - nowMs, nowMs, nowMs, - ); -} -``` - -**Step 4: Verify build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner && npx tsc --noEmit` - -**Step 5: Commit** - -```bash -git add src/core/services/immersion-tracker/types.ts src/core/services/immersion-tracker/storage.ts src/core/services/immersion-tracker/query.ts -git commit -m "feat(stats): add imm_media_art table and media library/detail queries" -``` - ---- - -### Task 6: Centralized Anilist Rate Limiter - -**Files:** -- Create: `src/core/services/anilist/rate-limiter.ts` - -**Step 1: Implement sliding-window rate limiter** - -```typescript -const DEFAULT_MAX_PER_MINUTE = 20; -const WINDOW_MS = 60_000; -const SAFETY_REMAINING_THRESHOLD = 5; - -export interface AnilistRateLimiter { - acquire(): Promise; - recordResponse(headers: Headers): void; -} - -export function createAnilistRateLimiter( - maxPerMinute = DEFAULT_MAX_PER_MINUTE, -): AnilistRateLimiter { - const timestamps: number[] = []; - let pauseUntilMs = 0; - - function pruneOld(now: number): void { - const cutoff = now - WINDOW_MS; - while (timestamps.length > 0 && timestamps[0]! 
< cutoff) { - timestamps.shift(); - } - } - - return { - async acquire(): Promise<void> { - const now = Date.now(); - - if (now < pauseUntilMs) { - const waitMs = pauseUntilMs - now; - await new Promise<void>((resolve) => setTimeout(resolve, waitMs)); - } - - pruneOld(Date.now()); - - if (timestamps.length >= maxPerMinute) { - const oldest = timestamps[0]!; - const waitMs = oldest + WINDOW_MS - Date.now() + 100; - if (waitMs > 0) { - await new Promise<void>((resolve) => setTimeout(resolve, waitMs)); - } - pruneOld(Date.now()); - } - - timestamps.push(Date.now()); - }, - - recordResponse(headers: Headers): void { - const remaining = headers.get('x-ratelimit-remaining'); - if (remaining !== null) { - const n = parseInt(remaining, 10); - if (Number.isFinite(n) && n < SAFETY_REMAINING_THRESHOLD) { - const reset = headers.get('x-ratelimit-reset'); - if (reset) { - const resetMs = parseInt(reset, 10) * 1000; - if (Number.isFinite(resetMs)) { - pauseUntilMs = Math.max(pauseUntilMs, resetMs); - } - } else { - pauseUntilMs = Math.max(pauseUntilMs, Date.now() + WINDOW_MS); - } - } - } - - const retryAfter = headers.get('retry-after'); - if (retryAfter) { - const seconds = parseInt(retryAfter, 10); - if (Number.isFinite(seconds) && seconds > 0) { - pauseUntilMs = Math.max(pauseUntilMs, Date.now() + seconds * 1000); - } - } - }, - }; -} -``` - -**Step 2: Verify build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner && npx tsc --noEmit` - -**Step 3: Commit** - -```bash -git add src/core/services/anilist/rate-limiter.ts -git commit -m "feat(stats): add centralized Anilist rate limiter with sliding window" -``` - ---- - -### Task 7: Cover Art Fetcher Service - -**Files:** -- Create: `src/core/services/anilist/cover-art-fetcher.ts` - -**Step 1: Implement the cover art fetcher** - -This service searches Anilist for anime cover art and caches results. It reuses the existing `guessAnilistMediaInfo` for title parsing and `pickBestSearchResult`-style matching.
- -```typescript -import type { DatabaseSync } from '../immersion-tracker/sqlite'; -import type { AnilistRateLimiter } from './rate-limiter'; -import { getCoverArt, upsertCoverArt } from '../immersion-tracker/query'; - -const ANILIST_GRAPHQL_URL = 'https://graphql.anilist.co'; - -const SEARCH_QUERY = ` - query ($search: String!) { - Page(perPage: 5) { - media(search: $search, type: ANIME) { - id - episodes - coverImage { large medium } - title { romaji english native } - } - } - } -`; - -interface AnilistSearchMedia { - id: number; - episodes: number | null; - coverImage?: { large?: string; medium?: string }; - title?: { romaji?: string; english?: string; native?: string }; -} - -interface AnilistSearchResponse { - data?: { Page?: { media?: AnilistSearchMedia[] } }; - errors?: Array<{ message?: string }>; -} - -function stripFilenameTags(title: string): string { - return title - .replace(/\s*\[.*?\]\s*/g, ' ') - .replace(/\s*\((?:\d{4}|(?:\d+(?:bit|p)))\)\s*/gi, ' ') - .replace(/\s*-\s*S\d+E\d+\s*/i, ' ') - .replace(/\s*-\s*\d{2,4}\s*/, ' ') - .replace(/\s+/g, ' ') - .trim(); -} - -export interface CoverArtFetcher { - fetchIfMissing(db: DatabaseSync, videoId: number, canonicalTitle: string): Promise<boolean>; -} - -export function createCoverArtFetcher( - rateLimiter: AnilistRateLimiter, - logger: { info: (msg: string) => void; warn: (msg: string, detail?: unknown) => void }, -): CoverArtFetcher { - return { - async fetchIfMissing(db: DatabaseSync, videoId: number, canonicalTitle: string): Promise<boolean> { - const existing = getCoverArt(db, videoId); - if (existing) return true; - - const searchTitle = stripFilenameTags(canonicalTitle); - if (!searchTitle) { - upsertCoverArt(db, videoId, { - anilistId: null, coverUrl: null, coverBlob: null, - titleRomaji: null, titleEnglish: null, episodesTotal: null, - }); - return false; - } - - try { - await rateLimiter.acquire(); - const res = await fetch(ANILIST_GRAPHQL_URL, { - method: 'POST', - headers: { 'Content-Type': 'application/json'
}, - body: JSON.stringify({ query: SEARCH_QUERY, variables: { search: searchTitle } }), - }); - rateLimiter.recordResponse(res.headers); - - if (res.status === 429) { - logger.warn(`Anilist 429 for "${searchTitle}", will retry later`); - return false; - } - - const payload = await res.json() as AnilistSearchResponse; - const media = payload.data?.Page?.media ?? []; - if (media.length === 0) { - upsertCoverArt(db, videoId, { - anilistId: null, coverUrl: null, coverBlob: null, - titleRomaji: null, titleEnglish: null, episodesTotal: null, - }); - return false; - } - - const best = media[0]!; - const coverUrl = best.coverImage?.large ?? best.coverImage?.medium ?? null; - let coverBlob: Buffer | null = null; - - if (coverUrl) { - await rateLimiter.acquire(); - const imgRes = await fetch(coverUrl); - rateLimiter.recordResponse(imgRes.headers); - if (imgRes.ok) { - coverBlob = Buffer.from(await imgRes.arrayBuffer()); - } - } - - upsertCoverArt(db, videoId, { - anilistId: best.id, - coverUrl, - coverBlob, - titleRomaji: best.title?.romaji ?? null, - titleEnglish: best.title?.english ?? 
null, - episodesTotal: best.episodes, - }); - - logger.info(`Cached cover art for "${searchTitle}" (anilist:${best.id})`); - return true; - } catch (err) { - logger.warn(`Cover art fetch failed for "${searchTitle}"`, err); - return false; - } - }, - }; -} -``` - -**Step 2: Verify build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner && npx tsc --noEmit` - -**Step 3: Commit** - -```bash -git add src/core/services/anilist/cover-art-fetcher.ts -git commit -m "feat(stats): add cover art fetcher with Anilist search and image caching" -``` - ---- - -### Task 8: Add Media API Endpoints and IPC Handlers - -**Files:** -- Modify: `src/core/services/stats-server.ts` -- Modify: `src/core/services/ipc.ts` -- Modify: `src/shared/ipc/contracts.ts` -- Modify: `src/preload-stats.ts` - -**Step 1: Add new IPC channel constants** - -In `src/shared/ipc/contracts.ts`, add to `IPC_CHANNELS.request` (after line 72): - -```typescript - statsGetMediaLibrary: 'stats:get-media-library', - statsGetMediaDetail: 'stats:get-media-detail', - statsGetMediaSessions: 'stats:get-media-sessions', - statsGetMediaDailyRollups: 'stats:get-media-daily-rollups', - statsGetMediaCover: 'stats:get-media-cover', -``` - -**Step 2: Add HTTP routes to stats-server.ts** - -Add before the `return app;` line in `createStatsApp()`: - -```typescript - app.get('/api/stats/media', async (c) => { - const library = await tracker.getMediaLibrary(); - return c.json(library); - }); - - app.get('/api/stats/media/:videoId', async (c) => { - const videoId = parseIntQuery(c.req.param('videoId'), 0); - if (videoId <= 0) return c.json(null, 400); - const [detail, sessions, rollups] = await Promise.all([ - tracker.getMediaDetail(videoId), - tracker.getMediaSessions(videoId, 100), - tracker.getMediaDailyRollups(videoId, 90), - ]); - return c.json({ detail, sessions, rollups }); - }); - - app.get('/api/stats/media/:videoId/cover', async (c) => { - const videoId = parseIntQuery(c.req.param('videoId'), 0); - if (videoId <= 0) 
return c.body(null, 404); - const art = await tracker.getCoverArt(videoId); - if (!art?.coverBlob) return c.body(null, 404); - return new Response(art.coverBlob, { - headers: { - 'Content-Type': 'image/jpeg', - 'Cache-Control': 'public, max-age=604800', - }, - }); - }); -``` - -**Step 3: Add IPC handlers** - -Add corresponding IPC handlers in `src/core/services/ipc.ts` following the existing pattern (after the `statsGetKanji` handler). - -**Step 4: Add preload API methods** - -Add to `src/preload-stats.ts` statsAPI object: - -```typescript - getMediaLibrary: (): Promise<unknown> => - ipcRenderer.invoke(IPC_CHANNELS.request.statsGetMediaLibrary), - - getMediaDetail: (videoId: number): Promise<unknown> => - ipcRenderer.invoke(IPC_CHANNELS.request.statsGetMediaDetail, videoId), - - getMediaSessions: (videoId: number, limit?: number): Promise<unknown> => - ipcRenderer.invoke(IPC_CHANNELS.request.statsGetMediaSessions, videoId, limit), - - getMediaDailyRollups: (videoId: number, limit?: number): Promise<unknown> => - ipcRenderer.invoke(IPC_CHANNELS.request.statsGetMediaDailyRollups, videoId, limit), - - getMediaCover: (videoId: number): Promise<unknown> => - ipcRenderer.invoke(IPC_CHANNELS.request.statsGetMediaCover, videoId), -``` - -**Step 5: Wire up `ImmersionTrackerService` to expose the new query methods** - -The service needs to expose `getMediaLibrary()`, `getMediaDetail(videoId)`, `getMediaSessions(videoId, limit)`, `getMediaDailyRollups(videoId, limit)`, and `getCoverArt(videoId)` by delegating to the query functions added in Task 5.
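The Step 5 delegation can be sketched as follows. This is a minimal shape, not the real service: `createMediaQueryApi`, the inline `Db` stub, and the stubbed query functions are illustrative stand-ins for `ImmersionTrackerService` and the Task 5 query layer.

```typescript
// Stand-in for the DatabaseSync handle the real service owns.
type Db = { name: string };

interface MediaLibraryItem {
  videoId: number;
  canonicalTitle: string;
}

// Stand-ins for the Task 5 query-layer functions.
const query = {
  getMediaLibrary: (_db: Db): MediaLibraryItem[] => [],
  getMediaSessions: (_db: Db, videoId: number, limit: number) => ({ videoId, limit }),
};

// Each service method forwards its arguments plus the private db handle,
// so IPC handlers and HTTP routes never touch SQLite directly.
export function createMediaQueryApi(db: Db) {
  return {
    getMediaLibrary: () => query.getMediaLibrary(db),
    getMediaSessions: (videoId: number, limit = 100) =>
      query.getMediaSessions(db, videoId, limit),
  };
}
```

The thin-wrapper shape keeps default values (like the session limit) in one place, on the service boundary.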
- -**Step 6: Verify build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner && npx tsc --noEmit` - -**Step 7: Commit** - -```bash -git add src/shared/ipc/contracts.ts src/core/services/stats-server.ts src/core/services/ipc.ts src/preload-stats.ts src/core/services/immersion-tracker-service.ts -git commit -m "feat(stats): add media library/detail/cover API endpoints and IPC handlers" -``` - ---- - -### Task 9: Frontend — Update Types, Clients, and Hooks - -**Files:** -- Modify: `stats/src/types/stats.ts` -- Modify: `stats/src/lib/api-client.ts` -- Modify: `stats/src/lib/ipc-client.ts` -- Create: `stats/src/hooks/useMediaLibrary.ts` -- Create: `stats/src/hooks/useMediaDetail.ts` - -**Step 1: Add new types in `stats/src/types/stats.ts`** - -```typescript -export interface MediaLibraryItem { - videoId: number; - canonicalTitle: string; - totalSessions: number; - totalActiveMs: number; - totalCards: number; - totalWordsSeen: number; - lastWatchedMs: number; - hasCoverArt: number; -} - -export interface MediaDetailData { - detail: { - videoId: number; - canonicalTitle: string; - totalSessions: number; - totalActiveMs: number; - totalCards: number; - totalWordsSeen: number; - totalLinesSeen: number; - totalLookupCount: number; - totalLookupHits: number; - } | null; - sessions: SessionSummary[]; - rollups: DailyRollup[]; -} -``` - -**Step 2: Add new methods to both clients** - -Add to `apiClient` in `stats/src/lib/api-client.ts`: - -```typescript - getMediaLibrary: () => fetchJson<MediaLibraryItem[]>('/api/stats/media'), - getMediaDetail: (videoId: number) => - fetchJson<MediaDetailData>(`/api/stats/media/${videoId}`), -``` - -Add matching methods to `ipcClient` in `stats/src/lib/ipc-client.ts` and the `StatsElectronAPI` interface.
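The ipcClient side mentioned in Step 2 can be sketched like this; the channel strings come from Task 8, while `createIpcClient` and the injected `invoke` are hypothetical names used so the mapping is testable outside Electron.

```typescript
// Channel names added in Task 8 (contracts.ts).
const CHANNELS = {
  statsGetMediaLibrary: 'stats:get-media-library',
  statsGetMediaDetail: 'stats:get-media-detail',
} as const;

// In the app this would be window.statsAPI / ipcRenderer.invoke; injecting it
// keeps the sketch self-contained.
type Invoke = (channel: string, ...args: unknown[]) => Promise<unknown>;

// Each client method is a thin wrapper that pins the channel name and forwards
// its arguments, mirroring the apiClient method signatures one-for-one.
export function createIpcClient(invoke: Invoke) {
  return {
    getMediaLibrary: () => invoke(CHANNELS.statsGetMediaLibrary),
    getMediaDetail: (videoId: number) =>
      invoke(CHANNELS.statsGetMediaDetail, videoId),
  };
}
```

Keeping both clients behind one shared interface is what lets `getStatsClient()` pick HTTP or IPC at runtime without the hooks caring which one they got.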
- -**Step 3: Create `stats/src/hooks/useMediaLibrary.ts`** - -```typescript -import { useState, useEffect } from 'react'; -import { getStatsClient } from './useStatsApi'; -import type { MediaLibraryItem } from '../types/stats'; - -export function useMediaLibrary() { - const [media, setMedia] = useState<MediaLibraryItem[]>([]); - const [loading, setLoading] = useState(true); - const [error, setError] = useState<string | null>(null); - - useEffect(() => { - getStatsClient() - .getMediaLibrary() - .then(setMedia) - .catch((err: Error) => setError(err.message)) - .finally(() => setLoading(false)); - }, []); - - return { media, loading, error }; -} -``` - -**Step 4: Create `stats/src/hooks/useMediaDetail.ts`** - -```typescript -import { useState, useEffect } from 'react'; -import { getStatsClient } from './useStatsApi'; -import type { MediaDetailData } from '../types/stats'; - -export function useMediaDetail(videoId: number | null) { - const [data, setData] = useState<MediaDetailData | null>(null); - const [loading, setLoading] = useState(false); - const [error, setError] = useState<string | null>(null); - - useEffect(() => { - if (videoId === null) return; - setLoading(true); - setError(null); - getStatsClient() - .getMediaDetail(videoId) - .then(setData) - .catch((err: Error) => setError(err.message)) - .finally(() => setLoading(false)); - }, [videoId]); - - return { data, loading, error }; -} -``` - -**Step 5: Verify frontend build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner/stats && bun run build` - -**Step 6: Commit** - -```bash -git add stats/src/types/stats.ts stats/src/lib/api-client.ts stats/src/lib/ipc-client.ts stats/src/hooks/useMediaLibrary.ts stats/src/hooks/useMediaDetail.ts -git commit -m "feat(stats): add media library and detail types, clients, and hooks" -``` - ---- - -### Task 10: Frontend — Redesign Overview Tab as Activity Feed - -**Files:** -- Modify: `stats/src/components/overview/OverviewTab.tsx` -- Modify: `stats/src/components/overview/HeroStats.tsx` -- Modify:
`stats/src/components/overview/RecentSessions.tsx` -- Delete or repurpose: `stats/src/components/overview/QuickStats.tsx` - -**Step 1: Simplify HeroStats to 4 cards: Watch Time Today, Cards Mined, Streak, All Time** - -Replace the "Words Seen" and "Lookup Hit Rate" cards with "Streak" and "All Time" — move the streak logic from QuickStats into HeroStats. The all-time total is the sum of all rollup `totalActiveMin`. - -**Step 2: Redesign RecentSessions as an activity feed** - -- Group sessions by day ("Today", "Yesterday", "March 10") -- Each row: small cover art thumbnail (48x64), clean title, relative time + duration, cards + words stats -- Use the cover art endpoint: `/api/stats/media/${videoId}/cover` with an `<img>` tag and fallback placeholder - -**Step 3: Remove WatchTimeChart and QuickStats from the Overview tab** - -The watch time chart moves to the Trends tab. QuickStats data is absorbed into HeroStats. - -**Step 4: Update OverviewTab layout** - -```tsx -export function OverviewTab() { - const { data, loading, error } = useOverview(); - if (loading) return <div>Loading...</div>; - if (error) return <div>Error: {error}</div>; - if (!data) return null; - - return ( - <div className="space-y-6"> - <HeroStats data={data} /> - <RecentSessions data={data} /> - </div> - ); -} -``` - -**Step 5: Verify frontend build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner/stats && bun run build` - -**Step 6: Commit** - -```bash -git add stats/src/components/overview/ -git commit -m "feat(stats): redesign Overview tab as activity feed with hero stats" -``` - ---- - -### Task 11: Frontend — Library Tab with Cover Art Grid - -**Files:** -- Create: `stats/src/components/library/LibraryTab.tsx` -- Create: `stats/src/components/library/MediaCard.tsx` -- Create: `stats/src/components/library/CoverImage.tsx` - -**Step 1: Create CoverImage component** - -Loads cover art from `/api/stats/media/${videoId}/cover`. Falls back to a gray placeholder with the first character of the title. Handles loading state. - -**Step 2: Create MediaCard component** - -Shows: CoverImage (3:4 aspect ratio), episode badge, title, watch time, cards mined. Accepts `onClick` prop for navigation. - -**Step 3: Create LibraryTab** - -Uses `useMediaLibrary()` hook. Renders search input, filter chips (All/Watching/Completed — for v1, "All" only since we don't track watch status yet), summary line ("N titles · Xh total"), and a CSS grid of MediaCards. Clicking a card sets a `selectedVideoId` state to navigate to the detail view (Task 12). - -**Step 4: Verify frontend build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner/stats && bun run build` - -**Step 5: Commit** - -```bash -git add stats/src/components/library/ -git commit -m "feat(stats): add Library tab with cover art grid" -``` - ---- - -### Task 12: Frontend — Per-Anime Detail View - -**Files:** -- Create: `stats/src/components/library/MediaDetailView.tsx` -- Create: `stats/src/components/library/MediaHeader.tsx` -- Create: `stats/src/components/library/MediaWatchChart.tsx` -- Create: `stats/src/components/library/MediaSessionList.tsx` - -**Step 1: Create MediaHeader** - -Large cover art on the left, title + stats on the right (total watch time, total episodes/sessions, cards mined, avg session length).
- -**Step 2: Create MediaWatchChart** - -Reuse the existing `WatchTimeChart` pattern (Recharts BarChart) but scoped to the anime's rollups from `MediaDetailData.rollups`. - -**Step 3: Create MediaSessionList** - -List of sessions for this anime. Reuse the SessionRow pattern but without the expand/detail — just show timestamp, duration, cards, words per session. - -**Step 4: Create MediaDetailView** - -Composed component: back button, MediaHeader, MediaWatchChart, MediaSessionList. Uses `useMediaDetail(videoId)` hook. The vocabulary section can be a placeholder for now ("Coming soon") to keep v1 scope manageable. - -**Step 5: Integrate into LibraryTab** - -When `selectedVideoId` is set, render `MediaDetailView` instead of the grid. Back button resets `selectedVideoId` to null. - -**Step 6: Verify frontend build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner/stats && bun run build` - -**Step 7: Commit** - -```bash -git add stats/src/components/library/ -git commit -m "feat(stats): add per-anime detail view with header, chart, and session history" -``` - ---- - -### Task 13: Frontend — Update Tab Bar and App Shell - -**Files:** -- Modify: `stats/src/components/layout/TabBar.tsx` -- Modify: `stats/src/App.tsx` - -**Step 1: Update TabBar tabs** - -Change `TabId` type and `TABS` array: - -```typescript -export type TabId = 'overview' | 'library' | 'trends' | 'vocabulary'; - -const TABS: Tab[] = [ - { id: 'overview', label: 'Overview' }, - { id: 'library', label: 'Library' }, - { id: 'trends', label: 'Trends' }, - { id: 'vocabulary', label: 'Vocabulary' }, -]; -``` - -**Step 2: Update App.tsx** - -Replace the Sessions tab panel with Library, import `LibraryTab`: - -```tsx -import { LibraryTab } from './components/library/LibraryTab'; - -// In the JSX, replace the sessions section with: -<LibraryTab /> -``` - -Remove the SessionsTab import. The Sessions tab functionality is now part of the activity feed (Overview) and per-anime detail (Library).
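One way to keep the `TabId` union and the App panels in sync is an exhaustive switch; `panelFor` below is purely illustrative (it returns component names as strings), and the Trends/Vocabulary component names are assumptions.

```typescript
export type TabId = 'overview' | 'library' | 'trends' | 'vocabulary';

// If a new TabId is added to the union without a matching case, tsc rejects
// the `never` assignment in the default branch at compile time.
export function panelFor(tab: TabId): string {
  switch (tab) {
    case 'overview': return 'OverviewTab';
    case 'library': return 'LibraryTab';
    case 'trends': return 'TrendsTab';
    case 'vocabulary': return 'VocabularyTab';
    default: {
      const unreachable: never = tab;
      return unreachable;
    }
  }
}
```

The same exhaustiveness trick would have flagged any leftover `'sessions'` panel after the tab removal.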
- -**Step 3: Verify frontend build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner/stats && bun run build` - -**Step 4: Commit** - -```bash -git add stats/src/components/layout/TabBar.tsx stats/src/App.tsx -git commit -m "feat(stats): replace Sessions tab with Library tab in app shell" -``` - ---- - -### Task 14: Integration Test — Full Build and Smoke Test - -**Step 1: Full build** - -Run: `cd /home/sudacode/projects/japanese/SubMiner && make build` - -**Step 2: Run existing tests** - -Run: `cd /home/sudacode/projects/japanese/SubMiner && bun test` - -**Step 3: Manual smoke test** - -Launch the app, open the stats overlay, verify: -- Overview tab shows activity feed with relative timestamps -- Watch time values are reasonable (not inflated) -- Library tab shows grid with cover art placeholders -- Clicking a card shows the detail view -- Back button returns to grid -- Trends and Vocabulary tabs still work - -**Step 4: Final commit** - -```bash -git add -A -git commit -m "feat(stats): stats dashboard v2 with activity feed, library grid, and per-anime detail" -``` diff --git a/docs/plans/2026-03-12-stats-v2-redesign.md b/docs/plans/2026-03-12-stats-v2-redesign.md deleted file mode 100644 index f22f117..0000000 --- a/docs/plans/2026-03-12-stats-v2-redesign.md +++ /dev/null @@ -1,152 +0,0 @@ -# Stats Dashboard v2 Redesign - -## Summary - -Redesign the stats dashboard to focus on session/media history as the primary experience. Activity feed as the default view, dedicated Library tab with anime cover art (via Anilist API), per-anime drill-down pages, and bug fixes for watch time inflation and relative date formatting. - -## Bug Fixes (Pre-requisite) - -### Watch Time Inflation - -Telemetry values (`active_watched_ms`, `total_watched_ms`, `lines_seen`, `words_seen`, etc.) are cumulative snapshots — each sample stores the running total for that session. 
Both `getSessionSummaries` (query.ts) and `upsertDailyRollupsForGroups` / `upsertMonthlyRollupsForGroups` (maintenance.ts) incorrectly use `SUM()` across all telemetry rows instead of `MAX()` per session. - -**Fix:** -- `getSessionSummaries`: change `SUM(t.active_watched_ms)` → `MAX(t.active_watched_ms)` (already grouped by `s.session_id`) -- `upsertDailyRollupsForGroups` / `upsertMonthlyRollupsForGroups`: use a subquery that gets `MAX()` per session_id, then `SUM()` across sessions -- Run `forceRebuild` rollup after migration to recompute all rollups - -### Relative Date Formatting - -`formatRelativeDate` only has day-level granularity ("Today", "Yesterday"). Add minute and hour levels: -- < 1 min → "just now" -- < 60 min → "Xm ago" -- < 24 hours → "Xh ago" -- < 2 days → "Yesterday" -- < 7 days → "Xd ago" -- otherwise → locale date string - -## Tab Structure - -**Overview** (default) | **Library** | **Trends** | **Vocabulary** - -### Overview Tab — Activity Feed - -Top section: hero stats (watch time today, cards mined today, streak, all-time total hours). - -Below: recent sessions listed chronologically, grouped by day headers ("Today", "Yesterday", "March 10"). Each session row shows: -- Small cover art thumbnail (from Anilist cache) -- Clean title with episode info ("The Eminence in Shadow — Episode 5") -- Relative timestamp ("32m ago") and active duration ("24m active") -- Per-session stats: cards mined, words seen - -### Library Tab — Cover Art Grid - -Grid of anime cover art cards fetched from Anilist API. Each card shows: -- Cover art image (3:4 aspect ratio) -- Episode count badge -- Title, total watch time, cards mined - -Controls: search bar, filter chips (All / Watching / Completed), total count + time summary. - -Clicking a card navigates to the per-anime detail view. - -### Per-Anime Detail View - -Navigated from Library card click. Sections: -1. **Header** — cover art, title, total watch time, total episodes, total cards mined, avg session length -2. 
**Watch time chart** — bar chart scoped to this anime over time (14/30/90d range selector) -3. **Session history** — all sessions for this anime with timestamps, durations, per-session stats -4. **Vocabulary** — words and kanji learned from this show (joined via session events → video_id) - -### Trends & Vocabulary Tabs - -Keep existing implementation, mostly unchanged for v2. - -## Anilist Integration & Cover Art Cache - -### Title Parsing - -Parse show name from `canonical_title` to search Anilist: -- Jellyfin titles are already clean (use as-is) -- Local file titles: use existing `guessit` + `parseMediaInfo` fallback from `anilist-updater.ts` -- Strip episode info, codec tags, resolution markers via regex - -### Anilist GraphQL Query - -Search query (no auth needed for public anime search): -```graphql -query ($search: String!) { - Page(perPage: 5) { - media(search: $search, type: ANIME) { - id - coverImage { large medium } - title { romaji english native } - episodes - } - } -} -``` - -### Cover Art Cache - -New SQLite table `imm_media_art`: -- `video_id` INTEGER PRIMARY KEY (FK to imm_videos) -- `anilist_id` INTEGER -- `cover_url` TEXT -- `cover_blob` BLOB (cached image binary) -- `title_romaji` TEXT -- `title_english` TEXT -- `episodes_total` INTEGER -- `fetched_at_ms` INTEGER -- `CREATED_DATE` INTEGER -- `LAST_UPDATE_DATE` INTEGER - -Serve cached images via: `GET /api/stats/media/:videoId/cover` - -Fallback: gray placeholder with first character of title if no Anilist match. - -### Rate Limiting Strategy - -**Current state:** Anilist limit is 30 req/min (temporarily reduced from 90). Existing post-watch updater uses up to 3 requests per episode (search + entry lookup + save mutation). Retry queue can also fire requests. No centralized rate limiter. 
- -**Centralized rate limiter:** -- Shared sliding-window tracker (array of timestamps) for all Anilist calls -- App-wide cap: 20 req/min (leaving 10 req/min headroom) -- All callers go through the limiter: existing `anilistGraphQl` helper and new cover art fetcher -- Read `X-RateLimit-Remaining` from response headers; if < 5, pause until window resets -- On 429 response, honor `Retry-After` header - -**Cover art fetching behavior:** -- Lazy & one-shot: only fetch when a video appears in the stats UI with no cached art -- Once cached in SQLite, never re-fetch (cover art doesn't change) -- On first Library load with N uncached titles, fetch sequentially with ~3s gap between requests -- Show placeholder for unfetched titles, fill in as fetches complete - -## New API Endpoints - -- `GET /api/stats/media` — all media with aggregated stats (total time, episodes watched, cards, last watched, cover art status) -- `GET /api/stats/media/:videoId` — single media detail: session history, rollups, vocab for that video -- `GET /api/stats/media/:videoId/cover` — cached cover art image (binary response) - -## New Database Queries - -- `getMediaLibrary(db)` — group sessions by video_id, aggregate stats, join with imm_media_art -- `getMediaDetail(db, videoId)` — sessions + daily rollups + vocab scoped to one video_id -- `getMediaVocabulary(db, videoId)` — words/kanji from sessions belonging to a specific video_id (join imm_session_events with imm_sessions on video_id) - -## Data Flow - -``` -Library tab loads - → GET /api/stats/media - → Returns list of videos with aggregated stats + cover art status - → For videos without cached art: - → Background: parse title → search Anilist → download cover → cache in SQLite - → Rate-limited via centralized sliding window (20 req/min cap) - → UI shows placeholders, fills in as covers arrive - -User clicks anime card - → GET /api/stats/media/:videoId - → Returns sessions, rollups, vocab for that video - → Renders detail view with all four 
sections -``` diff --git a/docs/plans/2026-03-13-docs-kb-restructure-design.md b/docs/plans/2026-03-13-docs-kb-restructure-design.md deleted file mode 100644 index 290546c..0000000 --- a/docs/plans/2026-03-13-docs-kb-restructure-design.md +++ /dev/null @@ -1,88 +0,0 @@ -# Internal Knowledge Base Restructure Design - -**Problem:** `AGENTS.md` currently carries too much project detail while deeper internal guidance is either missing from `docs/` or mixed into `docs-site/`, which should stay user-facing. Agents and contributors need a stable entrypoint plus an internal system of record with progressive disclosure and mechanical enforcement. - -**Goals:** -- Make `AGENTS.md` a short table of contents, not an encyclopedia. -- Establish `docs/` as the internal system of record for architecture, workflow, and knowledge-base conventions. -- Keep `docs-site/` user-facing only. -- Add lightweight enforcement so the split is maintained mechanically. -- Preserve existing build/test/release guidance while moving canonical internal pointers into `docs/`. - -**Non-Goals:** -- Rework product/user docs information architecture beyond boundary cleanup. -- Build a custom documentation generator. -- Solve plan lifecycle cleanup in this change. -- Reorganize unrelated runtime code or existing feature docs. - -## Recommended Approach - -Create a small internal knowledge-base structure under `docs/`, rewrite `AGENTS.md` into a compact map that points into that structure, and add a repo-level test that enforces required internal docs and boundary rules. Keep `docs-site/` focused on public/product documentation and strip out any claims that it is the canonical source for internal architecture or workflow. - -## Information Architecture - -Proposed internal layout: - -```text -docs/ - README.md - architecture/ - README.md - domains.md - layering.md - knowledge-base/ - README.md - core-beliefs.md - catalog.md - quality.md - workflow/ - README.md - planning.md - verification.md - plans/ - ... 
-``` - -Key rules: -- `AGENTS.md` links to `docs/README.md` plus a small set of core entrypoints. -- `docs/README.md` acts as the internal KB home page. -- Internal docs include lightweight metadata via explicit section fields: - - `Status` - - `Last verified` - - `Owner` - - `Read when` -- `docs-site/` remains public/user-facing and may link to internal docs only as contributor references, not as canonical internal source-of-truth pages. - -## Enforcement - -Add a repo-level knowledge-base test that validates: - -- `AGENTS.md` links to required internal KB docs. -- `AGENTS.md` stays below a capped line count. -- Required internal docs exist. -- Internal KB docs include the expected metadata fields. -- `docs-site/development.md` and `docs-site/architecture.md` point internal readers to `docs/`. -- `AGENTS.md` does not treat `docs-site/` pages as the canonical internal source of truth. - -Keep the new test outside `docs-site/` so internal/public boundaries stay clear. - -## Migration Plan - -1. Write the design and implementation plan docs. -2. Rewrite `AGENTS.md` as a compact map. -3. Create the internal KB entrypoints under `docs/`. -4. Update `docs-site/` contributor docs to reference `docs/` for internal guidance. -5. Add a repo-level test plus package/CI wiring. -6. Run docs and targeted repo verification. 
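The enforcement checks described above can be sketched as pure helpers that a Bun test file would call with real file contents; the 120-line cap and the required-link list here are placeholders, not settled policy.

```typescript
// Assumed values for the sketch; the real test would define its own.
const REQUIRED_LINKS = ['docs/README.md', 'docs/workflow/verification.md'];
const MAX_AGENTS_LINES = 120;

// Returns a list of violations so a test can report all problems at once
// instead of failing on the first one.
export function agentsFileProblems(contents: string): string[] {
  const problems: string[] = [];
  if (contents.split('\n').length > MAX_AGENTS_LINES) {
    problems.push(`AGENTS.md exceeds ${MAX_AGENTS_LINES} lines`);
  }
  for (const link of REQUIRED_LINKS) {
    if (!contents.includes(link)) {
      problems.push(`missing link to ${link}`);
    }
  }
  return problems;
}
```

Writing the rules as pure string functions keeps the Bun test itself trivial: read each file, call the helper, assert the problem list is empty.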
- -## Verification Strategy - -Primary commands: -- `bun run docs:test` -- `bun run test:fast` -- `bun run docs:build` - -Focused review: -- read `AGENTS.md` start-to-finish and confirm it behaves like a table of contents -- inspect the new `docs/README.md` navigation and cross-links -- confirm `docs-site/` still reads as user-facing documentation diff --git a/docs/plans/2026-03-13-docs-kb-restructure.md b/docs/plans/2026-03-13-docs-kb-restructure.md deleted file mode 100644 index 026226b..0000000 --- a/docs/plans/2026-03-13-docs-kb-restructure.md +++ /dev/null @@ -1,138 +0,0 @@ -# Internal Knowledge Base Restructure Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Turn `AGENTS.md` into a compact table of contents, establish `docs/` as the internal system of record, keep `docs-site/` user-facing, and enforce the split with tests/CI. - -**Architecture:** Create a small internal knowledge-base hierarchy under `docs/`, migrate canonical contributor/agent guidance into it, and wire a repo-level verification test that checks required docs, metadata, and boundary rules. Keep public docs in `docs-site/` and update them to reference internal docs rather than acting as the source of truth themselves. 
- -**Tech Stack:** Markdown docs, Bun test runner, existing GitHub Actions CI - ---- - -### Task 1: Add the internal KB entrypoints - -**Files:** -- Create: `docs/README.md` -- Create: `docs/architecture/README.md` -- Create: `docs/architecture/domains.md` -- Create: `docs/architecture/layering.md` -- Create: `docs/knowledge-base/README.md` -- Create: `docs/knowledge-base/core-beliefs.md` -- Create: `docs/knowledge-base/catalog.md` -- Create: `docs/knowledge-base/quality.md` -- Create: `docs/workflow/README.md` -- Create: `docs/workflow/planning.md` -- Create: `docs/workflow/verification.md` - -**Step 1: Write the KB home page** - -Add `docs/README.md` with navigation to architecture, workflow, knowledge-base maintenance, release docs, and active plans. - -**Step 2: Add architecture pages** - -Create the architecture index plus focused `domains.md` and `layering.md` pages that summarize runtime ownership and dependency boundaries from the existing architecture doc. - -**Step 3: Add knowledge-base pages** - -Create the knowledge-base index, core beliefs, catalog, and quality pages with explicit metadata fields and short maintenance guidance. - -**Step 4: Add workflow pages** - -Create workflow index, planning guide, and verification guide with the current maintained Bun commands and lane-selection guidance. - -**Step 5: Review cross-links** - -Read the new docs and confirm every page links back to at least one parent/index page. 
- -### Task 2: Rewrite `AGENTS.md` as a compact map - -**Files:** -- Modify: `AGENTS.md` - -**Step 1: Replace encyclopedia-style content with a compact map** - -Keep only the minimum operational guidance needed in injected context: -- quick start -- internal source-of-truth pointers -- build/test gate -- generated/sensitive file notes -- release pointer -- backlog note - -**Step 2: Add direct links to the new KB entrypoints** - -Point `AGENTS.md` at: -- `docs/README.md` -- `docs/architecture/README.md` -- `docs/workflow/README.md` -- `docs/workflow/verification.md` -- `docs/knowledge-base/README.md` -- `docs/RELEASING.md` - -**Step 3: Keep the file intentionally short** - -Target roughly 100 lines and avoid moving deep details back into `AGENTS.md`. - -### Task 3: Re-boundary `docs-site/` - -**Files:** -- Modify: `docs-site/development.md` -- Modify: `docs-site/architecture.md` -- Modify: `docs-site/README.md` - -**Step 1: Update contributor-facing docs** - -Keep build/run/testing instructions, but stop presenting `docs-site/*` pages as canonical internal architecture/workflow references. - -**Step 2: Add explicit internal-doc pointers** - -Link readers to `docs/README.md` and the new internal architecture/workflow pages for deep contributor guidance. - -**Step 3: Preserve public-doc tone** - -Ensure the `docs-site/` pages remain user/contributor-facing and do not become the internal KB themselves. 
- -### Task 4: Add mechanical enforcement - -**Files:** -- Create: `scripts/docs-knowledge-base.test.ts` -- Modify: `package.json` -- Modify: `.github/workflows/ci.yml` - -**Step 1: Write a repo-level docs KB test** - -Assert: -- required docs exist -- metadata fields exist on internal docs -- `AGENTS.md` links to internal KB entrypoints -- `AGENTS.md` stays under the line cap -- `docs-site/development.md` and `docs-site/architecture.md` point to `docs/` - -**Step 2: Wire the test into package scripts** - -Add a script for the KB test and include it in an existing maintained verification lane. - -**Step 3: Ensure CI exercises the check** - -Make sure the CI path that runs the maintained test lane catches KB regressions. - -### Task 5: Verify and hand off - -**Files:** -- Modify: any files above if verification reveals drift - -**Step 1: Run targeted docs verification** - -Run: -- `bun run docs:test` -- `bun run test:fast` -- `bun run docs:build` - -**Step 2: Fix drift found by tests** - -If any assertions fail, update the docs or test expectations so the enforced model matches the intended structure. - -**Step 3: Summarize outcome** - -Report the new internal KB entrypoints, the `AGENTS.md` table-of-contents rewrite, enforcement coverage, verification results, and any skipped items. diff --git a/docs/plans/2026-03-13-imm-words-cleanup-plan.md b/docs/plans/2026-03-13-imm-words-cleanup-plan.md deleted file mode 100644 index f26ba2c..0000000 --- a/docs/plans/2026-03-13-imm-words-cleanup-plan.md +++ /dev/null @@ -1,69 +0,0 @@ -# Imm Words Cleanup Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Fix `imm_words` so only allowed vocabulary tokens are persisted with POS metadata, and add an on-demand stats cleanup command that removes existing bad vocabulary rows. 
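A minimal sketch of what this goal implies: one shared predicate that both live persistence and the cleanup command can apply. The `EXCLUDED_POS` values and the token shape are illustrative assumptions, not the project's real exclusion rules:

```typescript
// Sketch of a shared persistence filter: the same predicate decides what
// gets written to imm_words and what cleanup deletes. The POS list below
// is an assumption for illustration, not the real exclusion set.
interface VocabToken {
  surface: string;
  partOfSpeech: string;
}

const EXCLUDED_POS = new Set(["particle", "auxiliary", "symbol", "punctuation"]);

function shouldPersistVocabToken(token: VocabToken): boolean {
  if (token.surface.trim().length === 0) return false;
  return !EXCLUDED_POS.has(token.partOfSpeech);
}
```

Sharing one predicate keeps the live path and the cleanup path from drifting apart.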
- -**Architecture:** Thread processed `SubtitleData.tokens` into immersion tracking instead of extracting vocabulary from raw subtitle text. Store POS metadata on `imm_words`, reuse the existing POS exclusion logic for persistence and cleanup, and expose cleanup through the existing launcher/app stats command surface. - -**Tech Stack:** TypeScript, Bun, SQLite/libsql wrapper, Commander-based launcher CLI - ---- - -### Task 1: Lock the broken behavior down with failing tests - -**Files:** -- Modify: `src/core/services/immersion-tracker-service.test.ts` -- Modify: `src/core/services/immersion-tracker/__tests__/query.test.ts` -- Modify: `launcher/parse-args.test.ts` -- Modify: `src/main/runtime/stats-cli-command.test.ts` - -**Steps:** -1. Add a tracker regression test that records subtitle tokens with mixed POS and asserts excluded tokens are not written while allowed tokens retain POS metadata. -2. Add a cleanup/query test that seeds valid and invalid `imm_words` rows and asserts vocab cleanup deletes only invalid rows. -3. Add launcher parse tests for `subminer stats cleanup`, `subminer stats cleanup -v`, and default vocab cleanup mode. -4. Add app-side stats CLI tests for dispatching cleanup vs dashboard launch. - -### Task 2: Fix live vocabulary persistence - -**Files:** -- Modify: `src/core/services/immersion-tracker-service.ts` -- Modify: `src/core/services/immersion-tracker/types.ts` -- Modify: `src/core/services/immersion-tracker/storage.ts` -- Modify: `src/main/runtime/mpv-main-event-main-deps.ts` -- Modify: `src/main/runtime/mpv-main-event-bindings.ts` -- Modify: `src/main/runtime/mpv-main-event-actions.ts` -- Modify: `src/main/state.ts` - -**Steps:** -1. Extend immersion subtitle recording to accept processed subtitle payloads or token arrays. -2. Add `imm_words` POS columns and prepared-statement support. -3. Convert tracker inserts to use processed tokens rather than raw regex extraction. -4. 
Reuse existing POS/frequency exclusion rules to decide whether a token belongs in `imm_words`. - -### Task 3: Add vocab cleanup service and CLI wiring - -**Files:** -- Modify: `src/core/services/immersion-tracker/query.ts` -- Modify: `src/main/runtime/stats-cli-command.ts` -- Modify: `src/cli/args.ts` -- Modify: `launcher/config/cli-parser-builder.ts` -- Modify: `launcher/config/args-normalizer.ts` -- Modify: `launcher/types.ts` -- Modify: `launcher/commands/stats-command.ts` - -**Steps:** -1. Add a cleanup routine that removes invalid `imm_words` rows using the same persistence filter. -2. Extend app CLI args to represent stats cleanup actions and vocab mode selection. -3. Extend launcher stats command forwarding to pass cleanup flags through the attached app flow. -4. Print a compact cleanup summary and fail cleanly on errors. - -### Task 4: Verify and document the final behavior - -**Files:** -- Modify: `docs-site/immersion-tracking.md` - -**Steps:** -1. Update user-facing stats docs to mention `subminer stats cleanup` vocab maintenance. -2. Run the cheapest sufficient verification lanes for touched files. -3. Record exact commands/results in the task final summary before handoff. diff --git a/docs/plans/2026-03-13-immersion-anime-metadata-design.md b/docs/plans/2026-03-13-immersion-anime-metadata-design.md deleted file mode 100644 index 044674d..0000000 --- a/docs/plans/2026-03-13-immersion-anime-metadata-design.md +++ /dev/null @@ -1,110 +0,0 @@ -# Immersion Anime Metadata Design - -**Problem:** The immersion database is keyed around videos and sessions, which makes it awkward to present anime-centric stats such as per-anime totals, episode progress, and season breakdowns. We need first-class anime metadata without requiring migration or backfill support for existing databases. - -**Goals:** -- Add anime-level identity that can be shared across multiple video files and rewatches. 
-- Persist parsed episode/season metadata so stats can group by anime, season, and episode. -- Use existing filename parsing conventions: `guessit` first, built-in parser fallback. -- Create provisional anime rows even when AniList lookup fails. -- Keep the change additive and forward-looking; do not spend time on migrations/backfill. - -**Non-Goals:** -- Backfilling or migrating existing user databases. -- Perfect anime identity resolution across every edge case. -- Building the entire new stats UI in this design doc. -- Replacing existing `canonical_title` or current video/session APIs immediately. - -## Recommended Approach - -Add a new `imm_anime` table for anime-level metadata and link each `imm_videos` row to one anime row through `anime_id`. Keep season/episode and filename-derived fields on `imm_videos`, because those belong to a concrete file, not the anime as a whole. - -Anime rows should exist even when AniList lookup fails. In that case, use a normalized parsed-title key as provisional identity. If the same anime is resolved to AniList later, upgrade the existing anime row in place instead of creating a duplicate. - -## Data Model - -### `imm_anime` - -One row per anime identity. - -Suggested fields: -- `anime_id INTEGER PRIMARY KEY AUTOINCREMENT` -- `identity_key TEXT NOT NULL UNIQUE` -- `parsed_title TEXT NOT NULL` -- `normalized_title TEXT NOT NULL` -- `anilist_id INTEGER` -- `title_romaji TEXT` -- `title_english TEXT` -- `title_native TEXT` -- `episodes_total INTEGER` -- `parser_source TEXT` -- `parser_confidence TEXT` -- `metadata_json TEXT` -- `CREATED_DATE INTEGER` -- `LAST_UPDATE_DATE INTEGER` - -Identity rules: -- Resolved anime: `identity_key = anilist:<anilist_id>` -- Provisional anime: `identity_key = title:<normalized_title>` -- When a provisional row later gets an AniList match, update that row's `identity_key` to `anilist:<anilist_id>` and fill AniList metadata. - -### `imm_videos` - -Keep existing video metadata.
Add: -- `anime_id INTEGER` -- `parsed_filename TEXT` -- `parsed_title TEXT` -- `parsed_title_normalized TEXT` -- `parsed_season INTEGER` -- `parsed_episode INTEGER` -- `parsed_episode_title TEXT` -- `parser_source TEXT` -- `parser_confidence TEXT` -- `parse_metadata_json TEXT` - -`canonical_title` remains for compatibility. New fields are additive. - -## Parsing and Lookup Flow - -During `handleMediaChange(...)`: - -1. Normalize path/title with the existing tracker flow. -2. Build/create the video row as today. -3. Parse anime metadata: - - use `guessit` against the basename/title when available - - fallback to existing `parseMediaInfo` -4. Use the parsed title to create/find a provisional anime row if needed. -5. Attempt AniList lookup using the same guessit-first, fallback-parser approach already used elsewhere. -6. If AniList lookup succeeds: - - upgrade or fill the anime row with AniList id/title metadata - - keep per-video season/episode fields on the video row -7. Link the video row to `anime_id` and store parsed per-video metadata. - -## Query Shape - -Add anime-aware query functions without deleting current video/session queries: -- anime library list -- anime detail summary -- anime episode list / season breakdown -- anime sessions list - -Aggregation should group by `anime_id`, not `canonical_title`, so rewatches and multiple files collapse correctly. - -## Edge Cases - -- Multiple files for one anime: many videos may point to one anime row. -- Rewatches: same video/session history still aggregates under one anime row. -- No AniList match: keep provisional anime row keyed by normalized parsed title. -- Later AniList match: upgrade provisional row in place. -- Parser disagreement between files: season/episode remain per-video; anime identity uses AniList id or normalized parsed title. -- Remote/Jellyfin playback: use the effective title/path available to the current tracker flow and run the same parser pipeline. 
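The identity rules above reduce to a small pure helper. A sketch, where `normalizeTitle` is a simplified stand-in for the real normalizer:

```typescript
// Sketch of the identity-key rules: resolved anime key by AniList id,
// provisional key by normalized parsed title. normalizeTitle is a
// simplified stand-in, not the project's actual normalizer.
function normalizeTitle(title: string): string {
  return title.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim();
}

function identityKey(anilistId: number | null, parsedTitle: string): string {
  return anilistId !== null
    ? `anilist:${anilistId}`
    : `title:${normalizeTitle(parsedTitle)}`;
}
```

The in-place upgrade then amounts to recomputing the key once an AniList id is known and updating the existing row.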
- -## Testing Strategy - -Start red/green with focused DB-backed tests: -- schema test for `imm_anime` and new video columns -- storage test for provisional anime creation, reuse, and AniList upgrade -- service test for media-change ingest wiring -- query test for anime-level aggregation and episode breakdown - -Primary verification lane for implementation: `bun run test:immersion:sqlite:src`, then broader repo verification as needed. diff --git a/docs/plans/2026-03-13-immersion-anime-metadata.md b/docs/plans/2026-03-13-immersion-anime-metadata.md deleted file mode 100644 index 2b6a386..0000000 --- a/docs/plans/2026-03-13-immersion-anime-metadata.md +++ /dev/null @@ -1,370 +0,0 @@ -# Immersion Anime Metadata Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Add anime-level immersion metadata, link videos to anime rows, and expose anime/season/episode query surfaces so future stats can aggregate by anime instead of only by video title. - -**Architecture:** Introduce a new `imm_anime` table plus additive `imm_videos` metadata columns. Wire media ingest through a guessit-first, fallback-parser flow that always creates or reuses an anime row, stores per-video episode metadata, and upgrades provisional anime rows when AniList data becomes available. Keep existing video/session behavior compatible while adding new query surfaces in parallel. 
- -**Tech Stack:** TypeScript, Bun, libsql SQLite, existing immersion tracker storage/query/service modules, existing AniList parser helpers (`guessit`, `parseMediaInfo`) - ---- - -### Task 1: Add Red Tests for Schema Shape - -**Files:** -- Modify: `src/core/services/immersion-tracker/storage-session.test.ts` -- Inspect: `src/core/services/immersion-tracker/storage.ts` -- Inspect: `src/core/services/immersion-tracker/types.ts` - -**Step 1: Write the failing schema test** - -Add assertions that `ensureSchema()` creates: -- `imm_anime` -- new `imm_videos` columns for `anime_id`, parsed filename/title, season, episode, parser source/confidence, and parse metadata - -Use `PRAGMA table_info(imm_videos)` and `sqlite_master` queries instead of indirect assertions. - -**Step 2: Run the targeted test to verify it fails** - -Run: - -```bash -bun test src/core/services/immersion-tracker/storage-session.test.ts -``` - -Expected: FAIL because the new table/columns do not exist yet. - -**Step 3: Implement minimal schema changes** - -Modify `src/core/services/immersion-tracker/storage.ts` and `src/core/services/immersion-tracker/types.ts`: -- add `imm_anime` -- add new `imm_videos` columns -- add indexes/FKs needed for anime lookup -- bump schema version for the fresh-schema path -- do not add migration/backfill logic for older DB contents - -**Step 4: Re-run the targeted test** - -Run: - -```bash -bun test src/core/services/immersion-tracker/storage-session.test.ts -``` - -Expected: PASS. 
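The `PRAGMA table_info` assertion in Step 1 can be kept readable by factoring the check into a pure helper. A sketch; the row shape mirrors SQLite's `table_info` output and the helper name is illustrative:

```typescript
// Sketch of the schema assertion: given PRAGMA table_info(imm_videos) rows,
// report which required columns are missing. Helper name is illustrative.
interface TableInfoRow {
  name: string;
}

function missingColumns(rows: TableInfoRow[], required: string[]): string[] {
  const present = new Set(rows.map((r) => r.name));
  return required.filter((col) => !present.has(col));
}
```

The test can then assert `missingColumns(rows, NEW_VIDEO_COLUMNS)` is empty, which fails with a useful list instead of a bare boolean.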
- -**Step 5: Commit** - -```bash -git add src/core/services/immersion-tracker/types.ts src/core/services/immersion-tracker/storage.ts src/core/services/immersion-tracker/storage-session.test.ts -git commit -m "feat(immersion): add anime schema and video metadata fields" -``` - -### Task 2: Add Red Tests for Anime Storage Identity and Upgrade Rules - -**Files:** -- Modify: `src/core/services/immersion-tracker/storage-session.test.ts` -- Modify: `src/core/services/immersion-tracker/storage.ts` -- Inspect: `src/core/services/immersion-tracker/query.ts` - -**Step 1: Write failing storage tests** - -Add DB-backed tests for: -- creating a provisional anime row from normalized parsed title -- reusing that row for another video from the same anime -- upgrading the same row when AniList id/title metadata becomes available later -- preserving per-video season/episode values while sharing one anime row - -Prefer explicit row assertions over service-level mocks. - -**Step 2: Run the targeted test file to verify it fails** - -Run: - -```bash -bun test src/core/services/immersion-tracker/storage-session.test.ts -``` - -Expected: FAIL because storage helpers do not exist yet. - -**Step 3: Implement minimal storage helpers** - -In `src/core/services/immersion-tracker/storage.ts`, add focused helpers such as: -- normalize anime identity key from parsed title -- get/create provisional anime row -- upgrade anime row with AniList data -- update/link per-video anime metadata - -Keep responsibilities narrow and composable; do not bury query logic in the service class. - -**Step 4: Re-run the targeted test file** - -Run: - -```bash -bun test src/core/services/immersion-tracker/storage-session.test.ts -``` - -Expected: PASS. 
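The upgrade-in-place rule these tests pin down can be sketched as a pure transformation: the provisional row keeps its primary key while `identity_key` and the AniList fields change. Field names follow the design doc, but this is not the real storage helper:

```typescript
// Sketch of upgrade-in-place: the row keeps anime_id, swaps identity_key
// to the AniList form, and fills AniList metadata. Not the real helper.
interface AnimeRow {
  animeId: number;
  identityKey: string;
  anilistId: number | null;
  titleRomaji: string | null;
}

function upgradeAnimeRow(row: AnimeRow, anilistId: number, titleRomaji: string): AnimeRow {
  return { ...row, identityKey: `anilist:${anilistId}`, anilistId, titleRomaji };
}
```

Because the primary key survives, videos already linked by `anime_id` need no rewrite when the upgrade happens.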
- -**Step 5: Commit** - -```bash -git add src/core/services/immersion-tracker/storage.ts src/core/services/immersion-tracker/storage-session.test.ts -git commit -m "feat(immersion): store provisional anime rows and upgrade with AniList data" -``` - -### Task 3: Add Red Tests for Parser Metadata Extraction - -**Files:** -- Modify: `src/core/services/immersion-tracker/metadata.test.ts` -- Modify: `src/core/services/immersion-tracker/metadata.ts` -- Inspect: `src/jimaku/utils.ts` -- Inspect: `src/core/services/anilist/anilist-updater.ts` - -**Step 1: Write failing parser tests** - -Add tests for a helper that returns parsed anime/video metadata from a media path/title: -- uses `guessit` output first when available -- falls back to built-in parser when `guessit` throws or returns incomplete data -- preserves season/episode/title/source/confidence -- records filename/basename for per-video metadata - -Use representative filenames like: -- `Little Witch Academia S02E05.mkv` -- `[SubsPlease] Frieren - 03 (1080p).mkv` - -**Step 2: Run the targeted parser test file to verify it fails** - -Run: - -```bash -bun test src/core/services/immersion-tracker/metadata.test.ts -``` - -Expected: FAIL because the helper does not exist yet. - -**Step 3: Implement the minimal parser helper** - -In `src/core/services/immersion-tracker/metadata.ts`: -- add a focused helper that wraps guessit-first parsing -- reuse existing parser conventions instead of inventing a new format -- keep ffprobe/local media metadata behavior intact - -If shared types are needed, add them in `src/core/services/immersion-tracker/types.ts`. - -**Step 4: Re-run the targeted parser test** - -Run: - -```bash -bun test src/core/services/immersion-tracker/metadata.test.ts -``` - -Expected: PASS. 
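The guessit-first contract in Step 1 can be sketched as a small combinator: try the primary parser, fall back when it throws or returns incomplete data. `Parser` and `ParsedInfo` are illustrative shapes, not the real helper's types:

```typescript
// Sketch of the fallback contract: primary parser first, built-in parser
// when the primary throws or yields incomplete data. Shapes are illustrative.
interface ParsedInfo {
  title: string | null;
  episode: number | null;
  source: string;
}

type Parser = (filename: string) => ParsedInfo;

function parseWithFallback(filename: string, primary: Parser, fallback: Parser): ParsedInfo {
  try {
    const result = primary(filename);
    if (result.title !== null) return result;
  } catch {
    // Primary parser unavailable or failed; fall through to the fallback.
  }
  return fallback(filename);
}
```

Recording `source` on the result is what lets `parser_source`/`parser_confidence` land on the video row later.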
- -**Step 5: Commit** - -```bash -git add src/core/services/immersion-tracker/metadata.ts src/core/services/immersion-tracker/metadata.test.ts src/core/services/immersion-tracker/types.ts -git commit -m "feat(immersion): add guessit-first anime metadata parsing helper" -``` - -### Task 4: Add Red Tests for Media-Change Ingest Wiring - -**Files:** -- Modify: `src/core/services/immersion-tracker-service.test.ts` -- Modify: `src/core/services/immersion-tracker-service.ts` -- Inspect: `src/core/services/immersion-tracker/storage.ts` -- Inspect: `src/core/services/immersion-tracker/metadata.ts` - -**Step 1: Write failing service tests** - -Add focused tests showing that `handleMediaChange(...)`: -- creates/links an anime row -- stores parsed season/episode/file metadata on the active video row -- reuses the same anime row across multiple video files for the same parsed anime -- keeps working when AniList lookup is missing - -Prefer DB-backed assertions after service calls rather than deep mocking. - -**Step 2: Run the targeted service test to verify it fails** - -Run: - -```bash -bun test src/core/services/immersion-tracker-service.test.ts -``` - -Expected: FAIL because ingest does not yet populate anime metadata. - -**Step 3: Implement the minimal service wiring** - -Modify `src/core/services/immersion-tracker-service.ts` to: -- call the new parser helper during media change -- create/reuse provisional anime rows -- persist per-video metadata -- trigger AniList enrichment/upgrade only as far as current dependencies already allow - -Do not refactor unrelated tracker behavior while making this pass. - -**Step 4: Re-run the targeted service test** - -Run: - -```bash -bun test src/core/services/immersion-tracker-service.test.ts -``` - -Expected: PASS. 
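The wiring order in Step 3 can be sketched with injected dependencies, which is also how the service test can assert ordering without deep mocking. The `IngestDeps` shape is hypothetical:

```typescript
// Sketch of the media-change wiring order: parse, then create/reuse the
// anime row, then link the video. Deps are injected stubs; the real
// service calls its storage/metadata modules.
interface IngestDeps {
  parse: (path: string) => { title: string };
  getOrCreateAnime: (title: string) => number;
  linkVideoToAnime: (videoId: number, animeId: number) => void;
}

function ingestMediaChange(videoId: number, path: string, deps: IngestDeps): number {
  const parsed = deps.parse(path);
  const animeId = deps.getOrCreateAnime(parsed.title);
  deps.linkVideoToAnime(videoId, animeId);
  return animeId;
}
```

Keeping the steps in one composable function makes the "reuses the same anime row across files" test a matter of calling it twice with the same parsed title.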
- -**Step 5: Commit** - -```bash -git add src/core/services/immersion-tracker-service.ts src/core/services/immersion-tracker-service.test.ts -git commit -m "feat(immersion): link videos to anime metadata during media ingest" -``` - -### Task 5: Add Red Tests for Anime Query Surfaces - -**Files:** -- Modify: `src/core/services/immersion-tracker/__tests__/query.test.ts` -- Modify: `src/core/services/immersion-tracker/query.ts` -- Modify: `src/core/services/immersion-tracker/types.ts` - -**Step 1: Write failing query tests** - -Add tests for new query functions such as: -- anime library summary list -- anime detail summary -- per-anime episode list or season breakdown - -Seed the DB with: -- one anime with multiple episode files -- repeated sessions on one episode -- another anime for contrast - -Assert grouping by `anime_id`, not by `canonical_title`. - -**Step 2: Run the targeted query test to verify it fails** - -Run: - -```bash -bun test src/core/services/immersion-tracker/__tests__/query.test.ts -``` - -Expected: FAIL because the anime query functions/types do not exist yet. - -**Step 3: Implement minimal query functions** - -Modify `src/core/services/immersion-tracker/query.ts` and related exported types to add anime-level queries in parallel with existing video-level queries. - -Keep SQL explicit and aggregation stable: -- anime totals from linked sessions/videos -- episode/season data from video-level parsed fields - -**Step 4: Re-run the targeted query test** - -Run: - -```bash -bun test src/core/services/immersion-tracker/__tests__/query.test.ts -``` - -Expected: PASS. 
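The grouping rule these tests assert (collapse by `anime_id`, not `canonical_title`) can be shown with an in-memory sketch; the row shape is illustrative, not the real query types:

```typescript
// Sketch of the aggregation semantics: two files of the same show, with
// different canonical titles, collapse into one total keyed by animeId.
interface SessionLike {
  animeId: number;
  canonicalTitle: string;
  watchedMs: number;
}

function totalsByAnime(sessions: SessionLike[]): Map<number, number> {
  const totals = new Map<number, number>();
  for (const s of sessions) {
    totals.set(s.animeId, (totals.get(s.animeId) ?? 0) + s.watchedMs);
  }
  return totals;
}
```

The SQL version is the same shape: `GROUP BY anime_id` with `SUM` over linked sessions.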
- -**Step 5: Commit** - -```bash -git add src/core/services/immersion-tracker/query.ts src/core/services/immersion-tracker/types.ts src/core/services/immersion-tracker/__tests__/query.test.ts -git commit -m "feat(immersion): add anime-level stats queries" -``` - -### Task 6: Integrate Export Surfaces and Compatibility Checks - -**Files:** -- Modify: `src/core/services/immersion-tracker-service.ts` -- Modify: any stats-server or API files only if needed after query integration -- Inspect: `src/core/services/__tests__/stats-server.test.ts` -- Inspect: `stats/src/lib/dashboard-data.ts` - -**Step 1: Write the smallest failing integration test if API surface changes** - -Only if the service/API export surface changes, add one failing test proving the new query path is exposed correctly. If no export change is needed yet, skip straight to implementation and note the skip in the task notes. - -**Step 2: Run the targeted test to verify red state** - -Run only the affected test file, for example: - -```bash -bun test src/core/services/__tests__/stats-server.test.ts -``` - -Expected: FAIL if a new API contract is required; otherwise explicitly skip. - -**Step 3: Implement minimal integration** - -Export new query methods through the service only where needed for the next stats consumer. Avoid prematurely reshaping the public API if current UI work is out of scope. - -**Step 4: Run the targeted integration test** - -Run: - -```bash -bun test src/core/services/__tests__/stats-server.test.ts -``` - -Expected: PASS, or documented skip if no API change was needed. 
- -**Step 5: Commit** - -```bash -git add src/core/services/immersion-tracker-service.ts src/core/services/__tests__/stats-server.test.ts stats/src/lib/dashboard-data.ts -git commit -m "feat(stats): expose anime-level immersion data where needed" -``` - -### Task 7: Run Focused Verification and Update Docs/Task - -**Files:** -- Modify: `backlog/tasks/task-169 - Add-anime-level-immersion-metadata-and-link-videos.md` -- Modify: docs only if implementation changes user-visible behavior or API expectations - -**Step 1: Run the focused SQLite immersion lane** - -Run: - -```bash -bun run test:immersion:sqlite:src -``` - -Expected: PASS. - -**Step 2: Run any additional required verification** - -Use the repo verifier/classifier to choose broader lanes if the diff touches runtime or stats-server surfaces: - -```bash -bash .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh -bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane core -``` - -Escalate only if the touched files require it. - -**Step 3: Update task notes and final summary** - -Record: -- commands run -- pass/fail -- skipped lanes -- remaining risks - -Update the task plan section if actual execution deviated. - -**Step 4: Commit** - -```bash -git add backlog/tasks/task-169\ -\ Add-anime-level-immersion-metadata-and-link-videos.md -git commit -m "docs(backlog): record immersion anime metadata verification" -``` diff --git a/docs/plans/2026-03-14-episode-detail-anki-link-design.md b/docs/plans/2026-03-14-episode-detail-anki-link-design.md deleted file mode 100644 index 9b20a79..0000000 --- a/docs/plans/2026-03-14-episode-detail-anki-link-design.md +++ /dev/null @@ -1,56 +0,0 @@ -# Episode Detail & Anki Card Link — Design - -**Date**: 2026-03-14 -**Status**: Approved - -## Motivation - -The anime detail page shows episodes and cards mined but lacks drill-down into individual episodes. 
Users want to see per-episode stats (sessions, words, cards) and link directly to mined Anki cards. - -## Design - -### 1. Episode Expandable Detail - -Click an episode row in `EpisodeList` or `AnimeCardsList` → expands inline: -- Sessions list for this episode (sessions linked to video_id) -- Cards mined list — timestamps + "Open in Anki" button per card (when note ID available) -- Top words from this episode (word occurrences scoped to video_id) - -### 2. Anki Note ID Storage - -- Extend `recordCardsMined` callback to accept note IDs: `recordCardsMined(count, noteIds)` -- Store in CARD_MINED event payload: `{ cardsMined: 1, noteIds: [12345] }` -- Proxy already has note IDs in `pendingNoteIds` — pass through callback chain -- Polling has note IDs from `newNoteIds` — same treatment -- No schema change — note IDs stored in existing `payload_json` column on `imm_session_events` - -### 3. "Open in Anki" Flow - -- New endpoint: `POST /api/stats/anki/browse?noteId=12345` -- Calls AnkiConnect `guiBrowse` with query `nid:12345` -- Opens Anki's card browser filtered to that note -- Frontend button hits this endpoint - -### 4. Episode Words - -- New query: `getEpisodeWords(videoId)` — like `getAnimeWords` but filtered by video_id -- Reuse AnimeWordList component pattern - -### 5. Backend Changes - -**Modified files:** -- `src/anki-integration/anki-connect-proxy.ts` — pass note IDs through recordCardsAdded callback -- `src/anki-integration/polling.ts` — pass note IDs through recordCardsAdded callback -- `src/anki-integration.ts` — update callback signature -- `src/core/services/immersion-tracker-service.ts` — accept and store note IDs in recordCardsMined -- `src/core/services/immersion-tracker/query.ts` — add getEpisodeWords, getEpisodeSessions, getEpisodeCardEvents -- `src/core/services/stats-server.ts` — add episode detail and anki browse endpoints - -### 6. 
Frontend Changes - -**Modified files:** -- `stats/src/components/anime/EpisodeList.tsx` — make rows expandable -- `stats/src/components/anime/AnimeCardsList.tsx` — make rows expandable - -**New files:** -- `stats/src/components/anime/EpisodeDetail.tsx` — inline expandable content diff --git a/docs/plans/2026-03-14-episode-detail-anki-link.md b/docs/plans/2026-03-14-episode-detail-anki-link.md deleted file mode 100644 index 5a87a36..0000000 --- a/docs/plans/2026-03-14-episode-detail-anki-link.md +++ /dev/null @@ -1,402 +0,0 @@ -# Episode Detail & Anki Card Link Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Add expandable episode detail rows with per-episode sessions/words/cards, store Anki note IDs in card mined events, and add "Open in Anki" button that opens the card browser. - -**Architecture:** Extend the recordCardsMined callback chain to pass note IDs alongside count. Store them in the existing payload_json column. Add a stats server endpoint that proxies AnkiConnect's guiBrowse. Frontend makes episode rows expandable with inline detail content. - -**Tech Stack:** Hono (backend API), AnkiConnect (guiBrowse), React + Recharts + Tailwind/Catppuccin (frontend), bun test (backend), Vitest (frontend) - ---- - -## Task 1: Extend recordCardsAdded callback to pass note IDs - -**Files:** -- Modify: `src/anki-integration/anki-connect-proxy.ts` -- Modify: `src/anki-integration/polling.ts` -- Modify: `src/anki-integration.ts` - -**Step 1: Update the callback type** - -In `src/anki-integration/anki-connect-proxy.ts` line 18, change: -```typescript -recordCardsAdded?: (count: number) => void; -``` -to: -```typescript -recordCardsAdded?: (count: number, noteIds: number[]) => void; -``` - -In `src/anki-integration/polling.ts` line 12, same change. 
- -**Step 2: Pass note IDs through the proxy callback** - -In `src/anki-integration/anki-connect-proxy.ts`, dedup has already happened by the time `recordCardsAdded` fires: `enqueueNotes` iterates `noteIds`, skips duplicates, pushes accepted ones to `this.pendingNoteIds`, and counts them in `enqueuedCount`. The callback needs the IDs that were actually enqueued, not the raw input, so collect the accepted IDs as they pass the dedup check and hand them to the callback: - -```typescript -enqueueNotes(noteIds: number[]): void { - const accepted: number[] = []; - for (const noteId of noteIds) { - if (this.pendingNoteIdSet.has(noteId) || this.inFlightNoteIds.has(noteId)) { - continue; - } - this.pendingNoteIds.push(noteId); - this.pendingNoteIdSet.add(noteId); - accepted.push(noteId); - } - if (accepted.length > 0) { - this.deps.recordCardsAdded?.(accepted.length, accepted); - } - // ...
rest of method -} -``` - -**Step 3: Pass note IDs through the polling callback** - -In `src/anki-integration/polling.ts` line 84, change: -```typescript -this.deps.recordCardsAdded?.(newNoteIds.length); -``` -to: -```typescript -this.deps.recordCardsAdded?.(newNoteIds.length, newNoteIds); -``` - -**Step 4: Update AnkiIntegration callback chain** - -In `src/anki-integration.ts`: - -Line 140, change field type: -```typescript -private recordCardsMinedCallback: ((count: number, noteIds?: number[]) => void) | null = null; -``` - -Line 154, update constructor param: -```typescript -recordCardsMined?: (count: number, noteIds?: number[]) => void -``` - -Lines 214-216 (polling deps), change to: -```typescript -recordCardsAdded: (count, noteIds) => { - this.recordCardsMinedCallback?.(count, noteIds); -} -``` - -Lines 238-240 (proxy deps), same change. - -Line 1125-1127 (setter), update signature: -```typescript -setRecordCardsMinedCallback(callback: ((count: number, noteIds?: number[]) => void) | null): void -``` - -**Step 5: Commit** - -```bash -git commit -m "feat(anki): pass note IDs through recordCardsAdded callback chain" -``` - ---- - -## Task 2: Store note IDs in card mined event payload - -**Files:** -- Modify: `src/core/services/immersion-tracker-service.ts` - -**Step 1: Update recordCardsMined to accept and store note IDs** - -Find the `recordCardsMined` method (line 759). Change signature and payload: - -```typescript -recordCardsMined(count = 1, noteIds?: number[]): void { - if (!this.sessionState) return; - this.sessionState.cardsMined += count; - this.sessionState.pendingTelemetry = true; - this.recordWrite({ - kind: 'event', - sessionId: this.sessionState.sessionId, - sampleMs: Date.now(), - eventType: EVENT_CARD_MINED, - wordsDelta: 0, - cardsDelta: count, - payloadJson: sanitizePayload( - { cardsMined: count, ...(noteIds?.length ? 
{ noteIds } : {}) }, - this.maxPayloadBytes, - ), - }); -} -``` - -**Step 2: Update the caller in main.ts** - -Find where `recordCardsMined` is called (around line 2506-2508 and 3409-3411). Pass through noteIds: - -```typescript -recordCardsMined: (count, noteIds) => { - ensureImmersionTrackerStarted(); - appState.immersionTracker?.recordCardsMined(count, noteIds); -} -``` - -**Step 3: Commit** - -```bash -git commit -m "feat(immersion): store anki note IDs in card mined event payload" -``` - ---- - -## Task 3: Add episode-level query functions - -**Files:** -- Modify: `src/core/services/immersion-tracker/query.ts` -- Modify: `src/core/services/immersion-tracker/types.ts` -- Modify: `src/core/services/immersion-tracker-service.ts` - -**Step 1: Add types** - -In `types.ts`, add: -```typescript -export interface EpisodeCardEventRow { - eventId: number; - sessionId: number; - tsMs: number; - cardsDelta: number; - noteIds: number[]; -} -``` - -**Step 2: Add query functions** - -In `query.ts`: - -```typescript -export function getEpisodeWords(db: DatabaseSync, videoId: number, limit = 50): AnimeWordRow[] { - return db.prepare(` - SELECT w.id AS wordId, w.headword, w.word, w.reading, w.part_of_speech AS partOfSpeech, - SUM(o.occurrence_count) AS frequency - FROM imm_word_line_occurrences o - JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id - JOIN imm_words w ON w.id = o.word_id - WHERE sl.video_id = ? - GROUP BY w.id - ORDER BY frequency DESC - LIMIT ? 
- `).all(videoId, limit) as unknown as AnimeWordRow[]; -} - -export function getEpisodeSessions(db: DatabaseSync, videoId: number): SessionSummaryQueryRow[] { - return db.prepare(` - SELECT - s.session_id AS sessionId, s.video_id AS videoId, - v.canonical_title AS canonicalTitle, - s.started_at_ms AS startedAtMs, s.ended_at_ms AS endedAtMs, - COALESCE(MAX(t.total_watched_ms), 0) AS totalWatchedMs, - COALESCE(MAX(t.active_watched_ms), 0) AS activeWatchedMs, - COALESCE(MAX(t.lines_seen), 0) AS linesSeen, - COALESCE(MAX(t.words_seen), 0) AS wordsSeen, - COALESCE(MAX(t.tokens_seen), 0) AS tokensSeen, - COALESCE(MAX(t.cards_mined), 0) AS cardsMined, - COALESCE(MAX(t.lookup_count), 0) AS lookupCount, - COALESCE(MAX(t.lookup_hits), 0) AS lookupHits - FROM imm_sessions s - JOIN imm_videos v ON v.video_id = s.video_id - LEFT JOIN imm_session_telemetry t ON t.session_id = s.session_id - WHERE s.video_id = ? - GROUP BY s.session_id - ORDER BY s.started_at_ms DESC - `).all(videoId) as SessionSummaryQueryRow[]; -} - -export function getEpisodeCardEvents(db: DatabaseSync, videoId: number): EpisodeCardEventRow[] { - const rows = db.prepare(` - SELECT e.event_id AS eventId, e.session_id AS sessionId, - e.ts_ms AS tsMs, e.cards_delta AS cardsDelta, - e.payload_json AS payloadJson - FROM imm_session_events e - JOIN imm_sessions s ON s.session_id = e.session_id - WHERE s.video_id = ? 
AND e.event_type = 4 - ORDER BY e.ts_ms DESC - `).all(videoId) as Array<{ eventId: number; sessionId: number; tsMs: number; cardsDelta: number; payloadJson: string | null }>; - - return rows.map(row => { - let noteIds: number[] = []; - if (row.payloadJson) { - try { - const parsed = JSON.parse(row.payloadJson); - if (Array.isArray(parsed.noteIds)) noteIds = parsed.noteIds; - } catch {} - } - return { eventId: row.eventId, sessionId: row.sessionId, tsMs: row.tsMs, cardsDelta: row.cardsDelta, noteIds }; - }); -} -``` - -**Step 3: Add wrapper methods to immersion-tracker-service.ts** - -**Step 4: Commit** - -```bash -git commit -m "feat(stats): add episode-level query functions for sessions, words, cards" -``` - ---- - -## Task 4: Add episode detail and Anki browse API endpoints - -**Files:** -- Modify: `src/core/services/stats-server.ts` -- Modify: `src/core/services/__tests__/stats-server.test.ts` - -**Step 1: Add episode detail endpoint** - -```typescript -app.get('/api/stats/episode/:videoId/detail', async (c) => { - const videoId = parseIntQuery(c.req.param('videoId'), 0); - if (videoId <= 0) return c.body(null, 400); - const sessions = await tracker.getEpisodeSessions(videoId); - const words = await tracker.getEpisodeWords(videoId); - const cardEvents = await tracker.getEpisodeCardEvents(videoId); - return c.json({ sessions, words, cardEvents }); -}); -``` - -**Step 2: Add Anki browse endpoint** - -```typescript -app.post('/api/stats/anki/browse', async (c) => { - const noteId = parseIntQuery(c.req.query('noteId'), 0); - if (noteId <= 0) return c.body(null, 400); - try { - const response = await fetch('http://127.0.0.1:8765', { - method: 'POST', - headers: { 'Content-Type': 'application/json' }, - body: JSON.stringify({ action: 'guiBrowse', version: 6, params: { query: `nid:${noteId}` } }), - }); - const result = await response.json(); - return c.json(result); - } catch (err) { - return c.json({ error: 'Failed to reach AnkiConnect' }, 502); - } -}); -``` - 
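The request body above is the only AnkiConnect-specific piece of the endpoint; pulling it into a pure builder makes the next testing step easy to assert on without a running Anki instance. This is a sketch — `buildAnkiBrowseRequest` is an illustrative helper, not existing code:

```typescript
// Builds the AnkiConnect guiBrowse payload used by the /api/stats/anki/browse
// endpoint above. Keeping it pure means it can be unit-tested directly.
interface AnkiBrowseRequest {
  action: "guiBrowse";
  version: 6;
  params: { query: string };
}

function buildAnkiBrowseRequest(noteId: number): AnkiBrowseRequest {
  // Mirrors the endpoint's noteId <= 0 guard.
  if (!Number.isInteger(noteId) || noteId <= 0) {
    throw new Error(`invalid noteId: ${noteId}`);
  }
  return { action: "guiBrowse", version: 6, params: { query: `nid:${noteId}` } };
}
```

The endpoint would then pass `JSON.stringify(buildAnkiBrowseRequest(noteId))` as the fetch body.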
-**Step 3: Add tests and verify** - -Run: `bun test ./src/core/services/__tests__/stats-server.test.ts` - -**Step 4: Commit** - -```bash -git commit -m "feat(stats): add episode detail and anki browse endpoints" -``` - ---- - -## Task 5: Add frontend types and API client methods - -**Files:** -- Modify: `stats/src/types/stats.ts` -- Modify: `stats/src/lib/api-client.ts` -- Modify: `stats/src/lib/ipc-client.ts` - -**Step 1: Add types** - -```typescript -export interface EpisodeCardEvent { - eventId: number; - sessionId: number; - tsMs: number; - cardsDelta: number; - noteIds: number[]; -} - -export interface EpisodeDetailData { - sessions: SessionSummary[]; - words: AnimeWord[]; - cardEvents: EpisodeCardEvent[]; -} -``` - -**Step 2: Add API client methods** - -```typescript -getEpisodeDetail: (videoId: number) => fetchJson(`/api/stats/episode/${videoId}/detail`), -ankiBrowse: (noteId: number) => fetchJson(`/api/stats/anki/browse?noteId=${noteId}`, { method: 'POST' }), -``` - -Mirror in ipc-client. - -**Step 3: Commit** - -```bash -git commit -m "feat(stats): add episode detail types and API client methods" -``` - ---- - -## Task 6: Build EpisodeDetail component - -**Files:** -- Create: `stats/src/components/anime/EpisodeDetail.tsx` -- Modify: `stats/src/components/anime/EpisodeList.tsx` -- Modify: `stats/src/components/anime/AnimeCardsList.tsx` - -**Step 1: Create EpisodeDetail component** - -Inline expandable content showing: -- Sessions list (compact: time, duration, cards, words) -- Cards mined list with "Open in Anki" button per note ID -- Top words grid (reuse AnimeWordList pattern) - -Fetches data from `getEpisodeDetail(videoId)` on mount. - -"Open in Anki" button calls `apiClient.ankiBrowse(noteId)`. - -**Step 2: Make EpisodeList rows expandable** - -Add `expandedVideoId` state. Clicking a row toggles expansion. Render `EpisodeDetail` below the expanded row. 
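The expansion toggle described above reduces to one piece of pure state logic that both lists can share. A minimal sketch — the helper name is illustrative, not part of the existing code:

```typescript
// Returns the next expandedVideoId: clicking the already-expanded row
// collapses it (null); clicking any other row expands that row instead.
function toggleExpanded(current: number | null, clicked: number): number | null {
  return current === clicked ? null : clicked;
}
```

Each list component can call this from its row `onClick` and render `EpisodeDetail` for the row whose `videoId` matches the current `expandedVideoId`.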
- -**Step 3: Make AnimeCardsList rows expandable** - -Same pattern — clicking an episode row expands to show `EpisodeDetail`. - -**Step 4: Commit** - -```bash -git commit -m "feat(stats): add expandable episode detail with anki card links" -``` - ---- - -## Task 7: Build and verify - -**Step 1: Type check** -Run: `npx tsc --noEmit` - -**Step 2: Run backend tests** -Run: `bun test ./src/core/services/__tests__/stats-server.test.ts` - -**Step 3: Run frontend tests** -Run: `npx vitest run` - -**Step 4: Build** -Run: `npx vite build` - -**Step 5: Commit any fixes** - -```bash -git commit -m "feat(stats): episode detail and anki link complete" -``` diff --git a/docs/plans/2026-03-14-immersion-occurrence-tracking-design.md b/docs/plans/2026-03-14-immersion-occurrence-tracking-design.md deleted file mode 100644 index cfa9363..0000000 --- a/docs/plans/2026-03-14-immersion-occurrence-tracking-design.md +++ /dev/null @@ -1,115 +0,0 @@ -# Immersion Occurrence Tracking Design - -**Problem:** `imm_words` and `imm_kanji` only store global aggregates. They cannot answer "where did this word/kanji appear?" at the anime, episode, timestamp, or subtitle-line level. - -**Goals:** -- Map normalized words and kanji back to exact subtitle lines. -- Preserve repeated tokens inside one subtitle line. -- Avoid storing token text repeatedly for each repeated token in the same line. -- Keep the change additive and compatible with current top-word/top-kanji stats. - -**Non-Goals:** -- Exact token character offsets inside a subtitle line. -- Full stats UI redesign in the same change. -- Replacing existing aggregate tables or existing vocabulary queries. - -## Recommended Approach - -Add a normalized subtitle-line table plus counted bridge tables from lines to canonical word and kanji rows. Keep `imm_words` and `imm_kanji` as canonical lexeme aggregates, then link them to `imm_subtitle_lines` through one row per unique lexeme per line with `occurrence_count`. 
- -This preserves total frequency within a line without duplicating token text or needing one row per repeated token. Reverse mapping becomes a simple join from canonical lexeme to line row to video/anime context. - -## Data Model - -### `imm_subtitle_lines` - -One row per recorded subtitle line. - -Suggested fields: -- `line_id INTEGER PRIMARY KEY AUTOINCREMENT` -- `session_id INTEGER NOT NULL` -- `event_id INTEGER` -- `video_id INTEGER NOT NULL` -- `anime_id INTEGER` -- `line_index INTEGER NOT NULL` -- `segment_start_ms INTEGER` -- `segment_end_ms INTEGER` -- `text TEXT NOT NULL` -- `CREATED_DATE INTEGER` -- `LAST_UPDATE_DATE INTEGER` - -Notes: -- `event_id` links back to `imm_session_events` when the subtitle-line event is written. -- `anime_id` is nullable because some rows may predate anime linkage or come from unresolved media. - -### `imm_word_line_occurrences` - -One row per normalized word per subtitle line. - -Suggested fields: -- `line_id INTEGER NOT NULL` -- `word_id INTEGER NOT NULL` -- `occurrence_count INTEGER NOT NULL` -- `PRIMARY KEY(line_id, word_id)` - -`word_id` points at the canonical row in `imm_words`. - -### `imm_kanji_line_occurrences` - -One row per kanji per subtitle line. - -Suggested fields: -- `line_id INTEGER NOT NULL` -- `kanji_id INTEGER NOT NULL` -- `occurrence_count INTEGER NOT NULL` -- `PRIMARY KEY(line_id, kanji_id)` - -`kanji_id` points at the canonical row in `imm_kanji`. - -## Write Path - -During `recordSubtitleLine(...)`: - -1. Normalize and validate the line as today. -2. Compute counted word and kanji occurrences for the line. -3. Upsert canonical `imm_words` / `imm_kanji` rows as today. -4. Insert one `imm_subtitle_lines` row for the line. -5. Insert counted bridge rows for each normalized word and kanji found in that line. - -Counting rules: -- Words: count repeated allowed tokens in the token list; skip tokens excluded by the existing POS/noise filter. 
-- Kanji: count repeated kanji characters from the visible subtitle line text. - -## Query Shape - -Add reverse-mapping query functions for: -- word -> recent occurrence rows -- kanji -> recent occurrence rows - -Each row should include enough context for drilldown: -- anime id/title -- video id/title -- session id -- line index -- segment start/end -- subtitle text -- occurrence count within that line - -Existing top-word/top-kanji aggregate queries stay in place. - -## Edge Cases - -- Repeated tokens in one line: store once per lexeme per line with `occurrence_count > 1`. -- Duplicate identical lines in one session: each subtitle event gets its own `imm_subtitle_lines` row. -- No anime link yet: keep `anime_id` null and still preserve the line/video/session mapping. -- Legacy DBs: additive migration only; no destructive rebuild of existing word/kanji data. - -## Testing Strategy - -Start with focused DB-backed tests: -- schema test for new line/bridge tables and indexes -- service test for counted word/kanji line persistence -- query tests for reverse mapping from word/kanji to line/anime/video context -- migration test for existing DBs gaining the new tables cleanly - -Primary verification lane: `bun run test:immersion:sqlite:src`, then broader lanes only if API/runtime surfaces widen. diff --git a/docs/plans/2026-03-14-immersion-occurrence-tracking.md b/docs/plans/2026-03-14-immersion-occurrence-tracking.md deleted file mode 100644 index e1970e6..0000000 --- a/docs/plans/2026-03-14-immersion-occurrence-tracking.md +++ /dev/null @@ -1,71 +0,0 @@ -# Immersion Occurrence Tracking Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Add normalized counted occurrence tracking for immersion words and kanji so stats can map each item back to anime, episode, timestamp, and subtitle line context. 
- -**Architecture:** Introduce `imm_subtitle_lines` plus counted bridge tables from lines to canonical `imm_words` and `imm_kanji` rows. Extend the subtitle write path to persist one line row per subtitle event, retain aggregate lexeme tables, and expose reverse-mapping queries without duplicating repeated token text in storage. - -**Tech Stack:** TypeScript, Bun, libsql SQLite, existing immersion tracker storage/query/service modules - ---- - -### Task 1: Lock schema and migration shape down with failing tests - -**Files:** -- Modify: `src/core/services/immersion-tracker/storage-session.test.ts` -- Modify: `src/core/services/immersion-tracker/storage.ts` -- Modify: `src/core/services/immersion-tracker/types.ts` - -**Steps:** -1. Add a red test asserting `ensureSchema()` creates `imm_subtitle_lines`, `imm_word_line_occurrences`, and `imm_kanji_line_occurrences`, plus additive migration support from the previous schema version. -2. Run `bun test src/core/services/immersion-tracker/storage-session.test.ts` and confirm failure. -3. Implement the minimal schema/version/index changes. -4. Re-run the targeted test and confirm green. - -### Task 2: Lock counted subtitle-line persistence down with failing tests - -**Files:** -- Modify: `src/core/services/immersion-tracker-service.test.ts` -- Modify: `src/core/services/immersion-tracker-service.ts` -- Modify: `src/core/services/immersion-tracker/storage.ts` - -**Steps:** -1. Add a red service test that records a subtitle line with repeated allowed words and repeated kanji, then asserts one line row plus counted bridge rows are written. -2. Run `bun test src/core/services/immersion-tracker-service.test.ts` and confirm failure. -3. Implement the minimal subtitle-line insert and counted occurrence write path. -4. Re-run the targeted test and confirm green. 
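The counted persistence in Task 2 collapses repeated tokens into one bridge row per lexeme with an `occurrence_count`. Independent of the storage API, the counting step itself is small; a sketch (the function name is illustrative):

```typescript
// Collapse a token list into per-lexeme counts, mirroring the
// "one bridge row per unique lexeme per line" storage shape.
// Tokens excluded by the POS/noise filter are assumed to be removed upstream.
function countOccurrences(tokens: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const token of tokens) {
    counts.set(token, (counts.get(token) ?? 0) + 1);
  }
  return counts;
}
```

Each map entry then becomes one `imm_word_line_occurrences` (or `imm_kanji_line_occurrences`) row for the line.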
- -### Task 3: Add reverse-mapping query tests first - -**Files:** -- Modify: `src/core/services/immersion-tracker/__tests__/query.test.ts` -- Modify: `src/core/services/immersion-tracker/query.ts` -- Modify: `src/core/services/immersion-tracker/types.ts` - -**Steps:** -1. Add red query tests for `word -> lines` and `kanji -> lines` mappings, including anime/video/session/timestamp/text context and per-line `occurrence_count`. -2. Run `bun test src/core/services/immersion-tracker/__tests__/query.test.ts` and confirm failure. -3. Implement the minimal query functions/types. -4. Re-run the targeted test and confirm green. - -### Task 4: Expose the new query surface through the tracker service - -**Files:** -- Modify: `src/core/services/immersion-tracker-service.ts` -- Modify: any narrow API/service consumer files only if needed - -**Steps:** -1. Add the service methods needed to consume the new reverse-mapping queries. -2. Keep the change narrow; do not widen unrelated UI/API contracts unless a current consumer needs them. -3. Re-run the focused affected tests. - -### Task 5: Verify with the maintained immersion lane - -**Files:** -- Modify: `backlog/tasks/task-171 - Add-normalized-immersion-word-and-kanji-occurrence-tracking.md` - -**Steps:** -1. Run the focused SQLite immersion tests first. -2. Escalate to broader verification only if touched files cross into API/runtime boundaries. -3. Record exact commands and results in the backlog task notes/final summary. diff --git a/docs/plans/2026-03-14-stats-redesign-design.md b/docs/plans/2026-03-14-stats-redesign-design.md deleted file mode 100644 index 97eaa32..0000000 --- a/docs/plans/2026-03-14-stats-redesign-design.md +++ /dev/null @@ -1,137 +0,0 @@ -# Stats Dashboard Redesign — Anime-Centric Approach - -**Date**: 2026-03-14 -**Status**: Approved - -## Motivation - -The current stats dashboard tracks metrics that aren't particularly useful (words seen as a hero stat, word clouds). 
The data model now supports anime-level tracking (`imm_anime`, `imm_videos` with `parsed_episode`), subtitle line storage (`imm_subtitle_lines`), and word/kanji occurrence mapping (`imm_word_line_occurrences`, `imm_kanji_line_occurrences`). The dashboard should be restructured around anime as the primary unit, with sessions, episodes, and rollups as the core metrics. - -## Data Model (already in place) - -- `imm_anime` — anime-level: title, AniList ID, romaji/english/native titles, metadata -- `imm_videos` — episode-level: `anime_id`, `parsed_episode`, `parsed_season` -- `imm_sessions` — session-level: linked to video -- `imm_subtitle_lines` — line-level: linked to session, video, anime -- `imm_word_line_occurrences` / `imm_kanji_line_occurrences` — word/kanji → line mapping -- `imm_media_art` — cover art + `episodes_total` -- `imm_daily_rollups` / `imm_monthly_rollups` — aggregated metrics -- `imm_words` — POS data: `part_of_speech`, `pos1`, `pos2`, `pos3` - -## Tab Structure (5 tabs) - -### 1. Overview - -**Hero Stats** (6 cards): -- Watch time today -- Cards mined today -- Sessions today -- Episodes watched today -- Current streak (days) -- Active anime (titles with sessions in last 30 days) - -**14-day Watch Time Chart**: Bar chart (keep existing). - -**Streak Calendar**: GitHub-contributions-style heatmap, last 90 days, colored by watch time intensity. - -**Tracking Snapshot** (secondary stats): Total sessions, total episodes, all-time hours, active days, total cards. - -**Recent Activity Feed**: Last 10 sessions grouped by day — anime title + cover art thumbnail, episode number, duration, cards mined. - -Removed from Overview: 14-day words chart, "words today", "words this week" hero stats. - -### 2. 
Anime (replaces Library) - -**Grid View**: -- Responsive card grid with cover art -- Each card: title, progress bar (episodes watched / `episodes_total`), watch time, cards mined -- Search/filter by title -- Sort: last watched, watch time, cards mined, progress % - -**Anime Detail View** (click into card): -- Header: cover art, titles (romaji/english/native), AniList link if available -- Progress: episode progress bar + "X / Y episodes" -- Stats row: total watch time, cards mined, words seen, lookup hit rate, avg session length -- Episode list: table of episodes (from `imm_videos`), each showing episode number, session count, watch time, cards, last watched date -- Watch time chart: bar chart over time (14d/30d/90d toggle) -- Words from this anime: top words learned from this show (via `imm_word_line_occurrences` → `imm_subtitle_lines` → `anime_id`), clickable to vocab detail -- Mining efficiency: cards per hour / cards per episode trend - -### 3. Trends - -**Existing charts (keep all 9)**: -1. Watch Time (min) — bar -2. Tracked Cards — bar -3. Words Seen — bar -4. Sessions — line -5. Avg Session (min) — line -6. Cards per Hour — line -7. Lookup Hit Rate (%) — line -8. Rolling 7d Watch Time — line -9. Rolling 7d Cards — line - -**New charts (6)**: -10. Episodes watched per day/week -11. Anime completion progress over time (cumulative episodes / total across all anime) -12. New anime started over time (first session per anime by date) -13. Watch time per anime (stacked bar — top 5 anime + "other") -14. Streak history (visual streak timeline — active vs gap periods) -15. Cards per episode trend - -**Controls**: Time range selector (7d/30d/90d/all), group by (day/month). - -### 4. 
Vocabulary - -**Hero Stats** (4 cards): -- Unique words (excluding particles/noise via POS filter) -- Unique kanji -- New this week -- Avg frequency - -**Filters/Controls**: -- POS filter toggle: hide particles, single-char tokens by default (toggleable) -- Sort: by frequency / last seen / first seen -- Search by word/reading - -**Word List**: Grid/table of words — headword, reading, POS tag, frequency. Each word is clickable. - -**Word Detail Panel** (slide-out or modal): -- Headword, reading, POS (part_of_speech, pos1, pos2, pos3) -- Frequency + first/last seen dates -- Anime appearances: which anime this word appeared in, frequency per anime -- Example lines: actual subtitle lines where the word was used -- Similar words: words sharing same kanji or reading - -**Kanji Section**: Same pattern — clickable kanji grid, detail panel with frequency, anime appearances, example lines, words using this kanji. - -**Charts**: Top repeated words bar chart, new words by day timeline. - -### 5. Sessions - -**Session List**: Chronological, grouped by day. -- Each row: anime title + episode, cover art thumbnail, duration (active/total), cards mined, lines seen, lookup rate -- Expandable detail: session timeline chart (words/cards over time), event log (pauses, seeks, lookups, cards mined) -- Filters: by anime title, date range - -Based on existing hidden `SessionsTab` component with anime/episode context added. 
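The kanji grid and example-line features above assume kanji characters can be pulled out of raw subtitle text. A minimal extraction sketch — treating every Han-script character as kanji is an assumption here, not necessarily how the existing tokenizer behaves:

```typescript
// Extract kanji characters (with repeats preserved) from a subtitle line,
// using the Unicode Han script property so extension blocks are covered.
function extractKanji(text: string): string[] {
  return [...text].filter((ch) => /\p{Script=Han}/u.test(ch));
}
```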
- -## Backend Changes Needed - -### New API Endpoints -- `GET /api/stats/anime` — list all anime with episode counts, watch time, progress -- `GET /api/stats/anime/:animeId` — anime detail: episodes, stats, recent sessions -- `GET /api/stats/anime/:animeId/words` — top words from this anime -- `GET /api/stats/vocabulary/:wordId` — word detail: POS, frequency, anime appearances, example lines, similar words -- `GET /api/stats/kanji/:kanjiId` — kanji detail: frequency, anime appearances, example lines, words using this kanji - -### Modified API Endpoints -- `GET /api/stats/vocabulary` — add POS fields to response, support POS filtering query param -- `GET /api/stats/overview` — add episodes today, active anime count -- `GET /api/stats/daily-rollups` — add episode count data for new trend charts - -### New Query Functions -- Anime-level aggregation: episodes per anime, watch time per anime, cards per anime -- Word/kanji occurrence lookups: join through `imm_word_line_occurrences` → `imm_subtitle_lines` → `imm_anime` -- Streak calendar data: daily activity map for last 90 days -- Episode-level trend data: episodes per day for trend charts -- Stacked watch time: per-anime daily breakdown diff --git a/docs/plans/2026-03-14-stats-redesign-implementation.md b/docs/plans/2026-03-14-stats-redesign-implementation.md deleted file mode 100644 index b9449ae..0000000 --- a/docs/plans/2026-03-14-stats-redesign-implementation.md +++ /dev/null @@ -1,1092 +0,0 @@ -# Stats Dashboard Redesign — Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Restructure the stats dashboard from word-heavy metrics to an anime-centric 5-tab layout (Overview, Anime, Trends, Vocabulary, Sessions). 
- -**Architecture:** The backend already has anime-level queries (`getAnimeLibrary`, `getAnimeDetail`, `getAnimeEpisodes`) and occurrence queries (`getWordOccurrences`, `getKanjiOccurrences`) but no API endpoints for anime. The frontend needs new types, hooks, components, and data builders. Work proceeds bottom-up: backend API → frontend types → hooks → data builders → components. - -**Tech Stack:** Hono (backend API), React + Recharts + Tailwind/Catppuccin (frontend), node:test (backend tests), Vitest (frontend tests) - ---- - -## Task 1: Add Anime API Endpoints to Stats Server - -**Files:** -- Modify: `src/core/services/stats-server.ts` -- Modify: `src/core/services/__tests__/stats-server.test.ts` - -**Context:** `query.ts` already exports `getAnimeLibrary()`, `getAnimeDetail(db, animeId)`, `getAnimeEpisodes(db, animeId)`. The stats server needs endpoints to expose them. Also need an anime cover art endpoint — anime links to videos, and videos link to `imm_media_art`. - -**Step 1: Write failing tests for the 3 new anime endpoints** - -Add to `src/core/services/__tests__/stats-server.test.ts`: - -```typescript -test('GET /api/stats/anime returns anime library', async () => { - const res = await app.request('/api/stats/anime'); - assert.equal(res.status, 200); - const body = await res.json(); - assert.ok(Array.isArray(body)); -}); - -test('GET /api/stats/anime/:animeId returns anime detail', async () => { - // Use animeId from seeded data - const res = await app.request('/api/stats/anime/1'); - assert.equal(res.status, 200); - const body = await res.json(); - assert.ok(body.detail); - assert.ok(Array.isArray(body.episodes)); -}); - -test('GET /api/stats/anime/:animeId returns 404 for missing anime', async () => { - const res = await app.request('/api/stats/anime/99999'); - assert.equal(res.status, 404); -}); -``` - -**Step 2: Run tests to verify they fail** - -Run: `node --test src/core/services/__tests__/stats-server.test.ts` -Expected: FAIL — 404 for all anime 
endpoints - -**Step 3: Add the anime endpoints to stats-server.ts** - -Add after the existing media endpoints (around line 165): - -```typescript -app.get('/api/stats/anime', async (c) => { - const rows = getAnimeLibrary(tracker.db); - return c.json(rows); -}); - -app.get('/api/stats/anime/:animeId', async (c) => { - const animeId = parseIntQuery(c.req.param('animeId'), 0); - if (animeId <= 0) return c.body(null, 400); - const detail = getAnimeDetail(tracker.db, animeId); - if (!detail) return c.body(null, 404); - const episodes = getAnimeEpisodes(tracker.db, animeId); - return c.json({ detail, episodes }); -}); - -app.get('/api/stats/anime/:animeId/cover', async (c) => { - const animeId = parseIntQuery(c.req.param('animeId'), 0); - if (animeId <= 0) return c.body(null, 404); - const art = getAnimeCoverArt(tracker.db, animeId); - if (!art?.coverBlob) return c.body(null, 404); - return new Response(new Uint8Array(art.coverBlob), { - headers: { - 'Content-Type': 'image/jpeg', - 'Cache-Control': 'public, max-age=86400', - }, - }); -}); -``` - -Note: `getAnimeCoverArt` may need to be added to `query.ts` — it should look up the first video for the anime and return its cover art. Check if this already exists; if not, add it: - -```typescript -export function getAnimeCoverArt(db: DatabaseSync, animeId: number): MediaArtRow | null { - return db.prepare(` - SELECT a.video_id, a.anilist_id, a.cover_url, a.cover_blob, - a.title_romaji, a.title_english, a.episodes_total, a.fetched_at_ms - FROM imm_media_art a - JOIN imm_videos v ON v.video_id = a.video_id - WHERE v.anime_id = ? - AND a.cover_blob IS NOT NULL - LIMIT 1 - `).get(animeId) as MediaArtRow | null; -} -``` - -Import the new query functions at the top of `stats-server.ts`. 
- -**Step 4: Run tests to verify they pass** - -Run: `node --test src/core/services/__tests__/stats-server.test.ts` -Expected: All tests PASS - -**Step 5: Commit** - -```bash -git add src/core/services/stats-server.ts src/core/services/__tests__/stats-server.test.ts src/core/services/immersion-tracker/query.ts -git commit -m "feat(stats): add anime API endpoints to stats server" -``` - ---- - -## Task 2: Add Anime Words and Anime Rollups Query + Endpoints - -**Files:** -- Modify: `src/core/services/immersion-tracker/query.ts` -- Modify: `src/core/services/immersion-tracker/types.ts` -- Modify: `src/core/services/stats-server.ts` -- Modify: `src/core/services/__tests__/stats-server.test.ts` - -**Context:** The anime detail view needs "top words from this anime" and daily rollups scoped to an anime. Word occurrences already join through `imm_word_line_occurrences` → `imm_subtitle_lines` → `anime_id`. - -**Step 1: Add query functions to query.ts** - -```typescript -export interface AnimeWordRow { - wordId: number; - headword: string; - word: string; - reading: string; - partOfSpeech: string | null; - frequency: number; -} - -export function getAnimeWords(db: DatabaseSync, animeId: number, limit = 50): AnimeWordRow[] { - return db.prepare(` - SELECT w.id AS wordId, w.headword, w.word, w.reading, w.part_of_speech AS partOfSpeech, - SUM(o.occurrence_count) AS frequency - FROM imm_word_line_occurrences o - JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id - JOIN imm_words w ON w.id = o.word_id - WHERE sl.anime_id = ? - GROUP BY w.id - ORDER BY frequency DESC - LIMIT ? 
- `).all(animeId, limit) as AnimeWordRow[]; -} - -export function getAnimeDailyRollups(db: DatabaseSync, animeId: number, limit = 90): ImmersionSessionRollupRow[] { - return db.prepare(` - SELECT r.rollup_day AS rollupDayOrMonth, r.video_id AS videoId, - r.total_sessions AS totalSessions, r.total_active_min AS totalActiveMin, - r.total_lines_seen AS totalLinesSeen, r.total_words_seen AS totalWordsSeen, - r.total_tokens_seen AS totalTokensSeen, r.total_cards AS totalCards, - r.cards_per_hour AS cardsPerHour, r.words_per_min AS wordsPerMin, - r.lookup_hit_rate AS lookupHitRate - FROM imm_daily_rollups r - JOIN imm_videos v ON v.video_id = r.video_id - WHERE v.anime_id = ? - ORDER BY r.rollup_day DESC - LIMIT ? - `).all(animeId, limit) as ImmersionSessionRollupRow[]; -} -``` - -Add `AnimeWordRow` to the exports in `types.ts` if needed. - -**Step 2: Add API endpoints** - -In `stats-server.ts`, add: - -```typescript -app.get('/api/stats/anime/:animeId/words', async (c) => { - const animeId = parseIntQuery(c.req.param('animeId'), 0); - const limit = parseIntQuery(c.req.query('limit'), 50, 200); - if (animeId <= 0) return c.body(null, 400); - return c.json(getAnimeWords(tracker.db, animeId, limit)); -}); - -app.get('/api/stats/anime/:animeId/rollups', async (c) => { - const animeId = parseIntQuery(c.req.param('animeId'), 0); - const limit = parseIntQuery(c.req.query('limit'), 90, 365); - if (animeId <= 0) return c.body(null, 400); - return c.json(getAnimeDailyRollups(tracker.db, animeId, limit)); -}); -``` - -**Step 3: Write tests and verify** - -Run: `node --test src/core/services/__tests__/stats-server.test.ts` - -**Step 4: Commit** - -```bash -git add src/core/services/immersion-tracker/query.ts src/core/services/immersion-tracker/types.ts src/core/services/stats-server.ts src/core/services/__tests__/stats-server.test.ts -git commit -m "feat(stats): add anime words and rollups query + endpoints" -``` - ---- - -## Task 3: Extend Overview Endpoint with Episodes and 
Active Anime - -**Files:** -- Modify: `src/core/services/immersion-tracker/query.ts` -- Modify: `src/core/services/stats-server.ts` -- Modify: `src/core/services/__tests__/stats-server.test.ts` - -**Context:** The overview endpoint currently returns `{ sessions, rollups, hints }`. We need to add `episodesToday` and `activeAnimeCount` to the hints. - -**Step 1: Extend `getQueryHints` in query.ts** - -The current `getQueryHints` returns `{ totalSessions, activeSessions }`. Add: - -```typescript -// Episodes today: count distinct video_ids with sessions started today -const today = Math.floor(Date.now() / 86400000); -const episodesToday = (db.prepare(` - SELECT COUNT(DISTINCT v.video_id) AS count - FROM imm_sessions s - JOIN imm_videos v ON v.video_id = s.video_id - WHERE CAST(s.started_at_ms / 86400000 AS INTEGER) = ? -`).get(today) as { count: number })?.count ?? 0; - -// Active anime: anime with sessions in last 30 days -const thirtyDaysAgoMs = Date.now() - 30 * 86400000; -const activeAnimeCount = (db.prepare(` - SELECT COUNT(DISTINCT v.anime_id) AS count - FROM imm_sessions s - JOIN imm_videos v ON v.video_id = s.video_id - WHERE v.anime_id IS NOT NULL - AND s.started_at_ms >= ? -`).get(thirtyDaysAgoMs) as { count: number })?.count ?? 0; -``` - -Return these as part of the hints object. 
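Both snippets above rely on the same epoch-day convention (`ms / 86400000`, truncated). Keeping it in one helper avoids the TypeScript and SQL sides drifting; a sketch — `toEpochDay` does not exist in the codebase yet:

```typescript
// Days since the Unix epoch, matching CAST(ms / 86400000 AS INTEGER) in SQL.
// Math.floor and SQL CAST truncation only agree for non-negative values,
// which holds for the timestamps the tracker stores.
const MS_PER_DAY = 86_400_000;

function toEpochDay(tsMs: number): number {
  return Math.floor(tsMs / MS_PER_DAY);
}
```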
- -**Step 2: Write test, run, verify** - -Run: `node --test src/core/services/__tests__/stats-server.test.ts` - -**Step 3: Commit** - -```bash -git add src/core/services/immersion-tracker/query.ts src/core/services/stats-server.ts src/core/services/__tests__/stats-server.test.ts -git commit -m "feat(stats): add episodes today and active anime to overview hints" -``` - ---- - -## Task 4: Extend Vocabulary Endpoint with POS Data - -**Files:** -- Modify: `src/core/services/immersion-tracker/query.ts` (function `getVocabularyStats`) -- Modify: `src/core/services/immersion-tracker/types.ts` (`VocabularyStatsRow`) -- Modify: `src/core/services/__tests__/stats-server.test.ts` - -**Context:** The vocabulary endpoint currently returns headword, word, reading, frequency, firstSeen, lastSeen. It needs to also return `partOfSpeech`, `pos1`, `pos2`, `pos3` and support a `?excludePos=particle` query param for filtering. - -**Step 1: Update `VocabularyStatsRow` in types.ts** - -Add fields: `partOfSpeech: string | null`, `pos1: string | null`, `pos2: string | null`, `pos3: string | null`. - -**Step 2: Update `getVocabularyStats` query in query.ts** - -Add `part_of_speech AS partOfSpeech, pos1, pos2, pos3` to the SELECT. Add optional POS filtering parameter. - -**Step 3: Update the API endpoint in stats-server.ts** - -Pass the `excludePos` query param through to the query function. - -**Step 4: Update frontend type `VocabularyEntry` in stats/src/types/stats.ts** - -Add: `partOfSpeech: string | null`, `pos1: string | null`, `pos2: string | null`, `pos3: string | null`. 
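For the filtering in Steps 2–3, one option is a small predicate applied to the enriched rows rather than extra SQL. A sketch — the row shape follows the fields added above, and `isExcludedPos` is an illustrative name:

```typescript
// Minimal slice of the vocabulary row relevant to POS filtering.
interface PosRow {
  partOfSpeech: string | null;
  pos1: string | null;
}

// True when the row's POS matches any excluded tag (e.g. "particle").
// Rows with no POS data are kept so untagged words are never hidden.
function isExcludedPos(row: PosRow, excluded: string[]): boolean {
  const tags = [row.partOfSpeech, row.pos1].filter((t): t is string => t !== null);
  return tags.some((t) => excluded.includes(t));
}
```

The endpoint would then drop rows where `isExcludedPos(row, excludePos.split(","))` is true before responding.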
- -**Step 5: Test and commit** - -```bash -git commit -m "feat(stats): add POS data to vocabulary endpoint and support filtering" -``` - ---- - -## Task 5: Add Streak Calendar Query + Endpoint - -**Files:** -- Modify: `src/core/services/immersion-tracker/query.ts` -- Modify: `src/core/services/immersion-tracker/types.ts` -- Modify: `src/core/services/stats-server.ts` -- Modify: `src/core/services/__tests__/stats-server.test.ts` - -**Context:** The streak calendar needs a map of `{ epochDay → totalActiveMin }` for the last 90 days. - -**Step 1: Add query function** - -```typescript -export interface StreakCalendarRow { - epochDay: number; - totalActiveMin: number; -} - -export function getStreakCalendar(db: DatabaseSync, days = 90): StreakCalendarRow[] { - const cutoffDay = Math.floor(Date.now() / 86400000) - days; - return db.prepare(` - SELECT rollup_day AS epochDay, SUM(total_active_min) AS totalActiveMin - FROM imm_daily_rollups - WHERE rollup_day >= ? - GROUP BY rollup_day - ORDER BY rollup_day ASC - `).all(cutoffDay) as StreakCalendarRow[]; -} -``` - -**Step 2: Add endpoint** - -```typescript -app.get('/api/stats/streak-calendar', async (c) => { - const days = parseIntQuery(c.req.query('days'), 90, 365); - return c.json(getStreakCalendar(tracker.db, days)); -}); -``` - -**Step 3: Test and commit** - -```bash -git commit -m "feat(stats): add streak calendar endpoint" -``` - ---- - -## Task 6: Add Trend Episode/Anime Queries + Endpoints - -**Files:** -- Modify: `src/core/services/immersion-tracker/query.ts` -- Modify: `src/core/services/stats-server.ts` -- Modify: `src/core/services/__tests__/stats-server.test.ts` - -**Context:** Trends tab needs new data: episodes per day, new anime started per day, watch time per anime (stacked). 

**Step 1: Add query functions**

```typescript
export interface EpisodesPerDayRow {
  epochDay: number;
  episodeCount: number;
}

export function getEpisodesPerDay(db: DatabaseSync, limit = 90): EpisodesPerDayRow[] {
  return db.prepare(`
    SELECT CAST(s.started_at_ms / 86400000 AS INTEGER) AS epochDay,
           COUNT(DISTINCT s.video_id) AS episodeCount
    FROM imm_sessions s
    GROUP BY epochDay
    ORDER BY epochDay DESC
    LIMIT ?
  `).all(limit) as EpisodesPerDayRow[];
}

export interface NewAnimePerDayRow {
  epochDay: number;
  newAnimeCount: number;
}

export function getNewAnimePerDay(db: DatabaseSync, limit = 90): NewAnimePerDayRow[] {
  return db.prepare(`
    -- The subquery already yields one MIN(started_at_ms) per anime, so the
    -- outer query only buckets those timestamps into days. No outer MIN:
    -- grouping on an aggregate alias is invalid in SQLite.
    SELECT CAST(f.started_at_ms / 86400000 AS INTEGER) AS epochDay,
           COUNT(*) AS newAnimeCount
    FROM (
      SELECT v.anime_id, MIN(s.started_at_ms) AS started_at_ms
      FROM imm_sessions s
      JOIN imm_videos v ON v.video_id = s.video_id
      WHERE v.anime_id IS NOT NULL
      GROUP BY v.anime_id
    ) f
    GROUP BY epochDay
    ORDER BY epochDay DESC
    LIMIT ?
  `).all(limit) as NewAnimePerDayRow[];
}

export interface WatchTimePerAnimeRow {
  epochDay: number;
  animeId: number;
  animeTitle: string;
  totalActiveMin: number;
}

export function getWatchTimePerAnime(db: DatabaseSync, limit = 90): WatchTimePerAnimeRow[] {
  const cutoffDay = Math.floor(Date.now() / 86400000) - limit;
  return db.prepare(`
    SELECT r.rollup_day AS epochDay, a.anime_id AS animeId,
           a.canonical_title AS animeTitle,
           SUM(r.total_active_min) AS totalActiveMin
    FROM imm_daily_rollups r
    JOIN imm_videos v ON v.video_id = r.video_id
    JOIN imm_anime a ON a.anime_id = v.anime_id
    WHERE r.rollup_day >= ?
    GROUP BY r.rollup_day, a.anime_id
    ORDER BY r.rollup_day ASC
  `).all(cutoffDay) as WatchTimePerAnimeRow[];
}
```

**Step 2: Add endpoints**

```typescript
app.get('/api/stats/trends/episodes-per-day', async (c) => {
  const limit = parseIntQuery(c.req.query('limit'), 90, 365);
  return c.json(getEpisodesPerDay(tracker.db, limit));
});

app.get('/api/stats/trends/new-anime-per-day', async (c) => {
  const limit = parseIntQuery(c.req.query('limit'), 90, 365);
  return c.json(getNewAnimePerDay(tracker.db, limit));
});

app.get('/api/stats/trends/watch-time-per-anime', async (c) => {
  const limit = parseIntQuery(c.req.query('limit'), 90, 365);
  return c.json(getWatchTimePerAnime(tracker.db, limit));
});
```

**Step 3: Test and commit**

```bash
git commit -m "feat(stats): add episode and anime trend query endpoints"
```

---

## Task 7: Add Word/Kanji Detail Queries + Endpoints

**Files:**
- Modify: `src/core/services/immersion-tracker/query.ts`
- Modify: `src/core/services/stats-server.ts`
- Modify: `src/core/services/__tests__/stats-server.test.ts`

**Context:** Clicking a word in the vocab tab should show full detail: POS, frequency, anime appearances, example lines, similar words.

**Step 1: Add query functions**

```typescript
export interface WordDetailRow {
  wordId: number;
  headword: string;
  word: string;
  reading: string;
  partOfSpeech: string | null;
  pos1: string | null;
  pos2: string | null;
  pos3: string | null;
  frequency: number;
  firstSeen: number;
  lastSeen: number;
}

export function getWordDetail(db: DatabaseSync, wordId: number): WordDetailRow | null {
  return db.prepare(`
    SELECT id AS wordId, headword, word, reading,
           part_of_speech AS partOfSpeech, pos1, pos2, pos3,
           frequency, first_seen AS firstSeen, last_seen AS lastSeen
    FROM imm_words WHERE id = ?
  `).get(wordId) as WordDetailRow | null;
}

export interface WordAnimeAppearanceRow {
  animeId: number;
  animeTitle: string;
  occurrenceCount: number;
}

export function getWordAnimeAppearances(db: DatabaseSync, wordId: number): WordAnimeAppearanceRow[] {
  return db.prepare(`
    SELECT a.anime_id AS animeId, a.canonical_title AS animeTitle,
           SUM(o.occurrence_count) AS occurrenceCount
    FROM imm_word_line_occurrences o
    JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
    JOIN imm_anime a ON a.anime_id = sl.anime_id
    WHERE o.word_id = ?
    GROUP BY a.anime_id
    ORDER BY occurrenceCount DESC
  `).all(wordId) as WordAnimeAppearanceRow[];
}

export interface SimilarWordRow {
  wordId: number;
  headword: string;
  word: string;
  reading: string;
  frequency: number;
}

export function getSimilarWords(db: DatabaseSync, wordId: number, limit = 10): SimilarWordRow[] {
  const word = db.prepare('SELECT headword, reading FROM imm_words WHERE id = ?').get(wordId) as { headword: string; reading: string } | null;
  if (!word) return [];
  // Words sharing kanji characters or same reading
  return db.prepare(`
    SELECT id AS wordId, headword, word, reading, frequency
    FROM imm_words
    WHERE id != ?
      AND (reading = ? OR headword LIKE ? OR headword LIKE ?)
    ORDER BY frequency DESC
    LIMIT ?
  `).all(
    wordId,
    word.reading,
    `%${word.headword.charAt(0)}%`,
    `%${word.headword.charAt(word.headword.length - 1)}%`,
    limit
  ) as SimilarWordRow[];
}
```

Add analogous `getKanjiDetail`, `getKanjiAnimeAppearances`, `getKanjiWords` functions.

**Step 2: Add endpoints**

```typescript
app.get('/api/stats/vocabulary/:wordId/detail', async (c) => {
  const wordId = parseIntQuery(c.req.param('wordId'), 0);
  if (wordId <= 0) return c.body(null, 400);
  const detail = getWordDetail(tracker.db, wordId);
  if (!detail) return c.body(null, 404);
  const animeAppearances = getWordAnimeAppearances(tracker.db, wordId);
  const similarWords = getSimilarWords(tracker.db, wordId);
  return c.json({ detail, animeAppearances, similarWords });
});

app.get('/api/stats/kanji/:kanjiId/detail', async (c) => {
  const kanjiId = parseIntQuery(c.req.param('kanjiId'), 0);
  if (kanjiId <= 0) return c.body(null, 400);
  const detail = getKanjiDetail(tracker.db, kanjiId);
  if (!detail) return c.body(null, 404);
  const animeAppearances = getKanjiAnimeAppearances(tracker.db, kanjiId);
  const words = getKanjiWords(tracker.db, kanjiId);
  return c.json({ detail, animeAppearances, words });
});
```

**Step 3: Test and commit**

```bash
git commit -m "feat(stats): add word and kanji detail endpoints"
```

---

## Task 8: Update Frontend Types

**Files:**
- Modify: `stats/src/types/stats.ts`

**Context:** Add all new types needed by the frontend for the redesigned dashboard.

**Step 1: Add new types**

```typescript
// Anime types
export interface AnimeLibraryItem {
  animeId: number;
  canonicalTitle: string;
  anilistId: number | null;
  totalSessions: number;
  totalActiveMs: number;
  totalCards: number;
  totalWordsSeen: number;
  episodeCount: number;
  lastWatchedMs: number;
}

export interface AnimeDetailData {
  detail: {
    animeId: number;
    canonicalTitle: string;
    anilistId: number | null;
    titleRomaji: string | null;
    titleEnglish: string | null;
    titleNative: string | null;
    totalSessions: number;
    totalActiveMs: number;
    totalCards: number;
    totalWordsSeen: number;
    totalLinesSeen: number;
    totalLookupCount: number;
    totalLookupHits: number;
    episodeCount: number;
    lastWatchedMs: number;
  };
  episodes: AnimeEpisode[];
}

export interface AnimeEpisode {
  videoId: number;
  parsedEpisode: number | null;
  parsedSeason: number | null;
  canonicalTitle: string;
  totalSessions: number;
  totalActiveMs: number;
  totalCards: number;
  lastWatchedMs: number;
}

export interface AnimeWord {
  wordId: number;
  headword: string;
  word: string;
  reading: string;
  partOfSpeech: string | null;
  frequency: number;
}

// Streak calendar
export interface StreakCalendarDay {
  epochDay: number;
  totalActiveMin: number;
}

// Trend types
export interface EpisodesPerDay {
  epochDay: number;
  episodeCount: number;
}

export interface NewAnimePerDay {
  epochDay: number;
  newAnimeCount: number;
}

export interface WatchTimePerAnime {
  epochDay: number;
  animeId: number;
  animeTitle: string;
  totalActiveMin: number;
}

// Word/Kanji detail
export interface WordDetailData {
  detail: {
    wordId: number;
    headword: string;
    word: string;
    reading: string;
    partOfSpeech: string | null;
    pos1: string | null;
    pos2: string | null;
    pos3: string | null;
    frequency: number;
    firstSeen: number;
    lastSeen: number;
  };
  animeAppearances: Array<{
    animeId: number;
    animeTitle: string;
    occurrenceCount: number;
  }>;
  similarWords: Array<{
    wordId: number;
    headword: string;
    word: string;
    reading: string;
    frequency: number;
  }>;
}
```

Update `VocabularyEntry` to include POS fields.

**Step 2: Commit**

```bash
git commit -m "feat(stats): add frontend types for anime-centric dashboard"
```

---

## Task 9: Update API Client and IPC Client

**Files:**
- Modify: `stats/src/lib/api-client.ts`
- Modify: `stats/src/lib/ipc-client.ts`

**Context:** Both clients need methods for the new endpoints.

**Step 1: Add new methods to api-client.ts**

```typescript
getAnimeLibrary: () => fetchJson('/api/stats/anime'),
getAnimeDetail: (animeId: number) => fetchJson(`/api/stats/anime/${animeId}`),
getAnimeWords: (animeId: number, limit = 50) => fetchJson(`/api/stats/anime/${animeId}/words?limit=${limit}`),
getAnimeRollups: (animeId: number, limit = 90) => fetchJson(`/api/stats/anime/${animeId}/rollups?limit=${limit}`),
getAnimeCover: (animeId: number) => `/api/stats/anime/${animeId}/cover`,
getStreakCalendar: (days = 90) => fetchJson(`/api/stats/streak-calendar?days=${days}`),
getEpisodesPerDay: (limit = 90) => fetchJson(`/api/stats/trends/episodes-per-day?limit=${limit}`),
getNewAnimePerDay: (limit = 90) => fetchJson(`/api/stats/trends/new-anime-per-day?limit=${limit}`),
getWatchTimePerAnime: (limit = 90) => fetchJson(`/api/stats/trends/watch-time-per-anime?limit=${limit}`),
getWordDetail: (wordId: number) => fetchJson(`/api/stats/vocabulary/${wordId}/detail`),
getKanjiDetail: (kanjiId: number) => fetchJson(`/api/stats/kanji/${kanjiId}/detail`),
```

Mirror the same methods in `ipc-client.ts`.
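
The methods above lean on the client's existing `fetchJson` helper. For reference, a minimal version consistent with these call sites might look like this (the error message shape is an assumption; adjust to the client's conventions):

```typescript
// Sketch of a fetchJson helper: fetch a path, fail loudly on non-2xx,
// and parse the body as JSON typed by the caller.
export async function fetchJson<T>(path: string): Promise<T> {
  const res = await fetch(path);
  if (!res.ok) {
    throw new Error(`Stats API request failed: ${res.status} (${path})`);
  }
  return (await res.json()) as T;
}
```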

**Step 2: Commit**

```bash
git commit -m "feat(stats): add anime and detail methods to API clients"
```

---

## Task 10: Add Frontend Hooks

**Files:**
- Create: `stats/src/hooks/useAnimeLibrary.ts`
- Create: `stats/src/hooks/useAnimeDetail.ts`
- Create: `stats/src/hooks/useStreakCalendar.ts`
- Create: `stats/src/hooks/useWordDetail.ts`
- Create: `stats/src/hooks/useKanjiDetail.ts`

**Context:** Follow the same pattern as existing hooks (e.g., `useMediaLibrary`, `useMediaDetail`). Each hook: fetches on mount or param change, returns `{ data, loading, error }`, handles cleanup.

**Step 1: Create hooks**

Pattern for each (example `useAnimeLibrary`):

```typescript
import { useState, useEffect } from 'react';
import { getStatsClient } from './useStatsApi';
import type { AnimeLibraryItem } from '../types/stats';

export function useAnimeLibrary() {
  const [anime, setAnime] = useState<AnimeLibraryItem[]>([]);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false;
    const client = getStatsClient();
    client.getAnimeLibrary()
      .then((data) => { if (!cancelled) setAnime(data); })
      .catch((err) => { if (!cancelled) setError(String(err)); })
      .finally(() => { if (!cancelled) setLoading(false); });
    return () => { cancelled = true; };
  }, []);

  return { anime, loading, error };
}
```

Follow the same pattern for `useAnimeDetail(animeId)`, `useStreakCalendar()`, `useWordDetail(wordId)`, `useKanjiDetail(kanjiId)`.

**Step 2: Commit**

```bash
git commit -m "feat(stats): add frontend hooks for anime, streak, and word detail"
```

---

## Task 11: Update Dashboard Data Builders

**Files:**
- Modify: `stats/src/lib/dashboard-data.ts`
- Modify: `stats/src/lib/dashboard-data.test.ts`

**Context:** Update `buildOverviewSummary` to include episodes today and active anime. Add a builder for streak calendar data. Extend `buildTrendDashboard` for the new chart series.

**Step 1: Update OverviewSummary interface**

Add fields: `episodesToday: number`, `activeAnimeCount: number`. Remove `todayWords`, `weekWords`. Remove `recentWords` chart data.

**Step 2: Update buildOverviewSummary**

Pull `episodesToday` and `activeAnimeCount` from `data.hints`.

**Step 3: Add streak calendar builder**

```typescript
export interface StreakCalendarPoint {
  date: string; // YYYY-MM-DD
  value: number; // active minutes
}

export function buildStreakCalendar(days: StreakCalendarDay[]): StreakCalendarPoint[] {
  return days.map(d => ({
    date: epochDayToDate(d.epochDay).toISOString().slice(0, 10),
    value: d.totalActiveMin,
  }));
}
```

**Step 4: Update tests, run, verify**

Run: `npx vitest run stats/src/lib/dashboard-data.test.ts`

**Step 5: Commit**

```bash
git commit -m "feat(stats): update dashboard data builders for anime-centric overview"
```

---

## Task 12: Update Tab Bar and App Router

**Files:**
- Modify: `stats/src/App.tsx`
- Modify: `stats/src/components/layout/TabBar.tsx`

**Context:** Change tabs from `['overview', 'library', 'trends', 'vocabulary']` to `['overview', 'anime', 'trends', 'vocabulary', 'sessions']`.

**Step 1: Update TabBar**

Change `TabId` type to `'overview' | 'anime' | 'trends' | 'vocabulary' | 'sessions'`. Update tab labels.

**Step 2: Update App.tsx**

Replace `LibraryTab` import with `AnimeTab` (to be created). Add `SessionsTab` import. Update conditional rendering.

Note: `AnimeTab` doesn't exist yet — create a placeholder that renders "Anime tab coming soon" for now. Wire up `SessionsTab`.
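
The new tab set can be captured as one typed list that both `TabBar` and `App.tsx` read from, so order and labels have a single source of truth. A sketch (the display labels are assumptions):

```typescript
export type TabId = 'overview' | 'anime' | 'trends' | 'vocabulary' | 'sessions';

// Single source of truth for tab order; labels here are assumed placeholders.
export const TABS: ReadonlyArray<{ id: TabId; label: string }> = [
  { id: 'overview', label: 'Overview' },
  { id: 'anime', label: 'Anime' },
  { id: 'trends', label: 'Trends' },
  { id: 'vocabulary', label: 'Vocabulary' },
  { id: 'sessions', label: 'Sessions' },
];
```

`App.tsx` can then switch on `TabId` for conditional rendering while `TabBar` maps over `TABS`.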

**Step 3: Commit**

```bash
git commit -m "feat(stats): update tab bar to 5-tab anime-centric layout"
```

---

## Task 13: Redesign Overview Tab

**Files:**
- Modify: `stats/src/components/overview/OverviewTab.tsx`
- Modify: `stats/src/components/overview/HeroStats.tsx`
- Create: `stats/src/components/overview/StreakCalendar.tsx`

**Context:** Update hero stats to show the 6 new metrics. Add streak calendar. Remove words chart. Keep watch time chart and recent sessions.

**Step 1: Update HeroStats**

Change the 6 cards to:

1. Watch Time Today
2. Cards Mined Today
3. Sessions Today
4. Episodes Today
5. Current Streak
6. Active Anime

**Step 2: Create StreakCalendar component**

GitHub-contributions-style heatmap. 90 days, 7 rows (days of week), colored by intensity (Catppuccin palette: ctp-surface0 for empty, ctp-green shades for activity levels).

Use `useStreakCalendar()` hook to fetch data.

**Step 3: Update OverviewTab layout**

- HeroStats (6 cards)
- Watch Time Chart (keep)
- Streak Calendar (new)
- Tracking Snapshot (updated: total sessions, total episodes, all-time hours, active days, total cards)
- Recent Sessions (keep, add episode number to display)

**Step 4: Commit**

```bash
git commit -m "feat(stats): redesign overview tab with episodes, streak calendar"
```

---

## Task 14: Build Anime Tab

**Files:**
- Create: `stats/src/components/anime/AnimeTab.tsx`
- Create: `stats/src/components/anime/AnimeCard.tsx`
- Create: `stats/src/components/anime/AnimeDetailView.tsx`
- Create: `stats/src/components/anime/AnimeHeader.tsx`
- Create: `stats/src/components/anime/EpisodeList.tsx`
- Create: `stats/src/components/anime/AnimeWordList.tsx`
- Create: `stats/src/components/anime/AnimeCoverImage.tsx`

**Context:** This replaces the Library tab. Reuse patterns from `LibraryTab` / `MediaDetailView` but centered on `anime_id` instead of `video_id`.

**Step 1: AnimeTab (grid view)**

- Search input for filtering by title
- Sort dropdown: last watched, watch time, cards, progress
- Responsive grid of AnimeCard components
- Total count + total watch time header

**Step 2: AnimeCard**

- Cover art via `AnimeCoverImage` (fetches from `/api/stats/anime/:animeId/cover`)
- Title, progress bar (`episodeCount / episodesTotal`), watch time, cards mined
- Click handler to enter detail view

**Step 3: AnimeDetailView**

- AnimeHeader: cover art, titles (romaji/english/native), AniList link, progress bar
- Stats row: 6 StatCards (watch time, cards, words, lookup rate, sessions, avg session)
- EpisodeList: table of episodes from `AnimeDetailData.episodes`
- Watch time chart using anime rollups
- AnimeWordList: top words from `getAnimeWords`, each clickable (opens vocab detail)
- Mining efficiency chart: cards per hour / cards per episode

**Step 4: Commit**

```bash
git commit -m "feat(stats): build anime tab with grid, detail, episodes, words"
```

---

## Task 15: Extend Trends Tab with New Charts

**Files:**
- Modify: `stats/src/components/trends/TrendsTab.tsx`
- Modify: `stats/src/hooks/useTrends.ts`

**Context:** Add the 6 new trend charts. The hook needs to fetch additional data from the new endpoints.

**Step 1: Extend useTrends hook**

Add fetches for:

- `getEpisodesPerDay(limit)`
- `getNewAnimePerDay(limit)`
- `getWatchTimePerAnime(limit)`

Return these alongside existing data.
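
The `getWatchTimePerAnime` rows arrive in long format, one row per day and anime, while a stacked chart wants one record per day. A sketch of the pivot (the function name is an assumption, not an existing helper):

```typescript
export interface WatchTimePerAnime {
  epochDay: number;
  animeId: number;
  animeTitle: string;
  totalActiveMin: number;
}

// Pivot long-format rows into one record per day keyed by anime title,
// the wide shape a stacked bar series typically consumes.
export function pivotWatchTime(rows: WatchTimePerAnime[]): Array<Record<string, number>> {
  const byDay = new Map<number, Record<string, number>>();
  for (const row of rows) {
    let day = byDay.get(row.epochDay);
    if (!day) {
      day = { epochDay: row.epochDay };
      byDay.set(row.epochDay, day);
    }
    day[row.animeTitle] = (day[row.animeTitle] ?? 0) + row.totalActiveMin;
  }
  // Chronological order for the x-axis.
  return [...byDay.values()].sort((a, b) => a['epochDay'] - b['epochDay']);
}
```

Keying by title keeps the series labels ready for a chart legend; keying by `animeId` with a separate title lookup would also work if titles can collide.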

**Step 2: Add new TrendChart instances to TrendsTab**

After the existing 9 charts, add:

- Episodes per Day (bar chart)
- Anime Completion Progress (line chart — cumulative)
- New Anime Started (bar chart)
- Watch Time per Anime (stacked bar — needs a new `StackedTrendChart` component or extend `TrendChart`)
- Streak History (visual timeline)
- Cards per Episode (line chart — derive from cards/episodes per day)

For the stacked bar chart, extend `TrendChart` to accept a `stacked` prop with multiple data series, or create a `StackedTrendChart` wrapper.

**Step 3: Organize charts into sections**

Group visually with section headers: "Activity", "Anime", "Efficiency".

**Step 4: Commit**

```bash
git commit -m "feat(stats): add 6 new trend charts for episodes, anime, efficiency"
```

---

## Task 16: Redesign Vocabulary Tab

**Files:**
- Modify: `stats/src/components/vocabulary/VocabularyTab.tsx`
- Modify: `stats/src/components/vocabulary/WordList.tsx`
- Modify: `stats/src/components/vocabulary/KanjiBreakdown.tsx`
- Modify: `stats/src/components/vocabulary/VocabularyOccurrencesDrawer.tsx`
- Create: `stats/src/components/vocabulary/WordDetailPanel.tsx`
- Create: `stats/src/components/vocabulary/KanjiDetailPanel.tsx`

**Context:** The existing drawer shows occurrence lines. Replace/extend it with a full detail panel showing POS, anime appearances, example lines, similar words.

**Step 1: Update VocabularyTab**

- Hero stats: update to use POS-filtered counts (exclude particles)
- Add POS filter toggle (checkbox to show/hide particles, single-char tokens)
- Add search input for word/reading search
- Keep top words chart and new words timeline

**Step 2: Update WordList**

- Show POS tag badge next to each word
- Make each word clickable → opens `WordDetailPanel`
- Support POS filtering from parent

**Step 3: Create WordDetailPanel**

Slide-out panel (reuse `VocabularyOccurrencesDrawer` pattern):

- Header: headword, reading, POS (pos1/pos2/pos3)
- Stats: frequency, first/last seen
- Anime appearances: list of anime with per-anime frequency (from `getWordDetail`)
- Example lines: paginated subtitle lines (from existing `getWordOccurrences`)
- Similar words: clickable list (from `getWordDetail`)

Uses `useWordDetail(wordId)` hook.

**Step 4: Update KanjiBreakdown**

Same pattern: clickable kanji → `KanjiDetailPanel` with frequency, anime appearances, example lines, words using this kanji.

**Step 5: Commit**

```bash
git commit -m "feat(stats): redesign vocabulary tab with POS filter and detail panels"
```

---

## Task 17: Enhance Sessions Tab

**Files:**
- Modify: `stats/src/components/sessions/SessionsTab.tsx`
- Modify: `stats/src/components/sessions/SessionRow.tsx`
- Modify: `stats/src/components/sessions/SessionDetail.tsx`

**Context:** The existing `SessionsTab` is functional but hidden. Enable it and add anime/episode context.

**Step 1: Update SessionRow**

- Add cover art thumbnail (from anime cover endpoint)
- Show anime title + episode number instead of just canonical title
- Keep: duration, cards, lines, lookup rate

**Step 2: Update SessionsTab**

- Add filter by anime title
- Add date range filter
- Group sessions by day (Today / Yesterday / date)

**Step 3: Verify SessionDetail still works**

The inline expandable detail with timeline chart and events should work as-is.

**Step 4: Commit**

```bash
git commit -m "feat(stats): enhance sessions tab with anime context and filters"
```

---

## Task 18: Wire Up Cross-Tab Navigation

**Files:**
- Modify: `stats/src/App.tsx`
- Modify various components

**Context:** Enable navigation between tabs when clicking related items:

- Clicking a word in the Anime detail "words from this anime" should navigate to Vocabulary tab and open that word's detail
- Clicking an anime in the word detail "anime appearances" should navigate to Anime tab and open that anime's detail

**Step 1: Lift navigation state to App level**

Add state for: `selectedAnimeId`, `selectedWordId`. Pass navigation callbacks down to tabs.
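
The lifted state can be modeled as a small reducer so both cross-links share one transition table, which also keeps tab switching and detail selection atomic. A sketch only; the names are assumptions:

```typescript
export type TabId = 'overview' | 'anime' | 'trends' | 'vocabulary' | 'sessions';

export interface NavState {
  tab: TabId;
  selectedAnimeId: number | null;
  selectedWordId: number | null;
}

export type NavAction =
  | { type: 'openAnime'; animeId: number }
  | { type: 'openWord'; wordId: number }
  | { type: 'selectTab'; tab: TabId };

// Clicking a word anywhere jumps to the vocabulary tab with that word
// selected; clicking an anime jumps to the anime tab the same way.
export function navReduce(state: NavState, action: NavAction): NavState {
  switch (action.type) {
    case 'openAnime':
      return { ...state, tab: 'anime', selectedAnimeId: action.animeId };
    case 'openWord':
      return { ...state, tab: 'vocabulary', selectedWordId: action.wordId };
    case 'selectTab':
      return { ...state, tab: action.tab };
  }
}
```

In `App.tsx` this would sit behind `useReducer`, with `dispatch`-wrapping callbacks passed down to `AnimeWordList` and `WordDetailPanel`.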

**Step 2: Wire up cross-references**

- AnimeWordList: onClick navigates to vocabulary tab + opens word detail
- WordDetailPanel anime appearances: onClick navigates to anime tab + opens anime detail

**Step 3: Commit**

```bash
git commit -m "feat(stats): add cross-tab navigation between anime and vocabulary"
```

---

## Task 19: Final Integration Testing and Polish

**Files:**
- All modified files
- Modify: `stats/src/lib/dashboard-data.test.ts`

**Step 1: Run all backend tests**

Run: `node --test src/core/services/__tests__/stats-server.test.ts`

**Step 2: Run all frontend tests**

Run: `npx vitest run`

**Step 3: Build the stats frontend**

Run: `cd stats && npm run build`

**Step 4: Visual testing**

Start the app and verify each tab renders correctly with real data.

**Step 5: Final commit**

```bash
git commit -m "feat(stats): complete stats dashboard redesign"
```