Compare commits

...

22 Commits

Author SHA1 Message Date
24667ad6c9 fix(review): address latest CodeRabbit comments 2026-03-19 23:49:55 -07:00
42028d0a4d fix(subtitle): unify annotation token filtering 2026-03-19 23:48:38 -07:00
4a01cebca6 feat(stats): rename all token display text to words
Replace every user-facing "token(s)" label, tooltip, and message in the
stats UI with "words" so the terminology is consistent and friendlier
(e.g. "Words Seen", "word occurrences", "3.4 / 100 words", "Words Today").

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-19 23:48:37 -07:00
3995c396f8 fix(review): address latest CodeRabbit comments 2026-03-19 23:13:43 -07:00
544cd8aaa0 fix(stats): address review follow-ups 2026-03-19 22:55:46 -07:00
1932d2e25e fix(stats): format stats navigation helper 2026-03-19 22:21:57 -07:00
2258ededbd Show anime progress from latest session position
- include anime ID in media detail data
- use latest session position for episode progress
- update stats UI and lookup tests
2026-03-19 21:57:04 -07:00
64a88020c9 feat(stats): add 'View Anime' navigation button in MediaDetailView
- Added onNavigateToAnime prop to MediaDetailView
- Show 'View Anime →' button in the top-right when viewing media from
  non-anime origins (overview/sessions)
- Extract animeId from available sessions to enable navigation
- Button is hidden when already viewing from anime origin

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-19 21:43:30 -07:00
0ea1746123 feat(stats): add media-detail navigation from Sessions rows; fix(tokenizer): exclude そうだ auxiliary-stem from annotations
- Added hover-revealed ↗ button on SessionRow that navigates to the
  anime media-detail view for the session's videoId
- Added `sessions` origin type to MediaDetailOrigin and
  openSessionsMediaDetail() / closeMediaDetail() handling so the
  back button returns correctly to the Sessions tab ("Back to Sessions")
- Wired onNavigateToMediaDetail down through SessionsTab → SessionRow
- Excluded tokens with MeCab POS3 = 助動詞語幹 (e.g. そうだ grammar tails)
  from subtitle annotation metadata so frequency, JLPT, and N+1 styling
  no longer apply to grammar-tail tokens
- Added annotation-stage unit test and end-to-end tokenizeSubtitle test
  for the そうだ exclusion path
- Updated docs-site changelog, immersion-tracking, and
  subtitle-annotations pages to reflect both changes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-19 21:42:53 -07:00
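The そうだ exclusion described in this commit can be sketched as a small annotation-stage filter. This is an illustrative reduction, not SubMiner's actual API: the token shape, field names, and function name are assumptions; only the MeCab POS3 value `助動詞語幹` comes from the commit message.

```typescript
// Sketch of the annotation-stage exclusion above: tokens whose MeCab POS3
// is 助動詞語幹 (auxiliary stem, e.g. the そう in 〜そうだ) keep their surface
// text for hover lookup but lose annotation metadata, so frequency, JLPT,
// and N+1 styling never apply to grammar-tail tokens.
interface SubtitleToken {
  surface: string;
  pos3?: string; // third MeCab part-of-speech field
  annotation?: { frequencyRank?: number; jlptLevel?: string };
}

const AUXILIARY_STEM = "助動詞語幹";

function stripGrammarTailAnnotations(tokens: SubtitleToken[]): SubtitleToken[] {
  return tokens.map((token) =>
    token.pos3 === AUXILIARY_STEM
      ? { ...token, annotation: undefined } // stays tokenized, loses styling
      : token
  );
}
```

The token stays in the output so it remains visible and hoverable; only the styling metadata is dropped.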
59fa3b427d fix: exclude auxiliary grammar tails from subtitle annotations 2026-03-19 21:40:20 -07:00
ff95934f07 fix(launcher): address newest PR review feedback 2026-03-19 21:32:51 -07:00
c27ef90046 test(anki): cover non-blocking proxy enrichment 2026-03-19 21:32:32 -07:00
34ba602405 fix(stats): persist anime episode progress checkpoints 2026-03-19 21:31:47 -07:00
ecb4b07f43 docs: remove release cut note from changelog 2026-03-19 20:07:11 -07:00
1227706ac9 fix: address latest PR review feedback 2026-03-19 20:06:52 -07:00
9ad3ccfa38 fix(stats): address Claude review follow-ups 2026-03-19 19:55:05 -07:00
20f53c0b70 Switch known-word cache to incremental sync and doctor refresh
- Load persisted known-word cache on startup; reconcile adds/deletes/edits on timed sync
- Add `knownWords.addMinedWordsImmediately` (default `true`) for immediate mined-word updates
- Route full rebuild to explicit `subminer doctor --refresh-known-words` and expand tests/docs
2026-03-19 19:29:58 -07:00
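The incremental sync in this commit amounts to a reconcile step instead of a full rebuild. A minimal sketch, assuming the cache and the latest Anki snapshot can both be represented as word sets (the real SubMiner cache shape is not shown in the source):

```typescript
// Reconcile the persisted known-word cache against the latest snapshot:
// compute only the adds and deletes, rather than rebuilding the whole
// cache (which is routed to `subminer doctor --refresh-known-words`).
function reconcileKnownWords(
  cached: Set<string>,
  latest: Set<string>
): { toAdd: string[]; toRemove: string[] } {
  const toAdd = [...latest].filter((word) => !cached.has(word));
  const toRemove = [...cached].filter((word) => !latest.has(word));
  return { toAdd, toRemove };
}
```

On a timed sync, only `toAdd`/`toRemove` need to be applied, which keeps the startup-loaded cache cheap to maintain.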
72d78ba1ca chore: prepare release v0.7.0 2026-03-19 18:04:02 -07:00
43a0d11446 fix(subtitle): restore known and JLPT token annotations 2026-03-19 18:03:20 -07:00
1b5f0c6999 Normalize formatting in tracking snapshot and session detail test
- Collapse multiline JSX and import statements to single-line style
- No behavior changes; formatting-only cleanup
2026-03-19 17:04:36 -07:00
886c6ef1d7 cleanup 2026-03-19 15:47:05 -07:00
f2d6c70019 Fix stats command flow and tracking metrics regressions
- Route default `subminer stats` through attached `--stats`; keep daemon path for `--background`/`--stop`
- Update overview metrics: lookup rate uses lifetime Yomitan lookups per 100 tokens; new words dedupe by headword
- Suppress repeated macOS `Overlay loading...` OSD during fullscreen tracker flaps and improve session-detail chart scaling
- Add/adjust launcher, tracker query, stats server, IPC, overlay, and stats UI regression tests; add changelog fragments
2026-03-19 15:46:52 -07:00
155 changed files with 5177 additions and 2232 deletions

@@ -1,5 +1,72 @@
# Changelog
## v0.7.0 (2026-03-19)
### Added
- Immersion: Added Mine Word, Mine Sentence, and Mine Audio buttons to word detail example lines in the stats dashboard.
- Immersion: Mine Word creates a full Yomitan card (definition, reading, pitch accent) via the hidden search page bridge, then enriches with sentence audio, screenshot, and metadata extracted from the source video.
- Immersion: Mine Sentence and Mine Audio create cards directly with appropriate Lapis/Kiku flags, sentence highlighting, and media from the source file.
- Immersion: Media generation (audio + image/AVIF) runs in parallel and respects all AnkiConnect config options.
- Immersion: Added word exclusion list to the Vocabulary tab with localStorage persistence and a management modal.
- Immersion: Fixed truncated readings in the frequency rank table (e.g. お前 now shows おまえ instead of まえ).
- Immersion: Clicking a bar in the Top Repeated Words chart now opens the word detail panel.
- Immersion: Secondary subtitle text is now stored alongside primary subtitle lines for use as translation when mining cards from the stats page.
- Stats: Added `subminer stats -b` to start or reuse a dedicated background stats server without blocking normal SubMiner instances.
- Stats: Added `subminer stats -s` to stop the dedicated background stats server without closing browser tabs.
- Stats: Stats server startup now reuses a running background stats daemon instead of trying to bind a second local server in another SubMiner instance.
- Launcher: Added launcher passthrough for `-a/--args` so mpv receives raw extra launch flags (`--fs`, `--ytdl-format`, custom audio/video settings, etc.) from the `subminer` command.
- Launcher: Added `subminer stats` to launch the local stats dashboard, force-start the stats server on demand, and open the dashboard in your browser.
- Launcher: Added `subminer stats cleanup` to backfill vocabulary metadata and prune stale or excluded immersion rows on demand.
- Launcher: Added `stats.autoOpenBrowser` so browser launch after `subminer stats` can be enabled or disabled explicitly.
- Immersion: Added a local stats dashboard for immersion tracking with Overview, Anime, Trends, Vocabulary, and Sessions views.
- Immersion: Added anime progress, episode completion, Anki card links, and occurrence drill-down across the stats dashboard.
- Immersion: Added richer session timelines with new-word activity, cumulative totals, and pause/seek/card event markers.
- Immersion: Added completed-episodes and completed-anime totals to the Overview tracking snapshot.
### Changed
- Anki: Changed known-word cache settings to live under `ankiConnect.knownWords` instead of mixing them into `ankiConnect.nPlusOne`.
- Anki: Kept legacy `ankiConnect.nPlusOne` known-word keys and older `ankiConnect.behavior.nPlusOne*` keys as deprecated compatibility fallbacks.
- Stats: Added session deletion to the Sessions tab with the same confirmation prompt used by anime episode/session deletes, and removed all associated session rows from the stats database.
- Immersion: Kept immersion tracking history by default while preserving daily/monthly rollup maintenance.
- Immersion: Added exact lifetime summary reads for overview/anime/media stats so dashboard totals no longer depend on rescanning raw telemetry.
- Immersion: Reduced tracker storage overhead by removing duplicated subtitle text from subtitle-line event payloads.
- Immersion: Deduplicated episode cover-art blobs through a shared blob store and updated cover-art reads/writes to resolve shared images correctly.
- Immersion: Added indexes for large-history session, telemetry, vocabulary, kanji, and cover-art queries to keep dashboard reads fast as the SQLite database grows.
- Immersion: Renamed the stats dashboard's Anime tab to Library so the media browser label matches non-anime sources like YouTube and other yt-dlp-backed content.
- AniList: Standardized episode completion threshold by introducing `DEFAULT_MIN_WATCH_RATIO` and using it for both local watched state transitions and AniList post-watch progress updates.
- AniList: Episode auto-marking now uses the same threshold as AniList (`85%`), removing divergent completion behavior.
- Overlay: Excluded interjections and sound-effect tokens from subtitle annotation styling so they no longer inherit misleading lexical highlight treatment while still remaining visible and hoverable as plain subtitle tokens.
- Overlay: Expanded subtitle annotation noise filtering to also strip annotation metadata from standalone grammar-only helper tokens such as particles, auxiliaries, adnominals, common explanatory endings like `んです` / `のだ`, and merged trailing quote-particle forms like `...って` while keeping them tokenized for hover lookup.
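The standardized completion threshold above reduces to a single ratio check. A sketch under stated assumptions — the constant name comes from the changelog entry, but the function name and signature are illustrative:

```typescript
// Both local watched-state transitions and AniList post-watch progress
// updates compare watched time against the same ratio (85%).
const DEFAULT_MIN_WATCH_RATIO = 0.85;

function isEpisodeComplete(watchedSeconds: number, durationSeconds: number): boolean {
  if (durationSeconds <= 0) return false; // unknown duration: never auto-mark
  return watchedSeconds / durationSeconds >= DEFAULT_MIN_WATCH_RATIO;
}
```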
### Fixed
- Launcher: Fixed mpv Lua plugin binary auto-detection on Linux to also search `/usr/bin/subminer` and `/usr/local/bin/subminer` (lowercase), matching the conventional Unix wrapper name used by packaged installs such as the AUR package.
- Stats: Fixed the in-app stats overlay so it connects to the configured `stats.serverPort` instead of falling back to the default port.
- Overlay: Fixed subtitle frequency tagging for merged lookup-backed tokens like `陰に` by falling back to exact surface-form Yomitan frequencies when the normalized headword lookup misses.
- Overlay: Fixed MeCab merged-token position mapping across line breaks so merged content-plus-particle tokens like `陰に` keep their matched Yomitan frequency instead of inheriting shifted POS tags.
- Overlay: Fixed grouped frequency parsing in both Yomitan and fallback frequency-dictionary lookups so display values like `118,121` use the leading rank instead of collapsing the rank and occurrence count into `118121`.
- Overlay: Fixed frequency-rank ingestion to ignore Yomitan dictionaries explicitly marked `occurrence-based`, so raw occurrence counts are no longer treated as subtitle rank values.
- Overlay: Fixed inflected headword frequency tagging to prefer ranks from the selected Yomitan `termsFind` popup entry itself, ordered by configured dictionary priority, so forms like `潜み` use primary-dictionary ranks like `4073` before falling back to lower-priority raw lemma metadata such as `CC100`.
- Overlay: Fixed annotation-stage frequency filtering so exact kanji noun tokens like `者` keep their matched rank even when MeCab labels them `名詞/非自立`, instead of dropping the highlight after scan-time frequency lookup succeeds.
- Anki: Fixed repeated character-dictionary startup work by scheduling auto-sync only from mpv media-path changes instead of also re-triggering it from connection and media-title events for the same title.
- Overlay: Fixed macOS fullscreen overlay stability by keeping the passive visible overlay from stealing focus, re-raising the overlay window when reasserting its macOS topmost level, and tolerating one transient macOS tracker/helper miss before hiding the overlay.
- Overlay: Kept subtitle tokenization warmup one-shot for the lifetime of the app so later fullscreen/media churn on macOS does not replay the startup warmup gate after the first file is ready.
- Overlay: Added a bounded macOS tracker loss-grace window so fullscreen enter/leave transitions do not immediately hide and reload the overlay when the helper briefly loses the mpv window.
- Overlay: Skipped subtitle/tokenization refresh invalidation on character-dictionary auto-sync completion when the dictionary was already current, preventing startup flash/reload loops on unchanged media.
- Stats: Fixed session stats so known-word counts track real known-word occurrences without collapsing subtitle-line gaps.
- Stats: Fixed session word totals in session-facing stats views to prefer token counts when available, preventing known words from exceeding total words in the session chart.
- Stats: Fixed the stats Vocabulary tab blank-screen regression caused by a hook-order crash after vocabulary data finished loading.
- Anki: Fixed card-mine OSD feedback so the final mine result stops the Anki spinner first, then shows a single-line `✓`/`x` status without being overwritten by a later spinner tick.
- Stats: Removed the misleading `New words` series from expanded session charts; session detail now shows only the real total-word and known-word lines.
- Stats: Restored the cross-anime word table behavior in stats vocabulary surfaces so shared vocabulary entries no longer disappear or merge incorrectly across related media.
- Stats: `subminer stats -b` now runs as a standalone background stats daemon instead of reusing the main SubMiner app process, so the overlay app can still be launched separately for normal video watching.
- Stats: Dashboard word mining still works against the background daemon by using a short-lived hidden helper for the Yomitan add-note flow.
- Stats: Load full session timelines by default in stats session detail views so long sessions preserve complete telemetry history instead of being truncated by a fixed sample limit.
- Stats: Replaced heuristic stats word counts with Yomitan token counts, so session, media, anime, and trend subtitle totals now come directly from parsed subtitle tokens.
- Stats: Updated stats UI labels and lookup-rate copy to refer to tokens instead of words where those counts are shown.
- Overlay: Reduced repeated `Overlay loading...` popups on macOS when fullscreen tracker flaps briefly hide and recover the visible overlay.
- Stats: Scaled expanded session-detail known-word charts to the session's actual percentage range so small changes no longer render as a nearly flat line.
- JLPT: Reduced JLPT dictionary startup log noise by summarizing duplicate surface-form collisions instead of logging one line per duplicate entry.
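The grouped-frequency fix above (display values like `118,121` collapsing into `118121`) comes down to taking only the leading number as the rank. A minimal sketch — the helper name is hypothetical, not SubMiner's actual parser:

```typescript
// Grouped frequency-dictionary values pair a rank with an occurrence
// count ("118,121" = rank 118, 121 occurrences), so only the leading
// number is a valid rank. Plain numeric values pass through unchanged.
function parseLeadingRank(display: string): number | undefined {
  const lead = display.trim().split(",")[0];
  const rank = Number.parseInt(lead, 10);
  return Number.isFinite(rank) ? rank : undefined;
}
```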
## v0.6.5 (2026-03-15)
### Internal

@@ -1,14 +1,14 @@
<div align="center">
<img src="assets/SubMiner.png" width="140" alt="SubMiner logo">
# SubMiner
**Sentence-mine from mpv — look up words, one-key Anki export, immersion tracking.**
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
[![Linux](https://img.shields.io/badge/platform-Linux%20%7C%20macOS%20%7C%20Windows-informational)](https://github.com/ksyasuda/SubMiner)
[![Docs](https://img.shields.io/badge/docs-docs.subminer.moe-blueviolet)](https://docs.subminer.moe)
[![AUR](https://img.shields.io/aur/version/subminer-bin)](https://aur.archlinux.org/packages/subminer-bin)
</div>
@@ -75,6 +75,7 @@ git clone https://aur.archlinux.org/subminer-bin.git && cd subminer-bin && makep
<summary><b>Linux (AppImage)</b></summary>
```bash
mkdir -p ~/.local/bin
wget https://github.com/ksyasuda/SubMiner/releases/latest/download/SubMiner.AppImage -O ~/.local/bin/SubMiner.AppImage \
&& chmod +x ~/.local/bin/SubMiner.AppImage
wget https://github.com/ksyasuda/SubMiner/releases/latest/download/subminer -O ~/.local/bin/subminer \
@@ -107,6 +108,9 @@ Run `SubMiner.AppImage` (Linux), `SubMiner.app` (macOS), or `SubMiner.exe` (Wind
subminer video.mkv # auto-starts overlay + resumes playback
subminer --start video.mkv # explicit overlay start (if plugin auto_start=no)
subminer stats # open the immersion dashboard
subminer stats -b # keep the stats daemon running in background
subminer stats -s # stop the dedicated stats daemon
subminer stats cleanup # repair/prune stored stats vocabulary rows
```
---
@@ -114,7 +118,7 @@ subminer stats # open the immersion dashboard
## Requirements
| Required | Optional |
| ------------------------------------------------------ | ----------------------------- |
| [`mpv`](https://mpv.io) with IPC socket | `yt-dlp` |
| `ffmpeg` | `guessit` (AniSkip detection) |
| `mecab` + `mecab-ipadic` | `fzf` / `rofi` |

@@ -0,0 +1,80 @@
---
id: TASK-169
title: Cut minor release v0.7.0 for stats and runtime polish
status: Done
assignee:
- codex
created_date: '2026-03-19 17:20'
updated_date: '2026-03-19 17:31'
labels:
- release
- docs
- minor
dependencies:
- TASK-168
references:
- package.json
- README.md
- docs/RELEASING.md
- docs-site/changelog.md
- CHANGELOG.md
- release/release-notes.md
priority: high
ordinal: 108000
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Prepare the next release cut as `v0.7.0`, keeping 0-ver semantics by rolling the accumulated stats/dashboard, launcher, overlay, and stability work into the next minor line instead of a `1.0.0` release.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Repository version metadata is updated to `0.7.0`.
- [x] #2 Root release-facing docs are refreshed for the `0.7.0` release cut.
- [x] #3 `CHANGELOG.md` and `release/release-notes.md` contain the committed `v0.7.0` section and consumed fragments are removed.
- [x] #4 Public changelog/docs surfaces reflect the new release.
- [x] #5 Release-prep verification is recorded.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Bump `package.json` to `0.7.0`.
2. Refresh release-facing docs: root `README.md`, release guide versioning note, and public docs changelog summary.
3. Run `bun run changelog:build --version 0.7.0` to commit release artifacts and consume pending fragments.
4. Run release-prep verification (`changelog`, typecheck, tests, docs build if docs-site changed).
5. Update this task with notes, verification, and final summary.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Bumped `package.json` from `0.6.5` to `0.7.0` and refreshed the root release-facing copy in `README.md` so the release prep explicitly calls out the new stats/dashboard line plus the background stats daemon commands. Updated `docs/RELEASING.md` with the repo's 0-ver versioning policy and an explicit `--date` reminder after the changelog generator initially stamped `2026-03-20` from UTC instead of the intended local release date `2026-03-19`.
Ran `bun run changelog:build --version 0.7.0`, which generated `CHANGELOG.md` and `release/release-notes.md` and removed the queued `changes/*.md` fragments for the accumulated stats, launcher, overlay, JLPT, and stability work. Added a curated `v0.7.0` summary to `docs-site/changelog.md` so the public docs changelog stays aligned with the committed root changelog while remaining user-facing.
Verification:
- `bash .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh`
- `bun run changelog:lint`
- `bun run changelog:check --version 0.7.0`
- `bun run verify:config-example`
- `bun run typecheck`
- `bun run test:fast`
- `bun run test:env`
- `bun run build`
- `bun run docs:test`
- `bun run docs:build`
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Prepared minor release `v0.7.0` as the next 0-ver major line. Version metadata, root changelog, generated release notes, README release copy, release-guide policy, and the public docs changelog are now aligned for the release cut.
Docs update required: yes. Completed in `README.md`, `docs/RELEASING.md`, and `docs-site/changelog.md`.
Changelog fragment required: no new fragment for this task. Existing pending release fragments were consumed into the committed `v0.7.0` changelog section and `release/release-notes.md`.
Release-prep verification passed across changelog validation, config-example verification, typecheck, fast/env tests, full build, and docs-site test/build.
<!-- SECTION:FINAL_SUMMARY:END -->

@@ -0,0 +1,64 @@
---
id: TASK-177.1
title: Fix overview lookup rate metric
status: Done
assignee:
- '@codex'
created_date: '2026-03-19 17:46'
updated_date: '2026-03-19 17:54'
labels:
- stats
- immersion-tracking
- yomitan
dependencies: []
references:
- stats/src/components/overview/OverviewTab.tsx
- stats/src/lib/dashboard-data.ts
- stats/src/lib/yomitan-lookup.ts
- src/core/services/immersion-tracker/query.ts
- src/core/services/stats-server.ts
parent_task_id: TASK-177
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Update the stats homepage Tracking Snapshot so Lookup Rate reflects lifetime intentional Yomitan lookups normalized by total tokens seen, matching the newer stats semantics already used in session, media, and anime views.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Overview data exposes the lifetime totals needed to compute global Yomitan lookups per 100 tokens on the homepage
- [x] #2 The homepage Tracking Snapshot Lookup Rate card shows Yomitan lookup rate as `X / 100 tokens` with tooltip/copy aligned to that meaning
- [x] #3 Automated tests cover the lifetime totals plumbing and homepage summary/rendering change
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Extend overview lifetime hints/query plumbing to include total tokens seen and total intentional Yomitan lookups from finished sessions.
2. Add/adjust focused tests first for query hints, stats overview API typing/mocks, and overview summary formatting so the homepage metric fails under old semantics.
3. Update the overview summary/card to derive Lookup Rate from lifetime Yomitan lookups per 100 tokens and align tooltip/copy with that meaning.
4. Run focused verification on the touched query, stats-server, and stats UI tests; record results and blockers in the task notes.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Extended overview lifetime hints to include total tokens seen and total intentional Yomitan lookups from finished sessions so the homepage can compute a true global lookup rate.
Extracted the homepage Tracking Snapshot into a dedicated presentational component to keep OverviewTab smaller and make the Lookup Rate card copy directly testable.
Focused verification passed for query hints, IPC/stats overview plumbing, stats server overview response, dashboard summary logic, and homepage snapshot rendering.
SubMiner verifier core lane artifact: .tmp/skill-verification/subminer-verify-20260319-105320-7FDlwh. `bun run typecheck` passed there; `bun run test:fast` failed for a pre-existing/unrelated environment issue in scripts/update-aur-package.test.ts because scripts/update-aur-package.sh reported `mapfile: command not found`.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Homepage Lookup Rate now uses lifetime intentional Yomitan lookups normalized by lifetime tokens seen, matching the existing session/media/anime semantics instead of the old known-word hit-rate metric. I extended overview query hints and API typings with total token and Yomitan lookup totals, updated the overview summary builder to reuse the shared per-100-token formatter, and replaced the inline Tracking Snapshot block with a dedicated component that renders `X / 100 tokens` plus Yomitan-specific tooltip copy.
Tests added/updated: query hints coverage for the new lifetime totals, stats server and IPC overview fixtures, overview summary assertions, and a dedicated Tracking Snapshot render test for the homepage card text. Focused `bun test` runs passed for those touched areas. Repo-native verifier `--lane core` also passed `bun run typecheck`; its `bun run test:fast` step still fails for the unrelated existing `scripts/update-aur-package.sh: line 71: mapfile: command not found` environment issue.
<!-- SECTION:FINAL_SUMMARY:END -->
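The per-100-token metric this task describes can be sketched as a small formatter. This is an assumption-laden illustration — the shared formatter mentioned in the summary is not shown in the source, so the name, signature, and output string here are guesses aligned with the `X / 100 tokens` card copy:

```typescript
// Lifetime intentional Yomitan lookups normalized by lifetime tokens
// seen, rendered as "X / 100 tokens" for the Tracking Snapshot card.
function lookupRatePer100(lookups: number, tokensSeen: number): string {
  if (tokensSeen <= 0) return "0.0 / 100 tokens"; // no data yet
  const rate = (lookups * 100) / tokensSeen;
  return `${rate.toFixed(1)} / 100 tokens`;
}
```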

@@ -0,0 +1,62 @@
---
id: TASK-177.2
title: Count homepage new words by headword
status: Done
assignee:
- '@codex'
created_date: '2026-03-19 19:38'
updated_date: '2026-03-19 19:40'
labels:
- stats
- immersion-tracking
- vocabulary
dependencies: []
references:
- src/core/services/immersion-tracker/query.ts
- stats/src/components/overview/TrackingSnapshot.tsx
- stats/src/lib/dashboard-data.ts
parent_task_id: TASK-177
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Align the homepage New Words metric with the Known Words semantics by counting distinct headwords first seen in the selected window, so inflected or alternate forms of the same word do not inflate the summary.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Homepage new-word counts use distinct headwords by earliest first-seen timestamp instead of counting separate word-form rows
- [x] #2 Homepage tooltip/copy reflects the headword-based semantics
- [x] #3 Automated tests cover the headword de-duplication behavior and affected overview copy
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Change the new-word aggregate query to group `imm_words` by headword, compute each headword's earliest `first_seen`, and count headwords whose first sighting falls within today/week windows.
2. Add failing tests first for the aggregate path so multiple rows sharing a headword only contribute once.
3. Update homepage tooltip/copy to say unique headwords first seen today/week.
4. Run focused query and stats overview tests, then record verification and any blockers.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Updated the new-word aggregate to count distinct headwords by each headword's earliest `first_seen` timestamp, so multiple inflected/form rows for the same headword contribute only once.
Adjusted homepage tooltip copy to say unique headwords first seen today/week, keeping the visible card labels unchanged.
Focused verification passed for the query aggregate and homepage snapshot tests.
SubMiner verifier core lane artifact: .tmp/skill-verification/subminer-verify-20260319-123942-4intgW. `bun run typecheck` passed there; `bun run test:fast` still fails for the unrelated environment issue in scripts/update-aur-package.test.ts (`mapfile: command not found`).
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Homepage New Words now uses headword-level semantics instead of counting separate `(headword, word, reading)` rows. The aggregate query groups `imm_words` by headword, uses each headword's earliest `first_seen`, and counts headwords first seen today or this week so alternate forms do not inflate the summary. The homepage tooltip copy now explicitly says the metric is based on unique headwords.
Added focused regression coverage for the de-duplication rule in `getQueryHints` and for the updated homepage tooltip text. Targeted `bun test` runs passed for the touched query and stats UI files. Repo verifier `--lane core` again passed `bun run typecheck`; `bun run test:fast` remains blocked by the unrelated existing `scripts/update-aur-package.sh: line 71: mapfile: command not found` failure.
<!-- SECTION:FINAL_SUMMARY:END -->
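The de-duplication rule in this task — group word-form rows by headword, take each headword's earliest `first_seen`, count headwords first seen inside the window — can be sketched in memory. The row shape below is illustrative, not SubMiner's real `imm_words` schema:

```typescript
// Count distinct headwords whose earliest first-seen timestamp falls
// inside the window, so inflected/alternate forms of the same headword
// contribute only once.
interface WordRow {
  headword: string;
  firstSeen: number; // epoch ms
}

function countNewHeadwords(rows: WordRow[], windowStart: number): number {
  const earliest = new Map<string, number>();
  for (const row of rows) {
    const prev = earliest.get(row.headword);
    if (prev === undefined || row.firstSeen < prev) {
      earliest.set(row.headword, row.firstSeen);
    }
  }
  let count = 0;
  for (const ts of earliest.values()) {
    if (ts >= windowStart) count += 1;
  }
  return count;
}
```

The actual implementation does this in SQL (`GROUP BY headword` with `MIN(first_seen)`); the in-memory version states the same rule.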

@@ -0,0 +1,64 @@
---
id: TASK-177.3
title: Fix attached stats command flow and browser config
status: Done
assignee:
- '@codex'
created_date: '2026-03-19 20:15'
updated_date: '2026-03-19 20:17'
labels:
- launcher
- stats
- cli
dependencies: []
references:
- launcher/commands/stats-command.ts
- launcher/commands/command-modules.test.ts
- launcher/main.test.ts
- src/main/runtime/stats-cli-command.ts
- src/main/runtime/stats-cli-command.test.ts
parent_task_id: TASK-177
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Make `subminer stats` stay attached to the foreground app process instead of routing through daemon startup, while keeping background/stop behavior on the daemon path. Ensure browser opening for stats respects only `stats.autoOpenBrowser` in the normal stats flow.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Default `subminer stats` forwards through the attached foreground stats command path instead of the daemon-start path
- [x] #2 `subminer stats --background` and `subminer stats --stop` continue using the daemon control path
- [x] #3 Normal stats launches do not open a browser when `stats.autoOpenBrowser` is false, and automated tests cover the launcher/runtime regressions
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Add failing launcher tests first so default `stats` expects `--stats` forwarding while `--background` and `--stop` continue to expect daemon control flags.
2. Add/adjust runtime stats command tests to prove `stats.autoOpenBrowser=false` suppresses browser opening on the normal attached stats path.
3. Patch launcher forwarding logic in `launcher/commands/stats-command.ts` to choose foreground vs daemon flags correctly without changing cleanup handling.
4. Run targeted launcher and stats runtime tests, then record verification results and blockers.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Confirmed root cause: launcher default `stats` flow always forwarded `--stats-daemon-start` plus `--stats-daemon-open-browser`, which detached the terminal process and bypassed `stats.autoOpenBrowser` because browser opening happened in daemon control instead of the normal stats CLI handler.
Updated launcher forwarding so plain `subminer stats` now uses the attached `--stats` path, while explicit `--background` and `--stop` continue using daemon control flags.
Added launcher regression coverage for the attached/default path and preserved background/stop expectations; added runtime coverage proving `stats.autoOpenBrowser=false` suppresses browser opening on the normal stats path.
Verifier passed for `launcher-plugin` and `runtime-compat` lanes. Artifact: .tmp/skill-verification/subminer-verify-20260319-131703-ZaAaUV.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Fixed `subminer stats` so the default command now forwards to the normal attached `--stats` app path instead of the daemon-start path. That keeps the foreground process attached to the terminal as expected, while `subminer stats --background` and `subminer stats --stop` still use daemon control. Because the normal stats CLI path already respects `config.stats.autoOpenBrowser`, this also fixes the unwanted browser-open behavior that previously bypassed config via `--stats-daemon-open-browser`.
Added launcher command and launcher integration regressions for the new forwarding behavior, plus a runtime stats CLI regression that asserts `stats.autoOpenBrowser=false` suppresses browser opening. Verification passed with targeted launcher tests, targeted runtime stats tests, and the SubMiner verifier `launcher-plugin` + `runtime-compat` lanes.
<!-- SECTION:FINAL_SUMMARY:END -->
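The forwarding decision this task fixes can be sketched as a flag selector. Hedge: only `--stats` and `--stats-daemon-start` are quoted in the task; the stop flag and the function shape below are invented for illustration:

```typescript
// Plain `subminer stats` takes the attached foreground `--stats` path
// (which respects stats.autoOpenBrowser); explicit --background/--stop
// keep using daemon control flags.
function statsForwardFlags(opts: { background?: boolean; stop?: boolean }): string[] {
  if (opts.stop) return ["--stats-daemon-stop"]; // hypothetical flag name
  if (opts.background) return ["--stats-daemon-start"];
  return ["--stats"];
}
```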

@@ -0,0 +1,66 @@
---
id: TASK-182.2
title: Improve session detail known-word chart scaling
status: Done
assignee:
- codex
created_date: '2026-03-19 20:31'
updated_date: '2026-03-19 20:52'
labels:
- bug
- stats
- ui
dependencies: []
references:
- >-
/Users/sudacode/projects/japanese/SubMiner/stats/src/components/sessions/SessionDetail.tsx
- >-
/Users/sudacode/projects/japanese/SubMiner/stats/src/lib/session-detail.test.tsx
parent_task_id: TASK-182
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Adjust the expanded session-detail known-word percentage chart so the vertical range reflects the session's actual percent range instead of always spanning 0-100. Keep the chart easier to read while preserving the percent-based tooltip/legend behavior already used in the stats UI.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Expanded session detail scales the known/unknown percent chart to the session's observed percent range instead of hard-coding a 0-100 top bound
- [x] #2 The chart keeps a small headroom above the highest observed known-word percent so the line remains visually readable near the top edge
- [x] #3 Automated frontend coverage locks the new percent-domain behavior and preserves existing session-detail rendering
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Add a focused frontend regression test for the session-detail ratio chart domain calculation, covering a session whose known-word percentage stays in a narrow band below 100% and expecting a dynamic top bound with headroom.
2. Update `stats/src/components/sessions/SessionDetail.tsx` to compute a dynamic percent-axis domain and matching ticks for the ratio chart, keeping the lower bound at 0%, adding modest padding above the highest known percentage, rounding to clean tick steps, and capping at 100%.
3. Apply the computed percent-axis bounds consistently to the right-side Y axis and the session chart pause overlays so the visual framing stays aligned.
4. Run targeted frontend tests and the SubMiner verification helper on the touched files, then record results and any blockers in the task.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Implemented dynamic known-percentage axis scaling in `stats/src/components/sessions/SessionDetail.tsx`: the ratio chart now keeps a 0% floor, uses the highest observed known percentage plus 5 points of headroom for the top bound, rounds that bound up to clean 10-point ticks, caps at 100%, and enables `allowDataOverflow` so the stacked area chart actually honors the tighter domain.
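The axis rule above (0% floor, peak plus 5 points of headroom, rounded up to 10-point ticks, capped at 100) can be sketched as a small helper. This is a hypothetical standalone version of that logic, not the actual code in `SessionDetail.tsx`:

```typescript
// Sketch of the dynamic percent-axis top bound described above.
// `maxKnownPercent` is the highest observed known-word percentage in the session.
export function ratioAxisMax(maxKnownPercent: number): number {
  const padded = maxKnownPercent + 5; // headroom so the line stays readable near the top
  const rounded = Math.ceil(padded / 10) * 10; // round up to clean 10-point tick steps
  return Math.min(rounded, 100); // never exceed the percent scale
}
```

With `allowDataOverflow` enabled on the Recharts axis, the stacked unknown area can exceed this bound and be clipped, which is what lets the tighter domain actually take effect.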
Added frontend regression coverage in `stats/src/lib/session-detail.test.tsx` for the axis-max helper, covering both a narrow-band session and near-100% cap behavior.
Added user-visible changelog fragment `changes/2026-03-19-session-detail-chart-scaling.md`.
Verification: `bun test stats/src/lib/session-detail.test.tsx` passed; `bun run typecheck` passed; `bun run changelog:lint` passed; `bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane core stats/src/components/sessions/SessionDetail.tsx stats/src/lib/session-detail.test.tsx` ran and passed `typecheck` but failed `bun run test:fast` on a pre-existing unrelated issue in `scripts/update-aur-package.test.ts` / `scripts/update-aur-package.sh` (`mapfile: command not found`). Artifacts: `.tmp/skill-verification/subminer-verify-20260319-134440-JRHAUJ`.
Docs decision: no internal docs update required; the behavior change is localized UI presentation with no API/workflow change. Changelog decision: yes, required and completed because the fix is user-visible.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Improved expanded session-detail chart readability by replacing the fixed 0-100 known-word percentage axis with a dynamic top bound based on the session's highest observed known percentage plus modest headroom, rounded to clean ticks and capped at 100%. The ratio chart now also enables `allowDataOverflow` so Recharts preserves the tighter percent domain even though the stacked known/unknown areas sum to 100%.
Added frontend regression coverage for the new axis-max behavior and a changelog fragment for the user-visible stats fix.
Verification: `bun test stats/src/lib/session-detail.test.tsx`, `bun run typecheck`, and `bun run changelog:lint` passed. The SubMiner verification helper's `core` lane also passed `typecheck`, but `bun run test:fast` remains red on a pre-existing unrelated bash-compat failure in `scripts/update-aur-package.test.ts` / `scripts/update-aur-package.sh` (`mapfile: command not found`).
<!-- SECTION:FINAL_SUMMARY:END -->

View File

@@ -0,0 +1,91 @@
---
id: TASK-200
title: 'Address latest PR #19 CodeRabbit follow-ups'
status: Done
assignee:
- '@codex'
created_date: '2026-03-19 07:18'
updated_date: '2026-03-19 07:28'
labels:
- pr-review
- anki-integration
- launcher
milestone: m-1
dependencies: []
references:
- launcher/mpv.test.ts
- src/anki-integration.ts
- src/anki-integration/card-creation.ts
- src/anki-integration/runtime.ts
- src/anki-integration/known-word-cache.ts
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Validate the latest 2026-03-19 CodeRabbit review round on PR #19, implement only the confirmed fixes, and verify the touched launcher and Anki integration paths.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Each latest-round PR #19 CodeRabbit inline comment is validated against the current branch and classified as actionable or not warranted
- [x] #2 Confirmed correctness issues in launcher and Anki integration code are fixed with focused regression coverage where practical
- [x] #3 Targeted verification runs for the touched areas and the task notes record what changed versus what was rejected
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Validate the five inline comments from the 2026-03-19 CodeRabbit PR #19 review against current launcher and Anki integration code.
2. Add or extend focused tests for any confirmed launcher env-sandbox, notification-state, AVIF lead-in propagation, or known-word-cache lifecycle/scope regressions.
3. Apply the smallest safe fixes in `launcher/mpv.test.ts`, `src/anki-integration.ts`, `src/anki-integration/card-creation.ts`, `src/anki-integration/runtime.ts`, and `src/anki-integration/known-word-cache.ts` as needed.
4. Run targeted unit tests plus the SubMiner verification helper on the touched files, then record which comments were accepted or rejected in task notes.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Validated the five latest inline comments from CodeRabbit review `3973222927` on PR #19.
Accepted fixes:
- Hardened the three `findAppBinary` launcher tests against host leakage by sandboxing `SUBMINER_APPIMAGE_PATH` / `SUBMINER_BINARY_PATH` and stubbing executable checks so `/opt` and PATH resolution are deterministic.
- `showNotification()` now marks OSD/both updates as failed when `errorSuffix` is present instead of always rendering a success marker.
- `applyRuntimeConfigPatch()` now avoids starting or stopping known-word cache lifecycle work while the runtime is stopped, while still clearing cached state when highlighting is disabled.
- Extracted shared known-word cache lifecycle helpers and switched the persisted cache identity to the same lifecycle config used by runtime restart detection, so changes to `fields.word`, per-deck field mappings, or refresh interval invalidate stale cache state correctly.
Rejected fix:
- The `createSentenceCard()` AVIF lead-in comment was technically incomplete for this branch. There is no current caller that computes an `animatedLeadInSeconds` input for sentence-card creation, and the existing lead-in resolver depends on note media fields that do not exist before the new card's media is generated.
Regression coverage added:
- `src/anki-integration.test.ts` partial-failure OSD result marker.
- `src/anki-integration/runtime.test.ts` stopped-runtime known-word lifecycle guards.
- `src/anki-integration/known-word-cache.test.ts` cache invalidation when `fields.word` or per-deck field mappings change.
Verification:
- `bun test src/anki-integration/runtime.test.ts`
- `bun test src/anki-integration/known-word-cache.test.ts`
- `bun test src/anki-integration.test.ts --test-name-pattern 'marks partial update notifications as failures in OSD mode'`
- `bun test launcher/mpv.test.ts --test-name-pattern 'findAppBinary resolves ~/.local/bin/SubMiner.AppImage when it exists|findAppBinary resolves /opt/SubMiner/SubMiner.AppImage when ~/.local/bin candidate does not exist|findAppBinary finds subminer on PATH when AppImage candidates do not exist'`
- `bun test src/anki-integration.test.ts src/anki-integration/runtime.test.ts src/anki-integration/known-word-cache.test.ts launcher/mpv.test.ts`
- `bash .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh launcher/mpv.test.ts src/anki-integration.ts src/anki-integration/runtime.ts src/anki-integration/known-word-cache.ts src/anki-integration/runtime.test.ts src/anki-integration/known-word-cache.test.ts src/anki-integration.test.ts`
- `bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane launcher-plugin --lane core launcher/mpv.test.ts src/anki-integration.ts src/anki-integration/runtime.ts src/anki-integration/known-word-cache.ts src/anki-integration/runtime.test.ts src/anki-integration/known-word-cache.test.ts src/anki-integration.test.ts`
Verifier result:
- `launcher-plugin` lane passed (`test:launcher:smoke:src`, `test:plugin:src`).
- `core/typecheck` passed.
- `core/test-fast` failed for an unrelated existing environment issue in `scripts/update-aur-package.test.ts`: `scripts/update-aur-package.sh: line 71: mapfile: command not found` under the local macOS Bash environment.
- Verifier artifacts: `.tmp/skill-verification/subminer-verify-20260319-002617-UgpKUy`
Classification: actionable and fixed -> `launcher/mpv.test.ts` env leakage hardening, `src/anki-integration.ts` partial-failure OSD marker, `src/anki-integration/runtime.ts` started-guard for known-word lifecycle calls, `src/anki-integration/known-word-cache.ts` cache identity alignment with runtime lifecycle config.
Classification: not warranted as written -> `src/anki-integration/card-creation.ts` lead-in threading comment. No current `createSentenceCard()` caller computes or owns an `animatedLeadInSeconds` value, and the existing lead-in helper derives from preexisting note media fields, so blindly adding an optional parameter would not fix a real branch behavior bug.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Fixed four confirmed PR #19 latest-round CodeRabbit issues locally: deterministic launcher `findAppBinary` tests, correct partial-failure OSD result markers, started-state guards around known-word cache lifecycle restarts, and shared known-word cache identity logic so field-mapping changes invalidate stale cache state. Added focused regression coverage for each confirmed behavior.
One comment was intentionally not applied: the `createSentenceCard()` AVIF lead-in suggestion does not match the current branch architecture because no caller computes that value today and the existing resolver requires preexisting note media fields. Verification is green for all touched targeted tests plus the launcher-plugin/core typecheck lanes; the only remaining red is an unrelated existing `test:fast` failure in `scripts/update-aur-package.test.ts` caused by `mapfile` being unavailable in the local Bash environment.
<!-- SECTION:FINAL_SUMMARY:END -->

View File

@@ -0,0 +1,66 @@
---
id: TASK-201
title: Suppress repeated macOS overlay loading OSD during fullscreen tracker flaps
status: Done
assignee:
- '@codex'
created_date: '2026-03-19 18:47'
updated_date: '2026-03-19 19:01'
labels:
- bug
- macos
- overlay
dependencies: []
references:
- >-
/Users/sudacode/projects/japanese/SubMiner/src/core/services/overlay-visibility.ts
- >-
/Users/sudacode/projects/japanese/SubMiner/src/main/overlay-visibility-runtime.ts
- /Users/sudacode/projects/japanese/SubMiner/src/main/state.ts
- >-
/Users/sudacode/projects/japanese/SubMiner/src/core/services/overlay-visibility.test.ts
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Reduce macOS fullscreen annoyance where the visible overlay briefly loses tracking and re-shows the `Overlay loading...` OSD even though the overlay runtime is already initialized and no new instance is launching. Keep the first startup/loading feedback, but suppress repeat loading notifications caused by subsequent tracker churn during fullscreen enter/leave or focus flaps.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 The first macOS visible-overlay load still shows the existing `Overlay loading...` OSD when tracker data is not yet ready.
- [x] #2 Repeated macOS tracker flaps after the overlay has already recovered do not immediately re-show `Overlay loading...` on every loss/recovery cycle.
- [x] #3 Focused regression tests cover the repeated tracker-loss/recovery path and preserve the initial-load notification behavior.
- [x] #4 The change does not alter overlay runtime bootstrap or single-instance behavior; only notification suppression behavior changes.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Add focused failing regressions in `src/core/services/overlay-visibility.test.ts` that preserve the first macOS `Overlay loading...` OSD and suppress an immediate second OSD after tracker recovery/loss churn.
2. Extend the overlay-visibility state/runtime plumbing with a small macOS loading-OSD suppression state so tracker flap retries can be rate-limited without touching overlay bootstrap or single-instance logic.
3. Reset the suppression when the user explicitly hides the visible overlay so intentional hide/show retries can still surface first-load feedback.
4. Run focused verification for the touched overlay visibility/runtime tests and update the task with results.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Added optional loading-OSD suppression hooks to `src/core/services/overlay-visibility.ts` so macOS can rate-limit repeated `Overlay loading...` notifications without changing overlay bootstrap behavior.
Implemented service-local suppression state in `src/main/overlay-visibility-runtime.ts` with a 30s cooldown and explicit reset when the visible overlay is manually hidden, so fullscreen tracker flaps stay quiet but intentional hide/show retries can still show loading feedback.
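The cooldown-with-reset behavior described above can be sketched as a tiny gate. The factory name and shape here are hypothetical illustrations of the stated behavior, not the actual runtime code:

```typescript
// Sketch of the loading-OSD suppression gate: allow the first show, suppress
// repeats within the cooldown window, and reset when the overlay is manually hidden.
// `now` is injectable for testing.
export function createLoadingOsdGate(cooldownMs = 30_000, now: () => number = () => Date.now()) {
  let lastShownAt: number | null = null;
  return {
    // Returns true when the loading OSD may be shown, recording the show time.
    tryShow(): boolean {
      const t = now();
      if (lastShownAt !== null && t - lastShownAt < cooldownMs) return false;
      lastShownAt = t;
      return true;
    },
    // Called when the user explicitly hides the visible overlay.
    reset(): void {
      lastShownAt = null;
    },
  };
}
```

Keeping this state service-local (rather than in the visibility service itself) is what lets the suppression change behavior without touching overlay bootstrap or single-instance logic.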
Added focused regressions in `src/core/services/overlay-visibility.test.ts` for `loss -> recover -> immediate loss` suppression and for manual hide resetting suppression.
Verification: `bun test src/core/services/overlay-visibility.test.ts`; `bun test src/main/runtime/overlay-visibility-runtime-main-deps.test.ts src/main/runtime/overlay-visibility-runtime.test.ts`; `bun run typecheck`; `bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane runtime-compat src/core/services/overlay-visibility.ts src/main/overlay-visibility-runtime.ts src/core/services/overlay-visibility.test.ts` -> passed. Real-runtime lane skipped: change is notification suppression logic and cheap/runtime-compat coverage was sufficient for this scoped behavior change; no live mpv/macOS fullscreen session was run in this turn.
Docs update required: no. Changelog fragment required: yes; added `changes/2026-03-19-overlay-loading-osd-fullscreen-flaps.md`.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Reduced repeated macOS `Overlay loading...` popups caused by fullscreen tracker flap churn without touching overlay bootstrap or single-instance behavior. `src/core/services/overlay-visibility.ts` now accepts optional suppression hooks around the loading OSD path, and `src/main/overlay-visibility-runtime.ts` uses service-local state to rate-limit that OSD for 30 seconds while resetting the suppression when the visible overlay is explicitly hidden. Added focused regressions in `src/core/services/overlay-visibility.test.ts` to preserve the first-load notification, suppress immediate repeat notifications after tracker recovery/loss churn, and keep manual hide/show retries able to surface the loading OSD again. Added changelog fragment `changes/2026-03-19-overlay-loading-osd-fullscreen-flaps.md`. Verification passed with targeted overlay tests, typecheck, and the `runtime-compat` verifier lane; live macOS/mpv fullscreen runtime validation was not run in this turn.
<!-- SECTION:FINAL_SUMMARY:END -->

View File

@@ -0,0 +1,70 @@
---
id: TASK-202
title: Use ended session media position for anime episode progress
status: Done
assignee:
- Codex
created_date: '2026-03-19 14:55'
updated_date: '2026-03-19 17:36'
labels:
- stats
- ui
- bug
milestone: m-1
dependencies: []
references:
- stats/src/components/anime/EpisodeList.tsx
- stats/src/types/stats.ts
- src/core/services/immersion-tracker/session.ts
- src/core/services/immersion-tracker/query.ts
- src/core/services/immersion-tracker/storage.ts
priority: medium
ordinal: 105720
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
The anime episode list currently computes the `Progress` column from cumulative `totalActiveMs / durationMs`, which can exceed the intended watch-position meaning after rewatches or repeated sessions. Persist the playback position at the time a session ends and drive episode progress from that stored stop position instead.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Session finalization persists the playback position reached when the session ended.
- [x] #2 Anime episode queries expose the most recent ended-session media position for each episode.
- [x] #3 Episode-list progress renders from ended media position instead of cumulative active watch time.
- [x] #4 Regression coverage locks storage/query/UI behavior for the new progress source.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Add failing regression coverage for persisted ended media position and episode progress rendering.
2. Add `ended_media_ms` to the immersion-session schema and persist `lastMediaMs` when ending a session.
3. Thread the new field through episode queries/types and render episode progress from `endedMediaMs / durationMs`.
4. Run targeted verification plus typecheck, then record the outcome.
<!-- SECTION:PLAN:END -->
## Outcome
<!-- SECTION:OUTCOME:BEGIN -->
Added nullable `ended_media_ms` storage to immersion sessions, persisted `lastMediaMs` when sessions finalize, and exposed the most recent ended-session media position through anime episode queries/types. The anime episode list now renders `Progress` from `endedMediaMs / durationMs` instead of cumulative active watch time, so rewatches no longer inflate the displayed percentage.
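The `endedMediaMs / durationMs` rendering rule above amounts to a small clamped-percentage helper. This is a hypothetical sketch of that calculation (names are illustrative, not the actual `EpisodeList.tsx` code):

```typescript
// Sketch: render episode progress from the last ended-session media position.
// Returns null when either value is missing so the UI can omit the column,
// and clamps to [0, 100] so a stop position past the stored duration cannot
// inflate the percentage the way cumulative active time previously could.
export function episodeProgressPercent(
  endedMediaMs: number | null,
  durationMs: number | null,
): number | null {
  if (endedMediaMs === null || durationMs === null || durationMs <= 0) return null;
  return Math.min(100, Math.max(0, (endedMediaMs / durationMs) * 100));
}
```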
Verification:
- `bun test src/core/services/immersion-tracker/storage-session.test.ts`
- `bun test src/core/services/immersion-tracker/__tests__/query.test.ts`
- `bun test stats/src/lib/yomitan-lookup.test.tsx stats/src/lib/stats-ui-navigation.test.tsx`
- `bun run typecheck`
- `bun run changelog:lint`
- `bun x prettier --check 'src/core/services/immersion-tracker/types.ts' 'src/core/services/immersion-tracker/storage.ts' 'src/core/services/immersion-tracker/session.ts' 'src/core/services/immersion-tracker/query.ts' 'src/core/services/immersion-tracker/storage-session.test.ts' 'src/core/services/immersion-tracker/__tests__/query.test.ts' 'stats/src/types/stats.ts' 'stats/src/components/anime/EpisodeList.tsx' 'stats/src/lib/yomitan-lookup.test.tsx' 'stats/src/lib/stats-ui-navigation.test.tsx' 'backlog/tasks/task-202 - Use-ended-session-media-position-for-anime-episode-progress.md' 'changes/2026-03-19-stats-ended-media-progress.md'`
- `bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane core 'src/core/services/immersion-tracker/types.ts' 'src/core/services/immersion-tracker/storage.ts' 'src/core/services/immersion-tracker/session.ts' 'src/core/services/immersion-tracker/query.ts' 'src/core/services/immersion-tracker/storage-session.test.ts' 'src/core/services/immersion-tracker/__tests__/query.test.ts' 'stats/src/types/stats.ts' 'stats/src/components/anime/EpisodeList.tsx' 'stats/src/lib/yomitan-lookup.test.tsx' 'stats/src/lib/stats-ui-navigation.test.tsx' 'backlog/tasks/task-202 - Use-ended-session-media-position-for-anime-episode-progress.md' 'changes/2026-03-19-stats-ended-media-progress.md'`
- Verifier artifacts: `.tmp/skill-verification/subminer-verify-20260319-173511-AV7kUg/`
<!-- SECTION:OUTCOME:END -->

View File

@@ -0,0 +1,47 @@
---
id: TASK-203
title: Restore known and JLPT annotation for reading-mismatch subtitle tokens
status: Done
assignee:
- Codex
created_date: '2026-03-19 18:25'
updated_date: '2026-03-19 18:25'
labels:
- subtitle
- bug
dependencies: []
references:
- src/core/services/tokenizer/annotation-stage.ts
- src/core/services/tokenizer/annotation-stage.test.ts
priority: medium
ordinal: 105721
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Some subtitle tokens lose both known-word coloring and JLPT underline even though the popup resolves a valid dictionary term. Repro example: `大体` in `大体 僕だって困ってたんですよ!` can be known via kana-only Anki data (`だいたい`) while JLPT lookup should still resolve from the kanji surface/headword.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Subtitle annotation can mark a token known via its reading when the configured headword/surface lookup misses.
- [x] #2 JLPT eligibility no longer drops valid kanji terms just because their reading contains repeated kana patterns.
- [x] #3 Regression coverage locks the combined known + JLPT case for `大体`.
<!-- AC:END -->
## Outcome
<!-- SECTION:OUTCOME:BEGIN -->
Known-word annotation now falls back to the token reading after the configured headword/surface lookup misses, so kana-only known-card entries still light up matching subtitle tokens. JLPT eligibility now ignores repeated-kana noise checks on the reading when a real surface/headword is present, which preserves JLPT tagging for words like `大体`.
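The reading fallback described above can be sketched as an ordered membership check. This is a hypothetical illustration of the lookup order, not the actual annotation-stage code:

```typescript
// Sketch: a token is known if the configured headword or surface matches the
// known-word set, or - as a fallback - if its reading matches. This is what
// lets kana-only Anki data ("だいたい") light up the kanji token "大体".
export function isKnownToken(
  known: ReadonlySet<string>,
  token: { headword?: string; surface?: string; reading?: string },
): boolean {
  const { headword, surface, reading } = token;
  if (headword !== undefined && known.has(headword)) return true;
  if (surface !== undefined && known.has(surface)) return true;
  return reading !== undefined && known.has(reading);
}
```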
Verification:
- `bun test src/core/services/tokenizer/annotation-stage.test.ts`
<!-- SECTION:OUTCOME:END -->

View File

@@ -0,0 +1,60 @@
---
id: TASK-204
title: Make known-word cache incremental and avoid full rebuilds
status: Done
assignee:
- Codex
created_date: '2026-03-19 19:05'
updated_date: '2026-03-19 19:12'
labels:
- anki
- cache
- performance
dependencies: []
references:
- src/anki-integration/known-word-cache.ts
- src/anki-integration.ts
- src/config/resolve/anki-connect.ts
- src/config/definitions/defaults-integrations.ts
priority: high
ordinal: 105722
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Replace the known-word cache rebuild behavior with incremental synchronization. Startup should load existing cache state without immediately pulling all tracked Anki notes. Config-timed sync should reconcile adds, deletes, and in-place field edits against cached per-note state. Mined cards should optionally append their extracted words immediately after mining, enabled by default. Full rebuild should remain available only through explicit doctor tooling.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Known-word cache startup no longer performs an automatic full rebuild.
- [x] #2 Config-timed sync incrementally reconciles note additions, deletions, and edited word fields for the tracked known-word deck scope.
- [x] #3 Newly mined cards update the known-word cache immediately when the new config flag is enabled, and skip that fast path when disabled.
- [x] #4 Persisted cache state remains usable by stats endpoints that read the `words` set from disk.
- [x] #5 Regression tests cover startup behavior, incremental sync diffs, and the new config flag.
<!-- AC:END -->
## Outcome
<!-- SECTION:OUTCOME:BEGIN -->
Known-word cache startup now loads persisted state and schedules sync based on refresh timing instead of wiping and rebuilding immediately. Persisted cache state now includes per-note word snapshots so timed refreshes can remove deleted notes, update edited notes, and keep the global `words` set stable for stats consumers. Added `ankiConnect.knownWords.addMinedWordsImmediately`, default `true`, so newly mined cards can update the cache immediately without waiting for the next timed sync.
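The per-note reconciliation above can be sketched as a snapshot diff plus a flatten step. Names and shapes here are hypothetical, under the assumption that the persisted state maps note IDs to their extracted words:

```typescript
// Sketch of incremental sync: diff the cached note->words snapshot against the
// current Anki state to find additions, deletions, and in-place field edits,
// instead of rebuilding everything from scratch.
type NoteWords = Map<number, string[]>;

export function diffNoteSnapshots(cached: NoteWords, current: NoteWords) {
  const added: number[] = [];
  const removed: number[] = [];
  const edited: number[] = [];
  for (const [noteId, words] of current) {
    const prev = cached.get(noteId);
    if (prev === undefined) added.push(noteId);
    else if (prev.join("\u0000") !== words.join("\u0000")) edited.push(noteId);
  }
  for (const noteId of cached.keys()) {
    if (!current.has(noteId)) removed.push(noteId);
  }
  return { added, removed, edited };
}

// Rebuild the flat `words` set that stats endpoints read from disk.
export function flattenWords(snapshot: NoteWords): Set<string> {
  const words = new Set<string>();
  for (const list of snapshot.values()) for (const w of list) words.add(w);
  return words;
}
```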
Verification:
- `bun test src/anki-integration/known-word-cache.test.ts`
- `bun test src/config/resolve/anki-connect.test.ts src/config/config.test.ts`
- `bun test src/anki-integration.test.ts src/anki-integration/runtime.test.ts src/core/services/__tests__/stats-server.test.ts`
- `bun run test:config:src`
- `bun run typecheck`
- `bun run test:fast`
- `bun run test:env`
- `bun run build`
- `bun run test:smoke:dist`
<!-- SECTION:OUTCOME:END -->

View File

@@ -0,0 +1,53 @@
---
id: TASK-204.1
title: Restore stale-only startup known-word cache refresh
status: Done
assignee:
- '@Codex'
created_date: '2026-03-20 02:52'
updated_date: '2026-03-20 03:02'
labels:
- anki
- cache
- bug
dependencies: []
references:
- src/anki-integration/known-word-cache.ts
- src/anki-integration/known-word-cache.test.ts
- docs/plans/2026-03-19-known-word-cache-incremental-sync-design.md
parent_task_id: TASK-204
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Follow up on the incremental known-word cache change so startup still performs a refresh when the persisted cache is older than the configured refresh interval, while leaving fresh persisted state untouched.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Startup refreshes known words immediately when persisted cache state is stale for the configured interval.
- [x] #2 Startup skips the immediate refresh when persisted cache state is still fresh.
- [x] #3 Regression tests cover both stale and fresh startup paths.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Add focused known-word cache lifecycle tests that distinguish fresh startup state from stale startup state and verify the stale path currently fails.
2. Update startup scheduling in src/anki-integration/known-word-cache.ts so persisted cache still loads immediately, but startup only triggers an immediate refresh when the cache is stale for the configured interval or the cache scope/config changed.
3. Run focused known-word cache tests and targeted SubMiner verification for the touched cache/runtime lane, then update the task with results.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Verified current lifecycle behavior: fresh persisted known-word cache already skips immediate startup refresh when the cache scope/config matches; stale persisted cache already refreshes immediately. Added regression coverage for both startup paths plus a proxy integration test showing addNote responses return without waiting for background enrichment.
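The fresh-versus-stale startup decision can be sketched as a single predicate. This is a hypothetical illustration of the lifecycle rule the tests lock in, not the actual cache code:

```typescript
// Sketch: refresh immediately on startup only when the persisted cache is
// missing, its scope/config changed, or it is older than the refresh interval.
// Fresh persisted state stays load-only.
export function shouldRefreshOnStartup(
  lastSyncedAtMs: number | null,
  refreshIntervalMs: number,
  scopeChanged: boolean,
  nowMs: number,
): boolean {
  if (scopeChanged) return true; // config/scope drift invalidates the cache
  if (lastSyncedAtMs === null) return true; // no persisted state at all
  return nowMs - lastSyncedAtMs >= refreshIntervalMs; // stale beyond the interval
}
```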
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Added regression coverage for known-word cache startup behavior and proxy response timing. The cache tests now lock in the intended lifecycle: fresh persisted state stays load-only on startup, while stale persisted state refreshes immediately. Added a proxy integration test proving addNote responses return without waiting for background enrichment. Verification: targeted Bun tests passed (`bun test src/anki-connect.test.ts src/anki-integration/anki-connect-proxy.test.ts src/anki-integration/known-word-cache.test.ts src/anki-integration/note-update-workflow.test.ts src/anki-integration/runtime.test.ts`) and direct `bun run test:fast` passed. The `subminer-change-verification` helper repeatedly reported `bun run test:fast` as failed in its isolated lane despite the direct command passing, so that helper lane remains a flaky/blocking verification artifact rather than a reproduced code failure.
<!-- SECTION:FINAL_SUMMARY:END -->

View File

@@ -0,0 +1,62 @@
---
id: TASK-205
title: 'Address PR #19 Claude frontend review follow-ups'
status: Done
assignee:
- codex
created_date: '2026-03-20 02:41'
updated_date: '2026-03-20 02:46'
labels: []
milestone: m-1
dependencies: []
references:
- stats/src/components/vocabulary/VocabularyTab.tsx
- stats/src/hooks/useSessions.ts
- stats/src/hooks/useTrends.ts
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Assess Claude's latest PR #19 review, apply any valid frontend fixes from that review batch, and verify the stats dashboard behavior stays unchanged aside from the targeted performance and error-handling improvements.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 VocabularyTab avoids recomputing expensive known-word and summary aggregates on unrelated rerenders while preserving current displayed values.
- [x] #2 useSessions and useSessionDetail normalize rejected values into stable string errors without throwing from the catch handler.
- [x] #3 Targeted tests cover the addressed review items and pass locally.
- [x] #4 Any user-facing docs remain accurate after the changes.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Add focused tests that fail on the current branch for the two valid Claude findings: render-time aggregate recomputation in VocabularyTab and unsafe non-Error rejection handling in useSessions/useSessionDetail.
2. Update VocabularyTab to memoize the expensive summary and known-word aggregate calculations off the existing filteredWords/kanji/knownWords inputs without changing rendered values.
3. Normalize hook error handling to convert unknown rejection values into stable strings, matching the existing useTrends pattern.
4. Run the targeted stats/frontend test lane, verify no docs changes are needed, and record results in task notes.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Validated Claude's latest PR #19 review comment from 2026-03-20 and narrowed it to two valid frontend follow-ups: memoized VocabularyTab aggregates and non-Error-safe session hook error handling.
Added focused regression tests in stats/src/lib/vocabulary-tab.test.ts and stats/src/hooks/useSessions.test.ts before patching the implementation.
Verification: `cd stats && bun test src/lib/vocabulary-tab.test.ts src/hooks/useSessions.test.ts` passed; `bun run format:check:stats` passed.
Project-native verifier (`.agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane core ...`) passed root `bun run typecheck` and failed at `bun run test:fast` due to an unrelated existing failure in `scripts/update-aur-package.test.ts` (`mapfile: command not found`). Artifact: `.tmp/skill-verification/subminer-verify-20260319-194525-vxVD9V`.
No user-facing docs changes were needed because the fixes only affect render-time memoization and error normalization.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Assessed Claude's latest PR #19 review and applied the two valid follow-ups. `stats/src/components/vocabulary/VocabularyTab.tsx` now memoizes `buildVocabularySummary(filteredWords, kanji)` and the known-word count so unrelated rerenders do not rescan the filtered vocabulary list. `stats/src/hooks/useSessions.ts` now exports a small `toErrorMessage` helper and uses it in both `useSessions` and `useSessionDetail`, preventing `.catch()` handlers from throwing when a promise rejects with a non-`Error` value.
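The normalization described above can be sketched as follows; this mirrors the stated behavior of the `toErrorMessage` helper rather than reproducing the actual `useSessions.ts` source:

```typescript
// Sketch: normalize any rejection value into a stable string so `.catch()`
// handlers never throw when a promise rejects with a non-Error value.
export function toErrorMessage(value: unknown): string {
  if (value instanceof Error) return value.message;
  if (typeof value === "string") return value;
  try {
    // JSON.stringify returns undefined for non-serializable inputs like
    // `undefined`; fall through to String() in that case.
    return JSON.stringify(value) ?? String(value);
  } catch {
    return String(value); // e.g. circular objects
  }
}
```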
Added targeted regressions in `stats/src/lib/vocabulary-tab.test.ts` and `stats/src/hooks/useSessions.test.ts` to lock in the memoization shape and error normalization behavior. Verification passed for `cd stats && bun test src/lib/vocabulary-tab.test.ts src/hooks/useSessions.test.ts` and `bun run format:check:stats`. The repo-native verification wrapper for the classified `core` lane also passed root `bun run typecheck`, but `bun run test:fast` is currently blocked by an unrelated existing failure in `scripts/update-aur-package.test.ts` (`mapfile: command not found`); artifacts are recorded under `.tmp/skill-verification/subminer-verify-20260319-194525-vxVD9V`.
<!-- SECTION:FINAL_SUMMARY:END -->

View File

@@ -0,0 +1,80 @@
---
id: TASK-206
title: 'Assess latest PR #19 CodeRabbit review comments'
status: Done
assignee:
- '@codex'
created_date: '2026-03-20 02:51'
updated_date: '2026-03-20 02:59'
labels:
- pr-review
- launcher
- anki-integration
- docs
milestone: m-1
dependencies: []
references:
- launcher/commands/command-modules.test.ts
- launcher/commands/stats-command.ts
- launcher/config/cli-parser-builder.ts
- launcher/mpv.ts
- README.md
- src/anki-integration.ts
- src/anki-integration/known-word-cache.ts
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Validate the latest 2026-03-20 CodeRabbit review round on PR #19 against the current branch, implement only the confirmed fixes, and record which bot suggestions are stale, incorrect, or incomplete.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Each latest-round 2026-03-20 CodeRabbit inline comment on PR #19 is validated against current branch behavior and classified as actionable or not warranted
- [x] #2 Confirmed correctness issues in launcher, Anki integration, and docs are fixed with focused regression coverage where practical
- [x] #3 Targeted verification runs for the touched areas succeed or remaining unrelated failures are documented in task notes
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Pull the 2026-03-20 CodeRabbit review threads from PR #19 and validate each comment against the current branch, separating real issues from stale or incomplete bot guidance.
2. For each confirmed behavior bug, add or extend a focused failing test before changing production code; keep docs-only fixes scoped to the exact markdownlint/install issue.
3. Patch the smallest safe fixes in launcher, README, and Anki integration code, taking care not to overwrite unrelated local edits.
4. Run targeted tests and relevant SubMiner verification lanes for touched files, then record accepted versus rejected review comments in task notes and summary.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Validated the 2026-03-20 CodeRabbit PR #19 round as eight actionable items: one launcher test-name mismatch, three launcher behavior/test fixes, two README markdown/install fixes, one dead-code cleanup in Anki integration, and one real known-word cache deck-scoping bug.
Known-word cache review comment was correct in substance but needed a branch-specific fix: preserve deck->field scoping by querying per deck and carrying the allowed field list per note, rather than changing `notesInfo` shape.
Verification passed for targeted tests plus verifier docs/launcher-plugin lanes. Core verifier failed on unrelated pre-existing typecheck worktree state in `src/anki-integration/anki-connect-proxy.test.ts` (`TS2349` at line 395, `releaseProcessing?.()`), which is outside this task's touched files.
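The per-deck refresh shape described in the notes can be sketched as below. The `ankiRequest` helper, the deck-config type, and the field-extraction details are illustrative assumptions; only the `findNotes`/`notesInfo` action names come from the AnkiConnect API.

```typescript
// Sketch: query each configured deck separately and extract only that
// deck's allowed fields, so one deck's field list never leaks into
// another deck's notes. Names here are assumptions, not SubMiner's API.
type DeckConfig = Record<string, string[]>; // deck name -> allowed field names

async function collectKnownWords(
  decks: DeckConfig,
  ankiRequest: (action: string, params: object) => Promise<any>,
): Promise<Set<string>> {
  const known = new Set<string>();
  for (const [deck, fields] of Object.entries(decks)) {
    const noteIds = await ankiRequest("findNotes", { query: `deck:"${deck}"` });
    const notes = await ankiRequest("notesInfo", { notes: noteIds });
    for (const note of notes) {
      // Carry the allowed field list per note instead of flattening all
      // decks' fields into one global list.
      for (const field of fields) {
        const value = note.fields?.[field]?.value?.trim();
        if (value) known.add(value);
      }
    }
  }
  return known;
}
```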
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Assessed the latest 2026-03-20 CodeRabbit review round on PR #19 and applied all eight confirmed action items. Launcher behavior now surfaces non-zero stats-process exits after the startup handshake, rejects cleanup-only stats flags unless `cleanup` is selected, preserves empty quoted `mpv` args, and has updated regression coverage for each case. The known-word cache now preserves deck-specific field mappings during refresh by querying configured decks separately and extracting only the fields assigned to each deck; the unused `getPreferredWordValue` wrapper in `src/anki-integration.ts` was removed.
Documentation/test hygiene fixes also landed: the README platform badge no longer has an empty link target, Linux AppImage install instructions create `~/.local/bin` before downloads, the stats-command timing test was renamed to match actual behavior, and `launcher/picker.test.ts` now restores `XDG_DATA_HOME` safely while forcing Linux-path expectations explicitly so the file passes on macOS hosts.
Verification run:
- `bun test launcher/commands/command-modules.test.ts`
- `bun test launcher/parse-args.test.ts`
- `bun test launcher/mpv.test.ts`
- `bun test launcher/picker.test.ts`
- `bun test src/anki-integration/known-word-cache.test.ts`
- `bash .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh README.md launcher/commands/command-modules.test.ts launcher/commands/stats-command.ts launcher/config/cli-parser-builder.ts launcher/mpv.test.ts launcher/mpv.ts launcher/parse-args.test.ts launcher/picker.test.ts src/anki-integration.ts src/anki-integration/known-word-cache.test.ts src/anki-integration/known-word-cache.ts`
- `bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane docs --lane launcher-plugin --lane core README.md launcher/commands/command-modules.test.ts launcher/commands/stats-command.ts launcher/config/cli-parser-builder.ts launcher/mpv.test.ts launcher/mpv.ts launcher/parse-args.test.ts launcher/picker.test.ts src/anki-integration.ts src/anki-integration/known-word-cache.test.ts src/anki-integration/known-word-cache.ts`
Verifier results:
- `docs` lane passed (`docs:test`, `docs:build`)
- `launcher-plugin` lane passed (`test:launcher:smoke:src`, `test:plugin:src`)
- `core/typecheck` failed on unrelated existing worktree changes in `src/anki-integration/anki-connect-proxy.test.ts(395,5)`: `TS2349 This expression is not callable. Type 'never' has no call signatures.`
- Verifier artifacts: `.tmp/skill-verification/subminer-verify-20260319-195752-RNLVgE`
<!-- SECTION:FINAL_SUMMARY:END -->


@@ -0,0 +1,67 @@
---
id: TASK-207
title: 'Verify PR #19 follow-up typecheck blocker is cleared'
status: Done
assignee:
- '@codex'
created_date: '2026-03-20 03:03'
updated_date: '2026-03-20 03:04'
labels:
- pr-review
- anki-integration
- verification
milestone: m-1
dependencies: []
references:
- src/anki-integration/anki-connect-proxy.test.ts
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Confirm the previously unrelated `anki-connect-proxy.test.ts` typecheck failure no longer blocks verification for the PR #19 CodeRabbit follow-up work, and only patch it if the failure still reproduces.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Reproduce or clear the `src/anki-integration/anki-connect-proxy.test.ts` typecheck blocker with current workspace state
- [x] #2 If the blocker still exists, apply the smallest safe fix and verify it
- [x] #3 Document the verification result and any remaining unrelated blockers
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Re-run `bun run typecheck` and a focused proxy test against the current workspace to confirm whether the previous `anki-connect-proxy.test.ts` failure still reproduces.
2. If the failure reproduces, use the typecheck failure itself as the red test, patch the smallest type-safe fix in the test, and rerun focused verification.
3. Re-run the relevant verifier lane(s), then record whether the blocker is cleared or if any unrelated failures remain.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Re-ran `bun run typecheck` against the current workspace and the prior `src/anki-integration/anki-connect-proxy.test.ts` blocker no longer reproduces.
Focused verification passed for `bun test src/anki-integration/anki-connect-proxy.test.ts`. Core verifier now passes `typecheck` and reaches `test:fast`.
Current remaining unrelated verifier failure is unchanged local environment behavior in `scripts/update-aur-package.test.ts`: `scripts/update-aur-package.sh: line 71: mapfile: command not found` under macOS Bash. Artifact: `.tmp/skill-verification/subminer-verify-20260319-200320-vy2YHa`.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Verified the previously reported PR #19 follow-up typecheck blocker is cleared in the current workspace. `bun run typecheck` now passes, and the focused proxy regression file `src/anki-integration/anki-connect-proxy.test.ts` also passes, including the background-enrichment response timing test.
Re-running the SubMiner core verifier confirms the blocker moved forward: `core/typecheck` passes, and the remaining `core/test-fast` failure is unrelated to the proxy test. The only red is the existing macOS Bash compatibility issue in `scripts/update-aur-package.test.ts`, where `scripts/update-aur-package.sh` uses `mapfile` and exits with `line 71: mapfile: command not found`.
Verification run:
- `bun run typecheck`
- `bun test src/anki-integration/anki-connect-proxy.test.ts`
- `bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane core src/anki-integration/anki-connect-proxy.test.ts`
Verifier result:
- `core/typecheck` passed
- `core/test-fast` failed only in `scripts/update-aur-package.test.ts` because local macOS Bash lacks `mapfile`
- Artifact: `.tmp/skill-verification/subminer-verify-20260319-200320-vy2YHa`
<!-- SECTION:FINAL_SUMMARY:END -->


@@ -0,0 +1,72 @@
---
id: TASK-208
title: 'Assess newest PR #19 CodeRabbit round after 1227706'
status: Done
assignee:
- '@codex'
created_date: '2026-03-20 03:37'
updated_date: '2026-03-20 03:47'
labels:
- pr-review
- launcher
- anki-integration
milestone: m-1
dependencies: []
references:
- launcher/commands/stats-command.ts
- launcher/mpv.ts
- src/anki-integration.ts
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Validate the newest 2026-03-20 03:23 CodeRabbit review round on PR #19 after commit `1227706`, implement only the confirmed fixes, and record any bot suggestions that are stale or technically incomplete.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Each newest-round CodeRabbit inline comment posted after commit `1227706` is validated against current branch behavior and classified as actionable or not warranted
- [x] #2 Confirmed issues are fixed with focused regression coverage where practical
- [x] #3 Targeted verification runs for the touched areas succeed or remaining unrelated failures are documented
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Pull the three newest CodeRabbit inline threads posted after commit `1227706` and restate each finding against the current branch code.
2. For each confirmed behavior bug, add or extend a focused failing test before changing production code; reject any stale or incorrect bot suggestion with notes.
3. Patch the smallest safe fixes in `launcher/commands/stats-command.ts`, `launcher/mpv.ts`, and/or `src/anki-integration.ts` as warranted, without disturbing unrelated local edits.
4. Run targeted tests and the cheapest sufficient verifier lanes, then record accepted versus rejected comments in task notes and summary.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Validated the newest 2026-03-20 03:23 CodeRabbit round as three comments: two actionable launcher issues and one non-warranted Anki suggestion.
Accepted fixes: cancel the pending stats response poll when the attached app exits non-zero before startup response, and surface `spawnSync()` launch/stop errors in launcher mpv helpers instead of treating `result.status ?? 0` / ignored status as success.
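The `spawnSync()` handling accepted above can be sketched like this; the function name is illustrative, not the launcher's actual helper.

```typescript
import { spawnSync } from "node:child_process";

// Sketch: surface spawnSync launch errors and non-zero/signal exits
// instead of treating them as success. Illustrative, not SubMiner's code.
function runAndReport(cmd: string, args: string[]): number {
  const result = spawnSync(cmd, args, { stdio: "ignore" });
  if (result.error) {
    // spawn itself failed (e.g. binary missing): report non-zero, not success.
    console.warn(`failed to launch ${cmd}: ${result.error.message}`);
    return 1;
  }
  // A null status (process killed by a signal) is also not success,
  // so the fallback is 1 rather than the old `result.status ?? 0`.
  return result.status ?? 1;
}
```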
Rejected fix: the `src/anki-integration.ts` / card-creation suggestion would double count locally mined cards. Local sentence mining already records stats in `src/main/runtime/anki-actions.ts` when `mineSentenceCardCore` returns `true`; adding a second callback in card creation would increment tracker counts twice for the same card.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Assessed the newest CodeRabbit PR #19 round after commit `1227706` and fixed the two confirmed launcher regressions. `runStatsCommand()` now gives the startup response waiter an abort signal and cancels the polling loop immediately when the attached app exits non-zero before startup response, covering both the normal stats startup race and the cleanup/startup race. `launchTexthookerOnly()` now fails non-zero when `spawnSync()` reports an execution error, and `stopOverlay()` logs a warning when the stop command cannot be spawned or exits non-zero instead of silently treating that path as success.
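The cancellable startup poll can be sketched as below; the function name and polling contract are illustrative assumptions about the shape described above, not `runStatsCommand()` itself.

```typescript
// Sketch: a polling waiter that stops as soon as its AbortSignal fires,
// e.g. when the attached app exits non-zero before the startup response.
async function waitForStartupResponse(
  check: () => Promise<boolean>,
  signal: AbortSignal,
  intervalMs = 50,
): Promise<boolean> {
  while (!signal.aborted) {
    if (await check()) return true;
    // Sleep briefly before the next poll; an abort takes effect on the
    // next loop iteration instead of polling forever.
    await new Promise<void>((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // aborted: the app exited before responding
}
```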
One bot comment was intentionally rejected: recording mined-card stats inside the direct card-creation path would double count locally mined cards, because the successful local mining flow already records cards in `src/main/runtime/anki-actions.ts` after `mineSentenceCardCore()` returns `true`.
Verification run:
- `bun test launcher/commands/command-modules.test.ts`
- `bun test launcher/mpv.test.ts`
- `bun run typecheck`
- `bash .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh launcher/commands/stats-command.ts launcher/commands/command-modules.test.ts launcher/mpv.ts launcher/mpv.test.ts`
- `bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane launcher-plugin launcher/commands/stats-command.ts launcher/commands/command-modules.test.ts launcher/mpv.ts launcher/mpv.test.ts`
Verifier result:
- `launcher-plugin` lane passed (`test:launcher:smoke:src`, `test:plugin:src`)
- `typecheck` passed
- Verifier artifacts: `.tmp/skill-verification/subminer-verify-20260319-204639-dzUj16`
<!-- SECTION:FINAL_SUMMARY:END -->


@@ -0,0 +1,59 @@
---
id: TASK-209
title: Exclude grammar-tail そうだ from subtitle annotations
status: Done
assignee:
- codex
created_date: '2026-03-20 04:06'
updated_date: '2026-03-20 04:33'
labels:
- bug
- tokenizer
dependencies: []
references:
- >-
/Users/sudacode/projects/japanese/SubMiner/src/core/services/tokenizer/annotation-stage.ts
- >-
/Users/sudacode/projects/japanese/SubMiner/src/core/services/tokenizer/annotation-stage.test.ts
- >-
/Users/sudacode/projects/japanese/SubMiner/src/core/services/tokenizer.test.ts
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Sentence-final grammar-tail `そうだ` tokens can still receive subtitle annotation styling, including frequency highlighting, when Yomitan returns a standalone `そうだ` token and MeCab enriches it as an auxiliary-stem/copula pattern (`名詞|助動詞`, `助動詞語幹`). Keep the subtitle text visible, but treat this grammar tail like other grammar-only endings so it renders without annotation metadata.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Sentence-final grammar-tail `そうだ` tokens enriched as auxiliary-stem/copula patterns do not receive frequency highlighting or other subtitle annotation metadata.
- [x] #2 The preceding lexical token in cases like `与えるそうだ` keeps its existing annotation behavior.
- [x] #3 Regression tests cover the annotation-stage exclusion and end-to-end subtitle tokenization for the `そうだ` grammar-tail case.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Add focused regression coverage for the reported `与えるそうだ` case at both annotation-stage and tokenizeSubtitle levels.
2. Reproduce failure by modeling the MeCab-enriched grammar-tail shape (`名詞|助動詞`, `特殊`, `助動詞語幹`) that currently keeps frequency metadata.
3. Update subtitle-annotation exclusion logic to recognize auxiliary-stem/copula grammar tails via POS metadata plus normalized tail text, not a raw sentence-specific string match.
4. Re-run targeted tokenizer and annotation-stage tests, then record the verification commands and outcome in the task notes.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Investigated reported `与えるそうだ` case. MeCab tags `そう` as `名詞,特殊,助動詞語幹` and `だ` as `助動詞`; after overlap enrichment the Yomitan token becomes `pos1=名詞|助動詞`, `pos2=特殊`, `pos3=助動詞語幹`, which currently escapes subtitle-annotation exclusion and can keep a frequency rank.
Implemented a POS-shape subtitle-annotation exclusion for MeCab-enriched auxiliary-stem grammar tails. The new predicate keys off merged tokens whose POS tags stay within `名詞/助動詞/助詞` and whose POS3 includes `助動詞語幹`, which clears annotation metadata for `そうだ`-style tails without hard-coding the full subtitle text.
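The POS-shape predicate can be sketched as follows, under the assumption that merged tokens expose pipe-joined POS tags; the field names and predicate name are illustrative, not the annotation stage's actual code.

```typescript
// Sketch: detect MeCab-enriched auxiliary-stem grammar tails (そうだ-style)
// by POS shape rather than by matching the subtitle text itself.
interface MergedToken {
  pos1: string; // pipe-joined primary tags, e.g. "名詞|助動詞"
  pos3: string; // pipe-joined tertiary tags, e.g. "助動詞語幹"
}

const GRAMMAR_POS = new Set(["名詞", "助動詞", "助詞"]);

function isAuxiliaryStemGrammarTail(token: MergedToken): boolean {
  // Every merged POS1 tag must stay within noun/auxiliary/particle...
  const allGrammar = token.pos1.split("|").every((tag) => GRAMMAR_POS.has(tag));
  // ...and some POS3 tag must mark an auxiliary stem (助動詞語幹).
  const hasAuxStem = token.pos3.split("|").includes("助動詞語幹");
  return allGrammar && hasAuxStem;
}
```

Tokens matching this predicate keep their text and hover lookup but have annotation metadata (frequency rank, JLPT level) cleared.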
Verification: `bun test src/core/services/tokenizer/annotation-stage.test.ts`, `bun test src/core/services/tokenizer.test.ts --test-name-pattern 'explanatory ending|interjection|single-kana merged tokens from frequency highlighting|auxiliary-stem そうだ grammar tails|composite function/content token from frequency highlighting|keeps frequency for content-led merged token with trailing colloquial suffixes'`
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Added regression coverage for `与えるそうだ` and updated subtitle annotation exclusion logic to drop annotation metadata for MeCab-enriched auxiliary-stem grammar tails. The fix is POS-driven rather than sentence-specific, so `そうだ`-style grammar endings stay visible/hoverable as plain text while neighboring lexical tokens keep their existing frequency/JLPT behavior.
<!-- SECTION:FINAL_SUMMARY:END -->


@@ -0,0 +1,62 @@
---
id: TASK-210
title: Show latest session position in anime episode progress
status: Done
assignee:
- '@Codex'
created_date: '2026-03-20 04:09'
updated_date: '2026-03-20 04:25'
labels:
- stats
- bug
- ui
milestone: m-1
dependencies: []
references:
- stats/src/components/anime/EpisodeList.tsx
- src/core/services/immersion-tracker/query.ts
- src/core/services/immersion-tracker/session.ts
- src/core/services/immersion-tracker-service.ts
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Anime episode rows in stats can show watch time and lookups from the latest session while the Progress column stays blank because it only reads `ended_media_ms` from ended sessions. Update the progress source so a just-watched episode reflects the latest known session stop position without falling back to cumulative watch time.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Anime episode progress uses the latest known session position for the episode, including the most recent active session when available.
- [x] #2 Ended-session progress remains correct and does not regress to cumulative watch time.
- [x] #3 Regression coverage locks query and/or UI behavior for active-session and ended-session episode progress.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Add failing regression coverage for anime episode progress when the latest session is still active but has a known playback position.
2. Persist the latest playback position on the active `imm_sessions` row during playback so stats queries can read it before session finalization.
3. Update anime episode queries to use the newest known session position for progress while preserving ended-session behavior.
4. Run targeted verification for immersion tracker, stats query, and cheap repo checks; record results and task outcome.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Root cause: stale active-session recovery rebuilt session state with `lastMediaMs = null`, so `finalizeSessionRecord` overwrote persisted progress checkpoints with `ended_media_ms = NULL` during startup reconciliation.
Implemented telemetry-flush checkpointing to persist `lastMediaMs` onto the active `imm_sessions` row, preserved that checkpoint through stale-session reconciliation, and updated anime episode progress queries to read the latest known non-null session position across active or ended sessions.
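The "latest known position" selection can be sketched in-memory as below. Field names mirror the notes (`endedMediaMs` for the final position, `lastMediaMs` for the flush checkpoint), but the real read runs as a SQL query against `imm_sessions`, so this shape is illustrative only.

```typescript
// Sketch: newest non-null session position across active or ended sessions.
interface SessionRow {
  startedAt: number;
  endedMediaMs: number | null; // set when the session ends normally
  lastMediaMs: number | null;  // checkpoint written on telemetry flush
}

function latestKnownPosition(rows: SessionRow[]): number | null {
  const newestFirst = [...rows].sort((a, b) => b.startedAt - a.startedAt);
  for (const row of newestFirst) {
    // Prefer the final position; fall back to the active-session checkpoint.
    const pos = row.endedMediaMs ?? row.lastMediaMs;
    if (pos !== null) return pos;
  }
  return null;
}
```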
Verification: targeted regressions passed (`bun test src/core/services/immersion-tracker-service.test.ts --test-name-pattern 'flushTelemetry checkpoints latest playback position on the active session row|startup finalizes stale active sessions and applies lifetime summaries'`, `bun test src/core/services/immersion-tracker/__tests__/query.test.ts --test-name-pattern 'getAnimeEpisodes prefers the latest session media position when the latest session is still active|getAnimeEpisodes returns latest ended media position and aggregate metrics'`), broader tracker/query suite passed (`bun test src/core/services/immersion-tracker-service.test.ts src/core/services/immersion-tracker/__tests__/query.test.ts`), `bun run typecheck` passed via verifier, `bun run changelog:lint` passed.
Verification blocker: `.agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane core ...` reported `bun run test:fast` failure from pre-existing `scripts/update-aur-package.test.ts` (`mapfile: command not found` under bash), unrelated to this change set.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Persist anime episode progress checkpoints before session finalization so stats can survive crashes/restarts and still show the latest known watch position. Telemetry flushes now checkpoint `lastMediaMs` onto the active `imm_sessions` row, stale-session recovery preserves that checkpoint when finalizing recovered sessions, and `getAnimeEpisodes` now reads the newest non-null session position whether it came from an active or ended session.
Added regressions for active-session checkpoint persistence, stale-session recovery preserving `ended_media_ms`, and episode queries preferring the latest known session position. Verification passed for the targeted and broader immersion tracker/query suites, plus `bun run typecheck` and `bun run changelog:lint`. The verifier's `bun run test:fast` step still fails on the pre-existing `scripts/update-aur-package.test.ts` bash `mapfile` issue, which is outside this task's scope.
<!-- SECTION:FINAL_SUMMARY:END -->


@@ -1,5 +0,0 @@
type: changed
area: anki
- Changed known-word cache settings to live under `ankiConnect.knownWords` instead of mixing them into `ankiConnect.nPlusOne`.
- Kept legacy `ankiConnect.nPlusOne` known-word keys and older `ankiConnect.behavior.nPlusOne*` keys as deprecated compatibility fallbacks.


@@ -1,4 +0,0 @@
type: fixed
area: launcher
- Fixed mpv Lua plugin binary auto-detection on Linux to also search `/usr/bin/subminer` and `/usr/local/bin/subminer` (lowercase), matching the conventional Unix wrapper name used by packaged installs such as the AUR package.


@@ -1,4 +0,0 @@
type: changed
area: stats
- Added session deletion to the Sessions tab with the same confirmation prompt used by anime episode/session deletes, and removed all associated session rows from the stats database.


@@ -1,4 +0,0 @@
type: fixed
area: stats
- Fixed the in-app stats overlay so it connects to the configured `stats.serverPort` instead of falling back to the default port.


@@ -1,9 +0,0 @@
type: fixed
area: overlay
- Fixed subtitle frequency tagging for merged lookup-backed tokens like `陰に` by falling back to exact surface-form Yomitan frequencies when the normalized headword lookup misses.
- Fixed MeCab merged-token position mapping across line breaks so merged content-plus-particle tokens like `陰に` keep their matched Yomitan frequency instead of inheriting shifted POS tags.
- Fixed grouped frequency parsing in both Yomitan and fallback frequency-dictionary lookups so display values like `118,121` use the leading rank instead of collapsing the rank and occurrence count into `118121`.
- Fixed frequency-rank ingestion to ignore Yomitan dictionaries explicitly marked `occurrence-based`, so raw occurrence counts are no longer treated as subtitle rank values.
- Fixed inflected headword frequency tagging to prefer ranks from the selected Yomitan `termsFind` popup entry itself, ordered by configured dictionary priority, so forms like `潜み` use primary-dictionary ranks like `4073` before falling back to lower-priority raw lemma metadata such as `CC100`.
- Fixed annotation-stage frequency filtering so exact kanji noun tokens like `者` keep their matched rank even when MeCab labels them `名詞/非自立`, instead of dropping the highlight after scan-time frequency lookup succeeds.


@@ -1,4 +0,0 @@
type: fixed
area: anki
- Fixed repeated character-dictionary startup work by scheduling auto-sync only from mpv media-path changes instead of also re-triggering it from connection and media-title events for the same title.


@@ -1,7 +0,0 @@
type: fixed
area: overlay
- Fixed macOS fullscreen overlay stability by keeping the passive visible overlay from stealing focus, re-raising the overlay window when reasserting its macOS topmost level, and tolerating one transient macOS tracker/helper miss before hiding the overlay.
- Kept subtitle tokenization warmup one-shot for the lifetime of the app so later fullscreen/media churn on macOS does not replay the startup warmup gate after the first file is ready.
- Added a bounded macOS tracker loss-grace window so fullscreen enter/leave transitions do not immediately hide and reload the overlay when the helper briefly loses the mpv window.
- Skipped subtitle/tokenization refresh invalidation on character-dictionary auto-sync completion when the dictionary was already current, preventing startup flash/reload loops on unchanged media.


@@ -1,11 +0,0 @@
type: added
area: immersion
- Added Mine Word, Mine Sentence, and Mine Audio buttons to word detail example lines in the stats dashboard.
- Mine Word creates a full Yomitan card (definition, reading, pitch accent) via the hidden search page bridge, then enriches with sentence audio, screenshot, and metadata extracted from the source video.
- Mine Sentence and Mine Audio create cards directly with appropriate Lapis/Kiku flags, sentence highlighting, and media from the source file.
- Media generation (audio + image/AVIF) runs in parallel and respects all AnkiConnect config options.
- Added word exclusion list to the Vocabulary tab with localStorage persistence and a management modal.
- Fixed truncated readings in the frequency rank table (e.g. お前 now shows おまえ instead of まえ).
- Clicking a bar in the Top Repeated Words chart now opens the word detail panel.
- Secondary subtitle text is now stored alongside primary subtitle lines for use as translation when mining cards from the stats page.


@@ -1,8 +0,0 @@
type: changed
area: immersion
- Kept immersion tracking history by default while preserving daily/monthly rollup maintenance.
- Added exact lifetime summary reads for overview/anime/media stats so dashboard totals no longer depend on rescanning raw telemetry.
- Reduced tracker storage overhead by removing duplicated subtitle text from subtitle-line event payloads.
- Deduplicated episode cover-art blobs through a shared blob store and updated cover-art reads/writes to resolve shared images correctly.
- Added indexes for large-history session, telemetry, vocabulary, kanji, and cover-art queries to keep dashboard reads fast as the SQLite database grows.


@@ -1,6 +0,0 @@
type: added
area: stats
- Added `subminer stats -b` to start or reuse a dedicated background stats server without blocking normal SubMiner instances.
- Added `subminer stats -s` to stop the dedicated background stats server without closing browser tabs.
- Stats server startup now reuses a running background stats daemon instead of trying to bind a second local server in another SubMiner instance.


@@ -1,5 +0,0 @@
type: fixed
area: stats
- Fixed session stats so known-word counts track real known-word occurrences without collapsing subtitle-line gaps.
- Fixed session word totals in session-facing stats views to prefer token counts when available, preventing known words from exceeding total words in the session chart.


@@ -1,4 +0,0 @@
type: changed
area: immersion
- Renamed the stats dashboard's Anime tab to Library so the media browser label matches non-anime sources like YouTube and other yt-dlp-backed content.


@@ -1,4 +0,0 @@
type: fixed
area: stats
- Fixed the stats Vocabulary tab blank-screen regression caused by a hook-order crash after vocabulary data finished loading.


@@ -1,5 +0,0 @@
type: changed
area: anilist
- Standardized episode completion threshold by introducing `DEFAULT_MIN_WATCH_RATIO` and using it for both local watched state transitions and AniList post-watch progress updates.
- Episode auto-marking now uses the same threshold as AniList (`85%`), removing divergent completion behavior.


@@ -1,4 +0,0 @@
type: fixed
area: anki
- Fixed card-mine OSD feedback so the final mine result stops the Anki spinner first, then shows a single-line `✓`/`x` status without being overwritten by a later spinner tick.


@@ -1,4 +0,0 @@
type: fixed
area: stats
- Removed the misleading `New words` series from expanded session charts; session detail now shows only the real total-word and known-word lines.


@@ -1,4 +0,0 @@
type: fixed
area: stats
- Restored the cross-anime word table behavior in stats vocabulary surfaces so shared vocabulary entries no longer disappear or merge incorrectly across related media.


@@ -1,5 +0,0 @@
type: fixed
area: stats
- `subminer stats -b` now runs as a standalone background stats daemon instead of reusing the main SubMiner app process, so the overlay app can still be launched separately for normal video watching.
- Dashboard word mining still works against the background daemon by using a short-lived hidden helper for the Yomitan add-note flow.


@@ -1,4 +0,0 @@
type: fixed
area: stats
- Load full session timelines by default in stats session detail views so long sessions preserve complete telemetry history instead of being truncated by a fixed sample limit.


@@ -1,4 +0,0 @@
type: added
area: launcher
- Added launcher passthrough for `-a/--args` so mpv receives raw extra launch flags (`--fs`, `--ytdl-format`, custom audio/video settings, etc.) from the `subminer` command.


@@ -1,5 +0,0 @@
type: fixed
area: stats
- Replaced heuristic stats word counts with Yomitan token counts, so session, media, anime, and trend subtitle totals now come directly from parsed subtitle tokens.
- Updated stats UI labels and lookup-rate copy to refer to tokens instead of words where those counts are shown.


@@ -1,5 +0,0 @@
type: changed
area: overlay
- Excluded interjections and sound-effect tokens from subtitle annotation styling so they no longer inherit misleading lexical highlight treatment while still remaining visible and hoverable as plain subtitle tokens.
- Expanded subtitle annotation noise filtering to also strip annotation metadata from standalone grammar-only helper tokens such as particles, auxiliaries, adnominals, common explanatory endings like `んです` / `のだ`, and merged trailing quote-particle forms like `...って` while keeping them tokenized for hover lookup.


@@ -0,0 +1,4 @@
type: fixed
area: anki
- Known-word cache refreshes now reconcile Anki changes incrementally instead of wiping and rebuilding on startup, mined cards can append their word into the cache immediately through a new default-enabled config flag, and explicit refreshes now run through `subminer doctor --refresh-known-words`.


@@ -0,0 +1,4 @@
type: fixed
area: subtitle
- Restored known-word coloring and JLPT underlines for subtitle tokens like `大体` when the subtitle token is kanji but the known-word cache only matches the kana reading.


@@ -0,0 +1,4 @@
type: fixed
area: stats
- Episode progress in the anime page now uses the last ended playback position instead of cumulative active watch time, avoiding distorted percentages after rewatches or repeated sessions.


@@ -0,0 +1,4 @@
type: fixed
area: stats
- Anime episode progress now keeps the latest known playback position through active-session checkpoints and stale-session recovery, so recently watched episodes no longer lose their progress percentage.


@@ -1,4 +0,0 @@
type: fixed
area: jlpt
- Reduced JLPT dictionary startup log noise by summarizing duplicate surface-form collisions instead of logging one line per duplicate entry.


@@ -1,6 +0,0 @@
type: added
area: launcher
- Added `subminer stats` to launch the local stats dashboard, force-start the stats server on demand, and open the dashboard in your browser.
- Added `subminer stats cleanup` to backfill vocabulary metadata and prune stale or excluded immersion rows on demand.
- Added `stats.autoOpenBrowser` so browser launch after `subminer stats` can be enabled or disabled explicitly.


@@ -1,7 +0,0 @@
type: added
area: immersion
- Added a local stats dashboard for immersion tracking with Overview, Anime, Trends, Vocabulary, and Sessions views.
- Added anime progress, episode completion, Anki card links, and occurrence drill-down across the stats dashboard.
- Added richer session timelines with new-word activity, cumulative totals, and pause/seek/card event markers.
- Added completed-episodes and completed-anime totals to the Overview tracking snapshot.


@@ -348,6 +348,7 @@
"knownWords": {
"highlightEnabled": false, // Enable fast local highlighting for words already known in Anki. Values: true | false
"refreshMinutes": 1440, // Minutes between known-word cache refreshes.
"addMinedWordsImmediately": true, // Immediately append newly mined card words into the known-word cache. Values: true | false
"matchMode": "headword", // Known-word matching strategy for subtitle annotations. Values: headword | surface
"decks": {}, // Decks and fields for known-word cache. Object mapping deck names to arrays of field names to extract, e.g. { "Kaishi 1.5k": ["Word", "Word Reading"] }.
"color": "#a6da95" // Color used for known-word highlights.
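For illustration, the two `matchMode` strategies above can be sketched as follows. The token shape and helper names are hypothetical stand-ins, not SubMiner's actual code.

```typescript
// Hypothetical sketch of the known-word match strategies:
// `headword` matches on the dictionary form reported by the tokenizer,
// `surface` matches on the literal subtitle text.
type MatchMode = 'headword' | 'surface';

interface SubtitleToken {
  surface: string;  // text as it appears in the subtitle, e.g. "食べた"
  headword: string; // dictionary form from the tokenizer, e.g. "食べる"
}

function isKnown(token: SubtitleToken, known: Set<string>, mode: MatchMode): boolean {
  const key = mode === 'headword' ? token.headword : token.surface;
  return known.has(key);
}
```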


@@ -1,5 +1,14 @@
# Changelog
## v0.7.0 (2026-03-19)
- Added a full local immersion dashboard release line with Overview, Library, Trends, Vocabulary, and Sessions drill-down views backed by SQLite tracking data.
- Added browser-first stats workflows: `subminer stats`, background stats daemon controls (`-b` / `-s`), stats cleanup, and dashboard-side mining actions with media enrichment.
- Improved stats accuracy and scale handling with Yomitan token counts, full session timelines, known-word timeline fixes, cross-media vocabulary fixes, and clearer session charts.
- Improved overlay/runtime stability with quieter macOS fullscreen recovery, reduced repeated loading OSD popups, and better frequency/noise handling for subtitle annotations.
- Added launcher mpv-args passthrough plus Linux plugin wrapper-name fallback for packaged installs.
- Added a hover-revealed ↗ button on Sessions tab rows to navigate directly to the anime media-detail view, with correct "Back to Sessions" back-navigation.
- Excluded auxiliary-stem `そうだ` grammar tails (MeCab POS3 `助動詞語幹`) from subtitle annotation metadata so frequency, JLPT, and N+1 styling no longer bleed onto grammar-tail tokens.
## v0.6.5 (2026-03-15)
- Seeded the AUR checkout with the repo `.SRCINFO` template before rewriting metadata so tagged releases do not depend on prior AUR state.


@@ -52,7 +52,7 @@ Watch time, sessions, words seen, and per-anime progress/pattern charts with con
#### Sessions
Expandable session history with new-word activity, cumulative totals, and pause/seek/card markers.
Expandable session history with new-word activity, cumulative totals, and pause/seek/card markers. Each session row exposes a hover-revealed ↗ button that navigates to the anime media-detail view for that session; pressing the back button there returns to the Sessions tab.
![Stats Sessions](/screenshots/stats-sessions.png)


@@ -348,6 +348,7 @@
"knownWords": {
"highlightEnabled": false, // Enable fast local highlighting for words already known in Anki. Values: true | false
"refreshMinutes": 1440, // Minutes between known-word cache refreshes.
"addMinedWordsImmediately": true, // Immediately append newly mined card words into the known-word cache. Values: true | false
"matchMode": "headword", // Known-word matching strategy for subtitle annotations. Values: headword | surface
"decks": {}, // Decks and fields for known-word cache. Object mapping deck names to arrays of field names to extract, e.g. { "Kaishi 1.5k": ["Word", "Word Reading"] }.
"color": "#a6da95" // Color used for known-word highlights.


@@ -4,7 +4,7 @@ SubMiner annotates subtitle tokens in real time as they appear in the overlay. F
All four are opt-in and configured under `subtitleStyle`, `ankiConnect.knownWords`, and `ankiConnect.nPlusOne` in your config. They apply independently — you can enable any combination.
Before any of those layers render, SubMiner strips annotation metadata from tokens that are usually just subtitle glue or annotation noise. Standalone particles, auxiliaries, adnominals, common explanatory endings like `んです` / `のだ`, merged trailing quote-particle forms like `...って`, repeated kana interjections, and similar non-lexical helper tokens remain hoverable in the subtitle text, but they render as plain tokens without known-word, N+1, frequency, JLPT, or name-match annotation styling.
Before any of those layers render, SubMiner strips annotation metadata from tokens that are usually just subtitle glue or annotation noise. Standalone particles, auxiliaries, adnominals, common explanatory endings like `んです` / `のだ`, merged trailing quote-particle forms like `...って`, auxiliary-stem grammar tails like `そうだ` (MeCab POS3 `助動詞語幹`), repeated kana interjections, and similar non-lexical helper tokens remain hoverable in the subtitle text, but they render as plain tokens without known-word, N+1, frequency, JLPT, or name-match annotation styling.
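A minimal sketch of that pre-render filter, assuming a hypothetical token shape: grammar-glue tokens keep their text and hover behavior but lose annotation metadata. The POS labels follow the MeCab conventions mentioned above; the specific set here is illustrative.

```typescript
// Hypothetical sketch: tokens whose part of speech marks them as grammar
// glue are kept in the subtitle but stripped of annotation metadata, so
// no known-word / N+1 / frequency / JLPT styling applies to them.
interface AnnotatedToken {
  surface: string;
  pos: string;          // e.g. '助詞' (particle), '助動詞' (auxiliary)
  pos3?: string;        // e.g. '助動詞語幹' (auxiliary stem, as in そうだ)
  annotations?: object; // known-word / N+1 / frequency / JLPT metadata
}

const GLUE_POS = new Set(['助詞', '助動詞', '連体詞', '感動詞']);

function stripAnnotationNoise(token: AnnotatedToken): AnnotatedToken {
  const isGlue = GLUE_POS.has(token.pos) || token.pos3 === '助動詞語幹';
  return isGlue ? { ...token, annotations: undefined } : token;
}
```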
## N+1 Word Highlighting


@@ -7,7 +7,7 @@
3. Run `bun run changelog:lint`.
4. Bump `package.json` to the release version.
5. Build release metadata before tagging:
`bun run changelog:build --version <version>`
`bun run changelog:build --version <version> --date <yyyy-mm-dd>`
6. Review `CHANGELOG.md` and `release/release-notes.md`.
7. Run release gate locally:
`bun run changelog:check --version <version>`
@@ -25,6 +25,8 @@
Notes:
- Versioning policy: SubMiner stays 0-ver. Large or breaking release lines still bump the minor number (`0.x.0`), not `1.0.0`. Example: the next major line after `0.6.5` is `0.7.0`.
- Pass `--date` explicitly when you want the release stamped with the local cut date; otherwise the generator uses the current ISO date, which can roll over to the next UTC day late at night.
- `changelog:check` now rejects tag/package version mismatches.
- `changelog:build` generates `CHANGELOG.md` + `release/release-notes.md` and removes the released `changes/*.md` fragments.
- Do not tag while `changes/*.md` fragments still exist.
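The `--date` caveat is easy to reproduce. This sketch uses plain Node APIs and an illustrative timezone (not anything from the release tooling itself) to show the UTC ISO date rolling past the local cut date late at night.

```typescript
// Near local midnight in a negative-UTC-offset timezone, the current UTC
// ISO date is already "tomorrow" relative to the local calendar date.
const cut = new Date('2026-03-19T23:30:00-07:00'); // late evening, UTC-7

const utcDate = cut.toISOString().slice(0, 10);
const localDate = new Intl.DateTimeFormat('en-CA', {
  timeZone: 'America/Los_Angeles', // UTC-7 (PDT) on this date
}).format(cut);

console.log(utcDate);   // 2026-03-20
console.log(localDate); // 2026-03-19
```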


@@ -0,0 +1,46 @@
<!-- read_when: changing known-word cache lifecycle, stats cache semantics, or Anki sync behavior -->
# Incremental Known-Word Cache Sync
## Goal
Stop rebuilding the entire known-word cache on startup or routine refreshes. Keep the cache correct through incremental reconciliation on the configured sync cadence, with an immediate append path for freshly mined cards.
## Scope
- Persist per-note extracted known-word snapshots beside the existing global `words` list.
- Replace startup refresh with load-only behavior.
- Make timed refresh diff current Anki note IDs against cached note IDs, then apply add/remove/edit deltas.
- Add `ankiConnect.knownWords.addMinedWordsImmediately`, default `true`.
- Keep full rebuild out of normal lifecycle; reserve it for explicit doctor tooling.
## Data Model
Persist versioned cache state with:
- `words`: deduplicated global known-word set for stats/UI consumers
- `notes`: record of `noteId -> extractedWords[]`
- `refreshedAtMs`
- `scope`
The in-memory manager derives the global set from the per-note snapshots during sync updates so deletes and edits can remove stale words safely.
## Sync Behavior
- Startup: load persisted state only
- Interval tick or explicit refresh command: run incremental sync
- Incremental sync:
- query tracked note IDs for configured deck scope
- remove note snapshots for note IDs that disappeared
- fetch `notesInfo` for note IDs that are new or need field reconciliation
- compare extracted words per note and update the global set
## Immediate Mining Path
When SubMiner already has fresh `noteInfo` after mining or updating a note, append/update that note snapshot immediately if `addMinedWordsImmediately` is enabled.
## Verification
- focused cache manager tests for add/delete/edit reconciliation
- focused integration/config tests for startup behavior and new config flag
- config verification lane because defaults/schema/example change
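The sync behavior above can be sketched as a pure reconciliation step over the per-note snapshots. All names here are illustrative stand-ins for SubMiner's internals, and per-note edit detection is elided for brevity.

```typescript
// Hypothetical sketch of incremental known-word reconciliation:
// drop snapshots for deleted notes, fetch words for new notes, and
// derive the global word set from the snapshots so deletes and edits
// cannot strand stale words.
type NoteId = number;

interface KnownWordCache {
  notes: Map<NoteId, string[]>; // noteId -> extracted known words
  refreshedAtMs: number;
}

function reconcile(
  cache: KnownWordCache,
  currentNoteIds: Set<NoteId>,
  fetchExtractedWords: (ids: NoteId[]) => Map<NoteId, string[]>,
  nowMs: number,
): KnownWordCache {
  const notes = new Map(cache.notes);
  // Remove snapshots for notes that disappeared from Anki.
  for (const id of notes.keys()) {
    if (!currentNoteIds.has(id)) notes.delete(id);
  }
  // Fetch only notes we have never seen (edit reconciliation elided).
  const toFetch = [...currentNoteIds].filter((id) => !notes.has(id));
  for (const [id, words] of fetchExtractedWords(toFetch)) {
    notes.set(id, words);
  }
  return { notes, refreshedAtMs: nowMs };
}

// Derived, never stored: the deduplicated global known-word set.
function globalWords(cache: KnownWordCache): Set<string> {
  return new Set([...cache.notes.values()].flat());
}
```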

File diff suppressed because it is too large


@@ -77,11 +77,37 @@ test('doctor command exits non-zero for missing hard dependencies', () => {
commandExists: () => false,
configExists: () => true,
resolveMainConfigPath: () => '/tmp/SubMiner/config.jsonc',
runAppCommandWithInherit: () => {
throw new Error('unexpected app handoff');
},
}),
(error: unknown) => error instanceof ExitSignal && error.code === 1,
);
});
test('doctor command forwards refresh-known-words to app binary', () => {
const context = createContext();
context.args.doctor = true;
context.args.doctorRefreshKnownWords = true;
const forwarded: string[][] = [];
assert.throws(
() =>
runDoctorCommand(context, {
commandExists: () => false,
configExists: () => true,
resolveMainConfigPath: () => '/tmp/SubMiner/config.jsonc',
runAppCommandWithInherit: (_appPath, appArgs) => {
forwarded.push(appArgs);
throw new ExitSignal(0);
},
}),
(error: unknown) => error instanceof ExitSignal && error.code === 0,
);
assert.deepEqual(forwarded, [['--refresh-known-words']]);
});
test('mpv pre-app command exits non-zero when socket is not ready', async () => {
const context = createContext();
context.args.mpvStatus = true;
@@ -150,10 +176,9 @@ test('stats command launches attached app command with response path', async ()
assert.equal(handled, true);
assert.deepEqual(forwarded, [
[
'--stats-daemon-start',
'--stats',
'--stats-response-path',
'/tmp/subminer-stats-test/response.json',
'--stats-daemon-open-browser',
'--log-level',
'debug',
],
@@ -187,7 +212,7 @@ test('stats background command launches attached daemon control command with res
]);
});
test('stats command returns after startup response even if app process stays running', async () => {
test('stats command waits for attached app exit after startup response', async () => {
const context = createContext();
context.args.stats = true;
const forwarded: string[][] = [];
@@ -214,14 +239,31 @@ test('stats command returns after startup response even if app process stays run
assert.equal(final, true);
assert.deepEqual(forwarded, [
[
'--stats-daemon-start',
'--stats',
'--stats-response-path',
'/tmp/subminer-stats-test/response.json',
'--stats-daemon-open-browser',
],
]);
});
test('stats command throws when attached app exits non-zero after startup response', async () => {
const context = createContext();
context.args.stats = true;
await assert.rejects(async () => {
await runStatsCommand(context, {
createTempDir: () => '/tmp/subminer-stats-test',
joinPath: (...parts) => parts.join('/'),
runAppCommandAttached: async () => {
await new Promise((resolve) => setTimeout(resolve, 10));
return 3;
},
waitForStatsResponse: async () => ({ ok: true, url: 'http://127.0.0.1:5175' }),
removeDir: () => {},
});
}, /Stats app exited with status 3\./);
});
test('stats cleanup command forwards cleanup vocab flags to the app', async () => {
const context = createContext();
context.args.stats = true;
@@ -367,3 +409,95 @@ test('stats cleanup command fails if attached app exits before startup response'
});
}, /Stats app exited before startup response \(status 2\)\./);
});
test('stats command aborts pending response wait when app exits before startup response', async () => {
const context = createContext();
context.args.stats = true;
let aborted = false;
await assert.rejects(async () => {
await runStatsCommand(context, {
createTempDir: () => '/tmp/subminer-stats-test',
joinPath: (...parts) => parts.join('/'),
runAppCommandAttached: async () => 2,
waitForStatsResponse: async (_responsePath, signal) =>
await new Promise((resolve) => {
signal?.addEventListener(
'abort',
() => {
aborted = true;
resolve({ ok: false, error: 'aborted' });
},
{ once: true },
);
}),
removeDir: () => {},
});
}, /Stats app exited before startup response \(status 2\)\./);
assert.equal(aborted, true);
});
test('stats command aborts pending response wait when attached app fails to spawn', async () => {
const context = createContext();
context.args.stats = true;
const spawnError = new Error('spawn failed');
let aborted = false;
await assert.rejects(
async () => {
await runStatsCommand(context, {
createTempDir: () => '/tmp/subminer-stats-test',
joinPath: (...parts) => parts.join('/'),
runAppCommandAttached: async () => {
throw spawnError;
},
waitForStatsResponse: async (_responsePath, signal) =>
await new Promise((resolve) => {
signal?.addEventListener(
'abort',
() => {
aborted = true;
resolve({ ok: false, error: 'aborted' });
},
{ once: true },
);
}),
removeDir: () => {},
});
},
(error: unknown) => error === spawnError,
);
assert.equal(aborted, true);
});
test('stats cleanup command aborts pending response wait when app exits before startup response', async () => {
const context = createContext();
context.args.stats = true;
context.args.statsCleanup = true;
context.args.statsCleanupVocab = true;
let aborted = false;
await assert.rejects(async () => {
await runStatsCommand(context, {
createTempDir: () => '/tmp/subminer-stats-test',
joinPath: (...parts) => parts.join('/'),
runAppCommandAttached: async () => 2,
waitForStatsResponse: async (_responsePath, signal) =>
await new Promise((resolve) => {
signal?.addEventListener(
'abort',
() => {
aborted = true;
resolve({ ok: false, error: 'aborted' });
},
{ once: true },
);
}),
removeDir: () => {},
});
}, /Stats app exited before startup response \(status 2\)\./);
assert.equal(aborted, true);
});


@@ -1,5 +1,6 @@
import fs from 'node:fs';
import { log } from '../log.js';
import { runAppCommandWithInherit } from '../mpv.js';
import { commandExists } from '../util.js';
import { resolveMainConfigPath } from '../config-path.js';
import type { LauncherCommandContext } from './context.js';
@@ -8,12 +9,14 @@ interface DoctorCommandDeps {
commandExists(command: string): boolean;
configExists(path: string): boolean;
resolveMainConfigPath(): string;
runAppCommandWithInherit(appPath: string, appArgs: string[]): never;
}
const defaultDeps: DoctorCommandDeps = {
commandExists,
configExists: fs.existsSync,
resolveMainConfigPath,
runAppCommandWithInherit,
};
export function runDoctorCommand(
@@ -72,14 +75,21 @@ export function runDoctorCommand(
},
];
const hasHardFailure = checks.some((entry) =>
entry.label === 'app binary' || entry.label === 'mpv' ? !entry.ok : false,
);
for (const check of checks) {
log(check.ok ? 'info' : 'warn', args.logLevel, `[doctor] ${check.label}: ${check.detail}`);
}
if (args.doctorRefreshKnownWords) {
if (!appPath) {
processAdapter.exit(1);
return true;
}
deps.runAppCommandWithInherit(appPath, ['--refresh-known-words']);
}
const hasHardFailure = checks.some((entry) =>
entry.label === 'app binary' || entry.label === 'mpv' ? !entry.ok : false,
);
processAdapter.exit(hasHardFailure ? 1 : 0);
return true;
}


@@ -20,20 +20,39 @@ type StatsCommandDeps = {
logLevel: LauncherCommandContext['args']['logLevel'],
label: string,
) => Promise<number>;
waitForStatsResponse: (responsePath: string) => Promise<StatsCommandResponse>;
waitForStatsResponse: (
responsePath: string,
signal?: AbortSignal,
) => Promise<StatsCommandResponse>;
removeDir: (targetPath: string) => void;
};
const STATS_STARTUP_RESPONSE_TIMEOUT_MS = 12_000;
type StatsResponseWait = {
controller: AbortController;
promise: Promise<{ kind: 'response'; response: StatsCommandResponse }>;
};
type StatsStartupResult =
| { kind: 'response'; response: StatsCommandResponse }
| { kind: 'exit'; status: number }
| { kind: 'spawn-error'; error: unknown };
const defaultDeps: StatsCommandDeps = {
createTempDir: (prefix) => fs.mkdtempSync(path.join(os.tmpdir(), prefix)),
joinPath: (...parts) => path.join(...parts),
runAppCommandAttached: (appPath, appArgs, logLevel, label) =>
runAppCommandAttached(appPath, appArgs, logLevel, label),
waitForStatsResponse: async (responsePath) => {
waitForStatsResponse: async (responsePath, signal) => {
const deadline = Date.now() + STATS_STARTUP_RESPONSE_TIMEOUT_MS;
while (Date.now() < deadline) {
if (signal?.aborted) {
return {
ok: false,
error: 'Cancelled waiting for stats dashboard startup response.',
};
}
try {
if (fs.existsSync(responsePath)) {
return JSON.parse(fs.readFileSync(responsePath, 'utf8')) as StatsCommandResponse;
@@ -53,6 +72,49 @@ const defaultDeps: StatsCommandDeps = {
},
};
async function performStartupHandshake(
createResponseWait: () => StatsResponseWait,
attachedExitPromise: Promise<number>,
): Promise<boolean> {
const responseWait = createResponseWait();
const startupResult = await Promise.race<StatsStartupResult>([
responseWait.promise,
attachedExitPromise.then(
(status) => ({ kind: 'exit' as const, status }),
(error) => ({ kind: 'spawn-error' as const, error }),
),
]);
if (startupResult.kind === 'spawn-error') {
responseWait.controller.abort();
throw startupResult.error;
}
if (startupResult.kind === 'exit') {
if (startupResult.status !== 0) {
responseWait.controller.abort();
throw new Error(`Stats app exited before startup response (status ${startupResult.status}).`);
}
const response = await responseWait.promise.then((result) => result.response);
if (!response.ok) {
throw new Error(response.error || 'Stats dashboard failed to start.');
}
return true;
}
if (!startupResult.response.ok) {
throw new Error(startupResult.response.error || 'Stats dashboard failed to start.');
}
const exitStatus = await attachedExitPromise;
if (exitStatus !== 0) {
throw new Error(`Stats app exited with status ${exitStatus}.`);
}
return true;
}
export async function runStatsCommand(
context: LauncherCommandContext,
deps: Partial<StatsCommandDeps> = {},
@@ -66,17 +128,24 @@ export async function runStatsCommand(
const tempDir = resolvedDeps.createTempDir('subminer-stats-');
const responsePath = resolvedDeps.joinPath(tempDir, 'response.json');
const createResponseWait = () => {
const controller = new AbortController();
return {
controller,
promise: resolvedDeps
.waitForStatsResponse(responsePath, controller.signal)
.then((response) => ({ kind: 'response' as const, response })),
};
};
try {
const forwarded = args.statsCleanup
? ['--stats', '--stats-response-path', responsePath]
: [
args.statsStop ? '--stats-daemon-stop' : '--stats-daemon-start',
'--stats-response-path',
responsePath,
];
if (!args.statsCleanup && !args.statsBackground && !args.statsStop) {
forwarded.push('--stats-daemon-open-browser');
}
: args.statsStop
? ['--stats-daemon-stop', '--stats-response-path', responsePath]
: args.statsBackground
? ['--stats-daemon-start', '--stats-response-path', responsePath]
: ['--stats', '--stats-response-path', responsePath];
if (args.statsCleanup) {
forwarded.push('--stats-cleanup');
}
@@ -104,59 +173,7 @@ export async function runStatsCommand(
return true;
}
if (!args.statsCleanup && !args.statsStop) {
const startupResult = await Promise.race([
resolvedDeps
.waitForStatsResponse(responsePath)
.then((response) => ({ kind: 'response' as const, response })),
attachedExitPromise.then((status) => ({ kind: 'exit' as const, status })),
]);
if (startupResult.kind === 'exit') {
if (startupResult.status !== 0) {
throw new Error(
`Stats app exited before startup response (status ${startupResult.status}).`,
);
}
const response = await resolvedDeps.waitForStatsResponse(responsePath);
if (!response.ok) {
throw new Error(response.error || 'Stats dashboard failed to start.');
}
return true;
}
if (!startupResult.response.ok) {
throw new Error(startupResult.response.error || 'Stats dashboard failed to start.');
}
await attachedExitPromise;
return true;
}
const attachedExitPromiseCleanup = attachedExitPromise;
const startupResult = await Promise.race([
resolvedDeps
.waitForStatsResponse(responsePath)
.then((response) => ({ kind: 'response' as const, response })),
attachedExitPromiseCleanup.then((status) => ({ kind: 'exit' as const, status })),
]);
if (startupResult.kind === 'exit') {
if (startupResult.status !== 0) {
throw new Error(
`Stats app exited before startup response (status ${startupResult.status}).`,
);
}
const response = await resolvedDeps.waitForStatsResponse(responsePath);
if (!response.ok) {
throw new Error(response.error || 'Stats dashboard failed to start.');
}
return true;
}
if (!startupResult.response.ok) {
throw new Error(startupResult.response.error || 'Stats dashboard failed to start.');
}
const exitStatus = await attachedExitPromiseCleanup;
if (exitStatus !== 0) {
throw new Error(`Stats app exited with status ${exitStatus}.`);
}
return true;
return await performStartupHandshake(createResponseWait, attachedExitPromise);
} finally {
resolvedDeps.removeDir(tempDir);
}


@@ -129,6 +129,7 @@ export function createDefaultArgs(launcherConfig: LauncherYoutubeSubgenConfig):
statsCleanupVocab: false,
statsCleanupLifetime: false,
doctor: false,
doctorRefreshKnownWords: false,
configPath: false,
configShow: false,
mpvIdle: false,
@@ -206,6 +207,7 @@ export function applyInvocationsToArgs(parsed: Args, invocations: CliInvocations
parsed.dictionaryTarget = parseDictionaryTarget(invocations.dictionaryTarget);
}
if (invocations.doctorTriggered) parsed.doctor = true;
if (invocations.doctorRefreshKnownWords) parsed.doctorRefreshKnownWords = true;
if (invocations.texthookerTriggered) parsed.texthookerOnly = true;
if (invocations.jellyfinInvocation) {


@@ -49,6 +49,7 @@ export interface CliInvocations {
statsLogLevel: string | null;
doctorTriggered: boolean;
doctorLogLevel: string | null;
doctorRefreshKnownWords: boolean;
texthookerTriggered: boolean;
texthookerLogLevel: string | null;
}
@@ -156,6 +157,7 @@ export function parseCliPrograms(
let statsCleanupLifetime = false;
let statsLogLevel: string | null = null;
let doctorLogLevel: string | null = null;
let doctorRefreshKnownWords = false;
let texthookerLogLevel: string | null = null;
let doctorTriggered = false;
let texthookerTriggered = false;
@@ -289,6 +291,12 @@ export function parseCliPrograms(
if (normalizedAction && (statsBackground || statsStop)) {
throw new Error('Stats background and stop flags cannot be combined with stats actions.');
}
if (
normalizedAction !== 'cleanup' &&
(options.vocab === true || options.lifetime === true)
) {
throw new Error('Stats --vocab and --lifetime flags require the cleanup action.');
}
if (normalizedAction === 'cleanup') {
statsCleanup = true;
statsCleanupLifetime = options.lifetime === true;
@@ -304,10 +312,12 @@ export function parseCliPrograms(
commandProgram
.command('doctor')
.description('Run dependency and environment checks')
.option('--refresh-known-words', 'Refresh known words cache')
.option('--log-level <level>', 'Log level')
.action((options: Record<string, unknown>) => {
doctorTriggered = true;
doctorLogLevel = typeof options.logLevel === 'string' ? options.logLevel : null;
doctorRefreshKnownWords = options.refreshKnownWords === true;
});
commandProgram
@@ -388,6 +398,7 @@ export function parseCliPrograms(
statsLogLevel,
doctorTriggered,
doctorLogLevel,
doctorRefreshKnownWords,
texthookerTriggered,
texthookerLogLevel,
},


@@ -178,6 +178,33 @@ test('doctor reports checks and exits non-zero without hard dependencies', () =>
});
});
test('doctor refresh-known-words forwards app refresh command without requiring mpv', () => {
withTempDir((root) => {
const homeDir = path.join(root, 'home');
const xdgConfigHome = path.join(root, 'xdg');
const appPath = path.join(root, 'fake-subminer.sh');
const capturePath = path.join(root, 'captured-args.txt');
fs.writeFileSync(
appPath,
'#!/bin/sh\nif [ -n "$SUBMINER_TEST_CAPTURE" ]; then printf "%s\\n" "$@" > "$SUBMINER_TEST_CAPTURE"; fi\nexit 0\n',
);
fs.chmodSync(appPath, 0o755);
const env = {
...makeTestEnv(homeDir, xdgConfigHome),
PATH: '',
Path: '',
SUBMINER_APPIMAGE_PATH: appPath,
SUBMINER_TEST_CAPTURE: capturePath,
};
const result = runLauncher(['doctor', '--refresh-known-words'], env);
assert.equal(result.status, 0);
assert.equal(fs.readFileSync(capturePath, 'utf8'), '--refresh-known-words\n');
assert.match(result.stdout, /\[doctor\] mpv: missing/);
});
});
test('youtube command rejects removed --mode option', () => {
withTempDir((root) => {
const homeDir = path.join(root, 'home');
@@ -536,7 +563,7 @@ exit 0
assert.equal(result.status, 0, `stdout:\n${result.stdout}\nstderr:\n${result.stderr}`);
assert.match(
fs.readFileSync(capturePath, 'utf8'),
/^--stats-daemon-start\n--stats-response-path\n.+\n--stats-daemon-open-browser\n--log-level\ndebug\n$/,
/^--stats\n--stats-response-path\n.+\n--log-level\ndebug\n$/,
);
});
},


@@ -9,14 +9,46 @@ import type { Args } from './types';
import {
cleanupPlaybackSession,
findAppBinary,
launchAppCommandDetached,
launchTexthookerOnly,
parseMpvArgString,
runAppCommandCaptureOutput,
shouldResolveAniSkipMetadata,
stopOverlay,
startOverlay,
state,
waitForUnixSocketReady,
} from './mpv';
import * as mpvModule from './mpv';
class ExitSignal extends Error {
code: number;
constructor(code: number) {
super(`exit:${code}`);
this.code = code;
}
}
function withProcessExitIntercept(callback: () => void): ExitSignal {
const originalExit = process.exit;
try {
process.exit = ((code?: number) => {
throw new ExitSignal(code ?? 0);
}) as typeof process.exit;
callback();
} catch (error) {
if (error instanceof ExitSignal) {
return error;
}
throw error;
} finally {
process.exit = originalExit;
}
throw new Error('expected process.exit');
}
function createTempSocketPath(): { dir: string; socketPath: string } {
const baseDir = path.join(process.cwd(), '.tmp', 'launcher-mpv-tests');
fs.mkdirSync(baseDir, { recursive: true });
@@ -40,6 +72,94 @@ test('runAppCommandCaptureOutput captures status and stdio', () => {
assert.equal(result.error, undefined);
});
test('runAppCommandCaptureOutput strips ELECTRON_RUN_AS_NODE from app child env', () => {
const original = process.env.ELECTRON_RUN_AS_NODE;
try {
process.env.ELECTRON_RUN_AS_NODE = '1';
const result = runAppCommandCaptureOutput(process.execPath, [
'-e',
'process.stdout.write(String(process.env.ELECTRON_RUN_AS_NODE ?? ""));',
]);
assert.equal(result.status, 0);
assert.equal(result.stdout, '');
} finally {
if (original === undefined) {
delete process.env.ELECTRON_RUN_AS_NODE;
} else {
process.env.ELECTRON_RUN_AS_NODE = original;
}
}
});
test('parseMpvArgString preserves empty quoted tokens', () => {
assert.deepEqual(parseMpvArgString('--title "" --force-media-title \'\' --pause'), [
'--title',
'',
'--force-media-title',
'',
'--pause',
]);
});
test('launchTexthookerOnly exits non-zero when app binary cannot be spawned', () => {
const error = withProcessExitIntercept(() => {
launchTexthookerOnly('/definitely-missing-subminer-binary', makeArgs());
});
assert.equal(error.code, 1);
});
test('launchAppCommandDetached handles child process spawn errors', async () => {
let uncaughtError: Error | null = null;
const onUncaughtException = (error: Error) => {
uncaughtError = error;
};
process.once('uncaughtException', onUncaughtException);
try {
launchAppCommandDetached(
'/definitely-missing-subminer-binary',
[],
makeArgs({ logLevel: 'warn' }).logLevel,
'test',
);
await new Promise((resolve) => setTimeout(resolve, 50));
assert.equal(uncaughtError, null);
} finally {
process.removeListener('uncaughtException', onUncaughtException);
}
});
test('stopOverlay logs a warning when stop command cannot be spawned', () => {
const originalWrite = process.stdout.write;
const writes: string[] = [];
const overlayProc = {
killed: false,
kill: () => true,
} as unknown as NonNullable<typeof state.overlayProc>;
try {
process.stdout.write = ((chunk: string | Uint8Array) => {
writes.push(Buffer.isBuffer(chunk) ? chunk.toString('utf8') : String(chunk));
return true;
}) as typeof process.stdout.write;
state.stopRequested = false;
state.overlayManagedByLauncher = true;
state.appPath = '/definitely-missing-subminer-binary';
state.overlayProc = overlayProc;
stopOverlay(makeArgs({ logLevel: 'warn' }));
assert.ok(writes.some((text) => text.includes('Failed to stop SubMiner overlay')));
} finally {
process.stdout.write = originalWrite;
state.stopRequested = false;
state.overlayManagedByLauncher = false;
state.appPath = '';
state.overlayProc = null;
}
});
test('waitForUnixSocketReady returns false when socket never appears', async () => {
const { dir, socketPath } = createTempSocketPath();
try {
@@ -137,6 +257,7 @@ function makeArgs(overrides: Partial<Args> = {}): Args {
dictionary: false,
stats: false,
doctor: false,
doctorRefreshKnownWords: false,
configPath: false,
configShow: false,
mpvIdle: false,


@@ -42,6 +42,7 @@ export function parseMpvArgString(input: string): string[] {
const chars = input;
const args: string[] = [];
let current = '';
let tokenStarted = false;
let inSingleQuote = false;
let inDoubleQuote = false;
let escaping = false;
@@ -52,6 +53,7 @@ export function parseMpvArgString(input: string): string[] {
const ch = chars[i] || '';
if (escaping) {
current += ch;
tokenStarted = true;
escaping = false;
continue;
}
@@ -61,6 +63,7 @@ export function parseMpvArgString(input: string): string[] {
inSingleQuote = false;
} else {
current += ch;
tokenStarted = true;
}
continue;
}
@@ -71,6 +74,7 @@ export function parseMpvArgString(input: string): string[] {
escaping = true;
} else {
current += ch;
tokenStarted = true;
}
continue;
}
@@ -79,33 +83,40 @@ export function parseMpvArgString(input: string): string[] {
continue;
}
current += ch;
tokenStarted = true;
continue;
}
if (ch === '\\') {
if (canEscape(chars[i + 1])) {
escaping = true;
tokenStarted = true;
} else {
current += ch;
tokenStarted = true;
}
continue;
}
if (ch === "'") {
tokenStarted = true;
inSingleQuote = true;
continue;
}
if (ch === '"') {
tokenStarted = true;
inDoubleQuote = true;
continue;
}
if (/\s/.test(ch)) {
if (current) {
if (tokenStarted) {
args.push(current);
current = '';
tokenStarted = false;
}
continue;
}
current += ch;
tokenStarted = true;
}
if (escaping) {
@@ -114,7 +125,7 @@ export function parseMpvArgString(input: string): string[] {
if (inSingleQuote || inDoubleQuote) {
fail('Could not parse mpv args: unmatched quote');
}
if (current) {
if (tokenStarted) {
args.push(current);
}
@@ -661,7 +672,7 @@ export async function startOverlay(appPath: string, args: Args, socketPath: stri
const target = resolveAppSpawnTarget(appPath, overlayArgs);
state.overlayProc = spawn(target.command, target.args, {
stdio: 'inherit',
env: { ...process.env, SUBMINER_MPV_LOG: getMpvLogPath() },
env: buildAppEnv(),
});
state.overlayManagedByLauncher = true;
@@ -688,7 +699,13 @@ export function launchTexthookerOnly(appPath: string, args: Args): never {
if (args.logLevel !== 'info') overlayArgs.push('--log-level', args.logLevel);
log('info', args.logLevel, 'Launching texthooker mode...');
const result = spawnSync(appPath, overlayArgs, { stdio: 'inherit' });
const result = spawnSync(appPath, overlayArgs, {
stdio: 'inherit',
env: buildAppEnv(),
});
if (result.error) {
fail(`Failed to launch texthooker mode: ${result.error.message}`);
}
process.exit(result.status ?? 0);
}
@@ -702,7 +719,15 @@ export function stopOverlay(args: Args): void {
const stopArgs = ['--stop'];
if (args.logLevel !== 'info') stopArgs.push('--log-level', args.logLevel);
spawnSync(state.appPath, stopArgs, { stdio: 'ignore' });
const result = spawnSync(state.appPath, stopArgs, {
stdio: 'ignore',
env: buildAppEnv(),
});
if (result.error) {
log('warn', args.logLevel, `Failed to stop SubMiner overlay: ${result.error.message}`);
} else if (typeof result.status === 'number' && result.status !== 0) {
log('warn', args.logLevel, `SubMiner overlay stop command exited with status ${result.status}`);
}
if (state.overlayProc && !state.overlayProc.killed) {
try {
@@ -763,6 +788,7 @@ function buildAppEnv(): NodeJS.ProcessEnv {
...process.env,
SUBMINER_MPV_LOG: getMpvLogPath(),
};
delete env.ELECTRON_RUN_AS_NODE;
const layers = env.VK_INSTANCE_LAYERS;
if (typeof layers === 'string' && layers.trim().length > 0) {
const filtered = layers
@@ -932,6 +958,9 @@ export function launchAppCommandDetached(
detached: true,
env: buildAppEnv(),
});
proc.once('error', (error) => {
log('warn', logLevel, `${label}: failed to launch detached app: ${error.message}`);
});
proc.unref();
}


@@ -2,6 +2,34 @@ import test from 'node:test';
import assert from 'node:assert/strict';
import { parseArgs } from './config';
class ExitSignal extends Error {
code: number;
constructor(code: number) {
super(`exit:${code}`);
this.code = code;
}
}
function withProcessExitIntercept(callback: () => void): ExitSignal {
const originalExit = process.exit;
try {
process.exit = ((code?: number) => {
throw new ExitSignal(code ?? 0);
}) as typeof process.exit;
callback();
} catch (error) {
if (error instanceof ExitSignal) {
return error;
}
throw error;
} finally {
process.exit = originalExit;
}
throw new Error('expected parseArgs to exit');
}
test('parseArgs captures passthrough args for app subcommand', () => {
const parsed = parseArgs(['app', '--anilist', '--log-level', 'debug'], 'subminer', {});
@@ -119,6 +147,15 @@ test('parseArgs maps lifetime stats cleanup flag', () => {
assert.equal(parsed.statsCleanupLifetime, true);
});
test('parseArgs rejects cleanup-only stats flags without cleanup action', () => {
const error = withProcessExitIntercept(() => {
parseArgs(['stats', '--vocab'], 'subminer', {});
});
assert.equal(error.code, 1);
assert.match(error.message, /exit:1/);
});
test('parseArgs maps stats rebuild action to cleanup lifetime mode', () => {
const parsed = parseArgs(['stats', 'rebuild'], 'subminer', {});
@@ -127,3 +164,10 @@ test('parseArgs maps stats rebuild action to cleanup lifetime mode', () => {
assert.equal(parsed.statsCleanupVocab, false);
assert.equal(parsed.statsCleanupLifetime, true);
});
test('parseArgs maps doctor refresh-known-words flag', () => {
const parsed = parseArgs(['doctor', '--refresh-known-words'], 'subminer', {});
assert.equal(parsed.doctor, true);
assert.equal(parsed.doctorRefreshKnownWords, true);
});


@@ -14,6 +14,20 @@ function makeFile(filePath: string): void {
fs.writeFileSync(filePath, '/* theme */');
}
function withPlatform<T>(platform: NodeJS.Platform, callback: () => T): T {
const originalDescriptor = Object.getOwnPropertyDescriptor(process, 'platform');
Object.defineProperty(process, 'platform', {
value: platform,
});
try {
return callback();
} finally {
if (originalDescriptor) {
Object.defineProperty(process, 'platform', originalDescriptor);
}
}
}
test('findRofiTheme resolves /usr/local/share/SubMiner/themes/subminer.rasi when it exists', () => {
const originalExistsSync = fs.existsSync;
const targetPath = `/usr/local/share/SubMiner/themes/${ROFI_THEME_FILE}`;
@@ -24,7 +38,7 @@ test('findRofiTheme resolves /usr/local/share/SubMiner/themes/subminer.rasi when
return false;
};
-const result = findRofiTheme('/usr/local/bin/subminer');
+const result = withPlatform('linux', () => findRofiTheme('/usr/local/bin/subminer'));
assert.equal(result, targetPath);
} finally {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
@@ -44,7 +58,7 @@ test('findRofiTheme resolves /usr/share/SubMiner/themes/subminer.rasi when /usr/
return false;
};
-const result = findRofiTheme('/usr/bin/subminer');
+const result = withPlatform('linux', () => findRofiTheme('/usr/bin/subminer'));
assert.equal(result, sharePath);
} finally {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
@@ -60,10 +74,14 @@ test('findRofiTheme resolves XDG_DATA_HOME/SubMiner/themes/subminer.rasi when se
const themePath = path.join(baseDir, `SubMiner/themes/${ROFI_THEME_FILE}`);
makeFile(themePath);
-const result = findRofiTheme('/usr/bin/subminer');
+const result = withPlatform('linux', () => findRofiTheme('/usr/bin/subminer'));
assert.equal(result, themePath);
} finally {
if (originalXdgDataHome !== undefined) {
process.env.XDG_DATA_HOME = originalXdgDataHome;
} else {
delete process.env.XDG_DATA_HOME;
}
fs.rmSync(baseDir, { recursive: true, force: true });
}
});
@@ -78,7 +96,7 @@ test('findRofiTheme resolves ~/.local/share/SubMiner/themes/subminer.rasi when X
const themePath = path.join(baseDir, `.local/share/SubMiner/themes/${ROFI_THEME_FILE}`);
makeFile(themePath);
-const result = findRofiTheme('/usr/bin/subminer');
+const result = withPlatform('linux', () => findRofiTheme('/usr/bin/subminer'));
assert.equal(result, themePath);
} finally {
os.homedir = originalHomedir;


@@ -119,6 +119,7 @@ export interface Args {
statsCleanupLifetime?: boolean;
dictionaryTarget?: string;
doctor: boolean;
doctorRefreshKnownWords: boolean;
configPath: boolean;
configShow: boolean;
mpvIdle: boolean;


@@ -1,6 +1,6 @@
{
"name": "subminer",
"version": "0.6.5",
"version": "0.7.0",
"description": "All-in-one sentence mining overlay with AnkiConnect and dictionary integration",
"packageManager": "bun@1.3.5",
"main": "dist/main-entry.js",

src/anki-connect.test.ts Normal file

@@ -0,0 +1,50 @@
import test from 'node:test';
import assert from 'node:assert/strict';
import { AnkiConnectClient } from './anki-connect';
test('AnkiConnectClient disables keep-alive agents to avoid stale socket retries', () => {
const client = new AnkiConnectClient('http://127.0.0.1:8765') as unknown as {
client: {
defaults: {
httpAgent?: { options?: { keepAlive?: boolean } };
httpsAgent?: { options?: { keepAlive?: boolean } };
};
};
};
assert.equal(client.client.defaults.httpAgent?.options?.keepAlive, false);
assert.equal(client.client.defaults.httpsAgent?.options?.keepAlive, false);
});
test('AnkiConnectClient includes action name in retry logs', async () => {
const client = new AnkiConnectClient('http://127.0.0.1:8765') as unknown as {
client: { post: (url: string, body: unknown, options: unknown) => Promise<unknown> };
sleep: (ms: number) => Promise<void>;
};
let shouldFail = true;
client.client = {
post: async () => {
if (shouldFail) {
shouldFail = false;
const error = Object.assign(new Error('socket hang up'), { code: 'ECONNRESET' });
throw error;
}
return { data: { result: [], error: null } };
},
};
client.sleep = async () => undefined;
const originalInfo = console.info;
const messages: string[] = [];
try {
console.info = (...args: unknown[]) => {
messages.push(args.map((value) => String(value)).join(' '));
};
await (client as unknown as AnkiConnectClient).invoke('notesInfo', { notes: [1] });
assert.match(messages.join('\n'), /AnkiConnect notesInfo retry 1\/3 after 200ms delay/);
} finally {
console.info = originalInfo;
}
});


@@ -43,7 +43,7 @@ export class AnkiConnectClient {
constructor(url: string) {
const httpAgent = new http.Agent({
-keepAlive: true,
+keepAlive: false,
keepAliveMsecs: 1000,
maxSockets: 5,
maxFreeSockets: 2,
@@ -51,7 +51,7 @@ export class AnkiConnectClient {
});
const httpsAgent = new https.Agent({
-keepAlive: true,
+keepAlive: false,
keepAliveMsecs: 1000,
maxSockets: 5,
maxFreeSockets: 2,
@@ -106,7 +106,7 @@ export class AnkiConnectClient {
try {
if (attempt > 0) {
const delay = Math.min(this.backoffMs * Math.pow(2, attempt - 1), this.maxBackoffMs);
-log.info(`AnkiConnect retry ${attempt}/${maxRetries} after ${delay}ms delay`);
+log.info(`AnkiConnect ${action} retry ${attempt}/${maxRetries} after ${delay}ms delay`);
await this.sleep(delay);
}


@@ -199,6 +199,25 @@ export class AnkiIntegration {
});
}
private recordCardsMinedSafely(
count: number,
noteIds: number[] | undefined,
source: string,
): void {
if (!this.recordCardsMinedCallback) {
return;
}
try {
this.recordCardsMinedCallback(count, noteIds);
} catch (error) {
log.warn(
`recordCardsMined callback failed during ${source}:`,
(error as Error).message,
);
}
}
private createKnownWordCache(knownWordCacheStatePath?: string): KnownWordCacheManager {
return new KnownWordCacheManager({
client: {
@@ -221,7 +240,7 @@ export class AnkiIntegration {
shouldAutoUpdateNewCards: () => this.config.behavior?.autoUpdateNewCards !== false,
processNewCard: (noteId) => this.processNewCard(noteId),
recordCardsAdded: (count, noteIds) => {
-this.recordCardsMinedCallback?.(count, noteIds);
+this.recordCardsMinedSafely(count, noteIds, 'polling');
},
isUpdateInProgress: () => this.updateInProgress,
setUpdateInProgress: (value) => {
@@ -245,7 +264,7 @@ export class AnkiIntegration {
shouldAutoUpdateNewCards: () => this.config.behavior?.autoUpdateNewCards !== false,
processNewCard: (noteId: number) => this.processNewCard(noteId),
recordCardsAdded: (count, noteIds) => {
-this.recordCardsMinedCallback?.(count, noteIds);
+this.recordCardsMinedSafely(count, noteIds, 'proxy');
},
getDeck: () => this.config.deck,
findNotes: async (query, options) =>
@@ -344,6 +363,9 @@ export class AnkiIntegration {
trackLastAddedNoteId: (noteId) => {
this.previousNoteIds.add(noteId);
},
recordCardsMinedCallback: (count, noteIds) => {
this.recordCardsMinedSafely(count, noteIds, 'card creation');
},
});
}
@@ -1048,10 +1070,6 @@ export class AnkiIntegration {
return getConfiguredWordFieldCandidates(this.config);
}
-private getPreferredWordValue(fields: Record<string, string>): string {
-return getPreferredWordValueFromExtractedFields(fields, this.config);
-}
private async getAnimatedImageLeadInSeconds(noteInfo: NoteInfo): Promise<number> {
return resolveAnimatedImageLeadInSeconds({
config: this.config,


@@ -1,4 +1,6 @@
import assert from 'node:assert/strict';
import http from 'node:http';
import { once } from 'node:events';
import test from 'node:test';
import { AnkiConnectProxyServer } from './anki-connect-proxy';
@@ -322,6 +324,83 @@ test('proxy fallback-enqueues latest note for addNote responses without note IDs
assert.deepEqual(recordedCards, [1]);
});
test('proxy returns addNote response without waiting for background enrichment', async () => {
const processed: number[] = [];
let releaseProcessing: (() => void) | undefined;
const processingGate = new Promise<void>((resolve) => {
releaseProcessing = resolve;
});
const upstream = http.createServer((req, res) => {
assert.equal(req.method, 'POST');
res.statusCode = 200;
res.setHeader('content-type', 'application/json');
res.end(JSON.stringify({ result: 42, error: null }));
});
upstream.listen(0, '127.0.0.1');
await once(upstream, 'listening');
const upstreamAddress = upstream.address();
assert.ok(upstreamAddress && typeof upstreamAddress === 'object');
const upstreamPort = upstreamAddress.port;
const proxy = new AnkiConnectProxyServer({
shouldAutoUpdateNewCards: () => true,
processNewCard: async (noteId) => {
processed.push(noteId);
await processingGate;
},
logInfo: () => undefined,
logWarn: () => undefined,
logError: () => undefined,
});
try {
proxy.start({
host: '127.0.0.1',
port: 0,
upstreamUrl: `http://127.0.0.1:${upstreamPort}`,
});
const proxyServer = (
proxy as unknown as {
server: http.Server | null;
}
).server;
assert.ok(proxyServer);
if (!proxyServer.listening) {
await once(proxyServer, 'listening');
}
const proxyAddress = proxyServer.address();
assert.ok(proxyAddress && typeof proxyAddress === 'object');
const proxyPort = proxyAddress.port;
const response = await Promise.race([
fetch(`http://127.0.0.1:${proxyPort}`, {
method: 'POST',
headers: {
'content-type': 'application/json',
},
body: JSON.stringify({ action: 'addNote', version: 6, params: {} }),
}),
new Promise<never>((_, reject) => {
setTimeout(() => reject(new Error('Timed out waiting for proxy response')), 500);
}),
]);
assert.equal(response.status, 200);
assert.deepEqual(await response.json(), { result: 42, error: null });
await waitForCondition(() => processed.length === 1);
assert.deepEqual(processed, [42]);
} finally {
if (releaseProcessing) {
releaseProcessing();
}
proxy.stop();
upstream.close();
await once(upstream, 'close');
}
});
test('proxy detects self-referential loop configuration', () => {
const proxy = new AnkiConnectProxyServer({
shouldAutoUpdateNewCards: () => true,


@@ -0,0 +1,285 @@
import assert from 'node:assert/strict';
import test from 'node:test';
import { CardCreationService } from './card-creation';
import type { AnkiConnectConfig } from '../types';
test('CardCreationService counts locally created sentence cards', async () => {
const minedCards: Array<{ count: number; noteIds?: number[] }> = [];
const service = new CardCreationService({
getConfig: () =>
({
deck: 'Mining',
fields: {
sentence: 'Sentence',
audio: 'SentenceAudio',
},
media: {
generateAudio: false,
generateImage: false,
},
behavior: {},
ai: false,
}) as AnkiConnectConfig,
getAiConfig: () => ({}),
getTimingTracker: () => ({}) as never,
getMpvClient: () =>
({
currentVideoPath: '/video.mp4',
currentSubText: '字幕',
currentSubStart: 1,
currentSubEnd: 2,
currentTimePos: 1.5,
currentAudioStreamIndex: 0,
}) as never,
client: {
addNote: async () => 42,
addTags: async () => undefined,
notesInfo: async () => [],
updateNoteFields: async () => undefined,
storeMediaFile: async () => undefined,
findNotes: async () => [],
retrieveMediaFile: async () => '',
},
mediaGenerator: {
generateAudio: async () => null,
generateScreenshot: async () => null,
generateAnimatedImage: async () => null,
},
showOsdNotification: () => undefined,
showUpdateResult: () => undefined,
showStatusNotification: () => undefined,
showNotification: async () => undefined,
beginUpdateProgress: () => undefined,
endUpdateProgress: () => undefined,
withUpdateProgress: async (_message, action) => action(),
resolveConfiguredFieldName: () => null,
resolveNoteFieldName: () => null,
getAnimatedImageLeadInSeconds: async () => 0,
extractFields: () => ({}),
processSentence: (sentence) => sentence,
setCardTypeFields: () => undefined,
mergeFieldValue: (_existing, newValue) => newValue,
formatMiscInfoPattern: () => '',
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: false,
kikuFieldGrouping: 'disabled',
kikuDeleteDuplicateInAuto: false,
}),
getFallbackDurationSeconds: () => 10,
appendKnownWordsFromNoteInfo: () => undefined,
isUpdateInProgress: () => false,
setUpdateInProgress: () => undefined,
trackLastAddedNoteId: () => undefined,
recordCardsMinedCallback: (count, noteIds) => {
minedCards.push({ count, noteIds });
},
});
const created = await service.createSentenceCard('テスト', 0, 1);
assert.equal(created, true);
assert.deepEqual(minedCards, [{ count: 1, noteIds: [42] }]);
});
test('CardCreationService keeps updating after trackLastAddedNoteId throws', async () => {
const calls = {
notesInfo: 0,
updateNoteFields: 0,
};
const service = new CardCreationService({
getConfig: () =>
({
deck: 'Mining',
fields: {
sentence: 'Sentence',
audio: 'SentenceAudio',
},
media: {
generateAudio: false,
generateImage: false,
},
behavior: {},
ai: false,
}) as AnkiConnectConfig,
getAiConfig: () => ({}),
getTimingTracker: () => ({}) as never,
getMpvClient: () =>
({
currentVideoPath: '/video.mp4',
currentSubText: '字幕',
currentSubStart: 1,
currentSubEnd: 2,
currentTimePos: 1.5,
currentAudioStreamIndex: 0,
}) as never,
client: {
addNote: async () => 42,
addTags: async () => undefined,
notesInfo: async () => {
calls.notesInfo += 1;
return [
{
noteId: 42,
fields: {
Sentence: { value: 'existing' },
},
},
];
},
updateNoteFields: async () => {
calls.updateNoteFields += 1;
},
storeMediaFile: async () => undefined,
findNotes: async () => [],
retrieveMediaFile: async () => '',
},
mediaGenerator: {
generateAudio: async () => null,
generateScreenshot: async () => null,
generateAnimatedImage: async () => null,
},
showOsdNotification: () => undefined,
showUpdateResult: () => undefined,
showStatusNotification: () => undefined,
showNotification: async () => undefined,
beginUpdateProgress: () => undefined,
endUpdateProgress: () => undefined,
withUpdateProgress: async (_message, action) => action(),
resolveConfiguredFieldName: () => null,
resolveNoteFieldName: () => null,
getAnimatedImageLeadInSeconds: async () => 0,
extractFields: () => ({}),
processSentence: (sentence) => sentence,
setCardTypeFields: (updatedFields) => {
updatedFields.CardType = 'sentence';
},
mergeFieldValue: (_existing, newValue) => newValue,
formatMiscInfoPattern: () => '',
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: false,
kikuFieldGrouping: 'disabled',
kikuDeleteDuplicateInAuto: false,
}),
getFallbackDurationSeconds: () => 10,
appendKnownWordsFromNoteInfo: () => undefined,
isUpdateInProgress: () => false,
setUpdateInProgress: () => undefined,
trackLastAddedNoteId: () => {
throw new Error('track failed');
},
});
const created = await service.createSentenceCard('テスト', 0, 1);
assert.equal(created, true);
assert.equal(calls.notesInfo, 1);
assert.equal(calls.updateNoteFields, 1);
});
test('CardCreationService keeps updating after recordCardsMinedCallback throws', async () => {
const calls = {
notesInfo: 0,
updateNoteFields: 0,
};
const service = new CardCreationService({
getConfig: () =>
({
deck: 'Mining',
fields: {
sentence: 'Sentence',
audio: 'SentenceAudio',
},
media: {
generateAudio: false,
generateImage: false,
},
behavior: {},
ai: false,
}) as AnkiConnectConfig,
getAiConfig: () => ({}),
getTimingTracker: () => ({}) as never,
getMpvClient: () =>
({
currentVideoPath: '/video.mp4',
currentSubText: '字幕',
currentSubStart: 1,
currentSubEnd: 2,
currentTimePos: 1.5,
currentAudioStreamIndex: 0,
}) as never,
client: {
addNote: async () => 42,
addTags: async () => undefined,
notesInfo: async () => {
calls.notesInfo += 1;
return [
{
noteId: 42,
fields: {
Sentence: { value: 'existing' },
},
},
];
},
updateNoteFields: async () => {
calls.updateNoteFields += 1;
},
storeMediaFile: async () => undefined,
findNotes: async () => [],
retrieveMediaFile: async () => '',
},
mediaGenerator: {
generateAudio: async () => null,
generateScreenshot: async () => null,
generateAnimatedImage: async () => null,
},
showOsdNotification: () => undefined,
showUpdateResult: () => undefined,
showStatusNotification: () => undefined,
showNotification: async () => undefined,
beginUpdateProgress: () => undefined,
endUpdateProgress: () => undefined,
withUpdateProgress: async (_message, action) => action(),
resolveConfiguredFieldName: () => null,
resolveNoteFieldName: () => null,
getAnimatedImageLeadInSeconds: async () => 0,
extractFields: () => ({}),
processSentence: (sentence) => sentence,
setCardTypeFields: (updatedFields) => {
updatedFields.CardType = 'sentence';
},
mergeFieldValue: (_existing, newValue) => newValue,
formatMiscInfoPattern: () => '',
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: false,
kikuFieldGrouping: 'disabled',
kikuDeleteDuplicateInAuto: false,
}),
getFallbackDurationSeconds: () => 10,
appendKnownWordsFromNoteInfo: () => undefined,
isUpdateInProgress: () => false,
setUpdateInProgress: () => undefined,
recordCardsMinedCallback: () => {
throw new Error('record failed');
},
});
const created = await service.createSentenceCard('テスト', 0, 1);
assert.equal(created, true);
assert.equal(calls.notesInfo, 1);
assert.equal(calls.updateNoteFields, 1);
});


@@ -110,6 +110,7 @@ interface CardCreationDeps {
isUpdateInProgress: () => boolean;
setUpdateInProgress: (value: boolean) => void;
trackLastAddedNoteId?: (noteId: number) => void;
recordCardsMinedCallback?: (count: number, noteIds?: number[]) => void;
}
export class CardCreationService {
@@ -550,13 +551,24 @@ export class CardCreationService {
this.getConfiguredAnkiTags(),
);
log.info('Created sentence card:', noteId);
-this.deps.trackLastAddedNoteId?.(noteId);
} catch (error) {
log.error('Failed to create sentence card:', (error as Error).message);
this.deps.showUpdateResult(`Sentence card failed: ${(error as Error).message}`, false);
return false;
}
+try {
+this.deps.trackLastAddedNoteId?.(noteId);
+} catch (error) {
+log.warn('Failed to track last added note:', (error as Error).message);
+}
+try {
+this.deps.recordCardsMinedCallback?.(1, [noteId]);
+} catch (error) {
+log.warn('Failed to record mined card:', (error as Error).message);
+}
try {
const noteInfoResult = await this.deps.client.notesInfo([noteId]);
const noteInfos = noteInfoResult as CardCreationNoteInfo[];


@@ -7,16 +7,59 @@ import path from 'node:path';
import type { AnkiConnectConfig } from '../types';
import { KnownWordCacheManager } from './known-word-cache';
async function waitForCondition(
condition: () => boolean,
timeoutMs = 500,
intervalMs = 10,
): Promise<void> {
const startedAt = Date.now();
while (Date.now() - startedAt < timeoutMs) {
if (condition()) {
return;
}
await new Promise((resolve) => setTimeout(resolve, intervalMs));
}
throw new Error('Timed out waiting for condition');
}
function createKnownWordCacheHarness(config: AnkiConnectConfig): {
manager: KnownWordCacheManager;
calls: {
findNotes: number;
notesInfo: number;
};
statePath: string;
clientState: {
findNotesResult: number[];
notesInfoResult: Array<{ noteId: number; fields: Record<string, { value: string }> }>;
findNotesByQuery: Map<string, number[]>;
};
cleanup: () => void;
} {
const stateDir = fs.mkdtempSync(path.join(os.tmpdir(), 'subminer-known-word-cache-'));
const statePath = path.join(stateDir, 'known-words-cache.json');
const calls = {
findNotes: 0,
notesInfo: 0,
};
const clientState = {
findNotesResult: [] as number[],
notesInfoResult: [] as Array<{ noteId: number; fields: Record<string, { value: string }> }>,
findNotesByQuery: new Map<string, number[]>(),
};
const manager = new KnownWordCacheManager({
client: {
-findNotes: async () => [],
-notesInfo: async () => [],
+findNotes: async (query) => {
+calls.findNotes += 1;
+if (clientState.findNotesByQuery.has(query)) {
+return clientState.findNotesByQuery.get(query) ?? [];
+}
+return clientState.findNotesResult;
+},
+notesInfo: async (noteIds) => {
+calls.notesInfo += 1;
+return clientState.notesInfoResult.filter((note) => noteIds.includes(note.noteId));
+},
},
getConfig: () => config,
knownWordCacheStatePath: statePath,
@@ -25,12 +68,99 @@ function createKnownWordCacheHarness(config: AnkiConnectConfig): {
return {
manager,
calls,
statePath,
clientState,
cleanup: () => {
fs.rmSync(stateDir, { recursive: true, force: true });
},
};
}
test('KnownWordCacheManager startLifecycle keeps fresh persisted cache without immediate refresh', async () => {
const config: AnkiConnectConfig = {
knownWords: {
highlightEnabled: true,
refreshMinutes: 60,
},
};
const { manager, calls, statePath, cleanup } = createKnownWordCacheHarness(config);
try {
fs.writeFileSync(
statePath,
JSON.stringify({
version: 2,
refreshedAtMs: Date.now(),
scope: '{"refreshMinutes":60,"scope":"is:note","fieldsWord":""}',
words: ['猫'],
notes: {
'1': ['猫'],
},
}),
'utf-8',
);
manager.startLifecycle();
await new Promise((resolve) => setTimeout(resolve, 25));
assert.equal(manager.isKnownWord('猫'), true);
assert.equal(calls.findNotes, 0);
assert.equal(calls.notesInfo, 0);
} finally {
manager.stopLifecycle();
cleanup();
}
});
test('KnownWordCacheManager startLifecycle immediately refreshes stale persisted cache', async () => {
const config: AnkiConnectConfig = {
fields: {
word: 'Word',
},
knownWords: {
highlightEnabled: true,
refreshMinutes: 1,
},
};
const { manager, calls, statePath, clientState, cleanup } = createKnownWordCacheHarness(config);
try {
fs.writeFileSync(
statePath,
JSON.stringify({
version: 2,
refreshedAtMs: Date.now() - 61_000,
scope: '{"refreshMinutes":1,"scope":"is:note","fieldsWord":"Word"}',
words: ['猫'],
notes: {
'1': ['猫'],
},
}),
'utf-8',
);
clientState.findNotesResult = [1];
clientState.notesInfoResult = [
{
noteId: 1,
fields: {
Word: { value: '犬' },
},
},
];
manager.startLifecycle();
await waitForCondition(() => calls.findNotes === 1 && calls.notesInfo === 1);
assert.equal(manager.isKnownWord('猫'), false);
assert.equal(manager.isKnownWord('犬'), true);
} finally {
manager.stopLifecycle();
cleanup();
}
});
test('KnownWordCacheManager invalidates persisted cache when fields.word changes', () => {
const config: AnkiConnectConfig = {
deck: 'Mining',
@@ -69,6 +199,200 @@ test('KnownWordCacheManager invalidates persisted cache when fields.word changes
}
});
test('KnownWordCacheManager refresh incrementally reconciles deleted and edited note words', async () => {
const config: AnkiConnectConfig = {
fields: {
word: 'Word',
},
knownWords: {
highlightEnabled: true,
},
};
const { manager, statePath, clientState, cleanup } = createKnownWordCacheHarness(config);
try {
fs.writeFileSync(
statePath,
JSON.stringify({
version: 2,
refreshedAtMs: 1,
scope: '{"refreshMinutes":1440,"scope":"is:note","fieldsWord":"Word"}',
words: ['猫', '犬'],
notes: {
'1': ['猫'],
'2': ['犬'],
},
}),
'utf-8',
);
(
manager as unknown as {
loadKnownWordCacheState: () => void;
}
).loadKnownWordCacheState();
clientState.findNotesResult = [1];
clientState.notesInfoResult = [
{
noteId: 1,
fields: {
Word: { value: '鳥' },
},
},
];
await manager.refresh(true);
assert.equal(manager.isKnownWord('猫'), false);
assert.equal(manager.isKnownWord('犬'), false);
assert.equal(manager.isKnownWord('鳥'), true);
const persisted = JSON.parse(fs.readFileSync(statePath, 'utf-8')) as {
version: number;
words: string[];
notes?: Record<string, string[]>;
};
assert.equal(persisted.version, 2);
assert.deepEqual(persisted.words.sort(), ['鳥']);
assert.deepEqual(persisted.notes, {
'1': ['鳥'],
});
} finally {
cleanup();
}
});
test('KnownWordCacheManager skips malformed note info without fields', async () => {
const config: AnkiConnectConfig = {
fields: {
word: 'Word',
},
knownWords: {
highlightEnabled: true,
},
};
const { manager, clientState, cleanup } = createKnownWordCacheHarness(config);
try {
clientState.findNotesResult = [1, 2];
clientState.notesInfoResult = [
{
noteId: 1,
fields: undefined as unknown as Record<string, { value: string }>,
},
{
noteId: 2,
fields: {
Word: { value: '猫' },
},
},
];
await manager.refresh(true);
assert.equal(manager.isKnownWord('猫'), true);
assert.equal(manager.isKnownWord('犬'), false);
} finally {
cleanup();
}
});
test('KnownWordCacheManager preserves cache state key captured before refresh work', async () => {
const config: AnkiConnectConfig = {
fields: {
word: 'Word',
},
knownWords: {
highlightEnabled: true,
refreshMinutes: 1,
},
};
const stateDir = fs.mkdtempSync(path.join(os.tmpdir(), 'subminer-known-word-cache-key-'));
const statePath = path.join(stateDir, 'known-words-cache.json');
let notesInfoStarted = false;
let releaseNotesInfo!: () => void;
const notesInfoGate = new Promise<void>((resolve) => {
releaseNotesInfo = resolve;
});
const manager = new KnownWordCacheManager({
client: {
findNotes: async () => [1],
notesInfo: async () => {
notesInfoStarted = true;
await notesInfoGate;
return [
{
noteId: 1,
fields: {
Word: { value: '猫' },
},
},
];
},
},
getConfig: () => config,
knownWordCacheStatePath: statePath,
showStatusNotification: () => undefined,
});
try {
const refreshPromise = manager.refresh(true);
await waitForCondition(() => notesInfoStarted);
config.fields = {
...config.fields,
word: 'Expression',
};
releaseNotesInfo();
await refreshPromise;
const persisted = JSON.parse(fs.readFileSync(statePath, 'utf-8')) as {
scope: string;
words: string[];
};
assert.equal(
persisted.scope,
'{"refreshMinutes":1,"scope":"is:note","fieldsWord":"Word"}',
);
assert.deepEqual(persisted.words, ['猫']);
} finally {
fs.rmSync(stateDir, { recursive: true, force: true });
}
});
test('KnownWordCacheManager does not borrow fields from other decks during refresh', async () => {
const config: AnkiConnectConfig = {
knownWords: {
highlightEnabled: true,
decks: {
Mining: [],
Reading: ['AltWord'],
},
},
};
const { manager, clientState, cleanup } = createKnownWordCacheHarness(config);
try {
clientState.findNotesByQuery.set('deck:"Mining"', [1]);
clientState.findNotesByQuery.set('deck:"Reading"', []);
clientState.notesInfoResult = [
{
noteId: 1,
fields: {
AltWord: { value: '猫' },
},
},
];
await manager.refresh(true);
assert.equal(manager.isKnownWord('猫'), false);
} finally {
cleanup();
}
});
test('KnownWordCacheManager invalidates persisted cache when per-deck fields change', () => {
const config: AnkiConnectConfig = {
fields: {
@@ -110,3 +434,102 @@ test('KnownWordCacheManager invalidates persisted cache when per-deck fields cha
cleanup();
}
});
test('KnownWordCacheManager preserves deck-specific field mappings during refresh', async () => {
const config: AnkiConnectConfig = {
knownWords: {
highlightEnabled: true,
decks: {
Mining: ['Expression'],
Reading: ['Word'],
},
},
};
const { manager, clientState, cleanup } = createKnownWordCacheHarness(config);
try {
clientState.findNotesByQuery.set('deck:"Mining"', [1]);
clientState.findNotesByQuery.set('deck:"Reading"', [2]);
clientState.notesInfoResult = [
{
noteId: 1,
fields: {
Expression: { value: '猫' },
Word: { value: 'should-not-count' },
},
},
{
noteId: 2,
fields: {
Word: { value: '犬' },
Expression: { value: 'also-ignored' },
},
},
];
await manager.refresh(true);
assert.equal(manager.isKnownWord('猫'), true);
assert.equal(manager.isKnownWord('犬'), true);
assert.equal(manager.isKnownWord('should-not-count'), false);
assert.equal(manager.isKnownWord('also-ignored'), false);
} finally {
cleanup();
}
});
test('KnownWordCacheManager uses the current deck fields for immediate append', () => {
const config: AnkiConnectConfig = {
deck: 'Mining',
fields: {
word: 'Word',
},
knownWords: {
highlightEnabled: true,
decks: {
Mining: ['Expression'],
Reading: ['Word'],
},
},
};
const { manager, cleanup } = createKnownWordCacheHarness(config);
try {
manager.appendFromNoteInfo({
noteId: 1,
fields: {
Expression: { value: '猫' },
Word: { value: 'should-not-count' },
},
});
assert.equal(manager.isKnownWord('猫'), true);
assert.equal(manager.isKnownWord('should-not-count'), false);
} finally {
cleanup();
}
});
test('KnownWordCacheManager skips immediate append when addMinedWordsImmediately is disabled', () => {
const config: AnkiConnectConfig = {
knownWords: {
highlightEnabled: true,
addMinedWordsImmediately: false,
},
};
const { manager, statePath, cleanup } = createKnownWordCacheHarness(config);
try {
manager.appendFromNoteInfo({
noteId: 1,
fields: {
Expression: { value: '猫' },
},
});
assert.equal(manager.isKnownWord('猫'), false);
assert.equal(fs.existsSync(statePath), false);
} finally {
cleanup();
}
});


@@ -64,13 +64,23 @@ export interface KnownWordCacheNoteInfo {
fields: Record<string, { value: string }>;
}
-interface KnownWordCacheState {
+interface KnownWordCacheStateV1 {
readonly version: 1;
readonly refreshedAtMs: number;
readonly scope: string;
readonly words: string[];
}
interface KnownWordCacheStateV2 {
readonly version: 2;
readonly refreshedAtMs: number;
readonly scope: string;
readonly words: string[];
readonly notes: Record<string, string[]>;
}
type KnownWordCacheState = KnownWordCacheStateV1 | KnownWordCacheStateV2;
interface KnownWordCacheClient {
findNotes: (
query: string,
@@ -88,11 +98,19 @@ interface KnownWordCacheDeps {
showStatusNotification: (message: string) => void;
}
type KnownWordQueryScope = {
query: string;
fields: string[];
};
export class KnownWordCacheManager {
private knownWordsLastRefreshedAtMs = 0;
private knownWordsStateKey = '';
private knownWords: Set<string> = new Set();
private wordReferenceCounts = new Map<string, number>();
private noteWordsById = new Map<number, string[]>();
private knownWordsRefreshTimer: ReturnType<typeof setInterval> | null = null;
private knownWordsRefreshTimeout: ReturnType<typeof setTimeout> | null = null;
private isRefreshingKnownWords = false;
private readonly statePath: string;
@@ -133,14 +151,14 @@ export class KnownWordCacheManager {
);
this.loadKnownWordCacheState();
-void this.refreshKnownWords();
-const refreshIntervalMs = this.getKnownWordRefreshIntervalMs();
-this.knownWordsRefreshTimer = setInterval(() => {
-void this.refreshKnownWords();
-}, refreshIntervalMs);
+this.scheduleKnownWordRefreshLifecycle();
}
stopLifecycle(): void {
if (this.knownWordsRefreshTimeout) {
clearTimeout(this.knownWordsRefreshTimeout);
this.knownWordsRefreshTimeout = null;
}
if (this.knownWordsRefreshTimer) {
clearInterval(this.knownWordsRefreshTimer);
this.knownWordsRefreshTimer = null;
@@ -148,7 +166,7 @@ export class KnownWordCacheManager {
}
appendFromNoteInfo(noteInfo: KnownWordCacheNoteInfo): void {
if (!this.isKnownWordCacheEnabled()) {
if (!this.isKnownWordCacheEnabled() || !this.shouldAddMinedWordsImmediately()) {
return;
}
@@ -160,32 +178,31 @@ export class KnownWordCacheManager {
this.knownWordsStateKey = currentStateKey;
}
let addedCount = 0;
for (const rawWord of this.extractKnownWordsFromNoteInfo(noteInfo)) {
const normalized = this.normalizeKnownWordForLookup(rawWord);
if (!normalized || this.knownWords.has(normalized)) {
continue;
}
this.knownWords.add(normalized);
addedCount += 1;
const preferredFields = this.getImmediateAppendFields();
if (!preferredFields) {
return;
}
const nextWords = this.extractNormalizedKnownWordsFromNoteInfo(noteInfo, preferredFields);
const changed = this.replaceNoteSnapshot(noteInfo.noteId, nextWords);
if (!changed) {
return;
}
if (addedCount > 0) {
if (this.knownWordsLastRefreshedAtMs <= 0) {
this.knownWordsLastRefreshedAtMs = Date.now();
}
this.persistKnownWordCacheState();
log.info(
'Known-word cache updated in-session',
`added=${addedCount}`,
`noteId=${noteInfo.noteId}`,
`wordCount=${nextWords.length}`,
`scope=${getKnownWordCacheScopeForConfig(this.deps.getConfig())}`,
);
}
}
clearKnownWordCacheState(): void {
this.knownWords = new Set();
this.knownWordsLastRefreshedAtMs = 0;
this.clearInMemoryState();
this.knownWordsStateKey = this.getKnownWordCacheStateKey();
try {
if (fs.existsSync(this.statePath)) {
@@ -210,41 +227,43 @@ export class KnownWordCacheManager {
return;
}
const frozenStateKey = this.getKnownWordCacheStateKey();
this.isRefreshingKnownWords = true;
try {
const query = this.buildKnownWordsQuery();
log.debug('Refreshing known-word cache', `query=${query}`);
const noteIds = (await this.deps.client.findNotes(query, {
maxRetries: 0,
})) as number[];
const noteFieldsById = await this.fetchKnownWordNoteFieldsById();
const currentNoteIds = Array.from(noteFieldsById.keys()).sort((a, b) => a - b);
const nextKnownWords = new Set<string>();
if (noteIds.length > 0) {
const chunkSize = 50;
for (let i = 0; i < noteIds.length; i += chunkSize) {
const chunk = noteIds.slice(i, i + chunkSize);
const notesInfoResult = (await this.deps.client.notesInfo(chunk)) as unknown[];
const notesInfo = notesInfoResult as KnownWordCacheNoteInfo[];
for (const noteInfo of notesInfo) {
for (const word of this.extractKnownWordsFromNoteInfo(noteInfo)) {
const normalized = this.normalizeKnownWordForLookup(word);
if (normalized) {
nextKnownWords.add(normalized);
if (this.noteWordsById.size === 0) {
await this.rebuildFromCurrentNotes(currentNoteIds, noteFieldsById);
} else {
const currentNoteIdSet = new Set(currentNoteIds);
for (const noteId of Array.from(this.noteWordsById.keys())) {
if (!currentNoteIdSet.has(noteId)) {
this.removeNoteSnapshot(noteId);
}
}
if (currentNoteIds.length > 0) {
const noteInfos = await this.fetchKnownWordNotesInfo(currentNoteIds);
for (const noteInfo of noteInfos) {
this.replaceNoteSnapshot(
noteInfo.noteId,
this.extractNormalizedKnownWordsFromNoteInfo(
noteInfo,
noteFieldsById.get(noteInfo.noteId),
),
);
}
}
}
this.knownWords = nextKnownWords;
this.knownWordsLastRefreshedAtMs = Date.now();
this.knownWordsStateKey = this.getKnownWordCacheStateKey();
this.knownWordsStateKey = frozenStateKey;
this.persistKnownWordCacheState();
log.info(
'Known-word cache refreshed',
`noteCount=${noteIds.length}`,
`wordCount=${nextKnownWords.size}`,
`noteCount=${currentNoteIds.length}`,
`wordCount=${this.knownWords.size}`,
);
} catch (error) {
log.warn('Failed to refresh known-word cache:', (error as Error).message);
@@ -258,10 +277,19 @@ export class KnownWordCacheManager {
return this.deps.getConfig().knownWords?.highlightEnabled === true;
}
private shouldAddMinedWordsImmediately(): boolean {
return this.deps.getConfig().knownWords?.addMinedWordsImmediately !== false;
}
private getKnownWordRefreshIntervalMs(): number {
return getKnownWordCacheRefreshIntervalMinutes(this.deps.getConfig()) * 60_000;
}
private getDefaultKnownWordFields(): string[] {
const configuredWordField = getConfiguredWordFieldName(this.deps.getConfig());
return [...new Set([configuredWordField, 'Word', 'Reading', 'Word Reading'])];
}
private getKnownWordDecks(): string[] {
const configuredDecks = this.deps.getConfig().knownWords?.decks;
if (configuredDecks && typeof configuredDecks === 'object' && !Array.isArray(configuredDecks)) {
@@ -275,20 +303,69 @@ export class KnownWordCacheManager {
}
private getConfiguredFields(): string[] {
return this.getDefaultKnownWordFields();
}
private getImmediateAppendFields(): string[] | null {
const configuredDecks = this.deps.getConfig().knownWords?.decks;
if (configuredDecks && typeof configuredDecks === 'object' && !Array.isArray(configuredDecks)) {
const allFields = new Set<string>();
for (const fields of Object.values(configuredDecks)) {
if (Array.isArray(fields)) {
for (const f of fields) {
if (typeof f === 'string' && f.trim()) allFields.add(f.trim());
const trimmedDeckEntries = Object.entries(configuredDecks)
.map(([deckName, fields]) => [deckName.trim(), fields] as const)
.filter(([deckName]) => deckName.length > 0);
const currentDeck = this.deps.getConfig().deck?.trim();
const selectedDeckEntry =
currentDeck !== undefined && currentDeck.length > 0
? trimmedDeckEntries.find(([deckName]) => deckName === currentDeck) ?? null
: trimmedDeckEntries.length === 1
? trimmedDeckEntries[0] ?? null
: null;
if (!selectedDeckEntry) {
return null;
}
const deckFields = selectedDeckEntry[1];
if (Array.isArray(deckFields)) {
const normalizedFields = [
...new Set(
deckFields.map(String).map((field) => field.trim()).filter((field) => field.length > 0),
),
];
if (normalizedFields.length > 0) {
return normalizedFields;
}
}
return this.getDefaultKnownWordFields();
}
if (allFields.size > 0) return [...allFields];
return this.getConfiguredFields();
}
const configuredWordField = getConfiguredWordFieldName(this.deps.getConfig());
return [...new Set([configuredWordField, 'Word', 'Reading', 'Word Reading'])];
private getKnownWordQueryScopes(): KnownWordQueryScope[] {
const configuredDecks = this.deps.getConfig().knownWords?.decks;
if (configuredDecks && typeof configuredDecks === 'object' && !Array.isArray(configuredDecks)) {
const scopes: KnownWordQueryScope[] = [];
for (const [deckName, fields] of Object.entries(configuredDecks)) {
const trimmedDeckName = deckName.trim();
if (!trimmedDeckName) {
continue;
}
const normalizedFields = Array.isArray(fields)
? [...new Set(fields.map(String).map((field) => field.trim()).filter(Boolean))]
: [];
scopes.push({
query: `deck:"${escapeAnkiSearchValue(trimmedDeckName)}"`,
fields: normalizedFields.length > 0 ? normalizedFields : this.getDefaultKnownWordFields(),
});
}
if (scopes.length > 0) {
return scopes;
}
}
return [{ query: this.buildKnownWordsQuery(), fields: this.getDefaultKnownWordFields() }];
}
private buildKnownWordsQuery(): string {
@@ -322,64 +399,231 @@ export class KnownWordCacheManager {
return Date.now() - this.knownWordsLastRefreshedAtMs >= this.getKnownWordRefreshIntervalMs();
}
private async fetchKnownWordNoteFieldsById(): Promise<Map<number, string[]>> {
const scopes = this.getKnownWordQueryScopes();
const noteFieldsById = new Map<number, string[]>();
log.debug('Refreshing known-word cache', `queries=${scopes.map((scope) => scope.query).join(' | ')}`);
for (const scope of scopes) {
const noteIds = (await this.deps.client.findNotes(scope.query, {
maxRetries: 0,
})) as number[];
for (const noteId of noteIds) {
if (!Number.isInteger(noteId) || noteId <= 0) {
continue;
}
const existingFields = noteFieldsById.get(noteId) ?? [];
noteFieldsById.set(
noteId,
[...new Set([...existingFields, ...scope.fields])],
);
}
}
return noteFieldsById;
}
private scheduleKnownWordRefreshLifecycle(): void {
const refreshIntervalMs = this.getKnownWordRefreshIntervalMs();
const scheduleInterval = () => {
this.knownWordsRefreshTimer = setInterval(() => {
void this.refreshKnownWords();
}, refreshIntervalMs);
};
const initialDelayMs = this.getMsUntilNextRefresh();
this.knownWordsRefreshTimeout = setTimeout(() => {
this.knownWordsRefreshTimeout = null;
void this.refreshKnownWords();
scheduleInterval();
}, initialDelayMs);
}
private getMsUntilNextRefresh(): number {
if (this.knownWordsStateKey !== this.getKnownWordCacheStateKey()) {
return 0;
}
if (this.knownWordsLastRefreshedAtMs <= 0) {
return 0;
}
const remainingMs =
this.getKnownWordRefreshIntervalMs() - (Date.now() - this.knownWordsLastRefreshedAtMs);
return Math.max(0, remainingMs);
}
private async rebuildFromCurrentNotes(
noteIds: number[],
noteFieldsById: Map<number, string[]>,
): Promise<void> {
this.clearInMemoryState();
if (noteIds.length === 0) {
return;
}
const noteInfos = await this.fetchKnownWordNotesInfo(noteIds);
for (const noteInfo of noteInfos) {
this.replaceNoteSnapshot(
noteInfo.noteId,
this.extractNormalizedKnownWordsFromNoteInfo(noteInfo, noteFieldsById.get(noteInfo.noteId)),
);
}
}
private async fetchKnownWordNotesInfo(noteIds: number[]): Promise<KnownWordCacheNoteInfo[]> {
const noteInfos: KnownWordCacheNoteInfo[] = [];
const chunkSize = 50;
for (let i = 0; i < noteIds.length; i += chunkSize) {
const chunk = noteIds.slice(i, i + chunkSize);
const notesInfoResult = (await this.deps.client.notesInfo(chunk)) as unknown[];
const chunkInfos = notesInfoResult as KnownWordCacheNoteInfo[];
for (const noteInfo of chunkInfos) {
if (
!noteInfo ||
!Number.isInteger(noteInfo.noteId) ||
noteInfo.noteId <= 0 ||
typeof noteInfo.fields !== 'object' ||
noteInfo.fields === null ||
Array.isArray(noteInfo.fields)
) {
continue;
}
noteInfos.push(noteInfo);
}
}
return noteInfos;
}
private replaceNoteSnapshot(noteId: number, nextWords: string[]): boolean {
const normalizedWords = normalizeKnownWordList(nextWords);
const previousWords = this.noteWordsById.get(noteId) ?? [];
if (knownWordListsEqual(previousWords, normalizedWords)) {
return false;
}
this.removeWordsFromCounts(previousWords);
if (normalizedWords.length > 0) {
this.noteWordsById.set(noteId, normalizedWords);
this.addWordsToCounts(normalizedWords);
} else {
this.noteWordsById.delete(noteId);
}
return true;
}
private removeNoteSnapshot(noteId: number): void {
const previousWords = this.noteWordsById.get(noteId);
if (!previousWords) {
return;
}
this.noteWordsById.delete(noteId);
this.removeWordsFromCounts(previousWords);
}
private addWordsToCounts(words: string[]): void {
for (const word of words) {
const nextCount = (this.wordReferenceCounts.get(word) ?? 0) + 1;
this.wordReferenceCounts.set(word, nextCount);
this.knownWords.add(word);
}
}
private removeWordsFromCounts(words: string[]): void {
for (const word of words) {
const nextCount = (this.wordReferenceCounts.get(word) ?? 0) - 1;
if (nextCount > 0) {
this.wordReferenceCounts.set(word, nextCount);
} else {
this.wordReferenceCounts.delete(word);
this.knownWords.delete(word);
}
}
}
private clearInMemoryState(): void {
this.knownWords = new Set();
this.wordReferenceCounts = new Map();
this.noteWordsById = new Map();
this.knownWordsLastRefreshedAtMs = 0;
}
private loadKnownWordCacheState(): void {
try {
if (!fs.existsSync(this.statePath)) {
this.knownWords = new Set();
this.knownWordsLastRefreshedAtMs = 0;
this.clearInMemoryState();
this.knownWordsStateKey = this.getKnownWordCacheStateKey();
return;
}
const raw = fs.readFileSync(this.statePath, 'utf-8');
if (!raw.trim()) {
this.knownWords = new Set();
this.knownWordsLastRefreshedAtMs = 0;
this.clearInMemoryState();
this.knownWordsStateKey = this.getKnownWordCacheStateKey();
return;
}
const parsed = JSON.parse(raw) as unknown;
if (!this.isKnownWordCacheStateValid(parsed)) {
this.knownWords = new Set();
this.knownWordsLastRefreshedAtMs = 0;
this.clearInMemoryState();
this.knownWordsStateKey = this.getKnownWordCacheStateKey();
return;
}
if (parsed.scope !== this.getKnownWordCacheStateKey()) {
this.knownWords = new Set();
this.knownWordsLastRefreshedAtMs = 0;
this.clearInMemoryState();
this.knownWordsStateKey = this.getKnownWordCacheStateKey();
return;
}
const nextKnownWords = new Set<string>();
this.clearInMemoryState();
if (parsed.version === 2) {
for (const [noteIdKey, words] of Object.entries(parsed.notes)) {
const noteId = Number.parseInt(noteIdKey, 10);
if (!Number.isInteger(noteId) || noteId <= 0) {
continue;
}
const normalizedWords = normalizeKnownWordList(words);
if (normalizedWords.length === 0) {
continue;
}
this.noteWordsById.set(noteId, normalizedWords);
this.addWordsToCounts(normalizedWords);
}
} else {
for (const value of parsed.words) {
const normalized = this.normalizeKnownWordForLookup(value);
if (normalized) {
nextKnownWords.add(normalized);
if (!normalized) {
continue;
}
this.knownWords.add(normalized);
this.wordReferenceCounts.set(normalized, 1);
}
}
this.knownWords = nextKnownWords;
this.knownWordsLastRefreshedAtMs = parsed.refreshedAtMs;
this.knownWordsStateKey = parsed.scope;
} catch (error) {
log.warn('Failed to load known-word cache state:', (error as Error).message);
this.knownWords = new Set();
this.knownWordsLastRefreshedAtMs = 0;
this.clearInMemoryState();
this.knownWordsStateKey = this.getKnownWordCacheStateKey();
}
}
private persistKnownWordCacheState(): void {
try {
const state: KnownWordCacheState = {
version: 1,
const notes: Record<string, string[]> = {};
for (const [noteId, words] of this.noteWordsById.entries()) {
if (words.length > 0) {
notes[String(noteId)] = words;
}
}
const state: KnownWordCacheStateV2 = {
version: 2,
refreshedAtMs: this.knownWordsLastRefreshedAtMs,
scope: this.knownWordsStateKey,
words: Array.from(this.knownWords),
notes,
};
fs.writeFileSync(this.statePath, JSON.stringify(state), 'utf-8');
} catch (error) {
@@ -389,33 +633,52 @@ export class KnownWordCacheManager {
private isKnownWordCacheStateValid(value: unknown): value is KnownWordCacheState {
if (typeof value !== 'object' || value === null) return false;
const candidate = value as Partial<KnownWordCacheState>;
if (candidate.version !== 1) return false;
const candidate = value as Record<string, unknown>;
if (candidate.version !== 1 && candidate.version !== 2) return false;
if (typeof candidate.refreshedAtMs !== 'number') return false;
if (typeof candidate.scope !== 'string') return false;
if (!Array.isArray(candidate.words)) return false;
if (!candidate.words.every((entry) => typeof entry === 'string')) {
if (!candidate.words.every((entry: unknown) => typeof entry === 'string')) {
return false;
}
if (candidate.version === 2) {
if (
typeof candidate.notes !== 'object' ||
candidate.notes === null ||
Array.isArray(candidate.notes)
) {
return false;
}
if (
!Object.values(candidate.notes as Record<string, unknown>).every(
(entry) =>
Array.isArray(entry) && entry.every((word: unknown) => typeof word === 'string'),
)
) {
return false;
}
}
return true;
}
private extractKnownWordsFromNoteInfo(noteInfo: KnownWordCacheNoteInfo): string[] {
private extractNormalizedKnownWordsFromNoteInfo(
noteInfo: KnownWordCacheNoteInfo,
preferredFields = this.getConfiguredFields(),
): string[] {
const words: string[] = [];
const configuredFields = this.getConfiguredFields();
for (const preferredField of configuredFields) {
for (const preferredField of preferredFields) {
const fieldName = resolveFieldName(Object.keys(noteInfo.fields), preferredField);
if (!fieldName) continue;
const raw = noteInfo.fields[fieldName]?.value;
if (!raw) continue;
const extracted = this.normalizeRawKnownWordValue(raw);
if (extracted) {
words.push(extracted);
const normalized = this.normalizeKnownWordForLookup(raw);
if (normalized) {
words.push(normalized);
}
}
return words;
return normalizeKnownWordList(words);
}
private normalizeRawKnownWordValue(value: string): string {
@@ -430,6 +693,22 @@ export class KnownWordCacheManager {
}
}
function normalizeKnownWordList(words: string[]): string[] {
return [...new Set(words.map((word) => word.trim()).filter((word) => word.length > 0))].sort();
}
function knownWordListsEqual(left: string[], right: string[]): boolean {
if (left.length !== right.length) {
return false;
}
for (let index = 0; index < left.length; index += 1) {
if (left[index] !== right[index]) {
return false;
}
}
return true;
}
function resolveFieldName(availableFieldNames: string[], preferredName: string): string | null {
const exact = availableFieldNames.find((name) => name === preferredName);
if (exact) return exact;

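The `replaceNoteSnapshot`/`addWordsToCounts`/`removeWordsFromCounts` trio above implements reference-counted set membership: a word stays "known" while at least one note still contributes it. A self-contained sketch of that bookkeeping (names mirror the diff, but this is an illustration, not the project's module):

```typescript
// Sketch of reference-counted word membership: each note snapshot retains
// its words; a word leaves the known set when no note references it.
class WordRefCounter {
  private counts = new Map<string, number>();
  private noteWords = new Map<number, string[]>();
  readonly known = new Set<string>();

  replaceNote(noteId: number, words: string[]): void {
    for (const word of this.noteWords.get(noteId) ?? []) this.release(word);
    const unique = [...new Set(words)].sort();
    if (unique.length > 0) {
      this.noteWords.set(noteId, unique);
      for (const word of unique) this.retain(word);
    } else {
      this.noteWords.delete(noteId);
    }
  }

  private retain(word: string): void {
    this.counts.set(word, (this.counts.get(word) ?? 0) + 1);
    this.known.add(word);
  }

  private release(word: string): void {
    const next = (this.counts.get(word) ?? 0) - 1;
    if (next > 0) {
      this.counts.set(word, next);
    } else {
      // Last reference gone: drop the word from the known set too.
      this.counts.delete(word);
      this.known.delete(word);
    }
  }
}
```

The payoff over a plain `Set` is correct deletion: if two notes both contain 猫 and one note is removed, 猫 remains known until the second note is removed as well.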
View File

@@ -2,6 +2,7 @@ import test from 'node:test';
import assert from 'node:assert/strict';
import {
hasExplicitCommand,
isHeadlessInitialCommand,
parseArgs,
shouldRunSettingsOnlyStartup,
shouldStartApp,
@@ -101,7 +102,8 @@ test('hasExplicitCommand and shouldStartApp preserve command intent', () => {
const refreshKnownWords = parseArgs(['--refresh-known-words']);
assert.equal(refreshKnownWords.help, false);
assert.equal(hasExplicitCommand(refreshKnownWords), true);
assert.equal(shouldStartApp(refreshKnownWords), false);
assert.equal(shouldStartApp(refreshKnownWords), true);
assert.equal(isHeadlessInitialCommand(refreshKnownWords), true);
const settings = parseArgs(['--settings']);
assert.equal(settings.settings, true);

View File

@@ -376,6 +376,10 @@ export function hasExplicitCommand(args: CliArgs): boolean {
);
}
export function isHeadlessInitialCommand(args: CliArgs): boolean {
return args.refreshKnownWords;
}
export function shouldStartApp(args: CliArgs): boolean {
if (args.stop && !args.start) return false;
if (
@@ -391,6 +395,7 @@ export function shouldStartApp(args: CliArgs): boolean {
args.mineSentence ||
args.mineSentenceMultiple ||
args.updateLastCardFromClipboard ||
args.refreshKnownWords ||
args.toggleSecondarySub ||
args.triggerFieldGrouping ||
args.triggerSubsync ||

View File

@@ -19,7 +19,7 @@ test('printHelp includes configured texthooker port', () => {
assert.match(output, /default: 7777/);
assert.match(output, /--launch-mpv/);
assert.match(output, /--stats\s+Open the stats dashboard in your browser/);
assert.match(output, /--refresh-known-words/);
assert.doesNotMatch(output, /--refresh-known-words/);
assert.match(output, /--setup\s+Open first-run setup window/);
assert.match(output, /--anilist-status/);
assert.match(output, /--anilist-retry-queue/);

View File

@@ -35,7 +35,6 @@ ${B}Mining${R}
--trigger-field-grouping Run Kiku field grouping
--trigger-subsync Run subtitle sync
--toggle-secondary-sub Cycle secondary subtitle mode
--refresh-known-words Refresh known words cache
--open-runtime-options Open runtime options palette
${B}AniList${R}

View File

@@ -1435,7 +1435,8 @@ test('validates ankiConnect knownWords behavior values', () => {
"ankiConnect": {
"knownWords": {
"highlightEnabled": "yes",
"refreshMinutes": -5
"refreshMinutes": -5,
"addMinedWordsImmediately": "no"
}
}
}`,
@@ -1456,6 +1457,13 @@ test('validates ankiConnect knownWords behavior values', () => {
);
assert.ok(warnings.some((warning) => warning.path === 'ankiConnect.knownWords.highlightEnabled'));
assert.ok(warnings.some((warning) => warning.path === 'ankiConnect.knownWords.refreshMinutes'));
assert.equal(
config.ankiConnect.knownWords.addMinedWordsImmediately,
DEFAULT_CONFIG.ankiConnect.knownWords.addMinedWordsImmediately,
);
assert.ok(
warnings.some((warning) => warning.path === 'ankiConnect.knownWords.addMinedWordsImmediately'),
);
});
test('accepts valid ankiConnect knownWords behavior values', () => {
@@ -1466,7 +1474,8 @@ test('accepts valid ankiConnect knownWords behavior values', () => {
"ankiConnect": {
"knownWords": {
"highlightEnabled": true,
"refreshMinutes": 120
"refreshMinutes": 120,
"addMinedWordsImmediately": false
}
}
}`,
@@ -1478,6 +1487,7 @@ test('accepts valid ankiConnect knownWords behavior values', () => {
assert.equal(config.ankiConnect.knownWords.highlightEnabled, true);
assert.equal(config.ankiConnect.knownWords.refreshMinutes, 120);
assert.equal(config.ankiConnect.knownWords.addMinedWordsImmediately, false);
});
test('validates ankiConnect n+1 minimum sentence word count', () => {

View File

@@ -55,6 +55,7 @@ export const INTEGRATIONS_DEFAULT_CONFIG: Pick<
knownWords: {
highlightEnabled: false,
refreshMinutes: 1440,
addMinedWordsImmediately: true,
matchMode: 'headword',
decks: {},
color: '#a6da95',

View File

@@ -108,6 +108,12 @@ export function buildIntegrationConfigOptionRegistry(
defaultValue: defaultConfig.ankiConnect.knownWords.refreshMinutes,
description: 'Minutes between known-word cache refreshes.',
},
{
path: 'ankiConnect.knownWords.addMinedWordsImmediately',
kind: 'boolean',
defaultValue: defaultConfig.ankiConnect.knownWords.addMinedWordsImmediately,
description: 'Immediately append newly mined card words into the known-word cache.',
},
{
path: 'ankiConnect.nPlusOne.minSentenceWords',
kind: 'number',

View File

@@ -70,6 +70,20 @@ test('accepts knownWords.decks object format with field arrays', () => {
);
});
test('accepts knownWords.addMinedWordsImmediately boolean override', () => {
const { context, warnings } = makeContext({
knownWords: { addMinedWordsImmediately: false },
});
applyAnkiConnectResolution(context);
assert.equal(context.resolved.ankiConnect.knownWords.addMinedWordsImmediately, false);
assert.equal(
warnings.some((warning) => warning.path === 'ankiConnect.knownWords.addMinedWordsImmediately'),
false,
);
});
test('converts legacy knownWords.decks array to object with default fields', () => {
const { context, warnings } = makeContext({
knownWords: { decks: ['Core Deck'] },

View File

@@ -771,6 +771,24 @@ export function applyAnkiConnectResolution(context: ResolveContext): void {
DEFAULT_CONFIG.ankiConnect.knownWords.refreshMinutes;
}
const knownWordsAddMinedWordsImmediately = asBoolean(knownWordsConfig.addMinedWordsImmediately);
if (knownWordsAddMinedWordsImmediately !== undefined) {
context.resolved.ankiConnect.knownWords.addMinedWordsImmediately =
knownWordsAddMinedWordsImmediately;
} else if (knownWordsConfig.addMinedWordsImmediately !== undefined) {
context.warn(
'ankiConnect.knownWords.addMinedWordsImmediately',
knownWordsConfig.addMinedWordsImmediately,
context.resolved.ankiConnect.knownWords.addMinedWordsImmediately,
'Expected boolean.',
);
context.resolved.ankiConnect.knownWords.addMinedWordsImmediately =
DEFAULT_CONFIG.ankiConnect.knownWords.addMinedWordsImmediately;
} else {
context.resolved.ankiConnect.knownWords.addMinedWordsImmediately =
DEFAULT_CONFIG.ankiConnect.knownWords.addMinedWordsImmediately;
}
const nPlusOneMinSentenceWords = asNumber(nPlusOneConfig.minSentenceWords);
const hasValidNPlusOneMinSentenceWords =
nPlusOneMinSentenceWords !== undefined &&

View File

@@ -260,6 +260,12 @@ function createMockTracker(
totalActiveMin: 120,
totalCards: 0,
activeDays: 7,
totalTokensSeen: 80,
totalLookupCount: 5,
totalLookupHits: 4,
totalYomitanLookupCount: 5,
newWordsToday: 0,
newWordsThisWeek: 0,
}),
getSessionTimeline: async () => [],
getSessionEvents: async () => [],
@@ -337,6 +343,8 @@ describe('stats server API routes', () => {
assert.equal(body.hints.totalAnimeCompleted, 0);
assert.equal(body.hints.totalActiveMin, 120);
assert.equal(body.hints.activeDays, 7);
assert.equal(body.hints.totalTokensSeen, 80);
assert.equal(body.hints.totalYomitanLookupCount, 5);
});
it('GET /api/stats/sessions returns session list', async () => {
@@ -347,6 +355,39 @@ describe('stats server API routes', () => {
assert.ok(Array.isArray(body));
});
it('GET /api/stats/sessions enriches each session with known-word metrics when cache exists', async () => {
await withTempDir(async (dir) => {
const cachePath = path.join(dir, 'known-words.json');
fs.writeFileSync(
cachePath,
JSON.stringify({
version: 1,
words: ['する'],
}),
);
const app = createStatsApp(
createMockTracker({
getSessionWordsByLine: async (sessionId: number) =>
sessionId === 1
? [
{ lineIndex: 1, headword: 'する', occurrenceCount: 2 },
{ lineIndex: 2, headword: '未知', occurrenceCount: 1 },
]
: [],
}),
{ knownWordCachePath: cachePath },
);
const res = await app.request('/api/stats/sessions?limit=5');
assert.equal(res.status, 200);
const body = await res.json();
const first = body[0];
assert.equal(first.knownWordsSeen, 2);
assert.equal(first.knownWordRate, 2.5);
});
});
it('GET /api/stats/sessions/:id/events forwards event type filters to the tracker', async () => {
let seenSessionId = 0;
let seenLimit = 0;

View File

@@ -539,8 +539,21 @@ test('handleCliCommand runs refresh-known-words command', () => {
assert.ok(calls.includes('refreshKnownWords'));
});
test('handleCliCommand stops app after headless initial refresh-known-words completes', async () => {
const { deps, calls } = createDeps({
hasMainWindow: () => false,
});
handleCliCommand(makeArgs({ refreshKnownWords: true }), 'initial', deps);
await new Promise((resolve) => setImmediate(resolve));
assert.ok(calls.includes('refreshKnownWords'));
assert.ok(calls.includes('stopApp'));
});
test('handleCliCommand reports async refresh-known-words errors to OSD', async () => {
const { deps, calls, osd } = createDeps({
hasMainWindow: () => false,
refreshKnownWords: async () => {
throw new Error('refresh boom');
},
@@ -551,4 +564,5 @@ test('handleCliCommand reports async refresh-known-words errors to OSD', async (
assert.ok(calls.some((value) => value.startsWith('error:refreshKnownWords failed:')));
assert.ok(osd.some((value) => value.includes('Refresh known words failed: refresh boom')));
assert.ok(calls.includes('stopApp'));
});

View File

@@ -334,12 +334,18 @@ export function handleCliCommand(
'Update failed',
);
} else if (args.refreshKnownWords) {
runAsyncWithOsd(
() => deps.refreshKnownWords(),
deps,
'refreshKnownWords',
'Refresh known words failed',
);
const shouldStopAfterRun = source === 'initial' && !deps.hasMainWindow();
deps
.refreshKnownWords()
.catch((err) => {
deps.error('refreshKnownWords failed:', err);
deps.showMpvOsd(`Refresh known words failed: ${(err as Error).message}`);
})
.finally(() => {
if (shouldStopAfterRun) {
deps.stopApp();
}
});
} else if (args.toggleSecondarySub) {
deps.cycleSecondarySubMode();
} else if (args.triggerFieldGrouping) {

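The rewritten `refreshKnownWords` branch above leans on `.catch().finally()` ordering: the `.catch` handler absorbs the rejection (reporting it to the OSD), so the `.finally` step always runs and can stop the app when the command started headless. A minimal sketch of that control flow, with a hypothetical `deps` shape standing in for the real dependencies:

```typescript
// Sketch: report a refresh failure, but always stop the app afterwards
// when the command ran headless. Deps is a hypothetical stand-in shape.
interface Deps {
  refresh: () => Promise<void>;
  onError: (message: string) => void;
  stopApp: () => void;
}

function runHeadlessRefresh(deps: Deps, shouldStopAfterRun: boolean): Promise<void> {
  return deps
    .refresh()
    .catch((err) => {
      // .catch settles the chain as fulfilled, so .finally never re-throws.
      deps.onError(`Refresh known words failed: ${(err as Error).message}`);
    })
    .finally(() => {
      if (shouldStopAfterRun) deps.stopApp();
    });
}
```

Computing `shouldStopAfterRun` before the promise starts (as the diff does with `source === 'initial' && !deps.hasMainWindow()`) matters: it captures the headless state at invocation time rather than re-checking after an arbitrary delay.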
View File

@@ -657,6 +657,7 @@ test('startup finalizes stale active sessions and applies lifetime summaries', a
video_id,
started_at_ms,
status,
ended_media_ms,
CREATED_DATE,
LAST_UPDATE_DATE
) VALUES (
@@ -665,6 +666,7 @@ test('startup finalizes stale active sessions and applies lifetime summaries', a
1,
${startedAtMs},
1,
321000,
${startedAtMs},
${sampleMs}
);
@@ -709,7 +711,7 @@ test('startup finalizes stale active sessions and applies lifetime summaries', a
const sessionRow = restartedApi.db
.prepare(
`
SELECT ended_at_ms, status, active_watched_ms, tokens_seen, cards_mined
SELECT ended_at_ms, status, ended_media_ms, active_watched_ms, tokens_seen, cards_mined
FROM imm_sessions
WHERE session_id = 1
`,
@@ -717,6 +719,7 @@ test('startup finalizes stale active sessions and applies lifetime summaries', a
.get() as {
ended_at_ms: number | null;
status: number;
ended_media_ms: number | null;
active_watched_ms: number;
tokens_seen: number;
cards_mined: number;
@@ -751,6 +754,7 @@ test('startup finalizes stale active sessions and applies lifetime summaries', a
assert.ok(sessionRow);
assert.ok(Number(sessionRow?.ended_at_ms ?? 0) >= sampleMs);
assert.equal(sessionRow?.status, 2);
assert.equal(sessionRow?.ended_media_ms, 321_000);
assert.equal(sessionRow?.active_watched_ms, 4000);
assert.equal(sessionRow?.tokens_seen, 120);
assert.equal(sessionRow?.cards_mined, 2);
@@ -1230,6 +1234,41 @@ test('recordPlaybackPosition marks watched at 85% completion', async () => {
}
});
test('flushTelemetry checkpoints latest playback position on the active session row', async () => {
const dbPath = makeDbPath();
let tracker: ImmersionTrackerService | null = null;
try {
const Ctor = await loadTrackerCtor();
tracker = new Ctor({ dbPath });
tracker.handleMediaChange('/tmp/episode-progress-checkpoint.mkv', 'Episode Progress Checkpoint');
tracker.recordPlaybackPosition(91);
const privateApi = tracker as unknown as {
db: DatabaseSync;
sessionState: { sessionId: number } | null;
flushTelemetry: (force?: boolean) => void;
flushNow: () => void;
};
const sessionId = privateApi.sessionState?.sessionId;
assert.ok(sessionId);
privateApi.flushTelemetry(true);
privateApi.flushNow();
const row = privateApi.db
.prepare('SELECT ended_media_ms FROM imm_sessions WHERE session_id = ?')
.get(sessionId) as { ended_media_ms: number | null } | null;
assert.ok(row);
assert.equal(row?.ended_media_ms, 91_000);
} finally {
tracker?.destroy();
cleanupDbPath(dbPath);
}
});
test('deleteSession ignores the currently active session and keeps new writes flushable', async () => {
const dbPath = makeDbPath();
let tracker: ImmersionTrackerService | null = null;

View File

@@ -365,6 +365,12 @@ export class ImmersionTrackerService {
totalActiveMin: number;
totalCards: number;
activeDays: number;
totalTokensSeen: number;
totalLookupCount: number;
totalLookupHits: number;
totalYomitanLookupCount: number;
newWordsToday: number;
newWordsThisWeek: number;
}> {
return getQueryHints(this.db);
}
@@ -1063,6 +1069,7 @@ export class ImmersionTrackerService {
kind: 'telemetry',
sessionId: this.sessionState.sessionId,
sampleMs: Date.now(),
lastMediaMs: this.sessionState.lastMediaMs,
totalWatchedMs: this.sessionState.totalWatchedMs,
activeWatchedMs: this.sessionState.activeWatchedMs,
linesSeen: this.sessionState.linesSeen,

View File

@@ -139,6 +139,74 @@ test('getSessionSummaries returns sessionId and canonicalTitle', () => {
}
});
test('getAnimeEpisodes prefers the latest session media position when the latest session is still active', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const videoId = getOrCreateVideoRecord(db, 'local:/tmp/active-progress-episode.mkv', {
canonicalTitle: 'Active Progress Episode',
sourcePath: '/tmp/active-progress-episode.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const animeId = getOrCreateAnimeRecord(db, {
parsedTitle: 'Active Progress Anime',
canonicalTitle: 'Active Progress Anime',
anilistId: null,
titleRomaji: null,
titleEnglish: null,
titleNative: null,
metadataJson: null,
});
linkVideoToAnimeRecord(db, videoId, {
animeId,
parsedBasename: 'active-progress-episode.mkv',
parsedTitle: 'Active Progress Anime',
parsedSeason: 1,
parsedEpisode: 2,
parserSource: 'fallback',
parserConfidence: 1,
parseMetadataJson: '{"episode":2}',
});
const endedSessionId = startSessionRecord(db, videoId, 1_000_000).sessionId;
const activeSessionId = startSessionRecord(db, videoId, 1_010_000).sessionId;
db.prepare(
`
UPDATE imm_sessions
SET
ended_at_ms = ?,
status = 2,
ended_media_ms = ?,
active_watched_ms = ?,
LAST_UPDATE_DATE = ?
WHERE session_id = ?
`,
).run(1_005_000, 6_000, 3_000, 1_005_000, endedSessionId);
db.prepare(
`
UPDATE imm_sessions
SET
ended_media_ms = ?,
active_watched_ms = ?,
LAST_UPDATE_DATE = ?
WHERE session_id = ?
`,
).run(9_000, 4_000, 1_012_000, activeSessionId);
const [episode] = getAnimeEpisodes(db, animeId);
assert.ok(episode);
assert.equal(episode?.endedMediaMs, 9_000);
assert.equal(episode?.totalSessions, 2);
assert.equal(episode?.totalActiveMs, 7_000);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('getSessionTimeline returns the full session when no limit is provided', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
@@ -360,10 +428,7 @@ test('getTrendsDashboard returns chart-ready aggregated series', () => {
assert.equal(dashboard.activity.watchTime[0]?.value, 30);
assert.equal(dashboard.progress.watchTime[1]?.value, 75);
assert.equal(dashboard.progress.lookups[1]?.value, 18);
assert.equal(
dashboard.ratios.lookupsPerHundred[0]?.value,
+((8 / 120) * 100).toFixed(1),
);
assert.equal(dashboard.ratios.lookupsPerHundred[0]?.value, +((8 / 120) * 100).toFixed(1));
assert.equal(dashboard.animePerDay.watchTime[0]?.animeTitle, 'Trend Dashboard Anime');
assert.equal(dashboard.animeCumulative.watchTime[1]?.value, 75);
assert.equal(
@@ -409,6 +474,28 @@ test('getQueryHints reads all-time totals from lifetime summary', () => {
insert.run(10, 2, 1, 11, 0, 0, 3);
insert.run(9, 1, 1, 10, 0, 0, 1);
const videoId = getOrCreateVideoRecord(db, 'local:/tmp/query-hints.mkv', {
canonicalTitle: 'Query Hints Episode',
sourcePath: '/tmp/query-hints.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const { sessionId } = startSessionRecord(db, videoId, 1_000_000);
db.prepare(
`
UPDATE imm_sessions
SET
ended_at_ms = ?,
status = 2,
tokens_seen = ?,
yomitan_lookup_count = ?,
lookup_count = ?,
lookup_hits = ?,
LAST_UPDATE_DATE = ?
WHERE session_id = ?
`,
).run(1_060_000, 120, 8, 11, 7, 1_060_000, sessionId);
const hints = getQueryHints(db);
assert.equal(hints.totalSessions, 4);
assert.equal(hints.totalCards, 2);
@@ -416,6 +503,52 @@ test('getQueryHints reads all-time totals from lifetime summary', () => {
assert.equal(hints.activeDays, 9);
assert.equal(hints.totalEpisodesWatched, 11);
assert.equal(hints.totalAnimeCompleted, 22);
assert.equal(hints.totalTokensSeen, 120);
assert.equal(hints.totalYomitanLookupCount, 8);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('getQueryHints counts new words by distinct headword first-seen time', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const now = new Date();
const todayStartSec =
new Date(now.getFullYear(), now.getMonth(), now.getDate()).getTime() / 1000;
const oneHourAgo = todayStartSec + 3_600;
const twoDaysAgo = todayStartSec - 2 * 86_400;
db.prepare(
`
INSERT INTO imm_words (
headword, word, reading, part_of_speech, pos1, pos2, pos3, first_seen, last_seen, frequency
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
).run('知る', '知った', 'しった', 'verb', '動詞', '', '', oneHourAgo, oneHourAgo, 1);
db.prepare(
`
INSERT INTO imm_words (
headword, word, reading, part_of_speech, pos1, pos2, pos3, first_seen, last_seen, frequency
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
).run('知る', '知っている', 'しっている', 'verb', '動詞', '', '', oneHourAgo, oneHourAgo, 1);
db.prepare(
`
INSERT INTO imm_words (
headword, word, reading, part_of_speech, pos1, pos2, pos3, first_seen, last_seen, frequency
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
).run('猫', '猫', 'ねこ', 'noun', '名詞', '', '', twoDaysAgo, twoDaysAgo, 1);
const hints = getQueryHints(db);
assert.equal(hints.newWordsToday, 1);
assert.equal(hints.newWordsThisWeek, 2);
} finally {
db.close();
cleanupDbPath(dbPath);
@@ -1957,6 +2090,7 @@ test('anime/media detail and episode queries use ended-session metrics when tele
SET
ended_at_ms = ?,
status = 2,
ended_media_ms = ?,
active_watched_ms = ?,
cards_mined = ?,
tokens_seen = ?,
@@ -1966,9 +2100,9 @@ test('anime/media detail and episode queries use ended-session metrics when tele
WHERE session_id = ?
`,
);
updateSession.run(1_001_000, 3_000, 1, 10, 4, 3, now, s1);
updateSession.run(1_011_000, 4_000, 2, 20, 5, 4, now, s2);
updateSession.run(1_021_000, 5_000, 3, 30, 6, 5, now, s3);
updateSession.run(1_001_000, 2_500, 3_000, 1, 10, 4, 3, now, s1);
updateSession.run(1_011_000, 6_000, 4_000, 2, 20, 5, 4, now, s2);
updateSession.run(1_021_000, 8_000, 5_000, 3, 30, 6, 5, now, s3);
const animeDetail = getAnimeDetail(db, animeId);
assert.ok(animeDetail);
@@ -1979,6 +2113,7 @@ test('anime/media detail and episode queries use ended-session metrics when tele
assert.deepEqual(
episodes.map((row) => ({
videoId: row.videoId,
endedMediaMs: row.endedMediaMs,
totalSessions: row.totalSessions,
totalActiveMs: row.totalActiveMs,
totalCards: row.totalCards,
@@ -1987,6 +2122,7 @@ test('anime/media detail and episode queries use ended-session metrics when tele
[
{
videoId: episodeOne,
endedMediaMs: 6_000,
totalSessions: 2,
totalActiveMs: 7_000,
totalCards: 3,
@@ -1994,6 +2130,7 @@ test('anime/media detail and episode queries use ended-session metrics when tele
},
{
videoId: episodeTwo,
endedMediaMs: 8_000,
totalSessions: 1,
totalActiveMs: 5_000,
totalCards: 3,
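The `lookupsPerHundred` assertions in the hunks above expect the ratio rounded to one decimal place. A minimal TypeScript sketch of that rounding (the helper name is hypothetical; the formula mirrors the test's `+((8 / 120) * 100).toFixed(1)` expression):

```typescript
// Yomitan lookups per 100 words seen, rounded to one decimal place.
// The unary plus converts toFixed()'s string result back to a number.
function lookupsPerHundred(lookups: number, words: number): number {
  if (words === 0) return 0; // avoid NaN for sessions with no words
  return +((lookups / words) * 100).toFixed(1);
}
```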

View File

@@ -42,6 +42,7 @@ interface RetainedSessionRow {
videoId: number;
startedAtMs: number;
endedAtMs: number;
lastMediaMs: number | null;
totalWatchedMs: number;
activeWatchedMs: number;
linesSeen: number;
@@ -140,7 +141,7 @@ function toRebuildSessionState(row: RetainedSessionRow): SessionState {
startedAtMs: row.startedAtMs,
currentLineIndex: 0,
lastWallClockMs: row.endedAtMs,
lastMediaMs: null,
lastMediaMs: row.lastMediaMs,
lastPauseStartMs: null,
isPaused: false,
pendingTelemetry: false,
@@ -170,6 +171,7 @@ function getRetainedStaleActiveSessions(db: DatabaseSync): RetainedSessionRow[]
s.video_id AS videoId,
s.started_at_ms AS startedAtMs,
COALESCE(t.sample_ms, s.LAST_UPDATE_DATE, s.started_at_ms) AS endedAtMs,
s.ended_media_ms AS lastMediaMs,
COALESCE(t.total_watched_ms, s.total_watched_ms, 0) AS totalWatchedMs,
COALESCE(t.active_watched_ms, s.active_watched_ms, 0) AS activeWatchedMs,
COALESCE(t.lines_seen, s.lines_seen, 0) AS linesSeen,
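The COALESCE chain above resolves a stale session's end time from the newest telemetry sample, falling back to the row's last update and finally its start time. The same precedence spelled out in TypeScript (a hypothetical helper, shown only to make the fallback order explicit):

```typescript
// Mirror of COALESCE(t.sample_ms, s.LAST_UPDATE_DATE, s.started_at_ms):
// take the first non-null value, left to right.
function resolveEndedAtMs(
  sampleMs: number | null,
  lastUpdateMs: number | null,
  startedAtMs: number,
): number {
  return sampleMs ?? lastUpdateMs ?? startedAtMs;
}
```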

View File

@@ -480,8 +480,10 @@ export function getQueryHints(db: DatabaseSync): {
totalActiveMin: number;
totalCards: number;
activeDays: number;
totalTokensSeen: number;
totalLookupCount: number;
totalLookupHits: number;
totalYomitanLookupCount: number;
newWordsToday: number;
newWordsThisWeek: number;
} {
@@ -556,18 +558,30 @@ export function getQueryHints(db: DatabaseSync): {
.prepare(
`
SELECT
COALESCE(SUM(COALESCE(t.tokens_seen, s.tokens_seen, 0)), 0) AS totalTokensSeen,
COALESCE(SUM(COALESCE(t.lookup_count, s.lookup_count, 0)), 0) AS totalLookupCount,
COALESCE(SUM(COALESCE(t.lookup_hits, s.lookup_hits, 0)), 0) AS totalLookupHits
COALESCE(SUM(COALESCE(t.lookup_hits, s.lookup_hits, 0)), 0) AS totalLookupHits,
COALESCE(SUM(COALESCE(t.yomitan_lookup_count, s.yomitan_lookup_count, 0)), 0) AS totalYomitanLookupCount
FROM imm_sessions s
LEFT JOIN (
SELECT session_id, MAX(lookup_count) AS lookup_count, MAX(lookup_hits) AS lookup_hits
SELECT
session_id,
MAX(tokens_seen) AS tokens_seen,
MAX(lookup_count) AS lookup_count,
MAX(lookup_hits) AS lookup_hits,
MAX(yomitan_lookup_count) AS yomitan_lookup_count
FROM imm_session_telemetry
GROUP BY session_id
) t ON t.session_id = s.session_id
WHERE s.ended_at_ms IS NOT NULL
`,
)
.get() as { totalLookupCount: number; totalLookupHits: number } | null;
.get() as {
totalTokensSeen: number;
totalLookupCount: number;
totalLookupHits: number;
totalYomitanLookupCount: number;
} | null;
return {
totalSessions,
@@ -579,8 +593,10 @@ export function getQueryHints(db: DatabaseSync): {
totalActiveMin,
totalCards,
activeDays,
totalTokensSeen: Number(lookupTotals?.totalTokensSeen ?? 0),
totalLookupCount: Number(lookupTotals?.totalLookupCount ?? 0),
totalLookupHits: Number(lookupTotals?.totalLookupHits ?? 0),
totalYomitanLookupCount: Number(lookupTotals?.totalYomitanLookupCount ?? 0),
...getNewWordCounts(db),
};
}
@@ -593,11 +609,20 @@ function getNewWordCounts(db: DatabaseSync): { newWordsToday: number; newWordsTh
const row = db
.prepare(
`
WITH headword_first_seen AS (
SELECT
headword,
MIN(first_seen) AS first_seen
FROM imm_words
WHERE first_seen IS NOT NULL
AND headword IS NOT NULL
AND headword != ''
GROUP BY headword
)
SELECT
COALESCE(SUM(CASE WHEN first_seen >= ? THEN 1 ELSE 0 END), 0) AS today,
COALESCE(SUM(CASE WHEN first_seen >= ? THEN 1 ELSE 0 END), 0) AS week
FROM imm_words
WHERE first_seen IS NOT NULL
FROM headword_first_seen
`,
)
.get(todayStartSec, weekAgoSec) as { today: number; week: number } | null;
@@ -793,7 +818,10 @@ function accumulatePoints(points: TrendChartPoint[]): TrendChartPoint[] {
}
function buildAggregatedTrendRows(rollups: ImmersionSessionRollupRow[]) {
const byKey = new Map<number, { activeMin: number; cards: number; words: number; sessions: number }>();
const byKey = new Map<
number,
{ activeMin: number; cards: number; words: number; sessions: number }
>();
for (const rollup of rollups) {
const existing = byKey.get(rollup.rollupDayOrMonth) ?? {
@@ -869,14 +897,8 @@ function buildLookupsPerHundredWords(sessions: TrendSessionMetricRow[]): TrendCh
for (const session of sessions) {
const epochDay = Math.floor(session.startedAtMs / 86_400_000);
lookupsByDay.set(
epochDay,
(lookupsByDay.get(epochDay) ?? 0) + session.yomitanLookupCount,
);
wordsByDay.set(
epochDay,
(wordsByDay.get(epochDay) ?? 0) + getTrendSessionWordCount(session),
);
lookupsByDay.set(epochDay, (lookupsByDay.get(epochDay) ?? 0) + session.yomitanLookupCount);
wordsByDay.set(epochDay, (wordsByDay.get(epochDay) ?? 0) + getTrendSessionWordCount(session));
}
return Array.from(lookupsByDay.entries())
@@ -980,8 +1002,13 @@ function buildCumulativePerAnime(points: TrendPerAnimePoint[]): TrendPerAnimePoi
return result;
}
function getVideoAnimeTitleMap(db: DatabaseSync, videoIds: Array<number | null>): Map<number, string> {
const uniqueIds = [...new Set(videoIds.filter((value): value is number => typeof value === 'number'))];
function getVideoAnimeTitleMap(
db: DatabaseSync,
videoIds: Array<number | null>,
): Map<number, string> {
const uniqueIds = [
...new Set(videoIds.filter((value): value is number => typeof value === 'number')),
];
if (uniqueIds.length === 0) {
return new Map();
}
@@ -1002,7 +1029,10 @@ function getVideoAnimeTitleMap(db: DatabaseSync, videoIds: Array<number | null>)
return new Map(rows.map((row) => [row.videoId, row.animeTitle]));
}
function resolveVideoAnimeTitle(videoId: number | null, titlesByVideoId: Map<number, string>): string {
function resolveVideoAnimeTitle(
videoId: number | null,
titlesByVideoId: Map<number, string>,
): string {
if (videoId === null) {
return 'Unknown';
}
@@ -1062,7 +1092,9 @@ function buildEpisodesPerAnimeFromDailyRollups(
return result;
}
function buildEpisodesPerDayFromDailyRollups(rollups: ImmersionSessionRollupRow[]): TrendChartPoint[] {
function buildEpisodesPerDayFromDailyRollups(
rollups: ImmersionSessionRollupRow[],
): TrendChartPoint[] {
const byDay = new Map<number, Set<number>>();
for (const rollup of rollups) {
@@ -1122,7 +1154,9 @@ function buildNewWordsPerDay(db: DatabaseSync, cutoffMs: number | null): TrendCh
ORDER BY epochDay ASC
`);
const rows = (cutoffMs === null ? prepared.all() : prepared.all(Math.floor(cutoffMs / 1000))) as Array<{
const rows = (
cutoffMs === null ? prepared.all() : prepared.all(Math.floor(cutoffMs / 1000))
) as Array<{
epochDay: number;
wordCount: number;
}>;
@@ -1161,10 +1195,8 @@ export function getTrendsDashboard(
const animePerDay = {
episodes: buildEpisodesPerAnimeFromDailyRollups(dailyRollups, titlesByVideoId),
watchTime: buildPerAnimeFromDailyRollups(
dailyRollups,
titlesByVideoId,
(rollup) => Math.round(rollup.totalActiveMin),
watchTime: buildPerAnimeFromDailyRollups(dailyRollups, titlesByVideoId, (rollup) =>
Math.round(rollup.totalActiveMin),
),
cards: buildPerAnimeFromDailyRollups(
dailyRollups,
@@ -1176,10 +1208,7 @@ export function getTrendsDashboard(
titlesByVideoId,
(rollup) => rollup.totalTokensSeen,
),
lookups: buildPerAnimeFromSessions(
sessions,
(session) => session.yomitanLookupCount,
),
lookups: buildPerAnimeFromSessions(sessions, (session) => session.yomitanLookupCount),
lookupsPerHundred: buildLookupsPerHundredPerAnime(sessions),
};
@@ -1715,6 +1744,16 @@ export function getAnimeEpisodes(db: DatabaseSync, animeId: number): AnimeEpisod
v.parsed_season AS season,
v.parsed_episode AS episode,
v.duration_ms AS durationMs,
(
SELECT s_recent.ended_media_ms
FROM imm_sessions s_recent
WHERE s_recent.video_id = v.video_id
AND s_recent.ended_media_ms IS NOT NULL
ORDER BY
COALESCE(s_recent.ended_at_ms, s_recent.LAST_UPDATE_DATE, s_recent.started_at_ms) DESC,
s_recent.session_id DESC
LIMIT 1
) AS endedMediaMs,
v.watched AS watched,
COUNT(DISTINCT s.session_id) AS totalSessions,
COALESCE(SUM(COALESCE(asm.activeWatchedMs, s.active_watched_ms, 0)), 0) AS totalActiveMs,
@@ -1771,6 +1810,7 @@ export function getMediaDetail(db: DatabaseSync, videoId: number): MediaDetailRo
SELECT
v.video_id AS videoId,
v.canonical_title AS canonicalTitle,
v.anime_id AS animeId,
COALESCE(lm.total_sessions, 0) AS totalSessions,
COALESCE(lm.total_active_ms, 0) AS totalActiveMs,
COALESCE(lm.total_cards, 0) AS totalCards,
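The `headword_first_seen` CTE shown earlier in this file groups word rows by headword and takes the earliest `first_seen` before bucketing into today/this-week counts, so inflections like 知った and 知っている count once under 知る. A hedged in-memory sketch of the same semantics (the `WordRow` shape and helper name are assumptions for illustration):

```typescript
interface WordRow {
  headword: string | null;
  firstSeenSec: number | null;
}

// Group by headword, keep MIN(first_seen) per headword, then bucket each
// distinct headword by whether its earliest sighting falls on/after a cutoff.
function countNewHeadwords(
  rows: WordRow[],
  todayStartSec: number,
  weekAgoSec: number,
): { newWordsToday: number; newWordsThisWeek: number } {
  const earliest = new Map<string, number>();
  for (const row of rows) {
    if (!row.headword || row.firstSeenSec === null) continue;
    const prev = earliest.get(row.headword);
    earliest.set(
      row.headword,
      prev === undefined ? row.firstSeenSec : Math.min(prev, row.firstSeenSec),
    );
  }
  let newWordsToday = 0;
  let newWordsThisWeek = 0;
  for (const firstSeen of earliest.values()) {
    if (firstSeen >= todayStartSec) newWordsToday += 1;
    if (firstSeen >= weekAgoSec) newWordsThisWeek += 1;
  }
  return { newWordsToday, newWordsThisWeek };
}
```

This matches the test fixture above: two 知る inflections seen an hour ago plus 猫 seen two days ago yield one new word today and two this week.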

View File

@@ -39,6 +39,7 @@ export function finalizeSessionRecord(
SET
ended_at_ms = ?,
status = ?,
ended_media_ms = ?,
total_watched_ms = ?,
active_watched_ms = ?,
lines_seen = ?,
@@ -58,6 +59,7 @@ export function finalizeSessionRecord(
).run(
endedAtMs,
SESSION_STATUS_ENDED,
sessionState.lastMediaMs,
sessionState.totalWatchedMs,
sessionState.activeWatchedMs,
sessionState.linesSeen,

View File

@@ -740,6 +740,39 @@ test('start/finalize session updates ended_at and status', () => {
}
});
test('finalize session persists ended media position', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const videoId = getOrCreateVideoRecord(db, 'local:/tmp/slice-a-ended-media.mkv', {
canonicalTitle: 'Slice A Ended Media',
sourcePath: '/tmp/slice-a-ended-media.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const startedAtMs = 1_234_567_000;
const endedAtMs = startedAtMs + 8_500;
const { sessionId, state } = startSessionRecord(db, videoId, startedAtMs);
state.lastMediaMs = 91_000;
finalizeSessionRecord(db, state, endedAtMs);
const row = db
.prepare('SELECT ended_media_ms FROM imm_sessions WHERE session_id = ?')
.get(sessionId) as {
ended_media_ms: number | null;
} | null;
assert.ok(row);
assert.equal(row?.ended_media_ms, 91_000);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('executeQueuedWrite inserts event and telemetry rows', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);

View File

@@ -6,6 +6,7 @@ import type { QueuedWrite, VideoMetadata } from './types';
export interface TrackerPreparedStatements {
telemetryInsertStmt: ReturnType<DatabaseSync['prepare']>;
sessionCheckpointStmt: ReturnType<DatabaseSync['prepare']>;
eventInsertStmt: ReturnType<DatabaseSync['prepare']>;
wordUpsertStmt: ReturnType<DatabaseSync['prepare']>;
kanjiUpsertStmt: ReturnType<DatabaseSync['prepare']>;
@@ -569,6 +570,7 @@ export function ensureSchema(db: DatabaseSync): void {
status INTEGER NOT NULL,
locale_id INTEGER, target_lang_id INTEGER,
difficulty_tier INTEGER, subtitle_mode INTEGER,
ended_media_ms INTEGER,
total_watched_ms INTEGER NOT NULL DEFAULT 0,
active_watched_ms INTEGER NOT NULL DEFAULT 0,
lines_seen INTEGER NOT NULL DEFAULT 0,
@@ -1026,6 +1028,10 @@ export function ensureSchema(db: DatabaseSync): void {
`);
}
if (currentVersion?.schema_version && currentVersion.schema_version < 15) {
addColumnIfMissing(db, 'imm_sessions', 'ended_media_ms', 'INTEGER');
}
ensureLifetimeSummaryTables(db);
db.exec(`
@@ -1156,6 +1162,14 @@ export function createTrackerPreparedStatements(db: DatabaseSync): TrackerPrepar
?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?
)
`),
sessionCheckpointStmt: db.prepare(`
UPDATE imm_sessions
SET
ended_media_ms = ?,
LAST_UPDATE_DATE = ?
WHERE session_id = ?
AND ended_at_ms IS NULL
`),
eventInsertStmt: db.prepare(`
INSERT INTO imm_session_events (
session_id, ts_ms, event_type, line_index, segment_start_ms, segment_end_ms,
@@ -1290,6 +1304,7 @@ function incrementKanjiAggregate(
export function executeQueuedWrite(write: QueuedWrite, stmts: TrackerPreparedStatements): void {
if (write.kind === 'telemetry') {
const nowMs = Date.now();
stmts.telemetryInsertStmt.run(
write.sessionId,
write.sampleMs!,
@@ -1306,9 +1321,10 @@ export function executeQueuedWrite(write: QueuedWrite, stmts: TrackerPreparedSta
write.seekForwardCount!,
write.seekBackwardCount!,
write.mediaBufferEvents!,
Date.now(),
Date.now(),
nowMs,
nowMs,
);
stmts.sessionCheckpointStmt.run(write.lastMediaMs ?? null, nowMs, write.sessionId);
return;
}
if (write.kind === 'word') {

View File

@@ -1,4 +1,4 @@
export const SCHEMA_VERSION = 14;
export const SCHEMA_VERSION = 15;
export const DEFAULT_QUEUE_CAP = 1_000;
export const DEFAULT_BATCH_SIZE = 25;
export const DEFAULT_FLUSH_INTERVAL_MS = 500;
@@ -85,6 +85,7 @@ interface QueuedTelemetryWrite {
kind: 'telemetry';
sessionId: number;
sampleMs?: number;
lastMediaMs?: number | null;
totalWatchedMs?: number;
activeWatchedMs?: number;
linesSeen?: number;
@@ -234,6 +235,8 @@ export interface SessionSummaryQueryRow {
lookupCount: number;
lookupHits: number;
yomitanLookupCount: number;
knownWordsSeen?: number;
knownWordRate?: number;
}
export interface LifetimeGlobalRow {
@@ -422,6 +425,7 @@ export interface MediaLibraryRow {
export interface MediaDetailRow {
videoId: number;
canonicalTitle: string;
animeId: number | null;
totalSessions: number;
totalActiveMs: number;
totalCards: number;
@@ -480,6 +484,7 @@ export interface AnimeEpisodeRow {
season: number | null;
episode: number | null;
durationMs: number;
endedMediaMs: number | null;
watched: number;
totalSessions: number;
totalActiveMs: number;

View File

@@ -140,8 +140,10 @@ function createFakeImmersionTracker(
activeDays: 0,
totalEpisodesWatched: 0,
totalAnimeCompleted: 0,
totalTokensSeen: 0,
totalLookupCount: 0,
totalLookupHits: 0,
totalYomitanLookupCount: 0,
newWordsToday: 0,
newWordsThisWeek: 0,
}),
@@ -359,8 +361,10 @@ test('registerIpcHandlers returns empty stats overview shape without a tracker',
activeDays: 0,
totalEpisodesWatched: 0,
totalAnimeCompleted: 0,
totalTokensSeen: 0,
totalLookupCount: 0,
totalLookupHits: 0,
totalYomitanLookupCount: 0,
newWordsToday: 0,
newWordsThisWeek: 0,
},
@@ -397,8 +401,10 @@ test('registerIpcHandlers validates and clamps stats request limits', async () =
activeDays: 0,
totalEpisodesWatched: 0,
totalAnimeCompleted: 0,
totalTokensSeen: 0,
totalLookupCount: 0,
totalLookupHits: 0,
totalYomitanLookupCount: 0,
newWordsToday: 0,
newWordsThisWeek: 0,
}),
@@ -472,6 +478,12 @@ test('registerIpcHandlers requests the full timeline when no limit is provided',
activeDays: 0,
totalEpisodesWatched: 0,
totalAnimeCompleted: 0,
totalTokensSeen: 0,
totalLookupCount: 0,
totalLookupHits: 0,
totalYomitanLookupCount: 0,
newWordsToday: 0,
newWordsThisWeek: 0,
}),
getSessionTimeline: async (sessionId: number, limit?: number) => {
calls.push(['timeline', limit, sessionId]);

View File

@@ -85,6 +85,12 @@ export interface IpcServiceDeps {
activeDays: number;
totalEpisodesWatched: number;
totalAnimeCompleted: number;
totalTokensSeen: number;
totalLookupCount: number;
totalLookupHits: number;
totalYomitanLookupCount: number;
newWordsToday: number;
newWordsThisWeek: number;
}>;
getSessionTimeline: (sessionId: number, limit?: number) => Promise<unknown>;
getSessionEvents: (sessionId: number, limit?: number) => Promise<unknown>;
@@ -486,8 +492,10 @@ export function registerIpcHandlers(deps: IpcServiceDeps, ipc: IpcMainRegistrar
activeDays: 0,
totalEpisodesWatched: 0,
totalAnimeCompleted: 0,
totalTokensSeen: 0,
totalLookupCount: 0,
totalLookupHits: 0,
totalYomitanLookupCount: 0,
newWordsToday: 0,
newWordsThisWeek: 0,
},

View File

@@ -109,6 +109,60 @@ test('initializeOverlayRuntime starts Anki integration when ankiConnect.enabled
assert.equal(setIntegrationCalls, 1);
});
test('initializeOverlayRuntime can skip starting Anki integration transport', () => {
let createdIntegrations = 0;
let startedIntegrations = 0;
let setIntegrationCalls = 0;
initializeOverlayRuntime({
backendOverride: null,
createMainWindow: () => {},
registerGlobalShortcuts: () => {},
updateVisibleOverlayBounds: () => {},
isVisibleOverlayVisible: () => false,
updateVisibleOverlayVisibility: () => {},
getOverlayWindows: () => [],
syncOverlayShortcuts: () => {},
setWindowTracker: () => {},
getMpvSocketPath: () => '/tmp/mpv.sock',
createWindowTracker: () => null,
getResolvedConfig: () => ({
ankiConnect: { enabled: true } as never,
}),
getSubtitleTimingTracker: () => ({}),
getMpvClient: () => ({
send: () => {},
}),
getRuntimeOptionsManager: () => ({
getEffectiveAnkiConnectConfig: (config) => config as never,
}),
createAnkiIntegration: () => {
createdIntegrations += 1;
return {
start: () => {
startedIntegrations += 1;
},
};
},
setAnkiIntegration: () => {
setIntegrationCalls += 1;
},
showDesktopNotification: () => {},
createFieldGroupingCallback: () => async () => ({
keepNoteId: 7,
deleteNoteId: 8,
deleteDuplicate: false,
cancelled: false,
}),
getKnownWordCacheStatePath: () => '/tmp/known-words-cache.json',
shouldStartAnkiIntegration: () => false,
});
assert.equal(createdIntegrations, 1);
assert.equal(startedIntegrations, 0);
assert.equal(setIntegrationCalls, 1);
});
test('initializeOverlayRuntime merges shared ai config with Anki overrides', () => {
initializeOverlayRuntime({
backendOverride: null,

Some files were not shown because too many files have changed in this diff.