Mirror of https://github.com/ksyasuda/SubMiner.git (synced 2026-05-04 00:41:33 -07:00)
Fix PR #60 CI failures and address CodeRabbit feedback
- Restore raw tokensSeen for session summaries; keep filtered counts for aggregates/known-words
- Fix missing headword binding in insertFilteredWordOccurrence test fixture
- Page vocabulary stats until enough visible rows collected after post-query filtering
- Use lifetime totals for library/detail word counts instead of partial retained-session sums
- Prefer stored rollup totals over recomputed session counts when recomputation is partial
- Emit flat known-word timeline points for line indexes with no occurrences
- Roll back local excluded-word state and throw on failed persistence
- Reset initialized flag on load failure to allow retry on next call
- Restore globalThis.localStorage after each excluded-words test
@@ -0,0 +1,70 @@
---
id: TASK-330
title: Fix PR 60 CI failures and CodeRabbit feedback
status: Done
assignee:
  - codex
created_date: '2026-05-04 02:50'
updated_date: '2026-05-04 02:59'
labels:
  - ci
  - pr-review
dependencies: []
references:
  - 'https://github.com/ksyasuda/SubMiner/pull/60'
priority: high
---

## Description

<!-- SECTION:DESCRIPTION:BEGIN -->

Resolve failing GitHub Actions checks and actionable unresolved CodeRabbit review feedback on PR #60 (Persist stats exclusions in DB and fix word metrics filtering). Keep fixes scoped to the PR behavior and preserve existing project patterns.

<!-- SECTION:DESCRIPTION:END -->

## Acceptance Criteria

<!-- AC:BEGIN -->

- [x] #1 Failing GitHub Actions checks for PR #60 have an identified root cause and local fix.
- [x] #2 All actionable unresolved CodeRabbit review comments on PR #60 are addressed locally or explicitly documented as non-actionable.
- [x] #3 Relevant local verification passes for the changed code paths.
- [x] #4 Task notes summarize CI failure context, review-comment handling, and any residual verification gaps.

<!-- AC:END -->

## Implementation Plan

<!-- SECTION:PLAN:BEGIN -->

1. Resolve PR #60 context and inspect GitHub Actions failures with the gh-fix-ci workflow.
2. Fetch unresolved review threads with the gh-address-comments workflow, focusing on CodeRabbit actionable comments.
3. Read the touched files/tests around the failing paths and comments; identify root cause before edits.
4. Apply minimal fixes with regression coverage where appropriate.
5. Run targeted verification first, then broader repo gates as time permits.
6. Update Backlog notes/acceptance criteria with CI/comment outcomes and residual risks.

<!-- SECTION:PLAN:END -->

## Implementation Notes

<!-- SECTION:NOTES:BEGIN -->

Resolved PR #60 CI failure by restoring raw `tokensSeen` for session summaries while keeping filtered persisted word counts in aggregate/known-word paths. Addressed CodeRabbit feedback: fixed missing `headword` test fixture binding; paged vocabulary stats past filtered rows; preserved lifetime/rollup totals when retained-session recomputation is partial; emitted flat known-word timeline points for zero-visible-word line gaps; restored localStorage mocks; added rollback/retry behavior for excluded-word store persistence/initialization.
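
For reference, the vocabulary-stats fix follows the oversample-and-page pattern sketched below. The names `collectVisible`, `fetchPage`, `isVisible`, and `Row` are illustrative only; the real logic lives in `getVocabularyStats` and is shown in the diff further down.

```ts
// Sketch: paging past rows that are hidden by post-query filtering.
// A single LIMIT can return fewer visible rows than requested, so keep
// fetching pages until enough visible rows are collected or the data runs out.
type Row = { headword: string };

function collectVisible(
  fetchPage: (limit: number, offset: number) => Row[],
  isVisible: (row: Row) => boolean,
  limit: number,
  pageSize: number,
): Row[] {
  const visible: Row[] = [];
  let offset = 0;
  while (visible.length < limit) {
    const page = fetchPage(pageSize, offset);
    if (page.length === 0) break; // no more rows to scan
    visible.push(...page.filter(isVisible)); // keep rows that pass the filter
    offset += page.length; // advance past every scanned row, visible or not
  }
  return visible.slice(0, limit);
}
```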

<!-- SECTION:NOTES:END -->

## Final Summary

<!-- SECTION:FINAL_SUMMARY:BEGIN -->

Fixed the PR #60 CI failure and addressed actionable CodeRabbit feedback.

Key changes:

- Restored exact Yomitan token counts for session summary metrics while leaving filtered word counts for aggregate and known-word calculations.
- Fixed malformed query test fixtures by binding `headword` into `imm_words` inserts.
- Updated vocabulary stats to page until enough visible rows are collected after post-query filtering.
- Made library/detail/rollup read models preserve lifetime or stored rollup totals when retained-session recomputation is partial, including dashboard rollup-derived word metrics.
- Kept known-word timeline line positions stable by emitting flat points for missing line indexes.
- Made excluded-word persistence rollback on failed writes, allow initialization retries after transient load failures, and restored mocked `localStorage` in tests.
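
The excluded-word rollback in the last bullet boils down to the guard sketched below. It mirrors the `setExcludedWords` change in the diff further down, but this standalone version uses a simplified in-memory store and an injected `persist` callback, so treat it as illustrative rather than the exact module code.

```ts
// Optimistic update with rollback: apply the new list locally, try to persist,
// and restore the previous list only if no newer write happened in the meantime.
type ExcludedWord = { headword: string; word: string; reading: string };

let revision = 0;
let current: ExcludedWord[] = [];

function applyWords(words: ExcludedWord[]): void {
  current = [...words];
}

async function setExcludedWords(
  words: ExcludedWord[],
  persist: (words: ExcludedWord[]) => Promise<void>,
): Promise<void> {
  const previousWords = [...current];
  const previousRevision = revision;
  const writeRevision = previousRevision + 1;
  revision = writeRevision;
  applyWords(words); // optimistic local update
  try {
    await persist(words);
  } catch (error) {
    if (revision === writeRevision) {
      // No newer write superseded this one, so roll back the local state.
      revision = previousRevision;
      applyWords(previousWords);
    }
    throw error; // callers must still see the failure
  }
}
```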

Verification passed:

- `bun run typecheck`
- `bun run test:fast`
- `bun run test:env`
- `bun run build`
- `bun run test:smoke:dist`
- `bun run format:check:src`
- `git diff --check`

<!-- SECTION:FINAL_SUMMARY:END -->

@@ -463,7 +463,9 @@ describe('stats server API routes', () => {
     const res = await app.request('/api/stats/sessions/1/known-words-timeline');
     assert.equal(res.status, 200);
     assert.deepEqual(await res.json(), [
+      { linesSeen: 0, knownWordsSeen: 0, totalWordsSeen: 0 },
       { linesSeen: 1, knownWordsSeen: 2, totalWordsSeen: 2 },
+      { linesSeen: 2, knownWordsSeen: 2, totalWordsSeen: 2 },
       { linesSeen: 3, knownWordsSeen: 3, totalWordsSeen: 7 },
     ]);
   });
@@ -139,6 +139,7 @@ function insertFilteredWordOccurrence(
         RETURNING id`,
     )
     .get(
+      headword,
       word,
       options.reading ?? '',
       options.pos1 ?? '名詞',
@@ -1371,7 +1372,7 @@ test('word-count read models use filtered persisted occurrences with raw fallbac
     const summaries = getSessionSummaries(db, 10);
     assert.equal(
       summaries.find((session) => session.sessionId === withOccurrences.sessionId)?.tokensSeen,
-      2,
+      5,
     );
     assert.equal(
       summaries.find((session) => session.sessionId === fallbackOnly.sessionId)?.tokensSeen,
@@ -1382,8 +1383,69 @@ test('word-count read models use filtered persisted occurrences with raw fallbac
     assert.equal(hints.totalTokensSeen, 9);

     const rollup = getDailyRollups(db, 1)[0]!;
-    assert.equal(rollup.totalTokensSeen, 9);
-    assert.equal(rollup.tokensPerMin, 9);
+    assert.equal(rollup.totalTokensSeen, 12);
+    assert.equal(rollup.tokensPerMin, 12);
+  } finally {
+    db.close();
+    cleanupDbPath(dbPath);
+  }
+});
+
+test('rollups keep persisted totals when retained-session word counts are partial', () => {
+  const dbPath = makeDbPath();
+  const db = new Database(dbPath);
+
+  try {
+    ensureSchema(db);
+    const videoId = getOrCreateVideoRecord(db, 'local:/tmp/partial-rollup.mkv', {
+      canonicalTitle: 'Partial Rollup',
+      sourcePath: '/tmp/partial-rollup.mkv',
+      sourceUrl: null,
+      sourceType: SOURCE_TYPE_LOCAL,
+    });
+
+    const startedAtMs = 1_700_000_000_000;
+    const { sessionId } = startSessionRecord(db, videoId, startedAtMs);
+    db.prepare(
+      `
+      UPDATE imm_sessions
+      SET ended_at_ms = ?, status = 2, active_watched_ms = ?, tokens_seen = ?
+      WHERE session_id = ?
+      `,
+    ).run(startedAtMs + 60_000, 60_000, 4, sessionId);
+
+    insertFilteredWordOccurrence(db, {
+      sessionId,
+      videoId,
+      occurrenceCount: 4,
+      startedAtMs,
+    });
+
+    const rollupDay = Math.floor(startedAtMs / 86_400_000);
+    db.prepare(
+      `
+      INSERT INTO imm_daily_rollups (
+        rollup_day, video_id, total_sessions, total_active_min, total_lines_seen,
+        total_tokens_seen, total_cards
+      ) VALUES (?, ?, ?, ?, ?, ?, ?)
+      `,
+    ).run(rollupDay, videoId, 2, 2, 8, 12, 0);
+    db.prepare(
+      `
+      INSERT INTO imm_monthly_rollups (
+        rollup_month, video_id, total_sessions, total_active_min, total_lines_seen,
+        total_tokens_seen, total_cards
+      ) VALUES (?, ?, ?, ?, ?, ?, ?)
+      `,
+    ).run(202311, videoId, 2, 3, 12, 18, 0);
+
+    const daily = getDailyRollups(db, 1)[0]!;
+    assert.equal(daily.totalTokensSeen, 12);
+    assert.equal(daily.tokensPerMin, 6);
+
+    const monthly = getMonthlyRollups(db, 1)[0]!;
+    assert.equal(monthly.totalTokensSeen, 18);
+    assert.equal(monthly.tokensPerMin, 6);
   } finally {
     db.close();
     cleanupDbPath(dbPath);
@@ -1639,6 +1701,41 @@ test('getVocabularyStats filters rows that fail tokenizer vocabulary rules', ()
   }
 });

+test('getVocabularyStats pages past hidden rows until enough visible rows are collected', () => {
+  const dbPath = makeDbPath();
+  const db = new Database(dbPath);
+
+  try {
+    ensureSchema(db);
+    const stmts = createTrackerPreparedStatements(db);
+
+    for (let i = 0; i < 105; i += 1) {
+      stmts.wordUpsertStmt.run(
+        `助詞${i}`,
+        `助詞${i}`,
+        `じょし${i}`,
+        'particle',
+        '助詞',
+        '格助詞',
+        '',
+        10_000 - i,
+        1_000,
+      );
+    }
+    stmts.wordUpsertStmt.run('猫', '猫', 'ねこ', 'noun', '名詞', '一般', '', 1, 1_000);
+
+    const rows = getVocabularyStats(db, 1);
+
+    assert.deepEqual(
+      rows.map((row) => row.headword),
+      ['猫'],
+    );
+  } finally {
+    db.close();
+    cleanupDbPath(dbPath);
+  }
+});
+
 test('getVocabularyStats returns empty array when no words exist', () => {
   const dbPath = makeDbPath();
   const db = new Database(dbPath);
@@ -2863,6 +2960,96 @@ test('anime library and detail still return lifetime rows without retained sessi
   }
 });

+test('anime and media detail prefer lifetime totals over partial retained sessions', () => {
+  const dbPath = makeDbPath();
+  const db = new Database(dbPath);
+
+  try {
+    ensureSchema(db);
+
+    const animeId = getOrCreateAnimeRecord(db, {
+      parsedTitle: 'Partial History Anime',
+      canonicalTitle: 'Partial History Anime',
+      anilistId: null,
+      titleRomaji: null,
+      titleEnglish: null,
+      titleNative: null,
+      metadataJson: null,
+    });
+    const videoId = getOrCreateVideoRecord(db, 'local:/tmp/partial-history.mkv', {
+      canonicalTitle: 'Partial History Episode',
+      sourcePath: '/tmp/partial-history.mkv',
+      sourceUrl: null,
+      sourceType: SOURCE_TYPE_LOCAL,
+    });
+    linkVideoToAnimeRecord(db, videoId, {
+      animeId,
+      parsedBasename: 'Partial History Episode',
+      parsedTitle: 'Partial History Anime',
+      parsedSeason: 1,
+      parsedEpisode: 1,
+      parserSource: 'fallback',
+      parserConfidence: 1,
+      parseMetadataJson: null,
+    });
+
+    const startedAtMs = 1_700_000_000_000;
+    const { sessionId } = startSessionRecord(db, videoId, startedAtMs);
+    db.prepare(
+      `
+      UPDATE imm_sessions
+      SET ended_at_ms = ?, status = 2, active_watched_ms = ?, tokens_seen = ?
+      WHERE session_id = ?
+      `,
+    ).run(startedAtMs + 30_000, 30_000, 10, sessionId);
+
+    const now = Date.now();
+    db.prepare(
+      `
+      INSERT INTO imm_lifetime_anime (
+        anime_id,
+        total_sessions,
+        total_active_ms,
+        total_cards,
+        total_lines_seen,
+        total_tokens_seen,
+        episodes_started,
+        episodes_completed,
+        first_watched_ms,
+        last_watched_ms,
+        CREATED_DATE,
+        LAST_UPDATE_DATE
+      ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+      `,
+    ).run(animeId, 3, 90_000, 1, 12, 100, 1, 0, startedAtMs, startedAtMs, now, now);
+    db.prepare(
+      `
+      INSERT INTO imm_lifetime_media (
+        video_id,
+        total_sessions,
+        total_active_ms,
+        total_cards,
+        total_lines_seen,
+        total_tokens_seen,
+        completed,
+        first_watched_ms,
+        last_watched_ms,
+        CREATED_DATE,
+        LAST_UPDATE_DATE
+      ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+      `,
+    ).run(videoId, 3, 90_000, 1, 12, 100, 0, startedAtMs, startedAtMs, now, now);
+
+    assert.equal(getAnimeLibrary(db)[0]?.totalTokensSeen, 100);
+    assert.equal(getAnimeDetail(db, animeId)?.totalTokensSeen, 100);
+    assert.equal(getMediaLibrary(db)[0]?.totalTokensSeen, 100);
+    assert.equal(getMediaDetail(db, videoId)?.totalTokensSeen, 100);
+  } finally {
+    db.close();
+    cleanupDbPath(dbPath);
+  }
+});
+
 test('media library and detail queries read lifetime totals', () => {
   const dbPath = makeDbPath();
   const db = new Database(dbPath);
@@ -4209,7 +4396,7 @@ test('getTrendsDashboard librarySummary returns null lookupsPerHundred when word
   }
 });

-test('getTrendsDashboard word metrics use filtered persisted occurrences', () => {
+test('getTrendsDashboard rollup word metrics keep persisted totals over partial session counts', () => {
   const dbPath = makeDbPath();
   const db = new Database(dbPath);

@@ -4308,16 +4495,16 @@ test('getTrendsDashboard word metrics use filtered persisted occurrences', () =>
     const dashboard = getTrendsDashboard(db, 'all', 'day');
     assert.deepEqual(
       dashboard.activity.words.map((point) => point.value),
-      [2, 3],
+      [10, 20],
     );
     assert.deepEqual(
       dashboard.progress.words.map((point) => point.value),
-      [2, 5],
+      [10, 30],
     );
     assert.equal(dashboard.ratios.lookupsPerHundred[0]?.value, 200);
-    assert.equal(dashboard.librarySummary[0]?.words, 5);
-    assert.equal(dashboard.librarySummary[0]?.lookupsPerHundred, 200);
-    assert.equal(dashboard.animeCumulative.words.at(-1)?.value, 5);
+    assert.equal(dashboard.librarySummary[0]?.words, 30);
+    assert.equal(dashboard.librarySummary[0]?.lookupsPerHundred, 33.3);
+    assert.equal(dashboard.animeCumulative.words.at(-1)?.value, 30);
   } finally {
     db.close();
     cleanupDbPath(dbPath);
@@ -53,7 +53,7 @@ export function getVocabularyStats(
   limit = 100,
   excludePos?: string[],
 ): VocabularyStatsRow[] {
-  const queryLimit = Math.max(
+  const pageSize = Math.max(
     limit,
     limit * VOCABULARY_STATS_FILTER_OVERSAMPLE_FACTOR,
     limit + VOCABULARY_STATS_FILTER_OVERSAMPLE_MIN,
@@ -74,12 +74,20 @@ export function getVocabularyStats(
     LEFT JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id AND sl.anime_id IS NOT NULL
     ${whereClause ? whereClause.replace('part_of_speech', 'w.part_of_speech') : ''}
     GROUP BY w.id
-    ORDER BY w.frequency DESC LIMIT ?
+    ORDER BY w.frequency DESC LIMIT ? OFFSET ?
   `);
-  const params = hasExclude ? [...excludePos, queryLimit] : [queryLimit];
-  return (stmt.all(...params) as VocabularyStatsRow[])
-    .filter(isVocabularyStatsRowVisible)
-    .slice(0, limit);
+  const visibleRows: VocabularyStatsRow[] = [];
+  let offset = 0;
+
+  while (visibleRows.length < limit) {
+    const params = hasExclude ? [...excludePos, pageSize, offset] : [pageSize, offset];
+    const page = stmt.all(...params) as VocabularyStatsRow[];
+    if (page.length === 0) break;
+    visibleRows.push(...page.filter(isVocabularyStatsRowVisible));
+    offset += page.length;
+  }
+
+  return visibleRows.slice(0, limit);
 }

 export function getStatsExcludedWords(db: DatabaseSync): StatsExcludedWordRow[] {
@@ -27,20 +27,9 @@ import {
 } from './query-shared';

 export function getAnimeLibrary(db: DatabaseSync): AnimeLibraryRow[] {
-  const wordsExpr = sessionDisplayWordsExpr('s', 'swc');
   const rows = db
     .prepare(
       `
-      ${SESSION_WORD_COUNTS_CTE},
-      anime_word_counts AS (
-        SELECT v.anime_id AS animeId, SUM(${wordsExpr}) AS totalTokensSeen
-        FROM imm_sessions s
-        JOIN imm_videos v ON v.video_id = s.video_id
-        LEFT JOIN session_word_counts swc ON swc.sessionId = s.session_id
-        WHERE s.ended_at_ms IS NOT NULL
-          AND v.anime_id IS NOT NULL
-        GROUP BY v.anime_id
-      )
       SELECT
         a.anime_id AS animeId,
         a.canonical_title AS canonicalTitle,
@@ -48,14 +37,13 @@ export function getAnimeLibrary(db: DatabaseSync): AnimeLibraryRow[] {
         COALESCE(lm.total_sessions, 0) AS totalSessions,
         COALESCE(lm.total_active_ms, 0) AS totalActiveMs,
         COALESCE(lm.total_cards, 0) AS totalCards,
-        COALESCE(awc.totalTokensSeen, lm.total_tokens_seen, 0) AS totalTokensSeen,
+        COALESCE(lm.total_tokens_seen, 0) AS totalTokensSeen,
         COUNT(DISTINCT v.video_id) AS episodeCount,
         a.episodes_total AS episodesTotal,
         COALESCE(lm.last_watched_ms, 0) AS lastWatchedMs
       FROM imm_anime a
       JOIN imm_lifetime_anime lm ON lm.anime_id = a.anime_id
       JOIN imm_videos v ON v.anime_id = a.anime_id
-      LEFT JOIN anime_word_counts awc ON awc.animeId = a.anime_id
       GROUP BY a.anime_id
       ORDER BY totalActiveMs DESC, lm.last_watched_ms DESC, canonicalTitle ASC
       `,
@@ -68,7 +56,6 @@ export function getAnimeLibrary(db: DatabaseSync): AnimeLibraryRow[] {
 }

 export function getAnimeDetail(db: DatabaseSync, animeId: number): AnimeDetailRow | null {
-  const wordsExpr = sessionDisplayWordsExpr('s', 'swc', 'COALESCE(asm.tokensSeen, s.tokens_seen)');
   const row = db
     .prepare(
       `
@@ -84,10 +71,7 @@ export function getAnimeDetail(db: DatabaseSync, animeId: number): AnimeDetailRo
         COALESCE(lm.total_sessions, 0) AS totalSessions,
         COALESCE(lm.total_active_ms, 0) AS totalActiveMs,
         COALESCE(lm.total_cards, 0) AS totalCards,
-        CASE
-          WHEN COUNT(s.session_id) > 0 THEN COALESCE(SUM(${wordsExpr}), 0)
-          ELSE COALESCE(lm.total_tokens_seen, 0)
-        END AS totalTokensSeen,
+        COALESCE(lm.total_tokens_seen, 0) AS totalTokensSeen,
         COALESCE(lm.total_lines_seen, 0) AS totalLinesSeen,
         COALESCE(SUM(COALESCE(asm.lookupCount, s.lookup_count, 0)), 0) AS totalLookupCount,
         COALESCE(SUM(COALESCE(asm.lookupHits, s.lookup_hits, 0)), 0) AS totalLookupHits,
@@ -99,7 +83,6 @@ export function getAnimeDetail(db: DatabaseSync, animeId: number): AnimeDetailRo
       JOIN imm_videos v ON v.anime_id = a.anime_id
       LEFT JOIN imm_sessions s ON s.video_id = v.video_id
       LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
-      LEFT JOIN session_word_counts swc ON swc.sessionId = s.session_id
       WHERE a.anime_id = ?
       GROUP BY a.anime_id
       `,
@@ -219,25 +202,16 @@ export function getAnimeEpisodes(db: DatabaseSync, animeId: number): AnimeEpisod
 }

 export function getMediaLibrary(db: DatabaseSync): MediaLibraryRow[] {
-  const wordsExpr = sessionDisplayWordsExpr('s', 'swc');
   const rows = db
     .prepare(
       `
-      ${SESSION_WORD_COUNTS_CTE},
-      media_word_counts AS (
-        SELECT s.video_id AS videoId, SUM(${wordsExpr}) AS totalTokensSeen
-        FROM imm_sessions s
-        LEFT JOIN session_word_counts swc ON swc.sessionId = s.session_id
-        WHERE s.ended_at_ms IS NOT NULL
-        GROUP BY s.video_id
-      )
       SELECT
         v.video_id AS videoId,
         v.canonical_title AS canonicalTitle,
         COALESCE(lm.total_sessions, 0) AS totalSessions,
         COALESCE(lm.total_active_ms, 0) AS totalActiveMs,
         COALESCE(lm.total_cards, 0) AS totalCards,
-        COALESCE(mwc.totalTokensSeen, lm.total_tokens_seen, 0) AS totalTokensSeen,
+        COALESCE(lm.total_tokens_seen, 0) AS totalTokensSeen,
         COALESCE(lm.last_watched_ms, 0) AS lastWatchedMs,
         yv.youtube_video_id AS youtubeVideoId,
         yv.video_url AS videoUrl,
@@ -256,7 +230,6 @@ export function getMediaLibrary(db: DatabaseSync): MediaLibraryRow[] {
         END AS hasCoverArt
       FROM imm_videos v
       JOIN imm_lifetime_media lm ON lm.video_id = v.video_id
-      LEFT JOIN media_word_counts mwc ON mwc.videoId = v.video_id
       LEFT JOIN imm_media_art ma ON ma.video_id = v.video_id
       LEFT JOIN imm_youtube_videos yv ON yv.video_id = v.video_id
       ORDER BY lm.last_watched_ms DESC
@@ -270,7 +243,6 @@ export function getMediaLibrary(db: DatabaseSync): MediaLibraryRow[] {
 }

 export function getMediaDetail(db: DatabaseSync, videoId: number): MediaDetailRow | null {
-  const wordsExpr = sessionDisplayWordsExpr('s', 'swc', 'COALESCE(asm.tokensSeen, s.tokens_seen)');
   return db
     .prepare(
       `
@@ -282,10 +254,7 @@ export function getMediaDetail(db: DatabaseSync, videoId: number): MediaDetailRo
         COALESCE(lm.total_sessions, 0) AS totalSessions,
         COALESCE(lm.total_active_ms, 0) AS totalActiveMs,
         COALESCE(lm.total_cards, 0) AS totalCards,
-        CASE
-          WHEN COUNT(s.session_id) > 0 THEN COALESCE(SUM(${wordsExpr}), 0)
-          ELSE COALESCE(lm.total_tokens_seen, 0)
-        END AS totalTokensSeen,
+        COALESCE(lm.total_tokens_seen, 0) AS totalTokensSeen,
         COALESCE(lm.total_lines_seen, 0) AS totalLinesSeen,
         COALESCE(SUM(COALESCE(asm.lookupCount, s.lookup_count, 0)), 0) AS totalLookupCount,
         COALESCE(SUM(COALESCE(asm.lookupHits, s.lookup_hits, 0)), 0) AS totalLookupHits,
@@ -306,7 +275,6 @@ export function getMediaDetail(db: DatabaseSync, videoId: number): MediaDetailRo
       LEFT JOIN imm_youtube_videos yv ON yv.video_id = v.video_id
       LEFT JOIN imm_sessions s ON s.video_id = v.video_id
       LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
-      LEFT JOIN session_word_counts swc ON swc.sessionId = s.session_id
       WHERE v.video_id = ?
       GROUP BY v.video_id
       `,
@@ -398,11 +366,19 @@ export function getMediaDailyRollups(
         total_sessions AS totalSessions,
         total_active_min AS totalActiveMin,
         total_lines_seen AS totalLinesSeen,
-        COALESCE(dwc.totalTokensSeen, total_tokens_seen) AS totalTokensSeen,
+        CASE
+          WHEN dwc.totalTokensSeen IS NOT NULL AND dwc.totalTokensSeen > total_tokens_seen THEN dwc.totalTokensSeen
+          ELSE total_tokens_seen
+        END AS totalTokensSeen,
         total_cards AS totalCards,
         cards_per_hour AS cardsPerHour,
         CASE
-          WHEN total_active_min > 0 THEN COALESCE(dwc.totalTokensSeen, total_tokens_seen) * 1.0 / total_active_min
+          WHEN total_active_min > 0 THEN (
+            CASE
+              WHEN dwc.totalTokensSeen IS NOT NULL AND dwc.totalTokensSeen > total_tokens_seen THEN dwc.totalTokensSeen
+              ELSE total_tokens_seen
+            END
+          ) * 1.0 / total_active_min
           ELSE NULL
         END AS tokensPerMin,
         lookup_hit_rate AS lookupHitRate
@@ -454,11 +430,19 @@ export function getAnimeDailyRollups(
       SELECT r.rollup_day AS rollupDayOrMonth, r.video_id AS videoId,
         r.total_sessions AS totalSessions, r.total_active_min AS totalActiveMin,
         r.total_lines_seen AS totalLinesSeen,
-        COALESCE(dwc.totalTokensSeen, r.total_tokens_seen) AS totalTokensSeen,
+        CASE
+          WHEN dwc.totalTokensSeen IS NOT NULL AND dwc.totalTokensSeen > r.total_tokens_seen THEN dwc.totalTokensSeen
+          ELSE r.total_tokens_seen
+        END AS totalTokensSeen,
         r.total_cards AS totalCards,
         r.cards_per_hour AS cardsPerHour,
         CASE
-          WHEN r.total_active_min > 0 THEN COALESCE(dwc.totalTokensSeen, r.total_tokens_seen) * 1.0 / r.total_active_min
+          WHEN r.total_active_min > 0 THEN (
+            CASE
+              WHEN dwc.totalTokensSeen IS NOT NULL AND dwc.totalTokensSeen > r.total_tokens_seen THEN dwc.totalTokensSeen
+              ELSE r.total_tokens_seen
+            END
+          ) * 1.0 / r.total_active_min
           ELSE NULL
         END AS tokensPerMin,
         r.lookup_hit_rate AS lookupHitRate
@@ -17,7 +17,6 @@ import {
 } from './query-shared';

 export function getSessionSummaries(db: DatabaseSync, limit = 50): SessionSummaryQueryRow[] {
-  const wordsExpr = sessionDisplayWordsExpr('s', 'swc', 'COALESCE(asm.tokensSeen, s.tokens_seen)');
   const prepared = db.prepare(`
     ${ACTIVE_SESSION_METRICS_CTE}
     SELECT
@@ -31,14 +30,13 @@ export function getSessionSummaries(db: DatabaseSync, limit = 50): SessionSummar
       COALESCE(asm.totalWatchedMs, s.total_watched_ms, 0) AS totalWatchedMs,
       COALESCE(asm.activeWatchedMs, s.active_watched_ms, 0) AS activeWatchedMs,
       COALESCE(asm.linesSeen, s.lines_seen, 0) AS linesSeen,
-      ${wordsExpr} AS tokensSeen,
+      COALESCE(asm.tokensSeen, s.tokens_seen, 0) AS tokensSeen,
       COALESCE(asm.cardsMined, s.cards_mined, 0) AS cardsMined,
       COALESCE(asm.lookupCount, s.lookup_count, 0) AS lookupCount,
       COALESCE(asm.lookupHits, s.lookup_hits, 0) AS lookupHits,
       COALESCE(asm.yomitanLookupCount, s.yomitan_lookup_count, 0) AS yomitanLookupCount
     FROM imm_sessions s
     LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
-    LEFT JOIN session_word_counts swc ON swc.sessionId = s.session_id
     LEFT JOIN imm_videos v ON v.video_id = s.video_id
     LEFT JOIN imm_anime a ON a.anime_id = v.anime_id
     ORDER BY s.started_at_ms DESC
@@ -382,11 +380,19 @@ export function getDailyRollups(db: DatabaseSync, limit = 60): ImmersionSessionR
       r.total_sessions AS totalSessions,
       r.total_active_min AS totalActiveMin,
       r.total_lines_seen AS totalLinesSeen,
-      COALESCE(dwc.totalTokensSeen, r.total_tokens_seen) AS totalTokensSeen,
+      CASE
+        WHEN dwc.totalTokensSeen IS NOT NULL AND dwc.totalTokensSeen > r.total_tokens_seen THEN dwc.totalTokensSeen
+        ELSE r.total_tokens_seen
+      END AS totalTokensSeen,
       r.total_cards AS totalCards,
       r.cards_per_hour AS cardsPerHour,
       CASE
-        WHEN r.total_active_min > 0 THEN COALESCE(dwc.totalTokensSeen, r.total_tokens_seen) * 1.0 / r.total_active_min
+        WHEN r.total_active_min > 0 THEN (
+          CASE
+            WHEN dwc.totalTokensSeen IS NOT NULL AND dwc.totalTokensSeen > r.total_tokens_seen THEN dwc.totalTokensSeen
+            ELSE r.total_tokens_seen
+          END
+        ) * 1.0 / r.total_active_min
         ELSE NULL
       END AS tokensPerMin,
       r.lookup_hit_rate AS lookupHitRate
@@ -432,14 +438,22 @@ export function getMonthlyRollups(db: DatabaseSync, limit = 24): ImmersionSessio
       r.total_sessions AS totalSessions,
       r.total_active_min AS totalActiveMin,
       r.total_lines_seen AS totalLinesSeen,
-      COALESCE(mwc.totalTokensSeen, r.total_tokens_seen) AS totalTokensSeen,
+      CASE
+        WHEN mwc.totalTokensSeen IS NOT NULL AND mwc.totalTokensSeen > r.total_tokens_seen THEN mwc.totalTokensSeen
+        ELSE r.total_tokens_seen
+      END AS totalTokensSeen,
       r.total_cards AS totalCards,
       CASE
         WHEN r.total_active_min > 0 THEN (r.total_cards * 60.0) / r.total_active_min
         ELSE NULL
       END AS cardsPerHour,
       CASE
-        WHEN r.total_active_min > 0 THEN COALESCE(mwc.totalTokensSeen, r.total_tokens_seen) * 1.0 / r.total_active_min
+        WHEN r.total_active_min > 0 THEN (
+          CASE
+            WHEN mwc.totalTokensSeen IS NOT NULL AND mwc.totalTokensSeen > r.total_tokens_seen THEN mwc.totalTokensSeen
+            ELSE r.total_tokens_seen
+          END
+        ) * 1.0 / r.total_active_min
         ELSE NULL
       END AS tokensPerMin,
       NULL AS lookupHitRate
@@ -172,9 +172,7 @@ test('stats excluded words are replaced and read from sqlite storage', () => {
     ]);

     replaceStatsExcludedWords(db, [{ headword: '犬', word: '犬', reading: 'いぬ' }]);
-    assert.deepEqual(getStatsExcludedWords(db), [
-      { headword: '犬', word: '犬', reading: 'いぬ' },
-    ]);
+    assert.deepEqual(getStatsExcludedWords(db), [{ headword: '犬', word: '犬', reading: 'いぬ' }]);
   } finally {
     db.close();
     cleanupDbPath(dbPath);
@@ -452,7 +452,7 @@ export function createStatsApp(
         }
       }

-      const sortedLineIndices = [...totalLineGroups.keys()].sort((a, b) => a - b);
+      const maxLineIndex = Math.max(...totalLineGroups.keys(), ...knownLineGroups.keys(), -1);
       let knownWordsSeen = 0;
       let totalWordsSeen = 0;
       const knownByLinesSeen: Array<{
@@ -461,9 +461,9 @@ export function createStatsApp(
         totalWordsSeen: number;
       }> = [];

-      for (const lineIdx of sortedLineIndices) {
+      for (let lineIdx = 0; lineIdx <= maxLineIndex; lineIdx += 1) {
         knownWordsSeen += knownLineGroups.get(lineIdx) ?? 0;
-        totalWordsSeen += totalLineGroups.get(lineIdx)!;
+        totalWordsSeen += totalLineGroups.get(lineIdx) ?? 0;
         knownByLinesSeen.push({
           linesSeen: lineIdx,
           knownWordsSeen,
@@ -11,6 +11,7 @@ import { BASE_URL } from '../lib/api-client';
 const STORAGE_KEY = 'subminer-excluded-words';

 function installLocalStorage(initial: Record<string, string> = {}) {
+  const previous = Object.getOwnPropertyDescriptor(globalThis, 'localStorage');
   const values = new Map(Object.entries(initial));
   Object.defineProperty(globalThis, 'localStorage', {
     configurable: true,
@@ -20,13 +21,24 @@ function installLocalStorage(initial: Record<string, string> = {}) {
       removeItem: (key: string) => values.delete(key),
     },
   });
-  return values;
+  return {
+    values,
+    restore: () => {
+      if (previous) {
+        Object.defineProperty(globalThis, 'localStorage', previous);
+      } else {
+        delete (globalThis as { localStorage?: unknown }).localStorage;
+      }
+    },
+  };
 }

 test('initializeExcludedWordsStore seeds empty database exclusions from localStorage', async () => {
   resetExcludedWordsStoreForTests();
   const localRows = [{ headword: '猫', word: '猫', reading: 'ねこ' }];
-  const storage = installLocalStorage({ [STORAGE_KEY]: JSON.stringify(localRows) });
+  const { values: storage, restore } = installLocalStorage({
+    [STORAGE_KEY]: JSON.stringify(localRows),
+  });
   const originalFetch = globalThis.fetch;
   const requests: Array<{ url: string; method: string; body: string }> = [];
   globalThis.fetch = (async (input: RequestInfo | URL, init?: RequestInit) => {
@@ -59,13 +71,14 @@ test('initializeExcludedWordsStore seeds empty database exclusions from localSto
     assert.equal(storage.get(STORAGE_KEY), JSON.stringify(localRows));
   } finally {
     globalThis.fetch = originalFetch;
+    restore();
     resetExcludedWordsStoreForTests();
   }
 });

 test('setExcludedWords updates the database-backed exclusion list', async () => {
   resetExcludedWordsStoreForTests();
-  const storage = installLocalStorage();
+  const { values: storage, restore } = installLocalStorage();
   const originalFetch = globalThis.fetch;
   let seenBody = '';
   globalThis.fetch = (async (_input: RequestInfo | URL, init?: RequestInit) => {
@@ -82,6 +95,66 @@ test('setExcludedWords updates the database-backed exclusion list', async () =>
     assert.equal(storage.get(STORAGE_KEY), JSON.stringify(rows));
   } finally {
     globalThis.fetch = originalFetch;
+    restore();
+    resetExcludedWordsStoreForTests();
+  }
+});
+
+test('setExcludedWords rolls back local state when persistence fails', async () => {
+  resetExcludedWordsStoreForTests();
+  const previousRows = [{ headword: '猫', word: '猫', reading: 'ねこ' }];
+  const nextRows = [{ headword: 'する', word: 'する', reading: 'する' }];
+  const { values: storage, restore } = installLocalStorage({
+    [STORAGE_KEY]: JSON.stringify(previousRows),
+  });
+  const originalFetch = globalThis.fetch;
+  const originalConsoleError = console.error;
+  console.error = () => {};
+  globalThis.fetch = (async () => {
+    return new Response('failed', { status: 500 });
+  }) as typeof globalThis.fetch;
+
+  try {
+    await assert.rejects(() => setExcludedWords(nextRows), /Stats API error: 500/);
+
+    assert.deepEqual(getExcludedWordsSnapshot(), previousRows);
+    assert.equal(storage.get(STORAGE_KEY), JSON.stringify(previousRows));
+  } finally {
+    globalThis.fetch = originalFetch;
+    console.error = originalConsoleError;
+    restore();
+    resetExcludedWordsStoreForTests();
+  }
+});
+
+test('initializeExcludedWordsStore retries after transient database load failures', async () => {
+  resetExcludedWordsStoreForTests();
+  const { restore } = installLocalStorage();
+  const originalFetch = globalThis.fetch;
+  const originalConsoleError = console.error;
+  console.error = () => {};
+  let calls = 0;
+  globalThis.fetch = (async () => {
+    calls += 1;
+    if (calls === 1) {
+      return new Response('failed', { status: 500 });
+    }
+    return new Response(JSON.stringify([{ headword: '猫', word: '猫', reading: 'ねこ' }]), {
+      status: 200,
+      headers: { 'Content-Type': 'application/json' },
+    });
+  }) as typeof globalThis.fetch;
+
+  try {
+    await initializeExcludedWordsStore();
+    await initializeExcludedWordsStore();
+
+    assert.equal(calls, 2);
+    assert.deepEqual(getExcludedWordsSnapshot(), [{ headword: '猫', word: '猫', reading: 'ねこ' }]);
+  } finally {
+    globalThis.fetch = originalFetch;
+    console.error = originalConsoleError;
+    restore();
     resetExcludedWordsStoreForTests();
   }
 });
|||||||
@@ -64,12 +64,20 @@ export function getExcludedWordsSnapshot(): ExcludedWord[] {
|
|||||||
}
|
}
|
||||||
|
|
||||||
export async function setExcludedWords(words: ExcludedWord[]): Promise<void> {
|
export async function setExcludedWords(words: ExcludedWord[]): Promise<void> {
|
||||||
revision += 1;
|
const previousWords = [...load()];
|
||||||
|
const previousRevision = revision;
|
||||||
|
const writeRevision = previousRevision + 1;
|
||||||
|
revision = writeRevision;
|
||||||
applyWords(words);
|
applyWords(words);
|
||||||
try {
|
try {
|
||||||
await apiClient.setExcludedWords(words);
|
await apiClient.setExcludedWords(words);
|
||||||
} catch (error) {
|
} catch (error) {
|
||||||
|
if (revision === writeRevision) {
|
||||||
|
revision = previousRevision;
|
||||||
|
applyWords(previousWords);
|
||||||
|
}
|
||||||
console.error('Failed to persist excluded words to stats database', error);
|
console.error('Failed to persist excluded words to stats database', error);
|
||||||
|
throw error;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -82,17 +90,25 @@ export function initializeExcludedWordsStore(): Promise<void> {
   try {
     dbWords = await apiClient.getExcludedWords();
   } catch (error) {
+    initialized = null;
     console.error('Failed to load excluded words from stats database', error);
     return;
   }

-  if (revision !== startRevision) return;
+  if (revision !== startRevision) {
+    initialized = null;
+    return;
+  }
   if (dbWords.length > 0) {
     applyWords(dbWords);
     return;
   }
   if (localWords.length > 0) {
-    await setExcludedWords(localWords);
+    try {
+      await setExcludedWords(localWords);
+    } catch {
+      initialized = null;
+    }
     return;
   }
   applyWords([]);