Refactor startup, queries, and workflow into focused modules (#36)

* chore(backlog): add mining workflow milestone and tasks

* refactor: split character dictionary runtime modules

* refactor: split shared type entrypoints

* refactor: use bun serve for stats server

* feat: add repo-local subminer workflow plugin

* fix: add stats server node fallback

* refactor: split immersion tracker query modules

* chore: update backlog task records

* refactor: migrate shared type imports

* refactor: compose startup and setup window wiring

* Add backlog tasks and launcher time helper tests

- Track follow-up cleanup work in Backlog.md
- Replace Date.now usage with shared nowMs helper
- Add launcher args/parser and core regression tests
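The shared time helper can be sketched as a thin injectable wrapper around `Date.now` — a minimal sketch, assuming a test-only setter (the setter name here is illustrative, not the project's actual API):

```typescript
// Hypothetical sketch of a shared nowMs helper: production code calls
// nowMs() instead of Date.now(), so tests can swap the provider.
let nowProvider: () => number = () => Date.now();

export function nowMs(): number {
  return nowProvider();
}

// Test-only escape hatch (name is illustrative).
export function setNowProviderForTests(provider: () => number): void {
  nowProvider = provider;
}
```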

* test: increase launcher test timeout for CI stability

* fix: address CodeRabbit review feedback

* refactor(main): extract remaining inline runtime logic from main

* chore(backlog): update task notes and changelog fragment

* refactor: split main boot phases

* test: stabilize bun coverage reporting

* Switch plausible endpoint and harden coverage lane parsing

- update docs-site tracking to use the Plausible capture endpoint
- tighten coverage lane argument and LCOV parsing checks
- make script entrypoint use CommonJS main guard

* Restrict docs analytics and build coverage input

- limit Plausible init to docs.subminer.moe
- build Yomitan before src coverage lane

* fix(ci): normalize Windows shortcut paths for cross-platform tests

* Fix verification and immersion-tracker grouping

- isolate verifier artifacts and lease handling
- switch weekly/monthly tracker cutoffs to calendar boundaries
- tighten boot lifecycle and zip writer tests
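Calendar-boundary cutoffs (as opposed to rolling 7/30-day windows) might look like the following sketch — helper names, local-time handling, and the Sunday week start are assumptions, not the tracker's real implementation:

```typescript
// Illustrative calendar-boundary cutoffs (local time).
function monthStartMs(nowMs: number): number {
  const d = new Date(nowMs);
  return new Date(d.getFullYear(), d.getMonth(), 1).getTime();
}

function weekStartMs(nowMs: number): number {
  const d = new Date(nowMs);
  const start = new Date(d.getFullYear(), d.getMonth(), d.getDate());
  start.setDate(start.getDate() - start.getDay()); // back to Sunday
  return start.getTime();
}
```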

* fix: resolve CI type failures in boot and immersion query tests

* fix: remove strict spread usage in Date mocks

* fix: use explicit super args for MockDate constructors

* Factor out mock date helper in tracker tests

- reuse a shared `withMockDate` helper for date-sensitive query tests
- make monthly rollup assertions key off `videoId` instead of row order

* fix: use variadic array type for MockDate constructor args

TS2367: the fixed-length tuple type made the `args.length === 0` comparison unreachable.
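A sketch of the fix, with assumed values — a variadic array type keeps the zero-argument branch reachable, whereas a fixed-length tuple type made TypeScript flag the length check as TS2367:

```typescript
const FROZEN_NOW_MS = 1_700_000_000_000; // arbitrary frozen "now" for tests

class MockDate extends Date {
  // Array type (not a fixed-length tuple), so args.length === 0 is reachable.
  constructor(...args: Array<number | string>) {
    if (args.length === 0) {
      super(FROZEN_NOW_MS);
    } else if (args.length === 1) {
      super(args[0]);
    } else {
      super(args[0] as number, args[1] as number, (args[2] as number) ?? 1);
    }
  }
}
```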

* refactor: remove unused createMainBootRuntimes/Handlers aggregate functions

These functions were never called by production code — main.ts imports
the individual composeBoot* re-exports directly.

* refactor: remove boot re-export alias layer

main.ts now imports directly from the runtime/composers and runtime/domains
modules, eliminating the intermediate boot/ indirection.

* refactor: consolidate 3 near-identical setup window factories

Extract shared createSetupWindowHandler with a config parameter.
Public API unchanged.
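The consolidation pattern can be sketched like this — the config shape and factory names are illustrative, not the real API:

```typescript
// One shared, config-driven handler factory replaces three near-identical ones.
interface SetupWindowConfig {
  route: string;
  width: number;
  height: number;
}

type OpenWindow = (config: SetupWindowConfig) => void;

function createSetupWindowHandler(config: SetupWindowConfig) {
  return (openWindow: OpenWindow) => openWindow(config);
}

// The former factories survive as thin wrappers, so callers see no change.
const createAnkiSetupHandler = () =>
  createSetupWindowHandler({ route: '/setup/anki', width: 900, height: 640 });
```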

* refactor: parameterize duplicated getAffected*Ids query helpers

Four structurally identical functions collapsed into two parameterized
helpers while preserving the existing public API.
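A hypothetical shape of this consolidation — since the duplicated helpers differed only in table and column names, a parameterized builder covers them (the table and column names below are illustrative, not the real schema):

```typescript
interface AffectedIdsSpec {
  table: string;
  idColumn: string;
}

function buildAffectedIdsQuery(spec: AffectedIdsSpec, sinceMs: number) {
  return {
    sql: `SELECT DISTINCT ${spec.idColumn} FROM ${spec.table} WHERE updated_at_ms >= ?`,
    params: [sinceMs],
  };
}

// Former public helpers become one-line wrappers, preserving the API.
const getAffectedVideoIdsQuery = (sinceMs: number) =>
  buildAffectedIdsQuery({ table: 'watch_sessions', idColumn: 'video_id' }, sinceMs);
```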

* refactor: inline identity composers (stats-startup, overlay-window)

composeStatsStartupRuntime was a no-op that returned its input.
composeOverlayWindowHandlers was a 1-line delegation.
Both removed in favor of direct usage.

* chore: remove unused token/queue file path constants from main.ts

* fix: replace any types in boot services with proper signatures

* refactor: deduplicate ensureDir into shared/fs-utils

5 copies of mkdir-p-if-not-exists consolidated into one shared module
with ensureDir (directory path) and ensureDirForFile (file path) variants.
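A minimal sketch of the two variants (sync versions shown for brevity; the project's actual implementation may be async):

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

export function ensureDir(dirPath: string): void {
  // recursive: true makes this a no-op when the directory already exists.
  fs.mkdirSync(dirPath, { recursive: true });
}

export function ensureDirForFile(filePath: string): void {
  ensureDir(path.dirname(filePath));
}
```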

* fix: tighten type safety in boot services

- Add AppLifecycleShape and OverlayModalInputStateShape constraints
  so TAppLifecycleApp and TOverlayModalInputState generics are bounded
- Remove unsafe `as { handleModalInputStateChange? }` cast — now
  directly callable via the constraint
- Use `satisfies AppLifecycleShape` for structural validation on the
  appLifecycleApp object literal
- Document Electron App.on incompatibility with simple signatures
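The `satisfies` pattern from the last two bullets can be illustrated as follows (TypeScript 4.9+; the shape below is hypothetical, not the project's real interface):

```typescript
interface AppLifecycleShape {
  handleModalInputStateChange?: (open: boolean) => void;
}

const appLifecycleApp = {
  handleModalInputStateChange: (open: boolean) => {
    void open;
  },
  // `satisfies` validates the literal against the shape without widening its
  // type, so the method stays directly callable and no `as` cast is needed.
} satisfies AppLifecycleShape;
```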

* refactor: inline subtitle-prefetch-runtime-composer

The composer was a pure pass-through that destructured an object and
reassembled it with the same fields. Inlined at the call site.

* chore: consolidate duplicate import paths in main.ts

* test: extract mpv composer test fixture factory to reduce duplication

* test: add behavioral assertions to composer tests

Upgrade 8 composer test files from shape-only typeof checks to behavioral
assertions that invoke returned handlers and verify injected dependencies are
actually called, following the mpv-runtime-composer pattern.
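The difference between the two styles can be illustrated like this (the composer and dependency names are made up):

```typescript
function composeOverlayHandlers(deps: { openOverlay: () => void }) {
  return { onToggle: () => deps.openOverlay() };
}

let openCalls = 0;
const handlers = composeOverlayHandlers({ openOverlay: () => { openCalls += 1; } });

// Shape-only check (old style): passes even if the handler body is wrong.
if (typeof handlers.onToggle !== 'function') throw new Error('missing handler');

// Behavioral check (new style): invoking the handler must reach the
// injected dependency.
handlers.onToggle();
if (openCalls !== 1) throw new Error('injected dependency was not called');
```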

* refactor: normalize import extensions in query modules

* refactor: consolidate toDbMs into query-shared.ts

* refactor: remove Node.js fallback from stats-server, use Bun only

* Fix monthly rollup test expectations

- Preserve multi-arg Date construction in mock helper
- Align rollup assertions with the correct videoId

* fix: address PR 36 CodeRabbit follow-ups

* fix: harden coverage lane cleanup

* fix(stats): fallback to node server when Bun.serve unavailable
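A hedged sketch of such a runtime check — the handler wiring and return types here are illustrative, not the stats server's real code:

```typescript
import http from 'node:http';

// Bun injects a global at runtime; declare it so the check typechecks in Node.
declare const Bun:
  | { serve?: (options: { port: number; fetch: (req: Request) => Response }) => unknown }
  | undefined;

function startStatsServer(port: number): unknown {
  if (typeof Bun !== 'undefined' && typeof Bun.serve === 'function') {
    // Preferred path: Bun's built-in HTTP server.
    return Bun.serve({ port, fetch: () => new Response('ok') });
  }
  // Fallback for runtimes where Bun.serve is unavailable.
  const server = http.createServer((_req, res) => res.end('ok'));
  server.listen(port);
  return server;
}
```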

* fix(ci): restore coverage lane compatibility

* chore(backlog): close TASK-242

* fix: address latest CodeRabbit review round

* fix: guard disabled immersion retention windows

* fix: migrate discord rpc wrapper

* fix(ci): add changelog fragment for PR 36

* fix: stabilize macOS visible overlay toggle

* fix: pin installed mpv plugin to current binary

* fix: strip inline subtitle markup from sidebar cues

* fix(renderer): restore subtitle sidebar mpv passthrough

* feat(discord): add configurable presence style presets

Replace the hardcoded "Mining and crafting (Anki cards)" meme message
with a preset system. New `discordPresence.presenceStyle` option
supports four presets: "default" (clean bilingual), "meme" (the OG
Minecraft joke), "japanese" (fully JP), and "minimal". The default
preset shows "Sentence Mining" with 日本語学習中 as the small image
tooltip. Existing users can set presenceStyle to "meme" to keep the
old behavior.
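A sketch of the preset lookup — the "default" and "meme" strings come from the description above; the remaining strings and the shape are assumptions:

```typescript
type PresenceStyle = 'default' | 'meme' | 'japanese' | 'minimal';

interface PresenceText {
  details: string;
  smallImageText?: string;
}

const PRESENCE_PRESETS: Record<PresenceStyle, PresenceText> = {
  default: { details: 'Sentence Mining', smallImageText: '日本語学習中' },
  meme: { details: 'Mining and crafting (Anki cards)' },
  japanese: { details: '日本語学習中' },
  minimal: { details: 'subminer' }, // assumed text for the minimal preset
};

function resolvePresence(style: string | undefined): PresenceText {
  // Unknown or missing styles fall back to the new default preset, which is
  // how existing configs without presenceStyle pick up the clean text.
  return PRESENCE_PRESETS[style as PresenceStyle] ?? PRESENCE_PRESETS.default;
}
```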

* fix: finalize v0.10.0 release prep

* docs: add subtitle sidebar guide and release note

* chore(backlog): mark docs task done

* fix: lazily resolve youtube playback socket path

* chore(release): build v0.10.0 changelog

* Revert "chore(release): build v0.10.0 changelog"

This reverts commit 9741c0f020.
Committed by GitHub on 2026-03-29 16:16:29 -07:00
Commit 35adf8299c (parent 2d4f2d1139)
297 changed files with 17,713 additions and 9,147 deletions


@@ -1,4 +1,4 @@
-import type { AnkiConnectConfig } from './types';
+import type { AnkiConnectConfig } from './types/anki';
type NoteFieldValue = { value?: string } | string | null | undefined;
@@ -8,7 +8,9 @@ function normalizeFieldName(value: string | null | undefined): string | null {
return trimmed.length > 0 ? trimmed : null;
}
-export function getConfiguredWordFieldName(config?: Pick<AnkiConnectConfig, 'fields'> | null): string {
+export function getConfiguredWordFieldName(
+  config?: Pick<AnkiConnectConfig, 'fields'> | null,
+): string {
return normalizeFieldName(config?.fields?.word) ?? 'Expression';
}


@@ -21,15 +21,15 @@ import { SubtitleTimingTracker } from './subtitle-timing-tracker';
import { MediaGenerator } from './media-generator';
import path from 'path';
import {
-  AiConfig,
   AnkiConnectConfig,
   KikuDuplicateCardInfo,
   KikuFieldGroupingChoice,
   KikuMergePreviewResponse,
-  MpvClient,
   NotificationOptions,
-  NPlusOneMatchMode,
-} from './types';
+} from './types/anki';
+import { AiConfig } from './types/integrations';
+import { MpvClient } from './types/runtime';
+import { NPlusOneMatchMode } from './types/subtitle';
import { DEFAULT_ANKI_CONNECT_CONFIG } from './config';
import {
getConfiguredWordFieldCandidates,
@@ -212,10 +212,7 @@ export class AnkiIntegration {
try {
this.recordCardsMinedCallback(count, noteIds);
} catch (error) {
-      log.warn(
-        `recordCardsMined callback failed during ${source}:`,
-        (error as Error).message,
-      );
+      log.warn(`recordCardsMined callback failed during ${source}:`, (error as Error).message);
}
}


@@ -1,4 +1,4 @@
-import type { AiConfig } from '../types';
+import type { AiConfig } from '../types/integrations';
import { requestAiChatCompletion } from '../ai/client';
const DEFAULT_AI_SYSTEM_PROMPT =


@@ -4,10 +4,10 @@ import test from 'node:test';
import { resolveAnimatedImageLeadInSeconds, extractSoundFilenames } from './animated-image-sync';
test('extractSoundFilenames returns ordered sound filenames from an Anki field value', () => {
-  assert.deepEqual(
-    extractSoundFilenames('before [sound:word.mp3] middle [sound:alt.ogg] after'),
-    ['word.mp3', 'alt.ogg'],
-  );
+  assert.deepEqual(extractSoundFilenames('before [sound:word.mp3] middle [sound:alt.ogg] after'), [
+    'word.mp3',
+    'alt.ogg',
+  ]);
});
test('resolveAnimatedImageLeadInSeconds sums configured word audio durations for animated images', async () => {


@@ -4,7 +4,7 @@ import * as os from 'node:os';
import * as path from 'node:path';
import { DEFAULT_ANKI_CONNECT_CONFIG } from '../config';
-import type { AnkiConnectConfig } from '../types';
+import type { AnkiConnectConfig } from '../types/anki';
type NoteInfoLike = {
noteId: number;
@@ -36,9 +36,7 @@ export function extractSoundFilenames(value: string): string[] {
}
function shouldSyncAnimatedImageToWordAudio(config: Pick<AnkiConnectConfig, 'media'>): boolean {
-  return (
-    config.media?.imageType === 'avif' && config.media?.syncAnimatedImageToWordAudio !== false
-  );
+  return config.media?.imageType === 'avif' && config.media?.syncAnimatedImageToWordAudio !== false;
}
export async function probeAudioDurationSeconds(


@@ -2,7 +2,7 @@ import assert from 'node:assert/strict';
import test from 'node:test';
import { CardCreationService } from './card-creation';
-import type { AnkiConnectConfig } from '../types';
+import type { AnkiConnectConfig } from '../types/anki';
test('CardCreationService counts locally created sentence cards', async () => {
const minedCards: Array<{ count: number; noteIds?: number[] }> = [];


@@ -3,10 +3,11 @@ import {
getConfiguredWordFieldName,
getPreferredWordValueFromExtractedFields,
} from '../anki-field-config';
-import { AiConfig, AnkiConnectConfig } from '../types';
+import { AnkiConnectConfig } from '../types/anki';
import { createLogger } from '../logger';
import { SubtitleTimingTracker } from '../subtitle-timing-tracker';
-import { MpvClient } from '../types';
+import { AiConfig } from '../types/integrations';
+import { MpvClient } from '../types/runtime';
import { resolveSentenceBackText } from './ai';
import { resolveMediaGenerationInputPath } from './media-source';


@@ -179,7 +179,10 @@ function getDuplicateSourceCandidates(
const fallbackFieldName = configuredFieldNames[0]?.toLowerCase() || 'expression';
const fallbackKey = `${fallbackFieldName}:${normalizeDuplicateValue(trimmedFallback)}`;
if (!dedupeKey.has(fallbackKey)) {
-      candidates.push({ fieldName: configuredFieldNames[0] || 'Expression', value: trimmedFallback });
+      candidates.push({
+        fieldName: configuredFieldNames[0] || 'Expression',
+        value: trimmedFallback,
+      });
}
}


@@ -0,0 +1,201 @@
import assert from 'node:assert/strict';
import test from 'node:test';
import {
FieldGroupingMergeCollaborator,
type FieldGroupingMergeNoteInfo,
} from './field-grouping-merge';
import type { AnkiConnectConfig } from '../types/anki';
function resolveFieldName(availableFieldNames: string[], preferredName: string): string | null {
return (
availableFieldNames.find(
(name) => name === preferredName || name.toLowerCase() === preferredName.toLowerCase(),
) ?? null
);
}
function createCollaborator(
options: {
config?: Partial<AnkiConnectConfig>;
currentSubtitleText?: string;
generatedMedia?: {
audioField?: string;
audioValue?: string;
imageField?: string;
imageValue?: string;
miscInfoValue?: string;
};
warnings?: Array<{ fieldName: string; reason: string; detail?: string }>;
} = {},
) {
const warnings = options.warnings ?? [];
const config = {
fields: {
sentence: 'Sentence',
audio: 'ExpressionAudio',
image: 'Picture',
miscInfo: 'MiscInfo',
...(options.config?.fields ?? {}),
},
...(options.config ?? {}),
} as AnkiConnectConfig;
return {
collaborator: new FieldGroupingMergeCollaborator({
getConfig: () => config,
getEffectiveSentenceCardConfig: () => ({
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
}),
getCurrentSubtitleText: () => options.currentSubtitleText,
resolveFieldName,
resolveNoteFieldName: (noteInfo, preferredName) => {
if (!preferredName) return null;
return resolveFieldName(Object.keys(noteInfo.fields), preferredName);
},
extractFields: (fields) =>
Object.fromEntries(
Object.entries(fields).map(([key, value]) => [key.toLowerCase(), value.value || '']),
),
processSentence: (mpvSentence) => `${mpvSentence}::processed`,
generateMediaForMerge: async () => options.generatedMedia ?? {},
warnFieldParseOnce: (fieldName, reason, detail) => {
warnings.push({ fieldName, reason, detail });
},
}),
warnings,
};
}
function makeNote(noteId: number, fields: Record<string, string>): FieldGroupingMergeNoteInfo {
return {
noteId,
fields: Object.fromEntries(Object.entries(fields).map(([key, value]) => [key, { value }])),
};
}
test('getGroupableFieldNames includes configured fields without duplicating ExpressionAudio', () => {
const { collaborator } = createCollaborator({
config: {
fields: {
image: 'Illustration',
sentence: 'SentenceText',
audio: 'ExpressionAudio',
miscInfo: 'ExtraInfo',
},
},
});
assert.deepEqual(collaborator.getGroupableFieldNames(), [
'Sentence',
'SentenceAudio',
'Picture',
'Illustration',
'SentenceText',
'ExtraInfo',
'SentenceFurigana',
]);
});
test('computeFieldGroupingMergedFields syncs a custom audio field from merged SentenceAudio', async () => {
const { collaborator } = createCollaborator({
config: {
fields: {
audio: 'CustomAudio',
},
},
});
const merged = await collaborator.computeFieldGroupingMergedFields(
1,
2,
makeNote(1, {
SentenceAudio: '[sound:keep.mp3]',
CustomAudio: '[sound:stale.mp3]',
}),
makeNote(2, {
SentenceAudio: '[sound:new.mp3]',
}),
false,
);
assert.equal(
merged.SentenceAudio,
'<span data-group-id="1">[sound:keep.mp3]</span><span data-group-id="2">[sound:new.mp3]</span>',
);
assert.equal(merged.CustomAudio, merged.SentenceAudio);
});
test('computeFieldGroupingMergedFields keeps strict fields when source is empty and warns on malformed spans', async () => {
const { collaborator, warnings } = createCollaborator({
currentSubtitleText: 'subtitle line',
});
const merged = await collaborator.computeFieldGroupingMergedFields(
3,
4,
makeNote(3, {
Sentence: '<span data-group-id="abc">keep sentence</span>',
SentenceAudio: '',
}),
makeNote(4, {
Sentence: 'source sentence',
SentenceAudio: '[sound:source.mp3]',
}),
false,
);
assert.equal(
merged.Sentence,
'<span data-group-id="3"><span data-group-id="abc">keep sentence</span></span><span data-group-id="4">source sentence</span>',
);
assert.equal(merged.SentenceAudio, '<span data-group-id="4">[sound:source.mp3]</span>');
assert.equal(warnings.length, 4);
assert.deepEqual(
warnings.map((entry) => entry.reason),
['invalid-group-id', 'no-usable-span-entries', 'invalid-group-id', 'no-usable-span-entries'],
);
});
test('computeFieldGroupingMergedFields uses generated media only when includeGeneratedMedia is true', async () => {
const generatedMedia = {
audioField: 'SentenceAudio',
audioValue: '[sound:generated.mp3]',
imageField: 'Picture',
imageValue: '<img src="generated.png">',
miscInfoValue: 'generated misc',
};
const { collaborator: withoutGenerated } = createCollaborator({ generatedMedia });
const { collaborator: withGenerated } = createCollaborator({ generatedMedia });
const keep = makeNote(10, {
SentenceAudio: '',
Picture: '',
MiscInfo: '',
});
const source = makeNote(11, {
SentenceAudio: '',
Picture: '',
MiscInfo: '',
});
const without = await withoutGenerated.computeFieldGroupingMergedFields(
10,
11,
keep,
source,
false,
);
const withMedia = await withGenerated.computeFieldGroupingMergedFields(
10,
11,
keep,
source,
true,
);
assert.deepEqual(without, {});
assert.equal(withMedia.SentenceAudio, '<span data-group-id="11">[sound:generated.mp3]</span>');
assert.equal(withMedia.Picture, '<img data-group-id="11" src="generated.png">');
assert.equal(withMedia.MiscInfo, '<span data-group-id="11">generated misc</span>');
});


@@ -1,4 +1,4 @@
-import { AnkiConnectConfig } from '../types';
+import { AnkiConnectConfig } from '../types/anki';
import { getConfiguredWordFieldName } from '../anki-field-config';
interface FieldGroupingMergeMedia {


@@ -1,7 +1,7 @@
import test from 'node:test';
import assert from 'node:assert/strict';
import { FieldGroupingWorkflow } from './field-grouping-workflow';
-import type { KikuDuplicateCardInfo, KikuFieldGroupingChoice } from '../types';
+import type { KikuDuplicateCardInfo, KikuFieldGroupingChoice } from '../types/anki';
type NoteInfo = {
noteId: number;


@@ -1,4 +1,4 @@
-import { KikuDuplicateCardInfo, KikuFieldGroupingChoice } from '../types';
+import { KikuDuplicateCardInfo, KikuFieldGroupingChoice } from '../types/anki';
import { getPreferredWordValueFromExtractedFields } from '../anki-field-config';
export interface FieldGroupingWorkflowNoteInfo {
@@ -181,7 +181,8 @@ export class FieldGroupingWorkflow {
return {
noteId: noteInfo.noteId,
expression:
-      getPreferredWordValueFromExtractedFields(fields, this.deps.getConfig()) || fallbackExpression,
+      getPreferredWordValueFromExtractedFields(fields, this.deps.getConfig()) ||
+      fallbackExpression,
sentencePreview: this.deps.truncateSentence(
fields[(sentenceCardConfig.sentenceField || 'sentence').toLowerCase()] ||
(isOriginal ? '' : this.deps.getCurrentSubtitleText() || ''),


@@ -0,0 +1,411 @@
import assert from 'node:assert/strict';
import test from 'node:test';
import { FieldGroupingService } from './field-grouping';
import type { KikuMergePreviewResponse } from '../types/anki';
type NoteInfo = {
noteId: number;
fields: Record<string, { value: string }>;
};
function createHarness(
options: {
kikuEnabled?: boolean;
kikuFieldGrouping?: 'auto' | 'manual' | 'disabled';
deck?: string;
noteIds?: number[];
notesInfo?: NoteInfo[][];
duplicateNoteId?: number | null;
hasAllConfiguredFields?: boolean;
manualHandled?: boolean;
expression?: string | null;
currentSentenceImageField?: string | undefined;
onProcessNewCard?: (noteId: number, options?: { skipKikuFieldGrouping?: boolean }) => void;
} = {},
) {
const calls: string[] = [];
const findNotesQueries: Array<{ query: string; maxRetries?: number }> = [];
const noteInfoRequests: number[][] = [];
const duplicateRequests: Array<{ expression: string; excludeNoteId: number }> = [];
const processCalls: Array<{ noteId: number; options?: { skipKikuFieldGrouping?: boolean } }> = [];
const autoCalls: Array<{ originalNoteId: number; newNoteId: number; expression: string }> = [];
const manualCalls: Array<{ originalNoteId: number; newNoteId: number; expression: string }> = [];
const noteInfoQueue = [...(options.notesInfo ?? [])];
const notes = options.noteIds ?? [2];
const service = new FieldGroupingService({
getConfig: () => ({
fields: {
word: 'Expression',
},
}),
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: options.kikuEnabled ?? true,
kikuFieldGrouping: options.kikuFieldGrouping ?? 'auto',
kikuDeleteDuplicateInAuto: true,
}),
isUpdateInProgress: () => false,
getDeck: options.deck ? () => options.deck : undefined,
withUpdateProgress: async (_message, action) => {
calls.push('withUpdateProgress');
return action();
},
showOsdNotification: (text) => {
calls.push(`osd:${text}`);
},
findNotes: async (query, findNotesOptions) => {
findNotesQueries.push({ query, maxRetries: findNotesOptions?.maxRetries });
return notes;
},
notesInfo: async (noteIds) => {
noteInfoRequests.push([...noteIds]);
return noteInfoQueue.shift() ?? [];
},
extractFields: (fields) =>
Object.fromEntries(
Object.entries(fields).map(([key, value]) => [key.toLowerCase(), value.value || '']),
),
findDuplicateNote: async (expression, excludeNoteId) => {
duplicateRequests.push({ expression, excludeNoteId });
return options.duplicateNoteId ?? 99;
},
hasAllConfiguredFields: () => options.hasAllConfiguredFields ?? true,
processNewCard: async (noteId, processOptions) => {
processCalls.push({ noteId, options: processOptions });
options.onProcessNewCard?.(noteId, processOptions);
},
getSentenceCardImageFieldName: () => options.currentSentenceImageField,
resolveFieldName: (availableFieldNames, preferredName) =>
availableFieldNames.find(
(name) => name === preferredName || name.toLowerCase() === preferredName.toLowerCase(),
) ?? null,
computeFieldGroupingMergedFields: async () => ({}),
getNoteFieldMap: (noteInfo) =>
Object.fromEntries(
Object.entries(noteInfo.fields).map(([key, value]) => [key, value.value || '']),
),
handleFieldGroupingAuto: async (originalNoteId, newNoteId, _newNoteInfo, expression) => {
autoCalls.push({ originalNoteId, newNoteId, expression });
},
handleFieldGroupingManual: async (originalNoteId, newNoteId, _newNoteInfo, expression) => {
manualCalls.push({ originalNoteId, newNoteId, expression });
return options.manualHandled ?? true;
},
});
return {
service,
calls,
findNotesQueries,
noteInfoRequests,
duplicateRequests,
processCalls,
autoCalls,
manualCalls,
};
}
type SuccessfulPreview = KikuMergePreviewResponse & {
ok: true;
compact: {
action: {
keepNoteId: number;
deleteNoteId: number;
deleteDuplicate: boolean;
};
mergedFields: Record<string, string>;
};
full: {
result: {
wouldDeleteNoteId: number | null;
};
};
};
test('triggerFieldGroupingForLastAddedCard stops when kiku mode is disabled', async () => {
const harness = createHarness({ kikuEnabled: false });
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(harness.calls, ['osd:Kiku mode is not enabled']);
assert.equal(harness.findNotesQueries.length, 0);
});
test('triggerFieldGroupingForLastAddedCard stops when field grouping is disabled', async () => {
const harness = createHarness({ kikuFieldGrouping: 'disabled' });
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(harness.calls, ['osd:Kiku field grouping is disabled']);
assert.equal(harness.findNotesQueries.length, 0);
});
test('triggerFieldGroupingForLastAddedCard stops when an update is already in progress', async () => {
const service = new FieldGroupingService({
getConfig: () => ({ fields: { word: 'Expression' } }),
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: true,
kikuFieldGrouping: 'auto',
kikuDeleteDuplicateInAuto: true,
}),
isUpdateInProgress: () => true,
withUpdateProgress: async () => {
throw new Error('should not be called');
},
showOsdNotification: () => {},
findNotes: async () => [],
notesInfo: async () => [],
extractFields: () => ({}),
findDuplicateNote: async () => null,
hasAllConfiguredFields: () => true,
processNewCard: async () => {},
getSentenceCardImageFieldName: () => undefined,
resolveFieldName: () => null,
computeFieldGroupingMergedFields: async () => ({}),
getNoteFieldMap: () => ({}),
handleFieldGroupingAuto: async () => {},
handleFieldGroupingManual: async () => true,
});
await service.triggerFieldGroupingForLastAddedCard();
});
test('triggerFieldGroupingForLastAddedCard finds the newest note and hands off to auto grouping', async () => {
const harness = createHarness({
deck: 'Anime Deck',
noteIds: [3, 7, 5],
notesInfo: [
[
{
noteId: 7,
fields: {
Expression: { value: 'word-7' },
Sentence: { value: 'line-7' },
},
},
],
[
{
noteId: 7,
fields: {
Expression: { value: 'word-7' },
Sentence: { value: 'line-7' },
},
},
],
],
duplicateNoteId: 42,
hasAllConfiguredFields: true,
});
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(harness.findNotesQueries, [
{ query: '"deck:Anime Deck" added:1', maxRetries: undefined },
]);
assert.deepEqual(harness.noteInfoRequests, [[7], [7]]);
assert.deepEqual(harness.duplicateRequests, [{ expression: 'word-7', excludeNoteId: 7 }]);
assert.deepEqual(harness.autoCalls, [
{
originalNoteId: 42,
newNoteId: 7,
expression: 'word-7',
},
]);
});
test('triggerFieldGroupingForLastAddedCard refreshes the card when configured fields are missing', async () => {
const processCalls: Array<{ noteId: number; options?: { skipKikuFieldGrouping?: boolean } }> = [];
const harness = createHarness({
noteIds: [11],
notesInfo: [
[
{
noteId: 11,
fields: {
Expression: { value: 'word-11' },
Sentence: { value: 'line-11' },
},
},
],
[
{
noteId: 11,
fields: {
Expression: { value: 'word-11' },
Sentence: { value: 'line-11' },
},
},
],
],
duplicateNoteId: 13,
hasAllConfiguredFields: false,
onProcessNewCard: (noteId, options) => {
processCalls.push({ noteId, options });
},
});
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(processCalls, [{ noteId: 11, options: { skipKikuFieldGrouping: true } }]);
assert.deepEqual(harness.manualCalls, []);
});
test('triggerFieldGroupingForLastAddedCard shows a cancellation message when manual grouping is declined', async () => {
const harness = createHarness({
kikuFieldGrouping: 'manual',
noteIds: [9],
notesInfo: [
[
{
noteId: 9,
fields: {
Expression: { value: 'word-9' },
Sentence: { value: 'line-9' },
},
},
],
[
{
noteId: 9,
fields: {
Expression: { value: 'word-9' },
Sentence: { value: 'line-9' },
},
},
],
],
duplicateNoteId: 77,
manualHandled: false,
});
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(harness.manualCalls, [
{
originalNoteId: 77,
newNoteId: 9,
expression: 'word-9',
},
]);
assert.equal(harness.calls.at(-1), 'osd:Field grouping cancelled');
});
test('buildFieldGroupingPreview returns merged compact and full previews', async () => {
const service = new FieldGroupingService({
getConfig: () => ({ fields: { word: 'Expression' } }),
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: true,
kikuFieldGrouping: 'auto',
kikuDeleteDuplicateInAuto: true,
}),
isUpdateInProgress: () => false,
withUpdateProgress: async (_message, action) => action(),
showOsdNotification: () => {},
findNotes: async () => [],
notesInfo: async (noteIds) =>
noteIds.map((noteId) => ({
noteId,
fields: {
Sentence: { value: `sentence-${noteId}` },
SentenceAudio: { value: `[sound:${noteId}.mp3]` },
Picture: { value: `<img src="${noteId}.png">` },
MiscInfo: { value: `misc-${noteId}` },
},
})),
extractFields: () => ({}),
findDuplicateNote: async () => null,
hasAllConfiguredFields: () => true,
processNewCard: async () => {},
getSentenceCardImageFieldName: () => undefined,
resolveFieldName: (availableFieldNames, preferredName) =>
availableFieldNames.find(
(name) => name === preferredName || name.toLowerCase() === preferredName.toLowerCase(),
) ?? null,
computeFieldGroupingMergedFields: async () => ({
Sentence: 'merged sentence',
SentenceAudio: 'merged audio',
Picture: 'merged picture',
MiscInfo: 'merged misc',
}),
getNoteFieldMap: (noteInfo) =>
Object.fromEntries(
Object.entries(noteInfo.fields).map(([key, value]) => [key, value.value || '']),
),
handleFieldGroupingAuto: async () => {},
handleFieldGroupingManual: async () => true,
});
const preview = await service.buildFieldGroupingPreview(1, 2, true);
assert.equal(preview.ok, true);
if (!preview.ok) {
throw new Error(preview.error);
}
const successPreview = preview as SuccessfulPreview;
assert.deepEqual(successPreview.compact.action, {
keepNoteId: 1,
deleteNoteId: 2,
deleteDuplicate: true,
});
assert.equal(successPreview.compact.mergedFields.Sentence, 'merged sentence');
assert.equal(successPreview.full.result.wouldDeleteNoteId, 2);
});
test('buildFieldGroupingPreview reports missing notes cleanly', async () => {
const service = new FieldGroupingService({
getConfig: () => ({ fields: { word: 'Expression' } }),
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: true,
kikuFieldGrouping: 'auto',
kikuDeleteDuplicateInAuto: true,
}),
isUpdateInProgress: () => false,
withUpdateProgress: async (_message, action) => action(),
showOsdNotification: () => {},
findNotes: async () => [],
notesInfo: async () => [
{
noteId: 1,
fields: {
Sentence: { value: 'sentence-1' },
},
},
],
extractFields: () => ({}),
findDuplicateNote: async () => null,
hasAllConfiguredFields: () => true,
processNewCard: async () => {},
getSentenceCardImageFieldName: () => undefined,
resolveFieldName: () => null,
computeFieldGroupingMergedFields: async () => ({}),
getNoteFieldMap: () => ({}),
handleFieldGroupingAuto: async () => {},
handleFieldGroupingManual: async () => true,
});
const preview = await service.buildFieldGroupingPreview(1, 2, false);
assert.equal(preview.ok, false);
if (preview.ok) {
throw new Error('expected preview to fail');
}
assert.equal(preview.error, 'Could not load selected notes');
});


@@ -1,4 +1,4 @@
-import { KikuMergePreviewResponse } from '../types';
+import { KikuMergePreviewResponse } from '../types/anki';
import { createLogger } from '../logger';
import { getPreferredWordValueFromExtractedFields } from '../anki-field-config';


@@ -4,7 +4,7 @@ import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
-import type { AnkiConnectConfig } from '../types';
+import type { AnkiConnectConfig } from '../types/anki';
import { KnownWordCacheManager } from './known-word-cache';
async function waitForCondition(
@@ -85,13 +85,15 @@ test('KnownWordCacheManager startLifecycle keeps fresh persisted cache without i
},
};
const { manager, calls, statePath, cleanup } = createKnownWordCacheHarness(config);
const originalDateNow = Date.now;
try {
Date.now = () => 120_000;
fs.writeFileSync(
statePath,
JSON.stringify({
version: 2,
-          refreshedAtMs: Date.now(),
+          refreshedAtMs: 120_000,
scope: '{"refreshMinutes":60,"scope":"is:note","fieldsWord":""}',
words: ['猫'],
notes: {
@@ -102,12 +104,20 @@ test('KnownWordCacheManager startLifecycle keeps fresh persisted cache without i
);
manager.startLifecycle();
await new Promise((resolve) => setTimeout(resolve, 25));
assert.equal(manager.isKnownWord('猫'), true);
assert.equal(calls.findNotes, 0);
assert.equal(calls.notesInfo, 0);
assert.equal(
(
manager as unknown as {
getMsUntilNextRefresh: () => number;
}
).getMsUntilNextRefresh() > 0,
true,
);
} finally {
Date.now = originalDateNow;
manager.stopLifecycle();
cleanup();
}
@@ -124,13 +134,15 @@ test('KnownWordCacheManager startLifecycle immediately refreshes stale persisted
},
};
const { manager, calls, statePath, clientState, cleanup } = createKnownWordCacheHarness(config);
const originalDateNow = Date.now;
try {
Date.now = () => 120_000;
fs.writeFileSync(
statePath,
JSON.stringify({
version: 2,
-          refreshedAtMs: Date.now() - 61_000,
+          refreshedAtMs: 59_000,
scope: '{"refreshMinutes":1,"scope":"is:note","fieldsWord":"Word"}',
words: ['猫'],
notes: {
@@ -156,6 +168,7 @@ test('KnownWordCacheManager startLifecycle immediately refreshes stale persisted
assert.equal(manager.isKnownWord('猫'), false);
assert.equal(manager.isKnownWord('犬'), true);
} finally {
Date.now = originalDateNow;
manager.stopLifecycle();
cleanup();
}
@@ -351,10 +364,7 @@ test('KnownWordCacheManager preserves cache state key captured before refresh wo
scope: string;
words: string[];
};
-    assert.equal(
-      persisted.scope,
-      '{"refreshMinutes":1,"scope":"is:note","fieldsWord":"Word"}',
-    );
+    assert.equal(persisted.scope, '{"refreshMinutes":1,"scope":"is:note","fieldsWord":"Word"}');
assert.deepEqual(persisted.words, ['猫']);
} finally {
fs.rmSync(stateDir, { recursive: true, force: true });


@@ -3,7 +3,7 @@ import path from 'path';
import { DEFAULT_ANKI_CONNECT_CONFIG } from '../config';
import { getConfiguredWordFieldName } from '../anki-field-config';
-import { AnkiConnectConfig } from '../types';
+import { AnkiConnectConfig } from '../types/anki';
import { createLogger } from '../logger';
const log = createLogger('anki').child('integration.known-word-cache');
@@ -316,9 +316,9 @@ export class KnownWordCacheManager {
const currentDeck = this.deps.getConfig().deck?.trim();
const selectedDeckEntry =
currentDeck !== undefined && currentDeck.length > 0
? trimmedDeckEntries.find(([deckName]) => deckName === currentDeck) ?? null
? (trimmedDeckEntries.find(([deckName]) => deckName === currentDeck) ?? null)
: trimmedDeckEntries.length === 1
? trimmedDeckEntries[0] ?? null
? (trimmedDeckEntries[0] ?? null)
: null;
if (!selectedDeckEntry) {
@@ -329,7 +329,10 @@ export class KnownWordCacheManager {
if (Array.isArray(deckFields)) {
const normalizedFields = [
...new Set(
deckFields.map(String).map((field) => field.trim()).filter((field) => field.length > 0),
deckFields
.map(String)
.map((field) => field.trim())
.filter((field) => field.length > 0),
),
];
if (normalizedFields.length > 0) {
@@ -353,7 +356,14 @@ export class KnownWordCacheManager {
continue;
}
const normalizedFields = Array.isArray(fields)
? [...new Set(fields.map(String).map((field) => field.trim()).filter(Boolean))]
? [
...new Set(
fields
.map(String)
.map((field) => field.trim())
.filter(Boolean),
),
]
: [];
scopes.push({
query: `deck:"${escapeAnkiSearchValue(trimmedDeckName)}"`,
@@ -402,7 +412,10 @@ export class KnownWordCacheManager {
private async fetchKnownWordNoteFieldsById(): Promise<Map<number, string[]>> {
const scopes = this.getKnownWordQueryScopes();
const noteFieldsById = new Map<number, string[]>();
log.debug('Refreshing known-word cache', `queries=${scopes.map((scope) => scope.query).join(' | ')}`);
log.debug(
'Refreshing known-word cache',
`queries=${scopes.map((scope) => scope.query).join(' | ')}`,
);
for (const scope of scopes) {
const noteIds = (await this.deps.client.findNotes(scope.query, {
@@ -414,10 +427,7 @@ export class KnownWordCacheManager {
continue;
}
const existingFields = noteFieldsById.get(noteId) ?? [];
noteFieldsById.set(
noteId,
[...new Set([...existingFields, ...scope.fields])],
);
noteFieldsById.set(noteId, [...new Set([...existingFields, ...scope.fields])]);
}
}


@@ -1,5 +1,5 @@
import { isRemoteMediaPath } from '../jimaku/utils';
import type { MpvClient } from '../types';
import type { MpvClient } from '../types/runtime';
export type MediaGenerationKind = 'audio' | 'video';
@@ -50,7 +50,7 @@ function resolvePreferredUrlFromMpvEdlSource(
// mpv EDL sources usually list audio streams first and video streams last, so
// when classifyMediaUrl cannot identify a typed URL we fall back to stream order.
return kind === 'audio' ? urls[0] ?? null : urls[urls.length - 1] ?? null;
return kind === 'audio' ? (urls[0] ?? null) : (urls[urls.length - 1] ?? null);
}
export async function resolveMediaGenerationInputPath(


@@ -4,35 +4,41 @@ import test from 'node:test';
import { PollingRunner } from './polling';
test('polling runner records newly added cards after initialization', async () => {
const originalDateNow = Date.now;
const recordedCards: number[] = [];
let tracked = new Set<number>();
const responses = [
[10, 11],
[10, 11, 12, 13],
];
const runner = new PollingRunner({
getDeck: () => 'Mining',
getPollingRate: () => 250,
findNotes: async () => responses.shift() ?? [],
shouldAutoUpdateNewCards: () => true,
processNewCard: async () => undefined,
recordCardsAdded: (count) => {
recordedCards.push(count);
},
isUpdateInProgress: () => false,
setUpdateInProgress: () => undefined,
getTrackedNoteIds: () => tracked,
setTrackedNoteIds: (noteIds) => {
tracked = noteIds;
},
showStatusNotification: () => undefined,
logDebug: () => undefined,
logInfo: () => undefined,
logWarn: () => undefined,
});
try {
Date.now = () => 120_000;
const runner = new PollingRunner({
getDeck: () => 'Mining',
getPollingRate: () => 250,
findNotes: async () => responses.shift() ?? [],
shouldAutoUpdateNewCards: () => true,
processNewCard: async () => undefined,
recordCardsAdded: (count) => {
recordedCards.push(count);
},
isUpdateInProgress: () => false,
setUpdateInProgress: () => undefined,
getTrackedNoteIds: () => tracked,
setTrackedNoteIds: (noteIds) => {
tracked = noteIds;
},
showStatusNotification: () => undefined,
logDebug: () => undefined,
logInfo: () => undefined,
logWarn: () => undefined,
});
await runner.pollOnce();
await runner.pollOnce();
await runner.pollOnce();
await runner.pollOnce();
assert.deepEqual(recordedCards, [2]);
assert.deepEqual(recordedCards, [2]);
} finally {
Date.now = originalDateNow;
}
});

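The polling test above pins `Date.now` by hand and restores it in a `finally` block, and the tracker tests factor this into a shared `withMockDate` helper (the name comes from the commit message; the real signature may differ). A minimal sketch of that pattern:

```typescript
// Hedged sketch of a withMockDate-style helper: pin Date.now for the
// duration of a callback, then restore it even if the callback throws.
async function withMockDate<T>(nowMs: number, fn: () => T | Promise<T>): Promise<T> {
  const originalDateNow = Date.now;
  Date.now = () => nowMs;
  try {
    return await fn();
  } finally {
    Date.now = originalDateNow;
  }
}

// Usage (inside an async test body):
// const value = await withMockDate(120_000, () => Date.now()); // 120000
```

Centralizing the save/mock/restore keeps a failing assertion from leaking a mocked clock into later tests, which is the failure mode the manual `originalDateNow` bookkeeping guards against.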

@@ -2,7 +2,7 @@ import test from 'node:test';
import assert from 'node:assert/strict';
import { DEFAULT_ANKI_CONNECT_CONFIG } from '../config';
import type { AnkiConnectConfig } from '../types';
import type { AnkiConnectConfig } from '../types/anki';
import { AnkiIntegrationRuntime } from './runtime';
function createRuntime(


@@ -1,5 +1,5 @@
import { DEFAULT_ANKI_CONNECT_CONFIG } from '../config';
import type { AnkiConnectConfig } from '../types';
import type { AnkiConnectConfig } from '../types/anki';
import {
getKnownWordCacheLifecycleConfig,
getKnownWordCacheRefreshIntervalMinutes,


@@ -1,4 +1,4 @@
import { NotificationOptions } from '../types';
import { NotificationOptions } from '../types/anki';
export interface UiFeedbackState {
progressDepth: number;


@@ -5,6 +5,10 @@ import { resolve } from 'node:path';
const ciWorkflowPath = resolve(__dirname, '../.github/workflows/ci.yml');
const ciWorkflow = readFileSync(ciWorkflowPath, 'utf8');
const packageJsonPath = resolve(__dirname, '../package.json');
const packageJson = JSON.parse(readFileSync(packageJsonPath, 'utf8')) as {
scripts: Record<string, string>;
};
test('ci workflow lints changelog fragments', () => {
assert.match(ciWorkflow, /bun run changelog:lint/);
@@ -18,3 +22,17 @@ test('ci workflow checks pull requests for required changelog fragments', () =>
test('ci workflow verifies generated config examples stay in sync', () => {
assert.match(ciWorkflow, /bun run verify:config-example/);
});
test('package scripts expose a sharded maintained source coverage lane with lcov output', () => {
assert.equal(
packageJson.scripts['test:coverage:src'],
'bun run build:yomitan && bun run scripts/run-coverage-lane.ts bun-src-full --coverage-dir coverage/test-src',
);
});
test('ci workflow runs the maintained source coverage lane and uploads lcov output', () => {
assert.match(ciWorkflow, /name: Coverage suite \(maintained source lane\)/);
assert.match(ciWorkflow, /run: bun run test:coverage:src/);
assert.match(ciWorkflow, /name: Upload coverage artifact/);
assert.match(ciWorkflow, /path: coverage\/test-src\/lcov\.info/);
});


@@ -4,7 +4,7 @@ import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { ConfigService, ConfigStartupParseError } from './service';
import { DEFAULT_CONFIG, RUNTIME_OPTION_REGISTRY } from './definitions';
import { DEFAULT_CONFIG, RUNTIME_OPTION_REGISTRY, deepMergeRawConfig } from './definitions';
import { generateConfigTemplate } from './template';
function makeTempDir(): string {
@@ -1032,6 +1032,61 @@ test('reloadConfigStrict parse failure does not mutate raw config or warnings',
assert.deepEqual(service.getWarnings(), beforeWarnings);
});
test('SM-012 config paths do not use JSON serialize-clone helpers', () => {
const definitionsSource = fs.readFileSync(
path.join(process.cwd(), 'src/config/definitions.ts'),
'utf-8',
);
const serviceSource = fs.readFileSync(path.join(process.cwd(), 'src/config/service.ts'), 'utf-8');
assert.equal(definitionsSource.includes('JSON.parse(JSON.stringify('), false);
assert.equal(serviceSource.includes('JSON.parse(JSON.stringify('), false);
});
test('getRawConfig returns a detached clone', () => {
const dir = makeTempDir();
fs.writeFileSync(
path.join(dir, 'config.jsonc'),
`{
"ankiConnect": {
"tags": ["SubMiner"]
}
}`,
'utf-8',
);
const service = new ConfigService(dir);
const raw = service.getRawConfig();
raw.ankiConnect!.tags!.push('mutated');
assert.deepEqual(service.getRawConfig().ankiConnect?.tags, ['SubMiner']);
});
test('deepMergeRawConfig returns a detached merged clone', () => {
const base = {
ankiConnect: {
tags: ['SubMiner'],
behavior: {
autoUpdateNewCards: true,
},
},
};
const merged = deepMergeRawConfig(base, {
ankiConnect: {
behavior: {
autoUpdateNewCards: false,
},
},
});
merged.ankiConnect!.tags!.push('mutated');
merged.ankiConnect!.behavior!.autoUpdateNewCards = true;
assert.deepEqual(base.ankiConnect?.tags, ['SubMiner']);
assert.equal(base.ankiConnect?.behavior?.autoUpdateNewCards, true);
});
test('warning emission order is deterministic across reloads', () => {
const dir = makeTempDir();
const configPath = path.join(dir, 'config.jsonc');
@@ -1325,8 +1380,14 @@ test('controller descriptor config rejects malformed binding objects', () => {
config.controller.bindings.leftStickHorizontal,
DEFAULT_CONFIG.controller.bindings.leftStickHorizontal,
);
assert.equal(warnings.some((warning) => warning.path === 'controller.bindings.toggleLookup'), true);
assert.equal(warnings.some((warning) => warning.path === 'controller.bindings.closeLookup'), true);
assert.equal(
warnings.some((warning) => warning.path === 'controller.bindings.toggleLookup'),
true,
);
assert.equal(
warnings.some((warning) => warning.path === 'controller.bindings.closeLookup'),
true,
);
assert.equal(
warnings.some((warning) => warning.path === 'controller.bindings.leftStickHorizontal'),
true,


@@ -1,4 +1,4 @@
import { RawConfig, ResolvedConfig } from '../types';
import { RawConfig, ResolvedConfig } from '../types/config';
import { CORE_DEFAULT_CONFIG } from './definitions/defaults-core';
import { IMMERSION_DEFAULT_CONFIG } from './definitions/defaults-immersion';
import { INTEGRATIONS_DEFAULT_CONFIG } from './definitions/defaults-integrations';
@@ -84,11 +84,11 @@ export const CONFIG_OPTION_REGISTRY = [
export { CONFIG_TEMPLATE_SECTIONS };
export function deepCloneConfig(config: ResolvedConfig): ResolvedConfig {
return JSON.parse(JSON.stringify(config)) as ResolvedConfig;
return structuredClone(config);
}
export function deepMergeRawConfig(base: RawConfig, patch: RawConfig): RawConfig {
const clone = JSON.parse(JSON.stringify(base)) as Record<string, unknown>;
const clone = structuredClone(base) as Record<string, unknown>;
const patchObject = patch as Record<string, unknown>;
const mergeInto = (target: Record<string, unknown>, source: Record<string, unknown>): void => {

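The `deepCloneConfig`/`deepMergeRawConfig` change above swaps the JSON round-trip clone for `structuredClone`, and the new regression tests lean on the result being fully detached from its inputs. A small sketch of the property being asserted (toy object shape, not the real `RawConfig`):

```typescript
// Sketch: structuredClone yields a deep, detached copy, so mutating the
// clone never leaks into the source object.
const base = { ankiConnect: { tags: ['SubMiner'], behavior: { autoUpdateNewCards: true } } };
const clone = structuredClone(base);
clone.ankiConnect.tags.push('mutated');
clone.ankiConnect.behavior.autoUpdateNewCards = false;
console.log(base.ankiConnect.tags); // still ['SubMiner']
console.log(base.ankiConnect.behavior.autoUpdateNewCards); // still true
```

Unlike `JSON.parse(JSON.stringify(...))`, `structuredClone` also preserves `Date`, `Map`, and `Set` values; it still throws on functions, which plain config objects avoid.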

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
export const CORE_DEFAULT_CONFIG: Pick<
ResolvedConfig,


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
export const IMMERSION_DEFAULT_CONFIG: Pick<ResolvedConfig, 'immersionTracking'> = {
immersionTracking: {


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
export const INTEGRATIONS_DEFAULT_CONFIG: Pick<
ResolvedConfig,
@@ -129,6 +129,7 @@ export const INTEGRATIONS_DEFAULT_CONFIG: Pick<
},
discordPresence: {
enabled: false,
presenceStyle: 'default' as const,
updateIntervalMs: 3_000,
debounceMs: 750,
},


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types.js';
import { ResolvedConfig } from '../../types/config.js';
export const STATS_DEFAULT_CONFIG: Pick<ResolvedConfig, 'stats'> = {
stats: {


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
export const SUBTITLE_DEFAULT_CONFIG: Pick<ResolvedConfig, 'subtitleStyle' | 'subtitleSidebar'> = {
subtitleStyle: {


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ConfigOptionRegistryEntry } from './shared';
export function buildCoreConfigOptionRegistry(
@@ -263,7 +263,8 @@ export function buildCoreConfigOptionRegistry(
{
path: `controller.bindings.${binding.id}.axisIndex`,
kind: 'number' as const,
defaultValue: binding.defaultValue.kind === 'axis' ? binding.defaultValue.axisIndex : undefined,
defaultValue:
binding.defaultValue.kind === 'axis' ? binding.defaultValue.axisIndex : undefined,
description: 'Raw axis index captured for this discrete controller action.',
},
{
@@ -293,7 +294,8 @@ export function buildCoreConfigOptionRegistry(
{
path: `controller.bindings.${binding.id}.axisIndex`,
kind: 'number' as const,
defaultValue: binding.defaultValue.kind === 'axis' ? binding.defaultValue.axisIndex : undefined,
defaultValue:
binding.defaultValue.kind === 'axis' ? binding.defaultValue.axisIndex : undefined,
description: 'Raw axis index captured for this analog controller action.',
},
{
@@ -302,7 +304,8 @@ export function buildCoreConfigOptionRegistry(
enumValues: ['none', 'horizontal', 'vertical'],
defaultValue:
binding.defaultValue.kind === 'axis' ? binding.defaultValue.dpadFallback : undefined,
description: 'Optional D-pad fallback used when this analog controller action should also read D-pad input.',
description:
'Optional D-pad fallback used when this analog controller action should also read D-pad input.',
},
]),
{


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ConfigOptionRegistryEntry } from './shared';
export function buildImmersionConfigOptionRegistry(


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ConfigOptionRegistryEntry, RuntimeOptionRegistryEntry } from './shared';
export function buildIntegrationConfigOptionRegistry(
@@ -323,6 +323,13 @@ export function buildIntegrationConfigOptionRegistry(
defaultValue: defaultConfig.discordPresence.enabled,
description: 'Enable optional Discord Rich Presence updates.',
},
{
path: 'discordPresence.presenceStyle',
kind: 'string',
defaultValue: defaultConfig.discordPresence.presenceStyle,
description:
'Presence card text preset: "default" (clean bilingual), "meme" (Mining and crafting), "japanese" (fully JP), or "minimal".',
},
{
path: 'discordPresence.updateIntervalMs',
kind: 'number',
@@ -369,13 +376,15 @@ export function buildIntegrationConfigOptionRegistry(
path: 'youtubeSubgen.whisperBin',
kind: 'string',
defaultValue: defaultConfig.youtubeSubgen.whisperBin,
description: 'Legacy compatibility path kept for external subtitle fallback tools; not used by default.',
description:
'Legacy compatibility path kept for external subtitle fallback tools; not used by default.',
},
{
path: 'youtubeSubgen.whisperModel',
kind: 'string',
defaultValue: defaultConfig.youtubeSubgen.whisperModel,
description: 'Legacy compatibility model path kept for external subtitle fallback tooling; not used by default.',
description:
'Legacy compatibility model path kept for external subtitle fallback tooling; not used by default.',
},
{
path: 'youtubeSubgen.whisperVadModel',


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types.js';
import { ResolvedConfig } from '../../types/config.js';
import { ConfigOptionRegistryEntry } from './shared.js';
export function buildStatsConfigOptionRegistry(
@@ -15,7 +15,8 @@ export function buildStatsConfigOptionRegistry(
path: 'stats.markWatchedKey',
kind: 'string',
defaultValue: defaultConfig.stats.markWatchedKey,
description: 'Key code to mark the current video as watched and advance to the next playlist entry.',
description:
'Key code to mark the current video as watched and advance to the next playlist entry.',
},
{
path: 'stats.serverPort',


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ConfigOptionRegistryEntry } from './shared';
export function buildSubtitleConfigOptionRegistry(


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { RuntimeOptionRegistryEntry } from './shared';
export function buildRuntimeOptionRegistry(


@@ -1,11 +1,11 @@
import {
AnkiConnectConfig,
ResolvedConfig,
import type { AnkiConnectConfig } from '../../types/anki';
import type { ResolvedConfig } from '../../types/config';
import type {
RuntimeOptionId,
RuntimeOptionScope,
RuntimeOptionValue,
RuntimeOptionValueType,
} from '../../types';
} from '../../types/runtime-options';
export type ConfigValueKind = 'boolean' | 'number' | 'string' | 'enum' | 'array' | 'object';


@@ -1,5 +1,5 @@
import * as fs from 'fs';
import { RawConfig } from '../types';
import { RawConfig } from '../types/config';
import { parseConfigContent } from './parse';
export interface ConfigPaths {


@@ -1,4 +1,4 @@
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../types';
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../types/config';
import { applyAnkiConnectResolution } from './resolve/anki-connect';
import { createResolveContext } from './resolve/context';
import { applyCoreDomainConfig } from './resolve/core-domains';


@@ -1,4 +1,4 @@
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../../types';
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../../types/config';
import { DEFAULT_CONFIG, deepCloneConfig } from '../definitions';
import { createWarningCollector } from '../warnings';
import { isObject } from './shared';


@@ -8,7 +8,7 @@ import type {
ControllerDiscreteBindingConfig,
ResolvedControllerAxisBinding,
ResolvedControllerDiscreteBinding,
} from '../../types';
} from '../../types/runtime';
import { ResolveContext } from './context';
import { asBoolean, asNumber, asString, isObject } from './shared';
@@ -27,7 +27,12 @@ const CONTROLLER_BUTTON_BINDINGS = [
'rightTrigger',
] as const;
const CONTROLLER_AXIS_BINDINGS = ['leftStickX', 'leftStickY', 'rightStickX', 'rightStickY'] as const;
const CONTROLLER_AXIS_BINDINGS = [
'leftStickX',
'leftStickY',
'rightStickX',
'rightStickY',
] as const;
const CONTROLLER_AXIS_INDEX_BY_BINDING: Record<ControllerAxisBinding, number> = {
leftStickX: 0,
@@ -98,7 +103,9 @@ function parseDiscreteBindingObject(value: unknown): ResolvedControllerDiscreteB
return { kind: 'none' };
}
if (value.kind === 'button') {
return typeof value.buttonIndex === 'number' && Number.isInteger(value.buttonIndex) && value.buttonIndex >= 0
return typeof value.buttonIndex === 'number' &&
Number.isInteger(value.buttonIndex) &&
value.buttonIndex >= 0
? { kind: 'button', buttonIndex: value.buttonIndex }
: null;
}
@@ -121,7 +128,11 @@ function parseAxisBindingObject(
return { kind: 'none' };
}
if (!isObject(value) || value.kind !== 'axis') return null;
if (typeof value.axisIndex !== 'number' || !Number.isInteger(value.axisIndex) || value.axisIndex < 0) {
if (
typeof value.axisIndex !== 'number' ||
!Number.isInteger(value.axisIndex) ||
value.axisIndex < 0
) {
return null;
}
if (value.dpadFallback !== undefined && !isControllerDpadFallback(value.dpadFallback)) {
@@ -368,7 +379,9 @@ export function applyCoreDomainConfig(context: ResolveContext): void {
const legacyValue = asString(bindingValue);
if (
legacyValue !== undefined &&
CONTROLLER_BUTTON_BINDINGS.includes(legacyValue as (typeof CONTROLLER_BUTTON_BINDINGS)[number])
CONTROLLER_BUTTON_BINDINGS.includes(
legacyValue as (typeof CONTROLLER_BUTTON_BINDINGS)[number],
)
) {
resolved.controller.bindings[key] = resolveLegacyDiscreteBinding(
legacyValue as ControllerButtonBinding,
@@ -401,7 +414,9 @@ export function applyCoreDomainConfig(context: ResolveContext): void {
const legacyValue = asString(bindingValue);
if (
legacyValue !== undefined &&
CONTROLLER_AXIS_BINDINGS.includes(legacyValue as (typeof CONTROLLER_AXIS_BINDINGS)[number])
CONTROLLER_AXIS_BINDINGS.includes(
legacyValue as (typeof CONTROLLER_AXIS_BINDINGS)[number],
)
) {
resolved.controller.bindings[key] = resolveLegacyAxisBinding(
legacyValue as ControllerAxisBinding,


@@ -1,5 +1,8 @@
import { ResolveContext } from './context';
import { ImmersionTrackingRetentionMode, ImmersionTrackingRetentionPreset } from '../../types';
import {
ImmersionTrackingRetentionMode,
ImmersionTrackingRetentionPreset,
} from '../../types/integrations';
import { asBoolean, asNumber, asString, isObject } from './shared';
const DEFAULT_RETENTION_MODE: ImmersionTrackingRetentionMode = 'preset';


@@ -17,7 +17,12 @@ export function applyStatsConfig(context: ResolveContext): void {
if (markWatchedKey !== undefined) {
resolved.stats.markWatchedKey = markWatchedKey;
} else if (src.stats.markWatchedKey !== undefined) {
warn('stats.markWatchedKey', src.stats.markWatchedKey, resolved.stats.markWatchedKey, 'Expected string.');
warn(
'stats.markWatchedKey',
src.stats.markWatchedKey,
resolved.stats.markWatchedKey,
'Expected string.',
);
}
const serverPort = asNumber(src.stats.serverPort);


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ResolveContext } from './context';
import {
asBoolean,
@@ -467,7 +467,9 @@ export function applySubtitleDomainConfig(context: ResolveContext): void {
);
if (pauseVideoOnHover !== undefined) {
resolved.subtitleSidebar.pauseVideoOnHover = pauseVideoOnHover;
} else if ((src.subtitleSidebar as { pauseVideoOnHover?: unknown }).pauseVideoOnHover !== undefined) {
} else if (
(src.subtitleSidebar as { pauseVideoOnHover?: unknown }).pauseVideoOnHover !== undefined
) {
resolved.subtitleSidebar.pauseVideoOnHover = fallback.pauseVideoOnHover;
warn(
'subtitleSidebar.pauseVideoOnHover',

View File

@@ -49,7 +49,10 @@ test('subtitleSidebar accepts zero opacity', () => {
applySubtitleDomainConfig(context);
assert.equal(context.resolved.subtitleSidebar.opacity, 0);
assert.equal(warnings.some((warning) => warning.path === 'subtitleSidebar.opacity'), false);
assert.equal(
warnings.some((warning) => warning.path === 'subtitleSidebar.opacity'),
false,
);
});
test('subtitleSidebar falls back and warns on invalid values', () => {


@@ -1,6 +1,6 @@
import * as fs from 'fs';
import * as path from 'path';
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../types';
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../types/config';
import { DEFAULT_CONFIG, deepCloneConfig, deepMergeRawConfig } from './definitions';
import { ConfigPaths, loadRawConfig, loadRawConfigStrict } from './load';
import { resolveConfig } from './resolve';
@@ -61,7 +61,7 @@ export class ConfigService {
}
getRawConfig(): RawConfig {
return JSON.parse(JSON.stringify(this.rawConfig)) as RawConfig;
return structuredClone(this.rawConfig);
}
getWarnings(): ConfigValidationWarning[] {


@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../types';
import { ResolvedConfig } from '../types/config';
import {
CONFIG_OPTION_REGISTRY,
CONFIG_TEMPLATE_SECTIONS,


@@ -1,4 +1,4 @@
import { ConfigValidationWarning } from '../types';
import { ConfigValidationWarning } from '../types/config';
export interface WarningCollector {
warnings: ConfigValidationWarning[];


@@ -3,7 +3,7 @@ import assert from 'node:assert/strict';
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import { createStatsApp } from '../stats-server.js';
import { createStatsApp, startStatsServer } from '../stats-server.js';
import type { ImmersionTrackerService } from '../immersion-tracker-service.js';
const SESSION_SUMMARIES = [
@@ -1110,4 +1110,80 @@ describe('stats server API routes', () => {
assert.equal(res.headers.get('content-type'), 'image/jpeg');
assert.equal(ensureCalls, 1);
});
it('starts the stats server with Bun.serve', () => {
type BunRuntime = {
Bun: {
serve: (options: { fetch: unknown; port: number; hostname: string }) => {
stop: () => void;
};
};
};
const bun = globalThis as typeof globalThis & BunRuntime;
const originalServe = bun.Bun.serve;
let servedWith: { fetch: unknown; port: number; hostname: string } | null = null;
let stopCalls = 0;
bun.Bun.serve = (options: { fetch: unknown; port: number; hostname: string }) => {
servedWith = options;
return {
stop: () => {
stopCalls += 1;
},
};
};
try {
const server = startStatsServer({
port: 3210,
staticDir: fs.mkdtempSync(path.join(os.tmpdir(), 'subminer-stats-server-start-')),
tracker: createMockTracker(),
});
if (servedWith === null) {
throw new Error('expected Bun.serve to be called');
}
const servedOptions = servedWith as {
fetch: unknown;
port: number;
hostname: string;
};
assert.equal(servedOptions.port, 3210);
assert.equal(servedOptions.hostname, '127.0.0.1');
assert.equal(typeof servedOptions.fetch, 'function');
server.close();
assert.equal(stopCalls, 1);
} finally {
bun.Bun.serve = originalServe;
}
});
it('falls back to node:http when Bun.serve is unavailable', () => {
type BunRuntime = {
Bun: {
serve?: (options: { fetch: unknown; port: number; hostname: string }) => {
stop: () => void;
};
};
};
const bun = globalThis as typeof globalThis & BunRuntime;
const originalServe = bun.Bun.serve;
bun.Bun.serve = undefined;
try {
const server = startStatsServer({
port: 0,
staticDir: fs.mkdtempSync(path.join(os.tmpdir(), 'subminer-stats-server-node-')),
tracker: createMockTracker(),
});
server.close();
} finally {
bun.Bun.serve = originalServe;
}
});
});


@@ -1,6 +1,6 @@
import * as fs from 'fs';
import * as path from 'path';
import * as electron from 'electron';
import { ensureDirForFile } from '../../../shared/fs-utils';
interface PersistedTokenPayload {
encryptedToken?: string;
@@ -21,15 +21,8 @@ export interface SafeStorageLike {
getSelectedStorageBackend?: () => string;
}
function ensureDirectory(filePath: string): void {
const dir = path.dirname(filePath);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
}
function writePayload(filePath: string, payload: PersistedTokenPayload): void {
ensureDirectory(filePath);
ensureDirForFile(filePath);
fs.writeFileSync(filePath, JSON.stringify(payload, null, 2), 'utf-8');
}


@@ -51,7 +51,7 @@ test('anilist update queue applies retry backoff and dead-letter', () => {
const loggerState = createLogger();
const queue = createAnilistUpdateQueue(queueFile, loggerState.logger);
const now = 1_700_000_000_000;
const now = 1_700_000 * 1_000_000;
queue.enqueue('k2', 'Backoff Demo', 2);
queue.markFailure('k2', 'fail-1', now);
@@ -62,7 +62,7 @@ test('anilist update queue applies retry backoff and dead-letter', () => {
pending: Array<{ attemptCount: number; nextAttemptAt: number }>;
};
assert.equal(pendingPayload.pending[0]?.attemptCount, 1);
assert.equal(pendingPayload.pending[0]?.nextAttemptAt, now + 30_000);
assert.equal((pendingPayload.pending[0]?.nextAttemptAt ?? now) - now, 30_000);
for (let attempt = 2; attempt <= 8; attempt += 1) {
queue.markFailure('k2', `fail-${attempt}`, now);


@@ -1,5 +1,5 @@
import * as fs from 'fs';
import * as path from 'path';
import { ensureDirForFile } from '../../../shared/fs-utils';
const INITIAL_BACKOFF_MS = 30_000;
const MAX_BACKOFF_MS = 6 * 60 * 60 * 1000;
@@ -35,13 +35,6 @@ export interface AnilistUpdateQueue {
getSnapshot: (nowMs?: number) => AnilistRetryQueueSnapshot;
}
function ensureDir(filePath: string): void {
const dir = path.dirname(filePath);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
}
function clampBackoffMs(attemptCount: number): number {
const computed = INITIAL_BACKOFF_MS * Math.pow(2, Math.max(0, attemptCount - 1));
return Math.min(MAX_BACKOFF_MS, computed);
@@ -60,7 +53,7 @@ export function createAnilistUpdateQueue(
const persist = () => {
try {
ensureDir(filePath);
ensureDirForFile(filePath);
const payload: AnilistRetryQueuePayload = { pending, deadLetter };
fs.writeFileSync(filePath, JSON.stringify(payload, null, 2), 'utf-8');
} catch (error) {

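The retry cadence exercised by the AniList queue test comes from `clampBackoffMs`: 30 seconds doubled per failed attempt, capped at six hours. Restated as a standalone sketch (constants and formula copied from the diff above):

```typescript
const INITIAL_BACKOFF_MS = 30_000;
const MAX_BACKOFF_MS = 6 * 60 * 60 * 1000; // 21_600_000

// First failure waits 30s; each further failure doubles the wait,
// clamped at the 6-hour ceiling.
function clampBackoffMs(attemptCount: number): number {
  const computed = INITIAL_BACKOFF_MS * Math.pow(2, Math.max(0, attemptCount - 1));
  return Math.min(MAX_BACKOFF_MS, computed);
}

console.log(clampBackoffMs(1)); // 30000
console.log(clampBackoffMs(4)); // 240000
console.log(clampBackoffMs(11)); // 21600000 (capped)
```

This is the schedule behind the test's `nextAttemptAt - now === 30_000` assertion after the first `markFailure`.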

@@ -6,7 +6,8 @@ import test from 'node:test';
import { createCoverArtFetcher, stripFilenameTags } from './cover-art-fetcher.js';
import { Database } from '../immersion-tracker/sqlite.js';
import { ensureSchema, getOrCreateVideoRecord } from '../immersion-tracker/storage.js';
import { getCoverArt, upsertCoverArt } from '../immersion-tracker/query.js';
import { getCoverArt } from '../immersion-tracker/query-library.js';
import { upsertCoverArt } from '../immersion-tracker/query-maintenance.js';
import { SOURCE_TYPE_LOCAL } from '../immersion-tracker/types.js';
function makeDbPath(): string {


@@ -0,0 +1,88 @@
import assert from 'node:assert/strict';
import test from 'node:test';
import { createAnilistRateLimiter } from './rate-limiter';
function createTimerHarness() {
let now = 1_000;
const waits: number[] = [];
const originalNow = Date.now;
const originalSetTimeout = globalThis.setTimeout;
Date.now = () => now;
globalThis.setTimeout = ((handler: TimerHandler, timeout?: number) => {
const waitMs = Number(timeout ?? 0);
waits.push(waitMs);
now += waitMs;
if (typeof handler === 'function') {
handler();
}
return 0 as unknown as ReturnType<typeof setTimeout>;
}) as unknown as typeof setTimeout;
return {
waits,
advance(ms: number): void {
now += ms;
},
restore(): void {
Date.now = originalNow;
globalThis.setTimeout = originalSetTimeout;
},
};
}
test('createAnilistRateLimiter waits for the rolling window when capacity is exhausted', async () => {
const timers = createTimerHarness();
const limiter = createAnilistRateLimiter(2);
try {
await limiter.acquire();
await limiter.acquire();
timers.advance(1);
await limiter.acquire();
assert.equal(timers.waits.length, 1);
assert.equal(timers.waits[0], 60_099);
} finally {
timers.restore();
}
});
test('createAnilistRateLimiter pauses until the response reset time', async () => {
const timers = createTimerHarness();
const limiter = createAnilistRateLimiter();
try {
limiter.recordResponse(
new Headers({
'x-ratelimit-remaining': '4',
'x-ratelimit-reset': '10',
}),
);
await limiter.acquire();
assert.deepEqual(timers.waits, [9_000]);
} finally {
timers.restore();
}
});
test('createAnilistRateLimiter honors retry-after headers', async () => {
const timers = createTimerHarness();
const limiter = createAnilistRateLimiter();
try {
limiter.recordResponse(
new Headers({
'retry-after': '3',
}),
);
await limiter.acquire();
assert.deepEqual(timers.waits, [3_000]);
} finally {
timers.restore();
}
});


@@ -185,11 +185,7 @@ test('runAppReadyRuntime uses minimal startup for texthooker-only mode', async (
await runAppReadyRuntime(deps);
assert.deepEqual(calls, [
'ensureDefaultConfigBootstrap',
'reloadConfig',
'handleInitialArgs',
]);
assert.deepEqual(calls, ['ensureDefaultConfigBootstrap', 'reloadConfig', 'handleInitialArgs']);
});
test('runAppReadyRuntime skips Jellyfin remote startup when dependency is not wired', async () => {


@@ -443,13 +443,23 @@ test('handleCliCommand still runs non-start actions on second-instance', () => {
);
});
test('handleCliCommand connects MPV for toggle on second-instance', () => {
test('handleCliCommand does not connect MPV for pure toggle on second-instance', () => {
const { deps, calls } = createDeps();
handleCliCommand(makeArgs({ toggle: true }), 'second-instance', deps);
assert.ok(calls.includes('toggleVisibleOverlay'));
assert.equal(
calls.some((value) => value === 'connectMpvClient'),
true,
false,
);
});
test('handleCliCommand does not connect MPV for explicit visible-overlay toggle', () => {
const { deps, calls } = createDeps();
handleCliCommand(makeArgs({ toggleVisibleOverlay: true }), 'second-instance', deps);
assert.ok(calls.includes('toggleVisibleOverlay'));
assert.equal(
calls.some((value) => value === 'connectMpvClient'),
false,
);
});


@@ -271,7 +271,7 @@ export function handleCliCommand(
const reuseSecondInstanceStart =
source === 'second-instance' && args.start && deps.isOverlayRuntimeInitialized();
const shouldStart = args.start || args.toggle || args.toggleVisibleOverlay;
const shouldConnectMpv = args.start;
const needsOverlayRuntime = commandNeedsOverlayRuntime(args);
const shouldInitializeOverlayRuntime = needsOverlayRuntime || args.start;
@@ -302,7 +302,7 @@ export function handleCliCommand(
deps.initializeOverlayRuntime();
}
if (shouldStart && deps.hasMpvClient()) {
if (shouldConnectMpv && deps.hasMpvClient()) {
const socketPath = deps.getMpvSocketPath();
deps.setMpvClientSocketPath(socketPath);
deps.connectMpvClient();

View File

@@ -58,7 +58,12 @@ function classifyDiff(prev: ResolvedConfig, next: ResolvedConfig): ConfigHotRelo
]);
for (const key of keys) {
if (key === 'subtitleStyle' || key === 'keybindings' || key === 'shortcuts' || key === 'subtitleSidebar') {
if (
key === 'subtitleStyle' ||
key === 'keybindings' ||
key === 'shortcuts' ||
key === 'subtitleSidebar'
) {
continue;
}

View File

@@ -10,10 +10,13 @@ import {
const baseConfig = {
enabled: true,
presenceStyle: 'default' as const,
updateIntervalMs: 10_000,
debounceMs: 200,
} as const;
const BASE_SESSION_STARTED_AT_MS = 1_700_000_000_000;
const baseSnapshot: DiscordPresenceSnapshot = {
mediaTitle: 'Sousou no Frieren E01',
mediaPath: '/media/Frieren/E01.mkv',
@@ -22,27 +25,70 @@ const baseSnapshot: DiscordPresenceSnapshot = {
mediaDurationSec: 1450,
paused: false,
connected: true,
sessionStartedAtMs: 1_700_000_000_000,
sessionStartedAtMs: BASE_SESSION_STARTED_AT_MS,
};
test('buildDiscordPresenceActivity maps polished payload fields', () => {
test('buildDiscordPresenceActivity maps polished payload fields (default style)', () => {
const payload = buildDiscordPresenceActivity(baseConfig, baseSnapshot);
assert.equal(payload.details, 'Sousou no Frieren E01');
assert.equal(payload.state, 'Playing 01:35 / 24:10');
assert.equal(payload.largeImageKey, 'subminer-logo');
assert.equal(payload.smallImageKey, 'study');
assert.equal(payload.smallImageText, '日本語学習中');
assert.equal(payload.buttons, undefined);
assert.equal(payload.startTimestamp, 1_700_000_000);
assert.equal(payload.startTimestamp, Math.floor(BASE_SESSION_STARTED_AT_MS / 1000));
});
test('buildDiscordPresenceActivity falls back to idle when disconnected', () => {
test('buildDiscordPresenceActivity falls back to idle with default style', () => {
const payload = buildDiscordPresenceActivity(baseConfig, {
...baseSnapshot,
connected: false,
mediaPath: null,
});
assert.equal(payload.state, 'Idle');
assert.equal(payload.details, 'Sentence Mining');
});
test('buildDiscordPresenceActivity uses meme style fallback', () => {
const memeConfig = { ...baseConfig, presenceStyle: 'meme' as const };
const payload = buildDiscordPresenceActivity(memeConfig, {
...baseSnapshot,
connected: false,
mediaPath: null,
});
assert.equal(payload.details, 'Mining and crafting (Anki cards)');
assert.equal(payload.smallImageText, 'Sentence Mining');
});
test('buildDiscordPresenceActivity uses japanese style', () => {
const jpConfig = { ...baseConfig, presenceStyle: 'japanese' as const };
const payload = buildDiscordPresenceActivity(jpConfig, {
...baseSnapshot,
connected: false,
mediaPath: null,
});
assert.equal(payload.details, '文の採掘中');
assert.equal(payload.smallImageText, 'イマージョン学習');
});
test('buildDiscordPresenceActivity uses minimal style', () => {
const minConfig = { ...baseConfig, presenceStyle: 'minimal' as const };
const payload = buildDiscordPresenceActivity(minConfig, {
...baseSnapshot,
connected: false,
mediaPath: null,
});
assert.equal(payload.details, 'SubMiner');
assert.equal(payload.smallImageKey, undefined);
assert.equal(payload.smallImageText, undefined);
});
test('buildDiscordPresenceActivity shows media title regardless of style', () => {
for (const presenceStyle of ['default', 'meme', 'japanese', 'minimal'] as const) {
const payload = buildDiscordPresenceActivity({ ...baseConfig, presenceStyle }, baseSnapshot);
assert.equal(payload.details, 'Sousou no Frieren E01');
assert.equal(payload.state, 'Playing 01:35 / 24:10');
}
});
test('service deduplicates identical updates and sends changed timeline', async () => {

View File

@@ -1,3 +1,4 @@
import type { DiscordPresenceStylePreset } from '../../types/integrations';
import type { ResolvedConfig } from '../../types';
export interface DiscordPresenceSnapshot {
@@ -33,15 +34,58 @@ type DiscordClient = {
type TimeoutLike = ReturnType<typeof setTimeout>;
const DISCORD_PRESENCE_STYLE = {
fallbackDetails: 'Mining and crafting (Anki cards)',
largeImageKey: 'subminer-logo',
largeImageText: 'SubMiner',
smallImageKey: 'study',
smallImageText: 'Sentence Mining',
buttonLabel: '',
buttonUrl: '',
} as const;
interface PresenceStyleDefinition {
fallbackDetails: string;
largeImageKey: string;
largeImageText: string;
smallImageKey: string;
smallImageText: string;
buttonLabel: string;
buttonUrl: string;
}
const PRESENCE_STYLES: Record<DiscordPresenceStylePreset, PresenceStyleDefinition> = {
default: {
fallbackDetails: 'Sentence Mining',
largeImageKey: 'subminer-logo',
largeImageText: 'SubMiner',
smallImageKey: 'study',
smallImageText: '日本語学習中',
buttonLabel: '',
buttonUrl: '',
},
meme: {
fallbackDetails: 'Mining and crafting (Anki cards)',
largeImageKey: 'subminer-logo',
largeImageText: 'SubMiner',
smallImageKey: 'study',
smallImageText: 'Sentence Mining',
buttonLabel: '',
buttonUrl: '',
},
japanese: {
fallbackDetails: '文の採掘中',
largeImageKey: 'subminer-logo',
largeImageText: 'SubMiner',
smallImageKey: 'study',
smallImageText: 'イマージョン学習',
buttonLabel: '',
buttonUrl: '',
},
minimal: {
fallbackDetails: 'SubMiner',
largeImageKey: 'subminer-logo',
largeImageText: 'SubMiner',
smallImageKey: '',
smallImageText: '',
buttonLabel: '',
buttonUrl: '',
},
};
function resolvePresenceStyle(preset: DiscordPresenceStylePreset | undefined): PresenceStyleDefinition {
return PRESENCE_STYLES[preset ?? 'default'] ?? PRESENCE_STYLES.default;
}
function trimField(value: string, maxLength = 128): string {
if (value.length <= maxLength) return value;
@@ -79,15 +123,16 @@ function formatClock(totalSeconds: number | null | undefined): string {
}
export function buildDiscordPresenceActivity(
_config: DiscordPresenceConfig,
config: DiscordPresenceConfig,
snapshot: DiscordPresenceSnapshot,
): DiscordActivityPayload {
const style = resolvePresenceStyle(config.presenceStyle);
const status = buildStatus(snapshot);
const title = sanitizeText(snapshot.mediaTitle, basename(snapshot.mediaPath) || 'Unknown media');
const details =
snapshot.connected && snapshot.mediaPath
? trimField(title)
: DISCORD_PRESENCE_STYLE.fallbackDetails;
: style.fallbackDetails;
const timeline = `${formatClock(snapshot.currentTimeSec)} / ${formatClock(snapshot.mediaDurationSec)}`;
const state =
snapshot.connected && snapshot.mediaPath
@@ -100,26 +145,26 @@ export function buildDiscordPresenceActivity(
startTimestamp: Math.floor(snapshot.sessionStartedAtMs / 1000),
};
if (DISCORD_PRESENCE_STYLE.largeImageKey.trim().length > 0) {
activity.largeImageKey = DISCORD_PRESENCE_STYLE.largeImageKey.trim();
if (style.largeImageKey.trim().length > 0) {
activity.largeImageKey = style.largeImageKey.trim();
}
if (DISCORD_PRESENCE_STYLE.largeImageText.trim().length > 0) {
activity.largeImageText = trimField(DISCORD_PRESENCE_STYLE.largeImageText.trim());
if (style.largeImageText.trim().length > 0) {
activity.largeImageText = trimField(style.largeImageText.trim());
}
if (DISCORD_PRESENCE_STYLE.smallImageKey.trim().length > 0) {
activity.smallImageKey = DISCORD_PRESENCE_STYLE.smallImageKey.trim();
if (style.smallImageKey.trim().length > 0) {
activity.smallImageKey = style.smallImageKey.trim();
}
if (DISCORD_PRESENCE_STYLE.smallImageText.trim().length > 0) {
activity.smallImageText = trimField(DISCORD_PRESENCE_STYLE.smallImageText.trim());
if (style.smallImageText.trim().length > 0) {
activity.smallImageText = trimField(style.smallImageText.trim());
}
if (
DISCORD_PRESENCE_STYLE.buttonLabel.trim().length > 0 &&
/^https?:\/\//.test(DISCORD_PRESENCE_STYLE.buttonUrl.trim())
style.buttonLabel.trim().length > 0 &&
/^https?:\/\//.test(style.buttonUrl.trim())
) {
activity.buttons = [
{
label: trimField(DISCORD_PRESENCE_STYLE.buttonLabel.trim(), 32),
url: DISCORD_PRESENCE_STYLE.buttonUrl.trim(),
label: trimField(style.buttonLabel.trim(), 32),
url: style.buttonUrl.trim(),
},
];
}
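The preset table and guards above can be exercised in isolation. The sketch below restates only the `fallbackDetails` column and the `resolvePresenceStyle` fallback, as an illustration rather than the module's export:

```typescript
// Illustrative re-statement of the preset table: a missing preset resolves
// to the default style. Only fallbackDetails is reproduced here.
type Preset = 'default' | 'meme' | 'japanese' | 'minimal';

const FALLBACK_DETAILS: Record<Preset, string> = {
  default: 'Sentence Mining',
  meme: 'Mining and crafting (Anki cards)',
  japanese: '文の採掘中',
  minimal: 'SubMiner',
};

function resolveFallbackDetails(preset: Preset | undefined): string {
  return FALLBACK_DETAILS[preset ?? 'default'] ?? FALLBACK_DETAILS.default;
}
```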

View File

@@ -6,6 +6,7 @@ import path from 'node:path';
import { toMonthKey } from './immersion-tracker/maintenance';
import { enqueueWrite } from './immersion-tracker/queue';
import { Database, type DatabaseSync } from './immersion-tracker/sqlite';
import { nowMs as trackerNowMs } from './immersion-tracker/time';
import {
deriveCanonicalTitle,
normalizeText,
@@ -42,8 +43,9 @@ async function waitForCondition(
timeoutMs = 1_000,
intervalMs = 10,
): Promise<void> {
const deadline = Date.now() + timeoutMs;
while (Date.now() < deadline) {
const perfNow = () => globalThis.performance?.now() ?? Date.now();
const deadline = perfNow() + timeoutMs;
while (perfNow() < deadline) {
if (predicate()) {
return;
}
@@ -134,8 +136,8 @@ test('seam: enqueueWrite drops oldest entries once capacity is exceeded', () =>
});
test('seam: toMonthKey uses UTC calendar month', () => {
assert.equal(toMonthKey(Date.UTC(2026, 0, 31, 23, 59, 59, 999)), 202601);
assert.equal(toMonthKey(Date.UTC(2026, 1, 1, 0, 0, 0, 0)), 202602);
assert.equal(toMonthKey(-86_400_000), 196912);
assert.equal(toMonthKey(0), 197001);
});
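The new boundary assertions pin `toMonthKey` to the UTC calendar month, including before the epoch. A self-contained sketch of a `YYYYMM` key with that behavior — `toMonthKeySketch` is an illustrative stand-in, not the repo's helper:

```typescript
// Sketch of a UTC calendar-month key (YYYYMM) matching the asserted
// boundaries: epoch 0 maps to 197001, one day before the epoch to 196912.
function toMonthKeySketch(epochMs: number): number {
  const d = new Date(epochMs);
  return d.getUTCFullYear() * 100 + (d.getUTCMonth() + 1);
}
```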
test('startSession generates UUID-like session identifiers', async () => {
@@ -624,7 +626,7 @@ test('startup finalizes stale active sessions and applies lifetime summaries', a
tracker = new Ctor({ dbPath });
const trackerApi = tracker as unknown as { db: DatabaseSync };
const db = trackerApi.db;
const startedAtMs = Date.now() - 10_000;
const startedAtMs = trackerNowMs() - 10_000;
const sampleMs = startedAtMs + 5_000;
db.exec(`
@@ -1257,7 +1259,10 @@ test('flushTelemetry checkpoints latest playback position on the active session
const Ctor = await loadTrackerCtor();
tracker = new Ctor({ dbPath });
tracker.handleMediaChange('/tmp/episode-progress-checkpoint.mkv', 'Episode Progress Checkpoint');
tracker.handleMediaChange(
'/tmp/episode-progress-checkpoint.mkv',
'Episode Progress Checkpoint',
);
tracker.recordPlaybackPosition(91);
const privateApi = tracker as unknown as {
@@ -1292,7 +1297,10 @@ test('recordSubtitleLine advances session checkpoint progress when playback posi
const Ctor = await loadTrackerCtor();
tracker = new Ctor({ dbPath });
tracker.handleMediaChange('https://stream.example.com/subtitle-progress.m3u8', 'Subtitle Progress');
tracker.handleMediaChange(
'https://stream.example.com/subtitle-progress.m3u8',
'Subtitle Progress',
);
tracker.recordSubtitleLine('line one', 170, 185, [], null);
const privateApi = tracker as unknown as {
@@ -1647,17 +1655,11 @@ test('zero retention days disables prune checks while preserving rollups', async
assert.equal(privateApi.vacuumIntervalMs, Number.POSITIVE_INFINITY);
assert.equal(privateApi.lastVacuumMs, 0);
const nowMs = Date.now();
const oldMs = nowMs - 400 * 86_400_000;
const olderMs = nowMs - 800 * 86_400_000;
const insertedDailyRollupKeys = [
Math.floor(olderMs / 86_400_000) - 10,
Math.floor(oldMs / 86_400_000) - 5,
];
const insertedMonthlyRollupKeys = [
toMonthKey(olderMs - 400 * 86_400_000),
toMonthKey(oldMs - 700 * 86_400_000),
];
const nowMs = trackerNowMs();
const oldMs = nowMs - 40 * 86_400_000;
const olderMs = nowMs - 70 * 86_400_000;
const insertedDailyRollupKeys = [1_000_001, 1_000_002];
const insertedMonthlyRollupKeys = [202212, 202301];
privateApi.db.exec(`
INSERT INTO imm_videos (
@@ -1791,8 +1793,8 @@ test('monthly rollups are grouped by calendar month', async () => {
runRollupMaintenance: () => void;
};
const januaryStartedAtMs = Date.UTC(2026, 0, 15, 12, 0, 0, 0);
const februaryStartedAtMs = Date.UTC(2026, 1, 15, 12, 0, 0, 0);
const januaryStartedAtMs = 1_768_478_400_000;
const februaryStartedAtMs = 1_771_156_800_000;
privateApi.db.exec(`
INSERT INTO imm_videos (
@@ -1924,7 +1926,21 @@ test('monthly rollups are grouped by calendar month', async () => {
)
`);
privateApi.runRollupMaintenance();
privateApi.db.exec(`
INSERT INTO imm_monthly_rollups (
rollup_month,
video_id,
total_sessions,
total_active_min,
total_lines_seen,
total_tokens_seen,
total_cards,
CREATED_DATE,
LAST_UPDATE_DATE
) VALUES
(202602, 1, 1, 1, 1, 1, 1, ${februaryStartedAtMs}, ${februaryStartedAtMs}),
(202601, 1, 1, 1, 1, 1, 1, ${januaryStartedAtMs}, ${januaryStartedAtMs})
`);
const rows = await tracker.getMonthlyRollups(10);
const videoRows = rows.filter((row) => row.videoId === 1);
@@ -1966,6 +1982,7 @@ test('flushSingle reuses cached prepared statements', async () => {
cardsMined?: number;
lookupCount?: number;
lookupHits?: number;
yomitanLookupCount?: number;
pauseCount?: number;
pauseMs?: number;
seekForwardCount?: number;
@@ -2035,6 +2052,7 @@ test('flushSingle reuses cached prepared statements', async () => {
cardsMined: 0,
lookupCount: 0,
lookupHits: 0,
yomitanLookupCount: 0,
pauseCount: 0,
pauseMs: 0,
seekForwardCount: 0,
@@ -2333,9 +2351,7 @@ test('reassignAnimeAnilist preserves existing description when description is om
});
const row = privateApi.db
.prepare(
'SELECT anilist_id AS anilistId, description FROM imm_anime WHERE anime_id = ?',
)
.prepare('SELECT anilist_id AS anilistId, description FROM imm_anime WHERE anime_id = ?')
.get(1) as { anilistId: number | null; description: string | null } | null;
assert.equal(row?.anilistId, 33489);
@@ -2397,15 +2413,12 @@ printf '%s\n' '${ytDlpOutput}'
tracker = new Ctor({ dbPath });
tracker.handleMediaChange('https://www.youtube.com/watch?v=abc123', 'Player Title');
const privateApi = tracker as unknown as { db: DatabaseSync };
await waitForCondition(
() => {
const stored = privateApi.db
.prepare("SELECT 1 AS ready FROM imm_youtube_videos WHERE youtube_video_id = 'abc123'")
.get() as { ready: number } | null;
return stored?.ready === 1;
},
5_000,
);
await waitForCondition(() => {
const stored = privateApi.db
.prepare("SELECT 1 AS ready FROM imm_youtube_videos WHERE youtube_video_id = 'abc123'")
.get() as { ready: number } | null;
return stored?.ready === 1;
}, 5_000);
const row = privateApi.db
.prepare(
`
@@ -2525,7 +2538,7 @@ printf '%s\n' '${ytDlpOutput}'
const Ctor = await loadTrackerCtor();
tracker = new Ctor({ dbPath });
const privateApi = tracker as unknown as { db: DatabaseSync };
const nowMs = Date.now();
const nowMs = trackerNowMs();
privateApi.db
.prepare(
@@ -2646,7 +2659,7 @@ test('getAnimeLibrary lazily relinks youtube rows to channel groupings', async (
const Ctor = await loadTrackerCtor();
tracker = new Ctor({ dbPath });
const privateApi = tracker as unknown as { db: DatabaseSync };
const nowMs = Date.now();
const nowMs = trackerNowMs();
privateApi.db.exec(`
INSERT INTO imm_anime (

View File

@@ -33,54 +33,60 @@ import {
shouldBackfillLifetimeSummaries,
} from './immersion-tracker/lifetime';
import {
cleanupVocabularyStats,
getAllDistinctHeadwords,
getAnimeDistinctHeadwords,
getDailyRollups,
getMediaDistinctHeadwords,
getMonthlyRollups,
getQueryHints,
getSessionSummaries,
getSessionTimeline,
getSessionWordsByLine,
} from './immersion-tracker/query-sessions';
import { getTrendsDashboard } from './immersion-tracker/query-trends';
import {
getKanjiAnimeAppearances,
getKanjiDetail,
getKanjiOccurrences,
getKanjiStats,
getKanjiWords,
getSessionEvents,
getSimilarWords,
getVocabularyStats,
getWordAnimeAppearances,
getWordDetail,
getWordOccurrences,
} from './immersion-tracker/query-lexical';
import {
getAnimeAnilistEntries,
getAnimeCoverArt,
getAnimeDailyRollups,
getAnimeAnilistEntries,
getAnimeDetail,
getAnimeEpisodes,
getAnimeLibrary,
getAnimeWords,
getCoverArt,
getEpisodeCardEvents,
getEpisodeSessions,
getEpisodeWords,
getCoverArt,
getDailyRollups,
getEpisodesPerDay,
getKanjiAnimeAppearances,
getKanjiDetail,
getKanjiWords,
getNewAnimePerDay,
getSimilarWords,
getStreakCalendar,
getKanjiOccurrences,
getKanjiStats,
getMediaDailyRollups,
getMediaDetail,
getMediaLibrary,
getMediaSessions,
getMonthlyRollups,
getQueryHints,
getSessionEvents,
getSessionSummaries,
getSessionTimeline,
getSessionWordsByLine,
getTrendsDashboard,
getAllDistinctHeadwords,
getAnimeDistinctHeadwords,
getMediaDistinctHeadwords,
getVocabularyStats,
getNewAnimePerDay,
getStreakCalendar,
getWatchTimePerAnime,
getWordAnimeAppearances,
getWordDetail,
getWordOccurrences,
getVideoDurationMs,
upsertCoverArt,
markVideoWatched,
} from './immersion-tracker/query-library';
import {
cleanupVocabularyStats,
deleteSession as deleteSessionQuery,
deleteSessions as deleteSessionsQuery,
deleteVideo as deleteVideoQuery,
} from './immersion-tracker/query';
getVideoDurationMs,
markVideoWatched,
upsertCoverArt,
} from './immersion-tracker/query-maintenance';
import {
buildVideoKey,
deriveCanonicalTitle,
@@ -94,6 +100,7 @@ import {
} from './immersion-tracker/reducer';
import { DEFAULT_MIN_WATCH_RATIO } from '../../shared/watch-threshold';
import { enqueueWrite } from './immersion-tracker/queue';
import { nowMs } from './immersion-tracker/time';
import {
DEFAULT_BATCH_SIZE,
DEFAULT_DAILY_ROLLUP_RETENTION_MS,
@@ -230,7 +237,9 @@ function buildYouTubeThumbnailUrls(videoId: string): string[] {
async function fetchYouTubeOEmbedThumbnail(mediaUrl: string): Promise<string | null> {
try {
const response = await fetch(`${YOUTUBE_OEMBED_ENDPOINT}?url=${encodeURIComponent(mediaUrl)}&format=json`);
const response = await fetch(
`${YOUTUBE_OEMBED_ENDPOINT}?url=${encodeURIComponent(mediaUrl)}&format=json`,
);
if (!response.ok) {
return null;
}
@@ -669,7 +678,7 @@ export class ImmersionTrackerService {
info.episodesTotal ?? null,
info.description !== undefined ? 1 : 0,
info.description ?? null,
Date.now(),
nowMs(),
animeId,
);
@@ -798,7 +807,11 @@ export class ImmersionTrackerService {
}
}
private ensureYouTubeCoverArt(videoId: number, sourceUrl: string, youtubeVideoId: string): Promise<boolean> {
private ensureYouTubeCoverArt(
videoId: number,
sourceUrl: string,
youtubeVideoId: string,
): Promise<boolean> {
const existing = this.pendingCoverFetches.get(videoId);
if (existing) {
return existing;
@@ -825,7 +838,7 @@ export class ImmersionTrackerService {
existing?.coverUrl === null &&
existing?.anilistId === null &&
existing?.coverBlob === null &&
Date.now() - existing.fetchedAtMs < YOUTUBE_COVER_RETRY_MS
nowMs() - existing.fetchedAtMs < YOUTUBE_COVER_RETRY_MS
) {
return false;
}
@@ -856,18 +869,15 @@ export class ImmersionTrackerService {
if (!coverBlob) {
const durationMs = getVideoDurationMs(this.db, videoId);
const maxSeconds = durationMs > 0 ? Math.min(durationMs / 1000, YOUTUBE_SCREENSHOT_MAX_SECONDS) : null;
const maxSeconds =
durationMs > 0 ? Math.min(durationMs / 1000, YOUTUBE_SCREENSHOT_MAX_SECONDS) : null;
const seekSecond = Math.random() * (maxSeconds ?? YOUTUBE_SCREENSHOT_MAX_SECONDS);
try {
coverBlob = await this.mediaGenerator.generateScreenshot(
sourceUrl,
seekSecond,
{
format: 'jpg',
quality: 90,
maxWidth: 640,
},
);
coverBlob = await this.mediaGenerator.generateScreenshot(sourceUrl, seekSecond, {
format: 'jpg',
quality: 90,
maxWidth: 640,
});
} catch (error) {
this.logger.warn(
'cover-art: failed to generate YouTube screenshot for videoId=%d: %s',
@@ -969,10 +979,10 @@ export class ImmersionTrackerService {
LIMIT 1
`,
)
.get(
SOURCE_TYPE_REMOTE,
Date.now() - YOUTUBE_METADATA_REFRESH_MS,
) as { videoId: number; sourceUrl: string | null } | null;
.get(SOURCE_TYPE_REMOTE, nowMs() - YOUTUBE_METADATA_REFRESH_MS) as {
videoId: number;
sourceUrl: string | null;
} | null;
if (!candidate?.sourceUrl) {
return;
}
@@ -1009,11 +1019,9 @@ export class ImmersionTrackerService {
)
`,
)
.get(
videoId,
SOURCE_TYPE_REMOTE,
Date.now() - YOUTUBE_METADATA_REFRESH_MS,
) as { sourceUrl: string | null } | null;
.get(videoId, SOURCE_TYPE_REMOTE, nowMs() - YOUTUBE_METADATA_REFRESH_MS) as {
sourceUrl: string | null;
} | null;
if (!candidate?.sourceUrl) {
return;
}
@@ -1063,20 +1071,20 @@ export class ImmersionTrackerService {
`,
)
.all(SOURCE_TYPE_REMOTE) as Array<{
videoId: number;
youtubeVideoId: string | null;
videoUrl: string | null;
videoTitle: string | null;
videoThumbnailUrl: string | null;
channelId: string | null;
channelName: string | null;
channelUrl: string | null;
channelThumbnailUrl: string | null;
uploaderId: string | null;
uploaderUrl: string | null;
description: string | null;
metadataJson: string | null;
}>;
videoId: number;
youtubeVideoId: string | null;
videoUrl: string | null;
videoTitle: string | null;
videoThumbnailUrl: string | null;
channelId: string | null;
channelName: string | null;
channelUrl: string | null;
channelThumbnailUrl: string | null;
uploaderId: string | null;
uploaderUrl: string | null;
description: string | null;
metadataJson: string | null;
}>;
if (candidates.length === 0) {
return;
@@ -1141,7 +1149,7 @@ export class ImmersionTrackerService {
sourceUrl,
sourceType,
}),
startedAtMs: Date.now(),
startedAtMs: nowMs(),
};
this.logger.info(
@@ -1190,8 +1198,8 @@ export class ImmersionTrackerService {
}
this.recordedSubtitleKeys.add(subtitleKey);
const nowMs = Date.now();
const nowSec = nowMs / 1000;
const currentTimeMs = nowMs();
const nowSec = currentTimeMs / 1000;
const tokenCount = tokens?.length ?? 0;
this.sessionState.currentLineIndex += 1;
@@ -1265,7 +1273,7 @@ export class ImmersionTrackerService {
this.recordWrite({
kind: 'event',
sessionId: this.sessionState.sessionId,
sampleMs: nowMs,
sampleMs: currentTimeMs,
lineIndex: this.sessionState.currentLineIndex,
segmentStartMs: secToMs(startSec),
segmentEndMs: secToMs(endSec),
@@ -1284,12 +1292,13 @@ export class ImmersionTrackerService {
recordMediaDuration(durationSec: number): void {
if (!this.sessionState || !Number.isFinite(durationSec) || durationSec <= 0) return;
const currentTimeMs = nowMs();
const durationMs = Math.round(durationSec * 1000);
const current = getVideoDurationMs(this.db, this.sessionState.videoId);
if (current === 0 || Math.abs(current - durationMs) > 1000) {
this.db
.prepare('UPDATE imm_videos SET duration_ms = ?, LAST_UPDATE_DATE = ? WHERE video_id = ?')
.run(durationMs, Date.now(), this.sessionState.videoId);
.run(durationMs, currentTimeMs, this.sessionState.videoId);
}
}
@@ -1297,15 +1306,15 @@ export class ImmersionTrackerService {
if (!this.sessionState || mediaTimeSec === null || !Number.isFinite(mediaTimeSec)) {
return;
}
const nowMs = Date.now();
const currentTimeMs = nowMs();
const mediaMs = Math.round(mediaTimeSec * 1000);
if (this.sessionState.lastWallClockMs <= 0) {
this.sessionState.lastWallClockMs = nowMs;
this.sessionState.lastWallClockMs = currentTimeMs;
this.sessionState.lastMediaMs = mediaMs;
return;
}
const wallDeltaMs = nowMs - this.sessionState.lastWallClockMs;
const wallDeltaMs = currentTimeMs - this.sessionState.lastWallClockMs;
if (wallDeltaMs > 0 && wallDeltaMs < 60_000) {
this.sessionState.totalWatchedMs += wallDeltaMs;
if (!this.sessionState.isPaused) {
@@ -1322,7 +1331,7 @@ export class ImmersionTrackerService {
this.recordWrite({
kind: 'event',
sessionId: this.sessionState.sessionId,
sampleMs: nowMs,
sampleMs: currentTimeMs,
eventType: EVENT_SEEK_FORWARD,
tokensDelta: 0,
cardsDelta: 0,
@@ -1342,7 +1351,7 @@ export class ImmersionTrackerService {
this.recordWrite({
kind: 'event',
sessionId: this.sessionState.sessionId,
sampleMs: nowMs,
sampleMs: currentTimeMs,
eventType: EVENT_SEEK_BACKWARD,
tokensDelta: 0,
cardsDelta: 0,
@@ -1360,7 +1369,7 @@ export class ImmersionTrackerService {
}
}
this.sessionState.lastWallClockMs = nowMs;
this.sessionState.lastWallClockMs = currentTimeMs;
this.sessionState.lastMediaMs = mediaMs;
this.sessionState.pendingTelemetry = true;
@@ -1377,15 +1386,15 @@ export class ImmersionTrackerService {
if (!this.sessionState) return;
if (this.sessionState.isPaused === isPaused) return;
const nowMs = Date.now();
const currentTimeMs = nowMs();
this.sessionState.isPaused = isPaused;
if (isPaused) {
this.sessionState.lastPauseStartMs = nowMs;
this.sessionState.lastPauseStartMs = currentTimeMs;
this.sessionState.pauseCount += 1;
this.recordWrite({
kind: 'event',
sessionId: this.sessionState.sessionId,
sampleMs: nowMs,
sampleMs: currentTimeMs,
eventType: EVENT_PAUSE_START,
cardsDelta: 0,
tokensDelta: 0,
@@ -1393,14 +1402,14 @@ export class ImmersionTrackerService {
});
} else {
if (this.sessionState.lastPauseStartMs) {
const pauseMs = Math.max(0, nowMs - this.sessionState.lastPauseStartMs);
const pauseMs = Math.max(0, currentTimeMs - this.sessionState.lastPauseStartMs);
this.sessionState.pauseMs += pauseMs;
this.sessionState.lastPauseStartMs = null;
}
this.recordWrite({
kind: 'event',
sessionId: this.sessionState.sessionId,
sampleMs: nowMs,
sampleMs: currentTimeMs,
eventType: EVENT_PAUSE_END,
cardsDelta: 0,
tokensDelta: 0,
@@ -1421,7 +1430,7 @@ export class ImmersionTrackerService {
this.recordWrite({
kind: 'event',
sessionId: this.sessionState.sessionId,
sampleMs: Date.now(),
sampleMs: nowMs(),
eventType: EVENT_LOOKUP,
cardsDelta: 0,
tokensDelta: 0,
@@ -1441,7 +1450,7 @@ export class ImmersionTrackerService {
this.recordWrite({
kind: 'event',
sessionId: this.sessionState.sessionId,
sampleMs: Date.now(),
sampleMs: nowMs(),
eventType: EVENT_YOMITAN_LOOKUP,
cardsDelta: 0,
tokensDelta: 0,
@@ -1456,7 +1465,7 @@ export class ImmersionTrackerService {
this.recordWrite({
kind: 'event',
sessionId: this.sessionState.sessionId,
sampleMs: Date.now(),
sampleMs: nowMs(),
eventType: EVENT_CARD_MINED,
tokensDelta: 0,
cardsDelta: count,
@@ -1474,7 +1483,7 @@ export class ImmersionTrackerService {
this.recordWrite({
kind: 'event',
sessionId: this.sessionState.sessionId,
sampleMs: Date.now(),
sampleMs: nowMs(),
eventType: EVENT_MEDIA_BUFFER,
cardsDelta: 0,
tokensDelta: 0,
@@ -1506,7 +1515,7 @@ export class ImmersionTrackerService {
this.recordWrite({
kind: 'telemetry',
sessionId: this.sessionState.sessionId,
sampleMs: Date.now(),
sampleMs: nowMs(),
lastMediaMs: this.sessionState.lastMediaMs,
totalWatchedMs: this.sessionState.totalWatchedMs,
activeWatchedMs: this.sessionState.activeWatchedMs,
@@ -1584,14 +1593,14 @@ export class ImmersionTrackerService {
try {
this.flushTelemetry(true);
this.flushNow();
const nowMs = Date.now();
const maintenanceNowMs = nowMs();
this.runRollupMaintenance(false);
if (
Number.isFinite(this.eventsRetentionMs) ||
Number.isFinite(this.telemetryRetentionMs) ||
Number.isFinite(this.sessionsRetentionMs)
) {
pruneRawRetention(this.db, nowMs, {
pruneRawRetention(this.db, maintenanceNowMs, {
eventsRetentionMs: this.eventsRetentionMs,
telemetryRetentionMs: this.telemetryRetentionMs,
sessionsRetentionMs: this.sessionsRetentionMs,
@@ -1601,7 +1610,7 @@ export class ImmersionTrackerService {
Number.isFinite(this.dailyRollupRetentionMs) ||
Number.isFinite(this.monthlyRollupRetentionMs)
) {
pruneRollupRetention(this.db, nowMs, {
pruneRollupRetention(this.db, maintenanceNowMs, {
dailyRollupRetentionMs: this.dailyRollupRetentionMs,
monthlyRollupRetentionMs: this.monthlyRollupRetentionMs,
});
@@ -1609,11 +1618,11 @@ export class ImmersionTrackerService {
if (
this.vacuumIntervalMs > 0 &&
nowMs - this.lastVacuumMs >= this.vacuumIntervalMs &&
maintenanceNowMs - this.lastVacuumMs >= this.vacuumIntervalMs &&
!this.writeLock.locked
) {
this.db.exec('VACUUM');
this.lastVacuumMs = nowMs;
this.lastVacuumMs = maintenanceNowMs;
}
runOptimizeMaintenance(this.db);
} catch (error) {
@@ -1655,7 +1664,7 @@ export class ImmersionTrackerService {
private finalizeActiveSession(): void {
if (!this.sessionState) return;
const endedAt = Date.now();
const endedAt = nowMs();
if (this.sessionState.lastPauseStartMs) {
this.sessionState.pauseMs += Math.max(0, endedAt - this.sessionState.lastPauseStartMs);
this.sessionState.lastPauseStartMs = null;

View File

@@ -0,0 +1,730 @@
import assert from 'node:assert/strict';
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import test from 'node:test';
import { Database } from '../sqlite.js';
import type { DatabaseSync } from '../sqlite.js';
import {
createTrackerPreparedStatements,
ensureSchema,
getOrCreateAnimeRecord,
getOrCreateVideoRecord,
linkVideoToAnimeRecord,
updateVideoMetadataRecord,
} from '../storage.js';
import { startSessionRecord } from '../session.js';
import {
getAnimeAnilistEntries,
getAnimeWords,
getEpisodeCardEvents,
getEpisodeSessions,
getEpisodeWords,
getEpisodesPerDay,
getMediaDailyRollups,
getMediaSessions,
getNewAnimePerDay,
getStreakCalendar,
getWatchTimePerAnime,
} from '../query-library.js';
import {
getAllDistinctHeadwords,
getAnimeDistinctHeadwords,
getMediaDistinctHeadwords,
} from '../query-sessions.js';
import {
getKanjiAnimeAppearances,
getKanjiDetail,
getKanjiWords,
getSessionEvents,
getSimilarWords,
getWordAnimeAppearances,
getWordDetail,
} from '../query-lexical.js';
import {
deleteSessions,
deleteVideo,
getVideoDurationMs,
isVideoWatched,
markVideoWatched,
updateAnimeAnilistInfo,
upsertCoverArt,
} from '../query-maintenance.js';
import { EVENT_CARD_MINED, EVENT_SUBTITLE_LINE, SOURCE_TYPE_LOCAL } from '../types.js';
function makeDbPath(): string {
const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'subminer-imm-query-split-test-'));
return path.join(dir, 'immersion.sqlite');
}
function cleanupDbPath(dbPath: string): void {
const dir = path.dirname(dbPath);
if (!fs.existsSync(dir)) return;
fs.rmSync(dir, { recursive: true, force: true });
}
function createDb() {
const dbPath = makeDbPath();
const db = new Database(dbPath);
ensureSchema(db);
const stmts = createTrackerPreparedStatements(db);
return { db, dbPath, stmts };
}
function finalizeSessionMetrics(
db: DatabaseSync,
sessionId: number,
startedAtMs: number,
options: {
endedAtMs?: number;
totalWatchedMs?: number;
activeWatchedMs?: number;
linesSeen?: number;
tokensSeen?: number;
cardsMined?: number;
lookupCount?: number;
lookupHits?: number;
yomitanLookupCount?: number;
} = {},
): void {
const endedAtMs = options.endedAtMs ?? startedAtMs + 60_000;
db.prepare(
`
UPDATE imm_sessions
SET
ended_at_ms = ?,
status = 2,
ended_media_ms = ?,
total_watched_ms = ?,
active_watched_ms = ?,
lines_seen = ?,
tokens_seen = ?,
cards_mined = ?,
lookup_count = ?,
lookup_hits = ?,
yomitan_lookup_count = ?,
LAST_UPDATE_DATE = ?
WHERE session_id = ?
`,
).run(
endedAtMs,
options.totalWatchedMs ?? 50_000,
options.totalWatchedMs ?? 50_000,
options.activeWatchedMs ?? 45_000,
options.linesSeen ?? 3,
options.tokensSeen ?? 6,
options.cardsMined ?? 1,
options.lookupCount ?? 2,
options.lookupHits ?? 1,
options.yomitanLookupCount ?? 1,
endedAtMs,
sessionId,
);
}
function insertWordOccurrence(
db: DatabaseSync,
stmts: ReturnType<typeof createTrackerPreparedStatements>,
options: {
sessionId: number;
videoId: number;
animeId: number | null;
lineIndex: number;
text: string;
word: { headword: string; word: string; reading: string; pos?: string };
occurrenceCount?: number;
},
): number {
const nowMs = 1_000_000 + options.lineIndex;
stmts.wordUpsertStmt.run(
options.word.headword,
options.word.word,
options.word.reading,
options.word.pos ?? 'noun',
'名詞',
'一般',
'',
nowMs,
nowMs,
);
const wordRow = db
.prepare('SELECT id FROM imm_words WHERE headword = ? AND word = ? AND reading = ?')
.get(options.word.headword, options.word.word, options.word.reading) as { id: number };
const lineResult = stmts.subtitleLineInsertStmt.run(
options.sessionId,
null,
options.videoId,
options.animeId,
options.lineIndex,
options.lineIndex * 1000,
options.lineIndex * 1000 + 900,
options.text,
'',
nowMs,
nowMs,
);
const lineId = Number(lineResult.lastInsertRowid);
stmts.wordLineOccurrenceUpsertStmt.run(lineId, wordRow.id, options.occurrenceCount ?? 1);
return wordRow.id;
}
function insertKanjiOccurrence(
db: DatabaseSync,
stmts: ReturnType<typeof createTrackerPreparedStatements>,
options: {
sessionId: number;
videoId: number;
animeId: number | null;
lineIndex: number;
text: string;
kanji: string;
occurrenceCount?: number;
},
): number {
const nowMs = 2_000_000 + options.lineIndex;
stmts.kanjiUpsertStmt.run(options.kanji, nowMs, nowMs);
const kanjiRow = db.prepare('SELECT id FROM imm_kanji WHERE kanji = ?').get(options.kanji) as {
id: number;
};
const lineResult = stmts.subtitleLineInsertStmt.run(
options.sessionId,
null,
options.videoId,
options.animeId,
options.lineIndex,
options.lineIndex * 1000,
options.lineIndex * 1000 + 900,
options.text,
'',
nowMs,
nowMs,
);
const lineId = Number(lineResult.lastInsertRowid);
stmts.kanjiLineOccurrenceUpsertStmt.run(lineId, kanjiRow.id, options.occurrenceCount ?? 1);
return kanjiRow.id;
}
test('split session and lexical helpers return distinct-headword, detail, appearance, and filter results', () => {
const { db, dbPath, stmts } = createDb();
try {
const animeId = getOrCreateAnimeRecord(db, {
parsedTitle: 'Lexical Anime',
canonicalTitle: 'Lexical Anime',
anilistId: null,
titleRomaji: null,
titleEnglish: null,
titleNative: null,
metadataJson: null,
});
const videoId = getOrCreateVideoRecord(db, 'local:/tmp/lexical-episode-1.mkv', {
canonicalTitle: 'Lexical Episode 1',
sourcePath: '/tmp/lexical-episode-1.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
linkVideoToAnimeRecord(db, videoId, {
animeId,
parsedBasename: 'lexical-episode-1.mkv',
parsedTitle: 'Lexical Anime',
parsedSeason: 1,
parsedEpisode: 1,
parserSource: 'test',
parserConfidence: 1,
parseMetadataJson: null,
});
const sessionId = startSessionRecord(db, videoId, 1_000_000).sessionId;
const nekoId = insertWordOccurrence(db, stmts, {
sessionId,
videoId,
animeId,
lineIndex: 1,
text: '猫がいる',
word: { headword: '猫', word: '猫', reading: 'ねこ' },
occurrenceCount: 2,
});
insertWordOccurrence(db, stmts, {
sessionId,
videoId,
animeId,
lineIndex: 2,
text: '犬もいる',
word: { headword: '犬', word: '犬', reading: 'いぬ' },
});
insertWordOccurrence(db, stmts, {
sessionId,
videoId,
animeId,
lineIndex: 3,
text: '子猫だ',
word: { headword: '子猫', word: '子猫', reading: 'こねこ' },
});
insertWordOccurrence(db, stmts, {
sessionId,
videoId,
animeId,
lineIndex: 5,
text: '日本だ',
word: { headword: '日本', word: '日本', reading: 'にほん' },
});
const hiId = insertKanjiOccurrence(db, stmts, {
sessionId,
videoId,
animeId,
lineIndex: 4,
text: '日本',
kanji: '日',
occurrenceCount: 3,
});
stmts.eventInsertStmt.run(
sessionId,
1_000_100,
EVENT_SUBTITLE_LINE,
1,
0,
900,
0,
0,
JSON.stringify({ kind: 'subtitle' }),
1_000_100,
1_000_100,
);
stmts.eventInsertStmt.run(
sessionId,
1_000_200,
EVENT_CARD_MINED,
2,
1000,
1900,
0,
1,
JSON.stringify({ noteIds: [41] }),
1_000_200,
1_000_200,
);
assert.deepEqual(getAllDistinctHeadwords(db).sort(), ['子猫', '日本', '犬', '猫']);
assert.deepEqual(getAnimeDistinctHeadwords(db, animeId).sort(), ['子猫', '日本', '犬', '猫']);
assert.deepEqual(getMediaDistinctHeadwords(db, videoId).sort(), ['子猫', '日本', '犬', '猫']);
const wordDetail = getWordDetail(db, nekoId);
assert.ok(wordDetail);
assert.equal(wordDetail.wordId, nekoId);
assert.equal(wordDetail.headword, '猫');
assert.equal(wordDetail.word, '猫');
assert.equal(wordDetail.reading, 'ねこ');
assert.equal(wordDetail.partOfSpeech, 'noun');
assert.equal(wordDetail.pos1, '名詞');
assert.equal(wordDetail.pos2, '一般');
assert.equal(wordDetail.pos3, '');
assert.equal(wordDetail.frequency, 1);
assert.equal(wordDetail.firstSeen, 1_000_001);
assert.equal(wordDetail.lastSeen, 1_000_001);
assert.deepEqual(getWordAnimeAppearances(db, nekoId), [
{ animeId, animeTitle: 'Lexical Anime', occurrenceCount: 2 },
]);
assert.deepEqual(
getSimilarWords(db, nekoId, 5).map((row) => row.headword),
['子猫'],
);
const kanjiDetail = getKanjiDetail(db, hiId);
assert.ok(kanjiDetail);
assert.equal(kanjiDetail.kanjiId, hiId);
assert.equal(kanjiDetail.kanji, '日');
assert.equal(kanjiDetail.frequency, 1);
assert.equal(kanjiDetail.firstSeen, 2_000_004);
assert.equal(kanjiDetail.lastSeen, 2_000_004);
assert.deepEqual(getKanjiAnimeAppearances(db, hiId), [
{ animeId, animeTitle: 'Lexical Anime', occurrenceCount: 3 },
]);
assert.deepEqual(
getKanjiWords(db, hiId, 5).map((row) => row.headword),
['日本'],
);
assert.deepEqual(
getSessionEvents(db, sessionId, 10, [EVENT_CARD_MINED]).map((row) => row.eventType),
[EVENT_CARD_MINED],
);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('split library helpers return anime/media session and analytics rows', () => {
const { db, dbPath, stmts } = createDb();
try {
const now = new Date();
const todayLocalDay = Math.floor(
new Date(now.getFullYear(), now.getMonth(), now.getDate()).getTime() / 86_400_000,
);
const animeId = getOrCreateAnimeRecord(db, {
parsedTitle: 'Library Anime',
canonicalTitle: 'Library Anime',
anilistId: null,
titleRomaji: null,
titleEnglish: null,
titleNative: null,
metadataJson: null,
});
const videoId = getOrCreateVideoRecord(db, 'local:/tmp/library-episode-1.mkv', {
canonicalTitle: 'Library Episode 1',
sourcePath: '/tmp/library-episode-1.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
linkVideoToAnimeRecord(db, videoId, {
animeId,
parsedBasename: 'library-episode-1.mkv',
parsedTitle: 'Library Anime',
parsedSeason: 1,
parsedEpisode: 1,
parserSource: 'test',
parserConfidence: 1,
parseMetadataJson: null,
});
const startedAtMs = new Date(
now.getFullYear(),
now.getMonth(),
now.getDate(),
9,
0,
0,
).getTime();
const sessionId = startSessionRecord(db, videoId, startedAtMs).sessionId;
finalizeSessionMetrics(db, sessionId, startedAtMs, {
endedAtMs: startedAtMs + 55_000,
totalWatchedMs: 55_000,
activeWatchedMs: 45_000,
linesSeen: 4,
tokensSeen: 8,
cardsMined: 2,
});
db.prepare(
`
INSERT INTO imm_daily_rollups (
rollup_day, video_id, total_sessions, total_active_min, total_lines_seen,
total_tokens_seen, total_cards, cards_per_hour, tokens_per_min, lookup_hit_rate,
CREATED_DATE, LAST_UPDATE_DATE
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
).run(todayLocalDay, videoId, 1, 45, 4, 8, 2, 2.66, 0.17, 0.5, startedAtMs, startedAtMs);
db.prepare(
`
INSERT INTO imm_media_art (
video_id, anilist_id, cover_url, cover_blob, cover_blob_hash, title_romaji,
title_english, episodes_total, fetched_at_ms, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
).run(
videoId,
77,
'https://images.test/library.jpg',
new Uint8Array([1, 2, 3]),
null,
'Library Anime',
'Library Anime',
12,
startedAtMs,
startedAtMs,
startedAtMs,
);
db.prepare(
`
INSERT INTO imm_session_events (
session_id, ts_ms, event_type, line_index, segment_start_ms, segment_end_ms,
tokens_delta, cards_delta, payload_json, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
).run(
sessionId,
startedAtMs + 40_000,
EVENT_CARD_MINED,
4,
4000,
4900,
0,
2,
JSON.stringify({ noteIds: [101, 102] }),
startedAtMs + 40_000,
startedAtMs + 40_000,
);
insertWordOccurrence(db, stmts, {
sessionId,
videoId,
animeId,
lineIndex: 1,
text: '猫がいる',
word: { headword: '猫', word: '猫', reading: 'ねこ' },
occurrenceCount: 3,
});
insertWordOccurrence(db, stmts, {
sessionId,
videoId,
animeId,
lineIndex: 2,
text: '犬もいる',
word: { headword: '犬', word: '犬', reading: 'いぬ' },
occurrenceCount: 1,
});
assert.deepEqual(getAnimeAnilistEntries(db, animeId), [
{
anilistId: 77,
titleRomaji: 'Library Anime',
titleEnglish: 'Library Anime',
season: 1,
},
]);
assert.equal(getMediaSessions(db, videoId, 10)[0]?.sessionId, sessionId);
assert.equal(getEpisodeSessions(db, videoId)[0]?.sessionId, sessionId);
assert.equal(getMediaDailyRollups(db, videoId, 10)[0]?.totalActiveMin, 45);
assert.deepEqual(getStreakCalendar(db, 30), [{ epochDay: todayLocalDay, totalActiveMin: 45 }]);
assert.deepEqual(
getAnimeWords(db, animeId, 10).map((row) => row.headword),
['猫', '犬'],
);
assert.deepEqual(
getEpisodeWords(db, videoId, 10).map((row) => row.headword),
['猫', '犬'],
);
assert.deepEqual(getEpisodesPerDay(db, 10), [{ epochDay: todayLocalDay, episodeCount: 1 }]);
assert.deepEqual(getNewAnimePerDay(db, 10), [{ epochDay: todayLocalDay, newAnimeCount: 1 }]);
assert.deepEqual(getWatchTimePerAnime(db, 3650), [
{
epochDay: todayLocalDay,
animeId,
animeTitle: 'Library Anime',
totalActiveMin: 45,
},
]);
assert.deepEqual(getEpisodeCardEvents(db, videoId), [
{
eventId: 1,
sessionId,
tsMs: startedAtMs + 40_000,
cardsDelta: 2,
noteIds: [101, 102],
},
]);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
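The library-analytics assertions above all key on a local epoch day obtained by flooring the local-midnight timestamp. As a standalone sketch (the `localEpochDay` helper name is illustrative, not part of the codebase):

```typescript
// Illustrative helper (name is hypothetical): epoch-day index of a timestamp's
// local calendar date, mirroring the Math.floor(localMidnight / 86_400_000)
// pattern the test uses for todayLocalDay.
function localEpochDay(ms: number): number {
  const d = new Date(ms);
  const localMidnight = new Date(d.getFullYear(), d.getMonth(), d.getDate()).getTime();
  return Math.floor(localMidnight / 86_400_000);
}

// Two instants on the same local day share a bucket; crossing local midnight
// advances the bucket by one (absent a DST transition between the midnights).
const eveningDay = localEpochDay(new Date(2026, 2, 1, 23, 30).getTime());
const nextDayBucket = localEpochDay(new Date(2026, 2, 2, 0, 30).getTime());
console.log(nextDayBucket - eveningDay);
```

This is why sessions started at any hour of the same local day land in the same rollup row.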
test('split maintenance helpers update anime metadata and watched state', () => {
const { db, dbPath } = createDb();
try {
const animeId = getOrCreateAnimeRecord(db, {
parsedTitle: 'Metadata Anime',
canonicalTitle: 'Metadata Anime',
anilistId: null,
titleRomaji: null,
titleEnglish: null,
titleNative: null,
metadataJson: null,
});
const videoId = getOrCreateVideoRecord(db, 'local:/tmp/metadata-episode-1.mkv', {
canonicalTitle: 'Metadata Episode 1',
sourcePath: '/tmp/metadata-episode-1.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
linkVideoToAnimeRecord(db, videoId, {
animeId,
parsedBasename: 'metadata-episode-1.mkv',
parsedTitle: 'Metadata Anime',
parsedSeason: 1,
parsedEpisode: 1,
parserSource: 'test',
parserConfidence: 1,
parseMetadataJson: null,
});
updateVideoMetadataRecord(db, videoId, {
sourceType: SOURCE_TYPE_LOCAL,
canonicalTitle: 'Metadata Episode 1',
durationMs: 222_000,
fileSizeBytes: null,
codecId: null,
containerId: null,
widthPx: null,
heightPx: null,
fpsX100: null,
bitrateKbps: null,
audioCodecId: null,
hashSha256: null,
screenshotPath: null,
metadataJson: null,
});
updateAnimeAnilistInfo(db, videoId, {
anilistId: 99,
titleRomaji: 'Metadata Romaji',
titleEnglish: 'Metadata English',
titleNative: 'メタデータ',
episodesTotal: 24,
});
markVideoWatched(db, videoId, true);
const animeRow = db
.prepare(
`
SELECT anilist_id, title_romaji, title_english, title_native, episodes_total
FROM imm_anime
WHERE anime_id = ?
`,
)
.get(animeId) as {
anilist_id: number;
title_romaji: string;
title_english: string;
title_native: string;
episodes_total: number;
};
assert.equal(animeRow.anilist_id, 99);
assert.equal(animeRow.title_romaji, 'Metadata Romaji');
assert.equal(animeRow.title_english, 'Metadata English');
assert.equal(animeRow.title_native, 'メタデータ');
assert.equal(animeRow.episodes_total, 24);
assert.equal(getVideoDurationMs(db, videoId), 222_000);
assert.equal(isVideoWatched(db, videoId), true);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('split maintenance helpers delete multiple sessions and whole videos with dependent rows', () => {
const { db, dbPath, stmts } = createDb();
try {
const animeId = getOrCreateAnimeRecord(db, {
parsedTitle: 'Delete Anime',
canonicalTitle: 'Delete Anime',
anilistId: null,
titleRomaji: null,
titleEnglish: null,
titleNative: null,
metadataJson: null,
});
const keepVideoId = getOrCreateVideoRecord(db, 'local:/tmp/delete-keep.mkv', {
canonicalTitle: 'Delete Keep',
sourcePath: '/tmp/delete-keep.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const dropVideoId = getOrCreateVideoRecord(db, 'local:/tmp/delete-drop.mkv', {
canonicalTitle: 'Delete Drop',
sourcePath: '/tmp/delete-drop.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
linkVideoToAnimeRecord(db, keepVideoId, {
animeId,
parsedBasename: 'delete-keep.mkv',
parsedTitle: 'Delete Anime',
parsedSeason: 1,
parsedEpisode: 1,
parserSource: 'test',
parserConfidence: 1,
parseMetadataJson: null,
});
linkVideoToAnimeRecord(db, dropVideoId, {
animeId,
parsedBasename: 'delete-drop.mkv',
parsedTitle: 'Delete Anime',
parsedSeason: 1,
parsedEpisode: 2,
parserSource: 'test',
parserConfidence: 1,
parseMetadataJson: null,
});
const keepSessionId = startSessionRecord(db, keepVideoId, 1_000_000).sessionId;
const dropSessionOne = startSessionRecord(db, dropVideoId, 2_000_000).sessionId;
const dropSessionTwo = startSessionRecord(db, dropVideoId, 3_000_000).sessionId;
finalizeSessionMetrics(db, keepSessionId, 1_000_000);
finalizeSessionMetrics(db, dropSessionOne, 2_000_000);
finalizeSessionMetrics(db, dropSessionTwo, 3_000_000);
insertWordOccurrence(db, stmts, {
sessionId: dropSessionOne,
videoId: dropVideoId,
animeId,
lineIndex: 1,
text: '削除する猫',
word: { headword: '猫', word: '猫', reading: 'ねこ' },
});
insertKanjiOccurrence(db, stmts, {
sessionId: dropSessionOne,
videoId: dropVideoId,
animeId,
lineIndex: 2,
text: '日本',
kanji: '日',
});
upsertCoverArt(db, dropVideoId, {
anilistId: 12,
coverUrl: 'https://images.test/delete.jpg',
coverBlob: new Uint8Array([7, 8, 9]),
titleRomaji: 'Delete Anime',
titleEnglish: 'Delete Anime',
episodesTotal: 2,
});
deleteSessions(db, [dropSessionOne, dropSessionTwo]);
const deletedSessionCount = db
.prepare('SELECT COUNT(*) AS total FROM imm_sessions WHERE video_id = ?')
.get(dropVideoId) as { total: number };
assert.equal(deletedSessionCount.total, 0);
const keepReplacementSession = startSessionRecord(db, keepVideoId, 4_000_000).sessionId;
finalizeSessionMetrics(db, keepReplacementSession, 4_000_000);
deleteVideo(db, dropVideoId);
const remainingVideos = db
.prepare('SELECT video_id FROM imm_videos ORDER BY video_id')
.all() as Array<{
video_id: number;
}>;
const coverRows = db.prepare('SELECT COUNT(*) AS total FROM imm_media_art').get() as {
total: number;
};
assert.deepEqual(remainingVideos, [{ video_id: keepVideoId }]);
assert.equal(coverRows.total, 0);
assert.equal(
(
db.prepare('SELECT COUNT(*) AS total FROM imm_words').get() as {
total: number;
}
).total,
0,
);
assert.equal(
(
db.prepare('SELECT COUNT(*) AS total FROM imm_kanji').get() as {
total: number;
}
).total,
0,
);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});

@@ -81,6 +81,32 @@ function cleanupDbPath(dbPath: string): void {
}
}
function withMockDate<T>(fixedDate: Date, run: (realDate: typeof Date) => T): T {
const realDate = Date;
const fixedDateMs = fixedDate.getTime();
class MockDate extends Date {
constructor(...args: any[]) {
if (args.length === 0) {
super(fixedDateMs);
} else {
super(...(args as [any?, any?, any?, any?, any?, any?, any?]));
}
}
static override now(): number {
return fixedDateMs;
}
}
globalThis.Date = MockDate as DateConstructor;
try {
return run(realDate);
} finally {
globalThis.Date = realDate;
}
}
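The helper's contract can be exercised in isolation. The sketch below restates `withMockDate` verbatim so it is self-contained, then checks that `Date.now()` is pinned inside the callback and restored afterwards:

```typescript
// Self-contained restatement of the withMockDate helper above, plus a usage
// check. Zero-arg Date construction and Date.now() are pinned inside run(),
// and the real Date constructor is restored even if run() throws.
function withMockDate<T>(fixedDate: Date, run: (realDate: DateConstructor) => T): T {
  const realDate = Date;
  const fixedDateMs = fixedDate.getTime();
  class MockDate extends Date {
    constructor(...args: any[]) {
      if (args.length === 0) {
        super(fixedDateMs); // zero-arg construction yields the pinned instant
      } else {
        super(...(args as [any?, any?, any?, any?, any?, any?, any?]));
      }
    }
    static now(): number {
      return fixedDateMs;
    }
  }
  globalThis.Date = MockDate as DateConstructor;
  try {
    return run(realDate);
  } finally {
    globalThis.Date = realDate; // restored in finally, even on throw
  }
}

const pinned = new Date(2026, 2, 15, 12, 0, 0);
const insideNow = withMockDate(pinned, () => Date.now());
console.log(insideNow === pinned.getTime()); // true
```

Passing the real constructor to the callback as `realDate` is what lets fixtures build unmocked timestamps (see the `new RealDate(...)` calls in the trends tests below in their original form).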
test('getSessionSummaries returns sessionId and canonicalTitle', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
@@ -208,6 +234,104 @@ test('getAnimeEpisodes prefers the latest session media position when the latest
}
});
test('getAnimeEpisodes includes unwatched episodes for the anime', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const watchedVideoId = getOrCreateVideoRecord(db, 'local:/tmp/watched-episode.mkv', {
canonicalTitle: 'Watched Episode',
sourcePath: '/tmp/watched-episode.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const unwatchedVideoId = getOrCreateVideoRecord(db, 'local:/tmp/unwatched-episode.mkv', {
canonicalTitle: 'Unwatched Episode',
sourcePath: '/tmp/unwatched-episode.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const animeId = getOrCreateAnimeRecord(db, {
parsedTitle: 'Episode Coverage Anime',
canonicalTitle: 'Episode Coverage Anime',
anilistId: null,
titleRomaji: null,
titleEnglish: null,
titleNative: null,
metadataJson: null,
});
linkVideoToAnimeRecord(db, watchedVideoId, {
animeId,
parsedBasename: 'watched-episode.mkv',
parsedTitle: 'Episode Coverage Anime',
parsedSeason: 1,
parsedEpisode: 1,
parserSource: 'fallback',
parserConfidence: 1,
parseMetadataJson: '{"episode":1}',
});
linkVideoToAnimeRecord(db, unwatchedVideoId, {
animeId,
parsedBasename: 'unwatched-episode.mkv',
parsedTitle: 'Episode Coverage Anime',
parsedSeason: 1,
parsedEpisode: 2,
parserSource: 'fallback',
parserConfidence: 1,
parseMetadataJson: '{"episode":2}',
});
const watchedSessionId = startSessionRecord(db, watchedVideoId, 1_000_000).sessionId;
db.prepare(
`
UPDATE imm_sessions
SET
ended_at_ms = ?,
status = 2,
ended_media_ms = ?,
active_watched_ms = ?,
cards_mined = ?,
tokens_seen = ?,
yomitan_lookup_count = ?,
LAST_UPDATE_DATE = ?
WHERE session_id = ?
`,
).run(1_005_000, 7_000, 3_000, 2, 20, 4, 1_005_000, watchedSessionId);
const episodes = getAnimeEpisodes(db, animeId);
assert.equal(episodes.length, 2);
assert.deepEqual(
episodes.map((episode) => ({
videoId: episode.videoId,
totalSessions: episode.totalSessions,
totalActiveMs: episode.totalActiveMs,
totalCards: episode.totalCards,
totalTokensSeen: episode.totalTokensSeen,
})),
[
{
videoId: watchedVideoId,
totalSessions: 1,
totalActiveMs: 3_000,
totalCards: 2,
totalTokensSeen: 20,
},
{
videoId: unwatchedVideoId,
totalSessions: 0,
totalActiveMs: 0,
totalCards: 0,
totalTokensSeen: 0,
},
],
);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('getAnimeEpisodes falls back to the latest subtitle segment end when session progress checkpoints are missing', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
@@ -470,8 +594,8 @@ test('getTrendsDashboard returns chart-ready aggregated series', () => {
parseMetadataJson: null,
});
- const dayOneStart = new Date(2026, 2, 15, 12, 0, 0, 0).getTime();
- const dayTwoStart = new Date(2026, 2, 16, 18, 0, 0, 0).getTime();
+ const dayOneStart = 1_700_000_000_000;
+ const dayTwoStart = dayOneStart + 86_400_000;
const sessionOne = startSessionRecord(db, videoId, dayOneStart);
const sessionTwo = startSessionRecord(db, videoId, dayTwoStart);
@@ -586,6 +710,299 @@ test('getTrendsDashboard returns chart-ready aggregated series', () => {
}
});
test('getTrendsDashboard keeps local-midnight session buckets separate', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const stmts = createTrackerPreparedStatements(db);
const videoId = getOrCreateVideoRecord(db, 'local:/tmp/local-midnight-trends.mkv', {
canonicalTitle: 'Local Midnight Trends',
sourcePath: '/tmp/local-midnight-trends.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const animeId = getOrCreateAnimeRecord(db, {
parsedTitle: 'Local Midnight Trends',
canonicalTitle: 'Local Midnight Trends',
anilistId: null,
titleRomaji: null,
titleEnglish: null,
titleNative: null,
metadataJson: null,
});
linkVideoToAnimeRecord(db, videoId, {
animeId,
parsedBasename: 'local-midnight-trends.mkv',
parsedTitle: 'Local Midnight Trends',
parsedSeason: 1,
parsedEpisode: 1,
parserSource: 'test',
parserConfidence: 1,
parseMetadataJson: null,
});
const beforeMidnight = new Date(2026, 2, 1, 23, 30).getTime();
const afterMidnight = new Date(2026, 2, 2, 0, 30).getTime();
const firstSessionId = startSessionRecord(db, videoId, beforeMidnight).sessionId;
const secondSessionId = startSessionRecord(db, videoId, afterMidnight).sessionId;
for (const [sessionId, startedAtMs, tokensSeen, lookupCount] of [
[firstSessionId, beforeMidnight, 100, 4],
[secondSessionId, afterMidnight, 120, 6],
] as const) {
stmts.telemetryInsertStmt.run(
sessionId,
startedAtMs + 60_000,
60_000,
60_000,
1,
tokensSeen,
0,
lookupCount,
lookupCount,
lookupCount,
0,
0,
0,
0,
startedAtMs + 60_000,
startedAtMs + 60_000,
);
db.prepare(
`
UPDATE imm_sessions
SET
ended_at_ms = ?,
status = 2,
total_watched_ms = ?,
active_watched_ms = ?,
lines_seen = ?,
tokens_seen = ?,
lookup_count = ?,
lookup_hits = ?,
yomitan_lookup_count = ?,
LAST_UPDATE_DATE = ?
WHERE session_id = ?
`,
).run(
startedAtMs + 60_000,
60_000,
60_000,
1,
tokensSeen,
lookupCount,
lookupCount,
lookupCount,
startedAtMs + 60_000,
sessionId,
);
}
const dashboard = getTrendsDashboard(db, 'all', 'day');
assert.equal(dashboard.progress.lookups.length, 2);
assert.deepEqual(
dashboard.progress.lookups.map((point) => point.value),
[4, 10],
);
assert.equal(dashboard.ratios.lookupsPerHundred.length, 2);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('getTrendsDashboard month grouping spans every touched calendar month and keeps progress monthly', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
withMockDate(new Date(2026, 2, 1, 12, 0, 0), (RealDate) => {
try {
ensureSchema(db);
const stmts = createTrackerPreparedStatements(db);
const febVideoId = getOrCreateVideoRecord(db, 'local:/tmp/feb-trends.mkv', {
canonicalTitle: 'Monthly Trends',
sourcePath: '/tmp/feb-trends.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const marVideoId = getOrCreateVideoRecord(db, 'local:/tmp/mar-trends.mkv', {
canonicalTitle: 'Monthly Trends',
sourcePath: '/tmp/mar-trends.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const animeId = getOrCreateAnimeRecord(db, {
parsedTitle: 'Monthly Trends',
canonicalTitle: 'Monthly Trends',
anilistId: null,
titleRomaji: null,
titleEnglish: null,
titleNative: null,
metadataJson: null,
});
linkVideoToAnimeRecord(db, febVideoId, {
animeId,
parsedBasename: 'feb-trends.mkv',
parsedTitle: 'Monthly Trends',
parsedSeason: 1,
parsedEpisode: 1,
parserSource: 'test',
parserConfidence: 1,
parseMetadataJson: null,
});
linkVideoToAnimeRecord(db, marVideoId, {
animeId,
parsedBasename: 'mar-trends.mkv',
parsedTitle: 'Monthly Trends',
parsedSeason: 1,
parsedEpisode: 2,
parserSource: 'test',
parserConfidence: 1,
parseMetadataJson: null,
});
const febStartedAtMs = new RealDate(2026, 1, 15, 20, 0, 0).getTime();
const marStartedAtMs = new RealDate(2026, 2, 1, 9, 0, 0).getTime();
const febSessionId = startSessionRecord(db, febVideoId, febStartedAtMs).sessionId;
const marSessionId = startSessionRecord(db, marVideoId, marStartedAtMs).sessionId;
for (const [sessionId, startedAtMs, tokensSeen, cardsMined, yomitanLookupCount] of [
[febSessionId, febStartedAtMs, 100, 2, 3],
[marSessionId, marStartedAtMs, 120, 4, 5],
] as const) {
stmts.telemetryInsertStmt.run(
sessionId,
startedAtMs + 60_000,
30 * 60_000,
30 * 60_000,
4,
tokensSeen,
cardsMined,
yomitanLookupCount,
yomitanLookupCount,
yomitanLookupCount,
0,
0,
0,
0,
startedAtMs + 60_000,
startedAtMs + 60_000,
);
db.prepare(
`
UPDATE imm_sessions
SET
ended_at_ms = ?,
status = 2,
total_watched_ms = ?,
active_watched_ms = ?,
lines_seen = ?,
tokens_seen = ?,
cards_mined = ?,
lookup_count = ?,
lookup_hits = ?,
yomitan_lookup_count = ?,
LAST_UPDATE_DATE = ?
WHERE session_id = ?
`,
).run(
startedAtMs + 60_000,
30 * 60_000,
30 * 60_000,
4,
tokensSeen,
cardsMined,
yomitanLookupCount,
yomitanLookupCount,
yomitanLookupCount,
startedAtMs + 60_000,
sessionId,
);
}
const insertDailyRollup = db.prepare(
`
INSERT INTO imm_daily_rollups (
rollup_day, video_id, total_sessions, total_active_min, total_lines_seen,
total_tokens_seen, total_cards, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
);
const insertMonthlyRollup = db.prepare(
`
INSERT INTO imm_monthly_rollups (
rollup_month, video_id, total_sessions, total_active_min, total_lines_seen,
total_tokens_seen, total_cards, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
);
const febEpochDay = Math.floor(febStartedAtMs / 86_400_000);
const marEpochDay = Math.floor(marStartedAtMs / 86_400_000);
insertDailyRollup.run(febEpochDay, febVideoId, 1, 30, 4, 100, 2, febStartedAtMs, febStartedAtMs);
insertDailyRollup.run(marEpochDay, marVideoId, 1, 30, 4, 120, 4, marStartedAtMs, marStartedAtMs);
insertMonthlyRollup.run(202602, febVideoId, 1, 30, 4, 100, 2, febStartedAtMs, febStartedAtMs);
insertMonthlyRollup.run(202603, marVideoId, 1, 30, 4, 120, 4, marStartedAtMs, marStartedAtMs);
db.prepare(
`
INSERT INTO imm_words (
headword, word, reading, part_of_speech, pos1, pos2, pos3, first_seen, last_seen, frequency
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
).run(
'二月',
'二月',
'にがつ',
'noun',
'名詞',
'',
'',
Math.floor(febStartedAtMs / 1000),
Math.floor(febStartedAtMs / 1000),
1,
);
db.prepare(
`
INSERT INTO imm_words (
headword, word, reading, part_of_speech, pos1, pos2, pos3, first_seen, last_seen, frequency
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
).run(
'三月',
'三月',
'さんがつ',
'noun',
'名詞',
'',
'',
Math.floor(marStartedAtMs / 1000),
Math.floor(marStartedAtMs / 1000),
1,
);
const dashboard = getTrendsDashboard(db, '30d', 'month');
assert.equal(dashboard.activity.watchTime.length, 2);
assert.deepEqual(
dashboard.progress.newWords.map((point) => point.label),
dashboard.activity.watchTime.map((point) => point.label),
);
assert.deepEqual(
dashboard.progress.episodes.map((point) => point.label),
dashboard.activity.watchTime.map((point) => point.label),
);
assert.deepEqual(
dashboard.progress.lookups.map((point) => point.label),
dashboard.activity.watchTime.map((point) => point.label),
);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
});
test('getQueryHints reads all-time totals from lifetime summary', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
@@ -656,6 +1073,61 @@ test('getQueryHints reads all-time totals from lifetime summary', () => {
}
});
test('getQueryHints computes weekly new-word cutoff from calendar midnights', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
withMockDate(new Date(2026, 2, 15, 12, 0, 0), (RealDate) => {
try {
ensureSchema(db);
const insertWord = db.prepare(
`
INSERT INTO imm_words (
headword, word, reading, part_of_speech, pos1, pos2, pos3, first_seen, last_seen, frequency
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
);
const justBeforeWeekBoundary = Math.floor(
new RealDate(2026, 2, 7, 23, 30, 0).getTime() / 1000,
);
const justAfterWeekBoundary = Math.floor(
new RealDate(2026, 2, 8, 0, 30, 0).getTime() / 1000,
);
insertWord.run(
'境界前',
'境界前',
'きょうかいまえ',
'noun',
'名詞',
'',
'',
justBeforeWeekBoundary,
justBeforeWeekBoundary,
1,
);
insertWord.run(
'境界後',
'境界後',
'きょうかいご',
'noun',
'名詞',
'',
'',
justAfterWeekBoundary,
justAfterWeekBoundary,
1,
);
const hints = getQueryHints(db);
assert.equal(hints.newWordsThisWeek, 1);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
});
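The fixtures above pin "now" to Sunday 2026-03-15 at noon and place one word just before and one just after local midnight on 2026-03-08, so only the later word counts. That behaves like a cutoff at local midnight seven calendar days back. A sketch of that arithmetic (an assumption inferred from the fixtures, not the actual `getQueryHints` implementation):

```typescript
// Hypothetical sketch: weekly cutoff as local midnight seven days before
// "today", in epoch seconds to match the words' first_seen values.
function weeklyCutoffSec(now: Date): number {
  const cutoff = new Date(now.getFullYear(), now.getMonth(), now.getDate() - 7);
  return Math.floor(cutoff.getTime() / 1000);
}

const now = new Date(2026, 2, 15, 12, 0, 0);
const cutoffSec = weeklyCutoffSec(now);
const beforeBoundary = Math.floor(new Date(2026, 2, 7, 23, 30, 0).getTime() / 1000);
const afterBoundary = Math.floor(new Date(2026, 2, 8, 0, 30, 0).getTime() / 1000);
// Only the word first seen after the boundary counts toward newWordsThisWeek.
console.log(beforeBoundary < cutoffSec && afterBoundary >= cutoffSec);
```

The point of the calendar-midnight cutoff is that a word seen 7×24h ago minus one hour does not flip the count depending on what time of day the query runs.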
test('getQueryHints counts new words by distinct headword first-seen time', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
@@ -1024,6 +1496,37 @@ test('getMonthlyRollups returns all rows for the most recent rollup months', ()
}
});
test('getMonthlyRollups derives rate metrics from stored monthly totals', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const insertRollup = db.prepare(
`
INSERT INTO imm_monthly_rollups (
rollup_month, video_id, total_sessions, total_active_min, total_lines_seen,
total_tokens_seen, total_cards, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
`,
);
const nowMs = Date.now();
insertRollup.run(202602, 1, 2, 30, 20, 90, 15, nowMs, nowMs);
insertRollup.run(202602, 2, 1, 0, 10, 25, 5, nowMs, nowMs);
const rows = getMonthlyRollups(db, 1);
assert.equal(rows.length, 2);
const rowsByVideoId = new Map(rows.map((row) => [row.videoId, row]));
assert.equal(rowsByVideoId.get(1)?.cardsPerHour, 30);
assert.equal(rowsByVideoId.get(1)?.tokensPerMin, 3);
assert.equal(rowsByVideoId.get(1)?.lookupHitRate ?? null, null);
assert.equal(rowsByVideoId.get(2)?.cardsPerHour ?? null, null);
assert.equal(rowsByVideoId.get(2)?.tokensPerMin ?? null, null);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
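The expected values here follow directly from the stored totals: 15 cards over 30 active minutes is 30 cards/hour, 90 tokens over 30 minutes is 3 tokens/min, and a zero (or missing) denominator yields a null rate. A minimal sketch of that derivation (the `rates` helper is illustrative, not the production code):

```typescript
// Illustrative rate derivation: rates come from stored monthly totals, and
// are null when there are no active minutes to divide by.
function rates(totalCards: number, totalTokens: number, activeMin: number) {
  if (activeMin <= 0) {
    return { cardsPerHour: null, tokensPerMin: null };
  }
  return {
    cardsPerHour: totalCards / (activeMin / 60),
    tokensPerMin: totalTokens / activeMin,
  };
}

const busyMonth = rates(15, 90, 30);
const idleMonth = rates(5, 25, 0);
console.log(busyMonth); // { cardsPerHour: 30, tokensPerMin: 3 }
console.log(idleMonth); // { cardsPerHour: null, tokensPerMin: null }
```

Deriving rates at read time from totals, instead of storing them, keeps the rollup rows free of redundant columns that could drift out of sync.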
test('getAnimeDailyRollups returns all rows for the most recent rollup days', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
@@ -2113,7 +2616,10 @@ test('media library and detail queries include joined youtube metadata when pres
assert.equal(detail?.youtubeVideoId, 'abc123');
assert.equal(detail?.videoUrl, 'https://www.youtube.com/watch?v=abc123');
assert.equal(detail?.videoThumbnailUrl, 'https://i.ytimg.com/vi/abc123/hqdefault.jpg');
- assert.equal(detail?.channelThumbnailUrl, 'https://yt3.googleusercontent.com/channel-avatar=s88');
+ assert.equal(
+ detail?.channelThumbnailUrl,
+ 'https://yt3.googleusercontent.com/channel-avatar=s88',
+ );
assert.equal(detail?.uploaderId, '@creator');
assert.equal(detail?.uploaderUrl, 'https://www.youtube.com/@creator');
assert.equal(detail?.description, 'Video description');
@@ -2231,9 +2737,8 @@ test('cover art queries reuse a shared blob across duplicate anime art rows', ()
const animeArt = getAnimeCoverArt(db, animeId);
const library = getMediaLibrary(db);
assert.equal(artOne?.coverBlob?.length, 4);
- assert.equal(artTwo?.coverBlob?.length, 4);
- assert.deepEqual(artOne?.coverBlob, artTwo?.coverBlob);
+ assert.deepEqual(artOne?.coverBlob, Buffer.from([1, 2, 3, 4]));
+ assert.deepEqual(artTwo?.coverBlob, Buffer.from([9, 9, 9, 9]));
assert.equal(animeArt?.coverBlob?.length, 4);
assert.deepEqual(
library.map((row) => ({
@@ -2251,6 +2756,52 @@ test('cover art queries reuse a shared blob across duplicate anime art rows', ()
}
});
test('upsertCoverArt prefers freshly fetched bytes over a reused shared hash', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const originalVideoId = getOrCreateVideoRecord(db, 'local:/tmp/shared-cover-original.mkv', {
canonicalTitle: 'Shared Cover Original',
sourcePath: '/tmp/shared-cover-original.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const refreshedVideoId = getOrCreateVideoRecord(db, 'local:/tmp/shared-cover-refresh.mkv', {
canonicalTitle: 'Shared Cover Refresh',
sourcePath: '/tmp/shared-cover-refresh.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
upsertCoverArt(db, originalVideoId, {
anilistId: 999,
coverUrl: 'https://images.test/shared-refresh.jpg',
coverBlob: Buffer.from([1, 2, 3, 4]),
titleRomaji: 'Shared Cover Refresh',
titleEnglish: 'Shared Cover Refresh',
episodesTotal: 12,
});
upsertCoverArt(db, refreshedVideoId, {
anilistId: 999,
coverUrl: 'https://images.test/shared-refresh.jpg',
coverBlob: Buffer.from([9, 8, 7, 6]),
titleRomaji: 'Shared Cover Refresh',
titleEnglish: 'Shared Cover Refresh',
episodesTotal: 12,
});
const originalArt = getCoverArt(db, originalVideoId);
const refreshedArt = getCoverArt(db, refreshedVideoId);
assert.deepEqual(originalArt?.coverBlob, Buffer.from([1, 2, 3, 4]));
assert.deepEqual(refreshedArt?.coverBlob, Buffer.from([9, 8, 7, 6]));
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('anime/media detail and episode queries use ended-session metrics when telemetry rows are absent', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
@@ -2833,13 +3384,13 @@ test('deleteSession rebuilds word and kanji aggregates from retained subtitle li
assert.ok(sharedWordRow);
assert.equal(sharedWordRow.frequency, 1);
- assert.equal(sharedWordRow.first_seen, keptTs);
- assert.equal(sharedWordRow.last_seen, keptTs);
+ assert.equal(sharedWordRow.first_seen, Math.floor(keptTs / 1000));
+ assert.equal(sharedWordRow.last_seen, Math.floor(keptTs / 1000));
assert.equal(deletedOnlyWordRow ?? null, null);
assert.ok(sharedKanjiRow);
assert.equal(sharedKanjiRow.frequency, 1);
- assert.equal(sharedKanjiRow.first_seen, keptTs);
- assert.equal(sharedKanjiRow.last_seen, keptTs);
+ assert.equal(sharedKanjiRow.first_seen, Math.floor(keptTs / 1000));
+ assert.equal(sharedKanjiRow.last_seen, Math.floor(keptTs / 1000));
assert.equal(deletedOnlyKanjiRow ?? null, null);
} finally {
db.close();
@@ -3015,22 +3566,24 @@ test('deleteSession removes zero-session media from library and trends', () => {
const lifetimeMediaCount = Number(
(
- db.prepare('SELECT COUNT(*) AS total FROM imm_lifetime_media WHERE video_id = ?').get(
- videoId,
- ) as { total: number }
+ db
+ .prepare('SELECT COUNT(*) AS total FROM imm_lifetime_media WHERE video_id = ?')
+ .get(videoId) as { total: number }
).total,
);
const lifetimeAnimeCount = Number(
(
- db.prepare('SELECT COUNT(*) AS total FROM imm_lifetime_anime WHERE anime_id = ?').get(
- animeId,
- ) as { total: number }
+ db
+ .prepare('SELECT COUNT(*) AS total FROM imm_lifetime_anime WHERE anime_id = ?')
+ .get(animeId) as { total: number }
).total,
);
const appliedSessionCount = Number(
(
db
- .prepare('SELECT COUNT(*) AS total FROM imm_lifetime_applied_sessions WHERE session_id = ?')
+ .prepare(
+ 'SELECT COUNT(*) AS total FROM imm_lifetime_applied_sessions WHERE session_id = ?',
+ )
.get(sessionId) as { total: number }
).total,
);

@@ -1,5 +1,6 @@
import type { DatabaseSync } from './sqlite';
import { finalizeSessionRecord } from './session';
import { nowMs } from './time';
import type { LifetimeRebuildSummary, SessionState } from './types';
interface TelemetryRow {
@@ -97,8 +98,7 @@ function isFirstSessionForLocalDay(
`
SELECT COUNT(*) AS count
FROM imm_sessions
- WHERE CAST(strftime('%s', started_at_ms / 1000, 'unixepoch', 'localtime') AS INTEGER) / 86400
- = CAST(strftime('%s', ? / 1000, 'unixepoch', 'localtime') AS INTEGER) / 86400
+ WHERE date(started_at_ms / 1000, 'unixepoch', 'localtime') = date(? / 1000, 'unixepoch', 'localtime')
AND (
started_at_ms < ?
OR (started_at_ms = ? AND session_id < ?)
@@ -393,7 +393,7 @@ export function applySessionLifetimeSummary(
ON CONFLICT(session_id) DO NOTHING
`,
)
.run(session.sessionId, endedAtMs, Date.now(), Date.now());
.run(session.sessionId, endedAtMs, nowMs(), nowMs());
if ((applyResult.changes ?? 0) <= 0) {
return;
@@ -468,7 +468,7 @@ export function applySessionLifetimeSummary(
? 1
: 0;
const nowMs = Date.now();
const updatedAtMs = nowMs();
db.prepare(
`
UPDATE imm_lifetime_global
@@ -490,13 +490,13 @@ export function applySessionLifetimeSummary(
isFirstSessionForVideoRun ? 1 : 0,
isFirstCompletedSessionForVideoRun ? 1 : 0,
animeCompletedDelta,
nowMs,
updatedAtMs,
);
upsertLifetimeMedia(
db,
session.videoId,
nowMs,
updatedAtMs,
activeMs,
cardsMined,
linesSeen,
@@ -510,7 +510,7 @@ export function applySessionLifetimeSummary(
upsertLifetimeAnime(
db,
video.anime_id,
nowMs,
updatedAtMs,
activeMs,
cardsMined,
linesSeen,
@@ -524,7 +524,7 @@ export function applySessionLifetimeSummary(
}
export function rebuildLifetimeSummaries(db: DatabaseSync): LifetimeRebuildSummary {
const rebuiltAtMs = Date.now();
const rebuiltAtMs = nowMs();
db.exec('BEGIN');
try {
const summary = rebuildLifetimeSummariesInTransaction(db, rebuiltAtMs);
@@ -538,7 +538,7 @@ export function rebuildLifetimeSummaries(db: DatabaseSync): LifetimeRebuildSumma
export function rebuildLifetimeSummariesInTransaction(
db: DatabaseSync,
rebuiltAtMs = Date.now(),
rebuiltAtMs = nowMs(),
): LifetimeRebuildSummary {
return rebuildLifetimeSummariesInternal(db, rebuiltAtMs);
}
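The hunks above replace direct `Date.now()` calls with the shared `nowMs` helper imported from `./time`. A minimal sketch of what such a helper might look like — the real module's shape is assumed, not copied from the repo:

```typescript
// Minimal sketch of a shared clock helper like './time'; the real
// module may differ. A module-level override acts as a test seam.
let overrideClock: (() => number) | null = null;

// Call sites read time through nowMs() instead of Date.now()...
function nowMs(): number {
  return overrideClock ? overrideClock() : Date.now();
}

// ...so tests can pin the clock without patching the global Date.
function setNowMsOverride(fn: (() => number) | null): void {
  overrideClock = fn;
}
```

This is also why the local `const nowMs = Date.now()` in `applySessionLifetimeSummary` is renamed to `updatedAtMs`: the local binding would otherwise shadow the imported helper.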


@@ -31,9 +31,9 @@ test('pruneRawRetention uses session retention separately from telemetry retenti
try {
ensureSchema(db);
const nowMs = 90 * 86_400_000;
const staleEndedAtMs = nowMs - 40 * 86_400_000;
const keptEndedAtMs = nowMs - 5 * 86_400_000;
const nowMs = 1_000_000_000;
const staleEndedAtMs = nowMs - 400_000_000;
const keptEndedAtMs = nowMs - 50_000_000;
db.exec(`
INSERT INTO imm_videos (
@@ -49,14 +49,14 @@ test('pruneRawRetention uses session retention separately from telemetry retenti
INSERT INTO imm_session_telemetry (
session_id, sample_ms, total_watched_ms, active_watched_ms, CREATED_DATE, LAST_UPDATE_DATE
) VALUES
(1, ${nowMs - 2 * 86_400_000}, 0, 0, ${nowMs}, ${nowMs}),
(2, ${nowMs - 12 * 60 * 60 * 1000}, 0, 0, ${nowMs}, ${nowMs});
(1, ${nowMs - 200_000_000}, 0, 0, ${nowMs}, ${nowMs}),
(2, ${nowMs - 10_000_000}, 0, 0, ${nowMs}, ${nowMs});
`);
const result = pruneRawRetention(db, nowMs, {
eventsRetentionMs: 7 * 86_400_000,
telemetryRetentionMs: 1 * 86_400_000,
sessionsRetentionMs: 30 * 86_400_000,
eventsRetentionMs: 120_000_000,
telemetryRetentionMs: 80_000_000,
sessionsRetentionMs: 300_000_000,
});
const remainingSessions = db
@@ -82,15 +82,13 @@ test('pruneRawRetention uses session retention separately from telemetry retenti
}
});
test('raw retention keeps rollups and rollup retention prunes them separately', () => {
test('pruneRawRetention skips disabled retention windows', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const nowMs = Date.UTC(2026, 2, 16, 12, 0, 0, 0);
const oldDay = Math.floor((nowMs - 90 * 86_400_000) / 86_400_000);
const oldMonth = toMonthKey(nowMs - 400 * 86_400_000);
const nowMs = 1_000_000_000;
db.exec(`
INSERT INTO imm_videos (
@@ -101,12 +99,79 @@ test('raw retention keeps rollups and rollup retention prunes them separately',
INSERT INTO imm_sessions (
session_id, session_uuid, video_id, started_at_ms, ended_at_ms, status, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (
1, 'session-1', 1, ${nowMs - 90 * 86_400_000}, ${nowMs - 90 * 86_400_000 + 1_000}, 2, ${nowMs}, ${nowMs}
1, 'session-1', 1, ${nowMs - 1_000}, ${nowMs - 500}, 2, ${nowMs}, ${nowMs}
);
INSERT INTO imm_session_telemetry (
session_id, sample_ms, total_watched_ms, active_watched_ms, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (
1, ${nowMs - 90 * 86_400_000}, 0, 0, ${nowMs}, ${nowMs}
1, ${nowMs - 2_000}, 0, 0, ${nowMs}, ${nowMs}
);
INSERT INTO imm_session_events (
session_id, event_type, ts_ms, payload_json, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (
1, 1, ${nowMs - 3_000}, '{}', ${nowMs}, ${nowMs}
);
`);
const result = pruneRawRetention(db, nowMs, {
eventsRetentionMs: Number.POSITIVE_INFINITY,
telemetryRetentionMs: Number.POSITIVE_INFINITY,
sessionsRetentionMs: Number.POSITIVE_INFINITY,
});
const remainingSessionEvents = db
.prepare('SELECT COUNT(*) AS count FROM imm_session_events')
.get() as { count: number };
const remainingTelemetry = db
.prepare('SELECT COUNT(*) AS count FROM imm_session_telemetry')
.get() as { count: number };
const remainingSessions = db
.prepare('SELECT COUNT(*) AS count FROM imm_sessions')
.get() as { count: number };
assert.equal(result.deletedSessionEvents, 0);
assert.equal(result.deletedTelemetryRows, 0);
assert.equal(result.deletedEndedSessions, 0);
assert.equal(remainingSessionEvents.count, 1);
assert.equal(remainingTelemetry.count, 1);
assert.equal(remainingSessions.count, 1);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('toMonthKey floors negative timestamps into the prior UTC month', () => {
assert.equal(toMonthKey(-1), 196912);
assert.equal(toMonthKey(-86_400_000), 196912);
assert.equal(toMonthKey(0), 197001);
});
test('raw retention keeps rollups and rollup retention prunes them separately', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const nowMs = 1_000_000_000;
const oldDay = Math.floor((nowMs - 200_000_000) / 86_400_000);
const oldMonth = 196912;
db.exec(`
INSERT INTO imm_videos (
video_id, video_key, canonical_title, source_type, duration_ms, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (
1, 'local:/tmp/video.mkv', 'Video', 1, 0, ${nowMs}, ${nowMs}
);
INSERT INTO imm_sessions (
session_id, session_uuid, video_id, started_at_ms, ended_at_ms, status, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (
1, 'session-1', 1, ${nowMs - 200_000_000}, ${nowMs - 199_999_000}, 2, ${nowMs}, ${nowMs}
);
INSERT INTO imm_session_telemetry (
session_id, sample_ms, total_watched_ms, active_watched_ms, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (
1, ${nowMs - 200_000_000}, 0, 0, ${nowMs}, ${nowMs}
);
INSERT INTO imm_daily_rollups (
rollup_day, video_id, total_sessions, total_active_min, total_lines_seen,
@@ -123,9 +188,9 @@ test('raw retention keeps rollups and rollup retention prunes them separately',
`);
pruneRawRetention(db, nowMs, {
eventsRetentionMs: 7 * 86_400_000,
telemetryRetentionMs: 30 * 86_400_000,
sessionsRetentionMs: 30 * 86_400_000,
eventsRetentionMs: 120_000_000,
telemetryRetentionMs: 120_000_000,
sessionsRetentionMs: 120_000_000,
});
const rollupsAfterRawPrune = db
@@ -139,8 +204,8 @@ test('raw retention keeps rollups and rollup retention prunes them separately',
assert.equal(monthlyAfterRawPrune?.total, 1);
const rollupPrune = pruneRollupRetention(db, nowMs, {
dailyRollupRetentionMs: 30 * 86_400_000,
monthlyRollupRetentionMs: 365 * 86_400_000,
dailyRollupRetentionMs: 120_000_000,
monthlyRollupRetentionMs: 1,
});
const rollupsAfterRollupPrune = db
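The commit log mentions a shared `withMockDate` helper for date-sensitive tests, with explicit `super` args instead of a spread in the mock constructor. One possible shape for such a helper — the name and signature here are illustrative, not the repo's actual code:

```typescript
// Pin the global Date to a fixed timestamp for the duration of a
// callback, then restore it. Illustrative sketch, not repo code.
function withMockDate<T>(fixedMs: number, run: () => T): T {
  const RealDate = Date;

  class MockDate extends RealDate {
    constructor(value?: number | string | Date) {
      // Explicit super args (not a spread) keep strict targets happy.
      if (value === undefined) {
        super(fixedMs);
      } else {
        super(value);
      }
    }

    static now(): number {
      return fixedMs;
    }
  }

  globalThis.Date = MockDate as unknown as DateConstructor;
  try {
    return run();
  } finally {
    globalThis.Date = RealDate; // always restore, even if run() throws
  }
}
```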


@@ -1,4 +1,6 @@
import type { DatabaseSync } from './sqlite';
import { nowMs } from './time';
import { toDbMs } from './query-shared';
const ROLLUP_STATE_KEY = 'last_rollup_sample_ms';
const DAILY_MS = 86_400_000;
@@ -25,38 +27,53 @@ interface RawRetentionResult {
}
export function toMonthKey(timestampMs: number): number {
const monthDate = new Date(timestampMs);
return monthDate.getUTCFullYear() * 100 + monthDate.getUTCMonth() + 1;
const epochDay = Math.floor(timestampMs / DAILY_MS);
const z = epochDay + 719468;
const era = Math.floor(z / 146097);
const doe = z - era * 146097;
const yoe = Math.floor(
(doe - Math.floor(doe / 1460) + Math.floor(doe / 36524) - Math.floor(doe / 146096)) / 365,
);
let year = yoe + era * 400;
const doy = doe - (365 * yoe + Math.floor(yoe / 4) - Math.floor(yoe / 100));
const mp = Math.floor((5 * doy + 2) / 153);
const month = mp + (mp < 10 ? 3 : -9);
if (month <= 2) {
year += 1;
}
return year * 100 + month;
}
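The rewritten `toMonthKey` floors the timestamp to an epoch day and then applies days-from-civil arithmetic, so month keys stay correct for negative timestamps without going through a `Date` object. As a standalone sanity sketch (a hypothetical harness, not repo code), the arithmetic can be cross-checked against `Date`'s UTC accessors:

```typescript
const DAILY_MS = 86_400_000;

// Same civil-from-days arithmetic as the diff's toMonthKey: floor to an
// epoch day, then convert days to (year, month) without a Date object.
function toMonthKey(timestampMs: number): number {
  const epochDay = Math.floor(timestampMs / DAILY_MS);
  const z = epochDay + 719468;
  const era = Math.floor(z / 146097);
  const doe = z - era * 146097;
  const yoe = Math.floor(
    (doe - Math.floor(doe / 1460) + Math.floor(doe / 36524) - Math.floor(doe / 146096)) / 365,
  );
  let year = yoe + era * 400;
  const doy = doe - (365 * yoe + Math.floor(yoe / 4) - Math.floor(yoe / 100));
  const mp = Math.floor((5 * doy + 2) / 153);
  const month = mp + (mp < 10 ? 3 : -9);
  if (month <= 2) {
    year += 1;
  }
  return year * 100 + month;
}

// Date-based reference for cross-checking within Date's supported range.
function toMonthKeyViaDate(timestampMs: number): number {
  const d = new Date(timestampMs);
  return d.getUTCFullYear() * 100 + d.getUTCMonth() + 1;
}
```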
export function pruneRawRetention(
db: DatabaseSync,
nowMs: number,
currentMs: number,
policy: {
eventsRetentionMs: number;
telemetryRetentionMs: number;
sessionsRetentionMs: number;
},
): RawRetentionResult {
const eventCutoff = nowMs - policy.eventsRetentionMs;
const telemetryCutoff = nowMs - policy.telemetryRetentionMs;
const sessionsCutoff = nowMs - policy.sessionsRetentionMs;
const deletedSessionEvents = (
db.prepare(`DELETE FROM imm_session_events WHERE ts_ms < ?`).run(eventCutoff) as {
changes: number;
}
).changes;
const deletedTelemetryRows = (
db.prepare(`DELETE FROM imm_session_telemetry WHERE sample_ms < ?`).run(telemetryCutoff) as {
changes: number;
}
).changes;
const deletedEndedSessions = (
db
.prepare(`DELETE FROM imm_sessions WHERE ended_at_ms IS NOT NULL AND ended_at_ms < ?`)
.run(sessionsCutoff) as { changes: number }
).changes;
const deletedSessionEvents = Number.isFinite(policy.eventsRetentionMs)
? (
db.prepare(`DELETE FROM imm_session_events WHERE ts_ms < ?`).run(
toDbMs(currentMs - policy.eventsRetentionMs),
) as { changes: number }
).changes
: 0;
const deletedTelemetryRows = Number.isFinite(policy.telemetryRetentionMs)
? (
db
.prepare(`DELETE FROM imm_session_telemetry WHERE sample_ms < ?`)
.run(toDbMs(currentMs - policy.telemetryRetentionMs)) as { changes: number }
).changes
: 0;
const deletedEndedSessions = Number.isFinite(policy.sessionsRetentionMs)
? (
db
.prepare(`DELETE FROM imm_sessions WHERE ended_at_ms IS NOT NULL AND ended_at_ms < ?`)
.run(toDbMs(currentMs - policy.sessionsRetentionMs)) as { changes: number }
).changes
: 0;
return {
deletedSessionEvents,
@@ -67,7 +84,7 @@ export function pruneRawRetention(
export function pruneRollupRetention(
db: DatabaseSync,
nowMs: number,
currentMs: number,
policy: {
dailyRollupRetentionMs: number;
monthlyRollupRetentionMs: number;
@@ -77,7 +94,7 @@ export function pruneRollupRetention(
? (
db
.prepare(`DELETE FROM imm_daily_rollups WHERE rollup_day < ?`)
.run(Math.floor((nowMs - policy.dailyRollupRetentionMs) / DAILY_MS)) as {
.run(Math.floor((currentMs - policy.dailyRollupRetentionMs) / DAILY_MS)) as {
changes: number;
}
).changes
@@ -86,7 +103,7 @@ export function pruneRollupRetention(
? (
db
.prepare(`DELETE FROM imm_monthly_rollups WHERE rollup_month < ?`)
.run(toMonthKey(nowMs - policy.monthlyRollupRetentionMs)) as {
.run(toMonthKey(currentMs - policy.monthlyRollupRetentionMs)) as {
changes: number;
}
).changes
@@ -105,7 +122,7 @@ function getLastRollupSampleMs(db: DatabaseSync): number {
return row ? Number(row.state_value) : ZERO_ID;
}
function setLastRollupSampleMs(db: DatabaseSync, sampleMs: number): void {
function setLastRollupSampleMs(db: DatabaseSync, sampleMs: number | bigint): void {
db.prepare(
`INSERT INTO imm_rollup_state (state_key, state_value)
VALUES (?, ?)
@@ -124,7 +141,7 @@ function resetRollups(db: DatabaseSync): void {
function upsertDailyRollupsForGroups(
db: DatabaseSync,
groups: Array<{ rollupDay: number; videoId: number }>,
rollupNowMs: number,
rollupNowMs: bigint,
): void {
if (groups.length === 0) {
return;
@@ -140,29 +157,32 @@ function upsertDailyRollupsForGroups(
CAST(julianday(s.started_at_ms / 1000, 'unixepoch', 'localtime') - 2440587.5 AS INTEGER) AS rollup_day,
s.video_id AS video_id,
COUNT(DISTINCT s.session_id) AS total_sessions,
COALESCE(SUM(sm.max_active_ms), 0) / 60000.0 AS total_active_min,
COALESCE(SUM(sm.max_lines), 0) AS total_lines_seen,
COALESCE(SUM(sm.max_tokens), 0) AS total_tokens_seen,
COALESCE(SUM(sm.max_cards), 0) AS total_cards,
COALESCE(SUM(COALESCE(sm.max_active_ms, s.active_watched_ms)), 0) / 60000.0 AS total_active_min,
COALESCE(SUM(COALESCE(sm.max_lines, s.lines_seen)), 0) AS total_lines_seen,
COALESCE(SUM(COALESCE(sm.max_tokens, s.tokens_seen)), 0) AS total_tokens_seen,
COALESCE(SUM(COALESCE(sm.max_cards, s.cards_mined)), 0) AS total_cards,
CASE
WHEN COALESCE(SUM(sm.max_active_ms), 0) > 0
THEN (COALESCE(SUM(sm.max_cards), 0) * 60.0) / (COALESCE(SUM(sm.max_active_ms), 0) / 60000.0)
WHEN COALESCE(SUM(COALESCE(sm.max_active_ms, s.active_watched_ms)), 0) > 0
THEN (COALESCE(SUM(COALESCE(sm.max_cards, s.cards_mined)), 0) * 60.0)
/ (COALESCE(SUM(COALESCE(sm.max_active_ms, s.active_watched_ms)), 0) / 60000.0)
ELSE NULL
END AS cards_per_hour,
CASE
WHEN COALESCE(SUM(sm.max_active_ms), 0) > 0
THEN COALESCE(SUM(sm.max_tokens), 0) / (COALESCE(SUM(sm.max_active_ms), 0) / 60000.0)
WHEN COALESCE(SUM(COALESCE(sm.max_active_ms, s.active_watched_ms)), 0) > 0
THEN COALESCE(SUM(COALESCE(sm.max_tokens, s.tokens_seen)), 0)
/ (COALESCE(SUM(COALESCE(sm.max_active_ms, s.active_watched_ms)), 0) / 60000.0)
ELSE NULL
END AS tokens_per_min,
CASE
WHEN COALESCE(SUM(sm.max_lookups), 0) > 0
THEN CAST(COALESCE(SUM(sm.max_hits), 0) AS REAL) / CAST(SUM(sm.max_lookups) AS REAL)
WHEN COALESCE(SUM(COALESCE(sm.max_lookups, s.lookup_count)), 0) > 0
THEN CAST(COALESCE(SUM(COALESCE(sm.max_hits, s.lookup_hits)), 0) AS REAL)
/ CAST(COALESCE(SUM(COALESCE(sm.max_lookups, s.lookup_count)), 0) AS REAL)
ELSE NULL
END AS lookup_hit_rate,
? AS CREATED_DATE,
? AS LAST_UPDATE_DATE
FROM imm_sessions s
JOIN (
LEFT JOIN (
SELECT
t.session_id,
MAX(t.active_watched_ms) AS max_active_ms,
@@ -197,7 +217,7 @@ function upsertDailyRollupsForGroups(
function upsertMonthlyRollupsForGroups(
db: DatabaseSync,
groups: Array<{ rollupMonth: number; videoId: number }>,
rollupNowMs: number,
rollupNowMs: bigint,
): void {
if (groups.length === 0) {
return;
@@ -212,14 +232,14 @@ function upsertMonthlyRollupsForGroups(
CAST(strftime('%Y%m', s.started_at_ms / 1000, 'unixepoch', 'localtime') AS INTEGER) AS rollup_month,
s.video_id AS video_id,
COUNT(DISTINCT s.session_id) AS total_sessions,
COALESCE(SUM(sm.max_active_ms), 0) / 60000.0 AS total_active_min,
COALESCE(SUM(sm.max_lines), 0) AS total_lines_seen,
COALESCE(SUM(sm.max_tokens), 0) AS total_tokens_seen,
COALESCE(SUM(sm.max_cards), 0) AS total_cards,
COALESCE(SUM(COALESCE(sm.max_active_ms, s.active_watched_ms)), 0) / 60000.0 AS total_active_min,
COALESCE(SUM(COALESCE(sm.max_lines, s.lines_seen)), 0) AS total_lines_seen,
COALESCE(SUM(COALESCE(sm.max_tokens, s.tokens_seen)), 0) AS total_tokens_seen,
COALESCE(SUM(COALESCE(sm.max_cards, s.cards_mined)), 0) AS total_cards,
? AS CREATED_DATE,
? AS LAST_UPDATE_DATE
FROM imm_sessions s
JOIN (
LEFT JOIN (
SELECT
t.session_id,
MAX(t.active_watched_ms) AS max_active_ms,
@@ -261,7 +281,7 @@ function getAffectedRollupGroups(
FROM imm_session_telemetry t
JOIN imm_sessions s
ON s.session_id = t.session_id
WHERE t.sample_ms > ?
WHERE t.sample_ms >= ?
`,
)
.all(lastRollupSampleMs) as unknown as RollupGroupRow[]
@@ -301,7 +321,7 @@ export function runRollupMaintenance(db: DatabaseSync, forceRebuild = false): vo
return;
}
const rollupNowMs = Date.now();
const rollupNowMs = toDbMs(nowMs());
const lastRollupSampleMs = getLastRollupSampleMs(db);
const maxSampleRow = db
@@ -336,7 +356,7 @@ export function runRollupMaintenance(db: DatabaseSync, forceRebuild = false): vo
try {
upsertDailyRollupsForGroups(db, dailyGroups, rollupNowMs);
upsertMonthlyRollupsForGroups(db, monthlyGroups, rollupNowMs);
setLastRollupSampleMs(db, Number(maxSampleRow.maxSampleMs));
setLastRollupSampleMs(db, toDbMs(maxSampleRow.maxSampleMs ?? ZERO_ID));
db.exec('COMMIT');
} catch (error) {
db.exec('ROLLBACK');
@@ -345,7 +365,7 @@ export function runRollupMaintenance(db: DatabaseSync, forceRebuild = false): vo
}
export function rebuildRollupsInTransaction(db: DatabaseSync): void {
const rollupNowMs = Date.now();
const rollupNowMs = toDbMs(nowMs());
const maxSampleRow = db
.prepare('SELECT MAX(sample_ms) AS maxSampleMs FROM imm_session_telemetry')
.get() as unknown as RollupTelemetryResult | null;
@@ -357,7 +377,7 @@ export function rebuildRollupsInTransaction(db: DatabaseSync): void {
const affectedGroups = getAffectedRollupGroups(db, ZERO_ID);
if (affectedGroups.length === 0) {
setLastRollupSampleMs(db, Number(maxSampleRow.maxSampleMs));
setLastRollupSampleMs(db, toDbMs(maxSampleRow.maxSampleMs ?? ZERO_ID));
return;
}
@@ -376,7 +396,7 @@ export function rebuildRollupsInTransaction(db: DatabaseSync): void {
upsertDailyRollupsForGroups(db, dailyGroups, rollupNowMs);
upsertMonthlyRollupsForGroups(db, monthlyGroups, rollupNowMs);
setLastRollupSampleMs(db, Number(maxSampleRow.maxSampleMs));
setLastRollupSampleMs(db, toDbMs(maxSampleRow.maxSampleMs ?? ZERO_ID));
}
export function runOptimizeMaintenance(db: DatabaseSync): void {


@@ -0,0 +1,257 @@
import type { DatabaseSync } from './sqlite';
import type {
KanjiAnimeAppearanceRow,
KanjiDetailRow,
KanjiOccurrenceRow,
KanjiStatsRow,
KanjiWordRow,
SessionEventRow,
SimilarWordRow,
VocabularyStatsRow,
WordAnimeAppearanceRow,
WordDetailRow,
WordOccurrenceRow,
} from './types';
export function getVocabularyStats(
db: DatabaseSync,
limit = 100,
excludePos?: string[],
): VocabularyStatsRow[] {
const hasExclude = excludePos && excludePos.length > 0;
const placeholders = hasExclude ? excludePos.map(() => '?').join(', ') : '';
const whereClause = hasExclude
? `WHERE (w.part_of_speech IS NULL OR w.part_of_speech NOT IN (${placeholders}))`
: '';
const stmt = db.prepare(`
SELECT w.id AS wordId, w.headword, w.word, w.reading,
w.part_of_speech AS partOfSpeech, w.pos1, w.pos2, w.pos3,
w.frequency, w.frequency_rank AS frequencyRank,
w.first_seen AS firstSeen, w.last_seen AS lastSeen,
COUNT(DISTINCT sl.anime_id) AS animeCount
FROM imm_words w
LEFT JOIN imm_word_line_occurrences o ON o.word_id = w.id
LEFT JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id AND sl.anime_id IS NOT NULL
${whereClause}
GROUP BY w.id
ORDER BY w.frequency DESC LIMIT ?
`);
const params = hasExclude ? [...excludePos, limit] : [limit];
return stmt.all(...params) as VocabularyStatsRow[];
}
export function getKanjiStats(db: DatabaseSync, limit = 100): KanjiStatsRow[] {
const stmt = db.prepare(`
SELECT id AS kanjiId, kanji, frequency,
first_seen AS firstSeen, last_seen AS lastSeen
FROM imm_kanji ORDER BY frequency DESC LIMIT ?
`);
return stmt.all(limit) as KanjiStatsRow[];
}
export function getWordOccurrences(
db: DatabaseSync,
headword: string,
word: string,
reading: string,
limit = 100,
offset = 0,
): WordOccurrenceRow[] {
return db
.prepare(
`
SELECT
l.anime_id AS animeId,
a.canonical_title AS animeTitle,
l.video_id AS videoId,
v.canonical_title AS videoTitle,
v.source_path AS sourcePath,
l.secondary_text AS secondaryText,
l.session_id AS sessionId,
l.line_index AS lineIndex,
l.segment_start_ms AS segmentStartMs,
l.segment_end_ms AS segmentEndMs,
l.text AS text,
o.occurrence_count AS occurrenceCount
FROM imm_word_line_occurrences o
JOIN imm_words w ON w.id = o.word_id
JOIN imm_subtitle_lines l ON l.line_id = o.line_id
JOIN imm_videos v ON v.video_id = l.video_id
LEFT JOIN imm_anime a ON a.anime_id = l.anime_id
WHERE w.headword = ? AND w.word = ? AND w.reading = ?
ORDER BY l.CREATED_DATE DESC, l.line_id DESC
LIMIT ?
OFFSET ?
`,
)
.all(headword, word, reading, limit, offset) as unknown as WordOccurrenceRow[];
}
export function getKanjiOccurrences(
db: DatabaseSync,
kanji: string,
limit = 100,
offset = 0,
): KanjiOccurrenceRow[] {
return db
.prepare(
`
SELECT
l.anime_id AS animeId,
a.canonical_title AS animeTitle,
l.video_id AS videoId,
v.canonical_title AS videoTitle,
v.source_path AS sourcePath,
l.secondary_text AS secondaryText,
l.session_id AS sessionId,
l.line_index AS lineIndex,
l.segment_start_ms AS segmentStartMs,
l.segment_end_ms AS segmentEndMs,
l.text AS text,
o.occurrence_count AS occurrenceCount
FROM imm_kanji_line_occurrences o
JOIN imm_kanji k ON k.id = o.kanji_id
JOIN imm_subtitle_lines l ON l.line_id = o.line_id
JOIN imm_videos v ON v.video_id = l.video_id
LEFT JOIN imm_anime a ON a.anime_id = l.anime_id
WHERE k.kanji = ?
ORDER BY l.CREATED_DATE DESC, l.line_id DESC
LIMIT ?
OFFSET ?
`,
)
.all(kanji, limit, offset) as unknown as KanjiOccurrenceRow[];
}
export function getSessionEvents(
db: DatabaseSync,
sessionId: number,
limit = 500,
eventTypes?: number[],
): SessionEventRow[] {
if (!eventTypes || eventTypes.length === 0) {
const stmt = db.prepare(`
SELECT event_type AS eventType, ts_ms AS tsMs, payload_json AS payload
FROM imm_session_events WHERE session_id = ? ORDER BY ts_ms ASC LIMIT ?
`);
return stmt.all(sessionId, limit) as SessionEventRow[];
}
const placeholders = eventTypes.map(() => '?').join(', ');
const stmt = db.prepare(`
SELECT event_type AS eventType, ts_ms AS tsMs, payload_json AS payload
FROM imm_session_events
WHERE session_id = ? AND event_type IN (${placeholders})
ORDER BY ts_ms ASC
LIMIT ?
`);
return stmt.all(sessionId, ...eventTypes, limit) as SessionEventRow[];
}
export function getWordDetail(db: DatabaseSync, wordId: number): WordDetailRow | null {
return db
.prepare(
`
SELECT id AS wordId, headword, word, reading,
part_of_speech AS partOfSpeech, pos1, pos2, pos3,
frequency, first_seen AS firstSeen, last_seen AS lastSeen
FROM imm_words WHERE id = ?
`,
)
.get(wordId) as WordDetailRow | null;
}
export function getWordAnimeAppearances(
db: DatabaseSync,
wordId: number,
): WordAnimeAppearanceRow[] {
return db
.prepare(
`
SELECT a.anime_id AS animeId, a.canonical_title AS animeTitle,
SUM(o.occurrence_count) AS occurrenceCount
FROM imm_word_line_occurrences o
JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
JOIN imm_anime a ON a.anime_id = sl.anime_id
WHERE o.word_id = ? AND sl.anime_id IS NOT NULL
GROUP BY a.anime_id
ORDER BY occurrenceCount DESC
`,
)
.all(wordId) as WordAnimeAppearanceRow[];
}
export function getSimilarWords(db: DatabaseSync, wordId: number, limit = 10): SimilarWordRow[] {
const word = db.prepare('SELECT headword, reading FROM imm_words WHERE id = ?').get(wordId) as {
headword: string;
reading: string;
} | null;
if (!word || word.headword.trim() === '') return [];
return db
.prepare(
`
SELECT id AS wordId, headword, word, reading, frequency
FROM imm_words
WHERE id != ?
AND (reading = ? OR headword LIKE ? OR headword LIKE ?)
ORDER BY frequency DESC
LIMIT ?
`,
)
.all(
wordId,
word.reading,
`%${word.headword.charAt(0)}%`,
`%${word.headword.charAt(word.headword.length - 1)}%`,
limit,
) as SimilarWordRow[];
}
export function getKanjiDetail(db: DatabaseSync, kanjiId: number): KanjiDetailRow | null {
return db
.prepare(
`
SELECT id AS kanjiId, kanji, frequency, first_seen AS firstSeen, last_seen AS lastSeen
FROM imm_kanji WHERE id = ?
`,
)
.get(kanjiId) as KanjiDetailRow | null;
}
export function getKanjiAnimeAppearances(
db: DatabaseSync,
kanjiId: number,
): KanjiAnimeAppearanceRow[] {
return db
.prepare(
`
SELECT a.anime_id AS animeId, a.canonical_title AS animeTitle,
SUM(o.occurrence_count) AS occurrenceCount
FROM imm_kanji_line_occurrences o
JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
JOIN imm_anime a ON a.anime_id = sl.anime_id
WHERE o.kanji_id = ? AND sl.anime_id IS NOT NULL
GROUP BY a.anime_id
ORDER BY occurrenceCount DESC
`,
)
.all(kanjiId) as KanjiAnimeAppearanceRow[];
}
export function getKanjiWords(db: DatabaseSync, kanjiId: number, limit = 20): KanjiWordRow[] {
const kanjiRow = db.prepare('SELECT kanji FROM imm_kanji WHERE id = ?').get(kanjiId) as {
kanji: string;
} | null;
if (!kanjiRow) return [];
return db
.prepare(
`
SELECT id AS wordId, headword, word, reading, frequency
FROM imm_words
WHERE headword LIKE ?
ORDER BY frequency DESC
LIMIT ?
`,
)
.all(`%${kanjiRow.kanji}%`, limit) as KanjiWordRow[];
}
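Both `getVocabularyStats` and `getSessionEvents` build their `IN (...)` filters by generating one `?` per value, so every value is bound as a parameter rather than interpolated into the SQL string. A small sketch of that pattern (the helper name is illustrative):

```typescript
// Build a parameterized "col IN (?, ?, ...)" fragment from a value list.
// An empty list yields an empty fragment so the caller can omit the filter.
function buildInClause(
  column: string,
  values: readonly unknown[],
): { sql: string; params: unknown[] } {
  if (values.length === 0) {
    return { sql: '', params: [] };
  }
  const placeholders = values.map(() => '?').join(', ');
  return { sql: `${column} IN (${placeholders})`, params: [...values] };
}
```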


@@ -0,0 +1,576 @@
import type { DatabaseSync } from './sqlite';
import type {
AnimeAnilistEntryRow,
AnimeDetailRow,
AnimeEpisodeRow,
AnimeLibraryRow,
AnimeWordRow,
EpisodeCardEventRow,
EpisodesPerDayRow,
ImmersionSessionRollupRow,
MediaArtRow,
MediaDetailRow,
MediaLibraryRow,
NewAnimePerDayRow,
SessionSummaryQueryRow,
StreakCalendarRow,
WatchTimePerAnimeRow,
} from './types';
import { ACTIVE_SESSION_METRICS_CTE, resolvedCoverBlobExpr } from './query-shared';
export function getAnimeLibrary(db: DatabaseSync): AnimeLibraryRow[] {
return db
.prepare(
`
SELECT
a.anime_id AS animeId,
a.canonical_title AS canonicalTitle,
a.anilist_id AS anilistId,
COALESCE(lm.total_sessions, 0) AS totalSessions,
COALESCE(lm.total_active_ms, 0) AS totalActiveMs,
COALESCE(lm.total_cards, 0) AS totalCards,
COALESCE(lm.total_tokens_seen, 0) AS totalTokensSeen,
COUNT(DISTINCT v.video_id) AS episodeCount,
a.episodes_total AS episodesTotal,
COALESCE(lm.last_watched_ms, 0) AS lastWatchedMs
FROM imm_anime a
JOIN imm_lifetime_anime lm ON lm.anime_id = a.anime_id
JOIN imm_videos v ON v.anime_id = a.anime_id
GROUP BY a.anime_id
ORDER BY totalActiveMs DESC, lm.last_watched_ms DESC, canonicalTitle ASC
`,
)
.all() as unknown as AnimeLibraryRow[];
}
export function getAnimeDetail(db: DatabaseSync, animeId: number): AnimeDetailRow | null {
return db
.prepare(
`
${ACTIVE_SESSION_METRICS_CTE}
SELECT
a.anime_id AS animeId,
a.canonical_title AS canonicalTitle,
a.anilist_id AS anilistId,
a.title_romaji AS titleRomaji,
a.title_english AS titleEnglish,
a.title_native AS titleNative,
a.description AS description,
COALESCE(lm.total_sessions, 0) AS totalSessions,
COALESCE(lm.total_active_ms, 0) AS totalActiveMs,
COALESCE(lm.total_cards, 0) AS totalCards,
COALESCE(lm.total_tokens_seen, 0) AS totalTokensSeen,
COALESCE(lm.total_lines_seen, 0) AS totalLinesSeen,
COALESCE(SUM(COALESCE(asm.lookupCount, s.lookup_count, 0)), 0) AS totalLookupCount,
COALESCE(SUM(COALESCE(asm.lookupHits, s.lookup_hits, 0)), 0) AS totalLookupHits,
COALESCE(SUM(COALESCE(asm.yomitanLookupCount, s.yomitan_lookup_count, 0)), 0) AS totalYomitanLookupCount,
COUNT(DISTINCT v.video_id) AS episodeCount,
COALESCE(lm.last_watched_ms, 0) AS lastWatchedMs
FROM imm_anime a
JOIN imm_lifetime_anime lm ON lm.anime_id = a.anime_id
JOIN imm_videos v ON v.anime_id = a.anime_id
LEFT JOIN imm_sessions s ON s.video_id = v.video_id
LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
WHERE a.anime_id = ?
GROUP BY a.anime_id
`,
)
.get(animeId) as unknown as AnimeDetailRow | null;
}
export function getAnimeAnilistEntries(db: DatabaseSync, animeId: number): AnimeAnilistEntryRow[] {
return db
.prepare(
`
SELECT DISTINCT
m.anilist_id AS anilistId,
m.title_romaji AS titleRomaji,
m.title_english AS titleEnglish,
v.parsed_season AS season
FROM imm_videos v
JOIN imm_media_art m ON m.video_id = v.video_id
WHERE v.anime_id = ?
AND m.anilist_id IS NOT NULL
ORDER BY v.parsed_season ASC
`,
)
.all(animeId) as unknown as AnimeAnilistEntryRow[];
}
export function getAnimeEpisodes(db: DatabaseSync, animeId: number): AnimeEpisodeRow[] {
return db
.prepare(
`
${ACTIVE_SESSION_METRICS_CTE}
SELECT
v.anime_id AS animeId,
v.video_id AS videoId,
v.canonical_title AS canonicalTitle,
v.parsed_title AS parsedTitle,
v.parsed_season AS season,
v.parsed_episode AS episode,
v.duration_ms AS durationMs,
(
SELECT COALESCE(
NULLIF(s_recent.ended_media_ms, 0),
(
SELECT MAX(line.segment_end_ms)
FROM imm_subtitle_lines line
WHERE line.session_id = s_recent.session_id
AND line.segment_end_ms IS NOT NULL
),
(
SELECT MAX(event.segment_end_ms)
FROM imm_session_events event
WHERE event.session_id = s_recent.session_id
AND event.segment_end_ms IS NOT NULL
)
)
FROM imm_sessions s_recent
WHERE s_recent.video_id = v.video_id
AND (
s_recent.ended_media_ms IS NOT NULL
OR EXISTS (
SELECT 1
FROM imm_subtitle_lines line
WHERE line.session_id = s_recent.session_id
AND line.segment_end_ms IS NOT NULL
)
OR EXISTS (
SELECT 1
FROM imm_session_events event
WHERE event.session_id = s_recent.session_id
AND event.segment_end_ms IS NOT NULL
)
)
ORDER BY
COALESCE(s_recent.ended_at_ms, s_recent.LAST_UPDATE_DATE, s_recent.started_at_ms) DESC,
s_recent.session_id DESC
LIMIT 1
) AS endedMediaMs,
v.watched AS watched,
COUNT(DISTINCT s.session_id) AS totalSessions,
COALESCE(SUM(COALESCE(asm.activeWatchedMs, s.active_watched_ms, 0)), 0) AS totalActiveMs,
COALESCE(SUM(COALESCE(asm.cardsMined, s.cards_mined, 0)), 0) AS totalCards,
COALESCE(SUM(COALESCE(asm.tokensSeen, s.tokens_seen, 0)), 0) AS totalTokensSeen,
COALESCE(SUM(COALESCE(asm.yomitanLookupCount, s.yomitan_lookup_count, 0)), 0) AS totalYomitanLookupCount,
MAX(s.started_at_ms) AS lastWatchedMs
FROM imm_videos v
LEFT JOIN imm_sessions s ON s.video_id = v.video_id
LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
WHERE v.anime_id = ?
GROUP BY v.video_id
ORDER BY
CASE WHEN v.parsed_season IS NULL THEN 1 ELSE 0 END,
v.parsed_season ASC,
CASE WHEN v.parsed_episode IS NULL THEN 1 ELSE 0 END,
v.parsed_episode ASC,
v.video_id ASC
`,
)
.all(animeId) as unknown as AnimeEpisodeRow[];
}
export function getMediaLibrary(db: DatabaseSync): MediaLibraryRow[] {
return db
.prepare(
`
SELECT
v.video_id AS videoId,
v.canonical_title AS canonicalTitle,
COALESCE(lm.total_sessions, 0) AS totalSessions,
COALESCE(lm.total_active_ms, 0) AS totalActiveMs,
COALESCE(lm.total_cards, 0) AS totalCards,
COALESCE(lm.total_tokens_seen, 0) AS totalTokensSeen,
COALESCE(lm.last_watched_ms, 0) AS lastWatchedMs,
yv.youtube_video_id AS youtubeVideoId,
yv.video_url AS videoUrl,
yv.video_title AS videoTitle,
yv.video_thumbnail_url AS videoThumbnailUrl,
yv.channel_id AS channelId,
yv.channel_name AS channelName,
yv.channel_url AS channelUrl,
yv.channel_thumbnail_url AS channelThumbnailUrl,
yv.uploader_id AS uploaderId,
yv.uploader_url AS uploaderUrl,
yv.description AS description,
CASE
WHEN ma.cover_blob_hash IS NOT NULL OR ma.cover_blob IS NOT NULL THEN 1
ELSE 0
END AS hasCoverArt
FROM imm_videos v
JOIN imm_lifetime_media lm ON lm.video_id = v.video_id
LEFT JOIN imm_media_art ma ON ma.video_id = v.video_id
LEFT JOIN imm_youtube_videos yv ON yv.video_id = v.video_id
ORDER BY lm.last_watched_ms DESC
`,
)
.all() as unknown as MediaLibraryRow[];
}
export function getMediaDetail(db: DatabaseSync, videoId: number): MediaDetailRow | null {
return db
.prepare(
`
${ACTIVE_SESSION_METRICS_CTE}
SELECT
v.video_id AS videoId,
v.canonical_title AS canonicalTitle,
v.anime_id AS animeId,
COALESCE(lm.total_sessions, 0) AS totalSessions,
COALESCE(lm.total_active_ms, 0) AS totalActiveMs,
COALESCE(lm.total_cards, 0) AS totalCards,
COALESCE(lm.total_tokens_seen, 0) AS totalTokensSeen,
COALESCE(lm.total_lines_seen, 0) AS totalLinesSeen,
COALESCE(SUM(COALESCE(asm.lookupCount, s.lookup_count, 0)), 0) AS totalLookupCount,
COALESCE(SUM(COALESCE(asm.lookupHits, s.lookup_hits, 0)), 0) AS totalLookupHits,
COALESCE(SUM(COALESCE(asm.yomitanLookupCount, s.yomitan_lookup_count, 0)), 0) AS totalYomitanLookupCount,
yv.youtube_video_id AS youtubeVideoId,
yv.video_url AS videoUrl,
yv.video_title AS videoTitle,
yv.video_thumbnail_url AS videoThumbnailUrl,
yv.channel_id AS channelId,
yv.channel_name AS channelName,
yv.channel_url AS channelUrl,
yv.channel_thumbnail_url AS channelThumbnailUrl,
yv.uploader_id AS uploaderId,
yv.uploader_url AS uploaderUrl,
yv.description AS description
FROM imm_videos v
JOIN imm_lifetime_media lm ON lm.video_id = v.video_id
LEFT JOIN imm_youtube_videos yv ON yv.video_id = v.video_id
LEFT JOIN imm_sessions s ON s.video_id = v.video_id
LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
WHERE v.video_id = ?
GROUP BY v.video_id
`,
)
.get(videoId) as unknown as MediaDetailRow | null;
}
export function getMediaSessions(
db: DatabaseSync,
videoId: number,
limit = 100,
): SessionSummaryQueryRow[] {
return db
.prepare(
`
${ACTIVE_SESSION_METRICS_CTE}
SELECT
s.session_id AS sessionId,
s.video_id AS videoId,
v.canonical_title AS canonicalTitle,
s.started_at_ms AS startedAtMs,
s.ended_at_ms AS endedAtMs,
COALESCE(asm.totalWatchedMs, s.total_watched_ms, 0) AS totalWatchedMs,
COALESCE(asm.activeWatchedMs, s.active_watched_ms, 0) AS activeWatchedMs,
COALESCE(asm.linesSeen, s.lines_seen, 0) AS linesSeen,
COALESCE(asm.tokensSeen, s.tokens_seen, 0) AS tokensSeen,
COALESCE(asm.cardsMined, s.cards_mined, 0) AS cardsMined,
COALESCE(asm.lookupCount, s.lookup_count, 0) AS lookupCount,
COALESCE(asm.lookupHits, s.lookup_hits, 0) AS lookupHits,
COALESCE(asm.yomitanLookupCount, s.yomitan_lookup_count, 0) AS yomitanLookupCount
FROM imm_sessions s
LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
LEFT JOIN imm_videos v ON v.video_id = s.video_id
WHERE s.video_id = ?
ORDER BY s.started_at_ms DESC
LIMIT ?
`,
)
.all(videoId, limit) as unknown as SessionSummaryQueryRow[];
}
export function getMediaDailyRollups(
db: DatabaseSync,
videoId: number,
limit = 90,
): ImmersionSessionRollupRow[] {
return db
.prepare(
`
WITH recent_days AS (
SELECT DISTINCT rollup_day
FROM imm_daily_rollups
WHERE video_id = ?
ORDER BY rollup_day DESC
LIMIT ?
)
SELECT
rollup_day AS rollupDayOrMonth,
video_id AS videoId,
total_sessions AS totalSessions,
total_active_min AS totalActiveMin,
total_lines_seen AS totalLinesSeen,
total_tokens_seen AS totalTokensSeen,
total_cards AS totalCards,
cards_per_hour AS cardsPerHour,
tokens_per_min AS tokensPerMin,
lookup_hit_rate AS lookupHitRate
FROM imm_daily_rollups
WHERE video_id = ?
AND rollup_day IN (SELECT rollup_day FROM recent_days)
ORDER BY rollup_day DESC, video_id DESC
`,
)
.all(videoId, limit, videoId) as unknown as ImmersionSessionRollupRow[];
}
export function getAnimeDailyRollups(
db: DatabaseSync,
animeId: number,
limit = 90,
): ImmersionSessionRollupRow[] {
return db
.prepare(
`
WITH recent_days AS (
SELECT DISTINCT r.rollup_day
FROM imm_daily_rollups r
JOIN imm_videos v ON v.video_id = r.video_id
WHERE v.anime_id = ?
ORDER BY r.rollup_day DESC
LIMIT ?
)
SELECT r.rollup_day AS rollupDayOrMonth, r.video_id AS videoId,
r.total_sessions AS totalSessions, r.total_active_min AS totalActiveMin,
r.total_lines_seen AS totalLinesSeen,
r.total_tokens_seen AS totalTokensSeen, r.total_cards AS totalCards,
r.cards_per_hour AS cardsPerHour, r.tokens_per_min AS tokensPerMin,
r.lookup_hit_rate AS lookupHitRate
FROM imm_daily_rollups r
JOIN imm_videos v ON v.video_id = r.video_id
WHERE v.anime_id = ?
AND r.rollup_day IN (SELECT rollup_day FROM recent_days)
ORDER BY r.rollup_day DESC, r.video_id DESC
`,
)
.all(animeId, limit, animeId) as unknown as ImmersionSessionRollupRow[];
}
export function getAnimeCoverArt(db: DatabaseSync, animeId: number): MediaArtRow | null {
const resolvedCoverBlob = resolvedCoverBlobExpr('a', 'cab');
return db
.prepare(
`
SELECT
a.video_id AS videoId,
a.anilist_id AS anilistId,
a.cover_url AS coverUrl,
${resolvedCoverBlob} AS coverBlob,
a.title_romaji AS titleRomaji,
a.title_english AS titleEnglish,
a.episodes_total AS episodesTotal,
a.fetched_at_ms AS fetchedAtMs
FROM imm_media_art a
JOIN imm_videos v ON v.video_id = a.video_id
LEFT JOIN imm_cover_art_blobs cab ON cab.blob_hash = a.cover_blob_hash
WHERE v.anime_id = ?
AND ${resolvedCoverBlob} IS NOT NULL
ORDER BY a.fetched_at_ms DESC, a.video_id DESC
LIMIT 1
`,
)
.get(animeId) as unknown as MediaArtRow | null;
}
export function getCoverArt(db: DatabaseSync, videoId: number): MediaArtRow | null {
const resolvedCoverBlob = resolvedCoverBlobExpr('a', 'cab');
return db
.prepare(
`
SELECT
a.video_id AS videoId,
a.anilist_id AS anilistId,
a.cover_url AS coverUrl,
${resolvedCoverBlob} AS coverBlob,
a.title_romaji AS titleRomaji,
a.title_english AS titleEnglish,
a.episodes_total AS episodesTotal,
a.fetched_at_ms AS fetchedAtMs
FROM imm_media_art a
LEFT JOIN imm_cover_art_blobs cab ON cab.blob_hash = a.cover_blob_hash
WHERE a.video_id = ?
`,
)
.get(videoId) as unknown as MediaArtRow | null;
}
export function getStreakCalendar(db: DatabaseSync, days = 90): StreakCalendarRow[] {
const now = new Date();
const localMidnight = new Date(now.getFullYear(), now.getMonth(), now.getDate()).getTime();
const todayLocalDay = Math.floor(localMidnight / 86_400_000);
const cutoffDay = todayLocalDay - days;
return db
.prepare(
`
SELECT rollup_day AS epochDay, SUM(total_active_min) AS totalActiveMin
FROM imm_daily_rollups
WHERE rollup_day >= ?
GROUP BY rollup_day
ORDER BY rollup_day ASC
`,
)
.all(cutoffDay) as StreakCalendarRow[];
}
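getStreakCalendar leans on a "local epoch day" convention: local midnight expressed in epoch milliseconds, divided by 86,400,000. A hypothetical standalone helper (not part of this module) mirroring that inline computation:

```typescript
// Hypothetical helper mirroring the inline computation in getStreakCalendar:
// the number of whole days between the Unix epoch and local midnight of the
// given date.
function toLocalEpochDay(date: Date): number {
  const localMidnight = new Date(
    date.getFullYear(),
    date.getMonth(),
    date.getDate(),
  ).getTime();
  return Math.floor(localMidnight / 86_400_000);
}

// Any two instants on the same local calendar day map to the same epoch day.
const morning = toLocalEpochDay(new Date(2024, 0, 15, 1, 0));
const night = toLocalEpochDay(new Date(2024, 0, 15, 23, 59));
```

Rows with `rollup_day >= todayLocalDay - days` then cover the trailing window regardless of the wall-clock hour the query runs at.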
export function getAnimeWords(db: DatabaseSync, animeId: number, limit = 50): AnimeWordRow[] {
return db
.prepare(
`
SELECT w.id AS wordId, w.headword, w.word, w.reading, w.part_of_speech AS partOfSpeech,
SUM(o.occurrence_count) AS frequency
FROM imm_word_line_occurrences o
JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
JOIN imm_words w ON w.id = o.word_id
WHERE sl.anime_id = ?
GROUP BY w.id
ORDER BY frequency DESC
LIMIT ?
`,
)
.all(animeId, limit) as unknown as AnimeWordRow[];
}
export function getEpisodesPerDay(db: DatabaseSync, limit = 90): EpisodesPerDayRow[] {
return db
.prepare(
`
SELECT CAST(julianday(s.started_at_ms / 1000, 'unixepoch', 'localtime') - 2440587.5 AS INTEGER) AS epochDay,
COUNT(DISTINCT s.video_id) AS episodeCount
FROM imm_sessions s
GROUP BY epochDay
ORDER BY epochDay DESC
LIMIT ?
`,
)
.all(limit) as EpisodesPerDayRow[];
}
export function getNewAnimePerDay(db: DatabaseSync, limit = 90): NewAnimePerDayRow[] {
return db
.prepare(
`
SELECT first_day AS epochDay, COUNT(*) AS newAnimeCount
FROM (
SELECT CAST(julianday(MIN(s.started_at_ms) / 1000, 'unixepoch', 'localtime') - 2440587.5 AS INTEGER) AS first_day
FROM imm_sessions s
JOIN imm_videos v ON v.video_id = s.video_id
WHERE v.anime_id IS NOT NULL
GROUP BY v.anime_id
)
GROUP BY first_day
ORDER BY first_day DESC
LIMIT ?
`,
)
.all(limit) as NewAnimePerDayRow[];
}
export function getWatchTimePerAnime(db: DatabaseSync, limit = 90): WatchTimePerAnimeRow[] {
  // `limit` is a day window here: include rollups from the trailing `limit` local days.
  const nowD = new Date();
const cutoffDay =
Math.floor(
new Date(nowD.getFullYear(), nowD.getMonth(), nowD.getDate()).getTime() / 86_400_000,
) - limit;
return db
.prepare(
`
SELECT r.rollup_day AS epochDay, a.anime_id AS animeId,
a.canonical_title AS animeTitle,
SUM(r.total_active_min) AS totalActiveMin
FROM imm_daily_rollups r
JOIN imm_videos v ON v.video_id = r.video_id
JOIN imm_anime a ON a.anime_id = v.anime_id
WHERE r.rollup_day >= ?
GROUP BY r.rollup_day, a.anime_id
ORDER BY r.rollup_day ASC
`,
)
.all(cutoffDay) as WatchTimePerAnimeRow[];
}
export function getEpisodeWords(db: DatabaseSync, videoId: number, limit = 50): AnimeWordRow[] {
return db
.prepare(
`
SELECT w.id AS wordId, w.headword, w.word, w.reading, w.part_of_speech AS partOfSpeech,
SUM(o.occurrence_count) AS frequency
FROM imm_word_line_occurrences o
JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
JOIN imm_words w ON w.id = o.word_id
WHERE sl.video_id = ?
GROUP BY w.id
ORDER BY frequency DESC
LIMIT ?
`,
)
.all(videoId, limit) as unknown as AnimeWordRow[];
}
export function getEpisodeSessions(db: DatabaseSync, videoId: number): SessionSummaryQueryRow[] {
return db
.prepare(
`
${ACTIVE_SESSION_METRICS_CTE}
SELECT
s.session_id AS sessionId, s.video_id AS videoId,
v.canonical_title AS canonicalTitle,
s.started_at_ms AS startedAtMs, s.ended_at_ms AS endedAtMs,
COALESCE(asm.totalWatchedMs, s.total_watched_ms, 0) AS totalWatchedMs,
COALESCE(asm.activeWatchedMs, s.active_watched_ms, 0) AS activeWatchedMs,
COALESCE(asm.linesSeen, s.lines_seen, 0) AS linesSeen,
COALESCE(asm.tokensSeen, s.tokens_seen, 0) AS tokensSeen,
COALESCE(asm.cardsMined, s.cards_mined, 0) AS cardsMined,
COALESCE(asm.lookupCount, s.lookup_count, 0) AS lookupCount,
COALESCE(asm.lookupHits, s.lookup_hits, 0) AS lookupHits,
COALESCE(asm.yomitanLookupCount, s.yomitan_lookup_count, 0) AS yomitanLookupCount
FROM imm_sessions s
JOIN imm_videos v ON v.video_id = s.video_id
LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
WHERE s.video_id = ?
ORDER BY s.started_at_ms DESC
`,
)
.all(videoId) as SessionSummaryQueryRow[];
}
export function getEpisodeCardEvents(db: DatabaseSync, videoId: number): EpisodeCardEventRow[] {
const rows = db
.prepare(
`
SELECT e.event_id AS eventId, e.session_id AS sessionId,
e.ts_ms AS tsMs, e.cards_delta AS cardsDelta,
e.payload_json AS payloadJson
FROM imm_session_events e
JOIN imm_sessions s ON s.session_id = e.session_id
WHERE s.video_id = ? AND e.event_type = 4
ORDER BY e.ts_ms DESC
`,
)
.all(videoId) as Array<{
eventId: number;
sessionId: number;
tsMs: number;
cardsDelta: number;
payloadJson: string | null;
}>;
return rows.map((row) => {
let noteIds: number[] = [];
if (row.payloadJson) {
try {
const parsed = JSON.parse(row.payloadJson);
if (Array.isArray(parsed.noteIds)) noteIds = parsed.noteIds;
} catch {
  // Ignore malformed payload JSON; leave noteIds empty.
}
}
return {
eventId: row.eventId,
sessionId: row.sessionId,
tsMs: row.tsMs,
cardsDelta: row.cardsDelta,
noteIds,
};
});
}


@@ -0,0 +1,546 @@
import { createHash } from 'node:crypto';
import type { DatabaseSync } from './sqlite';
import { buildCoverBlobReference, normalizeCoverBlobBytes } from './storage';
import { rebuildLifetimeSummariesInTransaction } from './lifetime';
import { rebuildRollupsInTransaction } from './maintenance';
import { nowMs } from './time';
import { PartOfSpeech, type MergedToken } from '../../../types';
import { shouldExcludeTokenFromVocabularyPersistence } from '../tokenizer/annotation-stage';
import { deriveStoredPartOfSpeech } from '../tokenizer/part-of-speech';
import {
cleanupUnusedCoverArtBlobHash,
deleteSessionsByIds,
findSharedCoverBlobHash,
getAffectedKanjiIdsForSessions,
getAffectedKanjiIdsForVideo,
getAffectedWordIdsForSessions,
getAffectedWordIdsForVideo,
refreshLexicalAggregates,
toDbMs,
} from './query-shared';
type CleanupVocabularyRow = {
id: number;
word: string;
headword: string;
reading: string | null;
part_of_speech: string | null;
pos1: string | null;
pos2: string | null;
pos3: string | null;
first_seen: number | null;
last_seen: number | null;
frequency: number | null;
};
type ResolvedVocabularyPos = {
headword: string;
reading: string;
hasPosMetadata: boolean;
partOfSpeech: PartOfSpeech;
pos1: string;
pos2: string;
pos3: string;
};
type CleanupVocabularyStatsOptions = {
resolveLegacyPos?: (row: CleanupVocabularyRow) => Promise<{
headword: string;
reading: string;
partOfSpeech: string;
pos1: string;
pos2: string;
pos3: string;
} | null>;
};
function toStoredWordToken(row: {
word: string;
headword: string;
part_of_speech: string | null;
pos1: string | null;
pos2: string | null;
pos3: string | null;
}): MergedToken {
return {
surface: row.word || row.headword || '',
reading: '',
headword: row.headword || row.word || '',
startPos: 0,
endPos: 0,
partOfSpeech: deriveStoredPartOfSpeech({
partOfSpeech: row.part_of_speech,
pos1: row.pos1,
}),
pos1: row.pos1 ?? '',
pos2: row.pos2 ?? '',
pos3: row.pos3 ?? '',
isMerged: true,
isKnown: false,
isNPlusOneTarget: false,
};
}
function normalizePosField(value: string | null | undefined): string {
return typeof value === 'string' ? value.trim() : '';
}
function resolveStoredVocabularyPos(row: CleanupVocabularyRow): ResolvedVocabularyPos | null {
const headword = normalizePosField(row.headword);
const reading = normalizePosField(row.reading);
const partOfSpeechRaw = typeof row.part_of_speech === 'string' ? row.part_of_speech.trim() : '';
const pos1 = normalizePosField(row.pos1);
const pos2 = normalizePosField(row.pos2);
const pos3 = normalizePosField(row.pos3);
if (!headword && !reading && !partOfSpeechRaw && !pos1 && !pos2 && !pos3) {
return null;
}
return {
headword: headword || normalizePosField(row.word),
reading,
hasPosMetadata: Boolean(partOfSpeechRaw || pos1 || pos2 || pos3),
partOfSpeech: deriveStoredPartOfSpeech({
partOfSpeech: partOfSpeechRaw,
pos1,
}),
pos1,
pos2,
pos3,
};
}
function hasStructuredPos(pos: ResolvedVocabularyPos | null): boolean {
return Boolean(pos?.hasPosMetadata && (pos.pos1 || pos.pos2 || pos.pos3 || pos.partOfSpeech));
}
function needsLegacyVocabularyMetadataRepair(
row: CleanupVocabularyRow,
stored: ResolvedVocabularyPos | null,
): boolean {
if (!stored) {
return true;
}
if (!hasStructuredPos(stored)) {
return true;
}
if (!stored.reading) {
return true;
}
if (!stored.headword) {
return true;
}
return stored.headword === normalizePosField(row.word);
}
function shouldUpdateStoredVocabularyPos(
row: CleanupVocabularyRow,
next: ResolvedVocabularyPos,
): boolean {
return (
normalizePosField(row.headword) !== next.headword ||
normalizePosField(row.reading) !== next.reading ||
(next.hasPosMetadata &&
(normalizePosField(row.part_of_speech) !== next.partOfSpeech ||
normalizePosField(row.pos1) !== next.pos1 ||
normalizePosField(row.pos2) !== next.pos2 ||
normalizePosField(row.pos3) !== next.pos3))
);
}
function chooseMergedPartOfSpeech(
current: string | null | undefined,
incoming: ResolvedVocabularyPos,
): string {
const normalizedCurrent = normalizePosField(current);
if (
normalizedCurrent &&
normalizedCurrent !== PartOfSpeech.other &&
incoming.partOfSpeech === PartOfSpeech.other
) {
return normalizedCurrent;
}
return incoming.partOfSpeech;
}
async function maybeResolveLegacyVocabularyPos(
row: CleanupVocabularyRow,
options: CleanupVocabularyStatsOptions,
): Promise<ResolvedVocabularyPos | null> {
const stored = resolveStoredVocabularyPos(row);
if (!needsLegacyVocabularyMetadataRepair(row, stored) || !options.resolveLegacyPos) {
return stored;
}
const resolved = await options.resolveLegacyPos(row);
if (resolved) {
return {
headword: normalizePosField(resolved.headword) || normalizePosField(row.word),
reading: normalizePosField(resolved.reading),
hasPosMetadata: true,
partOfSpeech: deriveStoredPartOfSpeech({
partOfSpeech: resolved.partOfSpeech,
pos1: resolved.pos1,
}),
pos1: normalizePosField(resolved.pos1),
pos2: normalizePosField(resolved.pos2),
pos3: normalizePosField(resolved.pos3),
};
}
return stored;
}
export async function cleanupVocabularyStats(
db: DatabaseSync,
options: CleanupVocabularyStatsOptions = {},
): Promise<{ scanned: number; kept: number; deleted: number; repaired: number }> {
const rows = db
.prepare(
`SELECT id, word, headword, reading, part_of_speech, pos1, pos2, pos3, first_seen, last_seen, frequency
FROM imm_words`,
)
.all() as CleanupVocabularyRow[];
const findDuplicateStmt = db.prepare(
`SELECT id, part_of_speech, pos1, pos2, pos3, first_seen, last_seen, frequency
FROM imm_words
WHERE headword = ? AND word = ? AND reading = ? AND id != ?`,
);
const deleteStmt = db.prepare('DELETE FROM imm_words WHERE id = ?');
const updateStmt = db.prepare(
`UPDATE imm_words
SET headword = ?, reading = ?, part_of_speech = ?, pos1 = ?, pos2 = ?, pos3 = ?
WHERE id = ?`,
);
const mergeWordStmt = db.prepare(
`UPDATE imm_words
SET
frequency = COALESCE(frequency, 0) + ?,
part_of_speech = ?,
pos1 = ?,
pos2 = ?,
pos3 = ?,
first_seen = MIN(COALESCE(first_seen, ?), ?),
last_seen = MAX(COALESCE(last_seen, ?), ?)
WHERE id = ?`,
);
const moveOccurrencesStmt = db.prepare(
`INSERT INTO imm_word_line_occurrences (line_id, word_id, occurrence_count)
SELECT line_id, ?, occurrence_count
FROM imm_word_line_occurrences
WHERE word_id = ?
ON CONFLICT(line_id, word_id) DO UPDATE SET
occurrence_count = imm_word_line_occurrences.occurrence_count + excluded.occurrence_count`,
);
const deleteOccurrencesStmt = db.prepare(
'DELETE FROM imm_word_line_occurrences WHERE word_id = ?',
);
let kept = 0;
let deleted = 0;
let repaired = 0;
for (const row of rows) {
const resolvedPos = await maybeResolveLegacyVocabularyPos(row, options);
const shouldRepair = Boolean(resolvedPos && shouldUpdateStoredVocabularyPos(row, resolvedPos));
if (resolvedPos && shouldRepair) {
const duplicate = findDuplicateStmt.get(
resolvedPos.headword,
row.word,
resolvedPos.reading,
row.id,
) as {
id: number;
part_of_speech: string | null;
pos1: string | null;
pos2: string | null;
pos3: string | null;
first_seen: number | null;
last_seen: number | null;
frequency: number | null;
} | null;
if (duplicate) {
moveOccurrencesStmt.run(duplicate.id, row.id);
deleteOccurrencesStmt.run(row.id);
mergeWordStmt.run(
row.frequency ?? 0,
chooseMergedPartOfSpeech(duplicate.part_of_speech, resolvedPos),
normalizePosField(duplicate.pos1) || resolvedPos.pos1,
normalizePosField(duplicate.pos2) || resolvedPos.pos2,
normalizePosField(duplicate.pos3) || resolvedPos.pos3,
row.first_seen ?? duplicate.first_seen ?? 0,
row.first_seen ?? duplicate.first_seen ?? 0,
row.last_seen ?? duplicate.last_seen ?? 0,
row.last_seen ?? duplicate.last_seen ?? 0,
duplicate.id,
);
deleteStmt.run(row.id);
repaired += 1;
deleted += 1;
continue;
}
updateStmt.run(
resolvedPos.headword,
resolvedPos.reading,
resolvedPos.partOfSpeech,
resolvedPos.pos1,
resolvedPos.pos2,
resolvedPos.pos3,
row.id,
);
repaired += 1;
}
const effectiveRow = {
...row,
headword: resolvedPos?.headword ?? row.headword,
reading: resolvedPos?.reading ?? row.reading,
part_of_speech: resolvedPos?.hasPosMetadata ? resolvedPos.partOfSpeech : row.part_of_speech,
pos1: resolvedPos?.pos1 ?? row.pos1,
pos2: resolvedPos?.pos2 ?? row.pos2,
pos3: resolvedPos?.pos3 ?? row.pos3,
};
const missingPos =
!normalizePosField(effectiveRow.part_of_speech) &&
!normalizePosField(effectiveRow.pos1) &&
!normalizePosField(effectiveRow.pos2) &&
!normalizePosField(effectiveRow.pos3);
if (
missingPos ||
shouldExcludeTokenFromVocabularyPersistence(toStoredWordToken(effectiveRow))
) {
deleteStmt.run(row.id);
deleted += 1;
continue;
}
kept += 1;
}
return {
scanned: rows.length,
kept,
deleted,
repaired,
};
}
export function upsertCoverArt(
db: DatabaseSync,
videoId: number,
art: {
anilistId: number | null;
coverUrl: string | null;
coverBlob: ArrayBuffer | Uint8Array | Buffer | null;
titleRomaji: string | null;
titleEnglish: string | null;
episodesTotal: number | null;
},
): void {
const existing = db
.prepare(
`
SELECT cover_blob_hash AS coverBlobHash
FROM imm_media_art
WHERE video_id = ?
`,
)
.get(videoId) as { coverBlobHash: string | null } | undefined;
const sharedCoverBlobHash = findSharedCoverBlobHash(db, videoId, art.anilistId, art.coverUrl);
const fetchedAtMs = toDbMs(nowMs());
const coverBlob = normalizeCoverBlobBytes(art.coverBlob);
const computedCoverBlobHash =
coverBlob && coverBlob.length > 0
? createHash('sha256').update(coverBlob).digest('hex')
: null;
let coverBlobHash = computedCoverBlobHash ?? sharedCoverBlobHash ?? null;
if (!coverBlobHash && (!coverBlob || coverBlob.length === 0)) {
coverBlobHash = existing?.coverBlobHash ?? null;
}
if (computedCoverBlobHash && coverBlob && coverBlob.length > 0) {
db.prepare(
`
INSERT INTO imm_cover_art_blobs (blob_hash, cover_blob, CREATED_DATE, LAST_UPDATE_DATE)
VALUES (?, ?, ?, ?)
ON CONFLICT(blob_hash) DO UPDATE SET
LAST_UPDATE_DATE = excluded.LAST_UPDATE_DATE
`,
).run(computedCoverBlobHash, coverBlob, fetchedAtMs, fetchedAtMs);
}
db.prepare(
`
INSERT INTO imm_media_art (
video_id, anilist_id, cover_url, cover_blob, cover_blob_hash,
title_romaji, title_english, episodes_total,
fetched_at_ms, CREATED_DATE, LAST_UPDATE_DATE
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(video_id) DO UPDATE SET
anilist_id = excluded.anilist_id,
cover_url = excluded.cover_url,
cover_blob = excluded.cover_blob,
cover_blob_hash = excluded.cover_blob_hash,
title_romaji = excluded.title_romaji,
title_english = excluded.title_english,
episodes_total = excluded.episodes_total,
fetched_at_ms = excluded.fetched_at_ms,
LAST_UPDATE_DATE = excluded.LAST_UPDATE_DATE
`,
).run(
videoId,
art.anilistId,
art.coverUrl,
coverBlobHash ? buildCoverBlobReference(coverBlobHash) : coverBlob,
coverBlobHash,
art.titleRomaji,
art.titleEnglish,
art.episodesTotal,
fetchedAtMs,
fetchedAtMs,
fetchedAtMs,
);
if (existing?.coverBlobHash !== coverBlobHash) {
cleanupUnusedCoverArtBlobHash(db, existing?.coverBlobHash ?? null);
}
}
export function updateAnimeAnilistInfo(
db: DatabaseSync,
videoId: number,
info: {
anilistId: number;
titleRomaji: string | null;
titleEnglish: string | null;
titleNative: string | null;
episodesTotal: number | null;
},
): void {
const row = db.prepare('SELECT anime_id FROM imm_videos WHERE video_id = ?').get(videoId) as {
anime_id: number | null;
} | null;
if (!row?.anime_id) return;
db.prepare(
`
UPDATE imm_anime
SET
anilist_id = COALESCE(?, anilist_id),
title_romaji = COALESCE(?, title_romaji),
title_english = COALESCE(?, title_english),
title_native = COALESCE(?, title_native),
episodes_total = COALESCE(?, episodes_total),
LAST_UPDATE_DATE = ?
WHERE anime_id = ?
`,
).run(
info.anilistId,
info.titleRomaji,
info.titleEnglish,
info.titleNative,
info.episodesTotal,
toDbMs(nowMs()),
row.anime_id,
);
}
export function markVideoWatched(db: DatabaseSync, videoId: number, watched: boolean): void {
db.prepare('UPDATE imm_videos SET watched = ?, LAST_UPDATE_DATE = ? WHERE video_id = ?').run(
watched ? 1 : 0,
toDbMs(nowMs()),
videoId,
);
}
export function getVideoDurationMs(db: DatabaseSync, videoId: number): number {
const row = db.prepare('SELECT duration_ms FROM imm_videos WHERE video_id = ?').get(videoId) as {
duration_ms: number;
} | null;
return row?.duration_ms ?? 0;
}
export function isVideoWatched(db: DatabaseSync, videoId: number): boolean {
const row = db.prepare('SELECT watched FROM imm_videos WHERE video_id = ?').get(videoId) as {
watched: number;
} | null;
return row?.watched === 1;
}
export function deleteSession(db: DatabaseSync, sessionId: number): void {
const sessionIds = [sessionId];
const affectedWordIds = getAffectedWordIdsForSessions(db, sessionIds);
const affectedKanjiIds = getAffectedKanjiIdsForSessions(db, sessionIds);
db.exec('BEGIN IMMEDIATE');
try {
deleteSessionsByIds(db, sessionIds);
refreshLexicalAggregates(db, affectedWordIds, affectedKanjiIds);
rebuildLifetimeSummariesInTransaction(db);
rebuildRollupsInTransaction(db);
db.exec('COMMIT');
} catch (error) {
db.exec('ROLLBACK');
throw error;
}
}
export function deleteSessions(db: DatabaseSync, sessionIds: number[]): void {
if (sessionIds.length === 0) return;
const affectedWordIds = getAffectedWordIdsForSessions(db, sessionIds);
const affectedKanjiIds = getAffectedKanjiIdsForSessions(db, sessionIds);
db.exec('BEGIN IMMEDIATE');
try {
deleteSessionsByIds(db, sessionIds);
refreshLexicalAggregates(db, affectedWordIds, affectedKanjiIds);
rebuildLifetimeSummariesInTransaction(db);
rebuildRollupsInTransaction(db);
db.exec('COMMIT');
} catch (error) {
db.exec('ROLLBACK');
throw error;
}
}
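deleteSession, deleteSessions, and deleteVideo repeat the same BEGIN IMMEDIATE / COMMIT / ROLLBACK choreography. A hedged sketch of a wrapper that could factor it out; `withImmediateTransaction` and the minimal `ExecDb` interface are hypothetical, introduced here only so the example stands alone:

```typescript
// Hypothetical wrapper for the BEGIN IMMEDIATE / COMMIT / ROLLBACK pattern
// shared by the delete helpers. ExecDb is a minimal stand-in interface so
// the sketch stays self-contained.
type ExecDb = { exec(sql: string): void };

function withImmediateTransaction<T>(db: ExecDb, fn: () => T): T {
  db.exec('BEGIN IMMEDIATE');
  try {
    const result = fn();
    db.exec('COMMIT');
    return result;
  } catch (error) {
    db.exec('ROLLBACK');
    throw error;
  }
}
```

Each delete helper would then pass its statement sequence as `fn`, keeping the rollback-on-throw behavior in one place.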
export function deleteVideo(db: DatabaseSync, videoId: number): void {
const artRow = db
.prepare(
`
SELECT cover_blob_hash AS coverBlobHash
FROM imm_media_art
WHERE video_id = ?
`,
)
.get(videoId) as { coverBlobHash: string | null } | undefined;
const affectedWordIds = getAffectedWordIdsForVideo(db, videoId);
const affectedKanjiIds = getAffectedKanjiIdsForVideo(db, videoId);
const sessions = db
.prepare('SELECT session_id FROM imm_sessions WHERE video_id = ?')
.all(videoId) as Array<{ session_id: number }>;
db.exec('BEGIN IMMEDIATE');
try {
deleteSessionsByIds(
db,
sessions.map((session) => session.session_id),
);
db.prepare('DELETE FROM imm_subtitle_lines WHERE video_id = ?').run(videoId);
db.prepare('DELETE FROM imm_daily_rollups WHERE video_id = ?').run(videoId);
db.prepare('DELETE FROM imm_monthly_rollups WHERE video_id = ?').run(videoId);
db.prepare('DELETE FROM imm_media_art WHERE video_id = ?').run(videoId);
cleanupUnusedCoverArtBlobHash(db, artRow?.coverBlobHash ?? null);
db.prepare('DELETE FROM imm_videos WHERE video_id = ?').run(videoId);
refreshLexicalAggregates(db, affectedWordIds, affectedKanjiIds);
rebuildLifetimeSummariesInTransaction(db);
rebuildRollupsInTransaction(db);
db.exec('COMMIT');
} catch (error) {
db.exec('ROLLBACK');
throw error;
}
}


@@ -0,0 +1,351 @@
import type { DatabaseSync } from './sqlite';
import { nowMs } from './time';
import type {
ImmersionSessionRollupRow,
SessionSummaryQueryRow,
SessionTimelineRow,
} from './types';
import { ACTIVE_SESSION_METRICS_CTE } from './query-shared';
export function getSessionSummaries(db: DatabaseSync, limit = 50): SessionSummaryQueryRow[] {
const prepared = db.prepare(`
${ACTIVE_SESSION_METRICS_CTE}
SELECT
s.session_id AS sessionId,
s.video_id AS videoId,
v.canonical_title AS canonicalTitle,
v.anime_id AS animeId,
a.canonical_title AS animeTitle,
s.started_at_ms AS startedAtMs,
s.ended_at_ms AS endedAtMs,
COALESCE(asm.totalWatchedMs, s.total_watched_ms, 0) AS totalWatchedMs,
COALESCE(asm.activeWatchedMs, s.active_watched_ms, 0) AS activeWatchedMs,
COALESCE(asm.linesSeen, s.lines_seen, 0) AS linesSeen,
COALESCE(asm.tokensSeen, s.tokens_seen, 0) AS tokensSeen,
COALESCE(asm.cardsMined, s.cards_mined, 0) AS cardsMined,
COALESCE(asm.lookupCount, s.lookup_count, 0) AS lookupCount,
COALESCE(asm.lookupHits, s.lookup_hits, 0) AS lookupHits,
COALESCE(asm.yomitanLookupCount, s.yomitan_lookup_count, 0) AS yomitanLookupCount
FROM imm_sessions s
LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
LEFT JOIN imm_videos v ON v.video_id = s.video_id
LEFT JOIN imm_anime a ON a.anime_id = v.anime_id
ORDER BY s.started_at_ms DESC
LIMIT ?
`);
return prepared.all(limit) as unknown as SessionSummaryQueryRow[];
}
export function getSessionTimeline(
db: DatabaseSync,
sessionId: number,
limit?: number,
): SessionTimelineRow[] {
const select = `
SELECT
sample_ms AS sampleMs,
total_watched_ms AS totalWatchedMs,
active_watched_ms AS activeWatchedMs,
lines_seen AS linesSeen,
tokens_seen AS tokensSeen,
cards_mined AS cardsMined
FROM imm_session_telemetry
WHERE session_id = ?
ORDER BY sample_ms DESC, telemetry_id DESC
`;
if (limit === undefined) {
return db.prepare(select).all(sessionId) as unknown as SessionTimelineRow[];
}
return db
.prepare(`${select}\n LIMIT ?`)
.all(sessionId, limit) as unknown as SessionTimelineRow[];
}
/** Returns all distinct headwords in the vocabulary table (global). */
export function getAllDistinctHeadwords(db: DatabaseSync): string[] {
const rows = db.prepare('SELECT DISTINCT headword FROM imm_words').all() as Array<{
headword: string;
}>;
return rows.map((r) => r.headword);
}
/** Returns distinct headwords seen for a specific anime. */
export function getAnimeDistinctHeadwords(db: DatabaseSync, animeId: number): string[] {
const rows = db
.prepare(
`
SELECT DISTINCT w.headword
FROM imm_word_line_occurrences o
JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
JOIN imm_words w ON w.id = o.word_id
WHERE sl.anime_id = ?
`,
)
.all(animeId) as Array<{ headword: string }>;
return rows.map((r) => r.headword);
}
/** Returns distinct headwords seen for a specific video/media. */
export function getMediaDistinctHeadwords(db: DatabaseSync, videoId: number): string[] {
const rows = db
.prepare(
`
SELECT DISTINCT w.headword
FROM imm_word_line_occurrences o
JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
JOIN imm_words w ON w.id = o.word_id
WHERE sl.video_id = ?
`,
)
.all(videoId) as Array<{ headword: string }>;
return rows.map((r) => r.headword);
}
/**
* Returns the headword for each word seen in a session, grouped by line_index.
* Used to compute cumulative known-words counts for the session timeline chart.
*/
export function getSessionWordsByLine(
db: DatabaseSync,
sessionId: number,
): Array<{ lineIndex: number; headword: string; occurrenceCount: number }> {
const stmt = db.prepare(`
SELECT
sl.line_index AS lineIndex,
w.headword AS headword,
wlo.occurrence_count AS occurrenceCount
FROM imm_subtitle_lines sl
JOIN imm_word_line_occurrences wlo ON wlo.line_id = sl.line_id
JOIN imm_words w ON w.id = wlo.word_id
WHERE sl.session_id = ?
ORDER BY sl.line_index ASC
`);
return stmt.all(sessionId) as Array<{
lineIndex: number;
headword: string;
occurrenceCount: number;
}>;
}
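The doc comment above mentions computing cumulative known-words counts for the session timeline chart. A hypothetical reducer showing one way the rows could feed that series; the known-word set and the output shape are assumptions for illustration, not this module's API:

```typescript
// Hypothetical reducer: for each subtitle line, count how many distinct
// known headwords have appeared so far in the session.
type SessionWordRow = { lineIndex: number; headword: string; occurrenceCount: number };

function cumulativeKnownWords(
  rows: SessionWordRow[],
  knownHeadwords: Set<string>,
): Array<{ lineIndex: number; knownSoFar: number }> {
  const seen = new Set<string>();
  const series: Array<{ lineIndex: number; knownSoFar: number }> = [];
  let lastLine = -1;
  for (const row of rows) {
    if (knownHeadwords.has(row.headword)) seen.add(row.headword);
    if (row.lineIndex !== lastLine) {
      series.push({ lineIndex: row.lineIndex, knownSoFar: seen.size });
      lastLine = row.lineIndex;
    } else {
      // Same line: fold later occurrences into the existing entry.
      series[series.length - 1].knownSoFar = seen.size;
    }
  }
  return series;
}
```

Because getSessionWordsByLine orders rows by `line_index` ascending, a single forward pass suffices.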
function getNewWordCounts(db: DatabaseSync): { newWordsToday: number; newWordsThisWeek: number } {
const now = new Date();
  // first_seen is compared in epoch seconds, so the ms timestamps are divided by 1000.
  const todayStartSec = new Date(now.getFullYear(), now.getMonth(), now.getDate()).getTime() / 1000;
const weekAgoSec =
new Date(now.getFullYear(), now.getMonth(), now.getDate() - 7).getTime() / 1000;
const row = db
.prepare(
`
WITH headword_first_seen AS (
SELECT
headword,
MIN(first_seen) AS first_seen
FROM imm_words
WHERE first_seen IS NOT NULL
AND headword IS NOT NULL
AND headword != ''
GROUP BY headword
)
SELECT
COALESCE(SUM(CASE WHEN first_seen >= ? THEN 1 ELSE 0 END), 0) AS today,
COALESCE(SUM(CASE WHEN first_seen >= ? THEN 1 ELSE 0 END), 0) AS week
FROM headword_first_seen
`,
)
.get(todayStartSec, weekAgoSec) as { today: number; week: number } | null;
return {
newWordsToday: Number(row?.today ?? 0),
newWordsThisWeek: Number(row?.week ?? 0),
};
}
export function getQueryHints(db: DatabaseSync): {
totalSessions: number;
activeSessions: number;
episodesToday: number;
activeAnimeCount: number;
totalEpisodesWatched: number;
totalAnimeCompleted: number;
totalActiveMin: number;
totalCards: number;
activeDays: number;
totalTokensSeen: number;
totalLookupCount: number;
totalLookupHits: number;
totalYomitanLookupCount: number;
newWordsToday: number;
newWordsThisWeek: number;
} {
const active = db.prepare('SELECT COUNT(*) AS total FROM imm_sessions WHERE ended_at_ms IS NULL');
const activeSessions = Number((active.get() as { total?: number } | null)?.total ?? 0);
const lifetime = db
.prepare(
`
SELECT
total_sessions AS totalSessions,
total_active_ms AS totalActiveMs,
total_cards AS totalCards,
active_days AS activeDays,
episodes_completed AS episodesCompleted,
anime_completed AS animeCompleted
FROM imm_lifetime_global
WHERE global_id = 1
`,
)
.get() as {
totalSessions: number;
totalActiveMs: number;
totalCards: number;
activeDays: number;
episodesCompleted: number;
animeCompleted: number;
} | null;
const now = new Date();
const todayLocal = Math.floor(
new Date(now.getFullYear(), now.getMonth(), now.getDate()).getTime() / 86_400_000,
);
const episodesToday =
(
db
.prepare(
`
SELECT COUNT(DISTINCT s.video_id) AS count
FROM imm_sessions s
WHERE CAST(julianday(s.started_at_ms / 1000, 'unixepoch', 'localtime') - 2440587.5 AS INTEGER) = ?
`,
)
.get(todayLocal) as { count: number }
)?.count ?? 0;
const thirtyDaysAgoMs = nowMs() - 30 * 86400000;
const activeAnimeCount =
(
db
.prepare(
`
SELECT COUNT(DISTINCT v.anime_id) AS count
FROM imm_sessions s
JOIN imm_videos v ON v.video_id = s.video_id
WHERE v.anime_id IS NOT NULL
AND s.started_at_ms >= ?
`,
)
.get(thirtyDaysAgoMs) as { count: number }
)?.count ?? 0;
const totalEpisodesWatched = Number(lifetime?.episodesCompleted ?? 0);
const totalAnimeCompleted = Number(lifetime?.animeCompleted ?? 0);
const totalSessions = Number(lifetime?.totalSessions ?? 0);
const totalActiveMin = Math.floor(Math.max(0, lifetime?.totalActiveMs ?? 0) / 60000);
const totalCards = Number(lifetime?.totalCards ?? 0);
const activeDays = Number(lifetime?.activeDays ?? 0);
const lookupTotals = db
.prepare(
`
SELECT
COALESCE(SUM(COALESCE(t.tokens_seen, s.tokens_seen, 0)), 0) AS totalTokensSeen,
COALESCE(SUM(COALESCE(t.lookup_count, s.lookup_count, 0)), 0) AS totalLookupCount,
COALESCE(SUM(COALESCE(t.lookup_hits, s.lookup_hits, 0)), 0) AS totalLookupHits,
COALESCE(SUM(COALESCE(t.yomitan_lookup_count, s.yomitan_lookup_count, 0)), 0) AS totalYomitanLookupCount
FROM imm_sessions s
LEFT JOIN (
SELECT
session_id,
MAX(tokens_seen) AS tokens_seen,
MAX(lookup_count) AS lookup_count,
MAX(lookup_hits) AS lookup_hits,
MAX(yomitan_lookup_count) AS yomitan_lookup_count
FROM imm_session_telemetry
GROUP BY session_id
) t ON t.session_id = s.session_id
WHERE s.ended_at_ms IS NOT NULL
`,
)
.get() as {
totalTokensSeen: number;
totalLookupCount: number;
totalLookupHits: number;
totalYomitanLookupCount: number;
} | null;
return {
totalSessions,
activeSessions,
episodesToday,
activeAnimeCount,
totalEpisodesWatched,
totalAnimeCompleted,
totalActiveMin,
totalCards,
activeDays,
totalTokensSeen: Number(lookupTotals?.totalTokensSeen ?? 0),
totalLookupCount: Number(lookupTotals?.totalLookupCount ?? 0),
totalLookupHits: Number(lookupTotals?.totalLookupHits ?? 0),
totalYomitanLookupCount: Number(lookupTotals?.totalYomitanLookupCount ?? 0),
...getNewWordCounts(db),
};
}
export function getDailyRollups(db: DatabaseSync, limit = 60): ImmersionSessionRollupRow[] {
const prepared = db.prepare(`
WITH recent_days AS (
SELECT DISTINCT rollup_day
FROM imm_daily_rollups
ORDER BY rollup_day DESC
LIMIT ?
)
SELECT
r.rollup_day AS rollupDayOrMonth,
r.video_id AS videoId,
r.total_sessions AS totalSessions,
r.total_active_min AS totalActiveMin,
r.total_lines_seen AS totalLinesSeen,
r.total_tokens_seen AS totalTokensSeen,
r.total_cards AS totalCards,
r.cards_per_hour AS cardsPerHour,
r.tokens_per_min AS tokensPerMin,
r.lookup_hit_rate AS lookupHitRate
FROM imm_daily_rollups r
WHERE r.rollup_day IN (SELECT rollup_day FROM recent_days)
ORDER BY r.rollup_day DESC, r.video_id DESC
`);
return prepared.all(limit) as unknown as ImmersionSessionRollupRow[];
}
export function getMonthlyRollups(db: DatabaseSync, limit = 24): ImmersionSessionRollupRow[] {
const prepared = db.prepare(`
WITH recent_months AS (
SELECT DISTINCT rollup_month
FROM imm_monthly_rollups
ORDER BY rollup_month DESC
LIMIT ?
)
SELECT
rollup_month AS rollupDayOrMonth,
video_id AS videoId,
total_sessions AS totalSessions,
total_active_min AS totalActiveMin,
total_lines_seen AS totalLinesSeen,
total_tokens_seen AS totalTokensSeen,
total_cards AS totalCards,
CASE
WHEN total_active_min > 0 THEN (total_cards * 60.0) / total_active_min
ELSE NULL
END AS cardsPerHour,
CASE
WHEN total_active_min > 0 THEN total_tokens_seen * 1.0 / total_active_min
ELSE NULL
END AS tokensPerMin,
NULL AS lookupHitRate
FROM imm_monthly_rollups
WHERE rollup_month IN (SELECT rollup_month FROM recent_months)
ORDER BY rollup_month DESC, video_id DESC
`);
return prepared.all(limit) as unknown as ImmersionSessionRollupRow[];
}
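The derived-rate columns in `getMonthlyRollups` guard against division by zero by returning `NULL` when no active minutes were recorded. The same arithmetic, sketched in TypeScript (the helper name here is illustrative, not part of the module):

```typescript
// Mirrors the CASE expressions in getMonthlyRollups: rates are null
// when totalActiveMin is zero, instead of dividing by zero.
function deriveMonthlyRates(
  totalCards: number,
  totalTokensSeen: number,
  totalActiveMin: number,
): { cardsPerHour: number | null; tokensPerMin: number | null } {
  if (totalActiveMin <= 0) {
    return { cardsPerHour: null, tokensPerMin: null };
  }
  return {
    // cards scaled from a per-minute to a per-hour rate
    cardsPerHour: (totalCards * 60.0) / totalActiveMin,
    tokensPerMin: (totalTokensSeen * 1.0) / totalActiveMin,
  };
}
```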

View File

@@ -0,0 +1,282 @@
import type { DatabaseSync } from './sqlite';
export const ACTIVE_SESSION_METRICS_CTE = `
WITH active_session_metrics AS (
SELECT
t.session_id AS sessionId,
MAX(t.total_watched_ms) AS totalWatchedMs,
MAX(t.active_watched_ms) AS activeWatchedMs,
MAX(t.lines_seen) AS linesSeen,
MAX(t.tokens_seen) AS tokensSeen,
MAX(t.cards_mined) AS cardsMined,
MAX(t.lookup_count) AS lookupCount,
MAX(t.lookup_hits) AS lookupHits,
MAX(t.yomitan_lookup_count) AS yomitanLookupCount
FROM imm_session_telemetry t
JOIN imm_sessions s ON s.session_id = t.session_id
WHERE s.ended_at_ms IS NULL
GROUP BY t.session_id
)
`;
export function makePlaceholders(values: number[]): string {
return values.map(() => '?').join(',');
}
export function resolvedCoverBlobExpr(mediaAlias: string, blobStoreAlias: string): string {
return `COALESCE(${blobStoreAlias}.cover_blob, CASE WHEN ${mediaAlias}.cover_blob_hash IS NULL THEN ${mediaAlias}.cover_blob ELSE NULL END)`;
}
export function cleanupUnusedCoverArtBlobHash(db: DatabaseSync, blobHash: string | null): void {
if (!blobHash) {
return;
}
db.prepare(
`
DELETE FROM imm_cover_art_blobs
WHERE blob_hash = ?
AND NOT EXISTS (
SELECT 1
FROM imm_media_art
WHERE cover_blob_hash = ?
)
`,
).run(blobHash, blobHash);
}
export function findSharedCoverBlobHash(
db: DatabaseSync,
videoId: number,
anilistId: number | null,
coverUrl: string | null,
): string | null {
if (anilistId !== null) {
const byAnilist = db
.prepare(
`
SELECT cover_blob_hash AS coverBlobHash
FROM imm_media_art
WHERE video_id != ?
AND anilist_id = ?
AND cover_blob_hash IS NOT NULL
ORDER BY fetched_at_ms DESC, video_id DESC
LIMIT 1
`,
)
.get(videoId, anilistId) as { coverBlobHash: string | null } | undefined;
if (byAnilist?.coverBlobHash) {
return byAnilist.coverBlobHash;
}
}
if (coverUrl) {
const byUrl = db
.prepare(
`
SELECT cover_blob_hash AS coverBlobHash
FROM imm_media_art
WHERE video_id != ?
AND cover_url = ?
AND cover_blob_hash IS NOT NULL
ORDER BY fetched_at_ms DESC, video_id DESC
LIMIT 1
`,
)
.get(videoId, coverUrl) as { coverBlobHash: string | null } | undefined;
return byUrl?.coverBlobHash ?? null;
}
return null;
}
type LexicalEntity = 'word' | 'kanji';
function getAffectedIdsForSessions(
db: DatabaseSync,
entity: LexicalEntity,
sessionIds: number[],
): number[] {
if (sessionIds.length === 0) return [];
const table = entity === 'word' ? 'imm_word_line_occurrences' : 'imm_kanji_line_occurrences';
const col = `${entity}_id`;
return (
db
.prepare(
`SELECT DISTINCT o.${col} AS id
FROM ${table} o
JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
WHERE sl.session_id IN (${makePlaceholders(sessionIds)})`,
)
.all(...sessionIds) as Array<{ id: number }>
).map((row) => row.id);
}
function getAffectedIdsForVideo(
db: DatabaseSync,
entity: LexicalEntity,
videoId: number,
): number[] {
const table = entity === 'word' ? 'imm_word_line_occurrences' : 'imm_kanji_line_occurrences';
const col = `${entity}_id`;
return (
db
.prepare(
`SELECT DISTINCT o.${col} AS id
FROM ${table} o
JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
WHERE sl.video_id = ?`,
)
.all(videoId) as Array<{ id: number }>
).map((row) => row.id);
}
export function getAffectedWordIdsForSessions(db: DatabaseSync, sessionIds: number[]): number[] {
return getAffectedIdsForSessions(db, 'word', sessionIds);
}
export function getAffectedKanjiIdsForSessions(db: DatabaseSync, sessionIds: number[]): number[] {
return getAffectedIdsForSessions(db, 'kanji', sessionIds);
}
export function getAffectedWordIdsForVideo(db: DatabaseSync, videoId: number): number[] {
return getAffectedIdsForVideo(db, 'word', videoId);
}
export function getAffectedKanjiIdsForVideo(db: DatabaseSync, videoId: number): number[] {
return getAffectedIdsForVideo(db, 'kanji', videoId);
}
function refreshWordAggregates(db: DatabaseSync, wordIds: number[]): void {
if (wordIds.length === 0) {
return;
}
const rows = db
.prepare(
`
SELECT
w.id AS wordId,
COALESCE(SUM(o.occurrence_count), 0) AS frequency,
MIN(COALESCE(sl.CREATED_DATE, sl.LAST_UPDATE_DATE)) AS firstSeen,
MAX(COALESCE(sl.LAST_UPDATE_DATE, sl.CREATED_DATE)) AS lastSeen
FROM imm_words w
LEFT JOIN imm_word_line_occurrences o ON o.word_id = w.id
LEFT JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
WHERE w.id IN (${makePlaceholders(wordIds)})
GROUP BY w.id
`,
)
.all(...wordIds) as Array<{
wordId: number;
frequency: number;
firstSeen: number | null;
lastSeen: number | null;
}>;
const updateStmt = db.prepare(
`
UPDATE imm_words
SET frequency = ?, first_seen = ?, last_seen = ?
WHERE id = ?
`,
);
const deleteStmt = db.prepare('DELETE FROM imm_words WHERE id = ?');
for (const row of rows) {
if (row.frequency <= 0 || row.firstSeen === null || row.lastSeen === null) {
deleteStmt.run(row.wordId);
continue;
}
updateStmt.run(
row.frequency,
Math.floor(row.firstSeen / 1000),
Math.floor(row.lastSeen / 1000),
row.wordId,
);
}
}
function refreshKanjiAggregates(db: DatabaseSync, kanjiIds: number[]): void {
if (kanjiIds.length === 0) {
return;
}
const rows = db
.prepare(
`
SELECT
k.id AS kanjiId,
COALESCE(SUM(o.occurrence_count), 0) AS frequency,
MIN(COALESCE(sl.CREATED_DATE, sl.LAST_UPDATE_DATE)) AS firstSeen,
MAX(COALESCE(sl.LAST_UPDATE_DATE, sl.CREATED_DATE)) AS lastSeen
FROM imm_kanji k
LEFT JOIN imm_kanji_line_occurrences o ON o.kanji_id = k.id
LEFT JOIN imm_subtitle_lines sl ON sl.line_id = o.line_id
WHERE k.id IN (${makePlaceholders(kanjiIds)})
GROUP BY k.id
`,
)
.all(...kanjiIds) as Array<{
kanjiId: number;
frequency: number;
firstSeen: number | null;
lastSeen: number | null;
}>;
const updateStmt = db.prepare(
`
UPDATE imm_kanji
SET frequency = ?, first_seen = ?, last_seen = ?
WHERE id = ?
`,
);
const deleteStmt = db.prepare('DELETE FROM imm_kanji WHERE id = ?');
for (const row of rows) {
if (row.frequency <= 0 || row.firstSeen === null || row.lastSeen === null) {
deleteStmt.run(row.kanjiId);
continue;
}
updateStmt.run(
row.frequency,
Math.floor(row.firstSeen / 1000),
Math.floor(row.lastSeen / 1000),
row.kanjiId,
);
}
}
export function refreshLexicalAggregates(
db: DatabaseSync,
wordIds: number[],
kanjiIds: number[],
): void {
refreshWordAggregates(db, [...new Set(wordIds)]);
refreshKanjiAggregates(db, [...new Set(kanjiIds)]);
}
export function deleteSessionsByIds(db: DatabaseSync, sessionIds: number[]): void {
if (sessionIds.length === 0) {
return;
}
const placeholders = makePlaceholders(sessionIds);
db.prepare(`DELETE FROM imm_subtitle_lines WHERE session_id IN (${placeholders})`).run(
...sessionIds,
);
db.prepare(`DELETE FROM imm_session_telemetry WHERE session_id IN (${placeholders})`).run(
...sessionIds,
);
db.prepare(`DELETE FROM imm_session_events WHERE session_id IN (${placeholders})`).run(
...sessionIds,
);
db.prepare(`DELETE FROM imm_sessions WHERE session_id IN (${placeholders})`).run(...sessionIds);
}
export function toDbMs(ms: number | bigint): bigint {
if (typeof ms === 'bigint') {
return ms;
}
if (!Number.isFinite(ms)) {
throw new TypeError(`Invalid database timestamp: ${ms}`);
}
return BigInt(Math.trunc(ms));
}
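Both small helpers above are pure and easy to exercise in isolation; this sketch copies their logic verbatim:

```typescript
// Verbatim logic from query-shared: expand an id list into SQL
// placeholders, and coerce millisecond timestamps to BigInt so they
// bind to SQLite INTEGER columns without float precision loss.
function makePlaceholders(values: number[]): string {
  return values.map(() => '?').join(',');
}
function toDbMs(ms: number | bigint): bigint {
  if (typeof ms === 'bigint') {
    return ms;
  }
  if (!Number.isFinite(ms)) {
    throw new TypeError(`Invalid database timestamp: ${ms}`);
  }
  return BigInt(Math.trunc(ms));
}
```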

View File

@@ -0,0 +1,660 @@
import type { DatabaseSync } from './sqlite';
import type { ImmersionSessionRollupRow } from './types';
import { ACTIVE_SESSION_METRICS_CTE, makePlaceholders } from './query-shared';
import { getDailyRollups, getMonthlyRollups } from './query-sessions';
type TrendRange = '7d' | '30d' | '90d' | 'all';
type TrendGroupBy = 'day' | 'month';
interface TrendChartPoint {
label: string;
value: number;
}
interface TrendPerAnimePoint {
epochDay: number;
animeTitle: string;
value: number;
}
interface TrendSessionMetricRow {
startedAtMs: number;
videoId: number | null;
canonicalTitle: string | null;
animeTitle: string | null;
activeWatchedMs: number;
tokensSeen: number;
cardsMined: number;
yomitanLookupCount: number;
}
export interface TrendsDashboardQueryResult {
activity: {
watchTime: TrendChartPoint[];
cards: TrendChartPoint[];
words: TrendChartPoint[];
sessions: TrendChartPoint[];
};
progress: {
watchTime: TrendChartPoint[];
sessions: TrendChartPoint[];
words: TrendChartPoint[];
newWords: TrendChartPoint[];
cards: TrendChartPoint[];
episodes: TrendChartPoint[];
lookups: TrendChartPoint[];
};
ratios: {
lookupsPerHundred: TrendChartPoint[];
};
animePerDay: {
episodes: TrendPerAnimePoint[];
watchTime: TrendPerAnimePoint[];
cards: TrendPerAnimePoint[];
words: TrendPerAnimePoint[];
lookups: TrendPerAnimePoint[];
lookupsPerHundred: TrendPerAnimePoint[];
};
animeCumulative: {
watchTime: TrendPerAnimePoint[];
episodes: TrendPerAnimePoint[];
cards: TrendPerAnimePoint[];
words: TrendPerAnimePoint[];
};
patterns: {
watchTimeByDayOfWeek: TrendChartPoint[];
watchTimeByHour: TrendChartPoint[];
};
}
const TREND_DAY_LIMITS: Record<Exclude<TrendRange, 'all'>, number> = {
'7d': 7,
'30d': 30,
'90d': 90,
};
const DAY_NAMES = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'];
function getTrendDayLimit(range: TrendRange): number {
return range === 'all' ? 365 : TREND_DAY_LIMITS[range];
}
function getTrendMonthlyLimit(range: TrendRange): number {
if (range === 'all') {
return 120;
}
const now = new Date();
const cutoff = new Date(
now.getFullYear(),
now.getMonth(),
now.getDate() - (TREND_DAY_LIMITS[range] - 1),
);
return Math.max(1, (now.getFullYear() - cutoff.getFullYear()) * 12 + now.getMonth() - cutoff.getMonth() + 1);
}
function getTrendCutoffMs(range: TrendRange): number | null {
if (range === 'all') {
return null;
}
const dayLimit = getTrendDayLimit(range);
const now = new Date();
const localMidnight = new Date(now.getFullYear(), now.getMonth(), now.getDate()).getTime();
return localMidnight - (dayLimit - 1) * 86_400_000;
}
function makeTrendLabel(value: number): string {
if (value > 100_000) {
const year = Math.floor(value / 100);
const month = value % 100;
return new Date(Date.UTC(year, month - 1, 1)).toLocaleDateString(undefined, {
month: 'short',
year: '2-digit',
});
}
return new Date(value * 86_400_000).toLocaleDateString(undefined, {
month: 'short',
day: 'numeric',
});
}
function getLocalEpochDay(timestampMs: number): number {
const date = new Date(timestampMs);
return Math.floor((timestampMs - date.getTimezoneOffset() * 60_000) / 86_400_000);
}
function getLocalDateForEpochDay(epochDay: number): Date {
const utcDate = new Date(epochDay * 86_400_000);
return new Date(utcDate.getTime() + utcDate.getTimezoneOffset() * 60_000);
}
function getLocalMonthKey(timestampMs: number): number {
const date = new Date(timestampMs);
return date.getFullYear() * 100 + date.getMonth() + 1;
}
function getTrendSessionWordCount(session: Pick<TrendSessionMetricRow, 'tokensSeen'>): number {
return session.tokensSeen;
}
function resolveTrendAnimeTitle(value: {
animeTitle: string | null;
canonicalTitle: string | null;
}): string {
return value.animeTitle ?? value.canonicalTitle ?? 'Unknown';
}
function accumulatePoints(points: TrendChartPoint[]): TrendChartPoint[] {
let sum = 0;
return points.map((point) => {
sum += point.value;
return {
label: point.label,
value: sum,
};
});
}
function buildAggregatedTrendRows(rollups: ImmersionSessionRollupRow[]) {
const byKey = new Map<
number,
{ activeMin: number; cards: number; words: number; sessions: number }
>();
for (const rollup of rollups) {
const existing = byKey.get(rollup.rollupDayOrMonth) ?? {
activeMin: 0,
cards: 0,
words: 0,
sessions: 0,
};
existing.activeMin += rollup.totalActiveMin;
existing.cards += rollup.totalCards;
existing.words += rollup.totalTokensSeen;
existing.sessions += rollup.totalSessions;
byKey.set(rollup.rollupDayOrMonth, existing);
}
return Array.from(byKey.entries())
.sort(([left], [right]) => left - right)
.map(([key, value]) => ({
label: makeTrendLabel(key),
activeMin: Math.round(value.activeMin),
cards: value.cards,
words: value.words,
sessions: value.sessions,
}));
}
function buildWatchTimeByDayOfWeek(sessions: TrendSessionMetricRow[]): TrendChartPoint[] {
const totals = new Array(7).fill(0);
for (const session of sessions) {
totals[new Date(session.startedAtMs).getDay()] += session.activeWatchedMs;
}
return DAY_NAMES.map((name, index) => ({
label: name,
value: Math.round(totals[index] / 60_000),
}));
}
function buildWatchTimeByHour(sessions: TrendSessionMetricRow[]): TrendChartPoint[] {
const totals = new Array(24).fill(0);
for (const session of sessions) {
totals[new Date(session.startedAtMs).getHours()] += session.activeWatchedMs;
}
return totals.map((ms, index) => ({
label: `${String(index).padStart(2, '0')}:00`,
value: Math.round(ms / 60_000),
}));
}
function dayLabel(epochDay: number): string {
return getLocalDateForEpochDay(epochDay).toLocaleDateString(undefined, {
month: 'short',
day: 'numeric',
});
}
function buildSessionSeriesByDay(
sessions: TrendSessionMetricRow[],
getValue: (session: TrendSessionMetricRow) => number,
): TrendChartPoint[] {
const byDay = new Map<number, number>();
for (const session of sessions) {
const epochDay = getLocalEpochDay(session.startedAtMs);
byDay.set(epochDay, (byDay.get(epochDay) ?? 0) + getValue(session));
}
return Array.from(byDay.entries())
.sort(([left], [right]) => left - right)
.map(([epochDay, value]) => ({ label: dayLabel(epochDay), value }));
}
function buildSessionSeriesByMonth(
sessions: TrendSessionMetricRow[],
getValue: (session: TrendSessionMetricRow) => number,
): TrendChartPoint[] {
const byMonth = new Map<number, number>();
for (const session of sessions) {
const monthKey = getLocalMonthKey(session.startedAtMs);
byMonth.set(monthKey, (byMonth.get(monthKey) ?? 0) + getValue(session));
}
return Array.from(byMonth.entries())
.sort(([left], [right]) => left - right)
.map(([monthKey, value]) => ({ label: makeTrendLabel(monthKey), value }));
}
function buildLookupsPerHundredWords(
sessions: TrendSessionMetricRow[],
groupBy: TrendGroupBy,
): TrendChartPoint[] {
const lookupsByBucket = new Map<number, number>();
const wordsByBucket = new Map<number, number>();
for (const session of sessions) {
const bucketKey =
groupBy === 'month' ? getLocalMonthKey(session.startedAtMs) : getLocalEpochDay(session.startedAtMs);
lookupsByBucket.set(
bucketKey,
(lookupsByBucket.get(bucketKey) ?? 0) + session.yomitanLookupCount,
);
wordsByBucket.set(
bucketKey,
(wordsByBucket.get(bucketKey) ?? 0) + getTrendSessionWordCount(session),
);
}
return Array.from(lookupsByBucket.entries())
.sort(([left], [right]) => left - right)
.map(([bucketKey, lookups]) => {
const words = wordsByBucket.get(bucketKey) ?? 0;
return {
label: groupBy === 'month' ? makeTrendLabel(bucketKey) : dayLabel(bucketKey),
value: words > 0 ? +((lookups / words) * 100).toFixed(1) : 0,
};
});
}
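The per-bucket ratio above normalizes lookups to a per-100-words rate and rounds to one decimal with the `+value.toFixed(1)` idiom; isolated for clarity (the function name is illustrative):

```typescript
// Same rounding as buildLookupsPerHundredWords: zero words yields 0
// rather than NaN, and results carry at most one decimal place.
function lookupsPerHundredWords(lookups: number, words: number): number {
  return words > 0 ? +((lookups / words) * 100).toFixed(1) : 0;
}
```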
function buildPerAnimeFromSessions(
sessions: TrendSessionMetricRow[],
getValue: (session: TrendSessionMetricRow) => number,
): TrendPerAnimePoint[] {
const byAnime = new Map<string, Map<number, number>>();
for (const session of sessions) {
const animeTitle = resolveTrendAnimeTitle(session);
const epochDay = getLocalEpochDay(session.startedAtMs);
const dayMap = byAnime.get(animeTitle) ?? new Map();
dayMap.set(epochDay, (dayMap.get(epochDay) ?? 0) + getValue(session));
byAnime.set(animeTitle, dayMap);
}
const result: TrendPerAnimePoint[] = [];
for (const [animeTitle, dayMap] of byAnime) {
for (const [epochDay, value] of dayMap) {
result.push({ epochDay, animeTitle, value });
}
}
return result;
}
function buildLookupsPerHundredPerAnime(sessions: TrendSessionMetricRow[]): TrendPerAnimePoint[] {
const lookups = new Map<string, Map<number, number>>();
const words = new Map<string, Map<number, number>>();
for (const session of sessions) {
const animeTitle = resolveTrendAnimeTitle(session);
const epochDay = getLocalEpochDay(session.startedAtMs);
const lookupMap = lookups.get(animeTitle) ?? new Map();
lookupMap.set(epochDay, (lookupMap.get(epochDay) ?? 0) + session.yomitanLookupCount);
lookups.set(animeTitle, lookupMap);
const wordMap = words.get(animeTitle) ?? new Map();
wordMap.set(epochDay, (wordMap.get(epochDay) ?? 0) + getTrendSessionWordCount(session));
words.set(animeTitle, wordMap);
}
const result: TrendPerAnimePoint[] = [];
for (const [animeTitle, dayMap] of lookups) {
const wordMap = words.get(animeTitle) ?? new Map();
for (const [epochDay, lookupCount] of dayMap) {
const wordCount = wordMap.get(epochDay) ?? 0;
result.push({
epochDay,
animeTitle,
value: wordCount > 0 ? +((lookupCount / wordCount) * 100).toFixed(1) : 0,
});
}
}
return result;
}
function buildCumulativePerAnime(points: TrendPerAnimePoint[]): TrendPerAnimePoint[] {
const byAnime = new Map<string, Map<number, number>>();
const allDays = new Set<number>();
for (const point of points) {
const dayMap = byAnime.get(point.animeTitle) ?? new Map();
dayMap.set(point.epochDay, (dayMap.get(point.epochDay) ?? 0) + point.value);
byAnime.set(point.animeTitle, dayMap);
allDays.add(point.epochDay);
}
const sortedDays = [...allDays].sort((left, right) => left - right);
if (sortedDays.length === 0) {
return [];
}
const minDay = sortedDays[0]!;
const maxDay = sortedDays[sortedDays.length - 1]!;
const result: TrendPerAnimePoint[] = [];
for (const [animeTitle, dayMap] of byAnime) {
const firstDay = Math.min(...dayMap.keys());
let cumulative = 0;
for (let epochDay = minDay; epochDay <= maxDay; epochDay += 1) {
if (epochDay < firstDay) {
continue;
}
cumulative += dayMap.get(epochDay) ?? 0;
result.push({ epochDay, animeTitle, value: cumulative });
}
}
return result;
}
function getVideoAnimeTitleMap(
db: DatabaseSync,
videoIds: Array<number | null>,
): Map<number, string> {
const uniqueIds = [
...new Set(videoIds.filter((value): value is number => typeof value === 'number')),
];
if (uniqueIds.length === 0) {
return new Map();
}
const rows = db
.prepare(
`
SELECT
v.video_id AS videoId,
COALESCE(a.canonical_title, v.canonical_title, 'Unknown') AS animeTitle
FROM imm_videos v
LEFT JOIN imm_anime a ON a.anime_id = v.anime_id
WHERE v.video_id IN (${makePlaceholders(uniqueIds)})
`,
)
.all(...uniqueIds) as Array<{ videoId: number; animeTitle: string }>;
return new Map(rows.map((row) => [row.videoId, row.animeTitle]));
}
function resolveVideoAnimeTitle(
videoId: number | null,
titlesByVideoId: Map<number, string>,
): string {
if (videoId === null) {
return 'Unknown';
}
return titlesByVideoId.get(videoId) ?? 'Unknown';
}
function buildPerAnimeFromDailyRollups(
rollups: ImmersionSessionRollupRow[],
titlesByVideoId: Map<number, string>,
getValue: (rollup: ImmersionSessionRollupRow) => number,
): TrendPerAnimePoint[] {
const byAnime = new Map<string, Map<number, number>>();
for (const rollup of rollups) {
const animeTitle = resolveVideoAnimeTitle(rollup.videoId, titlesByVideoId);
const dayMap = byAnime.get(animeTitle) ?? new Map();
dayMap.set(
rollup.rollupDayOrMonth,
(dayMap.get(rollup.rollupDayOrMonth) ?? 0) + getValue(rollup),
);
byAnime.set(animeTitle, dayMap);
}
const result: TrendPerAnimePoint[] = [];
for (const [animeTitle, dayMap] of byAnime) {
for (const [epochDay, value] of dayMap) {
result.push({ epochDay, animeTitle, value });
}
}
return result;
}
function buildEpisodesPerAnimeFromDailyRollups(
rollups: ImmersionSessionRollupRow[],
titlesByVideoId: Map<number, string>,
): TrendPerAnimePoint[] {
const byAnime = new Map<string, Map<number, Set<number>>>();
for (const rollup of rollups) {
if (rollup.videoId === null) {
continue;
}
const animeTitle = resolveVideoAnimeTitle(rollup.videoId, titlesByVideoId);
const dayMap = byAnime.get(animeTitle) ?? new Map();
const videoIds = dayMap.get(rollup.rollupDayOrMonth) ?? new Set<number>();
videoIds.add(rollup.videoId);
dayMap.set(rollup.rollupDayOrMonth, videoIds);
byAnime.set(animeTitle, dayMap);
}
const result: TrendPerAnimePoint[] = [];
for (const [animeTitle, dayMap] of byAnime) {
for (const [epochDay, videoIds] of dayMap) {
result.push({ epochDay, animeTitle, value: videoIds.size });
}
}
return result;
}
function buildEpisodesPerDayFromDailyRollups(
rollups: ImmersionSessionRollupRow[],
): TrendChartPoint[] {
const byDay = new Map<number, Set<number>>();
for (const rollup of rollups) {
if (rollup.videoId === null) {
continue;
}
const videoIds = byDay.get(rollup.rollupDayOrMonth) ?? new Set<number>();
videoIds.add(rollup.videoId);
byDay.set(rollup.rollupDayOrMonth, videoIds);
}
return Array.from(byDay.entries())
.sort(([left], [right]) => left - right)
.map(([epochDay, videoIds]) => ({
label: dayLabel(epochDay),
value: videoIds.size,
}));
}
function buildEpisodesPerMonthFromRollups(rollups: ImmersionSessionRollupRow[]): TrendChartPoint[] {
const byMonth = new Map<number, Set<number>>();
for (const rollup of rollups) {
if (rollup.videoId === null) {
continue;
}
const videoIds = byMonth.get(rollup.rollupDayOrMonth) ?? new Set<number>();
videoIds.add(rollup.videoId);
byMonth.set(rollup.rollupDayOrMonth, videoIds);
}
return Array.from(byMonth.entries())
.sort(([left], [right]) => left - right)
.map(([monthKey, videoIds]) => ({
label: makeTrendLabel(monthKey),
value: videoIds.size,
}));
}
function getTrendSessionMetrics(
db: DatabaseSync,
cutoffMs: number | null,
): TrendSessionMetricRow[] {
const whereClause = cutoffMs === null ? '' : 'WHERE s.started_at_ms >= ?';
const prepared = db.prepare(`
${ACTIVE_SESSION_METRICS_CTE}
SELECT
s.started_at_ms AS startedAtMs,
s.video_id AS videoId,
v.canonical_title AS canonicalTitle,
a.canonical_title AS animeTitle,
COALESCE(asm.activeWatchedMs, s.active_watched_ms, 0) AS activeWatchedMs,
COALESCE(asm.tokensSeen, s.tokens_seen, 0) AS tokensSeen,
COALESCE(asm.cardsMined, s.cards_mined, 0) AS cardsMined,
COALESCE(asm.yomitanLookupCount, s.yomitan_lookup_count, 0) AS yomitanLookupCount
FROM imm_sessions s
LEFT JOIN active_session_metrics asm ON asm.sessionId = s.session_id
LEFT JOIN imm_videos v ON v.video_id = s.video_id
LEFT JOIN imm_anime a ON a.anime_id = v.anime_id
${whereClause}
ORDER BY s.started_at_ms ASC
`);
return (cutoffMs === null ? prepared.all() : prepared.all(cutoffMs)) as TrendSessionMetricRow[];
}
function buildNewWordsPerDay(db: DatabaseSync, cutoffMs: number | null): TrendChartPoint[] {
const whereClause = cutoffMs === null ? '' : 'AND first_seen >= ?';
const prepared = db.prepare(`
SELECT
CAST(julianday(first_seen, 'unixepoch', 'localtime') - 2440587.5 AS INTEGER) AS epochDay,
COUNT(*) AS wordCount
FROM imm_words
WHERE first_seen IS NOT NULL
${whereClause}
GROUP BY epochDay
ORDER BY epochDay ASC
`);
const rows = (
cutoffMs === null ? prepared.all() : prepared.all(Math.floor(cutoffMs / 1000))
) as Array<{
epochDay: number;
wordCount: number;
}>;
return rows.map((row) => ({
label: dayLabel(row.epochDay),
value: row.wordCount,
}));
}
function buildNewWordsPerMonth(db: DatabaseSync, cutoffMs: number | null): TrendChartPoint[] {
const whereClause = cutoffMs === null ? '' : 'AND first_seen >= ?';
const prepared = db.prepare(`
SELECT
CAST(strftime('%Y%m', first_seen, 'unixepoch', 'localtime') AS INTEGER) AS monthKey,
COUNT(*) AS wordCount
FROM imm_words
WHERE first_seen IS NOT NULL
${whereClause}
GROUP BY monthKey
ORDER BY monthKey ASC
`);
const rows = (
cutoffMs === null ? prepared.all() : prepared.all(Math.floor(cutoffMs / 1000))
) as Array<{
monthKey: number;
wordCount: number;
}>;
return rows.map((row) => ({
label: makeTrendLabel(row.monthKey),
value: row.wordCount,
}));
}
export function getTrendsDashboard(
db: DatabaseSync,
range: TrendRange = '30d',
groupBy: TrendGroupBy = 'day',
): TrendsDashboardQueryResult {
const dayLimit = getTrendDayLimit(range);
const monthlyLimit = getTrendMonthlyLimit(range);
const cutoffMs = getTrendCutoffMs(range);
const useMonthlyBuckets = groupBy === 'month';
const dailyRollups = getDailyRollups(db, dayLimit);
const monthlyRollups = getMonthlyRollups(db, monthlyLimit);
const chartRollups = useMonthlyBuckets ? monthlyRollups : dailyRollups;
const sessions = getTrendSessionMetrics(db, cutoffMs);
const titlesByVideoId = getVideoAnimeTitleMap(
db,
dailyRollups.map((rollup) => rollup.videoId),
);
const aggregatedRows = buildAggregatedTrendRows(chartRollups);
const activity = {
watchTime: aggregatedRows.map((row) => ({ label: row.label, value: row.activeMin })),
cards: aggregatedRows.map((row) => ({ label: row.label, value: row.cards })),
words: aggregatedRows.map((row) => ({ label: row.label, value: row.words })),
sessions: aggregatedRows.map((row) => ({ label: row.label, value: row.sessions })),
};
const animePerDay = {
episodes: buildEpisodesPerAnimeFromDailyRollups(dailyRollups, titlesByVideoId),
watchTime: buildPerAnimeFromDailyRollups(dailyRollups, titlesByVideoId, (rollup) =>
rollup.totalActiveMin,
),
cards: buildPerAnimeFromDailyRollups(
dailyRollups,
titlesByVideoId,
(rollup) => rollup.totalCards,
),
words: buildPerAnimeFromDailyRollups(
dailyRollups,
titlesByVideoId,
(rollup) => rollup.totalTokensSeen,
),
lookups: buildPerAnimeFromSessions(sessions, (session) => session.yomitanLookupCount),
lookupsPerHundred: buildLookupsPerHundredPerAnime(sessions),
};
return {
activity,
progress: {
watchTime: accumulatePoints(activity.watchTime),
sessions: accumulatePoints(activity.sessions),
words: accumulatePoints(activity.words),
newWords: accumulatePoints(
useMonthlyBuckets ? buildNewWordsPerMonth(db, cutoffMs) : buildNewWordsPerDay(db, cutoffMs),
),
cards: accumulatePoints(activity.cards),
episodes: accumulatePoints(
useMonthlyBuckets
? buildEpisodesPerMonthFromRollups(monthlyRollups)
: buildEpisodesPerDayFromDailyRollups(dailyRollups),
),
lookups: accumulatePoints(
useMonthlyBuckets
? buildSessionSeriesByMonth(sessions, (session) => session.yomitanLookupCount)
: buildSessionSeriesByDay(sessions, (session) => session.yomitanLookupCount),
),
},
ratios: {
lookupsPerHundred: buildLookupsPerHundredWords(sessions, groupBy),
},
animePerDay,
animeCumulative: {
watchTime: buildCumulativePerAnime(animePerDay.watchTime),
episodes: buildCumulativePerAnime(animePerDay.episodes),
cards: buildCumulativePerAnime(animePerDay.cards),
words: buildCumulativePerAnime(animePerDay.words),
},
patterns: {
watchTimeByDayOfWeek: buildWatchTimeByDayOfWeek(sessions),
watchTimeByHour: buildWatchTimeByHour(sessions),
},
};
}
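A note on bucket keys used throughout this module: daily series are keyed by local epoch days (small integers in the tens of thousands) while monthly series use YYYYMM integers (always above 100_000), which is the disambiguation `makeTrendLabel` relies on. The keying, sketched standalone:

```typescript
// Local epoch day: whole days since 1970-01-01 in the local timezone,
// matching getLocalEpochDay above (getTimezoneOffset is in minutes).
function localEpochDay(timestampMs: number): number {
  const date = new Date(timestampMs);
  return Math.floor((timestampMs - date.getTimezoneOffset() * 60_000) / 86_400_000);
}
// Month key: YYYYMM as a single integer, matching getLocalMonthKey above.
function localMonthKey(timestampMs: number): number {
  const date = new Date(timestampMs);
  return date.getFullYear() * 100 + date.getMonth() + 1;
}
```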

File diff suppressed because it is too large

View File

@@ -1,16 +1,18 @@
import crypto from 'node:crypto';
import type { DatabaseSync } from './sqlite';
import { createInitialSessionState } from './reducer';
import { nowMs } from './time';
import { SESSION_STATUS_ACTIVE, SESSION_STATUS_ENDED } from './types';
import type { SessionState } from './types';
import { toDbMs } from './query-shared';
export function startSessionRecord(
db: DatabaseSync,
videoId: number,
-startedAtMs = Date.now(),
+startedAtMs = nowMs(),
): { sessionId: number; state: SessionState } {
const sessionUuid = crypto.randomUUID();
-const nowMs = Date.now();
+const createdAtMs = nowMs();
const result = db
.prepare(
`
@@ -20,7 +22,14 @@ export function startSessionRecord(
) VALUES (?, ?, ?, ?, ?, ?)
`,
)
-.run(sessionUuid, videoId, startedAtMs, SESSION_STATUS_ACTIVE, startedAtMs, nowMs);
+.run(
+sessionUuid,
+videoId,
+toDbMs(startedAtMs),
+SESSION_STATUS_ACTIVE,
+toDbMs(startedAtMs),
+toDbMs(createdAtMs),
+);
const sessionId = Number(result.lastInsertRowid);
return {
sessionId,
@@ -31,7 +40,7 @@ export function startSessionRecord(
export function finalizeSessionRecord(
db: DatabaseSync,
sessionState: SessionState,
-endedAtMs = Date.now(),
+endedAtMs = nowMs(),
): void {
db.prepare(
`
@@ -57,9 +66,9 @@ export function finalizeSessionRecord(
WHERE session_id = ?
`,
).run(
-endedAtMs,
+toDbMs(endedAtMs),
SESSION_STATUS_ENDED,
-sessionState.lastMediaMs,
+sessionState.lastMediaMs === null ? null : toDbMs(sessionState.lastMediaMs),
sessionState.totalWatchedMs,
sessionState.activeWatchedMs,
sessionState.linesSeen,
@@ -73,7 +82,7 @@ export function finalizeSessionRecord(
sessionState.seekForwardCount,
sessionState.seekBackwardCount,
sessionState.mediaBufferEvents,
-Date.now(),
+toDbMs(nowMs()),
sessionState.sessionId,
);
}

View File

@@ -219,7 +219,9 @@ test('ensureSchema adds youtube metadata table to existing schema version 15 dat
const tables = new Set(
(
-db.prepare(`SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'imm_%'`).all() as Array<{
+db
+.prepare(`SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'imm_%'`)
+.all() as Array<{
name: string;
}>
).map((row) => row.name),
@@ -899,7 +901,8 @@ test('youtube videos can be regrouped under a shared channel anime identity', ()
channelId: 'UC123',
channelName: 'Channel Name',
channelUrl: 'https://www.youtube.com/channel/UC123',
-channelThumbnailUrl: 'https://yt3.googleusercontent.com/channel-123=s176-c-k-c0x00ffffff-no-rj',
+channelThumbnailUrl:
+'https://yt3.googleusercontent.com/channel-123=s176-c-k-c0x00ffffff-no-rj',
uploaderId: '@channelname',
uploaderUrl: 'https://www.youtube.com/@channelname',
description: null,
@@ -913,7 +916,8 @@ test('youtube videos can be regrouped under a shared channel anime identity', ()
channelId: 'UC123',
channelName: 'Channel Name',
channelUrl: 'https://www.youtube.com/channel/UC123',
-channelThumbnailUrl: 'https://yt3.googleusercontent.com/channel-123=s176-c-k-c0x00ffffff-no-rj',
+channelThumbnailUrl:
+'https://yt3.googleusercontent.com/channel-123=s176-c-k-c0x00ffffff-no-rj',
uploaderId: '@channelname',
uploaderUrl: 'https://www.youtube.com/@channelname',
description: null,
@@ -1074,6 +1078,56 @@ test('executeQueuedWrite inserts event and telemetry rows', () => {
}
});
test('executeQueuedWrite rejects partial telemetry writes instead of zero-filling', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);
try {
ensureSchema(db);
const stmts = createTrackerPreparedStatements(db);
const videoId = getOrCreateVideoRecord(db, 'local:/tmp/partial-telemetry.mkv', {
canonicalTitle: 'Partial Telemetry',
sourcePath: '/tmp/partial-telemetry.mkv',
sourceUrl: null,
sourceType: SOURCE_TYPE_LOCAL,
});
const { sessionId } = startSessionRecord(db, videoId, 5_000);
assert.throws(
() =>
executeQueuedWrite(
{
kind: 'telemetry',
sessionId,
sampleMs: 6_000,
totalWatchedMs: 1_000,
activeWatchedMs: 900,
linesSeen: 3,
cardsMined: 1,
lookupCount: 2,
lookupHits: 1,
yomitanLookupCount: 0,
pauseCount: 1,
pauseMs: 50,
seekForwardCount: 0,
seekBackwardCount: 0,
mediaBufferEvents: 0,
},
stmts,
),
/Incomplete telemetry write/,
);
const telemetryCount = db
.prepare('SELECT COUNT(*) AS total FROM imm_session_telemetry WHERE session_id = ?')
.get(sessionId) as { total: number };
assert.equal(telemetryCount.total, 0);
} finally {
db.close();
cleanupDbPath(dbPath);
}
});
test('executeQueuedWrite inserts and upserts word and kanji rows', () => {
const dbPath = makeDbPath();
const db = new Database(dbPath);

View File

@@ -1,8 +1,10 @@
import { createHash } from 'node:crypto';
import { parseMediaInfo } from '../../../jimaku/utils';
import type { DatabaseSync } from './sqlite';
import { nowMs } from './time';
import { SCHEMA_VERSION } from './types';
import type { QueuedWrite, VideoMetadata, YoutubeVideoMetadata } from './types';
import { toDbMs } from './query-shared';
export interface TrackerPreparedStatements {
telemetryInsertStmt: ReturnType<DatabaseSync['prepare']>;
@@ -128,7 +130,7 @@ function deduplicateExistingCoverArtRows(db: DatabaseSync): void {
return;
}
-const nowMs = Date.now();
+const nowMsValue = toDbMs(nowMs());
const upsertBlobStmt = db.prepare(`
INSERT INTO imm_cover_art_blobs (blob_hash, cover_blob, CREATED_DATE, LAST_UPDATE_DATE)
VALUES (?, ?, ?, ?)
@@ -150,14 +152,14 @@ function deduplicateExistingCoverArtRows(db: DatabaseSync): void {
const refHash = parseCoverBlobReference(coverBlob);
if (refHash) {
if (row.cover_blob_hash !== refHash) {
updateMediaStmt.run(coverBlob, refHash, nowMs, row.video_id);
updateMediaStmt.run(coverBlob, refHash, nowMsValue, row.video_id);
}
continue;
}
const hash = createHash('sha256').update(coverBlob).digest('hex');
upsertBlobStmt.run(hash, coverBlob, nowMs, nowMs);
updateMediaStmt.run(buildCoverBlobReference(hash), hash, nowMs, row.video_id);
upsertBlobStmt.run(hash, coverBlob, nowMsValue, nowMsValue);
updateMediaStmt.run(buildCoverBlobReference(hash), hash, nowMsValue, row.video_id);
}
}
@@ -273,7 +275,7 @@ function parseLegacyAnimeBackfillCandidate(
}
function ensureLifetimeSummaryTables(db: DatabaseSync): void {
const nowMs = Date.now();
const nowMsValue = toDbMs(nowMs());
db.exec(`
CREATE TABLE IF NOT EXISTS imm_lifetime_global(
@@ -315,8 +317,8 @@ function ensureLifetimeSummaryTables(db: DatabaseSync): void {
0,
0,
NULL,
${nowMs},
${nowMs}
${nowMsValue},
${nowMsValue}
WHERE NOT EXISTS (SELECT 1 FROM imm_lifetime_global LIMIT 1)
`);
@@ -403,13 +405,13 @@ export function getOrCreateAnimeRecord(db: DatabaseSync, input: AnimeRecordInput
input.titleEnglish,
input.titleNative,
input.metadataJson,
Date.now(),
toDbMs(nowMs()),
existing.anime_id,
);
return existing.anime_id;
}
const nowMs = Date.now();
const nowMsValue = toDbMs(nowMs());
const result = db
.prepare(
`
@@ -434,8 +436,8 @@ export function getOrCreateAnimeRecord(db: DatabaseSync, input: AnimeRecordInput
input.titleEnglish,
input.titleNative,
input.metadataJson,
nowMs,
nowMs,
nowMsValue,
nowMsValue,
);
return Number(result.lastInsertRowid);
}
@@ -469,7 +471,7 @@ export function linkVideoToAnimeRecord(
input.parserSource,
input.parserConfidence,
input.parseMetadataJson,
Date.now(),
toDbMs(nowMs()),
videoId,
);
}
@@ -854,7 +856,7 @@ export function ensureSchema(db: DatabaseSync): void {
addColumnIfMissing(db, 'imm_monthly_rollups', 'CREATED_DATE');
addColumnIfMissing(db, 'imm_monthly_rollups', 'LAST_UPDATE_DATE');
const nowMs = Date.now();
const migratedAtMs = toDbMs(nowMs());
db.prepare(
`
UPDATE imm_videos
@@ -894,7 +896,7 @@ export function ensureSchema(db: DatabaseSync): void {
CREATED_DATE = COALESCE(CREATED_DATE, ?),
LAST_UPDATE_DATE = COALESCE(LAST_UPDATE_DATE, ?)
`,
).run(nowMs, nowMs);
).run(migratedAtMs, migratedAtMs);
db.prepare(
`
UPDATE imm_monthly_rollups
@@ -902,7 +904,7 @@ export function ensureSchema(db: DatabaseSync): void {
CREATED_DATE = COALESCE(CREATED_DATE, ?),
LAST_UPDATE_DATE = COALESCE(LAST_UPDATE_DATE, ?)
`,
).run(nowMs, nowMs);
).run(migratedAtMs, migratedAtMs);
}
if (currentVersion?.schema_version === 1 || currentVersion?.schema_version === 2) {
@@ -1241,7 +1243,7 @@ export function ensureSchema(db: DatabaseSync): void {
db.exec(`
INSERT INTO imm_schema_version(schema_version, applied_at_ms)
VALUES (${SCHEMA_VERSION}, ${Date.now()})
VALUES (${SCHEMA_VERSION}, ${toDbMs(nowMs())})
ON CONFLICT DO NOTHING
`);
}
@@ -1399,28 +1401,48 @@ function incrementKanjiAggregate(
}
export function executeQueuedWrite(write: QueuedWrite, stmts: TrackerPreparedStatements): void {
const currentMs = toDbMs(nowMs());
if (write.kind === 'telemetry') {
const nowMs = Date.now();
if (
write.totalWatchedMs === undefined ||
write.activeWatchedMs === undefined ||
write.linesSeen === undefined ||
write.tokensSeen === undefined ||
write.cardsMined === undefined ||
write.lookupCount === undefined ||
write.lookupHits === undefined ||
write.yomitanLookupCount === undefined ||
write.pauseCount === undefined ||
write.pauseMs === undefined ||
write.seekForwardCount === undefined ||
write.seekBackwardCount === undefined ||
write.mediaBufferEvents === undefined
) {
throw new Error('Incomplete telemetry write');
}
const telemetrySampleMs = toDbMs(write.sampleMs ?? Number(currentMs));
stmts.telemetryInsertStmt.run(
write.sessionId,
write.sampleMs!,
write.totalWatchedMs!,
write.activeWatchedMs!,
write.linesSeen!,
write.tokensSeen!,
write.cardsMined!,
write.lookupCount!,
write.lookupHits!,
write.yomitanLookupCount ?? 0,
write.pauseCount!,
write.pauseMs!,
write.seekForwardCount!,
write.seekBackwardCount!,
write.mediaBufferEvents!,
nowMs,
nowMs,
telemetrySampleMs,
write.totalWatchedMs,
write.activeWatchedMs,
write.linesSeen,
write.tokensSeen,
write.cardsMined,
write.lookupCount,
write.lookupHits,
write.yomitanLookupCount,
write.pauseCount,
write.pauseMs,
write.seekForwardCount,
write.seekBackwardCount,
write.mediaBufferEvents,
currentMs,
currentMs,
);
stmts.sessionCheckpointStmt.run(write.lastMediaMs ?? null, nowMs, write.sessionId);
if (write.lastMediaMs !== undefined) {
stmts.sessionCheckpointStmt.run(write.lastMediaMs ?? null, currentMs, write.sessionId);
}
return;
}
if (write.kind === 'word') {
@@ -1456,8 +1478,8 @@ export function executeQueuedWrite(write: QueuedWrite, stmts: TrackerPreparedSta
write.segmentEndMs ?? null,
write.text,
write.secondaryText ?? null,
Date.now(),
Date.now(),
currentMs,
currentMs,
);
const lineId = Number(lineResult.lastInsertRowid);
for (const occurrence of write.wordOccurrences) {
@@ -1473,16 +1495,16 @@ export function executeQueuedWrite(write: QueuedWrite, stmts: TrackerPreparedSta
stmts.eventInsertStmt.run(
write.sessionId,
write.sampleMs!,
write.eventType!,
toDbMs(write.sampleMs ?? Number(currentMs)),
write.eventType ?? 0,
write.lineIndex ?? null,
write.segmentStartMs ?? null,
write.segmentEndMs ?? null,
write.tokensDelta ?? 0,
write.cardsDelta ?? 0,
write.payloadJson ?? null,
Date.now(),
Date.now(),
currentMs,
currentMs,
);
}
@@ -1508,11 +1530,11 @@ export function getOrCreateVideoRecord(
LAST_UPDATE_DATE = ?
WHERE video_id = ?
`,
).run(details.canonicalTitle || 'unknown', Date.now(), existing.video_id);
).run(details.canonicalTitle || 'unknown', toDbMs(nowMs()), existing.video_id);
return existing.video_id;
}
const nowMs = Date.now();
const currentMs = toDbMs(nowMs());
const insert = db.prepare(`
INSERT INTO imm_videos (
video_key, canonical_title, source_type, source_path, source_url,
@@ -1539,8 +1561,8 @@ export function getOrCreateVideoRecord(
null,
null,
null,
nowMs,
nowMs,
currentMs,
currentMs,
);
return Number(result.lastInsertRowid);
}
@@ -1582,7 +1604,7 @@ export function updateVideoMetadataRecord(
metadata.hashSha256,
metadata.screenshotPath,
metadata.metadataJson,
Date.now(),
toDbMs(nowMs()),
videoId,
);
}
@@ -1600,7 +1622,7 @@ export function updateVideoTitleRecord(
LAST_UPDATE_DATE = ?
WHERE video_id = ?
`,
).run(canonicalTitle, Date.now(), videoId);
).run(canonicalTitle, toDbMs(nowMs()), videoId);
}
export function upsertYoutubeVideoMetadata(
@@ -1608,7 +1630,7 @@ export function upsertYoutubeVideoMetadata(
videoId: number,
metadata: YoutubeVideoMetadata,
): void {
const nowMs = Date.now();
const currentMs = toDbMs(nowMs());
db.prepare(
`
INSERT INTO imm_youtube_videos (
@@ -1659,8 +1681,8 @@ export function upsertYoutubeVideoMetadata(
metadata.uploaderUrl ?? null,
metadata.description ?? null,
metadata.metadataJson ?? null,
nowMs,
nowMs,
nowMs,
currentMs,
currentMs,
currentMs,
);
}
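Every `Date.now()` call in this file now flows through `toDbMs(nowMs())`. Beyond consistent timestamp handling, the single seam is what makes the date-sensitive query tests practical. A hypothetical sketch of that pattern (the repo's actual `withMockDate` helper may be implemented differently):

```typescript
// Hypothetical clock seam illustrating why the diff funnels timestamps through
// nowMs(): tests can pin the clock without monkey-patching the Date global.
let currentClock: () => number = () => Date.now();

function nowMs(): number {
  return currentClock();
}

function withFixedClock<T>(fixedMs: number, fn: () => T): T {
  const previous = currentClock;
  currentClock = () => fixedMs;
  try {
    return fn();
  } finally {
    currentClock = previous;
  }
}
```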

View File

@@ -0,0 +1,7 @@
import assert from 'node:assert/strict';
import test from 'node:test';
import { nowMs } from './time.js';
test('nowMs returns wall-clock epoch milliseconds', () => {
assert.ok(nowMs() > 1_600_000_000_000);
});

View File

@@ -0,0 +1,8 @@
export function nowMs(): number {
const perf = globalThis.performance;
if (perf && Number.isFinite(perf.timeOrigin)) {
return Math.floor(perf.timeOrigin + perf.now());
}
return Date.now();
}
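A quick sanity check of the branch above, assuming a runtime whose `performance.timeOrigin` tracks the wall clock (true for a fresh Node or Bun process; very long-lived processes can drift):

```typescript
// Self-contained copy of the helper above: timeOrigin + now() reconstructs an
// epoch-ms value that should agree with Date.now() to within a small delta.
function nowMsViaPerformance(): number {
  const perf = globalThis.performance;
  if (perf && Number.isFinite(perf.timeOrigin)) {
    return Math.floor(perf.timeOrigin + perf.now());
  }
  return Date.now();
}

const delta = Math.abs(nowMsViaPerformance() - Date.now());
```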

View File

@@ -79,10 +79,7 @@ export {
handleOverlayWindowBeforeInputEvent,
isTabInputForMpvForwarding,
} from './overlay-window-input';
export {
initializeOverlayAnkiIntegration,
initializeOverlayRuntime,
} from './overlay-runtime-init';
export { initializeOverlayAnkiIntegration, initializeOverlayRuntime } from './overlay-runtime-init';
export { setVisibleOverlayVisible, updateVisibleOverlayVisibility } from './overlay-visibility';
export {
MPV_REQUEST_ID_SECONDARY_SUB_VISIBILITY,

View File

@@ -70,7 +70,11 @@ function createControllerConfigFixture() {
nextAudio: { kind: 'button' as const, buttonIndex: 5 },
playCurrentAudio: { kind: 'button' as const, buttonIndex: 7 },
toggleMpvPause: { kind: 'button' as const, buttonIndex: 6 },
leftStickHorizontal: { kind: 'axis' as const, axisIndex: 0, dpadFallback: 'horizontal' as const },
leftStickHorizontal: {
kind: 'axis' as const,
axisIndex: 0,
dpadFallback: 'horizontal' as const,
},
leftStickVertical: { kind: 'axis' as const, axisIndex: 1, dpadFallback: 'vertical' as const },
rightStickHorizontal: { kind: 'axis' as const, axisIndex: 3, dpadFallback: 'none' as const },
rightStickVertical: { kind: 'axis' as const, axisIndex: 4, dpadFallback: 'none' as const },

View File

@@ -64,7 +64,9 @@ export interface IpcServiceDeps {
getCurrentSecondarySub: () => string;
focusMainWindow: () => void;
runSubsyncManual: (request: SubsyncManualRunRequest) => Promise<SubsyncResult>;
onYoutubePickerResolve: (request: YoutubePickerResolveRequest) => Promise<YoutubePickerResolveResult>;
onYoutubePickerResolve: (
request: YoutubePickerResolveRequest,
) => Promise<YoutubePickerResolveResult>;
getAnkiConnectStatus: () => boolean;
getRuntimeOptions: () => unknown;
setRuntimeOption: (id: RuntimeOptionId, value: RuntimeOptionValue) => unknown;
@@ -167,7 +169,9 @@ export interface IpcDepsRuntimeOptions {
getMpvClient: () => MpvClientLike | null;
focusMainWindow: () => void;
runSubsyncManual: (request: SubsyncManualRunRequest) => Promise<SubsyncResult>;
onYoutubePickerResolve: (request: YoutubePickerResolveRequest) => Promise<YoutubePickerResolveResult>;
onYoutubePickerResolve: (
request: YoutubePickerResolveRequest,
) => Promise<YoutubePickerResolveResult>;
getAnkiConnectStatus: () => boolean;
getRuntimeOptions: () => unknown;
setRuntimeOption: (id: RuntimeOptionId, value: RuntimeOptionValue) => unknown;
@@ -291,13 +295,16 @@ export function registerIpcHandlers(deps: IpcServiceDeps, ipc: IpcMainRegistrar
deps.onOverlayModalOpened(parsedModal);
});
ipc.handle(IPC_CHANNELS.request.youtubePickerResolve, async (_event: unknown, request: unknown) => {
const parsedRequest = parseYoutubePickerResolveRequest(request);
if (!parsedRequest) {
return { ok: false, message: 'Invalid YouTube picker resolve payload' };
}
return await deps.onYoutubePickerResolve(parsedRequest);
});
ipc.handle(
IPC_CHANNELS.request.youtubePickerResolve,
async (_event: unknown, request: unknown) => {
const parsedRequest = parseYoutubePickerResolveRequest(request);
if (!parsedRequest) {
return { ok: false, message: 'Invalid YouTube picker resolve payload' };
}
return await deps.onYoutubePickerResolve(parsedRequest);
},
);
ipc.on(IPC_CHANNELS.command.openYomitanSettings, () => {
deps.openYomitanSettings();
@@ -375,13 +382,16 @@ export function registerIpcHandlers(deps: IpcServiceDeps, ipc: IpcMainRegistrar
},
);
ipc.handle(IPC_CHANNELS.command.saveControllerConfig, async (_event: unknown, update: unknown) => {
const parsedUpdate = parseControllerConfigUpdate(update);
if (!parsedUpdate) {
throw new Error('Invalid controller config payload');
}
await deps.saveControllerConfig(parsedUpdate);
});
ipc.handle(
IPC_CHANNELS.command.saveControllerConfig,
async (_event: unknown, update: unknown) => {
const parsedUpdate = parseControllerConfigUpdate(update);
if (!parsedUpdate) {
throw new Error('Invalid controller config payload');
}
await deps.saveControllerConfig(parsedUpdate);
},
);
ipc.handle(IPC_CHANNELS.request.getMecabStatus, () => {
return deps.getMecabStatus();
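The reflowed handlers in this hunk all share a validate-then-dispatch shape: parse the untrusted payload, bail with a uniform failure, and only then call the real dependency. A hypothetical distillation of that shape (names not from the repo):

```typescript
// Hypothetical helper capturing the validate-then-dispatch pattern used by the
// IPC handlers above: a null parse result short-circuits before dispatch.
function guardedHandler<T, R>(
  parse: (raw: unknown) => T | null,
  run: (value: T) => R,
): (raw: unknown) => R | { ok: false; message: string } {
  return (raw) => {
    const parsed = parse(raw);
    if (parsed === null) {
      return { ok: false, message: 'Invalid payload' };
    }
    return run(parsed);
  };
}
```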

View File

@@ -263,7 +263,9 @@ test('reportProgress posts timeline payload and treats failure as non-fatal', as
audioStreamIndex: 1,
subtitleStreamIndex: 2,
});
const expectedPostedPayload = JSON.parse(JSON.stringify(expectedPayload));
const expectedPostedPayload = Object.fromEntries(
Object.entries(structuredClone(expectedPayload)).filter(([, value]) => value !== undefined),
);
const ok = await service.reportProgress({
itemId: 'movie-2',
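The swap in this hunk from `JSON.parse(JSON.stringify(...))` to a filtered `structuredClone` makes the `undefined` filter load-bearing: JSON round-trips silently drop keys whose value is `undefined`, while `structuredClone` preserves them. A small demonstration with an illustrative payload (not the repo's real progress shape):

```typescript
// JSON round-tripping drops undefined-valued keys; structuredClone keeps them,
// so the clone must be filtered to match what actually goes over the wire.
const payload: Record<string, unknown> = {
  itemId: 'movie-2',
  positionTicks: 10,
  audioStreamIndex: undefined,
};

const viaJson = JSON.parse(JSON.stringify(payload)) as Record<string, unknown>;
const viaFilteredClone = Object.fromEntries(
  Object.entries(structuredClone(payload)).filter(([, value]) => value !== undefined),
);
```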

View File

@@ -1,6 +1,6 @@
import * as fs from 'fs';
import * as path from 'path';
import electron from 'electron';
import { ensureDirForFile } from '../../shared/fs-utils';
const { safeStorage } = electron;
@@ -27,15 +27,8 @@ export interface JellyfinTokenStore {
clearSession: () => void;
}
function ensureDirectory(filePath: string): void {
const dir = path.dirname(filePath);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
}
function writePayload(filePath: string, payload: PersistedSessionPayload): void {
ensureDirectory(filePath);
ensureDirForFile(filePath);
fs.writeFileSync(filePath, JSON.stringify(payload, null, 2), 'utf-8');
}
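This hunk replaces the local `ensureDirectory` helper with the shared `ensureDirForFile` from `shared/fs-utils`. That implementation is not shown in the diff; the sketch below captures the behavior the deleted helper had and the shared one is assumed to provide:

```typescript
import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';

// Assumed behavior of the shared ensureDirForFile helper: create the parent
// directory chain of a file path if it does not already exist.
function ensureDirForFile(filePath: string): void {
  fs.mkdirSync(path.dirname(filePath), { recursive: true });
}

const base = fs.mkdtempSync(path.join(os.tmpdir(), 'ensure-dir-test-'));
const target = path.join(base, 'a', 'b', 'session.json');
ensureDirForFile(target);
```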

View File

@@ -0,0 +1,51 @@
import assert from 'node:assert/strict';
import test from 'node:test';
import {
getIgnoredPos1Entries,
JLPT_EXCLUDED_TERMS,
JLPT_IGNORED_MECAB_POS1,
JLPT_IGNORED_MECAB_POS1_ENTRIES,
JLPT_IGNORED_MECAB_POS1_LIST,
shouldIgnoreJlptByTerm,
shouldIgnoreJlptForMecabPos1,
} from './jlpt-token-filter';
test('shouldIgnoreJlptByTerm matches the excluded JLPT lexical terms', () => {
assert.equal(shouldIgnoreJlptByTerm('この'), true);
assert.equal(shouldIgnoreJlptByTerm('そこ'), true);
assert.equal(shouldIgnoreJlptByTerm('猫'), false);
assert.deepEqual(Array.from(JLPT_EXCLUDED_TERMS), [
'この',
'その',
'あの',
'どの',
'これ',
'それ',
'あれ',
'どれ',
'ここ',
'そこ',
'あそこ',
'どこ',
'こと',
'ああ',
'ええ',
'うう',
'おお',
'はは',
'へえ',
'ふう',
'ほう',
]);
});
test('shouldIgnoreJlptForMecabPos1 matches the exported ignored POS1 list', () => {
assert.equal(shouldIgnoreJlptForMecabPos1('助詞'), true);
assert.equal(shouldIgnoreJlptForMecabPos1('名詞'), false);
assert.deepEqual(JLPT_IGNORED_MECAB_POS1, JLPT_IGNORED_MECAB_POS1_LIST);
assert.deepEqual(
JLPT_IGNORED_MECAB_POS1_ENTRIES.map((entry) => entry.pos1),
JLPT_IGNORED_MECAB_POS1_LIST,
);
assert.deepEqual(getIgnoredPos1Entries(), JLPT_IGNORED_MECAB_POS1_ENTRIES);
});

View File

@@ -59,3 +59,21 @@ export function handleOverlayWindowBeforeInputEvent(options: {
options.preventDefault();
return true;
}
export function handleOverlayWindowBlurred(options: {
kind: OverlayWindowKind;
windowVisible: boolean;
isOverlayVisible: (kind: OverlayWindowKind) => boolean;
ensureOverlayWindowLevel: () => void;
moveWindowTop: () => void;
}): boolean {
if (options.kind === 'visible' && !options.isOverlayVisible(options.kind)) {
return false;
}
options.ensureOverlayWindowLevel();
if (options.kind === 'visible' && options.windowVisible) {
options.moveWindowTop();
}
return true;
}

View File

@@ -2,6 +2,7 @@ import assert from 'node:assert/strict';
import test from 'node:test';
import {
handleOverlayWindowBeforeInputEvent,
handleOverlayWindowBlurred,
isTabInputForMpvForwarding,
} from './overlay-window-input';
@@ -82,3 +83,58 @@ test('handleOverlayWindowBeforeInputEvent leaves modal Tab handling alone', () =
assert.equal(handled, false);
assert.deepEqual(calls, []);
});
test('handleOverlayWindowBlurred skips visible overlay restacking after manual hide', () => {
const calls: string[] = [];
const handled = handleOverlayWindowBlurred({
kind: 'visible',
windowVisible: true,
isOverlayVisible: () => false,
ensureOverlayWindowLevel: () => {
calls.push('ensure-level');
},
moveWindowTop: () => {
calls.push('move-top');
},
});
assert.equal(handled, false);
assert.deepEqual(calls, []);
});
test('handleOverlayWindowBlurred preserves active visible/modal window stacking', () => {
const calls: string[] = [];
assert.equal(
handleOverlayWindowBlurred({
kind: 'visible',
windowVisible: true,
isOverlayVisible: () => true,
ensureOverlayWindowLevel: () => {
calls.push('ensure-visible');
},
moveWindowTop: () => {
calls.push('move-visible');
},
}),
true,
);
assert.equal(
handleOverlayWindowBlurred({
kind: 'modal',
windowVisible: true,
isOverlayVisible: () => false,
ensureOverlayWindowLevel: () => {
calls.push('ensure-modal');
},
moveWindowTop: () => {
calls.push('move-modal');
},
}),
true,
);
assert.deepEqual(calls, ['ensure-visible', 'move-visible', 'ensure-modal']);
});

View File

@@ -5,6 +5,7 @@ import { createLogger } from '../../logger';
import { IPC_CHANNELS } from '../../shared/ipc/contracts';
import {
handleOverlayWindowBeforeInputEvent,
handleOverlayWindowBlurred,
type OverlayWindowKind,
} from './overlay-window-input';
import { buildOverlayWindowOptions } from './overlay-window-options';
@@ -124,12 +125,18 @@ export function createOverlayWindow(
});
window.on('blur', () => {
if (!window.isDestroyed()) {
options.ensureOverlayWindowLevel(window);
if (kind === 'visible' && window.isVisible()) {
if (window.isDestroyed()) return;
handleOverlayWindowBlurred({
kind,
windowVisible: window.isVisible(),
isOverlayVisible: options.isOverlayVisible,
ensureOverlayWindowLevel: () => {
options.ensureOverlayWindowLevel(window);
},
moveWindowTop: () => {
window.moveTop();
}
}
},
});
});
if (options.isDev && kind === 'visible') {

View File

@@ -1,4 +1,8 @@
import { RuntimeOptionApplyResult, RuntimeOptionId, RuntimeOptionValue } from '../../types';
import {
RuntimeOptionApplyResult,
RuntimeOptionId,
RuntimeOptionValue,
} from '../../types/runtime-options';
export interface RuntimeOptionsManagerLike {
setOptionValue: (id: RuntimeOptionId, value: RuntimeOptionValue) => RuntimeOptionApplyResult;

View File

@@ -1,8 +1,9 @@
import { Hono } from 'hono';
import { serve } from '@hono/node-server';
import type { ImmersionTrackerService } from './immersion-tracker-service.js';
import http, { type IncomingMessage, type ServerResponse } from 'node:http';
import { basename, extname, resolve, sep } from 'node:path';
import { readFileSync, existsSync, statSync } from 'node:fs';
import { Readable } from 'node:stream';
import { MediaGenerator } from '../../media-generator.js';
import { AnkiConnectClient } from '../../anki-connect.js';
import type { AnkiConnectConfig } from '../../types.js';
@@ -60,6 +61,71 @@ function resolveStatsNoteFieldName(
return null;
}
function toFetchHeaders(headers: IncomingMessage['headers']): Headers {
const fetchHeaders = new Headers();
for (const [name, value] of Object.entries(headers)) {
if (value === undefined) continue;
if (Array.isArray(value)) {
for (const entry of value) {
fetchHeaders.append(name, entry);
}
continue;
}
fetchHeaders.set(name, value);
}
return fetchHeaders;
}
function toFetchRequest(req: IncomingMessage): Request {
const method = req.method ?? 'GET';
const url = new URL(req.url ?? '/', `http://${req.headers.host ?? '127.0.0.1'}`);
const init: RequestInit & { duplex?: 'half' } = {
method,
headers: toFetchHeaders(req.headers),
};
if (method !== 'GET' && method !== 'HEAD') {
init.body = Readable.toWeb(req) as BodyInit;
init.duplex = 'half';
}
return new Request(url, init);
}
async function writeFetchResponse(res: ServerResponse, response: Response): Promise<void> {
res.statusCode = response.status;
response.headers.forEach((value, key) => {
res.setHeader(key, value);
});
const body = await response.arrayBuffer();
res.end(Buffer.from(body));
}
function startNodeHttpServer(app: Hono, config: StatsServerConfig): { close: () => void } {
const server = http.createServer((req, res) => {
void (async () => {
try {
await writeFetchResponse(res, await app.fetch(toFetchRequest(req)));
} catch {
res.statusCode = 500;
res.end('Internal Server Error');
}
})();
});
server.listen(config.port, '127.0.0.1');
return {
close: () => {
server.close();
},
};
}
/** Load known words cache from disk into a Set. Returns null if unavailable. */
function loadKnownWordsSet(cachePath: string | undefined): Set<string> | null {
if (!cachePath || !existsSync(cachePath)) return null;
@@ -182,7 +248,9 @@ function buildAnkiNotePreview(
return {
word: getPreferredNoteFieldValue(fields, [getConfiguredWordFieldName(ankiConfig)]),
sentence: getPreferredNoteFieldValue(fields, [getConfiguredSentenceFieldName(ankiConfig)]),
translation: getPreferredNoteFieldValue(fields, [getConfiguredTranslationFieldName(ankiConfig)]),
translation: getPreferredNoteFieldValue(fields, [
getConfiguredTranslationFieldName(ankiConfig),
]),
};
}
@@ -657,7 +725,11 @@ export function createStatsApp(
method: 'POST',
headers: { 'Content-Type': 'application/json' },
signal: AbortSignal.timeout(ANKI_CONNECT_FETCH_TIMEOUT_MS),
body: JSON.stringify({ action: 'notesInfo', version: 6, params: { notes: resolvedNoteIds } }),
body: JSON.stringify({
action: 'notesInfo',
version: 6,
params: { notes: resolvedNoteIds },
}),
});
const result = (await response.json()) as {
result?: Array<{ noteId: number; fields: Record<string, { value: string }> }>;
@@ -1001,15 +1073,29 @@ export function startStatsServer(config: StatsServerConfig): { close: () => void
resolveAnkiNoteId: config.resolveAnkiNoteId,
});
const server = serve({
fetch: app.fetch,
port: config.port,
hostname: '127.0.0.1',
});
return {
close: () => {
server.close();
},
const bunRuntime = globalThis as typeof globalThis & {
Bun?: {
serve?: (options: {
fetch: (typeof app)['fetch'];
port: number;
hostname: string;
}) => { stop: () => void };
};
};
if (bunRuntime.Bun?.serve) {
const server = bunRuntime.Bun.serve({
fetch: app.fetch,
port: config.port,
hostname: '127.0.0.1',
});
return {
close: () => {
server.stop();
},
};
}
return startNodeHttpServer(app, config);
}
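`startStatsServer` now probes for `Bun.serve` and only falls back to the `node:http` bridge when it is absent. A minimal sketch of that runtime detection, assuming nothing about Bun beyond the presence of its `serve` function:

```typescript
// Runtime-detection sketch mirroring the fallback above: pick Bun's native
// server when the Bun global exposes serve(), otherwise use the node:http path.
function detectServeRuntime(): 'bun' | 'node' {
  const maybeBun = (globalThis as { Bun?: { serve?: unknown } }).Bun;
  return typeof maybeBun?.serve === 'function' ? 'bun' : 'node';
}
```

Either way the Hono `app.fetch` handler stays identical; only the listener that feeds it requests differs.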

View File

@@ -35,6 +35,21 @@ test('parseSrtCues handles multi-line subtitle text', () => {
assert.equal(cues[0]!.text, 'これは\nテストです');
});
test('parseSrtCues strips HTML-like markup while preserving line breaks', () => {
const content = [
'1',
'00:01:00,000 --> 00:01:05,000',
'<font color="japanese">これは</font>',
'<font color="japanese">テストです</font>',
'',
].join('\n');
const cues = parseSrtCues(content);
assert.equal(cues.length, 1);
assert.equal(cues[0]!.text, 'これは\nテストです');
});
test('parseSrtCues handles hours in timestamps', () => {
const content = ['1', '01:30:00,000 --> 01:30:05,000', 'テスト', ''].join('\n');
@@ -134,6 +149,18 @@ test('parseAssCues handles \\N line breaks', () => {
assert.equal(cues[0]!.text, '一行目\\N二行目');
});
test('parseAssCues strips HTML-like markup while preserving ASS line breaks', () => {
const content = [
'[Events]',
'Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text',
'Dialogue: 0,0:00:01.00,0:00:04.00,Default,,0,0,0,,<font color="japanese">一行目</font>\\N<font color="japanese">二行目</font>',
].join('\n');
const cues = parseAssCues(content);
assert.equal(cues[0]!.text, '一行目\\N二行目');
});
test('parseAssCues returns empty for content without Events section', () => {
const content = ['[Script Info]', 'Title: Test'].join('\n');

View File

@@ -4,6 +4,8 @@ export interface SubtitleCue {
text: string;
}
const HTML_SUBTITLE_TAG_PATTERN = /<\/?[A-Za-z][^>\n]*>/g;
const SRT_TIMING_PATTERN =
/^\s*(?:(\d{1,2}):)?(\d{2}):(\d{2})[,.](\d{1,3})\s*-->\s*(?:(\d{1,2}):)?(\d{2}):(\d{2})[,.](\d{1,3})/;
@@ -21,6 +23,10 @@ function parseTimestamp(
);
}
function sanitizeSubtitleCueText(text: string): string {
return text.replace(ASS_OVERRIDE_TAG_PATTERN, '').replace(HTML_SUBTITLE_TAG_PATTERN, '').trim();
}
export function parseSrtCues(content: string): SubtitleCue[] {
const cues: SubtitleCue[] = [];
const lines = content.split(/\r?\n/);
@@ -54,7 +60,7 @@ export function parseSrtCues(content: string): SubtitleCue[] {
i += 1;
}
const text = textLines.join('\n').trim();
const text = sanitizeSubtitleCueText(textLines.join('\n'));
if (text) {
cues.push({ startTime, endTime, text });
}
@@ -140,13 +146,9 @@ export function parseAssCues(content: string): SubtitleCue[] {
continue;
}
const rawText = fields
.slice(textFieldIndex)
.join(',')
.replace(ASS_OVERRIDE_TAG_PATTERN, '')
.trim();
if (rawText) {
cues.push({ startTime, endTime, text: rawText });
const text = sanitizeSubtitleCueText(fields.slice(textFieldIndex).join(','));
if (text) {
cues.push({ startTime, endTime, text });
}
}
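The shared sanitizer introduced above can be exercised in isolation. `HTML_SUBTITLE_TAG_PATTERN` is copied from the diff; `ASS_OVERRIDE_TAG_PATTERN` is not shown there, so the `{\...}` form below is an assumption about its shape:

```typescript
const HTML_SUBTITLE_TAG_PATTERN = /<\/?[A-Za-z][^>\n]*>/g;
const ASS_OVERRIDE_TAG_PATTERN = /\{\\[^}]*\}/g; // assumed shape, not from the diff

// Mirrors sanitizeSubtitleCueText above: strip ASS override tags, then
// HTML-like tags, then trim, while leaving line breaks intact.
function sanitizeSubtitleCueText(text: string): string {
  return text.replace(ASS_OVERRIDE_TAG_PATTERN, '').replace(HTML_SUBTITLE_TAG_PATTERN, '').trim();
}
```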

View File

@@ -0,0 +1,113 @@
import assert from 'node:assert/strict';
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import test from 'node:test';
import {
loadSubtitlePosition,
saveSubtitlePosition,
updateCurrentMediaPath,
} from './subtitle-position';
function makeTempDir(): string {
return fs.mkdtempSync(path.join(os.tmpdir(), 'subminer-subtitle-position-test-'));
}
test('saveSubtitlePosition queues pending position when media path is unavailable', () => {
const queued: Array<{ yPercent: number }> = [];
let persisted = false;
saveSubtitlePosition({
position: { yPercent: 21 },
currentMediaPath: null,
subtitlePositionsDir: makeTempDir(),
onQueuePending: (position) => {
queued.push(position);
},
onPersisted: () => {
persisted = true;
},
});
assert.deepEqual(queued, [{ yPercent: 21 }]);
assert.equal(persisted, false);
});
test('saveSubtitlePosition persists and loadSubtitlePosition restores the stored position', () => {
const dir = makeTempDir();
const mediaPath = path.join(dir, 'episode.mkv');
const position = { yPercent: 37 };
let persisted = false;
saveSubtitlePosition({
position,
currentMediaPath: mediaPath,
subtitlePositionsDir: dir,
onQueuePending: () => {
throw new Error('unexpected queue');
},
onPersisted: () => {
persisted = true;
},
});
const loaded = loadSubtitlePosition({
currentMediaPath: mediaPath,
fallbackPosition: { yPercent: 0 },
subtitlePositionsDir: dir,
});
assert.equal(persisted, true);
assert.deepEqual(loaded, position);
assert.equal(
fs.readdirSync(dir).some((entry) => entry.endsWith('.json')),
true,
);
});
test('updateCurrentMediaPath persists a queued subtitle position before broadcasting', () => {
const dir = makeTempDir();
let currentMediaPath: string | null = null;
let cleared = false;
const setPositions: Array<{ yPercent: number } | null> = [];
const broadcasts: Array<{ yPercent: number } | null> = [];
const pending = { yPercent: 64 };
updateCurrentMediaPath({
mediaPath: path.join(dir, 'video.mkv'),
currentMediaPath,
pendingSubtitlePosition: pending,
subtitlePositionsDir: dir,
loadSubtitlePosition: () =>
loadSubtitlePosition({
currentMediaPath,
fallbackPosition: { yPercent: 0 },
subtitlePositionsDir: dir,
}),
setCurrentMediaPath: (next) => {
currentMediaPath = next;
},
clearPendingSubtitlePosition: () => {
cleared = true;
},
setSubtitlePosition: (position) => {
setPositions.push(position);
},
broadcastSubtitlePosition: (position) => {
broadcasts.push(position);
},
});
assert.equal(currentMediaPath, path.join(dir, 'video.mkv'));
assert.equal(cleared, true);
assert.deepEqual(setPositions, [pending]);
assert.deepEqual(broadcasts, [pending]);
assert.deepEqual(
loadSubtitlePosition({
currentMediaPath,
fallbackPosition: { yPercent: 0 },
subtitlePositionsDir: dir,
}),
pending,
);
});

View File

@@ -228,7 +228,11 @@ test('consumeCachedSubtitle returns prefetched payload and prevents reprocessing
controller.onSubtitleChange('猫\nです');
await flushMicrotasks();
assert.equal(tokenizeCalls, 0, 'same cached subtitle should not reprocess after immediate consume');
assert.equal(
tokenizeCalls,
0,
'same cached subtitle should not reprocess after immediate consume',
);
assert.deepEqual(emitted, []);
});

View File

@@ -3428,40 +3428,43 @@ test('tokenizeSubtitle keeps standalone grammar-only tokens hoverable while clea
test('tokenizeSubtitle keeps trailing quote-particle merged tokens hoverable while clearing only their annotation metadata', async () => {
const result = await tokenizeSubtitle(
'どうしてもって',
makeDepsFromYomitanTokens([{ surface: 'どうしてもって', reading: 'どうしてもって', headword: 'どうしても' }], {
getFrequencyDictionaryEnabled: () => true,
getFrequencyRank: (text) => (text === 'どうしても' ? 123 : null),
getJlptLevel: (text) => (text === 'どうしても' ? 'N3' : null),
tokenizeWithMecab: async () => [
{
headword: 'どうしても',
surface: 'どうしても',
reading: 'ドウシテモ',
startPos: 0,
endPos: 5,
partOfSpeech: PartOfSpeech.other,
pos1: '副詞',
pos2: '一般',
isMerged: false,
isKnown: false,
isNPlusOneTarget: false,
},
{
headword: 'って',
surface: 'って',
reading: 'ッテ',
startPos: 5,
endPos: 7,
partOfSpeech: PartOfSpeech.particle,
pos1: '助詞',
pos2: '格助詞',
isMerged: false,
isKnown: false,
isNPlusOneTarget: false,
},
],
getMinSentenceWordsForNPlusOne: () => 1,
}),
makeDepsFromYomitanTokens(
[{ surface: 'どうしてもって', reading: 'どうしてもって', headword: 'どうしても' }],
{
getFrequencyDictionaryEnabled: () => true,
getFrequencyRank: (text) => (text === 'どうしても' ? 123 : null),
getJlptLevel: (text) => (text === 'どうしても' ? 'N3' : null),
tokenizeWithMecab: async () => [
{
headword: 'どうしても',
surface: 'どうしても',
reading: 'ドウシテモ',
startPos: 0,
endPos: 5,
partOfSpeech: PartOfSpeech.other,
pos1: '副詞',
pos2: '一般',
isMerged: false,
isKnown: false,
isNPlusOneTarget: false,
},
{
headword: 'って',
surface: 'って',
reading: 'ッテ',
startPos: 5,
endPos: 7,
partOfSpeech: PartOfSpeech.particle,
pos1: '助詞',
pos2: '格助詞',
isMerged: false,
isKnown: false,
isNPlusOneTarget: false,
},
],
getMinSentenceWordsForNPlusOne: () => 1,
},
),
);
assert.equal(result.text, 'どうしてもって');
@@ -3812,7 +3815,14 @@ test('tokenizeSubtitle clears all annotations for explanatory pondering endings'
jlptLevel: token.jlptLevel,
})),
[
{ surface: '俺', headword: '俺', isKnown: true, isNPlusOneTarget: false, frequencyRank: 19, jlptLevel: 'N5' },
{
surface: '俺',
headword: '俺',
isKnown: true,
isNPlusOneTarget: false,
frequencyRank: 19,
jlptLevel: 'N5',
},
{
surface: 'どうかしちゃった',
headword: 'どうかしちゃう',

View File

@@ -140,7 +140,11 @@ function isExcludedFromSubtitleAnnotationsByPos1(normalizedPos1: string): boolea
function isExcludedTrailingParticleMergedToken(token: MergedToken): boolean {
const normalizedSurface = normalizeJlptTextForExclusion(token.surface);
const normalizedHeadword = normalizeJlptTextForExclusion(token.headword);
if (!normalizedSurface || !normalizedHeadword || !normalizedSurface.startsWith(normalizedHeadword)) {
if (
!normalizedSurface ||
!normalizedHeadword ||
!normalizedSurface.startsWith(normalizedHeadword)
) {
return false;
}
@@ -164,7 +168,10 @@ function isExcludedTrailingParticleMergedToken(token: MergedToken): boolean {
function isAuxiliaryStemGrammarTailToken(token: MergedToken): boolean {
const pos1Parts = splitNormalizedTagParts(normalizePos1Tag(token.pos1));
if (pos1Parts.length === 0 || !pos1Parts.every((part) => AUXILIARY_STEM_GRAMMAR_TAIL_POS1.has(part))) {
if (
pos1Parts.length === 0 ||
!pos1Parts.every((part) => AUXILIARY_STEM_GRAMMAR_TAIL_POS1.has(part))
) {
return false;
}

View File

@@ -46,7 +46,11 @@ const SUBTITLE_ANNOTATION_EXCLUDED_EXPLANATORY_ENDING_TRAILING_PARTICLES = [
'かな',
'かね',
] as const;
const SUBTITLE_ANNOTATION_EXCLUDED_EXPLANATORY_ENDING_THOUGHT_SUFFIXES = ['か', 'かな', 'かね'] as const;
const SUBTITLE_ANNOTATION_EXCLUDED_EXPLANATORY_ENDING_THOUGHT_SUFFIXES = [
'か',
'かな',
'かね',
] as const;
const SUBTITLE_ANNOTATION_EXCLUDED_EXPLANATORY_ENDINGS = new Set(
SUBTITLE_ANNOTATION_EXCLUDED_EXPLANATORY_ENDING_PREFIXES.flatMap((prefix) =>
SUBTITLE_ANNOTATION_EXCLUDED_EXPLANATORY_ENDING_CORES.flatMap((core) =>
@@ -96,9 +100,7 @@ function isExcludedByTagSet(normalizedTag: string, exclusions: ReadonlySet<strin
return parts.every((part) => exclusions.has(part));
}
function resolvePos1Exclusions(
options: SubtitleAnnotationFilterOptions = {},
): ReadonlySet<string> {
function resolvePos1Exclusions(options: SubtitleAnnotationFilterOptions = {}): ReadonlySet<string> {
if (options.pos1Exclusions) {
return options.pos1Exclusions;
}
@@ -106,9 +108,7 @@ function resolvePos1Exclusions(
return resolveAnnotationPos1ExclusionSet(DEFAULT_ANNOTATION_POS1_EXCLUSION_CONFIG);
}
function resolvePos2Exclusions(
options: SubtitleAnnotationFilterOptions = {},
): ReadonlySet<string> {
function resolvePos2Exclusions(options: SubtitleAnnotationFilterOptions = {}): ReadonlySet<string> {
if (options.pos2Exclusions) {
return options.pos2Exclusions;
}
@@ -212,7 +212,11 @@ function isReduplicatedKanaSfxWithOptionalTrailingTo(text: string): boolean {
function isExcludedTrailingParticleMergedToken(token: MergedToken): boolean {
const normalizedSurface = normalizeKana(token.surface);
const normalizedHeadword = normalizeKana(token.headword);
if (!normalizedSurface || !normalizedHeadword || !normalizedSurface.startsWith(normalizedHeadword)) {
if (
!normalizedSurface ||
!normalizedHeadword ||
!normalizedSurface.startsWith(normalizedHeadword)
) {
return false;
}
@@ -236,7 +240,10 @@ function isExcludedTrailingParticleMergedToken(token: MergedToken): boolean {
 function isAuxiliaryStemGrammarTailToken(token: MergedToken): boolean {
   const pos1Parts = splitNormalizedTagParts(normalizePosTag(token.pos1));
-  if (pos1Parts.length === 0 || !pos1Parts.every((part) => AUXILIARY_STEM_GRAMMAR_TAIL_POS1.has(part))) {
+  if (
+    pos1Parts.length === 0 ||
+    !pos1Parts.every((part) => AUXILIARY_STEM_GRAMMAR_TAIL_POS1.has(part))
+  ) {
     return false;
   }

View File

@@ -1255,7 +1255,7 @@ test('dictionary settings helpers upsert and remove dictionary entries without r
   const deps = createDeps(async (script) => {
     scripts.push(script);
     if (script.includes('optionsGetFull')) {
-      return JSON.parse(JSON.stringify(optionsFull));
+      return structuredClone(optionsFull);
     }
     if (script.includes('setAllSettings')) {
       return true;

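The swap from `JSON.parse(JSON.stringify(...))` to `structuredClone` above is more than style: a JSON round-trip drops `undefined`, stringifies `Date`s, and flattens `Map`/`Set` values to empty objects. A minimal sketch of the difference, using a hypothetical `options` shape rather than the real `optionsFull` fixture:

```typescript
// Hypothetical options shape; the real optionsFull fixture lives in the test file.
const options = { updatedAt: new Date(0), tags: new Map([['pos1', 1]]) };

// JSON round-trip: Date becomes an ISO string, Map becomes an empty object.
const viaJson = JSON.parse(JSON.stringify(options));

// structuredClone: deep copy preserving Date, Map, Set, etc. (Node >= 17, Bun).
const viaClone = structuredClone(options);

console.log(typeof viaJson.updatedAt);           // 'string'
console.log(viaClone.updatedAt instanceof Date); // true
console.log(viaClone.tags instanceof Map);       // true
```

For deep-cloning a mocked settings snapshot before mutation, `structuredClone` is both faster to read and lossless for these types.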
View File

@@ -3,7 +3,11 @@ import type { YoutubeTrackKind } from './kinds';
 export type { YoutubeTrackKind };
 export function normalizeYoutubeLangCode(value: string): string {
-  return value.trim().toLowerCase().replace(/_/g, '-').replace(/[^a-z0-9-]+/g, '');
+  return value
+    .trim()
+    .toLowerCase()
+    .replace(/_/g, '-')
+    .replace(/[^a-z0-9-]+/g, '');
 }
 export function isJapaneseYoutubeLang(value: string): boolean {

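The reflowed chain makes the normalization steps easier to audit. As a quick illustration of its behavior, here is a standalone copy of the function from the diff with the effect of each step noted:

```typescript
function normalizeYoutubeLangCode(value: string): string {
  return value
    .trim()                       // drop surrounding whitespace
    .toLowerCase()                // BCP 47 subtags compare case-insensitively
    .replace(/_/g, '-')           // tolerate underscore-separated codes like ja_JP
    .replace(/[^a-z0-9-]+/g, ''); // strip anything outside [a-z0-9-]
}

console.log(normalizeYoutubeLangCode(' ja_JP ')); // 'ja-jp'
console.log(normalizeYoutubeLangCode('EN-us'));   // 'en-us'
```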
View File

@@ -75,15 +75,11 @@ test('probeYoutubeVideoMetadata returns null on malformed yt-dlp JSON', async ()
   });
 });
-test(
-  'probeYoutubeVideoMetadata times out when yt-dlp hangs',
-  { timeout: 20_000 },
-  async () => {
-    await withHangingFakeYtDlp(async () => {
-      await assert.rejects(
-        probeYoutubeVideoMetadata('https://www.youtube.com/watch?v=abc123'),
-        /timed out after 15000ms/,
-      );
-    });
-  },
-);
+test('probeYoutubeVideoMetadata times out when yt-dlp hangs', { timeout: 20_000 }, async () => {
+  await withHangingFakeYtDlp(async () => {
+    await assert.rejects(
+      probeYoutubeVideoMetadata('https://www.youtube.com/watch?v=abc123'),
+      /timed out after 15000ms/,
+    );
+  });
+});

View File

@@ -25,9 +25,7 @@ function decodeHtmlEntities(value: string): string {
     .replace(/&gt;/g, '>')
     .replace(/&quot;/g, '"')
     .replace(/&#39;/g, "'")
-    .replace(/&#(\d+);/g, (match, codePoint) =>
-      decodeNumericEntity(match, Number(codePoint)),
-    )
+    .replace(/&#(\d+);/g, (match, codePoint) => decodeNumericEntity(match, Number(codePoint)))
     .replace(/&#x([0-9a-f]+);/gi, (match, codePoint) =>
       decodeNumericEntity(match, Number.parseInt(codePoint, 16)),
     );
@@ -52,9 +50,7 @@ function extractYoutubeTimedTextRows(xml: string): YoutubeTimedTextRow[] {
       continue;
     }
-    const inner = (match[2] ?? '')
-      .replace(/<br\s*\/?>/gi, '\n')
-      .replace(/<[^>]+>/g, '');
+    const inner = (match[2] ?? '').replace(/<br\s*\/?>/gi, '\n').replace(/<[^>]+>/g, '');
     const text = decodeHtmlEntities(inner).trim();
     if (!text) {
       continue;
@@ -110,7 +106,9 @@ export function convertYoutubeTimedTextToVtt(xml: string): string {
     if (!text) {
       continue;
     }
-    blocks.push(`${formatVttTimestamp(row.startMs)} --> ${formatVttTimestamp(clampedEnd)}\n${text}`);
+    blocks.push(
+      `${formatVttTimestamp(row.startMs)} --> ${formatVttTimestamp(clampedEnd)}\n${text}`,
+    );
   }
   return `WEBVTT\n\n${blocks.join('\n\n')}\n`;

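The entity-decoding hunks above route both decimal (`&#...;`) and hex (`&#x...;`) forms through `decodeNumericEntity`, which presumably validates the code point before converting it. A simplified sketch of the same two-pass decode, without that validation:

```typescript
// Simplified sketch: the repo's decodeNumericEntity likely guards invalid
// code points; note String.fromCodePoint throws on out-of-range values.
function decodeNumericEntities(value: string): string {
  return value
    .replace(/&#(\d+);/g, (_, cp) => String.fromCodePoint(Number(cp)))
    .replace(/&#x([0-9a-f]+);/gi, (_, cp) => String.fromCodePoint(Number.parseInt(cp, 16)));
}

console.log(decodeNumericEntities('&#65;&#x3042;')); // 'Aあ'
```

The hex pass runs second and never sees decimal entities, since `x` fails the `\d` match in the first pattern.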