13 Commits

Author SHA1 Message Date
4d95de51a0 fix: address CodeRabbit review feedback 2026-03-27 03:05:32 -07:00
ed32f985c6 test: increase launcher test timeout for CI stability 2026-03-27 02:24:56 -07:00
854179b9c1 Add backlog tasks and launcher time helper tests 2026-03-27 02:01:36 -07:00
- Track follow-up cleanup work in Backlog.md
- Replace Date.now usage with shared nowMs helper
- Add launcher args/parser and core regression tests
a3ddfa0641 refactor: compose startup and setup window wiring 2026-03-27 01:14:58 -07:00
49a582b4fc refactor: migrate shared type imports 2026-03-27 00:33:52 -07:00
a92631bf52 chore: update backlog task records 2026-03-27 00:33:48 -07:00
ac857e932e refactor: split immersion tracker query modules 2026-03-27 00:33:43 -07:00
8c633f7e48 fix: add stats server node fallback 2026-03-27 00:28:05 -07:00
d2cfa1b871 feat: add repo-local subminer workflow plugin 2026-03-27 00:27:13 -07:00
3fe63a6afa refactor: use bun serve for stats server 2026-03-26 23:18:43 -07:00
5dd8bb7fbf refactor: split shared type entrypoints 2026-03-26 23:17:04 -07:00
5b06579e65 refactor: split character dictionary runtime modules 2026-03-26 23:16:47 -07:00
416942ff2d chore(backlog): add mining workflow milestone and tasks 2026-03-26 22:32:11 -07:00
214 changed files with 12612 additions and 7710 deletions


@@ -0,0 +1,20 @@
{
"name": "subminer-local",
"interface": {
"displayName": "SubMiner Local"
},
"plugins": [
{
"name": "subminer-workflow",
"source": {
"source": "local",
"path": "./plugins/subminer-workflow"
},
"policy": {
"installation": "AVAILABLE",
"authentication": "ON_INSTALL"
},
"category": "Productivity"
}
]
}
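A manifest like the one above can be smoke-checked before wiring it in. A rough sketch (the inline manifest and filename are illustrative; a real check would use a proper JSON parser rather than `sed`):

```shell
# Rough sanity check for a local-plugin manifest.
# The inline JSON below is illustrative sample data, not the real file.
manifest=$(mktemp)
cat >"$manifest" <<'JSON'
{ "plugins": [ { "name": "subminer-workflow",
    "source": { "source": "local", "path": "./plugins/subminer-workflow" } } ] }
JSON
# Extract the local plugin path (good enough for a smoke check).
plugin_path=$(sed -n 's/.*"path": *"\([^"]*\)".*/\1/p' "$manifest")
printf '%s\n' "$plugin_path"
rm -f "$manifest"
```

If the extracted path is empty or does not exist on disk, the plugin source is misconfigured.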


@@ -1,127 +1,22 @@
---
name: "subminer-change-verification"
description: "Use when working in the SubMiner repo and you need to verify code changes actually work. Covers targeted regression checks during debugging and pre-handoff verification, with cheap-first lane selection for config, docs, launcher/plugin, runtime-compat, and optional real-runtime escalation."
name: 'subminer-change-verification'
description: 'Compatibility shim. Canonical SubMiner change verification workflow now lives in the repo-local subminer-workflow plugin.'
---
# SubMiner Change Verification
# Compatibility Shim
Use this skill for SubMiner code changes. Default to cheap, repo-native verification first. Escalate only when the changed behavior actually depends on Electron, mpv, overlay/window tracking, or other GUI-sensitive runtime behavior.
Canonical source:
- `plugins/subminer-workflow/skills/subminer-change-verification/SKILL.md`
## Scripts
- `scripts/classify_subminer_diff.sh`
  - Emits suggested lanes and flags from explicit paths or current git changes.
- `scripts/verify_subminer_change.sh`
  - Runs selected lanes, captures artifacts, and writes a compact summary.
Canonical helper scripts:
If you need an explicit installed path, use the directory that contains this `SKILL.md`. The helper scripts live under:
- `plugins/subminer-workflow/skills/subminer-change-verification/scripts/classify_subminer_diff.sh`
- `plugins/subminer-workflow/skills/subminer-change-verification/scripts/verify_subminer_change.sh`
```bash
export SUBMINER_VERIFY_SKILL="<path-to-skill>"
```
## Default workflow
1. Inspect the changed files or user-requested area.
2. Run the classifier unless you already know the right lane.
3. Run the verifier with the cheapest sufficient lane set.
4. If the classifier emits `flag:real-runtime-candidate`, do not jump straight to runtime verification. First run the non-runtime lanes.
5. Escalate to explicit `--lane real-runtime --allow-real-runtime` only when cheaper lanes cannot validate the behavior claim.
6. Return:
- verification summary
- exact commands run
- artifact paths
- skipped lanes and blockers
## Quick start
Repo-source quick start:
```bash
bash .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh
```
Installed-skill quick start:
```bash
bash "$SUBMINER_VERIFY_SKILL/scripts/classify_subminer_diff.sh"
```
Classify explicit files:
```bash
bash .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh \
launcher/main.ts \
plugin/subminer/lifecycle.lua \
src/main/runtime/mpv-client-runtime-service.ts
```
Run automatic lane selection:
```bash
bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh
```
Installed-skill form:
```bash
bash "$SUBMINER_VERIFY_SKILL/scripts/verify_subminer_change.sh"
```
Run targeted lanes:
```bash
bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh \
--lane launcher-plugin \
--lane runtime-compat
```
Dry-run to inspect planned commands and artifact layout:
```bash
bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh \
--dry-run \
launcher/main.ts \
src/main.ts
```
## Lane guidance
- `docs`
- For `docs-site/`, `docs/`, and doc-only edits.
- `config`
- For `src/config/` and config-template-sensitive edits.
- `core`
- For general source changes where `typecheck` + `test:fast` is the best cheap signal.
- `launcher-plugin`
- For `launcher/`, `plugin/subminer/`, plugin gating scripts, and wrapper/mpv routing work.
- `runtime-compat`
- For `src/main*`, runtime/composer wiring, mpv/overlay services, window trackers, and dist-sensitive behavior.
- `real-runtime`
- Only after deliberate escalation.
## Real Runtime Escalation
Escalate only when the change claim depends on actual runtime behavior, for example:
- overlay appears, hides, or tracks a real mpv window
- mpv launch flags or pause-until-ready behavior
- plugin/socket/auto-start handshake under a real player
- macOS/window-tracker/focus-sensitive behavior
If the environment cannot support authoritative runtime verification, report the blocker explicitly. Do not silently downgrade a runtime-required claim to a pass.
## Artifact contract
The verifier writes under `.tmp/skill-verification/<timestamp>/`:
- `summary.json`
- `summary.txt`
- `classification.txt`
- `env.txt`
- `lanes.txt`
- `steps.tsv`
- `steps/*.stdout.log`
- `steps/*.stderr.log`
On failure, quote the exact failing command and point at the artifact directory.
When this shim is invoked:
1. Read the canonical plugin-owned skill.
2. Follow the plugin-owned skill as the source of truth.
3. Use the wrapper scripts in this shim directory only for compatibility with existing commands, docs, and backlog history.
4. Do not duplicate workflow changes here; update the plugin-owned skill and scripts instead.
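The shim pattern this file describes (resolve the canonical location, fail loudly if it is missing, then delegate) can be sketched as a small wrapper. The paths below are illustrative stand-ins, and a plain invocation is used where a real shim would `exec`:

```shell
# Minimal shim sketch: delegate to a canonical script if it exists.
# The temp-dir "canonical" script is a stand-in for the plugin-owned one.
shim_dir=$(mktemp -d)
canonical="$shim_dir/canonical.sh"
printf '#!/usr/bin/env bash\necho "canonical ran: $*"\n' >"$canonical"
chmod +x "$canonical"
if [[ ! -x "$canonical" ]]; then
  echo "Missing canonical script: $canonical" >&2
  exit 1
fi
# A real shim would `exec "$canonical" "$@"`; we capture output here instead.
result=$("$canonical" --lane core)
printf '%s\n' "$result"
rm -rf "$shim_dir"
```

Failing with an explicit "Missing canonical script" message keeps a half-migrated checkout from silently doing nothing.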


@@ -1,163 +1,13 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'EOF'
Usage: classify_subminer_diff.sh [path ...]
Emit suggested verification lanes for explicit paths or current local git changes.
Output format:
lane:<name>
flag:<name>
reason:<text>
EOF
}
SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
REPO_ROOT=$(cd "$SCRIPT_DIR/../../../.." && pwd)
TARGET="$REPO_ROOT/plugins/subminer-workflow/skills/subminer-change-verification/scripts/classify_subminer_diff.sh"
has_item() {
local needle=$1
shift || true
local item
for item in "$@"; do
if [[ "$item" == "$needle" ]]; then
return 0
fi
done
return 1
}
add_lane() {
local lane=$1
if ! has_item "$lane" "${LANES[@]:-}"; then
LANES+=("$lane")
fi
}
add_flag() {
local flag=$1
if ! has_item "$flag" "${FLAGS[@]:-}"; then
FLAGS+=("$flag")
fi
}
add_reason() {
REASONS+=("$1")
}
collect_git_paths() {
local top_level
if ! top_level=$(git rev-parse --show-toplevel 2>/dev/null); then
return 0
fi
(
cd "$top_level"
if git rev-parse --verify HEAD >/dev/null 2>&1; then
git diff --name-only --relative HEAD --
git diff --name-only --relative --cached --
else
git diff --name-only --relative --
git diff --name-only --relative --cached --
fi
git ls-files --others --exclude-standard
) | awk 'NF' | sort -u
}
if [[ "${1:-}" == "--help" || "${1:-}" == "-h" ]]; then
usage
exit 0
fi
if [[ ! -x "$TARGET" ]]; then
echo "Missing canonical script: $TARGET" >&2
exit 1
fi
declare -a PATHS=()
declare -a LANES=()
declare -a FLAGS=()
declare -a REASONS=()
if [[ $# -gt 0 ]]; then
while [[ $# -gt 0 ]]; do
PATHS+=("$1")
shift
done
else
while IFS= read -r line; do
[[ -n "$line" ]] && PATHS+=("$line")
done < <(collect_git_paths)
fi
if [[ ${#PATHS[@]} -eq 0 ]]; then
add_lane "core"
add_reason "no changed paths detected -> default to core"
fi
for path in "${PATHS[@]}"; do
specialized=0
case "$path" in
docs-site/*|docs/*|changes/*|README.md)
add_lane "docs"
add_reason "$path -> docs"
specialized=1
;;
esac
case "$path" in
src/config/*|src/generate-config-example.ts|src/verify-config-example.ts|docs-site/public/config.example.jsonc|config.example.jsonc)
add_lane "config"
add_reason "$path -> config"
specialized=1
;;
esac
case "$path" in
launcher/*|plugin/subminer/*|plugin/subminer.conf|scripts/test-plugin-*|scripts/get-mpv-window-*|scripts/configure-plugin-binary-path.mjs)
add_lane "launcher-plugin"
add_reason "$path -> launcher-plugin"
add_flag "real-runtime-candidate"
add_reason "$path -> real-runtime-candidate"
specialized=1
;;
esac
case "$path" in
src/main.ts|src/main-entry.ts|src/preload.ts|src/main/*|src/core/services/mpv*|src/core/services/overlay*|src/renderer/*|src/window-trackers/*|scripts/prepare-build-assets.mjs)
add_lane "runtime-compat"
add_reason "$path -> runtime-compat"
add_flag "real-runtime-candidate"
add_reason "$path -> real-runtime-candidate"
specialized=1
;;
esac
if [[ "$specialized" == "0" ]]; then
case "$path" in
src/*|package.json|tsconfig*.json|scripts/*|Makefile)
add_lane "core"
add_reason "$path -> core"
;;
esac
fi
case "$path" in
package.json|src/main.ts|src/main-entry.ts|src/preload.ts)
add_flag "broad-impact"
add_reason "$path -> broad-impact"
;;
esac
done
if [[ ${#LANES[@]} -eq 0 ]]; then
add_lane "core"
add_reason "no lane-specific matches -> default to core"
fi
for lane in "${LANES[@]}"; do
printf 'lane:%s\n' "$lane"
done
for flag in "${FLAGS[@]}"; do
printf 'flag:%s\n' "$flag"
done
for reason in "${REASONS[@]}"; do
printf 'reason:%s\n' "$reason"
done
exec "$TARGET" "$@"
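The classifier's line-oriented `lane:` / `flag:` / `reason:` contract is easy for callers to consume with standard tools. A minimal sketch, using fabricated sample output rather than a real classifier run:

```shell
# Sketch: split classifier-style output into lanes and flags.
# The sample output below is illustrative, not from a real run.
output='lane:launcher-plugin
lane:core
flag:real-runtime-candidate
reason:launcher/main.ts -> launcher-plugin'

lanes=$(printf '%s\n' "$output" | sed -n 's/^lane://p')
flags=$(printf '%s\n' "$output" | sed -n 's/^flag://p')
printf 'lanes: %s\n' "$(printf '%s' "$lanes" | tr '\n' ' ')"
printf 'flags: %s\n' "$flags"
```

This is the same parsing shape the verifier uses internally when it reads `lane:*` lines from the classification output.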


@@ -1,566 +1,13 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'EOF'
Usage: verify_subminer_change.sh [options] [path ...]
Options:
--lane <name> Force a verification lane. Repeatable.
--artifact-dir <dir> Use an explicit artifact directory.
--allow-real-runtime Allow explicit real-runtime execution.
--allow-real-gui Deprecated alias for --allow-real-runtime.
--dry-run Record planned steps without executing commands.
--help Show this help text.
If no lanes are supplied, the script classifies the provided paths. If no paths are
provided, it classifies the current local git changes.
Authoritative real-runtime verification should be requested with explicit path
arguments instead of relying on inferred local git changes.
EOF
}
timestamp() {
date +%Y%m%d-%H%M%S
}
timestamp_iso() {
date -u +%Y-%m-%dT%H:%M:%SZ
}
generate_session_id() {
local tmp_dir
tmp_dir=$(mktemp -d "${TMPDIR:-/tmp}/subminer-verify-$(timestamp)-XXXXXX")
basename "$tmp_dir"
rmdir "$tmp_dir"
}
has_item() {
local needle=$1
shift || true
local item
for item in "$@"; do
if [[ "$item" == "$needle" ]]; then
return 0
fi
done
return 1
}
normalize_lane_name() {
case "$1" in
real-gui)
printf '%s' "real-runtime"
;;
*)
printf '%s' "$1"
;;
esac
}
add_lane() {
local lane
lane=$(normalize_lane_name "$1")
if ! has_item "$lane" "${SELECTED_LANES[@]:-}"; then
SELECTED_LANES+=("$lane")
fi
}
add_blocker() {
BLOCKERS+=("$1")
BLOCKED=1
}
append_step_record() {
printf '%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n' \
"$1" "$2" "$3" "$4" "$5" "$6" "$7" "$8" >>"$STEPS_TSV"
}
record_env() {
{
printf 'repo_root=%s\n' "$REPO_ROOT"
printf 'session_id=%s\n' "$SESSION_ID"
printf 'artifact_dir=%s\n' "$ARTIFACT_DIR"
printf 'path_selection_mode=%s\n' "$PATH_SELECTION_MODE"
printf 'dry_run=%s\n' "$DRY_RUN"
printf 'allow_real_runtime=%s\n' "$ALLOW_REAL_RUNTIME"
printf 'session_home=%s\n' "$SESSION_HOME"
printf 'session_xdg_config_home=%s\n' "$SESSION_XDG_CONFIG_HOME"
printf 'session_mpv_dir=%s\n' "$SESSION_MPV_DIR"
printf 'session_logs_dir=%s\n' "$SESSION_LOGS_DIR"
printf 'session_mpv_log=%s\n' "$SESSION_MPV_LOG"
printf 'pwd=%s\n' "$(pwd)"
git rev-parse --short HEAD 2>/dev/null | sed 's/^/git_head=/' || true
git status --short 2>/dev/null || true
if [[ ${#PATH_ARGS[@]} -gt 0 ]]; then
printf 'requested_paths=\n'
printf ' %s\n' "${PATH_ARGS[@]}"
fi
} >"$ARTIFACT_DIR/env.txt"
}
run_step() {
local lane=$1
local name=$2
local command=$3
local note=${4:-}
local slug=${name//[^a-zA-Z0-9_-]/-}
local stdout_rel="steps/${slug}.stdout.log"
local stderr_rel="steps/${slug}.stderr.log"
local stdout_path="$ARTIFACT_DIR/$stdout_rel"
local stderr_path="$ARTIFACT_DIR/$stderr_rel"
local status exit_code
COMMANDS_RUN+=("$command")
printf '%s\n' "$command" >"$ARTIFACT_DIR/steps/${slug}.command.txt"
if [[ "$DRY_RUN" == "1" ]]; then
printf '[dry-run] %s\n' "$command" >"$stdout_path"
: >"$stderr_path"
status="dry-run"
exit_code=0
else
if bash -lc "cd \"$REPO_ROOT\" && $command" >"$stdout_path" 2>"$stderr_path"; then
status="passed"
exit_code=0
EXECUTED_REAL_STEPS=1
else
exit_code=$?
status="failed"
FAILED=1
fi
fi
append_step_record "$lane" "$name" "$status" "$exit_code" "$command" "$stdout_rel" "$stderr_rel" "$note"
printf '%s\t%s\t%s\n' "$lane" "$name" "$status"
if [[ "$status" == "failed" ]]; then
FAILURE_STEP="$name"
FAILURE_COMMAND="$command"
FAILURE_STDOUT="$stdout_rel"
FAILURE_STDERR="$stderr_rel"
return "$exit_code"
fi
}
record_nonpassing_step() {
local lane=$1
local name=$2
local status=$3
local note=$4
local slug=${name//[^a-zA-Z0-9_-]/-}
local stdout_rel="steps/${slug}.stdout.log"
local stderr_rel="steps/${slug}.stderr.log"
printf '%s\n' "$note" >"$ARTIFACT_DIR/$stdout_rel"
: >"$ARTIFACT_DIR/$stderr_rel"
append_step_record "$lane" "$name" "$status" "0" "" "$stdout_rel" "$stderr_rel" "$note"
printf '%s\t%s\t%s\n' "$lane" "$name" "$status"
}
record_skipped_step() {
record_nonpassing_step "$1" "$2" "skipped" "$3"
}
record_blocked_step() {
add_blocker "$3"
record_nonpassing_step "$1" "$2" "blocked" "$3"
}
record_failed_step() {
FAILED=1
FAILURE_STEP=$2
FAILURE_COMMAND=${FAILURE_COMMAND:-"(validation)"}
FAILURE_STDOUT="steps/${2//[^a-zA-Z0-9_-]/-}.stdout.log"
FAILURE_STDERR="steps/${2//[^a-zA-Z0-9_-]/-}.stderr.log"
add_blocker "$3"
record_nonpassing_step "$1" "$2" "failed" "$3"
}
find_real_runtime_helper() {
local candidate
for candidate in \
"$SCRIPT_DIR/run_real_runtime_smoke.sh" \
"$SCRIPT_DIR/run_real_mpv_smoke.sh"; do
if [[ -x "$candidate" ]]; then
printf '%s' "$candidate"
return 0
fi
done
return 1
}
acquire_real_runtime_lease() {
local lease_root="$REPO_ROOT/.tmp/skill-verification/locks"
local lease_dir="$lease_root/exclusive-real-runtime"
mkdir -p "$lease_root"
if mkdir "$lease_dir" 2>/dev/null; then
REAL_RUNTIME_LEASE_DIR="$lease_dir"
printf '%s\n' "$SESSION_ID" >"$lease_dir/session_id"
return 0
fi
local owner=""
if [[ -f "$lease_dir/session_id" ]]; then
owner=$(cat "$lease_dir/session_id")
fi
add_blocker "real-runtime lease already held${owner:+ by $owner}"
return 1
}
release_real_runtime_lease() {
if [[ -n "$REAL_RUNTIME_LEASE_DIR" && -d "$REAL_RUNTIME_LEASE_DIR" ]]; then
if [[ -f "$REAL_RUNTIME_LEASE_DIR/session_id" ]]; then
local owner
owner=$(cat "$REAL_RUNTIME_LEASE_DIR/session_id")
if [[ "$owner" != "$SESSION_ID" ]]; then
return 0
fi
fi
rm -rf "$REAL_RUNTIME_LEASE_DIR"
fi
}
compute_final_status() {
if [[ "$FAILED" == "1" ]]; then
FINAL_STATUS="failed"
elif [[ "$BLOCKED" == "1" ]]; then
FINAL_STATUS="blocked"
elif [[ "$EXECUTED_REAL_STEPS" == "1" ]]; then
FINAL_STATUS="passed"
else
FINAL_STATUS="skipped"
fi
}
write_summary_files() {
local lane_lines
lane_lines=$(printf '%s\n' "${SELECTED_LANES[@]}")
printf '%s\n' "$lane_lines" >"$ARTIFACT_DIR/lanes.txt"
printf '%s\n' "${BLOCKERS[@]}" >"$ARTIFACT_DIR/blockers.txt"
printf '%s\n' "${PATH_ARGS[@]}" >"$ARTIFACT_DIR/requested-paths.txt"
ARTIFACT_DIR_ENV="$ARTIFACT_DIR" \
SESSION_ID_ENV="$SESSION_ID" \
FINAL_STATUS_ENV="$FINAL_STATUS" \
PATH_SELECTION_MODE_ENV="$PATH_SELECTION_MODE" \
ALLOW_REAL_RUNTIME_ENV="$ALLOW_REAL_RUNTIME" \
SESSION_HOME_ENV="$SESSION_HOME" \
SESSION_XDG_CONFIG_HOME_ENV="$SESSION_XDG_CONFIG_HOME" \
SESSION_MPV_DIR_ENV="$SESSION_MPV_DIR" \
SESSION_LOGS_DIR_ENV="$SESSION_LOGS_DIR" \
SESSION_MPV_LOG_ENV="$SESSION_MPV_LOG" \
STARTED_AT_ENV="$STARTED_AT" \
FINISHED_AT_ENV="$FINISHED_AT" \
FAILED_ENV="$FAILED" \
FAILURE_COMMAND_ENV="${FAILURE_COMMAND:-}" \
FAILURE_STDOUT_ENV="${FAILURE_STDOUT:-}" \
FAILURE_STDERR_ENV="${FAILURE_STDERR:-}" \
bun -e '
const fs = require("fs");
const path = require("path");
function readLines(filePath) {
if (!fs.existsSync(filePath)) return [];
return fs.readFileSync(filePath, "utf8").split(/\r?\n/).filter(Boolean);
}
const artifactDir = process.env.ARTIFACT_DIR_ENV;
const reportsDir = path.join(artifactDir, "reports");
const lanes = readLines(path.join(artifactDir, "lanes.txt"));
const blockers = readLines(path.join(artifactDir, "blockers.txt"));
const requestedPaths = readLines(path.join(artifactDir, "requested-paths.txt"));
const steps = readLines(path.join(artifactDir, "steps.tsv")).map((line) => {
const [lane, name, status, exitCode, command, stdout, stderr, note] = line.split("\t");
return {
lane,
name,
status,
exitCode: Number(exitCode || 0),
command,
stdout,
stderr,
note,
};
});
const summary = {
sessionId: process.env.SESSION_ID_ENV || "",
artifactDir,
reportsDir,
status: process.env.FINAL_STATUS_ENV || "failed",
selectedLanes: lanes,
failed: process.env.FAILED_ENV === "1",
failure:
process.env.FAILED_ENV === "1"
? {
command: process.env.FAILURE_COMMAND_ENV || "",
stdout: process.env.FAILURE_STDOUT_ENV || "",
stderr: process.env.FAILURE_STDERR_ENV || "",
}
: null,
blockers,
pathSelectionMode: process.env.PATH_SELECTION_MODE_ENV || "git-inferred",
requestedPaths,
allowRealRuntime: process.env.ALLOW_REAL_RUNTIME_ENV === "1",
startedAt: process.env.STARTED_AT_ENV || "",
finishedAt: process.env.FINISHED_AT_ENV || "",
env: {
home: process.env.SESSION_HOME_ENV || "",
xdgConfigHome: process.env.SESSION_XDG_CONFIG_HOME_ENV || "",
mpvDir: process.env.SESSION_MPV_DIR_ENV || "",
logsDir: process.env.SESSION_LOGS_DIR_ENV || "",
mpvLog: process.env.SESSION_MPV_LOG_ENV || "",
},
steps,
};
const summaryJson = JSON.stringify(summary, null, 2) + "\n";
fs.writeFileSync(path.join(artifactDir, "summary.json"), summaryJson);
fs.writeFileSync(path.join(reportsDir, "summary.json"), summaryJson);
const lines = [];
lines.push(`session_id: ${summary.sessionId}`);
lines.push(`artifact_dir: ${artifactDir}`);
lines.push(`selected_lanes: ${lanes.join(", ") || "(none)"}`);
lines.push(`status: ${summary.status}`);
lines.push(`path_selection_mode: ${summary.pathSelectionMode}`);
if (requestedPaths.length > 0) {
lines.push(`requested_paths: ${requestedPaths.join(", ")}`);
}
if (blockers.length > 0) {
lines.push(`blockers: ${blockers.join(" | ")}`);
}
for (const step of steps) {
lines.push(`${step.lane}/${step.name}: ${step.status}`);
if (step.command) lines.push(` command: ${step.command}`);
lines.push(` stdout: ${step.stdout}`);
lines.push(` stderr: ${step.stderr}`);
if (step.note) lines.push(` note: ${step.note}`);
}
if (summary.failed) {
lines.push(`failure_command: ${process.env.FAILURE_COMMAND_ENV || ""}`);
}
const summaryText = lines.join("\n") + "\n";
fs.writeFileSync(path.join(artifactDir, "summary.txt"), summaryText);
fs.writeFileSync(path.join(reportsDir, "summary.txt"), summaryText);
'
}
cleanup() {
release_real_runtime_lease
}
CLASSIFIER_OUTPUT=""
ARTIFACT_DIR=""
ALLOW_REAL_RUNTIME=0
DRY_RUN=0
FAILED=0
BLOCKED=0
EXECUTED_REAL_STEPS=0
FINAL_STATUS=""
FAILURE_STEP=""
FAILURE_COMMAND=""
FAILURE_STDOUT=""
FAILURE_STDERR=""
REAL_RUNTIME_LEASE_DIR=""
STARTED_AT=""
FINISHED_AT=""
declare -a EXPLICIT_LANES=()
declare -a SELECTED_LANES=()
declare -a PATH_ARGS=()
declare -a COMMANDS_RUN=()
declare -a BLOCKERS=()
while [[ $# -gt 0 ]]; do
case "$1" in
--lane)
EXPLICIT_LANES+=("$(normalize_lane_name "$2")")
shift 2
;;
--artifact-dir)
ARTIFACT_DIR=$2
shift 2
;;
--allow-real-runtime|--allow-real-gui)
ALLOW_REAL_RUNTIME=1
shift
;;
--dry-run)
DRY_RUN=1
shift
;;
--help|-h)
usage
exit 0
;;
--)
shift
while [[ $# -gt 0 ]]; do
PATH_ARGS+=("$1")
shift
done
;;
*)
PATH_ARGS+=("$1")
shift
;;
esac
done
SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
SESSION_ID=$(generate_session_id)
PATH_SELECTION_MODE="git-inferred"
if [[ ${#PATH_ARGS[@]} -gt 0 ]]; then
PATH_SELECTION_MODE="explicit"
fi
REPO_ROOT=$(cd "$SCRIPT_DIR/../../../.." && pwd)
TARGET="$REPO_ROOT/plugins/subminer-workflow/skills/subminer-change-verification/scripts/verify_subminer_change.sh"
if [[ ! -x "$TARGET" ]]; then
echo "Missing canonical script: $TARGET" >&2
exit 1
fi
if [[ -z "$ARTIFACT_DIR" ]]; then
mkdir -p "$REPO_ROOT/.tmp/skill-verification"
ARTIFACT_DIR="$REPO_ROOT/.tmp/skill-verification/$SESSION_ID"
fi
SESSION_HOME="$ARTIFACT_DIR/home"
SESSION_XDG_CONFIG_HOME="$ARTIFACT_DIR/xdg"
SESSION_MPV_DIR="$ARTIFACT_DIR/mpv"
SESSION_LOGS_DIR="$ARTIFACT_DIR/logs"
SESSION_MPV_LOG="$SESSION_LOGS_DIR/mpv.log"
mkdir -p "$ARTIFACT_DIR/steps" "$ARTIFACT_DIR/reports" "$SESSION_HOME" "$SESSION_XDG_CONFIG_HOME" "$SESSION_MPV_DIR" "$SESSION_LOGS_DIR"
STEPS_TSV="$ARTIFACT_DIR/steps.tsv"
: >"$STEPS_TSV"
trap cleanup EXIT
STARTED_AT=$(timestamp_iso)
if [[ ${#EXPLICIT_LANES[@]} -gt 0 ]]; then
local_lane=""
for local_lane in "${EXPLICIT_LANES[@]}"; do
add_lane "$local_lane"
done
printf 'reason:explicit lanes supplied\n' >"$ARTIFACT_DIR/classification.txt"
else
if [[ ${#PATH_ARGS[@]} -gt 0 ]]; then
CLASSIFIER_OUTPUT=$(bash "$SCRIPT_DIR/classify_subminer_diff.sh" "${PATH_ARGS[@]}")
else
CLASSIFIER_OUTPUT=$(bash "$SCRIPT_DIR/classify_subminer_diff.sh")
fi
printf '%s\n' "$CLASSIFIER_OUTPUT" >"$ARTIFACT_DIR/classification.txt"
while IFS= read -r line; do
case "$line" in
lane:*)
add_lane "${line#lane:}"
;;
esac
done <<<"$CLASSIFIER_OUTPUT"
fi
record_env
printf 'artifact_dir=%s\n' "$ARTIFACT_DIR"
printf 'selected_lanes=%s\n' "$(IFS=,; echo "${SELECTED_LANES[*]}")"
for lane in "${SELECTED_LANES[@]}"; do
case "$lane" in
docs)
run_step "$lane" "docs-test" "bun run docs:test" || break
[[ "$FAILED" == "1" ]] && break
run_step "$lane" "docs-build" "bun run docs:build" || break
;;
config)
run_step "$lane" "test-config" "bun run test:config" || break
;;
core)
run_step "$lane" "typecheck" "bun run typecheck" || break
[[ "$FAILED" == "1" ]] && break
run_step "$lane" "test-fast" "bun run test:fast" || break
;;
launcher-plugin)
run_step "$lane" "launcher-smoke-src" "bun run test:launcher:smoke:src" || break
[[ "$FAILED" == "1" ]] && break
run_step "$lane" "plugin-src" "bun run test:plugin:src" || break
;;
runtime-compat)
run_step "$lane" "build" "bun run build" || break
[[ "$FAILED" == "1" ]] && break
run_step "$lane" "test-runtime-compat" "bun run test:runtime:compat" || break
[[ "$FAILED" == "1" ]] && break
run_step "$lane" "test-smoke-dist" "bun run test:smoke:dist" || break
;;
real-runtime)
if [[ "$PATH_SELECTION_MODE" != "explicit" ]]; then
record_blocked_step \
"$lane" \
"real-runtime-guard" \
"real-runtime lane requires explicit paths; inferred local git changes are non-authoritative"
break
fi
if [[ "$ALLOW_REAL_RUNTIME" != "1" ]]; then
record_blocked_step \
"$lane" \
"real-runtime-guard" \
"real-runtime lane requested but --allow-real-runtime was not supplied"
break
fi
if ! acquire_real_runtime_lease; then
record_blocked_step \
"$lane" \
"real-runtime-lease" \
"real-runtime lease already held; rerun after the active runtime verification finishes"
break
fi
if ! REAL_RUNTIME_HELPER=$(find_real_runtime_helper); then
record_blocked_step \
"$lane" \
"real-runtime-helper" \
"real-runtime helper not implemented yet"
break
fi
printf -v REAL_RUNTIME_COMMAND \
'SESSION_ID=%q HOME=%q XDG_CONFIG_HOME=%q SUBMINER_MPV_LOG=%q bash %q' \
"$SESSION_ID" \
"$SESSION_HOME" \
"$SESSION_XDG_CONFIG_HOME" \
"$SESSION_MPV_LOG" \
"$REAL_RUNTIME_HELPER"
run_step "$lane" "real-runtime-smoke" "$REAL_RUNTIME_COMMAND" || break
;;
*)
record_failed_step "$lane" "lane-validation" "unknown lane: $lane"
break
;;
esac
if [[ "$FAILED" == "1" || "$BLOCKED" == "1" ]]; then
break
fi
done
FINISHED_AT=$(timestamp_iso)
compute_final_status
write_summary_files
printf 'status=%s\n' "$FINAL_STATUS"
printf 'artifact_dir=%s\n' "$ARTIFACT_DIR"
case "$FINAL_STATUS" in
failed)
printf 'result=failed\n'
printf 'failure_command=%s\n' "$FAILURE_COMMAND"
exit 1
;;
blocked)
printf 'result=blocked\n'
exit 2
;;
*)
printf 'result=ok\n'
exit 0
;;
esac
exec "$TARGET" "$@"
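The `steps.tsv` artifact written above holds eight tab-separated fields per step (lane, name, status, exit code, command, stdout path, stderr path, note), so it can be summarized without the JS summary writer. A hedged sketch using fabricated sample rows:

```shell
# Summarize step statuses from a steps.tsv-style file.
# The two sample rows below are illustrative, not real verifier output.
steps=$(mktemp)
printf 'core\ttypecheck\tpassed\t0\tbun run typecheck\tsteps/typecheck.stdout.log\tsteps/typecheck.stderr.log\t\n' >"$steps"
printf 'core\ttest-fast\tfailed\t1\tbun run test:fast\tsteps/test-fast.stdout.log\tsteps/test-fast.stderr.log\t\n' >>"$steps"
# Field 3 is the step status; count occurrences of each status value.
summary=$(awk -F'\t' '{count[$3]++} END {for (s in count) printf "%s=%d\n", s, count[s]}' "$steps" | sort)
printf '%s\n' "$summary"
rm -f "$steps"
```

This mirrors what `summary.txt` reports per step, collapsed into one count per status.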


@@ -1,146 +1,18 @@
---
name: "subminer-scrum-master"
description: "Use in the SubMiner repo when a request should be turned into planned work and driven through execution. Assesses whether backlog tracking is warranted, creates or updates tasks when needed, records a plan, dispatches one or more subagents, and requires verification before handoff."
name: 'subminer-scrum-master'
description: 'Compatibility shim. Canonical SubMiner scrum-master workflow now lives in the repo-local subminer-workflow plugin.'
---
# SubMiner Scrum Master
# Compatibility Shim
Own workflow, not code by default.
Canonical source:
Use this skill when the user gives a feature request, bug report, issue, refactor, or implementation ask and the agent should manage intake, planning, backlog hygiene, worker dispatch, and verification through completion.
- `plugins/subminer-workflow/skills/subminer-scrum-master/SKILL.md`
## Core Rules
## Core Rules
2. If backlog is needed, search first. Update existing work when it clearly matches.
3. If backlog is not needed, keep the process light. Do not invent ticket ceremony.
4. Record a plan before dispatching coding work.
5. Use parent + subtasks for multi-part work when backlog is used.
6. Dispatch conservatively. Parallelize only disjoint write scopes.
7. Require verification before handoff, typically via `subminer-change-verification`.
8. Report backlog actions, dispatched workers, verification, blockers, and remaining risks.
When this shim is invoked:
1. Read the canonical plugin-owned skill.
2. Follow the plugin-owned skill as the source of truth.
3. Do not duplicate workflow changes here; update the plugin-owned skill instead.
## Backlog Decision
Skip backlog when the request is:
- question only
- obvious mechanical edit
- tiny isolated change with no real planning
Use backlog when the work:
- needs planning or scope decisions
- spans multiple phases or subsystems
- is likely to need subagent dispatch
- should remain traceable for handoff/resume
If backlog is used:
- search existing tasks first
- create/update a standalone task for one focused deliverable
- create/update a parent task plus subtasks for multi-part work
- record the implementation plan in the task before implementation begins
## Intake Workflow
1. Parse the request.
Classify it as question, mechanical edit, bugfix, feature, refactor, investigation, or follow-up.
2. Decide whether backlog is needed.
3. If backlog is needed:
- search first
- update existing task if clearly relevant
- otherwise create the right structure
- write the implementation plan before dispatch
4. If backlog is skipped:
- write a short working plan in-thread
- proceed without fake ticketing
5. Choose execution mode:
- no subagents for trivial work
- one worker for focused work
- parallel workers only for disjoint scopes
6. Run verification before handoff.
## Dispatch Rules
The scrum master orchestrates. Workers implement.
- Do not become the default implementer unless delegation is unnecessary.
- Do not parallelize overlapping files or tightly coupled runtime work.
- Give every worker explicit ownership of files/modules.
- Tell every worker other agents may be active and they must not revert unrelated edits.
- Require each worker to report:
- changed files
- tests run
- blockers
Use worker agents for implementation and explorer agents only for bounded codebase questions.
## Verification
Every nontrivial code task gets verification.
Preferred flow:
1. use `subminer-change-verification`
2. start with the cheapest sufficient lane
3. escalate only when needed
4. if worker verification is sufficient, accept it or run one final consolidating pass
Never hand off nontrivial work without stating what was verified and what was skipped.
## Pre-Handoff Policy Checks (Required)
Before handoff, always ask and answer both of these questions explicitly:
1. **Docs update required?**
2. **Changelog fragment required?**
Rules:
- Do not assume silence implies "no." Record an explicit yes/no decision for each item.
- If the answer is yes, either complete the update or report the blocker before handoff.
- Include the final answers in the handoff summary even when both answers are "no."
## Failure / Scope Handling
- If a worker hits ambiguity, pause and ask the user.
- If verification fails, either:
- send the worker back with exact failure context, or
- fix it directly if it is tiny and clearly in scope
- If new scope appears, revisit backlog structure before silently expanding work.
## Representative Flows
### Trivial no-ticket work
- decide backlog is unnecessary
- keep a short plan
- implement directly or with one worker if helpful
- run targeted verification
- report outcome concisely
### Single-task implementation
- search/create/update one task
- record plan
- dispatch one worker
- integrate
- verify
- update task and report outcome
### Parent + subtasks execution
- search/create/update parent task
- create subtasks for distinct deliverables/phases
- record sequencing in the plan
- dispatch workers only where scopes are disjoint
- integrate
- run consolidated verification
- update task state and report outcome
## Output Expectations
At the end, report:
- whether backlog was used and what changed
- which workers were dispatched and what they owned
- what verification ran
- explicit answers to:
- docs update required?
- changelog fragment required?
- blockers, skips, and risks
This shim exists so existing repo references and prompts keep resolving during the migration to the repo-local plugin workflow.


@@ -83,7 +83,6 @@ This project uses Backlog.md MCP for all task and project management activities.
- **When to read it**: BEFORE creating tasks, or when you're unsure whether to track work
These guides cover:
- Decision framework for when to create tasks
- Search-first workflow to avoid duplicates
- Links to detailed guides for task creation, execution, and finalization

Backlog.md (new file, 194 lines)

@@ -0,0 +1,194 @@
# Backlog
Purpose: lightweight repo-local task board. Seeded with current testing / coverage work.
Status keys:
- `todo`: not started
- `doing`: in progress
- `blocked`: waiting
- `done`: shipped
Priority keys:
- `P0`: urgent / release-risk
- `P1`: high value
- `P2`: useful cleanup
- `P3`: nice-to-have
## Active
None.
## Ready
| ID | Pri | Status | Area | Title |
| ------ | --- | ------ | ----------------- | ---------------------------------------------------------------- |
| SM-001 | P1 | todo | launcher | Add tests for CLI parser and args normalizer |
| SM-002 | P1 | todo | immersion-tracker | Backfill tests for uncovered query exports |
| SM-003 | P1 | todo | anki | Add focused field-grouping service + merge edge-case tests |
| SM-004 | P2 | todo | tests | Extract shared test utils for deps factories and polling helpers |
| SM-005 | P2 | todo | tests | Strengthen weak assertions in app-ready and IPC tests |
| SM-006 | P2 | todo | tests | Break up monolithic youtube-flow and subtitle-sidebar tests |
| SM-007 | P2 | todo | anilist | Add tests for AniList rate limiter |
| SM-008 | P3 | todo | subtitles | Add core subtitle-position persistence/path tests |
| SM-009 | P3 | todo | tokenizer | Add tests for JLPT token filter |
| SM-010 | P1 | todo | immersion-tracker | Refactor storage + immersion-tracker service into focused modules |
## Icebox
None.
## Ticket Details
### SM-001
Title: Add tests for CLI parser and args normalizer
Priority: P1
Status: todo
Scope:
- `launcher/config/cli-parser-builder.ts`
- `launcher/config/args-normalizer.ts`
Acceptance:
- root options parsing covered
- subcommand routing covered
- invalid action / invalid log level / invalid backend cases covered
- target classification covered: file, directory, URL, invalid
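A minimal sketch of the target-classification seam this ticket wants covered, assuming an injectable filesystem probe so tests need no real files; every name here is illustrative, not the actual `launcher/config` API:

```typescript
// Hypothetical shape only; the real logic lives in launcher/config/args-normalizer.ts.
type TargetKind = "file" | "directory" | "url" | "invalid";

interface FsProbe {
  isFile(path: string): boolean;
  isDirectory(path: string): boolean;
}

function classifyTarget(raw: string, probe: FsProbe): TargetKind {
  if (raw.trim() === "") return "invalid";
  try {
    const u = new URL(raw);
    if (u.protocol === "http:" || u.protocol === "https:") return "url";
  } catch {
    // not an absolute URL; fall through to the filesystem probe
  }
  if (probe.isFile(raw)) return "file";
  if (probe.isDirectory(raw)) return "directory";
  return "invalid";
}
```

With the probe injected, each acceptance case above becomes a one-line assertion against a fake probe instead of a temp-directory fixture.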
### SM-002
Title: Backfill tests for uncovered query exports
Priority: P1
Status: todo
Scope:
- `src/core/services/immersion-tracker/query-*.ts`
Targets:
- headword helpers
- anime/media detail helpers not covered by existing wrapper tests
- lexical detail / appearance helpers
- maintenance helpers beyond `deleteSession` and `upsertCoverArt`
Acceptance:
- every exported query helper either directly tested or explicitly justified as covered elsewhere
- at least one focused regression per complex SQL branch / aggregation branch
### SM-003
Title: Add focused field-grouping service + merge edge-case tests
Priority: P1
Status: todo
Scope:
- `src/anki-integration/field-grouping.ts`
- `src/anki-integration/field-grouping-merge.ts`
Acceptance:
- auto/manual/disabled flow branches covered
- duplicate-card preview failure path covered
- merge edge cases covered: empty fields, generated media fallback, strict grouped spans, audio synchronization
### SM-004
Title: Extract shared test utils for deps factories and polling helpers
Priority: P2
Status: todo
Scope:
- common `makeDeps` / `createDeps` helpers
- common `waitForCondition`
Acceptance:
- shared helper module added
- at least 3 duplicated polling helpers removed
- at least 5 duplicated deps factories consolidated or clearly prepared for follow-up migration
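One possible shape for the shared `waitForCondition` helper, written here from scratch as a sketch; the signature and defaults are assumptions, not an existing repo API:

```typescript
// Hypothetical shared polling helper of the kind SM-004 wants consolidated.
async function waitForCondition(
  predicate: () => boolean | Promise<boolean>,
  opts: { timeoutMs?: number; intervalMs?: number } = {},
): Promise<void> {
  const { timeoutMs = 2000, intervalMs = 10 } = opts;
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    if (await predicate()) return;
    if (Date.now() > deadline) throw new Error("waitForCondition timed out");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Accepting an async predicate keeps one helper usable for both IPC-state and database-state polling, which is what makes the consolidation worthwhile.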
### SM-005
Title: Strengthen weak assertions in app-ready and IPC tests
Priority: P2
Status: todo
Scope:
- `src/core/services/app-ready.test.ts`
- `src/core/services/ipc.test.ts`
Acceptance:
- replace broad `assert.ok(...)` presence checks with exact value/order assertions where the expected value is known
- handler registration tests assert channel-specific behavior, not only existence
### SM-006
Title: Break up monolithic youtube-flow and subtitle-sidebar tests
Priority: P2
Status: todo
Scope:
- `src/main/runtime/youtube-flow.test.ts`
- `src/renderer/modals/subtitle-sidebar.test.ts`
Acceptance:
- reduce single-test breadth
- split largest tests into focused cases by behavior
- keep semantics unchanged
### SM-007
Title: Add tests for AniList rate limiter
Priority: P2
Status: todo
Scope:
- `src/core/services/anilist/rate-limiter.ts`
Acceptance:
- capacity-window wait behavior covered
- `x-ratelimit-remaining` + reset handling covered
- `retry-after` handling covered
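The header handling above is easiest to test when isolated in a pure function along these lines; the function name is hypothetical and the header semantics follow common rate-limit conventions (seconds for `retry-after`, epoch seconds for the reset header), which may differ from the real limiter:

```typescript
// Illustrative sketch, not the actual src/core/services/anilist/rate-limiter.ts logic.
function computeWaitMs(headers: Record<string, string>, nowMs: number): number {
  const retryAfter = headers["retry-after"];
  if (retryAfter !== undefined) {
    const seconds = Number(retryAfter);
    if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000);
  }
  const remaining = Number(headers["x-ratelimit-remaining"] ?? "1");
  const resetEpochSec = Number(headers["x-ratelimit-reset"] ?? "0");
  if (remaining <= 0 && resetEpochSec > 0) {
    return Math.max(0, resetEpochSec * 1000 - nowMs);
  }
  return 0;
}
```

Passing `nowMs` explicitly (rather than calling `Date.now()` inside) keeps the capacity-window cases deterministic in tests.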
### SM-008
Title: Add core subtitle-position persistence/path tests
Priority: P3
Status: todo
Scope:
- `src/core/services/subtitle-position.ts`
Acceptance:
- save/load persistence covered
- fallback behavior covered
- path normalization behavior covered for URL vs local target
### SM-009
Title: Add tests for JLPT token filter
Priority: P3
Status: todo
Scope:
- `src/core/services/jlpt-token-filter.ts`
Acceptance:
- excluded term membership covered
- ignored POS1 membership covered
- exported list / entry consistency covered
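A hedged sketch of the membership checks this ticket describes; the set contents below are placeholders, not the real exclusion or POS lists in `src/core/services/jlpt-token-filter.ts`:

```typescript
// Hypothetical entries for illustration only.
const EXCLUDED_TERMS = new Set(["これ", "それ", "する"]);
const IGNORED_POS1 = new Set(["助詞", "助動詞", "記号"]);

function shouldKeepToken(surface: string, pos1: string): boolean {
  return !EXCLUDED_TERMS.has(surface) && !IGNORED_POS1.has(pos1);
}
```

Tests can then assert membership on each set directly and cross-check that every exported entry actually affects `shouldKeepToken`, which covers the list/entry consistency criterion.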
### SM-010
Title: Refactor storage + immersion-tracker service into focused layers without API changes
Priority: P1
Status: todo
Scope:
- `src/core/database/storage/storage.ts`
- `src/core/database/storage/schema.ts`
- `src/core/database/storage/cover-blob.ts`
- `src/core/database/storage/records.ts`
- `src/core/database/storage/write-path.ts`
- `src/core/services/immersion-tracker/youtube.ts`
- `src/core/services/immersion-tracker/youtube-manager.ts`
- `src/core/services/immersion-tracker/write-queue.ts`
- `src/core/services/immersion-tracker/immersion-tracker-service.ts`
Acceptance:
- behavior and public API remain unchanged for all callers
- `storage.ts` responsibilities split into DDL/migrations, cover blob helpers, record CRUD, and write-path execution
- `immersion-tracker-service.ts` reduces to session state, media change orchestration, query proxies, and lifecycle
- YouTube code split into pure utilities, a stateful manager (`YouTubeManager`), and a dedicated write queue (`WriteQueue`)
- removed `storage.ts` is replaced with focused modules and updated imports
- no API or migration regressions; existing tests for trackers/storage coverage remain green or receive focused updates

View File

@@ -1,11 +1,11 @@
project_name: 'SubMiner'
default_status: 'To Do'
statuses: ['To Do', 'In Progress', 'Done']
project_name: "SubMiner"
default_status: "To Do"
statuses: ["To Do", "In Progress", "Done"]
labels: []
definition_of_done: []
date_format: yyyy-mm-dd
max_column_width: 20
default_editor: 'nvim'
default_editor: "nvim"
auto_open_browser: false
default_port: 6420
remote_operations: true
@@ -13,4 +13,4 @@ auto_commit: false
bypass_git_hooks: false
check_active_branches: true
active_branch_days: 30
task_prefix: 'task'
task_prefix: "task"

View File

@@ -0,0 +1,8 @@
---
id: m-2
title: 'Mining Workflow Upgrades'
---
## Description
Future user-facing workflow improvements that directly improve discoverability, previewability, and mining control without depending on speculative platform integrations like OCR, marketplace infrastructure, or cloud sync.

View File

@@ -0,0 +1,59 @@
---
id: TASK-238
title: 'Codebase health follow-up: decompose remaining oversized runtime surfaces'
status: To Do
assignee: []
created_date: '2026-03-26 20:49'
labels:
- tech-debt
- maintainability
- runtime
milestone: m-0
dependencies: []
references:
- src/main.ts
- src/types.ts
- src/main/character-dictionary-runtime.ts
- src/core/services/immersion-tracker/query.ts
- backlog/tasks/task-87 - Codebase-health-harden-verification-and-retire-dead-architecture-identified-in-the-March-2026-review.md
- backlog/completed/task-87.4 - Runtime-composition-root-remove-dead-symbols-and-tighten-module-boundaries-in-src-main.ts.md
- backlog/completed/task-87.6 - Anki-integration-maintainability-continue-decomposing-the-oversized-orchestration-layer.md
- backlog/tasks/task-238.6 - Extract-remaining-inline-runtime-logic-and-composer-gaps-from-src-main.ts.md
- backlog/tasks/task-238.7 - Split-src-main.ts-into-boot-phase-services-runtimes-and-handlers.md
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Follow up the March 2026 codebase-health work with a narrower pass over the biggest remaining production hotspots. The latest review correctly flags `src/main.ts` and `src/types.ts` as sources of maintainability pressure, but it also misses the next genuinely large surfaces that will keep slowing future work: `src/main/character-dictionary-runtime.ts` and `src/core/services/immersion-tracker/query.ts`. This parent task should track focused decomposition work that preserves behavior, avoids redoing already-completed dead-architecture cleanup, and keeps each slice small enough for isolated implementation.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 Child tasks exist for each focused cleanup slice instead of one broad “split the monoliths” effort.
- [ ] #2 The parent task records sequencing so agents do not overlap on `src/main.ts` and other shared surfaces.
- [ ] #3 The selected follow-up tasks target still-live pressure points, not already-completed work like TASK-87.4, TASK-87.5, or TASK-87.6.
- [ ] #4 Completion of the child tasks leaves runtime wiring, shared types, character-dictionary orchestration, and immersion-tracker queries materially easier to review and extend.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
Recommended sequencing:
1. Start TASK-238.3 first. A compatibility-first type split reduces churn risk for the later runtime/query refactors.
2. Run TASK-238.4 and TASK-238.5 in parallel after TASK-238.3 if desired; they touch different domains.
3. Run TASK-238.1 after or alongside the domain refactors, but keep it focused on window/bootstrap composition only.
4. Run TASK-238.2 after TASK-238.1 because both touch `src/main.ts` and the CLI/headless flow should build on the cleaner composition root.
5. Run TASK-238.6 after the current composer/setup-window-factory work lands, so the remaining inline runtime logic and composer gaps are extracted from the already-cleaned composition root.
6. Run TASK-238.7 only after TASK-238.6 confirms the remaining entrypoint surface still justifies a boot-phase split; then move the boot wiring into dedicated service/runtime/handler modules.
Shared guardrails:
- Do not reopen already-completed dead-module cleanup from TASK-87.5 unless new evidence appears.
- Keep `src/types.ts` migration compatibility-first; avoid a repo-wide import churn bomb.
- Prefer extracting named runtime/domain modules over moving code into new giant helper files.
- Verify each slice with the cheapest sufficient lane, then escalate when a task crosses runtime/build boundaries.
<!-- SECTION:PLAN:END -->

View File

@@ -0,0 +1,45 @@
---
id: TASK-238.1
title: Extract main-window and overlay-window composition from src/main.ts
status: To Do
assignee: []
created_date: '2026-03-26 20:49'
labels:
- tech-debt
- runtime
- windows
- maintainability
milestone: m-0
dependencies: []
references:
- src/main.ts
- src/main/runtime/composers
- src/main/runtime/overlay-runtime-bootstrap.ts
- docs/architecture/README.md
parent_task_id: TASK-238
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
`src/main.ts` still directly owns several `BrowserWindow` construction and window-lifecycle paths, including overlay-adjacent windows and setup flows. That keeps the composition root far larger than intended and makes window behavior hard to test in isolation. Extract the remaining window/bootstrap composition into named runtime modules so `src/main.ts` mostly wires dependencies and app lifecycle events together.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 At least the main overlay window path plus two other window/setup flows are extracted from direct `BrowserWindow` construction inside `src/main.ts`.
- [ ] #2 The extracted modules expose narrow factory/handler APIs that can be tested without booting the whole app.
- [ ] #3 `src/main.ts` becomes materially smaller and easier to scan, with window creation concentrated behind well-named runtime surfaces.
- [ ] #4 Relevant runtime/window tests pass, and new tests are added for any newly isolated window composition helpers.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Map the remaining direct `BrowserWindow` creation sites in `src/main.ts` and group them by shared lifecycle concerns.
2. Extract coherent modules for construction, preload/path resolution, and open/focus/reuse behavior rather than moving raw option objects wholesale.
3. Update the composition root to consume the new modules and keep side effects/app state ownership explicit.
4. Verify with focused runtime/window tests plus `bun run typecheck`.
<!-- SECTION:PLAN:END -->

View File

@@ -0,0 +1,46 @@
---
id: TASK-238.2
title: Extract CLI and headless command wiring from src/main.ts
status: To Do
assignee: []
created_date: '2026-03-26 20:49'
labels:
- tech-debt
- cli
- runtime
- maintainability
milestone: m-0
dependencies:
- TASK-238.1
references:
- src/main.ts
- src/main/cli-runtime.ts
- src/cli/args.ts
- launcher
parent_task_id: TASK-238
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
`src/main.ts` still owns the headless-initial-command flow, argument handling, and a large amount of CLI/runtime bridging. That makes non-window startup paths difficult to reason about and keeps CLI behavior coupled to unrelated desktop boot logic. Extract the remaining CLI/headless orchestration into dedicated runtime services so the main entrypoint only decides which startup path to invoke.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 CLI parsing, initial-command dispatch, and headless command execution no longer live as large inline flows in `src/main.ts`.
- [ ] #2 The new modules make the desktop startup path and headless startup path visibly separate and easier to test.
- [ ] #3 Existing CLI behaviors remain unchanged, including help output and startup gating behavior.
- [ ] #4 Targeted CLI/runtime tests cover the extracted path, and `bun run typecheck` passes.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Map the current `parseArgs` / `handleInitialArgs` / `runHeadlessInitialCommand` / `handleCliCommand` flow in `src/main.ts`.
2. Extract a small startup-path selector plus dedicated runtime services for headless execution and interactive startup dispatch.
3. Keep Electron app ownership in `src/main.ts`; move only CLI orchestration and context assembly.
4. Verify with CLI-focused tests plus `bun run typecheck`.
<!-- SECTION:PLAN:END -->
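The startup-path selector in step 2 might look like this minimal sketch; the flag and field names are assumptions, not the actual `src/cli/args.ts` surface:

```typescript
// Hypothetical selector shape for separating desktop and headless startup.
type StartupPath = "headless" | "desktop";

interface ParsedArgs {
  command?: string;   // a one-shot CLI subcommand, if given
  headless?: boolean; // an explicit headless flag, if supported
}

function selectStartupPath(args: ParsedArgs): StartupPath {
  return args.headless || args.command !== undefined ? "headless" : "desktop";
}
```

Keeping the selector this small lets `src/main.ts` own only the decision, while the two runtime services own everything downstream of it.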

View File

@@ -0,0 +1,59 @@
---
id: TASK-238.3
title: Introduce domain type entrypoints and shrink src/types.ts import surface
status: Done
assignee: []
created_date: '2026-03-26 20:49'
updated_date: '2026-03-27 00:14'
labels:
- tech-debt
- types
- maintainability
milestone: m-0
dependencies: []
references:
- src/types.ts
- src/shared/ipc/contracts.ts
- src/config/service.ts
- docs/architecture/README.md
parent_task_id: TASK-238
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
`src/types.ts` has become the repo-wide dumping ground for unrelated domains. Splitting it is still worthwhile, but a big-bang move would create noisy churn across a large import graph. Introduce domain entrypoints under `src/types/` and migrate the highest-churn imports first while leaving `src/types.ts` as a compatibility barrel until the new structure is proven.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Domain-focused type modules exist for the main clusters currently mixed together in `src/types.ts` (for example Anki, config/runtime, subtitle/media, and integration/runtime-option types).
- [x] #2 `src/types.ts` becomes a thinner compatibility layer or barrel instead of the sole source of truth for every shared type.
- [x] #3 A meaningful set of imports is migrated to the new entrypoints without breaking the maintained typecheck/test lanes.
- [x] #4 The new structure is documented well enough that contributors can tell where new shared types should live.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Inventory the main type clusters in `src/types.ts` and choose stable domain seams.
2. Create `src/types/` modules and re-export through `src/types.ts` so the migration can be incremental.
3. Migrate the highest-value import sites first, especially config/runtime and Anki-heavy surfaces.
4. Verify with `bun run typecheck` and the cheapest test lane covering touched domains.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Implemented domain entrypoints under `src/types/` and kept `src/types.ts` as a compatibility barrel (`src/types/anki.ts`, `src/types/config.ts`, `src/types/integrations.ts`, `src/types/runtime.ts`, `src/types/runtime-options.ts`, `src/types/subtitle.ts`). Migrated the highest-value import surfaces away from `src/types.ts` in config/runtime/Anki-related modules and shared IPC surfaces. Added type-level regression coverage in `src/types-domain-entrypoints.type-test.ts`.
Aligned docs in `docs/architecture/README.md`, `docs/architecture/domains.md`, and `docs-site/changelog.md` to support the change and clear docs-site sync mismatch.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Task completed with commit `5dd8bb7f` (`refactor: split shared type entrypoints`). The refactor introduced domain type entrypoints, shrank the `src/types.ts` import surface, updated import consumers, and recorded verification evidence in the local verifier artifacts. Backlog now tracks TASK-238.3 as done.
<!-- SECTION:FINAL_SUMMARY:END -->

View File

@@ -0,0 +1,58 @@
---
id: TASK-238.4
title: Decompose character dictionary runtime into fetch, build, and cache modules
status: Done
updated_date: '2026-03-27 00:20'
assignee: []
created_date: '2026-03-26 20:49'
labels:
- tech-debt
- runtime
- anilist
- maintainability
milestone: m-0
dependencies:
- TASK-238.3
references:
- src/main/character-dictionary-runtime.ts
- src/main/runtime/character-dictionary-auto-sync.ts
- docs/architecture/README.md
parent_task_id: TASK-238
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
`src/main/character-dictionary-runtime.ts` is now one of the largest live production files in the repo and combines AniList transport, name normalization, snapshot/image shaping, cache management, and zip packaging. That file will keep growing as character-dictionary features evolve. Split it into focused modules so the runtime surface becomes orchestration instead of a catch-all implementation blob.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 AniList fetch/parsing logic, dictionary-entry building, and snapshot/cache/zip persistence no longer live in one giant file.
- [x] #2 The public runtime API stays behavior-compatible for current callers.
- [x] #3 The top-level runtime/orchestration file becomes materially smaller and easier to review.
- [x] #4 Existing character-dictionary tests still pass, and new focused tests cover the extracted modules where needed.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Identify the dominant concern boundaries inside `src/main/character-dictionary-runtime.ts`.
2. Extract fetch/transform/persist modules with narrow interfaces, keeping data-shape ownership explicit.
3. Leave the exported runtime API stable for current main-process callers.
4. Verify with the maintained character-dictionary/runtime test lane plus `bun run typecheck`.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Split `src/main/character-dictionary-runtime.ts` into focused modules under `src/main/character-dictionary-runtime/` (`fetch`, `build`, `cache`, plus helper modules). The orchestrator stayed as a compatibility shim/API surface with delegated module functions. Added focused tests for cache snapshot semantics and term rebuild + collapsible-open-state behavior in the new modules. Updated runtime architecture docs in `docs/architecture/domains.md` and `docs-site/architecture.md`.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Task completed with commit `5b06579e` (`refactor: split character dictionary runtime modules`). Runtime refactor landed with regression coverage and verification including runtime-compat lanes, and all changed behavior was validated as API-compatible for callers.
<!-- SECTION:FINAL_SUMMARY:END -->

View File

@@ -0,0 +1,61 @@
---
id: TASK-238.5
title: Split immersion tracker query layer into focused read-model modules
status: Done
assignee:
- codex
created_date: '2026-03-26 20:49'
updated_date: '2026-03-27 00:00'
labels:
- tech-debt
- stats
- database
- maintainability
milestone: m-0
dependencies:
- TASK-238.3
references:
- src/core/services/immersion-tracker/query.ts
- src/core/services/stats-server.ts
- src/core/services/immersion-tracker-service.ts
parent_task_id: TASK-238
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
`src/core/services/immersion-tracker/query.ts` has grown into a large mixed read/write/maintenance surface that owns library queries, timeline/detail queries, cleanup helpers, and rollup rebuild hooks. That size makes stats work harder to change safely. Split the query layer into focused read-model and maintenance modules so future stats/dashboard work does not keep landing in one 2500-line file.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 Query responsibilities are grouped into focused modules such as library/session detail, vocabulary/kanji detail, and maintenance/cleanup helpers.
- [x] #2 The stats server and immersion tracker service depend on stable exported query surfaces instead of one monolithic file.
- [x] #3 The refactor preserves current SQL behavior and existing statistics outputs.
- [x] #4 Existing stats/immersion tests still pass, with added focused coverage where extraction creates new seams.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Inventory the major query clusters and choose modules that match current caller boundaries.
2. Extract without changing schema or response contracts unless a narrow cleanup is required for compile/test health.
3. Keep SQL ownership close to the domain module that consumes it; avoid a giant `queries/` dump with no structure.
4. Verify with the maintained stats/immersion test lane plus `bun run typecheck`.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
Split the monolithic query surface into focused read-model modules for sessions, trends, lexical data, library lookups, and maintenance helpers. Updated the service and test imports to use the new module boundaries.
Verification: `bun run typecheck` passed. Focused query and stats-server tests passed, including the `stats-server.test.ts` coverage around the new Bun fallback path.
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Extracted the immersion-tracker query layer into smaller read-model modules and kept the compatibility barrel in place so existing call sites can transition cleanly. Added focused coverage and verified the refactor with typecheck plus targeted tests.
<!-- SECTION:FINAL_SUMMARY:END -->

View File

@@ -0,0 +1,60 @@
---
id: TASK-238.6
title: Extract remaining inline runtime logic and composer gaps from src/main.ts
status: To Do
assignee: []
created_date: '2026-03-27 00:00'
labels:
- tech-debt
- runtime
- maintainability
- composers
milestone: m-0
dependencies:
- TASK-238.1
- TASK-238.2
references:
- src/main.ts
- src/main/runtime/youtube-flow.ts
- src/main/runtime/autoplay-ready-gate.ts
- src/main/runtime/subtitle-prefetch-init.ts
- src/main/runtime/discord-presence-runtime.ts
- src/main/overlay-modal-state.ts
- src/main/runtime/composers
parent_task_id: TASK-238
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
`src/main.ts` still mixes two concerns: pure dependency wiring and inline runtime logic. The earlier composer extractions reduce the wiring burden, but the file still owns several substantial behavior blocks and a few large inline dependency groupings. This task tracks the next maintainability pass: move the remaining runtime logic into the appropriate domain modules, add missing composer wrappers for the biggest grouped handler blocks, and reassess whether a boot-phase split is still necessary after the entrypoint becomes mostly wiring.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 `runYoutubePlaybackFlow`, `maybeSignalPluginAutoplayReady`, `refreshSubtitlePrefetchFromActiveTrack`, `publishDiscordPresence`, and `handleModalInputStateChange` no longer live as substantial inline logic in `src/main.ts`.
- [ ] #2 The large subtitle/prefetch, stats startup, and overlay visibility dependency groupings are wrapped behind named composer helpers instead of remaining inline in `src/main.ts`.
- [ ] #3 `src/main.ts` reads primarily as a boot and lifecycle coordinator, with domain behavior concentrated in named runtime modules.
- [ ] #4 Focused tests cover the extracted behavior or the new composer surfaces.
- [ ] #5 The task records whether the remaining size still justifies a boot-phase split or whether that follow-up can wait.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
Recommended sequence:
1. Let the current composer and `setup-window-factory` work land first so this slice starts from a stable wiring baseline.
2. Extract the five inline runtime functions into their natural domain modules or direct equivalents.
3. Add or extend composer helpers for subtitle/prefetch, stats startup, and overlay visibility handler grouping.
4. Re-scan `src/main.ts` after the extraction and decide whether a boot-phase split is still the right next task.
5. Verify the extracted behavior with focused tests first, then run the relevant broader runtime gate if the slice crosses startup boundaries.
Guardrails:
- Keep the work behavior-preserving.
- Prefer moving logic to existing runtime surfaces over creating new giant helper files.
- Do not expand into unrelated `src/main.ts` cleanup that is already tracked by other TASK-238 slices.
<!-- SECTION:PLAN:END -->

View File

@@ -0,0 +1,58 @@
---
id: TASK-238.7
title: Split src/main.ts into boot-phase services, runtimes, and handlers
status: To Do
assignee: []
created_date: '2026-03-27 00:00'
labels:
- tech-debt
- runtime
- maintainability
- architecture
milestone: m-0
dependencies:
- TASK-238.6
references:
- src/main.ts
- src/main/boot/services.ts
- src/main/boot/runtimes.ts
- src/main/boot/handlers.ts
- src/main/runtime/composers
parent_task_id: TASK-238
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
After the remaining inline runtime logic and composer gaps are extracted, `src/main.ts` should be split along boot-phase boundaries so the entrypoint stops mixing service construction, domain runtime composition, and handler wiring in one file. This task tracks that structural split: move service instantiation, runtime composition, and handler orchestration into dedicated boot modules, then leave `src/main.ts` as a thin lifecycle coordinator with clear startup-path selection.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 Service instantiation lives in a dedicated boot module instead of a large inline setup block in `src/main.ts`.
- [ ] #2 Domain runtime composition lives in a dedicated boot module, separate from lifecycle and handler dispatch.
- [ ] #3 Handler/composer invocation lives in a dedicated boot module, with `src/main.ts` reduced to app lifecycle and startup-path selection.
- [ ] #4 Existing startup behavior remains unchanged across desktop and headless flows.
- [ ] #5 Focused tests cover the split surfaces, and the relevant runtime/typecheck gate passes.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
Recommended sequence:
1. Re-scan `src/main.ts` after TASK-238.6 lands and mark the remaining boot-phase seams by responsibility.
2. Extract service instantiation into `src/main/boot/services.ts` or equivalent.
3. Extract runtime composition into `src/main/boot/runtimes.ts` or equivalent.
4. Extract handler/composer orchestration into `src/main/boot/handlers.ts` or equivalent.
5. Shrink `src/main.ts` to startup-path selection, app lifecycle hooks, and minimal boot wiring.
6. Verify the split with focused entrypoint/runtime tests first, then run the broader runtime gate if the refactor crosses startup boundaries.
Guardrails:
- Keep the split behavior-preserving.
- Prefer small boot modules with narrow ownership over a new monolithic bootstrap layer.
- Do not reopen the inline logic work already tracked by TASK-238.6 unless a remaining seam truly belongs here.
<!-- SECTION:PLAN:END -->

View File

@@ -0,0 +1,51 @@
---
id: TASK-239
title: 'Mining workflow upgrades: prioritize high-value user-facing improvements'
status: To Do
assignee: []
created_date: '2026-03-26 20:49'
labels:
- feature
- ux
- planning
milestone: m-2
dependencies: []
references:
- src/main.ts
- src/renderer
- src/anki-integration.ts
- src/config/service.ts
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Track the next set of high-value workflow improvements surfaced by the March 2026 review. The goal is to capture bounded, implementation-sized feature slices with clear user value and avoid prematurely committing to much larger bets like hard-sub OCR, plugin marketplace infrastructure, or cloud config sync. Focus this parent task on features that directly improve the core mining workflow: profile-aware setup, action discoverability, output preview before mining, and richer subtitle-range selection.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 Child tasks exist for the selected near-to-medium-term workflow upgrades with explicit scope and exclusions.
- [ ] #2 The parent task records the recommended sequencing so future work starts with the best value/risk ratio.
- [ ] #3 The tracked feature set stays grounded in existing product surfaces instead of speculative external-platform integrations.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
Recommended sequencing:
1. Start TASK-239.3 first. Template preview is the smallest high-signal UX win on a core mining path.
2. Start TASK-239.2 next. A command palette improves discoverability across existing actions without large backend upheaval.
3. Start TASK-239.4 after the preview/palette work. Sentence clipping is high-value but touches runtime, subtitle selection, and card creation flows together.
4. Keep TASK-239.1 as a foundation project and scope it narrowly to local multi-profile support. Do not expand it into cloud sync in the same slice.
Deliberate exclusions for now:
- hard-sub OCR
- plugin marketplace infrastructure
- cloud/device sync
- site-specific streaming source auto-detection beyond narrow discovery spikes
<!-- SECTION:PLAN:END -->

View File

@@ -0,0 +1,46 @@
---
id: TASK-239.1
title: Add profile-aware config foundations and profile selection flow
status: To Do
assignee: []
created_date: '2026-03-26 20:49'
labels:
- feature
- config
- launcher
- ux
milestone: m-2
dependencies: []
references:
- src/config/service.ts
- src/config/load.ts
- launcher/config.ts
- src/main.ts
parent_task_id: TASK-239
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Introduce the foundation for local multi-profile use so users can keep separate setups for different workflows without hand-editing or swapping config files manually. Keep the first slice intentionally narrow: named local profiles, explicit selection, separate config/data paths, and safe migration from the current single-profile setup. Do not couple this task to cloud sync or remote profile sharing.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 Users can create/select a named local profile and launch SubMiner against that profile explicitly.
- [ ] #2 Each profile uses separate config and data storage paths for settings and profile-scoped runtime state that should not bleed across workflows.
- [ ] #3 Existing single-profile users migrate safely to a default profile without losing settings.
- [ ] #4 The active profile is visible in the launcher/app surface where it materially affects user behavior.
- [ ] #5 Tests cover profile resolution, migration/defaulting behavior, and at least one end-to-end selection path.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Design a minimal profile storage layout and resolution strategy that works for launcher and desktop runtime entrypoints.
2. Add profile selection plumbing before changing feature behavior inside individual services.
3. Migrate config/data-path resolution to be profile-aware while preserving a safe default-profile fallback.
4. Verify with config/launcher tests plus targeted runtime coverage.
<!-- SECTION:PLAN:END -->

View File

@@ -0,0 +1,46 @@
---
id: TASK-239.2
title: Add a searchable command palette for desktop actions
status: To Do
assignee: []
created_date: '2026-03-26 20:49'
labels:
- feature
- ux
- desktop
- shortcuts
milestone: m-2
dependencies: []
references:
- src/renderer
- src/shared/ipc/contracts.ts
- src/main/runtime/overlay-runtime-options.ts
- src/main.ts
parent_task_id: TASK-239
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
SubMiner already exposes many actions through scattered shortcuts, menus, and modal flows. Add a searchable command palette so users can discover and execute high-value desktop actions from one keyboard-first surface. Build on the existing runtime-options/modal infrastructure where practical instead of creating a completely separate interaction model.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 A keyboard-accessible command palette opens from the desktop app and lists supported actions with searchable labels.
- [ ] #2 Commands are backed by an explicit registry so action availability and labels are not hard-coded in one renderer component.
- [ ] #3 Users can navigate and execute commands entirely from the keyboard.
- [ ] #4 The first slice includes the highest-value existing actions rather than trying to cover every possible command on day one.
- [ ] #5 Tests cover command filtering, execution dispatch, and at least one disabled/unavailable command state.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Define a small command-registry contract shared across renderer and main-process dispatch.
2. Reuse existing modal/runtime plumbing where it fits so the palette is a thin discoverability layer over current actions.
3. Ship a narrow but useful initial command set, then expand later based on usage.
4. Verify with renderer tests plus targeted IPC/runtime tests.
<!-- SECTION:PLAN:END -->
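The registry contract from step 1 could look like the following sketch. The `Command` shape and helper names are assumptions for illustration, not the real SubMiner IPC contract.

```typescript
// Illustrative command-registry contract for the palette.
interface Command {
  id: string;
  label: string;
  enabled: boolean;
  run: () => void;
}

function filterCommands(registry: Command[], query: string): Command[] {
  const needle = query.trim().toLowerCase();
  // An empty query lists everything; otherwise match on searchable labels (AC #1).
  return registry.filter((c) => !needle || c.label.toLowerCase().includes(needle));
}

function dispatch(registry: Command[], id: string): boolean {
  const command = registry.find((c) => c.id === id);
  // Disabled/unavailable commands stay visible but never execute (AC #5).
  if (!command || !command.enabled) return false;
  command.run();
  return true;
}
```

Because availability lives on the registry entry, the renderer component never hard-codes which actions exist (AC #2).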

View File

@@ -0,0 +1,45 @@
---
id: TASK-239.3
title: Add live Anki template preview for card output
status: To Do
assignee: []
created_date: '2026-03-26 20:49'
labels:
- feature
- anki
- ux
milestone: m-2
dependencies: []
references:
- src/anki-integration.ts
- src/anki-integration/card-creation.ts
- src/config/resolve/anki-connect.ts
- src/renderer
parent_task_id: TASK-239
priority: high
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Users currently have to infer what card output will look like from config fields and post-mine results. Add a live preview surface that shows the resolved card template output before mining so users can catch broken field mappings, missing media, or undesirable formatting earlier.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 Users can open a preview that renders the resolved front/back field output for the current note/card template configuration.
- [ ] #2 The preview clearly surfaces missing or unmapped fields instead of silently showing blank content.
- [ ] #3 Preview generation uses the same transformation logic as the live card-creation path so it stays trustworthy.
- [ ] #4 The first slice works with representative sample mining payloads and handles missing optional media gracefully.
- [ ] #5 Tests cover preview rendering for at least one valid and one invalid/missing-field configuration.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Identify the current card-creation data path and extract any logic needed to render a preview without duplicating transformation rules.
2. Add a focused preview UI in the most relevant existing configuration/setup surface.
3. Surface validation/warning states for empty mappings, missing fields, and media-dependent outputs.
4. Verify with Anki integration tests plus renderer coverage for preview states.
<!-- SECTION:PLAN:END -->
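The missing-field surfacing from step 3 can be sketched like this. The `{{Field}}` template syntax and field names are hypothetical sample data, not the real card-creation contract.

```typescript
interface PreviewResult {
  front: string;
  warnings: string[];
}

// Illustrative preview renderer: unmapped fields get a visible placeholder
// plus a warning instead of silently rendering blank (AC #2).
function renderPreview(template: string, fields: Record<string, string>): PreviewResult {
  const warnings: string[] = [];
  const front = template.replace(/\{\{(\w+)\}\}/g, (_, name: string) => {
    const value = fields[name];
    if (value === undefined || value === '') {
      warnings.push(`missing field: ${name}`);
      return `[missing: ${name}]`;
    }
    return value;
  });
  return { front, warnings };
}
```

In the real slice this renderer should delegate to the same transformation logic as live card creation (AC #3) rather than duplicating it.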

View File

@@ -0,0 +1,46 @@
---
id: TASK-239.4
title: Add sentence clipping from arbitrary subtitle ranges
status: To Do
assignee: []
created_date: '2026-03-26 20:49'
labels:
- feature
- subtitle
- anki
- ux
milestone: m-2
dependencies: []
references:
- src/renderer/modals/subtitle-sidebar.ts
- src/main/runtime/subtitle-position.ts
- src/anki-integration/card-creation.ts
- src/main/runtime/mpv-main-event-actions.ts
parent_task_id: TASK-239
priority: medium
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Current mining flows are optimized around the active subtitle line. Add a sentence-clipping workflow that lets users select an arbitrary contiguous subtitle range, preview the combined text/timing, and mine from that selection. This should improve multi-line dialogue capture without forcing manual copy/paste or separate post-processing.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 Users can select a contiguous subtitle range from the existing subtitle UI instead of being limited to the active cue.
- [ ] #2 The workflow previews the combined text and resulting timing range before mining.
- [ ] #3 Mining from a clipped range uses the combined subtitle payload in card generation while preserving existing single-line behavior.
- [ ] #4 The feature handles overlapping/edge timing cases predictably and does not corrupt the normal active-cue flow.
- [ ] #5 Tests cover range selection, combined payload generation, and at least one card-creation path using a clipped selection.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Define a selection model that fits the existing subtitle sidebar/runtime data flow.
2. Add preview + confirmation UI before routing the clipped payload into mining.
3. Keep the existing single-line path intact and treat clipping as an additive workflow.
4. Verify with subtitle-sidebar, runtime, and Anki/card-creation tests.
<!-- SECTION:PLAN:END -->
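The combined-payload step can be sketched as below. The `Cue` shape, the space joiner, and the outer-bounds timing rule are assumptions for illustration, not the existing subtitle runtime types.

```typescript
interface Cue {
  text: string;
  startMs: number;
  endMs: number;
}

// Combine a contiguous cue selection into one mining payload.
function combineCueRange(cues: Cue[], from: number, to: number) {
  const selected = cues.slice(from, to + 1);
  if (selected.length === 0) throw new Error('empty selection');
  return {
    text: selected.map((c) => c.text).join(' '),
    // Overlapping cue timings resolve predictably to the outermost bounds (AC #4).
    startMs: Math.min(...selected.map((c) => c.startMs)),
    endMs: Math.max(...selected.map((c) => c.endMs)),
  };
}
```

A single-cue selection degenerates to the existing active-cue payload, which keeps clipping additive rather than replacing the current flow (AC #3).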

View File

@@ -0,0 +1,81 @@
---
id: TASK-240
title: Migrate SubMiner agent skills into a repo-local plugin workflow
status: Done
assignee:
- codex
created_date: '2026-03-26 00:00'
updated_date: '2026-03-26 23:23'
labels:
- skills
- plugin
- workflow
- backlog
- tooling
dependencies:
- TASK-159
- TASK-160
priority: high
ordinal: 24000
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Turn the current SubMiner-specific repo skills into a reproducible repo-local plugin workflow. The plugin should become the canonical source of truth for the SubMiner scrum-master and change-verification skills, bundle the scripts and metadata needed to test and validate changes, and preserve compatibility for existing repo references through thin `.agents/skills/` shims while the migration settles.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [x] #1 A repo-local plugin scaffold exists for the SubMiner workflow, with manifest and marketplace metadata wired according to the repo-local plugin layout.
- [x] #2 `subminer-scrum-master` and `subminer-change-verification` live under the plugin as the canonical skill sources, along with any helper scripts or supporting files needed for reproducible use.
- [x] #3 Existing repo-level `.agents/skills/` entrypoints are reduced to compatibility shims or redirects instead of remaining as duplicate sources of truth.
- [x] #4 The plugin-owned workflow explicitly documents backlog-first orchestration and change verification expectations, including how the skills work together.
- [x] #5 The migration is validated with the cheapest sufficient repo-native verification lane and the task records the exact commands and any skips/blockers.
<!-- AC:END -->
## Implementation Plan
<!-- SECTION:PLAN:BEGIN -->
1. Inspect the plugin-creator contract and current repo skill/script layout, then choose the plugin name, directory structure, and migration boundaries.
2. Scaffold a repo-local plugin plus marketplace entry, keeping the plugin payload under `plugins/<name>/` and the catalog entry under `.agents/plugins/marketplace.json`.
3. Move the two SubMiner-specific skills and their helper scripts into the plugin as the canonical source, adding any plugin docs or supporting metadata needed for reproducible testing/validation.
4. Replace the existing `.agents/skills/subminer-*` surfaces with minimal compatibility shims that point agents at the plugin-owned sources without duplicating logic.
5. Update internal docs or references that should now describe the plugin-first workflow.
6. Run the cheapest sufficient verification lane for plugin/internal-doc changes and record the results in this task.
<!-- SECTION:PLAN:END -->
## Implementation Notes
<!-- SECTION:NOTES:BEGIN -->
2026-03-26: User approved the migration shape where the plugin becomes the canonical source of truth and `.agents/skills/` stays only as compatibility shims. Repo-local plugin chosen over home-local plugin.
2026-03-26: Backlog MCP resources/tools are not available in this Codex session (`MCP startup failed`), so this task is being initialized directly in the repo-local `backlog/` files instead of through the live Backlog MCP interface.
2026-03-26: Scaffolded `plugins/subminer-workflow/` plus `.agents/plugins/marketplace.json`, moved the scrum-master and change-verification skill definitions into the plugin as the canonical sources, and converted the old `.agents/skills/` surfaces into compatibility shims. Preserved the old verifier script entrypoints as wrappers because backlog/docs history already calls them directly.
2026-03-26: Verification passed.
- `bash -n plugins/subminer-workflow/skills/subminer-change-verification/scripts/classify_subminer_diff.sh`
- `bash -n plugins/subminer-workflow/skills/subminer-change-verification/scripts/verify_subminer_change.sh`
- `bash -n .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh`
- `bash -n .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh`
- `bash .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh plugins/subminer-workflow/.codex-plugin/plugin.json docs/workflow/agent-plugins.md .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh`
- `bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane docs plugins/subminer-workflow .agents/skills/subminer-scrum-master/SKILL.md .agents/skills/subminer-change-verification/SKILL.md .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh .agents/plugins/marketplace.json docs/workflow/README.md docs/workflow/agent-plugins.md 'backlog/tasks/task-240 - Migrate-SubMiner-agent-skills-into-a-repo-local-plugin-workflow.md'`
- Verifier artifacts: `.tmp/skill-verification/subminer-verify-20260326-232300-E2NQVX/`
<!-- SECTION:NOTES:END -->
## Final Summary
<!-- SECTION:FINAL_SUMMARY:BEGIN -->
Created a repo-local `subminer-workflow` plugin as the canonical packaging for the SubMiner scrum-master and change-verification workflow. The plugin now owns both skills, the verifier helper scripts, plugin metadata, and workflow docs. The old `.agents/skills/` surfaces remain only as compatibility shims, and the old verifier script paths now forward to the plugin-owned scripts so existing docs and backlog commands continue to work. Targeted plugin/docs verification passed, including wrapper-script syntax checks and a real verifier run through the legacy entrypoint.
<!-- SECTION:FINAL_SUMMARY:END -->

View File

@@ -0,0 +1,37 @@
---
id: TASK-241
title: Add optional setup action to seed SubMiner mpv profile
type: feature
status: Open
assignee: []
created_date: '2026-03-27 11:22'
updated_date: '2026-03-27 11:22'
labels:
- setup
- mpv
- docs
- ux
dependencies: []
references: []
documentation:
- /home/sudacode/projects/japanese/SubMiner/docs-site/usage.md
- /home/sudacode/projects/japanese/SubMiner/docs-site/launcher-script.md
ordinal: 24100
---
## Description
<!-- SECTION:DESCRIPTION:BEGIN -->
Add an optional control in the first-run / setup flow to write or update the user's mpv configuration with SubMiner-recommended defaults (especially the `subminer` profile), so users can recover from a missing profile without manual config editing.
The docs for launcher usage must explicitly state that SubMiner's Windows mpv launcher path runs mpv with `--profile=subminer` by default.
<!-- SECTION:DESCRIPTION:END -->
## Acceptance Criteria
<!-- AC:BEGIN -->
- [ ] #1 Add an optional setup UI action/button to generate or overwrite a user-confirmed mpv config that includes a `subminer` profile.
- [ ] #2 The action should be non-destructive by default, show diff/contents before write, and support append/update mode when other mpv settings already exist.
- [ ] #3 Document how to resolve the missing-profile scenario and clearly state that the SubMiner mpv launcher runs with `--profile=subminer` by default (`--launch-mpv` / Windows mpv shortcut path).
- [ ] #4 Add/adjust setup validation messaging so users are not blocked if `subminer` profile is initially missing, but can opt into one-click setup recovery.
- [ ] #5 Include a short verification path for both Windows and non-Windows flows (for example dry-run + write path).
<!-- AC:END -->
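The non-destructive write in AC #2 can be sketched as a pure merge over the existing config text. The profile body below is a placeholder, not SubMiner's actual recommended mpv defaults, and the function name is hypothetical.

```typescript
// Non-destructive mpv.conf seeding sketch: update mode leaves an existing
// [subminer] profile untouched; append mode preserves all current settings.
function seedSubminerProfile(existingConf: string): { next: string; changed: boolean } {
  if (/^\[subminer\]/m.test(existingConf)) {
    return { next: existingConf, changed: false };
  }
  const block = '\n[subminer]\n# placeholder recommended defaults\nsub-visibility=no\n';
  return { next: existingConf + block, changed: true };
}
```

Returning the merged text instead of writing it directly makes the diff/contents-before-write confirmation in AC #2 straightforward to build on top.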

View File

@@ -7,7 +7,6 @@
"dependencies": {
"@fontsource-variable/geist": "^5.2.8",
"@fontsource-variable/geist-mono": "^5.2.7",
"@hono/node-server": "^1.19.11",
"axios": "^1.13.5",
"commander": "^14.0.3",
"discord-rpc": "^4.0.1",
@@ -110,8 +109,6 @@
"@fontsource-variable/geist-mono": ["@fontsource-variable/geist-mono@5.2.7", "", {}, "sha512-ZKlZ5sjtalb2TwXKs400mAGDlt/+2ENLNySPx0wTz3bP3mWARCsUW+rpxzZc7e05d2qGch70pItt3K4qttbIYA=="],
"@hono/node-server": ["@hono/node-server@1.19.11", "", { "peerDependencies": { "hono": "^4" } }, "sha512-dr8/3zEaB+p0D2n/IUrlPF1HZm586qgJNXK1a9fhg/PzdtkK7Ksd5l312tJX2yBuALqDYBlG20QEbayqPyxn+g=="],
"@isaacs/cliui": ["@isaacs/cliui@8.0.2", "", { "dependencies": { "string-width": "^5.1.2", "string-width-cjs": "npm:string-width@^4.2.0", "strip-ansi": "^7.0.1", "strip-ansi-cjs": "npm:strip-ansi@^6.0.1", "wrap-ansi": "^8.1.0", "wrap-ansi-cjs": "npm:wrap-ansi@^7.0.0" } }, "sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA=="],
"@isaacs/fs-minipass": ["@isaacs/fs-minipass@4.0.1", "", { "dependencies": { "minipass": "^7.0.4" } }, "sha512-wgm9Ehl2jpeqP3zw/7mo3kRHFp5MEDhqAdwy1fTGkHAwnkGOVsgpvQhL8B5n1qlb01jV3n/bI0ZfZp5lWA1k4w=="],

View File

@@ -0,0 +1,5 @@
type: fixed
area: stats
- Fixed stats startup so the immersion tracker can run when `Bun.serve` is unavailable.
- Stats server now falls back to a Node `http` listener in Electron/runtime paths that do not expose Bun.

View File

@@ -39,6 +39,7 @@ src/
types.ts # Shared type definitions
main/ # Main-process composition/runtime adapters
app-lifecycle.ts # App lifecycle + app-ready runtime runner factories
character-dictionary-runtime.ts # Character-dictionary orchestration/public runtime API
cli-runtime.ts # CLI command runtime service adapters
config-validation.ts # Startup/hot-reload config error formatting and fail-fast helpers
dependencies.ts # Shared dependency builders for IPC/runtime services
@@ -53,6 +54,7 @@ src/
startup-lifecycle.ts # Lifecycle runtime runner adapter
state.ts # Application runtime state container + reducer transitions
subsync-runtime.ts # Subsync command runtime adapter
character-dictionary-runtime/ # Character-dictionary fetch/build/cache modules + focused tests
runtime/
composers/ # High-level composition clusters used by main.ts
domains/ # Domain barrel exports (startup/overlay/mpv/jellyfin/...)

View File

@@ -1,5 +1,12 @@
# Changelog
## v0.9.3 (2026-03-25)
- Moved YouTube primary subtitle language defaults to `youtube.primarySubLanguages`.
- Removed the placeholder YouTube subtitle retime step; downloaded primary subtitle tracks are now used directly.
- Removed the old internal YouTube retime helper and its tests.
- Clarified optional `alass` / `ffsubsync` subtitle-sync setup and fallback behavior in the docs.
- Removed the legacy `youtubeSubgen.primarySubLanguages` config path from generated config and docs.
## v0.9.2 (2026-03-25)
- Fixed overlay pointer tracking so Windows click-through toggles immediately when the cursor enters or leaves subtitle regions.
- Fixed Windows overlay window tracking on scaled displays by converting native tracked window bounds to Electron DIP coordinates.

View File

@@ -3,7 +3,7 @@
# Architecture Map
Status: active
Last verified: 2026-03-13
Last verified: 2026-03-26
Owner: Kyle Yasuda
Read when: runtime ownership, composition boundaries, or layering questions
@@ -27,6 +27,7 @@ The desktop app keeps `src/main.ts` as composition root and pushes behavior into
- `src/core/services/` owns focused runtime services plus pure or side-effect-bounded logic.
- `src/renderer/` owns overlay rendering and input behavior.
- `src/config/` owns config definitions, defaults, loading, and resolution.
- `src/types/` owns shared cross-runtime contracts via domain entrypoints; `src/types.ts` stays a compatibility barrel.
- `src/main/runtime/composers/` owns larger domain compositions.
## Architecture Intent

View File

@@ -3,7 +3,7 @@
# Domain Ownership
Status: active
Last verified: 2026-03-13
Last verified: 2026-03-26
Owner: Kyle Yasuda
Read when: you need to find the owner module for a behavior or test surface
@@ -23,17 +23,28 @@ Read when: you need to find the owner module for a behavior or test surface
- Anki workflow: `src/anki-integration/`, `src/core/services/anki-jimaku*.ts`
- Immersion tracking: `src/core/services/immersion-tracker/`
Includes stats storage/query schema such as `imm_videos`, `imm_media_art`, and `imm_youtube_videos` for per-video and YouTube-specific library metadata.
- AniList tracking: `src/core/services/anilist/`, `src/main/runtime/composers/anilist-*`
- AniList tracking + character dictionary: `src/core/services/anilist/`, `src/main/runtime/composers/anilist-*`, `src/main/character-dictionary-runtime.ts`, `src/main/character-dictionary-runtime/`
- Jellyfin integration: `src/core/services/jellyfin*.ts`, `src/main/runtime/composers/jellyfin-*`
- Window trackers: `src/window-trackers/`
- Stats app: `stats/`
- Public docs site: `docs-site/`
## Shared Contract Entry Points
- Config + app-state contracts: `src/types/config.ts`
- Subtitle/token/media annotation contracts: `src/types/subtitle.ts`
- Runtime/window/controller/Electron bridge contracts: `src/types/runtime.ts`
- Anki-specific contracts: `src/types/anki.ts`
- External integration contracts: `src/types/integrations.ts`
- Runtime-option contracts: `src/types/runtime-options.ts`
- Compatibility-only barrel: `src/types.ts`
## Ownership Heuristics
- Runtime wiring or dependency setup: start in `src/main/`
- Business logic or service behavior: start in `src/core/services/`
- UI interaction or overlay DOM behavior: start in `src/renderer/`
- Command parsing or mpv launch flow: start in `launcher/`
- Shared contract changes: add or edit the narrowest `src/types/<domain>.ts` entrypoint; only touch `src/types.ts` for compatibility exports.
- User-facing docs: `docs-site/`
- Internal process/docs: `docs/`

View File

@@ -13,6 +13,7 @@ This section is the internal workflow map for contributors and agents.
- [Planning](./planning.md) - when to write a lightweight plan vs a full execution plan
- [Verification](./verification.md) - maintained test/build lanes and handoff gate
- [Agent Plugins](./agent-plugins.md) - repo-local plugin ownership for agent workflow skills
- [Release Guide](../RELEASING.md) - tagged release workflow
## Default Flow

View File

@@ -0,0 +1,32 @@
<!-- read_when: using or modifying repo-local agent plugins -->
# Agent Plugins
Status: active
Last verified: 2026-03-26
Owner: Kyle Yasuda
Read when: packaging or migrating repo-local agent workflow skills into plugins
## SubMiner Workflow Plugin
- Canonical plugin path: `plugins/subminer-workflow/`
- Marketplace catalog: `.agents/plugins/marketplace.json`
- Canonical skill sources:
- `plugins/subminer-workflow/skills/subminer-scrum-master/`
- `plugins/subminer-workflow/skills/subminer-change-verification/`
## Migration Rule
- Plugin-owned skills are the source of truth.
- `.agents/skills/subminer-*` remain only as compatibility shims.
- Existing script entrypoints under `.agents/skills/subminer-change-verification/scripts/` stay as wrappers so historical commands do not break.
## Backlog
- Prefer Backlog.md MCP when the host session exposes it.
- If MCP is unavailable, use repo-local `backlog/` files and record that fallback.
## Verification
- For plugin/docs-only changes, start with `bun run test:docs:kb`.
- Use the plugin-owned verifier when the change crosses from docs into scripts or workflow logic.

View File

@@ -227,11 +227,7 @@ test('stats background command launches attached daemon control command with res
assert.equal(handled, true);
assert.deepEqual(harness.forwarded, [
[
'--stats-daemon-start',
'--stats-response-path',
'/tmp/subminer-stats-test/response.json',
],
['--stats-daemon-start', '--stats-response-path', '/tmp/subminer-stats-test/response.json'],
]);
assert.equal(harness.removedPaths.length, 1);
});
@@ -257,11 +253,7 @@ test('stats command waits for attached app exit after startup response', async (
const final = await statsCommand;
assert.equal(final, true);
assert.deepEqual(harness.forwarded, [
[
'--stats',
'--stats-response-path',
'/tmp/subminer-stats-test/response.json',
],
['--stats', '--stats-response-path', '/tmp/subminer-stats-test/response.json'],
]);
assert.equal(harness.removedPaths.length, 1);
});
@@ -317,11 +309,7 @@ test('stats stop command forwards stop flag to the app', async () => {
assert.equal(handled, true);
assert.deepEqual(harness.forwarded, [
[
'--stats-daemon-stop',
'--stats-response-path',
'/tmp/subminer-stats-test/response.json',
],
['--stats-daemon-stop', '--stats-response-path', '/tmp/subminer-stats-test/response.json'],
]);
assert.equal(harness.removedPaths.length, 1);
});

View File

@@ -14,6 +14,7 @@ import {
waitForUnixSocketReady,
} from '../mpv.js';
import type { Args } from '../types.js';
import { nowMs } from '../time.js';
import type { LauncherCommandContext } from './context.js';
import { ensureLauncherSetupReady } from '../setup-gate.js';
import {
@@ -116,7 +117,7 @@ async function ensurePlaybackSetupReady(context: LauncherCommandContext): Promis
child.unref();
},
sleep: (ms) => new Promise((resolve) => setTimeout(resolve, ms)),
now: () => Date.now(),
now: () => nowMs(),
timeoutMs: SETUP_WAIT_TIMEOUT_MS,
pollIntervalMs: SETUP_POLL_INTERVAL_MS,
});
@@ -209,7 +210,11 @@ export async function runPlaybackCommandWithDeps(
pluginRuntimeConfig.autoStartPauseUntilReady;
if (shouldPauseUntilOverlayReady) {
deps.log('info', args.logLevel, 'Configured to pause mpv until overlay and tokenization are ready');
deps.log(
'info',
args.logLevel,
'Configured to pause mpv until overlay and tokenization are ready',
);
}
await deps.startMpv(
@@ -250,7 +255,11 @@ export async function runPlaybackCommandWithDeps(
if (ready) {
deps.log('info', args.logLevel, 'MPV IPC socket ready, relying on mpv plugin auto-start');
} else {
deps.log('info', args.logLevel, 'MPV IPC socket not ready yet, relying on mpv plugin auto-start');
deps.log(
'info',
args.logLevel,
'MPV IPC socket not ready yet, relying on mpv plugin auto-start',
);
}
} else if (ready) {
deps.log(

View File

@@ -2,6 +2,7 @@ import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import { runAppCommandAttached } from '../mpv.js';
import { nowMs } from '../time.js';
import { sleep } from '../util.js';
import type { LauncherCommandContext } from './context.js';
@@ -45,8 +46,8 @@ const defaultDeps: StatsCommandDeps = {
runAppCommandAttached: (appPath, appArgs, logLevel, label) =>
runAppCommandAttached(appPath, appArgs, logLevel, label),
waitForStatsResponse: async (responsePath, signal) => {
const deadline = Date.now() + STATS_STARTUP_RESPONSE_TIMEOUT_MS;
while (Date.now() < deadline) {
const deadline = nowMs() + STATS_STARTUP_RESPONSE_TIMEOUT_MS;
while (nowMs() < deadline) {
if (signal?.aborted) {
return {
ok: false,

View File

@@ -0,0 +1,155 @@
import assert from 'node:assert/strict';
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import test from 'node:test';
import {
applyInvocationsToArgs,
applyRootOptionsToArgs,
createDefaultArgs,
} from './args-normalizer.js';
class ExitSignal extends Error {
code: number;
constructor(code: number) {
super(`exit:${code}`);
this.code = code;
}
}
function withProcessExitIntercept(callback: () => void): ExitSignal {
const originalExit = process.exit;
try {
process.exit = ((code?: number) => {
throw new ExitSignal(code ?? 0);
}) as typeof process.exit;
callback();
} catch (error) {
if (error instanceof ExitSignal) {
return error;
}
throw error;
} finally {
process.exit = originalExit;
}
throw new Error('expected process.exit');
}
function withTempDir<T>(fn: (dir: string) => T): T {
const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'subminer-launcher-args-'));
try {
return fn(dir);
} finally {
fs.rmSync(dir, { recursive: true, force: true });
}
}
test('createDefaultArgs normalizes configured language codes and env thread override', () => {
const originalThreads = process.env.SUBMINER_WHISPER_THREADS;
process.env.SUBMINER_WHISPER_THREADS = '7';
try {
const parsed = createDefaultArgs({
primarySubLanguages: [' JA ', 'jpn', 'ja'],
secondarySubLanguages: ['en', 'ENG', ''],
whisperThreads: 2,
});
assert.deepEqual(parsed.youtubePrimarySubLangs, ['ja', 'jpn']);
assert.deepEqual(parsed.youtubeSecondarySubLangs, ['en', 'eng']);
assert.deepEqual(parsed.youtubeAudioLangs, ['ja', 'jpn', 'en', 'eng']);
assert.equal(parsed.whisperThreads, 7);
assert.equal(parsed.youtubeWhisperSourceLanguage, 'ja');
} finally {
if (originalThreads === undefined) {
delete process.env.SUBMINER_WHISPER_THREADS;
} else {
process.env.SUBMINER_WHISPER_THREADS = originalThreads;
}
}
});
test('applyRootOptionsToArgs maps file, directory, and url targets', () => {
withTempDir((dir) => {
const filePath = path.join(dir, 'movie.mkv');
const folderPath = path.join(dir, 'anime');
fs.writeFileSync(filePath, 'x');
fs.mkdirSync(folderPath);
const fileParsed = createDefaultArgs({});
applyRootOptionsToArgs(fileParsed, {}, filePath);
assert.equal(fileParsed.targetKind, 'file');
assert.equal(fileParsed.target, filePath);
const dirParsed = createDefaultArgs({});
applyRootOptionsToArgs(dirParsed, {}, folderPath);
assert.equal(dirParsed.directory, folderPath);
assert.equal(dirParsed.target, '');
assert.equal(dirParsed.targetKind, '');
const urlParsed = createDefaultArgs({});
applyRootOptionsToArgs(urlParsed, {}, 'https://example.test/video');
assert.equal(urlParsed.targetKind, 'url');
assert.equal(urlParsed.target, 'https://example.test/video');
});
});
test('applyRootOptionsToArgs rejects unsupported targets', () => {
const parsed = createDefaultArgs({});
const error = withProcessExitIntercept(() => {
applyRootOptionsToArgs(parsed, {}, '/definitely/missing/subminer-target');
});
assert.equal(error.code, 1);
assert.match(error.message, /exit:1/);
});
test('applyInvocationsToArgs maps config and jellyfin invocation state', () => {
const parsed = createDefaultArgs({});
applyInvocationsToArgs(parsed, {
jellyfinInvocation: {
action: 'play',
play: true,
server: 'https://jf.example',
username: 'alice',
password: 'secret',
logLevel: 'debug',
},
configInvocation: {
action: 'show',
logLevel: 'warn',
},
mpvInvocation: null,
appInvocation: null,
dictionaryTriggered: false,
dictionaryTarget: null,
dictionaryLogLevel: null,
statsTriggered: false,
statsBackground: false,
statsStop: false,
statsCleanup: false,
statsCleanupVocab: false,
statsCleanupLifetime: false,
statsLogLevel: null,
doctorTriggered: false,
doctorLogLevel: null,
doctorRefreshKnownWords: false,
texthookerTriggered: false,
texthookerLogLevel: null,
});
assert.equal(parsed.jellyfin, false);
assert.equal(parsed.jellyfinPlay, true);
assert.equal(parsed.jellyfinDiscovery, false);
assert.equal(parsed.jellyfinLogin, false);
assert.equal(parsed.jellyfinLogout, false);
assert.equal(parsed.jellyfinServer, 'https://jf.example');
assert.equal(parsed.jellyfinUsername, 'alice');
assert.equal(parsed.jellyfinPassword, 'secret');
assert.equal(parsed.configShow, true);
assert.equal(parsed.logLevel, 'warn');
});

View File

@@ -0,0 +1,37 @@
import assert from 'node:assert/strict';
import test from 'node:test';
import { parseCliPrograms, resolveTopLevelCommand } from './cli-parser-builder.js';
test('resolveTopLevelCommand skips root options and finds the first command', () => {
assert.deepEqual(resolveTopLevelCommand(['--backend', 'macos', 'config', 'show']), {
name: 'config',
index: 2,
});
});
test('resolveTopLevelCommand respects the app alias after root options', () => {
assert.deepEqual(resolveTopLevelCommand(['--log-level', 'debug', 'bin', '--foo']), {
name: 'bin',
index: 2,
});
});
test('parseCliPrograms keeps root options and target when no command is present', () => {
const result = parseCliPrograms(['--backend', 'x11', '/tmp/movie.mkv'], 'subminer');
assert.equal(result.options.backend, 'x11');
assert.equal(result.rootTarget, '/tmp/movie.mkv');
assert.equal(result.invocations.appInvocation, null);
});
test('parseCliPrograms routes app alias arguments through passthrough mode', () => {
const result = parseCliPrograms(
['--backend', 'macos', 'bin', '--anilist', '--log-level', 'debug'],
'subminer',
);
assert.equal(result.options.backend, 'macos');
assert.deepEqual(result.invocations.appInvocation, {
appArgs: ['--anilist', '--log-level', 'debug'],
});
});

View File

@@ -236,17 +236,12 @@ export function parseCliPrograms(
normalizedAction !== 'rebuild' &&
normalizedAction !== 'backfill'
) {
throw new Error(
'Invalid stats action. Valid values are cleanup, rebuild, or backfill.',
);
throw new Error('Invalid stats action. Valid values are cleanup, rebuild, or backfill.');
}
if (normalizedAction && (statsBackground || statsStop)) {
throw new Error('Stats background and stop flags cannot be combined with stats actions.');
}
if (
normalizedAction !== 'cleanup' &&
(options.vocab === true || options.lifetime === true)
) {
if (normalizedAction !== 'cleanup' && (options.vocab === true || options.lifetime === true)) {
throw new Error('Stats --vocab and --lifetime flags require the cleanup action.');
}
if (normalizedAction === 'cleanup') {

View File

@@ -10,6 +10,7 @@ import type {
JellyfinGroupEntry,
} from './types.js';
import { log, fail, getMpvLogPath } from './log.js';
import { nowMs } from './time.js';
import { commandExists, resolvePathMaybe, sleep } from './util.js';
import {
pickLibrary,
@@ -453,9 +454,9 @@ async function runAppJellyfinCommand(
}
return retriedAfterStart ? 12000 : 4000;
})();
const settleDeadline = Date.now() + settleWindowMs;
const settleDeadline = nowMs() + settleWindowMs;
const settleOffset = attempt.logOffset;
while (Date.now() < settleDeadline) {
while (nowMs() < settleDeadline) {
await sleep(100);
const settledOutput = readLogAppendedSince(settleOffset);
if (!settledOutput.trim()) {
@@ -489,8 +490,8 @@ async function requestJellyfinPreviewAuthFromApp(
return null;
}
const deadline = Date.now() + 4000;
while (Date.now() < deadline) {
const deadline = nowMs() + 4000;
while (nowMs() < deadline) {
try {
if (fs.existsSync(responsePath)) {
const raw = fs.readFileSync(responsePath, 'utf8');

View File

@@ -14,12 +14,7 @@ test('getDefaultMpvLogFile uses APPDATA on windows', () => {
assert.equal(
path.normalize(resolved),
path.normalize(
path.join(
'C:\\Users\\tester\\AppData\\Roaming',
'SubMiner',
'logs',
`mpv-${today}.log`,
),
path.join('C:\\Users\\tester\\AppData\\Roaming', 'SubMiner', 'logs', `mpv-${today}.log`),
),
);
});
@@ -33,12 +28,6 @@ test('getDefaultLauncherLogFile uses launcher prefix', () => {
assert.equal(
resolved,
path.join(
'/home/tester',
'.config',
'SubMiner',
'logs',
`launcher-${today}.log`,
),
path.join('/home/tester', '.config', 'SubMiner', 'logs', `launcher-${today}.log`),
);
});

View File

@@ -36,6 +36,8 @@ function withTempDir<T>(fn: (dir: string) => T): T {
}
}
const LAUNCHER_RUN_TIMEOUT_MS = 30000;
function runLauncher(argv: string[], env: NodeJS.ProcessEnv): RunResult {
const result = spawnSync(
process.execPath,
@@ -43,6 +45,7 @@ function runLauncher(argv: string[], env: NodeJS.ProcessEnv): RunResult {
{
env,
encoding: 'utf8',
timeout: LAUNCHER_RUN_TIMEOUT_MS,
},
);
return {
@@ -269,10 +272,7 @@ ${bunBinary} -e "const net=require('node:net'); const fs=require('node:fs'); con
SUBMINER_APPIMAGE_PATH: appPath,
SUBMINER_TEST_MPV_ARGS: mpvArgsPath,
};
const result = runLauncher(
['--args', '--pause=yes --title="movie night"', videoPath],
env,
);
const result = runLauncher(['--args', '--pause=yes --title="movie night"', videoPath], env);
assert.equal(result.status, 0, `stdout:\n${result.stdout}\nstderr:\n${result.stderr}`);
const argsFile = fs.readFileSync(mpvArgsPath, 'utf8');
@@ -355,10 +355,7 @@ ${bunBinary} -e "const net=require('node:net'); const fs=require('node:fs'); con
const result = runLauncher(['--log-level', 'debug', videoPath], env);
assert.equal(result.status, 0, `stdout:\n${result.stdout}\nstderr:\n${result.stderr}`);
assert.match(
fs.readFileSync(mpvArgsPath, 'utf8'),
/--script-opts=.*subminer-log_level=debug/,
);
assert.match(fs.readFileSync(mpvArgsPath, 'utf8'), /--script-opts=.*subminer-log_level=debug/);
});
});

View File

@@ -427,7 +427,10 @@ function withFindAppBinaryEnvSandbox(run: () => void): void {
}
}
function withAccessSyncStub(isExecutablePath: (filePath: string) => boolean, run: () => void): void {
function withAccessSyncStub(
isExecutablePath: (filePath: string) => boolean,
run: () => void,
): void {
const originalAccessSync = fs.accessSync;
try {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
@@ -468,10 +471,13 @@ test('findAppBinary resolves /opt/SubMiner/SubMiner.AppImage when ~/.local/bin c
try {
os.homedir = () => baseDir;
withFindAppBinaryEnvSandbox(() => {
withAccessSyncStub((filePath) => filePath === '/opt/SubMiner/SubMiner.AppImage', () => {
const result = findAppBinary('/some/other/path/subminer');
assert.equal(result, '/opt/SubMiner/SubMiner.AppImage');
});
withAccessSyncStub(
(filePath) => filePath === '/opt/SubMiner/SubMiner.AppImage',
() => {
const result = findAppBinary('/some/other/path/subminer');
assert.equal(result, '/opt/SubMiner/SubMiner.AppImage');
},
);
});
} finally {
os.homedir = originalHomedir;
@@ -492,11 +498,14 @@ test('findAppBinary finds subminer on PATH when AppImage candidates do not exist
process.env.PATH = `${binDir}${path.delimiter}${originalPath ?? ''}`;
withFindAppBinaryEnvSandbox(() => {
withAccessSyncStub((filePath) => filePath === wrapperPath, () => {
// selfPath must differ from wrapperPath so the self-check does not exclude it
const result = findAppBinary(path.join(baseDir, 'launcher', 'subminer'));
assert.equal(result, wrapperPath);
});
withAccessSyncStub(
(filePath) => filePath === wrapperPath,
() => {
// selfPath must differ from wrapperPath so the self-check does not exclude it
const result = findAppBinary(path.join(baseDir, 'launcher', 'subminer'));
assert.equal(result, wrapperPath);
},
);
});
} finally {
os.homedir = originalHomedir;

View File

@@ -7,6 +7,7 @@ import type { LogLevel, Backend, Args, MpvTrack } from './types.js';
import { DEFAULT_MPV_SUBMINER_ARGS, DEFAULT_YOUTUBE_YTDL_FORMAT } from './types.js';
import { appendToAppLog, getAppLogPath, log, fail, getMpvLogPath } from './log.js';
import { buildSubminerScriptOpts, resolveAniSkipMetadataForFile } from './aniskip-metadata.js';
import { nowMs } from './time.js';
import {
commandExists,
getPathEnv,
@@ -47,7 +48,11 @@ export function parseMpvArgString(input: string): string[] {
let inDoubleQuote = false;
let escaping = false;
const canEscape = (nextChar: string | undefined): boolean =>
nextChar === undefined || nextChar === '"' || nextChar === "'" || nextChar === '\\' || /\s/.test(nextChar);
nextChar === undefined ||
nextChar === '"' ||
nextChar === "'" ||
nextChar === '\\' ||
/\s/.test(nextChar);
for (let i = 0; i < chars.length; i += 1) {
const ch = chars[i] || '';
@@ -196,8 +201,8 @@ async function terminateTrackedDetachedMpv(logLevel: LogLevel): Promise<void> {
return;
}
const deadline = Date.now() + 1500;
while (Date.now() < deadline) {
const deadline = nowMs() + 1500;
while (nowMs() < deadline) {
if (!isProcessAlive(pid)) {
clearTrackedDetachedMpvPid();
return;
@@ -340,7 +345,7 @@ export function sendMpvCommandWithResponse(
timeoutMs = 5000,
): Promise<unknown> {
return new Promise((resolve, reject) => {
const requestId = Date.now() + Math.floor(Math.random() * 1000);
const requestId = nowMs() + Math.floor(Math.random() * 1000);
const socket = net.createConnection(socketPath);
let buffer = '';
@@ -598,7 +603,9 @@ export async function startMpv(
? await resolveAniSkipMetadataForFile(target)
: null;
const extraScriptOpts =
targetKind === 'url' && isYoutubeTarget(target) && options?.disableYoutubeSubtitleAutoLoad === true
targetKind === 'url' &&
isYoutubeTarget(target) &&
options?.disableYoutubeSubtitleAutoLoad === true
? ['subminer-auto_start_pause_until_ready=no']
: [];
const scriptOpts = buildSubminerScriptOpts(
@@ -1064,7 +1071,9 @@ export function launchMpvIdleDetached(
mpvArgs.push(...parseMpvArgString(args.mpvArgs));
}
mpvArgs.push('--idle=yes');
mpvArgs.push(`--script-opts=${buildSubminerScriptOpts(appPath, socketPath, null, args.logLevel)}`);
mpvArgs.push(
`--script-opts=${buildSubminerScriptOpts(appPath, socketPath, null, args.logLevel)}`,
);
mpvArgs.push(`--log-file=${getMpvLogPath()}`);
mpvArgs.push(`--input-ipc-server=${socketPath}`);
const mpvTarget = resolveCommandInvocation('mpv', mpvArgs);
@@ -1109,8 +1118,8 @@ export async function waitForUnixSocketReady(
socketPath: string,
timeoutMs: number,
): Promise<boolean> {
const deadline = Date.now() + timeoutMs;
while (Date.now() < deadline) {
const deadline = nowMs() + timeoutMs;
while (nowMs() < deadline) {
try {
if (fs.existsSync(socketPath)) {
const ready = await canConnectUnixSocket(socketPath);

launcher/time.ts Normal file (+8)
View File

@@ -0,0 +1,8 @@
export function nowMs(): number {
const perf = globalThis.performance;
if (perf) {
return Math.floor(perf.timeOrigin + perf.now());
}
return Number(process.hrtime.bigint() / 1000000n);
}
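The new helper is used in the deadline-polling loops migrated above. A minimal self-contained sketch of that pattern (the `waitUntil` name is illustrative, not from the repo, and the fallback here is simplified to `Date.now()` where the repo uses `process.hrtime`):

```typescript
// Wall-clock "now" in ms, in the spirit of launcher/time.ts: prefer the
// performance clock; fall back to Date.now() where it is unavailable.
function nowMs(): number {
  const perf = globalThis.performance;
  return perf ? Math.floor(perf.timeOrigin + perf.now()) : Date.now();
}

// Illustrative deadline loop in the style of waitForUnixSocketReady:
// poll check() until it passes or the deadline elapses.
async function waitUntil(check: () => boolean, timeoutMs: number): Promise<boolean> {
  const deadline = nowMs() + timeoutMs;
  while (nowMs() < deadline) {
    if (check()) return true;
    await new Promise((resolve) => setTimeout(resolve, 10));
  }
  return false;
}

async function demo(): Promise<void> {
  let ready = false;
  setTimeout(() => { ready = true; }, 30);
  console.log(await waitUntil(() => ready, 500)); // prints "true"
}
demo();
```

Because every caller only compares deltas against a deadline, the exact epoch of `nowMs` matters less than its monotonic progression within a single process.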

View File

@@ -4,6 +4,7 @@ import os from 'node:os';
import { spawn } from 'node:child_process';
import type { LogLevel, CommandExecOptions, CommandExecResult } from './types.js';
import { log } from './log.js';
import { nowMs } from './time.js';
export function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
@@ -198,7 +199,7 @@ export function normalizeBasename(value: string, fallback: string): string {
if (safe) return safe;
const fallbackSafe = sanitizeToken(fallback);
if (fallbackSafe) return fallbackSafe;
return `${Date.now()}`;
return `${nowMs()}`;
}
export function normalizeLangCode(value: string): string {

View File

@@ -42,9 +42,9 @@
"test:config:smoke:dist": "bun test dist/config/path-resolution.test.js",
"test:plugin:src": "lua scripts/test-plugin-start-gate.lua && lua scripts/test-plugin-binary-windows.lua",
"test:launcher:smoke:src": "bun test launcher/smoke.e2e.test.ts",
"test:launcher:src": "bun test launcher/config.test.ts launcher/config-domain-parsers.test.ts launcher/mpv.test.ts launcher/picker.test.ts launcher/parse-args.test.ts launcher/main.test.ts launcher/commands/command-modules.test.ts launcher/smoke.e2e.test.ts && bun run test:plugin:src",
"test:core:src": "bun test src/cli/args.test.ts src/cli/help.test.ts src/shared/setup-state.test.ts src/core/services/cli-command.test.ts src/core/services/field-grouping-overlay.test.ts src/core/services/numeric-shortcut-session.test.ts src/core/services/secondary-subtitle.test.ts src/core/services/mpv-render-metrics.test.ts src/core/services/overlay-content-measurement.test.ts src/core/services/mpv-control.test.ts src/core/services/mpv.test.ts src/core/services/runtime-options-ipc.test.ts src/core/services/runtime-config.test.ts src/core/services/yomitan-extension-paths.test.ts src/core/services/config-hot-reload.test.ts src/core/services/discord-presence.test.ts src/core/services/tokenizer.test.ts src/core/services/tokenizer/annotation-stage.test.ts src/core/services/tokenizer/parser-selection-stage.test.ts src/core/services/tokenizer/parser-enrichment-stage.test.ts src/core/services/subsync.test.ts src/core/services/overlay-bridge.test.ts src/core/services/overlay-shortcut-handler.test.ts src/core/services/stats-window.test.ts src/core/services/mining.test.ts src/core/services/anki-jimaku.test.ts src/core/services/jimaku-download-path.test.ts src/core/services/jellyfin.test.ts src/core/services/jellyfin-remote.test.ts src/core/services/immersion-tracker-service.test.ts src/core/services/overlay-runtime-init.test.ts src/core/services/app-ready.test.ts src/core/services/startup-bootstrap.test.ts src/core/services/subtitle-processing-controller.test.ts src/core/services/anilist/anilist-update-queue.test.ts src/core/utils/shortcut-config.test.ts src/main/runtime/first-run-setup-plugin.test.ts src/main/runtime/first-run-setup-service.test.ts src/main/runtime/first-run-setup-window.test.ts src/main/runtime/tray-runtime.test.ts src/main/runtime/tray-main-actions.test.ts src/main/runtime/tray-main-deps.test.ts src/main/runtime/tray-runtime-handlers.test.ts src/main/runtime/cli-command-context-main-deps.test.ts src/main/runtime/app-ready-main-deps.test.ts 
src/renderer/error-recovery.test.ts src/renderer/subtitle-render.test.ts src/renderer/handlers/mouse.test.ts src/renderer/handlers/keyboard.test.ts src/renderer/modals/jimaku.test.ts src/subsync/utils.test.ts src/main/anilist-url-guard.test.ts src/window-trackers/hyprland-tracker.test.ts src/window-trackers/x11-tracker.test.ts src/window-trackers/windows-helper.test.ts src/window-trackers/windows-tracker.test.ts launcher/config.test.ts launcher/config-domain-parsers.test.ts launcher/parse-args.test.ts launcher/main.test.ts launcher/commands/command-modules.test.ts launcher/setup-gate.test.ts stats/src/lib/api-client.test.ts",
"test:core:dist": "bun test dist/cli/args.test.js dist/cli/help.test.js dist/core/services/cli-command.test.js dist/core/services/ipc.test.js dist/core/services/anki-jimaku-ipc.test.js dist/core/services/field-grouping-overlay.test.js dist/core/services/numeric-shortcut-session.test.js dist/core/services/secondary-subtitle.test.js dist/core/services/mpv-render-metrics.test.js dist/core/services/overlay-content-measurement.test.js dist/core/services/mpv-control.test.js dist/core/services/mpv.test.js dist/core/services/runtime-options-ipc.test.js dist/core/services/runtime-config.test.js dist/core/services/yomitan-extension-paths.test.js dist/core/services/config-hot-reload.test.js dist/core/services/discord-presence.test.js dist/core/services/tokenizer.test.js dist/core/services/tokenizer/annotation-stage.test.js dist/core/services/tokenizer/parser-selection-stage.test.js dist/core/services/tokenizer/parser-enrichment-stage.test.js dist/core/services/subsync.test.js dist/core/services/overlay-bridge.test.js dist/core/services/overlay-manager.test.js dist/core/services/overlay-shortcut-handler.test.js dist/core/services/mining.test.js dist/core/services/anki-jimaku.test.js dist/core/services/jimaku-download-path.test.js dist/core/services/jellyfin.test.js dist/core/services/jellyfin-remote.test.js dist/core/services/immersion-tracker-service.test.js dist/core/services/overlay-runtime-init.test.js dist/core/services/app-ready.test.js dist/core/services/startup-bootstrap.test.js dist/core/services/subtitle-processing-controller.test.js dist/core/services/anilist/anilist-token-store.test.js dist/core/services/anilist/anilist-update-queue.test.js dist/renderer/error-recovery.test.js dist/renderer/subtitle-render.test.js dist/renderer/handlers/mouse.test.js dist/renderer/handlers/keyboard.test.js dist/renderer/modals/jimaku.test.js dist/subsync/utils.test.js dist/main/anilist-url-guard.test.js dist/window-trackers/hyprland-tracker.test.js 
dist/window-trackers/x11-tracker.test.js dist/window-trackers/windows-helper.test.js dist/window-trackers/windows-tracker.test.js",
"test:launcher:src": "bun test launcher/config.test.ts launcher/config-domain-parsers.test.ts launcher/config/cli-parser-builder.test.ts launcher/config/args-normalizer.test.ts launcher/mpv.test.ts launcher/picker.test.ts launcher/parse-args.test.ts launcher/main.test.ts launcher/commands/command-modules.test.ts launcher/smoke.e2e.test.ts && bun run test:plugin:src",
"test:core:src": "bun test src/cli/args.test.ts src/cli/help.test.ts src/shared/setup-state.test.ts src/core/services/cli-command.test.ts src/core/services/field-grouping-overlay.test.ts src/core/services/numeric-shortcut-session.test.ts src/core/services/secondary-subtitle.test.ts src/core/services/mpv-render-metrics.test.ts src/core/services/overlay-content-measurement.test.ts src/core/services/mpv-control.test.ts src/core/services/mpv.test.ts src/core/services/runtime-options-ipc.test.ts src/core/services/runtime-config.test.ts src/core/services/yomitan-extension-paths.test.ts src/core/services/config-hot-reload.test.ts src/core/services/discord-presence.test.ts src/core/services/tokenizer.test.ts src/core/services/tokenizer/annotation-stage.test.ts src/core/services/tokenizer/parser-selection-stage.test.ts src/core/services/tokenizer/parser-enrichment-stage.test.ts src/core/services/subsync.test.ts src/core/services/overlay-bridge.test.ts src/core/services/overlay-shortcut-handler.test.ts src/core/services/stats-window.test.ts src/core/services/mining.test.ts src/core/services/anki-jimaku.test.ts src/core/services/jimaku-download-path.test.ts src/core/services/jellyfin.test.ts src/core/services/jellyfin-remote.test.ts src/core/services/immersion-tracker-service.test.ts src/core/services/overlay-runtime-init.test.ts src/core/services/app-ready.test.ts src/core/services/startup-bootstrap.test.ts src/core/services/subtitle-processing-controller.test.ts src/core/services/anilist/anilist-update-queue.test.ts src/core/services/anilist/rate-limiter.test.ts src/core/services/jlpt-token-filter.test.ts src/core/services/subtitle-position.test.ts src/core/utils/shortcut-config.test.ts src/main/runtime/first-run-setup-plugin.test.ts src/main/runtime/first-run-setup-service.test.ts src/main/runtime/first-run-setup-window.test.ts src/main/runtime/tray-runtime.test.ts src/main/runtime/tray-main-actions.test.ts src/main/runtime/tray-main-deps.test.ts 
src/main/runtime/tray-runtime-handlers.test.ts src/main/runtime/cli-command-context-main-deps.test.ts src/main/runtime/app-ready-main-deps.test.ts src/renderer/error-recovery.test.ts src/renderer/subtitle-render.test.ts src/renderer/handlers/mouse.test.ts src/renderer/handlers/keyboard.test.ts src/renderer/modals/jimaku.test.ts src/subsync/utils.test.ts src/main/anilist-url-guard.test.ts src/window-trackers/hyprland-tracker.test.ts src/window-trackers/x11-tracker.test.ts src/window-trackers/windows-helper.test.ts src/window-trackers/windows-tracker.test.ts launcher/config.test.ts launcher/config-domain-parsers.test.ts launcher/config/cli-parser-builder.test.ts launcher/config/args-normalizer.test.ts launcher/parse-args.test.ts launcher/main.test.ts launcher/commands/command-modules.test.ts launcher/setup-gate.test.ts stats/src/lib/api-client.test.ts",
"test:core:dist": "bun test dist/cli/args.test.js dist/cli/help.test.js dist/core/services/cli-command.test.js dist/core/services/ipc.test.js dist/core/services/anki-jimaku-ipc.test.js dist/core/services/field-grouping-overlay.test.js dist/core/services/numeric-shortcut-session.test.js dist/core/services/secondary-subtitle.test.js dist/core/services/mpv-render-metrics.test.js dist/core/services/overlay-content-measurement.test.js dist/core/services/mpv-control.test.js dist/core/services/mpv.test.js dist/core/services/runtime-options-ipc.test.js dist/core/services/runtime-config.test.js dist/core/services/yomitan-extension-paths.test.js dist/core/services/config-hot-reload.test.js dist/core/services/discord-presence.test.js dist/core/services/tokenizer.test.js dist/core/services/tokenizer/annotation-stage.test.js dist/core/services/tokenizer/parser-selection-stage.test.js dist/core/services/tokenizer/parser-enrichment-stage.test.js dist/core/services/subsync.test.js dist/core/services/overlay-bridge.test.js dist/core/services/overlay-manager.test.js dist/core/services/overlay-shortcut-handler.test.js dist/core/services/mining.test.js dist/core/services/anki-jimaku.test.js dist/core/services/jimaku-download-path.test.js dist/core/services/jellyfin.test.js dist/core/services/jellyfin-remote.test.js dist/core/services/immersion-tracker-service.test.js dist/core/services/overlay-runtime-init.test.js dist/core/services/app-ready.test.js dist/core/services/startup-bootstrap.test.js dist/core/services/subtitle-processing-controller.test.js dist/core/services/anilist/anilist-token-store.test.js dist/core/services/anilist/anilist-update-queue.test.js dist/core/services/anilist/rate-limiter.test.js dist/core/services/jlpt-token-filter.test.js dist/core/services/subtitle-position.test.js dist/renderer/error-recovery.test.js dist/renderer/subtitle-render.test.js dist/renderer/handlers/mouse.test.js dist/renderer/handlers/keyboard.test.js dist/renderer/modals/jimaku.test.js 
dist/subsync/utils.test.js dist/main/anilist-url-guard.test.js dist/window-trackers/hyprland-tracker.test.js dist/window-trackers/x11-tracker.test.js dist/window-trackers/windows-helper.test.js dist/window-trackers/windows-tracker.test.js",
"test:core:smoke:dist": "bun test dist/cli/help.test.js dist/core/services/runtime-config.test.js dist/core/services/ipc.test.js dist/core/services/overlay-manager.test.js dist/core/services/anilist/anilist-token-store.test.js dist/core/services/startup-bootstrap.test.js dist/renderer/error-recovery.test.js dist/main/anilist-url-guard.test.js dist/window-trackers/x11-tracker.test.js",
"test:smoke:dist": "bun run test:config:smoke:dist && bun run test:core:smoke:dist",
"test:subtitle:src": "bun test src/core/services/subsync.test.ts src/subsync/utils.test.ts",
@@ -63,7 +63,7 @@
"test:launcher": "bun run test:launcher:src",
"test:core": "bun run test:core:src",
"test:subtitle": "bun run test:subtitle:src",
"test:fast": "bun run test:config:src && bun run test:core:src && bun run test:docs:kb && bun test src/main-entry-runtime.test.ts src/anki-integration/anki-connect-proxy.test.ts src/release-workflow.test.ts src/ci-workflow.test.ts scripts/build-changelog.test.ts scripts/mkv-to-readme-video.test.ts scripts/update-aur-package.test.ts && bun run tsc && bun test dist/main/runtime/registry.test.js",
"test:fast": "bun run test:config:src && bun run test:core:src && bun run test:docs:kb && bun test src/main-entry-runtime.test.ts src/anki-integration.test.ts src/anki-integration/anki-connect-proxy.test.ts src/anki-integration/field-grouping-workflow.test.ts src/anki-integration/field-grouping.test.ts src/anki-integration/field-grouping-merge.test.ts src/release-workflow.test.ts src/ci-workflow.test.ts scripts/build-changelog.test.ts scripts/mkv-to-readme-video.test.ts scripts/update-aur-package.test.ts && bun test src/core/services/immersion-tracker/__tests__/query.test.ts src/core/services/immersion-tracker/__tests__/query-split-modules.test.ts && bun run tsc && bun test dist/main/runtime/registry.test.js",
"generate:config-example": "bun run src/generate-config-example.ts",
"verify:config-example": "bun run src/verify-config-example.ts",
"start": "bun run build && electron . --start",
@@ -98,7 +98,6 @@
"dependencies": {
"@fontsource-variable/geist": "^5.2.8",
"@fontsource-variable/geist-mono": "^5.2.7",
"@hono/node-server": "^1.19.11",
"axios": "^1.13.5",
"commander": "^14.0.3",
"discord-rpc": "^4.0.1",

View File

@@ -0,0 +1,30 @@
{
"name": "subminer-workflow",
"version": "0.1.0",
"description": "Repo-local SubMiner agent workflow plugin for backlog-first orchestration and change verification.",
"author": {
"name": "Kyle Yasuda",
"email": "suda@sudacode.com",
"url": "https://github.com/sudacode"
},
"homepage": "https://github.com/sudacode/SubMiner/tree/main/plugins/subminer-workflow",
"repository": "https://github.com/sudacode/SubMiner",
"license": "GPL-3.0-or-later",
"keywords": ["subminer", "workflow", "backlog", "verification", "skills"],
"skills": "./skills/",
"interface": {
"displayName": "SubMiner Workflow",
"shortDescription": "Backlog-first SubMiner orchestration and verification.",
"longDescription": "Canonical repo-local plugin for SubMiner agent workflow packaging. Owns the scrum-master and change-verification skills plus helper scripts used to plan, verify, and validate changes reproducibly inside this repo.",
"developerName": "Kyle Yasuda",
"category": "Productivity",
"capabilities": ["Interactive", "Write"],
"websiteURL": "https://github.com/sudacode/SubMiner",
"defaultPrompt": [
"Use SubMiner workflow to plan and ship a feature.",
"Verify a SubMiner change with the plugin-owned verifier.",
"Run backlog-first intake for this SubMiner task."
],
"brandColor": "#2F6B4F"
}
}

View File

@@ -0,0 +1,49 @@
<!-- read_when: migrating or using the repo-local SubMiner workflow plugin -->
# SubMiner Workflow Plugin
Status: active
Last verified: 2026-03-26
Owner: Kyle Yasuda
Read when: using or updating the repo-local plugin that owns SubMiner agent workflow skills
This plugin is the canonical source of truth for the SubMiner agent workflow packaging.
## Contents
- `skills/subminer-scrum-master/`
- backlog-first intake, planning, dispatch, and handoff workflow
- `skills/subminer-change-verification/`
- cheap-first verification workflow plus helper scripts
## Backlog MCP
- This plugin assumes Backlog.md MCP is available in the host environment when the client exposes it.
- Canonical backlog behavior remains:
- read `backlog://workflow/overview` when resources are available
- otherwise use the matching backlog tool overview
- If backlog MCP is unavailable in the current session, fall back to direct repo-local `backlog/` edits and record that blocker in the task or handoff.
## Compatibility
- `.agents/skills/subminer-scrum-master/` is a compatibility shim that redirects to the plugin-owned skill.
- `.agents/skills/subminer-change-verification/` is a compatibility shim.
- `.agents/skills/subminer-change-verification/scripts/*.sh` remain as wrapper entrypoints so existing docs, backlog tasks, and shell history keep working.
## Verification
For plugin/doc/shim changes, prefer:
```bash
bun run test:docs:kb
bash plugins/subminer-workflow/skills/subminer-change-verification/scripts/verify_subminer_change.sh --lane docs --lane core \
  plugins/subminer-workflow \
  .agents/skills/subminer-scrum-master/SKILL.md \
  .agents/skills/subminer-change-verification/SKILL.md \
  .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh \
  .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh \
  .agents/plugins/marketplace.json \
  docs/workflow/README.md \
  docs/workflow/agent-plugins.md \
  backlog/tasks/task-240\ -\ Migrate-SubMiner-agent-skills-into-a-repo-local-plugin-workflow.md
```

View File

@@ -0,0 +1,141 @@
---
name: 'subminer-change-verification'
description: 'Use when working in the SubMiner repo and you need to verify code changes actually work. Covers targeted regression checks during debugging and pre-handoff verification, with cheap-first lane selection for config, docs, launcher/plugin, runtime-compat, and optional real-runtime escalation.'
---
# SubMiner Change Verification
Canonical source: this plugin path.
Use this skill for SubMiner code changes. Default to cheap, repo-native verification first. Escalate only when the changed behavior actually depends on Electron, mpv, overlay/window tracking, or other GUI-sensitive runtime behavior.
## Scripts
- `scripts/classify_subminer_diff.sh`
- Emits suggested lanes and flags from explicit paths or current git changes.
- `scripts/verify_subminer_change.sh`
- Runs selected lanes, captures artifacts, and writes a compact summary.
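Following the classifier's path rules, classifying `launcher/main.ts` emits lanes first, then flags, then reasons:

```
lane:launcher-plugin
flag:real-runtime-candidate
reason:launcher/main.ts -> launcher-plugin
reason:launcher/main.ts -> real-runtime-candidate
```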
If you need an explicit installed path, use the directory that contains this `SKILL.md`; the helper scripts live in its `scripts/` subdirectory. Export that directory once for reuse:
```bash
export SUBMINER_VERIFY_SKILL="<path-to-plugin-skill>"
```
## Default workflow
1. Inspect the changed files or user-requested area.
2. Run the classifier unless you already know the right lane.
3. Run the verifier with the cheapest sufficient lane set.
4. If the classifier emits `flag:real-runtime-candidate`, do not jump straight to runtime verification. First run the non-runtime lanes.
5. Escalate to explicit `--lane real-runtime --allow-real-runtime` only when cheaper lanes cannot validate the behavior claim.
6. Return:
- verification summary
- exact commands run
- artifact paths
- skipped lanes and blockers
## Quick start
Plugin-source quick start:
```bash
bash plugins/subminer-workflow/skills/subminer-change-verification/scripts/classify_subminer_diff.sh
```
Installed-skill quick start:
```bash
bash "$SUBMINER_VERIFY_SKILL/scripts/classify_subminer_diff.sh"
```
Compatibility entrypoint:
```bash
bash .agents/skills/subminer-change-verification/scripts/classify_subminer_diff.sh
```
Classify explicit files:
```bash
bash plugins/subminer-workflow/skills/subminer-change-verification/scripts/classify_subminer_diff.sh \
launcher/main.ts \
plugin/subminer/lifecycle.lua \
src/main/runtime/mpv-client-runtime-service.ts
```
Run automatic lane selection:
```bash
bash plugins/subminer-workflow/skills/subminer-change-verification/scripts/verify_subminer_change.sh
```
Installed-skill form:
```bash
bash "$SUBMINER_VERIFY_SKILL/scripts/verify_subminer_change.sh"
```
Compatibility entrypoint:
```bash
bash .agents/skills/subminer-change-verification/scripts/verify_subminer_change.sh
```
Run targeted lanes:
```bash
bash plugins/subminer-workflow/skills/subminer-change-verification/scripts/verify_subminer_change.sh \
--lane launcher-plugin \
--lane runtime-compat
```
Dry-run to inspect planned commands and artifact layout:
```bash
bash plugins/subminer-workflow/skills/subminer-change-verification/scripts/verify_subminer_change.sh \
--dry-run \
launcher/main.ts \
src/main.ts
```
## Lane guidance
- `docs`
- For `docs-site/`, `docs/`, and doc-only edits.
- `config`
- For `src/config/` and config-template-sensitive edits.
- `core`
- For general source changes where `typecheck` + `test:fast` is the best cheap signal.
- `launcher-plugin`
- For `launcher/`, `plugin/subminer/`, plugin gating scripts, and wrapper/mpv routing work.
- `runtime-compat`
- For `src/main*`, runtime/composer wiring, mpv/overlay services, window trackers, and dist-sensitive behavior.
- `real-runtime`
- Only after deliberate escalation.
## Real Runtime Escalation
Escalate only when the change claim depends on actual runtime behavior, for example:
- overlay appears, hides, or tracks a real mpv window
- mpv launch flags or pause-until-ready behavior
- plugin/socket/auto-start handshake under a real player
- macOS/window-tracker/focus-sensitive behavior
If the environment cannot support authoritative runtime verification, report the blocker explicitly. Do not silently downgrade a runtime-required claim to a pass.
## Artifact contract
The verifier writes under `.tmp/skill-verification/<timestamp>/`:
- `summary.json`
- `summary.txt`
- `classification.txt`
- `env.txt`
- `lanes.txt`
- `steps.tsv`
- `steps/*.stdout.log`
- `steps/*.stderr.log`
On failure, quote the exact failing command and point at the artifact directory.
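Per the verifier's `append_step_record`, each `steps.tsv` row carries eight tab-separated fields: lane, step name, status, exit code, command, stdout path, stderr path, and note. A hypothetical passing row (the `typecheck` step name and `bun run tsc` command are illustrative):

```
core	typecheck	passed	0	bun run tsc	steps/typecheck.stdout.log	steps/typecheck.stderr.log	
```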

View File

@@ -0,0 +1,163 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'EOF'
Usage: classify_subminer_diff.sh [path ...]
Emit suggested verification lanes for explicit paths or current local git changes.
Output format:
lane:<name>
flag:<name>
reason:<text>
EOF
}
has_item() {
local needle=$1
shift || true
local item
for item in "$@"; do
if [[ "$item" == "$needle" ]]; then
return 0
fi
done
return 1
}
add_lane() {
local lane=$1
if ! has_item "$lane" "${LANES[@]:-}"; then
LANES+=("$lane")
fi
}
add_flag() {
local flag=$1
if ! has_item "$flag" "${FLAGS[@]:-}"; then
FLAGS+=("$flag")
fi
}
add_reason() {
REASONS+=("$1")
}
collect_git_paths() {
local top_level
if ! top_level=$(git rev-parse --show-toplevel 2>/dev/null); then
return 0
fi
(
cd "$top_level"
if git rev-parse --verify HEAD >/dev/null 2>&1; then
git diff --name-only --relative HEAD --
git diff --name-only --relative --cached --
else
git diff --name-only --relative --
git diff --name-only --relative --cached --
fi
git ls-files --others --exclude-standard
) | awk 'NF' | sort -u
}
if [[ "${1:-}" == "--help" || "${1:-}" == "-h" ]]; then
usage
exit 0
fi
declare -a PATHS=()
declare -a LANES=()
declare -a FLAGS=()
declare -a REASONS=()
if [[ $# -gt 0 ]]; then
while [[ $# -gt 0 ]]; do
PATHS+=("$1")
shift
done
else
while IFS= read -r line; do
[[ -n "$line" ]] && PATHS+=("$line")
done < <(collect_git_paths)
fi
if [[ ${#PATHS[@]} -eq 0 ]]; then
add_lane "core"
add_reason "no changed paths detected -> default to core"
fi
for path in "${PATHS[@]}"; do
specialized=0
case "$path" in
docs-site/*|docs/*|changes/*|README.md)
add_lane "docs"
add_reason "$path -> docs"
specialized=1
;;
esac
case "$path" in
src/config/*|src/generate-config-example.ts|src/verify-config-example.ts|docs-site/public/config.example.jsonc|config.example.jsonc)
add_lane "config"
add_reason "$path -> config"
specialized=1
;;
esac
case "$path" in
launcher/*|plugin/subminer/*|plugin/subminer.conf|scripts/test-plugin-*|scripts/get-mpv-window-*|scripts/configure-plugin-binary-path.mjs)
add_lane "launcher-plugin"
add_reason "$path -> launcher-plugin"
add_flag "real-runtime-candidate"
add_reason "$path -> real-runtime-candidate"
specialized=1
;;
esac
case "$path" in
src/main.ts|src/main-entry.ts|src/preload.ts|src/main/*|src/core/services/mpv*|src/core/services/overlay*|src/renderer/*|src/window-trackers/*|scripts/prepare-build-assets.mjs)
add_lane "runtime-compat"
add_reason "$path -> runtime-compat"
add_flag "real-runtime-candidate"
add_reason "$path -> real-runtime-candidate"
specialized=1
;;
esac
if [[ "$specialized" == "0" ]]; then
case "$path" in
src/*|package.json|tsconfig*.json|scripts/*|Makefile)
add_lane "core"
add_reason "$path -> core"
;;
esac
fi
case "$path" in
package.json|src/main.ts|src/main-entry.ts|src/preload.ts)
add_flag "broad-impact"
add_reason "$path -> broad-impact"
;;
esac
done
if [[ ${#LANES[@]} -eq 0 ]]; then
add_lane "core"
add_reason "no lane-specific matches -> default to core"
fi
for lane in "${LANES[@]}"; do
printf 'lane:%s\n' "$lane"
done
for flag in "${FLAGS[@]}"; do
printf 'flag:%s\n' "$flag"
done
for reason in "${REASONS[@]}"; do
printf 'reason:%s\n' "$reason"
done

View File

@@ -0,0 +1,511 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'EOF'
Usage: verify_subminer_change.sh [options] [path ...]
Options:
--lane <name> Force a verification lane. Repeatable.
--artifact-dir <dir> Use an explicit artifact directory.
--allow-real-runtime Allow explicit real-runtime execution.
--allow-real-gui Deprecated alias for --allow-real-runtime.
--dry-run Record planned steps without executing commands.
--help Show this help text.
If no lanes are supplied, the script classifies the provided paths. If no paths are
provided, it classifies the current local git changes.
Authoritative real-runtime verification should be requested with explicit path
arguments instead of relying on inferred local git changes.
EOF
}
timestamp() {
date +%Y%m%d-%H%M%S
}
timestamp_iso() {
date -u +%Y-%m-%dT%H:%M:%SZ
}
generate_session_id() {
local tmp_dir
tmp_dir=$(mktemp -d "${TMPDIR:-/tmp}/subminer-verify-$(timestamp)-XXXXXX")
basename "$tmp_dir"
rmdir "$tmp_dir"
}
has_item() {
local needle=$1
shift || true
local item
for item in "$@"; do
if [[ "$item" == "$needle" ]]; then
return 0
fi
done
return 1
}
normalize_lane_name() {
case "$1" in
real-gui)
printf '%s' "real-runtime"
;;
*)
printf '%s' "$1"
;;
esac
}
add_lane() {
local lane
lane=$(normalize_lane_name "$1")
if ! has_item "$lane" "${SELECTED_LANES[@]:-}"; then
SELECTED_LANES+=("$lane")
fi
}
add_blocker() {
BLOCKERS+=("$1")
BLOCKED=1
}
validate_artifact_dir() {
local candidate=$1
if [[ ! "$candidate" =~ ^[A-Za-z0-9._/@:+-]+$ ]]; then
echo "Invalid characters in --artifact-dir path" >&2
exit 2
fi
}
append_step_record() {
printf '%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n' \
"$1" "$2" "$3" "$4" "$5" "$6" "$7" "$8" >>"$STEPS_TSV"
}
record_env() {
{
printf 'repo_root=%s\n' "$REPO_ROOT"
printf 'session_id=%s\n' "$SESSION_ID"
printf 'artifact_dir=%s\n' "$ARTIFACT_DIR"
printf 'path_selection_mode=%s\n' "$PATH_SELECTION_MODE"
printf 'dry_run=%s\n' "$DRY_RUN"
printf 'allow_real_runtime=%s\n' "$ALLOW_REAL_RUNTIME"
printf 'session_home=%s\n' "$SESSION_HOME"
printf 'session_xdg_config_home=%s\n' "$SESSION_XDG_CONFIG_HOME"
printf 'session_mpv_dir=%s\n' "$SESSION_MPV_DIR"
printf 'session_logs_dir=%s\n' "$SESSION_LOGS_DIR"
printf 'session_mpv_log=%s\n' "$SESSION_MPV_LOG"
printf 'pwd=%s\n' "$(pwd)"
git rev-parse --short HEAD 2>/dev/null | sed 's/^/git_head=/' || true
git status --short 2>/dev/null || true
if [[ ${#PATH_ARGS[@]} -gt 0 ]]; then
printf 'requested_paths=\n'
printf ' %s\n' "${PATH_ARGS[@]}"
fi
} >"$ARTIFACT_DIR/env.txt"
}
run_step() {
local lane=$1
local name=$2
local command=$3
local note=${4:-}
local slug=${name//[^a-zA-Z0-9_-]/-}
local stdout_rel="steps/${slug}.stdout.log"
local stderr_rel="steps/${slug}.stderr.log"
local stdout_path="$ARTIFACT_DIR/$stdout_rel"
local stderr_path="$ARTIFACT_DIR/$stderr_rel"
local status exit_code
COMMANDS_RUN+=("$command")
printf '%s\n' "$command" >"$ARTIFACT_DIR/steps/${slug}.command.txt"
if [[ "$DRY_RUN" == "1" ]]; then
printf '[dry-run] %s\n' "$command" >"$stdout_path"
: >"$stderr_path"
status="dry-run"
exit_code=0
else
if bash -lc "cd \"$REPO_ROOT\" && $command" >"$stdout_path" 2>"$stderr_path"; then
status="passed"
exit_code=0
EXECUTED_REAL_STEPS=1
else
exit_code=$?
status="failed"
FAILED=1
fi
fi
append_step_record "$lane" "$name" "$status" "$exit_code" "$command" "$stdout_rel" "$stderr_rel" "$note"
printf '%s\t%s\t%s\n' "$lane" "$name" "$status"
if [[ "$status" == "failed" ]]; then
FAILURE_STEP="$name"
FAILURE_COMMAND="$command"
FAILURE_STDOUT="$stdout_rel"
FAILURE_STDERR="$stderr_rel"
return "$exit_code"
fi
}
record_nonpassing_step() {
local lane=$1
local name=$2
local status=$3
local note=$4
local slug=${name//[^a-zA-Z0-9_-]/-}
local stdout_rel="steps/${slug}.stdout.log"
local stderr_rel="steps/${slug}.stderr.log"
printf '%s\n' "$note" >"$ARTIFACT_DIR/$stdout_rel"
: >"$ARTIFACT_DIR/$stderr_rel"
append_step_record "$lane" "$name" "$status" "0" "" "$stdout_rel" "$stderr_rel" "$note"
printf '%s\t%s\t%s\n' "$lane" "$name" "$status"
}
record_skipped_step() {
record_nonpassing_step "$1" "$2" "skipped" "$3"
}
record_blocked_step() {
add_blocker "$3"
record_nonpassing_step "$1" "$2" "blocked" "$3"
}
record_failed_step() {
FAILED=1
FAILURE_STEP=$2
FAILURE_COMMAND=${FAILURE_COMMAND:-"(validation)"}
FAILURE_STDOUT="steps/${2//[^a-zA-Z0-9_-]/-}.stdout.log"
FAILURE_STDERR="steps/${2//[^a-zA-Z0-9_-]/-}.stderr.log"
add_blocker "$3"
record_nonpassing_step "$1" "$2" "failed" "$3"
}
find_real_runtime_helper() {
local candidate
for candidate in \
"$SCRIPT_DIR/run_real_runtime_smoke.sh" \
"$SCRIPT_DIR/run_real_mpv_smoke.sh"; do
if [[ -x "$candidate" ]]; then
printf '%s' "$candidate"
return 0
fi
done
return 1
}
acquire_real_runtime_lease() {
local lease_root="$REPO_ROOT/.tmp/skill-verification/locks"
local lease_dir="$lease_root/exclusive-real-runtime"
mkdir -p "$lease_root"
if mkdir "$lease_dir" 2>/dev/null; then
REAL_RUNTIME_LEASE_DIR="$lease_dir"
printf '%s\n' "$SESSION_ID" >"$lease_dir/session_id"
return 0
fi
local owner=""
if [[ -f "$lease_dir/session_id" ]]; then
owner=$(cat "$lease_dir/session_id")
fi
add_blocker "real-runtime lease already held${owner:+ by $owner}"
return 1
}
release_real_runtime_lease() {
if [[ -n "$REAL_RUNTIME_LEASE_DIR" && -d "$REAL_RUNTIME_LEASE_DIR" ]]; then
if [[ -f "$REAL_RUNTIME_LEASE_DIR/session_id" ]]; then
local owner
owner=$(cat "$REAL_RUNTIME_LEASE_DIR/session_id")
if [[ "$owner" != "$SESSION_ID" ]]; then
return 0
fi
fi
rm -rf "$REAL_RUNTIME_LEASE_DIR"
fi
}
compute_final_status() {
if [[ "$FAILED" == "1" ]]; then
FINAL_STATUS="failed"
elif [[ "$BLOCKED" == "1" ]]; then
FINAL_STATUS="blocked"
elif [[ "$EXECUTED_REAL_STEPS" == "1" ]]; then
FINAL_STATUS="passed"
else
FINAL_STATUS="skipped"
fi
}
write_summary_files() {
printf '%s\n' "${SELECTED_LANES[@]}" >"$ARTIFACT_DIR/lanes.txt"
printf '%s\n' ${BLOCKERS[@]+"${BLOCKERS[@]}"} >"$ARTIFACT_DIR/blockers.txt"
printf '%s\n' ${PATH_ARGS[@]+"${PATH_ARGS[@]}"} >"$ARTIFACT_DIR/requested-paths.txt"
ARTIFACT_DIR_ENV="$ARTIFACT_DIR" \
SESSION_ID_ENV="$SESSION_ID" \
FINAL_STATUS_ENV="$FINAL_STATUS" \
PATH_SELECTION_MODE_ENV="$PATH_SELECTION_MODE" \
ALLOW_REAL_RUNTIME_ENV="$ALLOW_REAL_RUNTIME" \
SESSION_HOME_ENV="$SESSION_HOME" \
SESSION_XDG_CONFIG_HOME_ENV="$SESSION_XDG_CONFIG_HOME" \
SESSION_MPV_DIR_ENV="$SESSION_MPV_DIR" \
SESSION_LOGS_DIR_ENV="$SESSION_LOGS_DIR" \
SESSION_MPV_LOG_ENV="$SESSION_MPV_LOG" \
STARTED_AT_ENV="$STARTED_AT" \
FINISHED_AT_ENV="$FINISHED_AT" \
FAILED_ENV="$FAILED" \
FAILURE_COMMAND_ENV="${FAILURE_COMMAND:-}" \
FAILURE_STDOUT_ENV="${FAILURE_STDOUT:-}" \
FAILURE_STDERR_ENV="${FAILURE_STDERR:-}" \
bun -e '
const fs = require("fs");
const path = require("path");
const lines = fs
.readFileSync(path.join(process.env.ARTIFACT_DIR_ENV, "steps.tsv"), "utf8")
.trim()
.split("\n")
.filter(Boolean)
.slice(1)
.map((line) => {
const [lane, name, status, exitCode, command, stdout, stderr, note] = line.split("\t");
return { lane, name, status, exitCode: Number(exitCode), command, stdout, stderr, note };
});
const payload = {
sessionId: process.env.SESSION_ID_ENV,
startedAt: process.env.STARTED_AT_ENV,
finishedAt: process.env.FINISHED_AT_ENV,
status: process.env.FINAL_STATUS_ENV,
pathSelectionMode: process.env.PATH_SELECTION_MODE_ENV,
allowRealRuntime: process.env.ALLOW_REAL_RUNTIME_ENV === "1",
sessionHome: process.env.SESSION_HOME_ENV,
sessionXdgConfigHome: process.env.SESSION_XDG_CONFIG_HOME_ENV,
sessionMpvDir: process.env.SESSION_MPV_DIR_ENV,
sessionLogsDir: process.env.SESSION_LOGS_DIR_ENV,
sessionMpvLog: process.env.SESSION_MPV_LOG_ENV,
failed: process.env.FAILED_ENV === "1",
failure: process.env.FAILURE_COMMAND_ENV
? {
command: process.env.FAILURE_COMMAND_ENV,
stdout: process.env.FAILURE_STDOUT_ENV,
stderr: process.env.FAILURE_STDERR_ENV,
}
: null,
blockers: fs
.readFileSync(path.join(process.env.ARTIFACT_DIR_ENV, "blockers.txt"), "utf8")
.split("\n")
.filter(Boolean),
lanes: fs
.readFileSync(path.join(process.env.ARTIFACT_DIR_ENV, "lanes.txt"), "utf8")
.split("\n")
.filter(Boolean),
requestedPaths: fs
.readFileSync(path.join(process.env.ARTIFACT_DIR_ENV, "requested-paths.txt"), "utf8")
.split("\n")
.filter(Boolean),
steps: lines,
};
fs.writeFileSync(
path.join(process.env.ARTIFACT_DIR_ENV, "summary.json"),
JSON.stringify(payload, null, 2) + "\n",
);
const summaryLines = [
`status: ${payload.status}`,
`session: ${payload.sessionId}`,
`artifacts: ${process.env.ARTIFACT_DIR_ENV}`,
`lanes: ${payload.lanes.join(", ") || "(none)"}`,
];
if (payload.requestedPaths.length > 0) {
summaryLines.push("requested paths:");
for (const entry of payload.requestedPaths) {
summaryLines.push(`- ${entry}`);
}
}
if (payload.failure) {
summaryLines.push(`failure command: ${payload.failure.command}`);
summaryLines.push(`failure stdout: ${payload.failure.stdout}`);
summaryLines.push(`failure stderr: ${payload.failure.stderr}`);
}
if (payload.blockers.length > 0) {
summaryLines.push("blockers:");
for (const blocker of payload.blockers) {
summaryLines.push(`- ${blocker}`);
}
}
summaryLines.push("steps:");
for (const step of payload.steps) {
summaryLines.push(`- ${step.lane}/${step.name}: ${step.status}`);
}
fs.writeFileSync(
path.join(process.env.ARTIFACT_DIR_ENV, "summary.txt"),
summaryLines.join("\n") + "\n",
);
'
}
SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
SKILL_DIR=$(cd "$SCRIPT_DIR/.." && pwd)
REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
declare -a PATH_ARGS=()
declare -a SELECTED_LANES=()
declare -a COMMANDS_RUN=()
declare -a BLOCKERS=()
ALLOW_REAL_RUNTIME=0
DRY_RUN=0
FAILED=0
BLOCKED=0
EXECUTED_REAL_STEPS=0
FAILURE_STEP=""
FAILURE_COMMAND=""
FAILURE_STDOUT=""
FAILURE_STDERR=""
REAL_RUNTIME_LEASE_DIR=""
PATH_SELECTION_MODE="auto"
while [[ $# -gt 0 ]]; do
case "$1" in
--lane)
shift
[[ $# -gt 0 ]] || {
echo "Missing value for --lane" >&2
exit 2
}
add_lane "$1"
PATH_SELECTION_MODE="explicit-lanes"
;;
--artifact-dir)
shift
[[ $# -gt 0 ]] || {
echo "Missing value for --artifact-dir" >&2
exit 2
}
ARTIFACT_DIR=$1
;;
--allow-real-runtime|--allow-real-gui)
ALLOW_REAL_RUNTIME=1
;;
--dry-run)
DRY_RUN=1
;;
--help|-h)
usage
exit 0
;;
*)
PATH_ARGS+=("$1")
;;
esac
shift || true
done
if [[ -z "${ARTIFACT_DIR:-}" ]]; then
SESSION_ID=$(generate_session_id)
ARTIFACT_DIR="$REPO_ROOT/.tmp/skill-verification/$SESSION_ID"
else
validate_artifact_dir "$ARTIFACT_DIR"
SESSION_ID=$(basename "$ARTIFACT_DIR")
fi
mkdir -p "$ARTIFACT_DIR/steps"
STEPS_TSV="$ARTIFACT_DIR/steps.tsv"
printf 'lane\tstep\tstatus\texit_code\tcommand\tstdout\tstderr\tnote\n' >"$STEPS_TSV"
STARTED_AT=$(timestamp_iso)
SESSION_HOME="$REPO_ROOT/.tmp/skill-verification/runtime/$SESSION_ID/home"
SESSION_XDG_CONFIG_HOME="$REPO_ROOT/.tmp/skill-verification/runtime/$SESSION_ID/xdg-config"
SESSION_MPV_DIR="$SESSION_XDG_CONFIG_HOME/mpv"
SESSION_LOGS_DIR="$REPO_ROOT/.tmp/skill-verification/runtime/$SESSION_ID/logs"
SESSION_MPV_LOG="$SESSION_LOGS_DIR/mpv.log"
mkdir -p "$SESSION_HOME" "$SESSION_MPV_DIR" "$SESSION_LOGS_DIR"
CLASSIFIER_OUTPUT="$ARTIFACT_DIR/classification.txt"
if [[ ${#SELECTED_LANES[@]} -eq 0 ]]; then
if [[ ${#PATH_ARGS[@]} -gt 0 ]]; then
PATH_SELECTION_MODE="explicit-paths"
fi
if "$SCRIPT_DIR/classify_subminer_diff.sh" ${PATH_ARGS[@]+"${PATH_ARGS[@]}"} >"$CLASSIFIER_OUTPUT"; then
while IFS= read -r line; do
case "$line" in
lane:*)
add_lane "${line#lane:}"
;;
esac
done <"$CLASSIFIER_OUTPUT"
else
record_failed_step "meta" "classify" "classification failed"
fi
else
: >"$CLASSIFIER_OUTPUT"
fi
record_env
if [[ ${#SELECTED_LANES[@]} -eq 0 ]]; then
add_lane "core"
fi
for lane in "${SELECTED_LANES[@]}"; do
case "$lane" in
docs)
run_step "$lane" "docs-kb" "bun run test:docs:kb" || break
;;
config)
run_step "$lane" "config" "bun run test:config" || break
;;
core)
run_step "$lane" "typecheck" "bun run typecheck" || break
run_step "$lane" "fast-tests" "bun run test:fast" || break
;;
launcher-plugin)
run_step "$lane" "launcher" "bun run test:launcher" || break
run_step "$lane" "plugin-src" "bun run test:plugin:src" || break
;;
runtime-compat)
run_step "$lane" "runtime-compat" "bun run test:runtime:compat" || break
;;
real-runtime)
if [[ "$ALLOW_REAL_RUNTIME" != "1" ]]; then
record_blocked_step "$lane" "real-runtime" "real-runtime requested without --allow-real-runtime"
continue
fi
if ! acquire_real_runtime_lease; then
record_blocked_step "$lane" "real-runtime-lease" "${BLOCKERS[-1]}"
continue
fi
helper=$(find_real_runtime_helper || true)
if [[ -z "${helper:-}" ]]; then
record_blocked_step "$lane" "real-runtime-helper" "no real-runtime helper script available in $SCRIPT_DIR"
continue
fi
run_step "$lane" "real-runtime" "\"$helper\" \"$SESSION_ID\" \"$ARTIFACT_DIR\"" || break
;;
*)
record_blocked_step "$lane" "unknown-lane" "unknown lane: $lane"
;;
esac
done
release_real_runtime_lease
FINISHED_AT=$(timestamp_iso)
compute_final_status
write_summary_files
printf 'summary:%s\n' "$ARTIFACT_DIR/summary.txt"
cat "$ARTIFACT_DIR/summary.txt"
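# Example invocations (illustrative; flags and behavior as defined in usage() above):
#   verify_subminer_change.sh --dry-run                      # plan lanes from local git changes
#   verify_subminer_change.sh --lane config --lane docs      # force explicit lanes
#   verify_subminer_change.sh --allow-real-runtime src/mpv   # opt in to real-runtime with explicit paths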

View File

@@ -0,0 +1,162 @@
---
name: 'subminer-scrum-master'
description: 'Use in the SubMiner repo when a request should be turned into planned work and driven through execution. Assesses whether backlog tracking is warranted, creates or updates tasks when needed, records a plan, dispatches one or more subagents, and requires verification before handoff.'
---
# SubMiner Scrum Master
Canonical source: this plugin path.
Own the workflow, not the code, by default.
Use this skill when the user gives a feature request, bug report, issue, refactor, or implementation ask and the agent should manage intake, planning, backlog hygiene, worker dispatch, and verification through completion.
## Core Rules
1. Decide first whether backlog tracking is warranted.
2. If backlog is needed, search first. Update existing work when it clearly matches.
3. If backlog is not needed, keep the process light. Do not invent ticket ceremony.
4. Record a plan before dispatching coding work.
5. Use parent + subtasks for multi-part work when backlog is used.
6. Dispatch conservatively. Parallelize only disjoint write scopes.
7. Require verification before handoff, typically via `subminer-change-verification`.
8. Report backlog actions, dispatched workers, verification, blockers, and remaining risks.
## Backlog Workflow
Preferred order:
1. Read `backlog://workflow/overview` when MCP resources are available.
2. If resources are unavailable, use the corresponding backlog tool overview.
3. If backlog MCP is unavailable in the session, work directly in repo-local `backlog/` files and record that constraint explicitly.
## Backlog Decision
Skip backlog when the request is:
- question only
- obvious mechanical edit
- tiny isolated change with no real planning
Use backlog when the work:
- needs planning or scope decisions
- spans multiple phases or subsystems
- is likely to need subagent dispatch
- should remain traceable for handoff/resume
If backlog is used:
- search existing tasks first
- create/update a standalone task for one focused deliverable
- create/update a parent task plus subtasks for multi-part work
- record the implementation plan in the task before implementation begins
## Intake Workflow
1. Parse the request.
Classify it as question, mechanical edit, bugfix, feature, refactor, investigation, or follow-up.
2. Decide whether backlog is needed.
3. If backlog is needed:
- search first
- update existing task if clearly relevant
- otherwise create the right structure
- write the implementation plan before dispatch
4. If backlog is skipped:
- write a short working plan in-thread
- proceed without fake ticketing
5. Choose execution mode:
- no subagents for trivial work
- one worker for focused work
- parallel workers only for disjoint scopes
6. Run verification before handoff.
## Dispatch Rules
The scrum master orchestrates. Workers implement.
- Do not become the default implementer unless delegation is unnecessary.
- Do not parallelize overlapping files or tightly coupled runtime work.
- Give every worker explicit ownership of files/modules.
- Tell every worker that other agents may be active and that they must not revert unrelated edits.
- Require each worker to report:
- changed files
- tests run
- blockers
Use worker agents for implementation and explorer agents only for bounded codebase questions.
## Verification
Every nontrivial code task gets verification.
Preferred flow:
1. use `subminer-change-verification`
2. start with the cheapest sufficient lane
3. escalate only when needed
4. if worker verification is sufficient, accept it or run one final consolidating pass
Never hand off nontrivial work without stating what was verified and what was skipped.
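The cheap-first lane choice can be sketched as a tiny helper. The lane names mirror `verify_subminer_change.sh`; the path-to-lane patterns below are illustrative assumptions, not the verification script's actual classifier:

```shell
#!/usr/bin/env bash
# Illustrative sketch: pick the cheapest plausible lane for a changed path.
# Patterns are assumptions; the real classifier lives in classify_subminer_diff.sh.
cheapest_lane() {
  case "$1" in
    *.md|docs/*) echo "docs" ;;
    *.json|config/*) echo "config" ;;
    launcher/*|plugins/*) echo "launcher-plugin" ;;
    *) echo "core" ;;
  esac
}

cheapest_lane README.md      # docs
cheapest_lane src/index.ts   # core
```

Escalation then means re-running with a more expensive lane (e.g. `core`, then `runtime-compat`, then an explicitly allowed `real-runtime`) only when the cheap lane cannot answer the question.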
## Pre-Handoff Policy Checks
Before handoff, always ask and answer both questions explicitly:
1. Docs update required?
2. Changelog fragment required?
Rules:
- Do not assume silence implies "no."
- If the answer is yes, complete the update or report the blocker.
- Include final yes/no answers in the handoff summary even when both answers are "no."
## Failure / Scope Handling
- If a worker hits ambiguity, pause and ask the user.
- If verification fails, either:
- send the worker back with exact failure context, or
- fix it directly if it is tiny and clearly in scope
- If new scope appears, revisit backlog structure before silently expanding work.
## Representative Flows
### Trivial no-ticket work
- decide backlog is unnecessary
- keep a short plan
- implement directly or with one worker if helpful
- run targeted verification
- report outcome concisely
### Single-task implementation
- search/create/update one task
- record plan
- dispatch one worker
- integrate
- verify
- update task and report outcome
### Parent + subtasks execution
- search/create/update parent task
- create subtasks for distinct deliverables/phases
- record sequencing in the plan
- dispatch workers only where scopes are disjoint
- integrate
- run consolidated verification
- update task state and report outcome
## Output Expectations
At the end, report:
- whether backlog was used and what changed
- which workers were dispatched and what they owned
- what verification ran
- explicit answers to:
- docs update required?
- changelog fragment required?
- blockers, skips, and risks
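An illustrative handoff summary shape (all identifiers and answers are placeholders):

```
backlog: updated <task-id> with plan and final state
workers: 1 worker (owned src/anki/ and its tests)
verification: subminer-change-verification, lanes: core
docs update required? no
changelog fragment required? yes — fragment added under changes/
blockers: none
```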

View File

@@ -111,7 +111,11 @@ test('writeChangelogArtifacts skips changelog prepend when release section alrea
fs.mkdirSync(projectRoot, { recursive: true });
fs.mkdirSync(path.join(projectRoot, 'changes'), { recursive: true });
fs.writeFileSync(path.join(projectRoot, 'CHANGELOG.md'), existingChangelog, 'utf8');
fs.writeFileSync(path.join(projectRoot, 'changes', '001.md'), ['type: added', 'area: overlay', '', '- Stale release fragment.'].join('\n'), 'utf8');
fs.writeFileSync(
path.join(projectRoot, 'changes', '001.md'),
['type: added', 'area: overlay', '', '- Stale release fragment.'].join('\n'),
'utf8',
);
try {
const result = writeChangelogArtifacts({
@@ -125,7 +129,10 @@ test('writeChangelogArtifacts skips changelog prepend when release section alrea
const changelog = fs.readFileSync(path.join(projectRoot, 'CHANGELOG.md'), 'utf8');
assert.equal(changelog, existingChangelog);
const releaseNotes = fs.readFileSync(path.join(projectRoot, 'release', 'release-notes.md'), 'utf8');
const releaseNotes = fs.readFileSync(
path.join(projectRoot, 'release', 'release-notes.md'),
'utf8',
);
assert.match(releaseNotes, /## Highlights\n### Added\n- Existing release bullet\./);
} finally {
fs.rmSync(workspace, { recursive: true, force: true });

View File

@@ -354,11 +354,7 @@ export function writeChangelogArtifacts(options?: ChangelogOptions): {
log(`Removed ${fragment.path}`);
}
const releaseNotesPath = writeReleaseNotesFile(
cwd,
existingReleaseSection,
options?.deps,
);
const releaseNotesPath = writeReleaseNotesFile(cwd, existingReleaseSection, options?.deps);
log(`Generated ${releaseNotesPath}`);
return {

View File

@@ -55,19 +55,15 @@ exit 1
`,
);
const result = spawnSync(
'bash',
['scripts/patch-modernz.sh', '--target', target],
{
cwd: process.cwd(),
encoding: 'utf8',
env: {
...process.env,
HOME: path.join(root, 'home'),
PATH: `${binDir}:${process.env.PATH || ''}`,
},
const result = spawnSync('bash', ['scripts/patch-modernz.sh', '--target', target], {
cwd: process.cwd(),
encoding: 'utf8',
env: {
...process.env,
HOME: path.join(root, 'home'),
PATH: `${binDir}:${process.env.PATH || ''}`,
},
);
});
assert.equal(result.status, 1, result.stderr || result.stdout);
assert.match(result.stderr, /failed to apply patch to/);

View File

@@ -47,8 +47,8 @@ test('update-aur-package updates PKGBUILD and .SRCINFO without makepkg', () => {
const pkgbuild = fs.readFileSync(path.join(pkgDir, 'PKGBUILD'), 'utf8');
const srcinfo = fs.readFileSync(path.join(pkgDir, '.SRCINFO'), 'utf8');
const expectedSums = [appImagePath, wrapperPath, assetsPath].map((filePath) =>
execFileSync('sha256sum', [filePath], { encoding: 'utf8' }).split(/\s+/)[0],
const expectedSums = [appImagePath, wrapperPath, assetsPath].map(
(filePath) => execFileSync('sha256sum', [filePath], { encoding: 'utf8' }).split(/\s+/)[0],
);
assert.match(pkgbuild, /^pkgver=0\.6\.3$/m);

View File

@@ -1,4 +1,4 @@
import type { AnkiConnectConfig } from './types';
import type { AnkiConnectConfig } from './types/anki';
type NoteFieldValue = { value?: string } | string | null | undefined;
@@ -8,7 +8,9 @@ function normalizeFieldName(value: string | null | undefined): string | null {
return trimmed.length > 0 ? trimmed : null;
}
export function getConfiguredWordFieldName(config?: Pick<AnkiConnectConfig, 'fields'> | null): string {
export function getConfiguredWordFieldName(
config?: Pick<AnkiConnectConfig, 'fields'> | null,
): string {
return normalizeFieldName(config?.fields?.word) ?? 'Expression';
}

View File

@@ -21,15 +21,15 @@ import { SubtitleTimingTracker } from './subtitle-timing-tracker';
import { MediaGenerator } from './media-generator';
import path from 'path';
import {
AiConfig,
AnkiConnectConfig,
KikuDuplicateCardInfo,
KikuFieldGroupingChoice,
KikuMergePreviewResponse,
MpvClient,
NotificationOptions,
NPlusOneMatchMode,
} from './types';
} from './types/anki';
import { AiConfig } from './types/integrations';
import { MpvClient } from './types/runtime';
import { NPlusOneMatchMode } from './types/subtitle';
import { DEFAULT_ANKI_CONNECT_CONFIG } from './config';
import {
getConfiguredWordFieldCandidates,
@@ -212,10 +212,7 @@ export class AnkiIntegration {
try {
this.recordCardsMinedCallback(count, noteIds);
} catch (error) {
log.warn(
`recordCardsMined callback failed during ${source}:`,
(error as Error).message,
);
log.warn(`recordCardsMined callback failed during ${source}:`, (error as Error).message);
}
}

View File

@@ -1,4 +1,4 @@
import type { AiConfig } from '../types';
import type { AiConfig } from '../types/integrations';
import { requestAiChatCompletion } from '../ai/client';
const DEFAULT_AI_SYSTEM_PROMPT =

View File

@@ -4,10 +4,10 @@ import test from 'node:test';
import { resolveAnimatedImageLeadInSeconds, extractSoundFilenames } from './animated-image-sync';
test('extractSoundFilenames returns ordered sound filenames from an Anki field value', () => {
assert.deepEqual(
extractSoundFilenames('before [sound:word.mp3] middle [sound:alt.ogg] after'),
['word.mp3', 'alt.ogg'],
);
assert.deepEqual(extractSoundFilenames('before [sound:word.mp3] middle [sound:alt.ogg] after'), [
'word.mp3',
'alt.ogg',
]);
});
test('resolveAnimatedImageLeadInSeconds sums configured word audio durations for animated images', async () => {

View File

@@ -4,7 +4,7 @@ import * as os from 'node:os';
import * as path from 'node:path';
import { DEFAULT_ANKI_CONNECT_CONFIG } from '../config';
import type { AnkiConnectConfig } from '../types';
import type { AnkiConnectConfig } from '../types/anki';
type NoteInfoLike = {
noteId: number;
@@ -36,9 +36,7 @@ export function extractSoundFilenames(value: string): string[] {
}
function shouldSyncAnimatedImageToWordAudio(config: Pick<AnkiConnectConfig, 'media'>): boolean {
return (
config.media?.imageType === 'avif' && config.media?.syncAnimatedImageToWordAudio !== false
);
return config.media?.imageType === 'avif' && config.media?.syncAnimatedImageToWordAudio !== false;
}
export async function probeAudioDurationSeconds(

View File

@@ -2,7 +2,7 @@ import assert from 'node:assert/strict';
import test from 'node:test';
import { CardCreationService } from './card-creation';
import type { AnkiConnectConfig } from '../types';
import type { AnkiConnectConfig } from '../types/anki';
test('CardCreationService counts locally created sentence cards', async () => {
const minedCards: Array<{ count: number; noteIds?: number[] }> = [];

View File

@@ -3,10 +3,11 @@ import {
getConfiguredWordFieldName,
getPreferredWordValueFromExtractedFields,
} from '../anki-field-config';
import { AiConfig, AnkiConnectConfig } from '../types';
import { AnkiConnectConfig } from '../types/anki';
import { createLogger } from '../logger';
import { SubtitleTimingTracker } from '../subtitle-timing-tracker';
import { MpvClient } from '../types';
import { AiConfig } from '../types/integrations';
import { MpvClient } from '../types/runtime';
import { resolveSentenceBackText } from './ai';
import { resolveMediaGenerationInputPath } from './media-source';

View File

@@ -179,7 +179,10 @@ function getDuplicateSourceCandidates(
const fallbackFieldName = configuredFieldNames[0]?.toLowerCase() || 'expression';
const fallbackKey = `${fallbackFieldName}:${normalizeDuplicateValue(trimmedFallback)}`;
if (!dedupeKey.has(fallbackKey)) {
candidates.push({ fieldName: configuredFieldNames[0] || 'Expression', value: trimmedFallback });
candidates.push({
fieldName: configuredFieldNames[0] || 'Expression',
value: trimmedFallback,
});
}
}

View File

@@ -0,0 +1,201 @@
import assert from 'node:assert/strict';
import test from 'node:test';
import {
FieldGroupingMergeCollaborator,
type FieldGroupingMergeNoteInfo,
} from './field-grouping-merge';
import type { AnkiConnectConfig } from '../types/anki';
function resolveFieldName(availableFieldNames: string[], preferredName: string): string | null {
return (
availableFieldNames.find(
(name) => name === preferredName || name.toLowerCase() === preferredName.toLowerCase(),
) ?? null
);
}
function createCollaborator(
options: {
config?: Partial<AnkiConnectConfig>;
currentSubtitleText?: string;
generatedMedia?: {
audioField?: string;
audioValue?: string;
imageField?: string;
imageValue?: string;
miscInfoValue?: string;
};
warnings?: Array<{ fieldName: string; reason: string; detail?: string }>;
} = {},
) {
const warnings = options.warnings ?? [];
const config = {
fields: {
sentence: 'Sentence',
audio: 'ExpressionAudio',
image: 'Picture',
miscInfo: 'MiscInfo',
...(options.config?.fields ?? {}),
},
...(options.config ?? {}),
} as AnkiConnectConfig;
return {
collaborator: new FieldGroupingMergeCollaborator({
getConfig: () => config,
getEffectiveSentenceCardConfig: () => ({
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
}),
getCurrentSubtitleText: () => options.currentSubtitleText,
resolveFieldName,
resolveNoteFieldName: (noteInfo, preferredName) => {
if (!preferredName) return null;
return resolveFieldName(Object.keys(noteInfo.fields), preferredName);
},
extractFields: (fields) =>
Object.fromEntries(
Object.entries(fields).map(([key, value]) => [key.toLowerCase(), value.value || '']),
),
processSentence: (mpvSentence) => `${mpvSentence}::processed`,
generateMediaForMerge: async () => options.generatedMedia ?? {},
warnFieldParseOnce: (fieldName, reason, detail) => {
warnings.push({ fieldName, reason, detail });
},
}),
warnings,
};
}
function makeNote(noteId: number, fields: Record<string, string>): FieldGroupingMergeNoteInfo {
return {
noteId,
fields: Object.fromEntries(Object.entries(fields).map(([key, value]) => [key, { value }])),
};
}
test('getGroupableFieldNames includes configured fields without duplicating ExpressionAudio', () => {
const { collaborator } = createCollaborator({
config: {
fields: {
image: 'Illustration',
sentence: 'SentenceText',
audio: 'ExpressionAudio',
miscInfo: 'ExtraInfo',
},
},
});
assert.deepEqual(collaborator.getGroupableFieldNames(), [
'Sentence',
'SentenceAudio',
'Picture',
'Illustration',
'SentenceText',
'ExtraInfo',
'SentenceFurigana',
]);
});
test('computeFieldGroupingMergedFields syncs a custom audio field from merged SentenceAudio', async () => {
const { collaborator } = createCollaborator({
config: {
fields: {
audio: 'CustomAudio',
},
},
});
const merged = await collaborator.computeFieldGroupingMergedFields(
1,
2,
makeNote(1, {
SentenceAudio: '[sound:keep.mp3]',
CustomAudio: '[sound:stale.mp3]',
}),
makeNote(2, {
SentenceAudio: '[sound:new.mp3]',
}),
false,
);
assert.equal(
merged.SentenceAudio,
'<span data-group-id="1">[sound:keep.mp3]</span><span data-group-id="2">[sound:new.mp3]</span>',
);
assert.equal(merged.CustomAudio, merged.SentenceAudio);
});
test('computeFieldGroupingMergedFields keeps strict fields when source is empty and warns on malformed spans', async () => {
const { collaborator, warnings } = createCollaborator({
currentSubtitleText: 'subtitle line',
});
const merged = await collaborator.computeFieldGroupingMergedFields(
3,
4,
makeNote(3, {
Sentence: '<span data-group-id="abc">keep sentence</span>',
SentenceAudio: '',
}),
makeNote(4, {
Sentence: 'source sentence',
SentenceAudio: '[sound:source.mp3]',
}),
false,
);
assert.equal(
merged.Sentence,
'<span data-group-id="3"><span data-group-id="abc">keep sentence</span></span><span data-group-id="4">source sentence</span>',
);
assert.equal(merged.SentenceAudio, '<span data-group-id="4">[sound:source.mp3]</span>');
assert.equal(warnings.length, 4);
assert.deepEqual(
warnings.map((entry) => entry.reason),
['invalid-group-id', 'no-usable-span-entries', 'invalid-group-id', 'no-usable-span-entries'],
);
});
test('computeFieldGroupingMergedFields uses generated media only when includeGeneratedMedia is true', async () => {
const generatedMedia = {
audioField: 'SentenceAudio',
audioValue: '[sound:generated.mp3]',
imageField: 'Picture',
imageValue: '<img src="generated.png">',
miscInfoValue: 'generated misc',
};
const { collaborator: withoutGenerated } = createCollaborator({ generatedMedia });
const { collaborator: withGenerated } = createCollaborator({ generatedMedia });
const keep = makeNote(10, {
SentenceAudio: '',
Picture: '',
MiscInfo: '',
});
const source = makeNote(11, {
SentenceAudio: '',
Picture: '',
MiscInfo: '',
});
const without = await withoutGenerated.computeFieldGroupingMergedFields(
10,
11,
keep,
source,
false,
);
const withMedia = await withGenerated.computeFieldGroupingMergedFields(
10,
11,
keep,
source,
true,
);
assert.deepEqual(without, {});
assert.equal(withMedia.SentenceAudio, '<span data-group-id="11">[sound:generated.mp3]</span>');
assert.equal(withMedia.Picture, '<img data-group-id="11" src="generated.png">');
assert.equal(withMedia.MiscInfo, '<span data-group-id="11">generated misc</span>');
});

View File

@@ -1,4 +1,4 @@
import { AnkiConnectConfig } from '../types';
import { AnkiConnectConfig } from '../types/anki';
import { getConfiguredWordFieldName } from '../anki-field-config';
interface FieldGroupingMergeMedia {

View File

@@ -1,7 +1,7 @@
import test from 'node:test';
import assert from 'node:assert/strict';
import { FieldGroupingWorkflow } from './field-grouping-workflow';
import type { KikuDuplicateCardInfo, KikuFieldGroupingChoice } from '../types';
import type { KikuDuplicateCardInfo, KikuFieldGroupingChoice } from '../types/anki';
type NoteInfo = {
noteId: number;

View File

@@ -1,4 +1,4 @@
import { KikuDuplicateCardInfo, KikuFieldGroupingChoice } from '../types';
import { KikuDuplicateCardInfo, KikuFieldGroupingChoice } from '../types/anki';
import { getPreferredWordValueFromExtractedFields } from '../anki-field-config';
export interface FieldGroupingWorkflowNoteInfo {
@@ -181,7 +181,8 @@ export class FieldGroupingWorkflow {
return {
noteId: noteInfo.noteId,
expression:
getPreferredWordValueFromExtractedFields(fields, this.deps.getConfig()) || fallbackExpression,
getPreferredWordValueFromExtractedFields(fields, this.deps.getConfig()) ||
fallbackExpression,
sentencePreview: this.deps.truncateSentence(
fields[(sentenceCardConfig.sentenceField || 'sentence').toLowerCase()] ||
(isOriginal ? '' : this.deps.getCurrentSubtitleText() || ''),

View File

@@ -0,0 +1,411 @@
import assert from 'node:assert/strict';
import test from 'node:test';
import { FieldGroupingService } from './field-grouping';
import type { KikuMergePreviewResponse } from '../types/anki';
type NoteInfo = {
noteId: number;
fields: Record<string, { value: string }>;
};
function createHarness(
options: {
kikuEnabled?: boolean;
kikuFieldGrouping?: 'auto' | 'manual' | 'disabled';
deck?: string;
noteIds?: number[];
notesInfo?: NoteInfo[][];
duplicateNoteId?: number | null;
hasAllConfiguredFields?: boolean;
manualHandled?: boolean;
expression?: string | null;
currentSentenceImageField?: string | undefined;
onProcessNewCard?: (noteId: number, options?: { skipKikuFieldGrouping?: boolean }) => void;
} = {},
) {
const calls: string[] = [];
const findNotesQueries: Array<{ query: string; maxRetries?: number }> = [];
const noteInfoRequests: number[][] = [];
const duplicateRequests: Array<{ expression: string; excludeNoteId: number }> = [];
const processCalls: Array<{ noteId: number; options?: { skipKikuFieldGrouping?: boolean } }> = [];
const autoCalls: Array<{ originalNoteId: number; newNoteId: number; expression: string }> = [];
const manualCalls: Array<{ originalNoteId: number; newNoteId: number; expression: string }> = [];
const noteInfoQueue = [...(options.notesInfo ?? [])];
const notes = options.noteIds ?? [2];
const service = new FieldGroupingService({
getConfig: () => ({
fields: {
word: 'Expression',
},
}),
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: options.kikuEnabled ?? true,
kikuFieldGrouping: options.kikuFieldGrouping ?? 'auto',
kikuDeleteDuplicateInAuto: true,
}),
isUpdateInProgress: () => false,
getDeck: options.deck ? () => options.deck : undefined,
withUpdateProgress: async (_message, action) => {
calls.push('withUpdateProgress');
return action();
},
showOsdNotification: (text) => {
calls.push(`osd:${text}`);
},
findNotes: async (query, findNotesOptions) => {
findNotesQueries.push({ query, maxRetries: findNotesOptions?.maxRetries });
return notes;
},
notesInfo: async (noteIds) => {
noteInfoRequests.push([...noteIds]);
return noteInfoQueue.shift() ?? [];
},
extractFields: (fields) =>
Object.fromEntries(
Object.entries(fields).map(([key, value]) => [key.toLowerCase(), value.value || '']),
),
findDuplicateNote: async (expression, excludeNoteId) => {
duplicateRequests.push({ expression, excludeNoteId });
return options.duplicateNoteId ?? 99;
},
hasAllConfiguredFields: () => options.hasAllConfiguredFields ?? true,
processNewCard: async (noteId, processOptions) => {
processCalls.push({ noteId, options: processOptions });
options.onProcessNewCard?.(noteId, processOptions);
},
getSentenceCardImageFieldName: () => options.currentSentenceImageField,
resolveFieldName: (availableFieldNames, preferredName) =>
availableFieldNames.find(
(name) => name === preferredName || name.toLowerCase() === preferredName.toLowerCase(),
) ?? null,
computeFieldGroupingMergedFields: async () => ({}),
getNoteFieldMap: (noteInfo) =>
Object.fromEntries(
Object.entries(noteInfo.fields).map(([key, value]) => [key, value.value || '']),
),
handleFieldGroupingAuto: async (originalNoteId, newNoteId, _newNoteInfo, expression) => {
autoCalls.push({ originalNoteId, newNoteId, expression });
},
handleFieldGroupingManual: async (originalNoteId, newNoteId, _newNoteInfo, expression) => {
manualCalls.push({ originalNoteId, newNoteId, expression });
return options.manualHandled ?? true;
},
});
return {
service,
calls,
findNotesQueries,
noteInfoRequests,
duplicateRequests,
processCalls,
autoCalls,
manualCalls,
};
}
type SuccessfulPreview = KikuMergePreviewResponse & {
ok: true;
compact: {
action: {
keepNoteId: number;
deleteNoteId: number;
deleteDuplicate: boolean;
};
mergedFields: Record<string, string>;
};
full: {
result: {
wouldDeleteNoteId: number | null;
};
};
};
test('triggerFieldGroupingForLastAddedCard stops when kiku mode is disabled', async () => {
const harness = createHarness({ kikuEnabled: false });
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(harness.calls, ['osd:Kiku mode is not enabled']);
assert.equal(harness.findNotesQueries.length, 0);
});
test('triggerFieldGroupingForLastAddedCard stops when field grouping is disabled', async () => {
const harness = createHarness({ kikuFieldGrouping: 'disabled' });
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(harness.calls, ['osd:Kiku field grouping is disabled']);
assert.equal(harness.findNotesQueries.length, 0);
});
test('triggerFieldGroupingForLastAddedCard stops when an update is already in progress', async () => {
const service = new FieldGroupingService({
getConfig: () => ({ fields: { word: 'Expression' } }),
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: true,
kikuFieldGrouping: 'auto',
kikuDeleteDuplicateInAuto: true,
}),
isUpdateInProgress: () => true,
withUpdateProgress: async () => {
throw new Error('should not be called');
},
showOsdNotification: () => {},
findNotes: async () => [],
notesInfo: async () => [],
extractFields: () => ({}),
findDuplicateNote: async () => null,
hasAllConfiguredFields: () => true,
processNewCard: async () => {},
getSentenceCardImageFieldName: () => undefined,
resolveFieldName: () => null,
computeFieldGroupingMergedFields: async () => ({}),
getNoteFieldMap: () => ({}),
handleFieldGroupingAuto: async () => {},
handleFieldGroupingManual: async () => true,
});
await service.triggerFieldGroupingForLastAddedCard();
});
test('triggerFieldGroupingForLastAddedCard finds the newest note and hands off to auto grouping', async () => {
const harness = createHarness({
deck: 'Anime Deck',
noteIds: [3, 7, 5],
notesInfo: [
[
{
noteId: 7,
fields: {
Expression: { value: 'word-7' },
Sentence: { value: 'line-7' },
},
},
],
[
{
noteId: 7,
fields: {
Expression: { value: 'word-7' },
Sentence: { value: 'line-7' },
},
},
],
],
duplicateNoteId: 42,
hasAllConfiguredFields: true,
});
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(harness.findNotesQueries, [
{ query: '"deck:Anime Deck" added:1', maxRetries: undefined },
]);
assert.deepEqual(harness.noteInfoRequests, [[7], [7]]);
assert.deepEqual(harness.duplicateRequests, [{ expression: 'word-7', excludeNoteId: 7 }]);
assert.deepEqual(harness.autoCalls, [
{
originalNoteId: 42,
newNoteId: 7,
expression: 'word-7',
},
]);
});
test('triggerFieldGroupingForLastAddedCard refreshes the card when configured fields are missing', async () => {
const processCalls: Array<{ noteId: number; options?: { skipKikuFieldGrouping?: boolean } }> = [];
const harness = createHarness({
noteIds: [11],
notesInfo: [
[
{
noteId: 11,
fields: {
Expression: { value: 'word-11' },
Sentence: { value: 'line-11' },
},
},
],
[
{
noteId: 11,
fields: {
Expression: { value: 'word-11' },
Sentence: { value: 'line-11' },
},
},
],
],
duplicateNoteId: 13,
hasAllConfiguredFields: false,
onProcessNewCard: (noteId, options) => {
processCalls.push({ noteId, options });
},
});
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(processCalls, [{ noteId: 11, options: { skipKikuFieldGrouping: true } }]);
assert.deepEqual(harness.manualCalls, []);
});
test('triggerFieldGroupingForLastAddedCard shows a cancellation message when manual grouping is declined', async () => {
const harness = createHarness({
kikuFieldGrouping: 'manual',
noteIds: [9],
notesInfo: [
[
{
noteId: 9,
fields: {
Expression: { value: 'word-9' },
Sentence: { value: 'line-9' },
},
},
],
[
{
noteId: 9,
fields: {
Expression: { value: 'word-9' },
Sentence: { value: 'line-9' },
},
},
],
],
duplicateNoteId: 77,
manualHandled: false,
});
await harness.service.triggerFieldGroupingForLastAddedCard();
assert.deepEqual(harness.manualCalls, [
{
originalNoteId: 77,
newNoteId: 9,
expression: 'word-9',
},
]);
assert.equal(harness.calls.at(-1), 'osd:Field grouping cancelled');
});
test('buildFieldGroupingPreview returns merged compact and full previews', async () => {
const service = new FieldGroupingService({
getConfig: () => ({ fields: { word: 'Expression' } }),
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: true,
kikuFieldGrouping: 'auto',
kikuDeleteDuplicateInAuto: true,
}),
isUpdateInProgress: () => false,
withUpdateProgress: async (_message, action) => action(),
showOsdNotification: () => {},
findNotes: async () => [],
notesInfo: async (noteIds) =>
noteIds.map((noteId) => ({
noteId,
fields: {
Sentence: { value: `sentence-${noteId}` },
SentenceAudio: { value: `[sound:${noteId}.mp3]` },
Picture: { value: `<img src="${noteId}.png">` },
MiscInfo: { value: `misc-${noteId}` },
},
})),
extractFields: () => ({}),
findDuplicateNote: async () => null,
hasAllConfiguredFields: () => true,
processNewCard: async () => {},
getSentenceCardImageFieldName: () => undefined,
resolveFieldName: (availableFieldNames, preferredName) =>
availableFieldNames.find(
(name) => name === preferredName || name.toLowerCase() === preferredName.toLowerCase(),
) ?? null,
computeFieldGroupingMergedFields: async () => ({
Sentence: 'merged sentence',
SentenceAudio: 'merged audio',
Picture: 'merged picture',
MiscInfo: 'merged misc',
}),
getNoteFieldMap: (noteInfo) =>
Object.fromEntries(
Object.entries(noteInfo.fields).map(([key, value]) => [key, value.value || '']),
),
handleFieldGroupingAuto: async () => {},
handleFieldGroupingManual: async () => true,
});
const preview = await service.buildFieldGroupingPreview(1, 2, true);
assert.equal(preview.ok, true);
if (!preview.ok) {
throw new Error(preview.error);
}
const successPreview = preview as SuccessfulPreview;
assert.deepEqual(successPreview.compact.action, {
keepNoteId: 1,
deleteNoteId: 2,
deleteDuplicate: true,
});
assert.equal(successPreview.compact.mergedFields.Sentence, 'merged sentence');
assert.equal(successPreview.full.result.wouldDeleteNoteId, 2);
});
test('buildFieldGroupingPreview reports missing notes cleanly', async () => {
const service = new FieldGroupingService({
getConfig: () => ({ fields: { word: 'Expression' } }),
getEffectiveSentenceCardConfig: () => ({
model: 'Sentence',
sentenceField: 'Sentence',
audioField: 'SentenceAudio',
lapisEnabled: false,
kikuEnabled: true,
kikuFieldGrouping: 'auto',
kikuDeleteDuplicateInAuto: true,
}),
isUpdateInProgress: () => false,
withUpdateProgress: async (_message, action) => action(),
showOsdNotification: () => {},
findNotes: async () => [],
notesInfo: async () => [
{
noteId: 1,
fields: {
Sentence: { value: 'sentence-1' },
},
},
],
extractFields: () => ({}),
findDuplicateNote: async () => null,
hasAllConfiguredFields: () => true,
processNewCard: async () => {},
getSentenceCardImageFieldName: () => undefined,
resolveFieldName: () => null,
computeFieldGroupingMergedFields: async () => ({}),
getNoteFieldMap: () => ({}),
handleFieldGroupingAuto: async () => {},
handleFieldGroupingManual: async () => true,
});
const preview = await service.buildFieldGroupingPreview(1, 2, false);
assert.equal(preview.ok, false);
if (preview.ok) {
throw new Error('expected preview to fail');
}
assert.equal(preview.error, 'Could not load selected notes');
});
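The harness in the test file above wires every collaborator of `FieldGroupingService` as an injected function, so tests can record calls without a mocking framework. A minimal standalone sketch of the same pattern (all names below are illustrative, not SubMiner APIs):

```typescript
// Collaborators are plain functions, so a test can swap in recorders.
type Deps = {
  findNotes: (query: string) => Promise<number[]>;
  notify: (text: string) => void;
};

class NewestNoteService {
  constructor(private deps: Deps) {}

  // Return the highest note id added today for a deck, or null when none.
  async newestNote(deck: string): Promise<number | null> {
    const ids = await this.deps.findNotes(`"deck:${deck}" added:1`);
    if (ids.length === 0) {
      this.deps.notify('No recent notes');
      return null;
    }
    return Math.max(...ids);
  }
}

// Test harness: injects recorders and exposes them for assertions.
function createHarness(noteIds: number[]) {
  const queries: string[] = [];
  const messages: string[] = [];
  const service = new NewestNoteService({
    findNotes: async (query) => {
      queries.push(query);
      return noteIds;
    },
    notify: (text) => messages.push(text),
  });
  return { service, queries, messages };
}
```

A test then awaits the method and asserts on `queries` and `messages`, the same way the tests above assert on `findNotesQueries` and `calls`.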

View File

@@ -1,4 +1,4 @@
import { KikuMergePreviewResponse } from '../types';
import { KikuMergePreviewResponse } from '../types/anki';
import { createLogger } from '../logger';
import { getPreferredWordValueFromExtractedFields } from '../anki-field-config';

View File

@@ -4,7 +4,7 @@ import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';
import type { AnkiConnectConfig } from '../types';
import type { AnkiConnectConfig } from '../types/anki';
import { KnownWordCacheManager } from './known-word-cache';
async function waitForCondition(
@@ -351,10 +351,7 @@ test('KnownWordCacheManager preserves cache state key captured before refresh wo
scope: string;
words: string[];
};
assert.equal(
persisted.scope,
'{"refreshMinutes":1,"scope":"is:note","fieldsWord":"Word"}',
);
assert.equal(persisted.scope, '{"refreshMinutes":1,"scope":"is:note","fieldsWord":"Word"}');
assert.deepEqual(persisted.words, ['猫']);
} finally {
fs.rmSync(stateDir, { recursive: true, force: true });

View File

@@ -3,7 +3,7 @@ import path from 'path';
import { DEFAULT_ANKI_CONNECT_CONFIG } from '../config';
import { getConfiguredWordFieldName } from '../anki-field-config';
import { AnkiConnectConfig } from '../types';
import { AnkiConnectConfig } from '../types/anki';
import { createLogger } from '../logger';
const log = createLogger('anki').child('integration.known-word-cache');
@@ -316,9 +316,9 @@ export class KnownWordCacheManager {
const currentDeck = this.deps.getConfig().deck?.trim();
const selectedDeckEntry =
currentDeck !== undefined && currentDeck.length > 0
? trimmedDeckEntries.find(([deckName]) => deckName === currentDeck) ?? null
? (trimmedDeckEntries.find(([deckName]) => deckName === currentDeck) ?? null)
: trimmedDeckEntries.length === 1
? trimmedDeckEntries[0] ?? null
? (trimmedDeckEntries[0] ?? null)
: null;
if (!selectedDeckEntry) {
@@ -329,7 +329,10 @@ export class KnownWordCacheManager {
if (Array.isArray(deckFields)) {
const normalizedFields = [
...new Set(
deckFields.map(String).map((field) => field.trim()).filter((field) => field.length > 0),
deckFields
.map(String)
.map((field) => field.trim())
.filter((field) => field.length > 0),
),
];
if (normalizedFields.length > 0) {
@@ -353,7 +356,14 @@ export class KnownWordCacheManager {
continue;
}
const normalizedFields = Array.isArray(fields)
? [...new Set(fields.map(String).map((field) => field.trim()).filter(Boolean))]
? [
...new Set(
fields
.map(String)
.map((field) => field.trim())
.filter(Boolean),
),
]
: [];
scopes.push({
query: `deck:"${escapeAnkiSearchValue(trimmedDeckName)}"`,
@@ -402,7 +412,10 @@ export class KnownWordCacheManager {
private async fetchKnownWordNoteFieldsById(): Promise<Map<number, string[]>> {
const scopes = this.getKnownWordQueryScopes();
const noteFieldsById = new Map<number, string[]>();
log.debug('Refreshing known-word cache', `queries=${scopes.map((scope) => scope.query).join(' | ')}`);
log.debug(
'Refreshing known-word cache',
`queries=${scopes.map((scope) => scope.query).join(' | ')}`,
);
for (const scope of scopes) {
const noteIds = (await this.deps.client.findNotes(scope.query, {
@@ -414,10 +427,7 @@ export class KnownWordCacheManager {
continue;
}
const existingFields = noteFieldsById.get(noteId) ?? [];
noteFieldsById.set(
noteId,
[...new Set([...existingFields, ...scope.fields])],
);
noteFieldsById.set(noteId, [...new Set([...existingFields, ...scope.fields])]);
}
}
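The rewrapped `noteFieldsById.set` call above builds a per-note union of field names across query scopes, deduplicating with a `Set`. A standalone sketch of that accumulation step (the `Scope` shape is simplified from the real module):

```typescript
type Scope = { query: string; fields: string[] };

// Merge field names per note id, deduplicating with a Set while
// preserving first-seen order.
function collectNoteFields(
  results: Array<{ scope: Scope; noteIds: number[] }>,
): Map<number, string[]> {
  const noteFieldsById = new Map<number, string[]>();
  for (const { scope, noteIds } of results) {
    for (const noteId of noteIds) {
      const existingFields = noteFieldsById.get(noteId) ?? [];
      noteFieldsById.set(noteId, [...new Set([...existingFields, ...scope.fields])]);
    }
  }
  return noteFieldsById;
}
```

A note that appears in several scopes ends up with the combined field list; a note seen once keeps only that scope's fields.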

View File

@@ -1,5 +1,5 @@
import { isRemoteMediaPath } from '../jimaku/utils';
import type { MpvClient } from '../types';
import type { MpvClient } from '../types/runtime';
export type MediaGenerationKind = 'audio' | 'video';
@@ -50,7 +50,7 @@ function resolvePreferredUrlFromMpvEdlSource(
// mpv EDL sources usually list audio streams first and video streams last, so
// when classifyMediaUrl cannot identify a typed URL we fall back to stream order.
return kind === 'audio' ? urls[0] ?? null : urls[urls.length - 1] ?? null;
return kind === 'audio' ? (urls[0] ?? null) : (urls[urls.length - 1] ?? null);
}
export async function resolveMediaGenerationInputPath(
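The change above only adds parentheses: `??` binds more tightly than the conditional operator, so `kind === 'audio' ? urls[0] ?? null : …` already groups the way Prettier prints it (unlike mixing `??` with `||` or `&&`, which is a syntax error without parentheses). A quick standalone check:

```typescript
function preferredUrl(urls: string[], kind: 'audio' | 'video'): string | null {
  // Unparenthesized form: `??` applies within each ternary branch.
  return kind === 'audio' ? urls[0] ?? null : urls[urls.length - 1] ?? null;
}

function preferredUrlGrouped(urls: string[], kind: 'audio' | 'video'): string | null {
  // Prettier's output; identical parse, explicit grouping.
  return kind === 'audio' ? (urls[0] ?? null) : (urls[urls.length - 1] ?? null);
}
```

Both functions return the same value for every input; on an empty stream list each branch falls back to `null`.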

View File

@@ -2,7 +2,7 @@ import test from 'node:test';
import assert from 'node:assert/strict';
import { DEFAULT_ANKI_CONNECT_CONFIG } from '../config';
import type { AnkiConnectConfig } from '../types';
import type { AnkiConnectConfig } from '../types/anki';
import { AnkiIntegrationRuntime } from './runtime';
function createRuntime(

View File

@@ -1,5 +1,5 @@
import { DEFAULT_ANKI_CONNECT_CONFIG } from '../config';
import type { AnkiConnectConfig } from '../types';
import type { AnkiConnectConfig } from '../types/anki';
import {
getKnownWordCacheLifecycleConfig,
getKnownWordCacheRefreshIntervalMinutes,

View File

@@ -1,4 +1,4 @@
import { NotificationOptions } from '../types';
import { NotificationOptions } from '../types/anki';
export interface UiFeedbackState {
progressDepth: number;

View File

@@ -1325,8 +1325,14 @@ test('controller descriptor config rejects malformed binding objects', () => {
config.controller.bindings.leftStickHorizontal,
DEFAULT_CONFIG.controller.bindings.leftStickHorizontal,
);
assert.equal(warnings.some((warning) => warning.path === 'controller.bindings.toggleLookup'), true);
assert.equal(warnings.some((warning) => warning.path === 'controller.bindings.closeLookup'), true);
assert.equal(
warnings.some((warning) => warning.path === 'controller.bindings.toggleLookup'),
true,
);
assert.equal(
warnings.some((warning) => warning.path === 'controller.bindings.closeLookup'),
true,
);
assert.equal(
warnings.some((warning) => warning.path === 'controller.bindings.leftStickHorizontal'),
true,

View File

@@ -1,4 +1,4 @@
import { RawConfig, ResolvedConfig } from '../types';
import { RawConfig, ResolvedConfig } from '../types/config';
import { CORE_DEFAULT_CONFIG } from './definitions/defaults-core';
import { IMMERSION_DEFAULT_CONFIG } from './definitions/defaults-immersion';
import { INTEGRATIONS_DEFAULT_CONFIG } from './definitions/defaults-integrations';

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
export const CORE_DEFAULT_CONFIG: Pick<
ResolvedConfig,

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
export const IMMERSION_DEFAULT_CONFIG: Pick<ResolvedConfig, 'immersionTracking'> = {
immersionTracking: {

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
export const INTEGRATIONS_DEFAULT_CONFIG: Pick<
ResolvedConfig,

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types.js';
import { ResolvedConfig } from '../../types/config.js';
export const STATS_DEFAULT_CONFIG: Pick<ResolvedConfig, 'stats'> = {
stats: {

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
export const SUBTITLE_DEFAULT_CONFIG: Pick<ResolvedConfig, 'subtitleStyle' | 'subtitleSidebar'> = {
subtitleStyle: {

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ConfigOptionRegistryEntry } from './shared';
export function buildCoreConfigOptionRegistry(
@@ -263,7 +263,8 @@ export function buildCoreConfigOptionRegistry(
{
path: `controller.bindings.${binding.id}.axisIndex`,
kind: 'number' as const,
defaultValue: binding.defaultValue.kind === 'axis' ? binding.defaultValue.axisIndex : undefined,
defaultValue:
binding.defaultValue.kind === 'axis' ? binding.defaultValue.axisIndex : undefined,
description: 'Raw axis index captured for this discrete controller action.',
},
{
@@ -293,7 +294,8 @@ export function buildCoreConfigOptionRegistry(
{
path: `controller.bindings.${binding.id}.axisIndex`,
kind: 'number' as const,
defaultValue: binding.defaultValue.kind === 'axis' ? binding.defaultValue.axisIndex : undefined,
defaultValue:
binding.defaultValue.kind === 'axis' ? binding.defaultValue.axisIndex : undefined,
description: 'Raw axis index captured for this analog controller action.',
},
{
@@ -302,7 +304,8 @@ export function buildCoreConfigOptionRegistry(
enumValues: ['none', 'horizontal', 'vertical'],
defaultValue:
binding.defaultValue.kind === 'axis' ? binding.defaultValue.dpadFallback : undefined,
description: 'Optional D-pad fallback used when this analog controller action should also read D-pad input.',
description:
'Optional D-pad fallback used when this analog controller action should also read D-pad input.',
},
]),
{

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ConfigOptionRegistryEntry } from './shared';
export function buildImmersionConfigOptionRegistry(

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ConfigOptionRegistryEntry, RuntimeOptionRegistryEntry } from './shared';
export function buildIntegrationConfigOptionRegistry(
@@ -369,13 +369,15 @@ export function buildIntegrationConfigOptionRegistry(
path: 'youtubeSubgen.whisperBin',
kind: 'string',
defaultValue: defaultConfig.youtubeSubgen.whisperBin,
description: 'Legacy compatibility path kept for external subtitle fallback tools; not used by default.',
description:
'Legacy compatibility path kept for external subtitle fallback tools; not used by default.',
},
{
path: 'youtubeSubgen.whisperModel',
kind: 'string',
defaultValue: defaultConfig.youtubeSubgen.whisperModel,
description: 'Legacy compatibility model path kept for external subtitle fallback tooling; not used by default.',
description:
'Legacy compatibility model path kept for external subtitle fallback tooling; not used by default.',
},
{
path: 'youtubeSubgen.whisperVadModel',

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types.js';
import { ResolvedConfig } from '../../types/config.js';
import { ConfigOptionRegistryEntry } from './shared.js';
export function buildStatsConfigOptionRegistry(
@@ -15,7 +15,8 @@ export function buildStatsConfigOptionRegistry(
path: 'stats.markWatchedKey',
kind: 'string',
defaultValue: defaultConfig.stats.markWatchedKey,
description: 'Key code to mark the current video as watched and advance to the next playlist entry.',
description:
'Key code to mark the current video as watched and advance to the next playlist entry.',
},
{
path: 'stats.serverPort',

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ConfigOptionRegistryEntry } from './shared';
export function buildSubtitleConfigOptionRegistry(

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { RuntimeOptionRegistryEntry } from './shared';
export function buildRuntimeOptionRegistry(

View File

@@ -1,11 +1,11 @@
import {
AnkiConnectConfig,
ResolvedConfig,
import type { AnkiConnectConfig } from '../../types/anki';
import type { ResolvedConfig } from '../../types/config';
import type {
RuntimeOptionId,
RuntimeOptionScope,
RuntimeOptionValue,
RuntimeOptionValueType,
} from '../../types';
} from '../../types/runtime-options';
export type ConfigValueKind = 'boolean' | 'number' | 'string' | 'enum' | 'array' | 'object';

View File

@@ -1,5 +1,5 @@
import * as fs from 'fs';
import { RawConfig } from '../types';
import { RawConfig } from '../types/config';
import { parseConfigContent } from './parse';
export interface ConfigPaths {

View File

@@ -1,4 +1,4 @@
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../types';
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../types/config';
import { applyAnkiConnectResolution } from './resolve/anki-connect';
import { createResolveContext } from './resolve/context';
import { applyCoreDomainConfig } from './resolve/core-domains';

View File

@@ -1,4 +1,4 @@
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../../types';
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../../types/config';
import { DEFAULT_CONFIG, deepCloneConfig } from '../definitions';
import { createWarningCollector } from '../warnings';
import { isObject } from './shared';

View File

@@ -8,7 +8,7 @@ import type {
ControllerDiscreteBindingConfig,
ResolvedControllerAxisBinding,
ResolvedControllerDiscreteBinding,
} from '../../types';
} from '../../types/runtime';
import { ResolveContext } from './context';
import { asBoolean, asNumber, asString, isObject } from './shared';
@@ -27,7 +27,12 @@ const CONTROLLER_BUTTON_BINDINGS = [
'rightTrigger',
] as const;
const CONTROLLER_AXIS_BINDINGS = ['leftStickX', 'leftStickY', 'rightStickX', 'rightStickY'] as const;
const CONTROLLER_AXIS_BINDINGS = [
'leftStickX',
'leftStickY',
'rightStickX',
'rightStickY',
] as const;
const CONTROLLER_AXIS_INDEX_BY_BINDING: Record<ControllerAxisBinding, number> = {
leftStickX: 0,
@@ -98,7 +103,9 @@ function parseDiscreteBindingObject(value: unknown): ResolvedControllerDiscreteB
return { kind: 'none' };
}
if (value.kind === 'button') {
return typeof value.buttonIndex === 'number' && Number.isInteger(value.buttonIndex) && value.buttonIndex >= 0
return typeof value.buttonIndex === 'number' &&
Number.isInteger(value.buttonIndex) &&
value.buttonIndex >= 0
? { kind: 'button', buttonIndex: value.buttonIndex }
: null;
}
@@ -121,7 +128,11 @@ function parseAxisBindingObject(
return { kind: 'none' };
}
if (!isObject(value) || value.kind !== 'axis') return null;
if (typeof value.axisIndex !== 'number' || !Number.isInteger(value.axisIndex) || value.axisIndex < 0) {
if (
typeof value.axisIndex !== 'number' ||
!Number.isInteger(value.axisIndex) ||
value.axisIndex < 0
) {
return null;
}
if (value.dpadFallback !== undefined && !isControllerDpadFallback(value.dpadFallback)) {
@@ -368,7 +379,9 @@ export function applyCoreDomainConfig(context: ResolveContext): void {
const legacyValue = asString(bindingValue);
if (
legacyValue !== undefined &&
CONTROLLER_BUTTON_BINDINGS.includes(legacyValue as (typeof CONTROLLER_BUTTON_BINDINGS)[number])
CONTROLLER_BUTTON_BINDINGS.includes(
legacyValue as (typeof CONTROLLER_BUTTON_BINDINGS)[number],
)
) {
resolved.controller.bindings[key] = resolveLegacyDiscreteBinding(
legacyValue as ControllerButtonBinding,
@@ -401,7 +414,9 @@ export function applyCoreDomainConfig(context: ResolveContext): void {
const legacyValue = asString(bindingValue);
if (
legacyValue !== undefined &&
CONTROLLER_AXIS_BINDINGS.includes(legacyValue as (typeof CONTROLLER_AXIS_BINDINGS)[number])
CONTROLLER_AXIS_BINDINGS.includes(
legacyValue as (typeof CONTROLLER_AXIS_BINDINGS)[number],
)
) {
resolved.controller.bindings[key] = resolveLegacyAxisBinding(
legacyValue as ControllerAxisBinding,
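`parseDiscreteBindingObject` above narrows an untrusted config value into a discriminated union, rejecting malformed shapes so the resolver can warn and fall back to defaults. A self-contained sketch of that validation (types simplified from the real `ResolvedControllerDiscreteBinding`):

```typescript
type DiscreteBinding = { kind: 'none' } | { kind: 'button'; buttonIndex: number };

// Accept only well-formed binding objects; anything else is rejected as null
// so the caller can fall back to the default binding and emit a warning.
function parseDiscreteBinding(value: unknown): DiscreteBinding | null {
  if (typeof value !== 'object' || value === null) return null;
  const candidate = value as { kind?: unknown; buttonIndex?: unknown };
  if (candidate.kind === 'none') return { kind: 'none' };
  if (candidate.kind === 'button') {
    return typeof candidate.buttonIndex === 'number' &&
      Number.isInteger(candidate.buttonIndex) &&
      candidate.buttonIndex >= 0
      ? { kind: 'button', buttonIndex: candidate.buttonIndex }
      : null;
  }
  return null;
}
```

Validating into `null` rather than throwing keeps config resolution total: every malformed entry becomes a warning plus a default, never a crash.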

View File

@@ -1,5 +1,8 @@
import { ResolveContext } from './context';
import { ImmersionTrackingRetentionMode, ImmersionTrackingRetentionPreset } from '../../types';
import {
ImmersionTrackingRetentionMode,
ImmersionTrackingRetentionPreset,
} from '../../types/integrations';
import { asBoolean, asNumber, asString, isObject } from './shared';
const DEFAULT_RETENTION_MODE: ImmersionTrackingRetentionMode = 'preset';

View File

@@ -17,7 +17,12 @@ export function applyStatsConfig(context: ResolveContext): void {
if (markWatchedKey !== undefined) {
resolved.stats.markWatchedKey = markWatchedKey;
} else if (src.stats.markWatchedKey !== undefined) {
warn('stats.markWatchedKey', src.stats.markWatchedKey, resolved.stats.markWatchedKey, 'Expected string.');
warn(
'stats.markWatchedKey',
src.stats.markWatchedKey,
resolved.stats.markWatchedKey,
'Expected string.',
);
}
const serverPort = asNumber(src.stats.serverPort);

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../../types';
import { ResolvedConfig } from '../../types/config';
import { ResolveContext } from './context';
import {
asBoolean,
@@ -467,7 +467,9 @@ export function applySubtitleDomainConfig(context: ResolveContext): void {
);
if (pauseVideoOnHover !== undefined) {
resolved.subtitleSidebar.pauseVideoOnHover = pauseVideoOnHover;
} else if ((src.subtitleSidebar as { pauseVideoOnHover?: unknown }).pauseVideoOnHover !== undefined) {
} else if (
(src.subtitleSidebar as { pauseVideoOnHover?: unknown }).pauseVideoOnHover !== undefined
) {
resolved.subtitleSidebar.pauseVideoOnHover = fallback.pauseVideoOnHover;
warn(
'subtitleSidebar.pauseVideoOnHover',

View File

@@ -49,7 +49,10 @@ test('subtitleSidebar accepts zero opacity', () => {
applySubtitleDomainConfig(context);
assert.equal(context.resolved.subtitleSidebar.opacity, 0);
assert.equal(warnings.some((warning) => warning.path === 'subtitleSidebar.opacity'), false);
assert.equal(
warnings.some((warning) => warning.path === 'subtitleSidebar.opacity'),
false,
);
});
test('subtitleSidebar falls back and warns on invalid values', () => {

View File

@@ -1,6 +1,6 @@
import * as fs from 'fs';
import * as path from 'path';
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../types';
import { ConfigValidationWarning, RawConfig, ResolvedConfig } from '../types/config';
import { DEFAULT_CONFIG, deepCloneConfig, deepMergeRawConfig } from './definitions';
import { ConfigPaths, loadRawConfig, loadRawConfigStrict } from './load';
import { resolveConfig } from './resolve';

View File

@@ -1,4 +1,4 @@
import { ResolvedConfig } from '../types';
import { ResolvedConfig } from '../types/config';
import {
CONFIG_OPTION_REGISTRY,
CONFIG_TEMPLATE_SECTIONS,

Some files were not shown because too many files have changed in this diff Show More