Compare commits


16 commits

Author SHA1 Message Date
johnfrum1234
db1729730f Full pipeline flow through web interface with live updates
API:
  POST /api/tasks/{id}/run — sets task to in_progress immediately,
    launches subprocess with error handling and logging.
  GET /api/tasks/{id}/running — checks pipelines table for active run.
  Fixed --db flag position in subprocess command.
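The `--db` fix can be sketched as follows, assuming the bug was that the global `--db` flag must precede the `run` subcommand; the helper names are illustrative, not the repo's actual API:

```python
import subprocess

def build_run_command(task_id: str, db_path: str) -> list[str]:
    """Build the background `kin run` invocation.

    Global flags like --db must come before the subcommand; otherwise
    click would parse them as arguments to `run` itself.
    """
    return ["kin", "--db", db_path, "run", task_id]

def launch_pipeline(task_id: str, db_path: str) -> subprocess.Popen:
    # Detached launch; stderr is inherited so errors reach the API server log.
    return subprocess.Popen(build_run_command(task_id, db_path))
```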

TaskDetail (live pipeline):
  - Run button starts pipeline, auto-starts 3s polling
  - Pipeline cards update in real-time as agent_logs appear
  - Pulsing blue dot on header while in_progress
  - Spinner on run button during execution
  - Auto-stops polling when status changes from in_progress
  - Cleanup on component unmount (no leaked timers)

ProjectView (run from list):
  - [>] button on each pending task row
  - Confirm dialog before starting
  - Pulsing blue dot for in_progress tasks
  - Click task row → /task/:id with live view

Dashboard (live statuses):
  - Pulsing blue dot next to active task count
  - Auto-poll every 5s when any project has active tasks
  - Stops polling when no active tasks

5 new API tests (running endpoint, run sets status, not found).
141 tests total, all passing. Frontend builds clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 15:29:05 +02:00
johnfrum1234
ab693d3c4d Add permission-aware follow-up flow with interactive resolution
When follow-up agent detects permission-blocked items ("ручное
применение", "permission denied", etc.), they become pending_actions
instead of auto-created tasks. User chooses per item:
  1. Rerun with --dangerously-skip-permissions
  2. Create manual task
  3. Skip

core/followup.py:
  _is_permission_blocked() — regex detection of 9 permission patterns
  generate_followups() returns {created, pending_actions}
  resolve_pending_action() — handles rerun/manual_task/skip

agents/runner.py:
  _run_claude(allow_write=True) adds --dangerously-skip-permissions
  run_agent/run_pipeline pass allow_write through

CLI: kin approve --followup — interactive 1/2/3 prompt per blocked item
API: POST /approve returns {needs_decision, pending_actions}
     POST /resolve resolves individual actions
Frontend: pending actions shown as cards with 3 buttons in approve modal

136 tests, all passing. Frontend builds clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 15:16:48 +02:00
johnfrum1234
9264415776 Add follow-up task generation on approve
When approving a task, PM agent analyzes pipeline output and creates
follow-up tasks automatically (e.g. security audit → 8 fix tasks).

core/followup.py:
  generate_followups() — collects pipeline output, runs followup agent,
  parses JSON task list, creates tasks with parent_task_id linkage.
  Handles: bare arrays, {tasks:[...]} wrappers, invalid JSON, empty.
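A minimal sketch of the tolerant parsing described above (function name is illustrative):

```python
import json

def parse_task_list(raw: str) -> list[dict]:
    """Parse follow-up agent output into a list of task dicts.

    Accepts a bare JSON array or a {"tasks": [...]} wrapper; returns []
    for empty or invalid JSON, mirroring the cases listed above.
    """
    raw = raw.strip()
    if not raw:
        return []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    if isinstance(data, dict):
        data = data.get("tasks", [])
    if not isinstance(data, list):
        return []
    return [t for t in data if isinstance(t, dict)]
```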

agents/prompts/followup.md — PM prompt for analyzing results and
  creating actionable follow-up tasks with priority from severity.

CLI: kin approve <task_id> [--followup] [--decision "text"]
API: POST /api/tasks/{id}/approve {create_followups: true}
  Returns {status, decision, followup_tasks: [...]}

Frontend (TaskDetail approve modal):
  - Checkbox "Create follow-up tasks" (default ON)
  - Loading state during generation
  - Results view: list of created tasks with links to /task/:id

ProjectView: tasks show "from VDOL-001" for follow-ups.

13 new tests (followup), 125 total, all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 15:02:58 +02:00
johnfrum1234
f7830d484c Let pipeline subprocess stderr flow to uvicorn terminal
Removed stderr=subprocess.DEVNULL from POST /api/tasks/{id}/run
so errors from background kin run are visible in the API server log.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 14:46:20 +02:00
johnfrum1234
c129cf9d95 Fix output truncation bug, add language support for agent responses
Bug 1 — Output truncation:
  _run_claude() was replacing raw stdout with parsed sub-field which
  could be a dict (not string). run_agent() then saved dict.__repr__
  to DB instead of full JSON. Fixed: _run_claude() always returns
  string output; run_agent() ensures string before DB write.
  Added tests: full_output_saved_to_db, dict_output_saved_as_json_string.
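The fix can be sketched as a small coercion helper (name and placement are assumptions):

```python
import json

def ensure_string_output(output) -> str:
    """Coerce agent output to a string before the DB write.

    Dicts and lists are serialized as JSON rather than falling through
    to dict.__repr__, which caused the truncation bug described above.
    """
    if isinstance(output, str):
        return output
    if isinstance(output, (dict, list)):
        return json.dumps(output, ensure_ascii=False)
    return str(output)
```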

Bug 2 — Language support:
  Added projects.language column (TEXT DEFAULT 'ru').
  Auto-migration for existing DBs (ALTER TABLE ADD COLUMN).
  context_builder passes language in project context.
  format_prompt() appends "## Language\nALWAYS respond in {language}"
  at the end of every prompt.
  CLI: kin project add --language ru (default: ru).
  Tests: language in prompt for ru/en, project creation, context.
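A sketch of the language injection, assuming the directive is appended verbatim as described (helper name is illustrative):

```python
def format_prompt_with_language(prompt: str, language: str = "ru") -> str:
    """Append the language directive as the final section of every prompt."""
    return f"{prompt.rstrip()}\n\n## Language\nALWAYS respond in {language}"
```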

112 tests, all passing. ~/.kin/kin.db migrated (vdol: language=ru).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 14:39:33 +02:00
johnfrum1234
38c252fc1b Add task detail view, pipeline visualization, approve/reject workflow
API (web/api.py) — 5 new endpoints:
  GET  /api/tasks/{id}/pipeline — agent_logs as pipeline steps
  GET  /api/tasks/{id}/full — task + steps + related decisions
  POST /api/tasks/{id}/approve — mark done, optionally add decision
  POST /api/tasks/{id}/reject — return to pending with reason
  POST /api/tasks/{id}/run — launch pipeline in background (202)

Frontend:
  TaskDetail (/task/:id) — full task page with:
    - Pipeline graph: role cards with icons, arrows, status colors
    - Click step → expand output (pre-formatted, JSON detected)
    - Action bar: Approve (with optional decision), Reject, Run Pipeline
    - Polling for live pipeline updates
  Dashboard: review_tasks badge ("awaiting review" in yellow)
  ProjectView: task rows are now clickable links to /task/:id

Runner: output_summary no longer truncated (full output for GUI).
Models: get_project_summary includes review_tasks count.

13 new API tests, 105 total, all passing. Frontend builds clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 14:32:29 +02:00
johnfrum1234
fabae74c19 Add context builder, agent runner, and pipeline executor
core/context_builder.py:
  build_context() — assembles role-specific context from DB.
  PM gets everything; debugger gets gotchas/workarounds; reviewer
  gets conventions only; tester gets minimal context; security
  gets security-category decisions.
  format_prompt() — injects context into role templates.
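The role-based filtering above can be sketched as a lookup table; the context keys here are illustrative, and the real rules live in core/context_builder.py and agents/specialists.yaml:

```python
# Hypothetical role → allowed-context-keys table.
_ROLE_CONTEXT = {
    "pm": {"decisions", "modules", "tasks", "conventions"},
    "debugger": {"gotchas", "workarounds"},
    "reviewer": {"conventions"},
    "tester": set(),
    "security": {"security_decisions"},
}

def build_context(role: str, project_context: dict) -> dict:
    """Keep only the context sections this role is allowed to see."""
    allowed = _ROLE_CONTEXT.get(role, set())
    return {k: v for k, v in project_context.items() if k in allowed}
```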

agents/runner.py:
  run_agent() — launches claude CLI as subprocess with role prompt.
  run_pipeline() — executes multi-step pipelines sequentially,
  chains output between steps, logs to agent_logs, creates/updates
  pipeline records, handles failures gracefully.

agents/specialists.yaml — 8 roles with tools, permissions, context rules.
agents/prompts/pm.md — PM prompt for task decomposition.
agents/prompts/security.md — security audit prompt (OWASP, auth, secrets).

CLI: kin run <task_id> [--dry-run]
  PM decomposes → shows pipeline → executes with confirmation.
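The sequential chaining can be sketched like so; the step and log shapes are assumptions, not the runner's actual records:

```python
from typing import Callable

def run_pipeline(steps: list[tuple[str, Callable[[str], str]]], task_brief: str) -> list[dict]:
    """Run steps in order, feeding each step's output to the next.

    A failing step stops the pipeline but earlier results are kept,
    roughly matching the graceful-failure behaviour described above.
    """
    logs = []
    current = task_brief
    for role, agent_fn in steps:
        try:
            current = agent_fn(current)
            logs.append({"role": role, "status": "done", "output": current})
        except Exception as exc:
            logs.append({"role": role, "status": "failed", "output": str(exc)})
            break
    return logs
```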

31 new tests (15 context_builder, 11 runner, 5 JSON parsing).
92 total, all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 14:03:32 +02:00
johnfrum1234
86e5b8febf Add web GUI: FastAPI API + Vue 3 frontend with dark theme
API (web/api.py):
  GET  /api/projects, /api/projects/{id}, /api/tasks/{id}
  GET  /api/decisions?project=X, /api/cost?days=7, /api/support/tickets
  POST /api/projects, /api/tasks, /api/decisions, /api/bootstrap
  CORS for localhost:5173, all queries via models.py

Frontend (web/frontend/):
  Vue 3 + TypeScript + Vite + Tailwind CSS v3
  Dashboard: project cards with task counters, cost, status badges
  ProjectView: tabs for Tasks/Decisions/Modules with filters
  Modals: Add Project, Add Task, Add Decision, Bootstrap
  Dark theme, monospace font, minimal clean design

Startup:
  API:  cd web && uvicorn api:app --reload --port 8420
  Web:  cd web/frontend && npm install && npm run dev

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 13:50:15 +02:00
johnfrum1234
b95db7c7d6 bootstrap: vdol project loaded with real data 2026-03-15 13:40:58 +02:00
johnfrum1234
e5444114bd Fix bootstrap: deep scan, CLAUDE.md fallback, noise filtering
1. Tech stack: recursive file search (depth 3) + CLAUDE.md text fallback
   when config files are on remote server (detects nodejs, postgresql, etc.)
2. Modules: scan */src/ patterns in top-level dirs (frontend/src/, backend-pg/src/)
3. Decisions: filter out unrelated sections (Jitsi, Nextcloud, Prosody, GOIP),
   filter noise (commit hashes, shell commands, external service paths).
   Noise filtering also applied to Obsidian decisions.

Tested on vdolipoperek: 4 tech, 5 modules, 9 clean decisions, 24 Obsidian tasks.
61 tests, all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 13:37:42 +02:00
johnfrum1234
da4a8aae72 Add bootstrap command — auto-detect project stack, modules, decisions
kin bootstrap <path> --id <id> --name <name> [--vault <path>]

Detects: package.json, requirements.txt, go.mod, config files → tech_stack.
Scans src/app/lib/frontend/backend dirs → modules with type detection.
Parses CLAUDE.md for GOTCHA/WORKAROUND/FIXME/ВАЖНО → decisions.
Scans Obsidian vault for kanban tasks, checkboxes, and decisions.
Preview before save, -y to skip confirmation.
18 bootstrap tests, 57 total passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 13:29:01 +02:00
johnfrum1234
432cfd55d4 Add CLI (cli/main.py) — click-based interface for all core operations
Commands: project (add/list/show), task (add/list/show),
decision (add/list), module (add/list), status, cost.
Auto-generated task IDs (PROJ-001). DB at ~/.kin/kin.db or $KIN_DB.
pyproject.toml with `kin` entry point. 18 CLI tests, 39 total passing.
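ID generation in the PROJ-001 style can be sketched as follows (helper name is illustrative):

```python
import re

def next_task_id(project_id: str, existing: list[str]) -> str:
    """Generate the next PROJ-001-style task ID for a project."""
    prefix = project_id.upper()
    pattern = re.compile(rf"^{re.escape(prefix)}-(\d+)$")
    numbers = [int(m.group(1)) for tid in existing if (m := pattern.match(tid))]
    return f"{prefix}-{max(numbers, default=0) + 1:03d}"
```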

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 13:20:57 +02:00
johnfrum1234
3db73332ad Add core/models.py — data access functions for all 9 tables
20 functions covering: projects, tasks, decisions, modules,
agent_logs, pipelines, support tickets, and dashboard stats.
Parameterized queries, JSON encode/decode, no ORM.
21 tests, all passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 13:16:12 +02:00
johnfrum1234
d7491705d9 Add core/db.py — SQLite schema with all 9 tables from DESIGN.md 3.5
Tables: projects, tasks, decisions, agent_logs, modules, pipelines,
project_links, support_tickets, support_bot_config.
WAL mode, foreign keys enabled, idempotent init.
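A sketch of such an idempotent init with only two of the nine tables; the column sets are illustrative, not the DESIGN.md 3.5 schema:

```python
import sqlite3

def init_schema(conn: sqlite3.Connection) -> None:
    """Enable WAL + foreign keys and create tables; safe to call repeatedly."""
    conn.execute("PRAGMA journal_mode=WAL")   # no effect on :memory: databases
    conn.execute("PRAGMA foreign_keys=ON")
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS projects (
            id   TEXT PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE IF NOT EXISTS tasks (
            id         TEXT PRIMARY KEY,
            project_id TEXT NOT NULL REFERENCES projects(id),
            title      TEXT NOT NULL,
            status     TEXT DEFAULT 'pending'
        );
    """)
```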

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 13:12:54 +02:00
johnfrum1234
bdb9fb4a65 Add CLAUDE.md — project-level instructions for Kin
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 13:10:47 +02:00
johnfrum1234
f5a0e2f0b9 Add DESIGN.md — main architecture document for Kin agent orchestrator
Copied from agent-orchestrator-research.md as the foundational design reference.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 13:10:10 +02:00
47 changed files with 11346 additions and 0 deletions

.gitignore (vendored, 7 lines changed)

@@ -162,3 +162,10 @@ cython_debug/
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# Kin
kin.db
kin.db-wal
kin.db-shm
PROGRESS.md
node_modules/
web/frontend/dist/

CLAUDE.md (new file, 19 lines)

@@ -0,0 +1,19 @@
# Kin — multi-agent project orchestrator

## What it is
A virtual software company. Intake → PM → specialists.
Each agent = a separate Claude Code process with an isolated context.

## Stack
Python 3.11+, SQLite, FastAPI (future), Vue 3 (GUI, future)

## Architecture
Full spec: DESIGN.md

## Rules
- Do NOT create files unless necessary
- Commit after every working stage
- SQLite kin.db is the single source of truth
- Agent prompts live in agents/prompts/*.md
- Tests are mandatory for core/
- Shared instructions: ~/projects/CLAUDE.md

DESIGN.md (new file, 1291 lines)

File diff suppressed because it is too large

agents/__init__.py (new file, empty)

agents/bootstrap.py (new file, 711 lines)

@@ -0,0 +1,711 @@
"""
Kin bootstrap auto-detect project tech stack, modules, and decisions.
Scans project directory, CLAUDE.md, and optionally Obsidian vault.
Writes results to kin.db via core.models.
"""
import json
import re
from pathlib import Path
from typing import Any
DEFAULT_VAULT = Path.home() / "Library" / "Mobile Documents" / "iCloud~md~obsidian" / "Documents"
# ---------------------------------------------------------------------------
# Tech stack detection
# ---------------------------------------------------------------------------
# package.json dependency → tech label
_NPM_MARKERS = {
"vue": "vue3", "nuxt": "nuxt3", "react": "react", "next": "nextjs",
"svelte": "svelte", "angular": "angular",
"typescript": "typescript", "vite": "vite", "webpack": "webpack",
"express": "express", "fastify": "fastify", "koa": "koa",
"pinia": "pinia", "vuex": "vuex", "redux": "redux",
"tailwindcss": "tailwind", "prisma": "prisma", "drizzle-orm": "drizzle",
"pg": "postgresql", "mysql2": "mysql", "better-sqlite3": "sqlite",
"axios": "axios", "puppeteer": "puppeteer", "playwright": "playwright",
}
# Config files → tech label
_FILE_MARKERS = {
"nuxt.config.ts": "nuxt3", "nuxt.config.js": "nuxt3",
"vite.config.ts": "vite", "vite.config.js": "vite",
"tsconfig.json": "typescript",
"tailwind.config.js": "tailwind", "tailwind.config.ts": "tailwind",
"docker-compose.yml": "docker", "docker-compose.yaml": "docker",
"Dockerfile": "docker",
"go.mod": "go", "Cargo.toml": "rust",
"requirements.txt": "python", "pyproject.toml": "python",
"setup.py": "python", "Pipfile": "python",
".eslintrc.js": "eslint", ".prettierrc": "prettier",
}
_SKIP_DIRS = {"node_modules", ".git", "dist", ".next", ".nuxt", "__pycache__", ".venv", "venv"}
def detect_tech_stack(project_path: Path) -> list[str]:
    """Detect tech stack from project files.

    Searches recursively up to depth 3, skipping node_modules/.git/dist.
    Falls back to CLAUDE.md heuristics if no files found.
    """
    stack: set[str] = set()
    # Recursive search for config files and package.json (depth ≤ 3)
    for fpath in _walk_files(project_path, max_depth=3):
        fname = fpath.name
        if fname in _FILE_MARKERS:
            stack.add(_FILE_MARKERS[fname])
        if fname == "package.json":
            stack.update(_parse_package_json(fpath))
        if fname == "requirements.txt":
            stack.update(_parse_requirements_txt(fpath))
        if fname == "go.mod":
            stack.add("go")
            try:
                text = fpath.read_text(errors="replace")
                if "gin-gonic" in text:
                    stack.add("gin")
                if "fiber" in text:
                    stack.add("fiber")
            except OSError:
                pass
    # Fallback: extract tech hints from CLAUDE.md if no config files found
    if not stack:
        stack.update(_detect_stack_from_claude_md(project_path))
    return sorted(stack)

# CLAUDE.md text → tech labels (for fallback when project files are on a remote server)
_CLAUDE_MD_TECH_HINTS = {
    r"(?i)vue[\s.]?3": "vue3", r"(?i)vue[\s.]?2": "vue2",
    r"(?i)\bnuxt\b": "nuxt3", r"(?i)\breact\b": "react",
    r"(?i)\btypescript\b": "typescript", r"(?i)\bvite\b": "vite",
    r"(?i)\btailwind": "tailwind",
    r"(?i)node\.?js": "nodejs", r"(?i)\bexpress\b": "express",
    r"(?i)postgresql|postgres": "postgresql",
    r"(?i)\bsqlite\b": "sqlite", r"(?i)\bmysql\b": "mysql",
    r"(?i)\bdocker\b": "docker",
    r"(?i)\bpython\b": "python", r"(?i)\bfastapi\b": "fastapi",
    r"(?i)\bdjango\b": "django", r"(?i)\bflask\b": "flask",
    r"(?i)\bgo\b.*(?:gin|fiber|module)": "go",
    r"(?i)\bnginx\b": "nginx",
    r"(?i)\bpinia\b": "pinia", r"(?i)\bvuex\b": "vuex",
}

def _detect_stack_from_claude_md(project_path: Path) -> list[str]:
    """Fallback: infer tech stack from CLAUDE.md text when no config files exist."""
    claude_md = project_path / "CLAUDE.md"
    if not claude_md.exists():
        return []
    try:
        text = claude_md.read_text(errors="replace")[:5000]  # First 5KB is enough
    except OSError:
        return []
    stack = []
    for pattern, tech in _CLAUDE_MD_TECH_HINTS.items():
        if re.search(pattern, text):
            stack.append(tech)
    return stack

def _walk_files(root: Path, max_depth: int = 3, _depth: int = 0):
    """Yield files up to max_depth, skipping node_modules/dist/.git."""
    if _depth > max_depth:
        return
    try:
        entries = sorted(root.iterdir())
    except (OSError, PermissionError):
        return
    for entry in entries:
        if entry.is_file():
            yield entry
        elif entry.is_dir() and entry.name not in _SKIP_DIRS and not entry.name.startswith("."):
            yield from _walk_files(entry, max_depth, _depth + 1)

def _parse_package_json(path: Path) -> list[str]:
    """Extract tech labels from package.json."""
    try:
        data = json.loads(path.read_text(errors="replace"))
    except (json.JSONDecodeError, OSError):
        return []
    stack = []
    all_deps = {}
    for key in ("dependencies", "devDependencies"):
        all_deps.update(data.get(key, {}))
    for dep_name, tech in _NPM_MARKERS.items():
        if dep_name in all_deps:
            stack.append(tech)
    return stack

def _parse_requirements_txt(path: Path) -> list[str]:
    """Extract tech labels from requirements.txt."""
    markers = {
        "fastapi": "fastapi", "flask": "flask", "django": "django",
        "sqlalchemy": "sqlalchemy", "celery": "celery", "redis": "redis",
        "pydantic": "pydantic", "click": "click", "pytest": "pytest",
    }
    stack = []
    try:
        text = path.read_text(errors="replace").lower()
    except OSError:
        return stack
    for pkg, tech in markers.items():
        if pkg in text:
            stack.append(tech)
    return stack


def _is_inside_node_modules(path: Path, root: Path) -> bool:
    rel = path.relative_to(root)
    return "node_modules" in rel.parts

# ---------------------------------------------------------------------------
# Module detection
# ---------------------------------------------------------------------------

_FRONTEND_EXTS = {".vue", ".jsx", ".tsx", ".svelte"}
_BACKEND_MARKERS = {"express", "fastify", "koa", "router", "controller", "middleware"}

def detect_modules(project_path: Path) -> list[dict]:
    """Scan for modules: checks root subdirs, */src/ patterns, standard names.

    Strategy:
    1. Find all "source root" dirs (src/, app/, lib/ at root or inside top-level dirs)
    2. Each first-level subdir of a source root = a module candidate
    3. Top-level dirs with their own src/ are treated as component roots
       (e.g. frontend/, backend-pg/) — scan THEIR src/ for modules
    """
    modules = []
    scan_dirs: list[tuple[Path, str | None]] = []  # (dir, prefix_hint)

    # Direct source dirs in root
    for name in ("src", "app", "lib"):
        d = project_path / name
        if d.is_dir():
            scan_dirs.append((d, None))

    # Top-level component dirs (frontend/, backend/, backend-pg/, server/, client/)
    # These get scanned for src/ inside, or directly if they contain source files
    for child in sorted(project_path.iterdir()):
        if not child.is_dir() or child.name in _SKIP_DIRS or child.name.startswith("."):
            continue
        child_src = child / "src"
        if child_src.is_dir():
            # e.g. frontend/src/, backend-pg/src/ — scan their subdirs
            scan_dirs.append((child_src, child.name))
        elif child.name in ("frontend", "backend", "server", "client", "web", "api"):
            # No src/ but it's a known component dir — scan it directly
            scan_dirs.append((child, child.name))

    seen = set()
    for scan_dir, prefix in scan_dirs:
        for child in sorted(scan_dir.iterdir()):
            if not child.is_dir() or child.name in _SKIP_DIRS or child.name.startswith("."):
                continue
            mod = _analyze_module(child, project_path)
            key = (mod["name"], mod["path"])
            if key not in seen:
                seen.add(key)
                modules.append(mod)
    return modules

def _analyze_module(dir_path: Path, project_root: Path) -> dict:
    """Analyze a directory to determine module type and file count."""
    rel_path = str(dir_path.relative_to(project_root)) + "/"
    files = list(dir_path.rglob("*"))
    source_files = [f for f in files if f.is_file() and not f.name.startswith(".")]
    file_count = len(source_files)
    # Determine type
    exts = {f.suffix for f in source_files}
    mod_type = _guess_module_type(dir_path, exts, source_files)
    return {
        "name": dir_path.name,
        "type": mod_type,
        "path": rel_path,
        "file_count": file_count,
    }

def _guess_module_type(dir_path: Path, exts: set[str], files: list[Path]) -> str:
    """Guess if module is frontend, backend, shared, or infra."""
    # Obvious frontend
    if exts & _FRONTEND_EXTS:
        return "frontend"
    # Check file contents for backend markers
    has_backend_marker = False
    for f in files[:20]:  # Sample first 20 files
        if f.suffix in (".ts", ".js", ".mjs"):
            try:
                text = f.read_text(errors="replace")[:2000]
                text_lower = text.lower()
                if any(m in text_lower for m in _BACKEND_MARKERS):
                    has_backend_marker = True
                    break
            except OSError:
                continue
    if has_backend_marker:
        return "backend"
    # Infra patterns
    name = dir_path.name.lower()
    if name in ("infra", "deploy", "scripts", "ci", "docker", "nginx", "config"):
        return "infra"
    # Shared by default if ambiguous
    if exts & {".ts", ".js", ".py"}:
        return "shared"
    return "shared"

# ---------------------------------------------------------------------------
# Decisions from CLAUDE.md
# ---------------------------------------------------------------------------

_DECISION_PATTERNS = [
    (r"(?i)\b(GOTCHA|ВАЖНО|WARNING|ВНИМАНИЕ)[:\s]+(.*?)(?=\n[#\-]|\n\n|\Z)", "gotcha"),
    (r"(?i)\b(WORKAROUND|ОБХОДНОЙ|ХАК)[:\s]+(.*?)(?=\n[#\-]|\n\n|\Z)", "workaround"),
    (r"(?i)\b(FIXME|БАГИ?)[:\s]+(.*?)(?=\n[#\-]|\n\n|\Z)", "gotcha"),
    (r"(?i)\b(РЕШЕНИЕ|DECISION)[:\s]+(.*?)(?=\n[#\-]|\n\n|\Z)", "decision"),
    (r"(?i)\b(CONVENTION|СОГЛАШЕНИЕ|ПРАВИЛО)[:\s]+(.*?)(?=\n[#\-]|\n\n|\Z)", "convention"),
]

# Section headers that likely contain decisions
_DECISION_SECTIONS = [
    r"(?i)known\s+issues?", r"(?i)workaround", r"(?i)gotcha",
    r"(?i)решени[яе]", r"(?i)грабл[ия]",
    r"(?i)conventions?", r"(?i)правила", r"(?i)нюансы",
]

# Section headers about UNRELATED services — skip these entirely
_UNRELATED_SECTION_PATTERNS = [
    r"(?i)jitsi", r"(?i)nextcloud", r"(?i)prosody",
    r"(?i)coturn", r"(?i)turn\b", r"(?i)asterisk",
    r"(?i)ghost\s+блог", r"(?i)onlyoffice",
    r"(?i)git\s+sync", r"(?i)\.env\s+добав",
    r"(?i)goip\s+watcher", r"(?i)tbank\s+monitor",  # monitoring services
    r"(?i)фикс\s+удален",  # commit-level fixes (not decisions)
]

# Noise patterns — individual items that look like noise, not decisions
_NOISE_PATTERNS = [
    r"^[0-9a-f]{6,40}$",  # commit hashes
    r"^\s*(docker|ssh|scp|git|curl|sudo)\s",  # shell commands
    r"^`[^`]+`$",  # inline code-only items
    r"(?i)(prosody|jitsi|jicofo|jvb|coturn|nextcloud|onlyoffice|ghost)",  # unrelated services
    r"(?i)\.jitsi-meet-cfg",  # jitsi config paths
    r"(?i)(meet\.jitsi|sitemeet\.org)",  # jitsi domains
    r"(?i)(cloud\.vault\.red|office\.vault)",  # nextcloud domains
    r"(?i)JWT_APP_(ID|SECRET)",  # jwt config lines
    r"(?i)XMPP_",  # prosody config
    r"\(коммит\s+`?[0-9a-f]+`?\)",  # "(коммит `a33c2b9`)" references
    r"(?i)known_uids|idle_loop|reconnect",  # goip-watcher internals
]

def _is_noise(text: str) -> bool:
    """Check if a decision candidate is noise."""
    # Clean markdown bold for matching
    clean = re.sub(r"\*\*([^*]*)\*\*", r"\1", text).strip()
    return any(re.search(p, clean) for p in _NOISE_PATTERNS)


def _split_into_sections(text: str) -> list[tuple[str, str]]:
    """Split markdown into (header, body) pairs by ## headers.

    Returns list of (header_text, body_text) tuples.
    Anything before the first ## is returned with header="".
    """
    parts = re.split(r"(?m)^(##\s+.+)$", text)
    sections = []
    current_header = ""
    current_body = parts[0] if parts else ""
    for i in range(1, len(parts), 2):
        if current_header or current_body.strip():
            sections.append((current_header, current_body))
        current_header = parts[i].strip()
        current_body = parts[i + 1] if i + 1 < len(parts) else ""
    if current_header or current_body.strip():
        sections.append((current_header, current_body))
    return sections


def _is_unrelated_section(header: str) -> bool:
    """Check if a section header is about an unrelated service."""
    return any(re.search(p, header) for p in _UNRELATED_SECTION_PATTERNS)

def extract_decisions_from_claude_md(
    project_path: Path,
    project_id: str | None = None,
    project_name: str | None = None,
) -> list[dict]:
    """Parse CLAUDE.md for decisions, gotchas, workarounds.

    Filters out:
    - Sections about unrelated services (Jitsi, Nextcloud, Prosody, etc.)
    - Noise: commit hashes, docker/ssh commands, paths to external services
    - If CLAUDE.md has multi-project sections, only extracts for current project
    """
    claude_md = project_path / "CLAUDE.md"
    if not claude_md.exists():
        return []
    try:
        text = claude_md.read_text(errors="replace")
    except OSError:
        return []

    # Split into sections and filter out unrelated ones
    sections = _split_into_sections(text)
    relevant_text = []
    for header, body in sections:
        if _is_unrelated_section(header):
            continue
        relevant_text.append(header + "\n" + body)
    filtered_text = "\n".join(relevant_text)

    decisions = []
    seen_titles = set()

    # Pattern-based extraction from relevant sections only
    for pattern, dec_type in _DECISION_PATTERNS:
        for m in re.finditer(pattern, filtered_text, re.DOTALL):
            body = m.group(2).strip()
            if not body or len(body) < 10:
                continue
            lines = body.split("\n")
            title = lines[0].strip().rstrip(".")[:100]
            desc = body
            if _is_noise(title) or _is_noise(desc):
                continue
            if title not in seen_titles:
                seen_titles.add(title)
                decisions.append({
                    "type": dec_type,
                    "title": title,
                    "description": desc,
                    "category": _guess_category(title + " " + desc),
                })

    # Section-based extraction: find ### or #### headers matching decision patterns
    sub_sections = re.split(r"(?m)^(#{1,4}\s+.*?)$", filtered_text)
    for i, section in enumerate(sub_sections):
        if any(re.search(pat, section) for pat in _DECISION_SECTIONS):
            if i + 1 < len(sub_sections):
                content = sub_sections[i + 1].strip()
                for line in content.split("\n"):
                    line = line.strip()
                    # Numbered items (1. **text**) or bullet items
                    item = None
                    if re.match(r"^\d+\.\s+", line):
                        item = re.sub(r"^\d+\.\s+", "", line).strip()
                    elif line.startswith(("- ", "* ", "• ")):
                        item = line.lstrip("-*• ").strip()
                    if not item or len(item) < 10:
                        continue
                    # Clean bold markers for title
                    clean = re.sub(r"\*\*([^*]+)\*\*", r"\1", item)
                    if _is_noise(clean):
                        continue
                    title = clean[:100]
                    if title not in seen_titles:
                        seen_titles.add(title)
                        decisions.append({
                            "type": "gotcha",
                            "title": title,
                            "description": item,
                            "category": _guess_category(item),
                        })
    return decisions

def _guess_category(text: str) -> str:
    """Best-effort category guess from text content."""
    t = text.lower()
    if any(w in t for w in ("css", "ui", "vue", "компонент", "стил", "layout", "mobile", "safari", "bottom-sheet")):
        return "ui"
    if any(w in t for w in ("api", "endpoint", "rest", "route", "запрос", "fetch")):
        return "api"
    if any(w in t for w in ("sql", "база", "миграц", "postgres", "sqlite", "бд", "schema")):
        return "architecture"
    if any(w in t for w in ("безопас", "security", "xss", "auth", "token", "csrf", "injection")):
        return "security"
    if any(w in t for w in ("docker", "deploy", "nginx", "ci", "cd", "infra", "сервер")):
        return "devops"
    if any(w in t for w in ("performance", "cache", "оптимиз", "lazy", "скорость")):
        return "performance"
    return "architecture"

# ---------------------------------------------------------------------------
# Obsidian vault scanning
# ---------------------------------------------------------------------------

def find_vault_root(vault_path: Path | None = None) -> Path | None:
    """Find the Obsidian vault root directory.

    If vault_path is given but doesn't exist, returns None (don't fallback).
    If vault_path is None, tries the default iCloud Obsidian location.
    """
    if vault_path is not None:
        return vault_path if vault_path.is_dir() else None
    # Default: iCloud Obsidian path
    default = DEFAULT_VAULT
    if default.is_dir():
        # Look for a vault inside (usually one level deep)
        for child in default.iterdir():
            if child.is_dir() and not child.name.startswith("."):
                return child
    return None

def scan_obsidian(
    vault_root: Path,
    project_id: str,
    project_name: str,
    project_dir_name: str | None = None,
) -> dict:
    """Scan Obsidian vault for project-related notes.

    Returns {"tasks": [...], "decisions": [...], "files_scanned": int}
    """
    result = {"tasks": [], "decisions": [], "files_scanned": 0}

    # Build search terms
    search_terms = {project_id.lower()}
    if project_name:
        search_terms.add(project_name.lower())
    if project_dir_name:
        search_terms.add(project_dir_name.lower())

    # Find project folder in vault
    project_files: list[Path] = []
    for term in list(search_terms):
        for child in vault_root.iterdir():
            if child.is_dir() and term in child.name.lower():
                for f in child.rglob("*.md"):
                    if f not in project_files:
                        project_files.append(f)

    # Also search for files mentioning the project by name
    for md_file in vault_root.glob("*.md"):
        try:
            text = md_file.read_text(errors="replace")[:5000].lower()
        except OSError:
            continue
        if any(term in text for term in search_terms):
            if md_file not in project_files:
                project_files.append(md_file)

    result["files_scanned"] = len(project_files)
    for f in project_files:
        try:
            text = f.read_text(errors="replace")
        except OSError:
            continue
        _extract_obsidian_tasks(text, f.stem, result["tasks"])
        _extract_obsidian_decisions(text, f.stem, result["decisions"])
    return result

def _extract_obsidian_tasks(text: str, source: str, tasks: list[dict]):
    """Extract checkbox items from Obsidian markdown."""
    for m in re.finditer(r"^[-*]\s+\[([ xX])\]\s+(.+)$", text, re.MULTILINE):
        done = m.group(1).lower() == "x"
        title = m.group(2).strip()
        # Remove Obsidian wiki-links
        title = re.sub(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]", r"\1", title)
        if len(title) > 5:
            tasks.append({
                "title": title[:200],
                "done": done,
                "source": source,
            })


def _extract_obsidian_decisions(text: str, source: str, decisions: list[dict]):
    """Extract decisions/gotchas from Obsidian notes."""
    for pattern, dec_type in _DECISION_PATTERNS:
        for m in re.finditer(pattern, text, re.DOTALL):
            body = m.group(2).strip()
            if not body or len(body) < 10:
                continue
            title = body.split("\n")[0].strip()[:100]
            if _is_noise(title) or _is_noise(body):
                continue
            decisions.append({
                "type": dec_type,
                "title": title,
                "description": body,
                "category": _guess_category(body),
                "source": source,
            })
    # Also look for ВАЖНО/GOTCHA/FIXME inline markers not caught above
    for m in re.finditer(r"(?i)\*\*(ВАЖНО|GOTCHA|FIXME)\*\*[:\s]*(.*?)(?=\n|$)", text):
        body = m.group(2).strip()
        if not body or len(body) < 10:
            continue
        if _is_noise(body):
            continue
        decisions.append({
            "type": "gotcha",
            "title": body[:100],
            "description": body,
            "category": _guess_category(body),
            "source": source,
        })

# ---------------------------------------------------------------------------
# Formatting for CLI preview
# ---------------------------------------------------------------------------

def format_preview(
    project_id: str,
    name: str,
    path: str,
    tech_stack: list[str],
    modules: list[dict],
    decisions: list[dict],
    obsidian: dict | None = None,
) -> str:
    """Format bootstrap results for user review."""
    lines = [
        f"Project: {project_id} — {name}",
        f"Path: {path}",
        "",
        f"Tech stack: {', '.join(tech_stack) if tech_stack else '(not detected)'}",
        "",
    ]
    if modules:
        lines.append(f"Modules ({len(modules)}):")
        for m in modules:
            lines.append(f"  {m['name']} ({m['type']}) — {m['path']} ({m['file_count']} files)")
    else:
        lines.append("Modules: (none detected)")
    lines.append("")
    if decisions:
        lines.append(f"Decisions from CLAUDE.md ({len(decisions)}):")
        for i, d in enumerate(decisions, 1):
            lines.append(f"  #{i} {d['type']}: {d['title']}")
    else:
        lines.append("Decisions from CLAUDE.md: (none found)")
    if obsidian:
        lines.append("")
        lines.append(f"Obsidian vault ({obsidian['files_scanned']} files scanned):")
        if obsidian["tasks"]:
            pending = [t for t in obsidian["tasks"] if not t["done"]]
            done = [t for t in obsidian["tasks"] if t["done"]]
            lines.append(f"  Tasks: {len(pending)} pending, {len(done)} done")
            for t in pending[:10]:
                lines.append(f"    [ ] {t['title']}")
            if len(pending) > 10:
                lines.append(f"    ... and {len(pending) - 10} more")
            for t in done[:5]:
                lines.append(f"    [x] {t['title']}")
            if len(done) > 5:
                lines.append(f"    ... and {len(done) - 5} more done")
        else:
            lines.append("  Tasks: (none found)")
        if obsidian["decisions"]:
            lines.append(f"  Decisions: {len(obsidian['decisions'])}")
            for d in obsidian["decisions"][:5]:
                lines.append(f"    {d['type']}: {d['title']} (from {d['source']})")
            if len(obsidian["decisions"]) > 5:
                lines.append(f"    ... and {len(obsidian['decisions']) - 5} more")
        else:
            lines.append("  Decisions: (none found)")
    return "\n".join(lines)

# ---------------------------------------------------------------------------
# Write to DB
# ---------------------------------------------------------------------------
def save_to_db(
conn,
project_id: str,
name: str,
path: str,
tech_stack: list[str],
modules: list[dict],
decisions: list[dict],
obsidian: dict | None = None,
):
"""Save all bootstrap data to kin.db via models."""
from core import models
# Create project
claude_md = Path(path).expanduser() / "CLAUDE.md"
models.create_project(
conn, project_id, name, path,
tech_stack=tech_stack,
claude_md_path=str(claude_md) if claude_md.exists() else None,
)
# Add modules
for m in modules:
models.add_module(
conn, project_id, m["name"], m["type"], m["path"],
description=f"{m['file_count']} files",
)
# Add decisions from CLAUDE.md
for d in decisions:
models.add_decision(
conn, project_id, d["type"], d["title"], d["description"],
category=d.get("category"),
)
# Add Obsidian decisions
if obsidian:
for d in obsidian.get("decisions", []):
models.add_decision(
conn, project_id, d["type"], d["title"], d["description"],
category=d.get("category"),
tags=[f"obsidian:{d['source']}"],
)
# Import Obsidian tasks
task_num = 1
for t in obsidian.get("tasks", []):
task_id = f"{project_id.upper()}-OBS-{task_num:03d}"
status = "done" if t["done"] else "pending"
models.create_task(
conn, task_id, project_id, t["title"],
status=status,
brief={"source": f"obsidian:{t['source']}"},
)
task_num += 1


@@ -0,0 +1,35 @@
You are a Project Manager reviewing completed pipeline results.
Your job: analyze the output from all pipeline steps and create follow-up tasks.
## Rules
- Create one task per actionable item found in the pipeline output
- Group small related fixes into a single task when logical (e.g. "CORS + Helmet + CSP headers" = one task)
- Set priority based on severity: CRITICAL=1, HIGH=2, MEDIUM=4, LOW=6, INFO=8
- Set type: "hotfix" for CRITICAL/HIGH security, "debug" for bugs, "feature" for improvements, "refactor" for cleanup
- Each task must have a clear, actionable title
- Include enough context in brief so the assigned specialist can start without re-reading the full audit
- Skip informational/already-done items — only create tasks for things that need action
- If no follow-ups are needed, return an empty array
## Output format
Return ONLY valid JSON (no markdown, no explanation):
```json
[
{
"title": "Добавить requireAuth на admin endpoints",
"type": "hotfix",
"priority": 2,
"brief": "3 admin-эндпоинта без auth: /api/admin/collect-hot-tours, /api/admin/refresh-hotel-details, /api/admin/hotel-stats. Добавить middleware requireAuth."
},
{
"title": "Rate limiting на /api/auth/login",
"type": "feature",
"priority": 4,
"brief": "Эндпоинт login не имеет rate limiting. Добавить express-rate-limit: 5 попыток / 15 мин на IP."
}
]
```
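The severity-to-priority rule above is a fixed lookup; a hypothetical helper a consumer of this prompt might use (the fallback to MEDIUM is an assumption, not part of the prompt contract):

```python
# Severity-to-priority mapping from the rules above (lower = more urgent).
SEVERITY_PRIORITY = {"CRITICAL": 1, "HIGH": 2, "MEDIUM": 4, "LOW": 6, "INFO": 8}

def priority_for(severity: str) -> int:
    # Unknown severities fall back to MEDIUM rather than failing the run.
    return SEVERITY_PRIORITY.get(severity.upper(), 4)

print(priority_for("high"))  # → 2
```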

agents/prompts/pm.md Normal file

@@ -0,0 +1,58 @@
You are a Project Manager for the Kin multi-agent orchestrator.
Your job: decompose a task into a pipeline of specialist steps.
## Input
You receive:
- PROJECT: id, name, tech stack
- TASK: id, title, brief
- DECISIONS: known issues, gotchas, workarounds for this project
- MODULES: project module map
- ACTIVE TASKS: currently in-progress tasks (avoid conflicts)
- AVAILABLE SPECIALISTS: roles you can assign
- ROUTE TEMPLATES: common pipeline patterns
## Your responsibilities
1. Analyze the task and determine what type of work is needed
2. Select the right specialists from the available pool
3. Build an ordered pipeline with dependencies
4. Include relevant context hints for each specialist
5. Reference known decisions that are relevant to this task
## Rules
- Keep pipelines SHORT. 2-4 steps for most tasks.
- Always end with a tester or reviewer step for quality.
- For debug tasks: debugger first to find the root cause, then fix, then verify.
- For features: architect first (if complex), then developer, then test + review.
- Don't assign specialists who aren't needed.
- If a task is blocked or unclear, say so — don't guess.
## Output format
Return ONLY valid JSON (no markdown, no explanation):
```json
{
"analysis": "Brief analysis of what needs to be done",
"pipeline": [
{
"role": "debugger",
"model": "sonnet",
"brief": "What this specialist should do",
"module": "search",
"relevant_decisions": [1, 5, 12]
},
{
"role": "tester",
"model": "sonnet",
"depends_on": "debugger",
"brief": "Write regression test for the fix"
}
],
"estimated_steps": 2,
"route_type": "debug"
}
```
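A consumer of this JSON format may want to sanity-check the plan before executing it; a minimal sketch (field names follow the example above, the validator itself is an assumption):

```python
def validate_pipeline(plan: dict) -> list[str]:
    """Return a list of problems; an empty list means the plan looks usable."""
    steps = plan.get("pipeline")
    if not isinstance(steps, list) or not steps:
        return ["'pipeline' must be a non-empty array"]
    problems = []
    seen_roles = set()
    for i, step in enumerate(steps, 1):
        if "role" not in step:
            problems.append(f"step {i}: missing 'role'")
            continue
        dep = step.get("depends_on")
        if dep is not None and dep not in seen_roles:
            # depends_on must reference a role from an earlier step
            problems.append(f"step {i}: depends_on '{dep}' not defined earlier")
        seen_roles.add(step["role"])
    return problems

plan = {"pipeline": [{"role": "debugger"},
                     {"role": "tester", "depends_on": "debugger"}]}
print(validate_pipeline(plan))  # → []
```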


@@ -0,0 +1,73 @@
You are a Security Engineer performing a security audit.
## Scope
Analyze the codebase for security vulnerabilities. Focus on:
1. **Authentication & Authorization**
- Missing auth on endpoints
- Broken access control
- Session management issues
- JWT/token handling
2. **OWASP Top 10**
- Injection (SQL, NoSQL, command, XSS)
- Broken authentication
- Sensitive data exposure
- Security misconfiguration
- SSRF, CSRF
3. **Secrets & Credentials**
- Hardcoded secrets, API keys, passwords
- Secrets in git history
- Unencrypted sensitive data
- .env files exposed
4. **Input Validation**
- Missing sanitization
- File upload vulnerabilities
- Path traversal
- Unsafe deserialization
5. **Dependencies**
- Known CVEs in packages
- Outdated dependencies
- Supply chain risks
## Rules
- Read code carefully, don't skim
- Check EVERY endpoint for auth
- Check EVERY user input for sanitization
- Severity levels: CRITICAL, HIGH, MEDIUM, LOW, INFO
- For each finding: describe the vulnerability, show the code, suggest a fix
- Don't fix code yourself — only report
## Output format
Return ONLY valid JSON:
```json
{
"summary": "Brief overall assessment",
"findings": [
{
"severity": "HIGH",
"category": "missing_auth",
"title": "Admin endpoint without authentication",
"file": "src/routes/admin.js",
"line": 42,
"description": "The /api/admin/users endpoint has no auth middleware",
"recommendation": "Add requireAuth middleware before the handler",
"owasp": "A01:2021 Broken Access Control"
}
],
"stats": {
"files_reviewed": 15,
"critical": 0,
"high": 2,
"medium": 3,
"low": 1
}
}
```
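The `stats` block duplicates information already present in `findings`, so a consumer can recompute it as a consistency check; a sketch (not part of the audit contract):

```python
from collections import Counter

def recompute_stats(findings: list[dict]) -> dict:
    # Tally severities case-insensitively, reporting the four counted levels.
    counts = Counter(f.get("severity", "").lower() for f in findings)
    return {level: counts.get(level, 0)
            for level in ("critical", "high", "medium", "low")}

findings = [
    {"severity": "HIGH", "title": "Admin endpoint without authentication"},
    {"severity": "MEDIUM", "title": "Verbose error messages"},
]
print(recompute_stats(findings))  # → {'critical': 0, 'high': 1, 'medium': 1, 'low': 0}
```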

agents/runner.py Normal file

@@ -0,0 +1,321 @@
"""
Kin agent runner — launches Claude Code as a subprocess with role-specific context.
Each agent = separate process with isolated context.
"""
import json
import sqlite3
import subprocess
import time
from pathlib import Path
from typing import Any
from core import models
from core.context_builder import build_context, format_prompt
def run_agent(
conn: sqlite3.Connection,
role: str,
task_id: str,
project_id: str,
model: str = "sonnet",
previous_output: str | None = None,
brief_override: str | None = None,
dry_run: bool = False,
allow_write: bool = False,
) -> dict:
"""Run a single Claude Code agent as a subprocess.
1. Build context from DB
2. Format prompt with role template
3. Run: claude -p "{prompt}" --output-format json
4. Log result to agent_logs
5. Return {success, output, tokens_used, duration_seconds, cost_usd}
"""
# Build context
ctx = build_context(conn, task_id, role, project_id)
if previous_output:
ctx["previous_output"] = previous_output
if brief_override:
if ctx.get("task"):
ctx["task"]["brief"] = brief_override
prompt = format_prompt(ctx, role)
if dry_run:
return {
"success": True,
"output": None,
"prompt": prompt,
"role": role,
"model": model,
"dry_run": True,
}
# Determine working directory
project = models.get_project(conn, project_id)
working_dir = None
if project and role in ("debugger", "frontend_dev", "backend_dev", "tester", "security"):
project_path = Path(project["path"]).expanduser()
if project_path.is_dir():
working_dir = str(project_path)
# Run claude subprocess
start = time.monotonic()
result = _run_claude(prompt, model=model, working_dir=working_dir,
allow_write=allow_write)
duration = int(time.monotonic() - start)
# Parse output — ensure output_text is always a string for DB storage
raw_output = result.get("output", "")
if not isinstance(raw_output, str):
raw_output = json.dumps(raw_output, ensure_ascii=False)
output_text = raw_output
success = result["returncode"] == 0
parsed_output = _try_parse_json(output_text)
# Log FULL output to DB (no truncation)
models.log_agent_run(
conn,
project_id=project_id,
task_id=task_id,
agent_role=role,
action="execute",
input_summary=f"task={task_id}, model={model}",
output_summary=output_text or None,
tokens_used=result.get("tokens_used"),
model=model,
cost_usd=result.get("cost_usd"),
success=success,
error_message=result.get("error") if not success else None,
duration_seconds=duration,
)
return {
"success": success,
"output": parsed_output if parsed_output else output_text,
"raw_output": output_text,
"role": role,
"model": model,
"duration_seconds": duration,
"tokens_used": result.get("tokens_used"),
"cost_usd": result.get("cost_usd"),
}
def _run_claude(
prompt: str,
model: str = "sonnet",
working_dir: str | None = None,
allow_write: bool = False,
) -> dict:
"""Execute claude CLI as subprocess. Returns dict with output, returncode, etc."""
cmd = [
"claude",
"-p", prompt,
"--output-format", "json",
"--model", model,
]
if allow_write:
cmd.append("--dangerously-skip-permissions")
try:
proc = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=600, # 10 min max
cwd=working_dir,
)
except FileNotFoundError:
return {
"output": "",
"error": "claude CLI not found in PATH",
"returncode": 127,
}
except subprocess.TimeoutExpired:
return {
"output": "",
"error": "Agent timed out after 600s",
"returncode": 124,
}
# Always preserve the full raw stdout
raw_stdout = proc.stdout or ""
result: dict[str, Any] = {
"output": raw_stdout,
"error": proc.stderr if proc.returncode != 0 else None,
"returncode": proc.returncode,
}
# Parse JSON wrapper from claude --output-format json
# Extract metadata (tokens, cost) but keep output as the full content string
parsed = _try_parse_json(raw_stdout)
if isinstance(parsed, dict):
result["tokens_used"] = parsed.get("usage", {}).get("total_tokens")
result["cost_usd"] = parsed.get("cost_usd")
# Extract the agent's actual response, converting to string if needed
content = parsed.get("result") or parsed.get("content")
if content is not None:
result["output"] = content if isinstance(content, str) else json.dumps(content, ensure_ascii=False)
return result
def _try_parse_json(text: str) -> Any:
"""Try to parse JSON from text. Returns parsed obj or None."""
text = text.strip()
if not text:
return None
# Direct parse
try:
return json.loads(text)
except json.JSONDecodeError:
pass
# Try to find JSON block in markdown code fences
import re
m = re.search(r"```(?:json)?\s*\n(.*?)\n```", text, re.DOTALL)
if m:
try:
return json.loads(m.group(1))
except json.JSONDecodeError:
pass
# Try to find first { ... } or [ ... ]
for start_char, end_char in [("{", "}"), ("[", "]")]:
start = text.find(start_char)
if start >= 0:
# Find matching close
depth = 0
for i in range(start, len(text)):
if text[i] == start_char:
depth += 1
elif text[i] == end_char:
depth -= 1
if depth == 0:
try:
return json.loads(text[start:i + 1])
except json.JSONDecodeError:
break
return None
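The markdown-fence fallback in `_try_parse_json` can be checked in isolation; a self-contained sketch of that one path, using the same regex:

```python
import json
import re

def extract_fenced_json(text: str):
    # Mirrors the fence fallback above: grab the first ```json block and parse it.
    m = re.search(r"```(?:json)?\s*\n(.*?)\n```", text, re.DOTALL)
    if m:
        try:
            return json.loads(m.group(1))
        except json.JSONDecodeError:
            return None
    return None

reply = 'Here is the plan:\n```json\n{"pipeline": []}\n```\nDone.'
print(extract_fenced_json(reply))  # → {'pipeline': []}
```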
# ---------------------------------------------------------------------------
# Pipeline executor
# ---------------------------------------------------------------------------
def run_pipeline(
conn: sqlite3.Connection,
task_id: str,
steps: list[dict],
dry_run: bool = False,
allow_write: bool = False,
) -> dict:
"""Execute a multi-step pipeline of agents.
steps = [
{"role": "debugger", "model": "opus", "brief": "..."},
{"role": "tester", "depends_on": "debugger", "brief": "..."},
]
Returns {success, steps_completed, total_cost, total_tokens, total_duration, results}
"""
task = models.get_task(conn, task_id)
if not task:
return {"success": False, "error": f"Task '{task_id}' not found"}
project_id = task["project_id"]
# Determine route type from steps or task brief
route_type = "custom"
if task.get("brief") and isinstance(task["brief"], dict):
route_type = task["brief"].get("route_type", "custom") or "custom"
# Create pipeline in DB
pipeline = None
if not dry_run:
pipeline = models.create_pipeline(
conn, task_id, project_id, route_type, steps,
)
models.update_task(conn, task_id, status="in_progress")
results = []
total_cost = 0.0
total_tokens = 0
total_duration = 0
previous_output = None
for i, step in enumerate(steps):
role = step["role"]
model = step.get("model", "sonnet")
brief = step.get("brief")
result = run_agent(
conn, role, task_id, project_id,
model=model,
previous_output=previous_output,
brief_override=brief,
dry_run=dry_run,
allow_write=allow_write,
)
results.append(result)
if dry_run:
continue
# Accumulate stats
total_cost += result.get("cost_usd") or 0
total_tokens += result.get("tokens_used") or 0
total_duration += result.get("duration_seconds") or 0
if not result["success"]:
# Pipeline failed — stop and mark as failed
if pipeline:
models.update_pipeline(
conn, pipeline["id"],
status="failed",
total_cost_usd=total_cost,
total_tokens=total_tokens,
total_duration_seconds=total_duration,
)
models.update_task(conn, task_id, status="blocked")
return {
"success": False,
"error": f"Step {i+1}/{len(steps)} ({role}) failed",
"steps_completed": i,
"results": results,
"total_cost_usd": total_cost,
"total_tokens": total_tokens,
"total_duration_seconds": total_duration,
"pipeline_id": pipeline["id"] if pipeline else None,
}
# Chain output to next step
previous_output = result.get("raw_output") or result.get("output")
if isinstance(previous_output, (dict, list)):
previous_output = json.dumps(previous_output, ensure_ascii=False)
# Pipeline completed
if pipeline and not dry_run:
models.update_pipeline(
conn, pipeline["id"],
status="completed",
total_cost_usd=total_cost,
total_tokens=total_tokens,
total_duration_seconds=total_duration,
)
models.update_task(conn, task_id, status="review")
return {
"success": True,
"steps_completed": len(steps),
"results": results,
"total_cost_usd": total_cost,
"total_tokens": total_tokens,
"total_duration_seconds": total_duration,
"pipeline_id": pipeline["id"] if pipeline else None,
"dry_run": dry_run,
}
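The output-chaining behavior of the loop above can be illustrated with a stub in place of `run_agent` (the stub and step list are hypothetical scaffolding):

```python
import json

def stub_agent(role, previous_output=None):
    # Stands in for run_agent: echoes what it received from the prior step.
    return {"success": True,
            "raw_output": json.dumps({"role": role, "saw": previous_output})}

steps = [{"role": "debugger"}, {"role": "tester"}]
previous_output = None
results = []
for step in steps:
    result = stub_agent(step["role"], previous_output=previous_output)
    results.append(result)
    if not result["success"]:
        break  # the real pipeline marks the task blocked here
    previous_output = result["raw_output"]  # chain output to the next step

# The tester step received the debugger's serialized output.
print(json.loads(results[-1]["raw_output"])["saw"])
```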

agents/specialists.yaml Normal file

@@ -0,0 +1,104 @@
# Kin specialist pool — roles available for pipeline construction.
# PM selects from this pool based on task type.
specialists:
pm:
name: "Project Manager"
model: sonnet
tools: [Read, Grep, Glob]
description: "Decomposes tasks, selects specialists, builds pipelines"
permissions: read_only
context_rules:
decisions: all
modules: all
architect:
name: "Software Architect"
model: sonnet
tools: [Read, Grep, Glob]
description: "Designs solutions, reviews structure, writes specs"
permissions: read_only
context_rules:
decisions: all
modules: all
debugger:
name: "Debugger"
model: sonnet
tools: [Read, Grep, Glob, Bash]
description: "Finds root causes, reads logs, traces execution"
permissions: read_bash
working_dir: project
context_rules:
decisions: [gotcha, workaround]
frontend_dev:
name: "Frontend Developer"
model: sonnet
tools: [Read, Write, Edit, Bash, Glob, Grep]
description: "Implements UI: Vue, CSS, components, composables"
permissions: full
working_dir: project
context_rules:
decisions: [gotcha, workaround, convention]
backend_dev:
name: "Backend Developer"
model: sonnet
tools: [Read, Write, Edit, Bash, Glob, Grep]
description: "Implements API, services, database, business logic"
permissions: full
working_dir: project
context_rules:
decisions: [gotcha, workaround, convention]
tester:
name: "Tester"
model: sonnet
tools: [Read, Write, Bash, Glob, Grep]
description: "Writes and runs tests, verifies fixes"
permissions: full
working_dir: project
context_rules:
decisions: []
reviewer:
name: "Code Reviewer"
model: sonnet
tools: [Read, Grep, Glob]
description: "Reviews code for quality, conventions, bugs"
permissions: read_only
context_rules:
decisions: [convention]
security:
name: "Security Engineer"
model: sonnet
tools: [Read, Grep, Glob, Bash]
description: "OWASP audit, auth checks, secrets scan, vulnerability analysis"
permissions: read_bash
working_dir: project
context_rules:
decisions_category: security
# Route templates — PM uses these to build pipelines
routes:
debug:
steps: [debugger, tester, frontend_dev, tester]
description: "Find bug → verify → fix → verify fix"
feature:
steps: [architect, frontend_dev, tester, reviewer]
description: "Design → implement → test → review"
refactor:
steps: [architect, frontend_dev, tester, reviewer]
description: "Plan refactor → implement → test → review"
hotfix:
steps: [debugger, frontend_dev, tester]
description: "Find → fix → verify (fast track)"
security_audit:
steps: [security, architect]
description: "Audit → remediation plan"

cli/__init__.py Normal file

cli/main.py Normal file

@@ -0,0 +1,629 @@
"""
Kin CLI — command-line interface for the multi-agent orchestrator.
Uses core.models for all data access, never raw SQL.
"""
import json
import sys
from pathlib import Path
import click
# Ensure project root is on sys.path
sys.path.insert(0, str(Path(__file__).parent.parent))
from core.db import init_db
from core import models
from agents.bootstrap import (
detect_tech_stack, detect_modules, extract_decisions_from_claude_md,
find_vault_root, scan_obsidian, format_preview, save_to_db,
)
DEFAULT_DB = Path.home() / ".kin" / "kin.db"
def get_conn(db_path: Path = DEFAULT_DB):
db_path.parent.mkdir(parents=True, exist_ok=True)
return init_db(db_path)
def _parse_json(ctx, param, value):
"""Click callback: parse a JSON string or return None."""
if value is None:
return None
try:
return json.loads(value)
except json.JSONDecodeError:
raise click.BadParameter(f"Invalid JSON: {value}")
def _table(headers: list[str], rows: list[list[str]], min_width: int = 6):
"""Render a simple aligned text table."""
widths = [max(min_width, len(h)) for h in headers]
for row in rows:
for i, cell in enumerate(row):
if i < len(widths):
widths[i] = max(widths[i], len(str(cell)))
fmt = " ".join(f"{{:<{w}}}" for w in widths)
lines = [fmt.format(*headers), fmt.format(*("-" * w for w in widths))]
for row in rows:
lines.append(fmt.format(*[str(c) for c in row]))
return "\n".join(lines)
def _auto_task_id(conn, project_id: str) -> str:
"""Generate next task ID like PROJ-001."""
prefix = project_id.upper()
existing = models.list_tasks(conn, project_id=project_id)
max_num = 0
for t in existing:
tid = t["id"]
if tid.startswith(prefix + "-"):
try:
num = int(tid.split("-", 1)[1])
max_num = max(max_num, num)
except ValueError:
pass
return f"{prefix}-{max_num + 1:03d}"
# ===========================================================================
# Root group
# ===========================================================================
@click.group()
@click.option("--db", type=click.Path(), default=None, envvar="KIN_DB",
help="Path to kin.db (default: ~/.kin/kin.db, or $KIN_DB)")
@click.pass_context
def cli(ctx, db):
"""Kin — multi-agent project orchestrator."""
ctx.ensure_object(dict)
db_path = Path(db) if db else DEFAULT_DB
ctx.obj["conn"] = get_conn(db_path)
# ===========================================================================
# project
# ===========================================================================
@cli.group()
def project():
"""Manage projects."""
@project.command("add")
@click.argument("id")
@click.argument("name")
@click.argument("path")
@click.option("--tech-stack", callback=_parse_json, default=None, help='JSON array, e.g. \'["vue3","nuxt"]\'')
@click.option("--status", default="active")
@click.option("--priority", type=int, default=5)
@click.option("--language", default="ru", help="Response language for agents (ru, en, etc.)")
@click.pass_context
def project_add(ctx, id, name, path, tech_stack, status, priority, language):
"""Add a new project."""
conn = ctx.obj["conn"]
p = models.create_project(conn, id, name, path,
tech_stack=tech_stack, status=status, priority=priority,
language=language)
click.echo(f"Created project: {p['id']} ({p['name']})")
@project.command("list")
@click.option("--status", default=None)
@click.pass_context
def project_list(ctx, status):
"""List projects."""
conn = ctx.obj["conn"]
projects = models.list_projects(conn, status=status)
if not projects:
click.echo("No projects found.")
return
rows = [[p["id"], p["name"], p["status"], str(p["priority"]), p["path"]]
for p in projects]
click.echo(_table(["ID", "Name", "Status", "Pri", "Path"], rows))
@project.command("show")
@click.argument("id")
@click.pass_context
def project_show(ctx, id):
"""Show project details."""
conn = ctx.obj["conn"]
p = models.get_project(conn, id)
if not p:
click.echo(f"Project '{id}' not found.", err=True)
raise SystemExit(1)
click.echo(f"Project: {p['id']}")
click.echo(f" Name: {p['name']}")
click.echo(f" Path: {p['path']}")
click.echo(f" Status: {p['status']}")
click.echo(f" Priority: {p['priority']}")
if p.get("tech_stack"):
click.echo(f" Tech stack: {', '.join(p['tech_stack'])}")
if p.get("forgejo_repo"):
click.echo(f" Forgejo: {p['forgejo_repo']}")
click.echo(f" Created: {p['created_at']}")
# ===========================================================================
# task
# ===========================================================================
@cli.group()
def task():
"""Manage tasks."""
@task.command("add")
@click.argument("project_id")
@click.argument("title")
@click.option("--type", "route_type", type=click.Choice(["debug", "feature", "refactor", "hotfix"]), default=None)
@click.option("--priority", type=int, default=5)
@click.pass_context
def task_add(ctx, project_id, title, route_type, priority):
"""Add a task to a project. ID is auto-generated (PROJ-001)."""
conn = ctx.obj["conn"]
p = models.get_project(conn, project_id)
if not p:
click.echo(f"Project '{project_id}' not found.", err=True)
raise SystemExit(1)
task_id = _auto_task_id(conn, project_id)
brief = {"route_type": route_type} if route_type else None
t = models.create_task(conn, task_id, project_id, title,
priority=priority, brief=brief)
click.echo(f"Created task: {t['id']}{t['title']}")
@task.command("list")
@click.option("--project", "project_id", default=None)
@click.option("--status", default=None)
@click.pass_context
def task_list(ctx, project_id, status):
"""List tasks."""
conn = ctx.obj["conn"]
tasks = models.list_tasks(conn, project_id=project_id, status=status)
if not tasks:
click.echo("No tasks found.")
return
rows = [[t["id"], t["project_id"], t["title"][:40], t["status"],
str(t["priority"]), t.get("assigned_role") or "-"]
for t in tasks]
click.echo(_table(["ID", "Project", "Title", "Status", "Pri", "Role"], rows))
@task.command("show")
@click.argument("id")
@click.pass_context
def task_show(ctx, id):
"""Show task details."""
conn = ctx.obj["conn"]
t = models.get_task(conn, id)
if not t:
click.echo(f"Task '{id}' not found.", err=True)
raise SystemExit(1)
click.echo(f"Task: {t['id']}")
click.echo(f" Project: {t['project_id']}")
click.echo(f" Title: {t['title']}")
click.echo(f" Status: {t['status']}")
click.echo(f" Priority: {t['priority']}")
if t.get("assigned_role"):
click.echo(f" Role: {t['assigned_role']}")
if t.get("parent_task_id"):
click.echo(f" Parent: {t['parent_task_id']}")
if t.get("brief"):
click.echo(f" Brief: {json.dumps(t['brief'], ensure_ascii=False)}")
if t.get("spec"):
click.echo(f" Spec: {json.dumps(t['spec'], ensure_ascii=False)}")
click.echo(f" Created: {t['created_at']}")
click.echo(f" Updated: {t['updated_at']}")
# ===========================================================================
# decision
# ===========================================================================
@cli.group()
def decision():
"""Manage decisions and gotchas."""
@decision.command("add")
@click.argument("project_id")
@click.argument("type", type=click.Choice(["decision", "gotcha", "workaround", "rejected_approach", "convention"]))
@click.argument("title")
@click.argument("description")
@click.option("--category", default=None)
@click.option("--tags", callback=_parse_json, default=None, help='JSON array, e.g. \'["ios","css"]\'')
@click.option("--task-id", default=None)
@click.pass_context
def decision_add(ctx, project_id, type, title, description, category, tags, task_id):
"""Record a decision, gotcha, or convention."""
conn = ctx.obj["conn"]
p = models.get_project(conn, project_id)
if not p:
click.echo(f"Project '{project_id}' not found.", err=True)
raise SystemExit(1)
d = models.add_decision(conn, project_id, type, title, description,
category=category, tags=tags, task_id=task_id)
click.echo(f"Added {d['type']}: #{d['id']}{d['title']}")
@decision.command("list")
@click.argument("project_id")
@click.option("--category", default=None)
@click.option("--tag", multiple=True, help="Filter by tag (can repeat)")
@click.option("--type", "types", multiple=True,
type=click.Choice(["decision", "gotcha", "workaround", "rejected_approach", "convention"]),
help="Filter by type (can repeat)")
@click.pass_context
def decision_list(ctx, project_id, category, tag, types):
"""List decisions for a project."""
conn = ctx.obj["conn"]
tags_list = list(tag) if tag else None
types_list = list(types) if types else None
decisions = models.get_decisions(conn, project_id, category=category,
tags=tags_list, types=types_list)
if not decisions:
click.echo("No decisions found.")
return
rows = [[str(d["id"]), d["type"], d["category"] or "-",
d["title"][:50], d["created_at"][:10]]
for d in decisions]
click.echo(_table(["#", "Type", "Category", "Title", "Date"], rows))
# ===========================================================================
# module
# ===========================================================================
@cli.group()
def module():
"""Manage project modules."""
@module.command("add")
@click.argument("project_id")
@click.argument("name")
@click.argument("type", type=click.Choice(["frontend", "backend", "shared", "infra"]))
@click.argument("path")
@click.option("--description", default=None)
@click.option("--owner-role", default=None)
@click.pass_context
def module_add(ctx, project_id, name, type, path, description, owner_role):
"""Register a project module."""
conn = ctx.obj["conn"]
p = models.get_project(conn, project_id)
if not p:
click.echo(f"Project '{project_id}' not found.", err=True)
raise SystemExit(1)
m = models.add_module(conn, project_id, name, type, path,
description=description, owner_role=owner_role)
click.echo(f"Added module: {m['name']} ({m['type']}) at {m['path']}")
@module.command("list")
@click.argument("project_id")
@click.pass_context
def module_list(ctx, project_id):
"""List modules for a project."""
conn = ctx.obj["conn"]
mods = models.get_modules(conn, project_id)
if not mods:
click.echo("No modules found.")
return
rows = [[m["name"], m["type"], m["path"], m.get("owner_role") or "-",
m.get("description") or ""]
for m in mods]
click.echo(_table(["Name", "Type", "Path", "Owner", "Description"], rows))
# ===========================================================================
# status
# ===========================================================================
@cli.command("status")
@click.argument("project_id", required=False)
@click.pass_context
def status(ctx, project_id):
"""Project status overview. Without args — all projects. With id — detailed."""
conn = ctx.obj["conn"]
if project_id:
p = models.get_project(conn, project_id)
if not p:
click.echo(f"Project '{project_id}' not found.", err=True)
raise SystemExit(1)
tasks = models.list_tasks(conn, project_id=project_id)
counts = {}
for t in tasks:
counts[t["status"]] = counts.get(t["status"], 0) + 1
click.echo(f"Project: {p['id']}{p['name']} [{p['status']}]")
click.echo(f" Path: {p['path']}")
if p.get("tech_stack"):
click.echo(f" Stack: {', '.join(p['tech_stack'])}")
click.echo(f" Tasks: {len(tasks)} total")
for s in ["pending", "in_progress", "review", "done", "blocked"]:
if counts.get(s, 0) > 0:
click.echo(f" {s}: {counts[s]}")
if tasks:
click.echo("")
rows = [[t["id"], t["title"][:40], t["status"],
t.get("assigned_role") or "-"]
for t in tasks]
click.echo(_table(["ID", "Title", "Status", "Role"], rows))
else:
summary = models.get_project_summary(conn)
if not summary:
click.echo("No projects.")
return
rows = [[s["id"], s["name"][:25], s["status"], str(s["priority"]),
str(s["total_tasks"]), str(s["done_tasks"]),
str(s["active_tasks"]), str(s["blocked_tasks"])]
for s in summary]
click.echo(_table(
["ID", "Name", "Status", "Pri", "Total", "Done", "Active", "Blocked"],
rows,
))
# ===========================================================================
# cost
# ===========================================================================
@cli.command("cost")
@click.option("--last", "period", default="7d", help="Period: 7d, 30d, etc.")
@click.pass_context
def cost(ctx, period):
"""Show cost summary by project."""
# Parse period like "7d", "30d"
period = period.strip().lower()
if period.endswith("d"):
try:
days = int(period[:-1])
except ValueError:
click.echo(f"Invalid period: {period}. Use e.g. 7d, 30d.", err=True)
raise SystemExit(1)
else:
try:
days = int(period)
except ValueError:
click.echo(f"Invalid period: {period}. Use e.g. 7d, 30d.", err=True)
raise SystemExit(1)
conn = ctx.obj["conn"]
costs = models.get_cost_summary(conn, days=days)
if not costs:
click.echo(f"No agent runs in the last {days} days.")
return
rows = [[c["project_id"], c["project_name"][:25], str(c["runs"]),
f"{c['total_tokens']:,}", f"${c['total_cost_usd']:.4f}",
f"{c['total_duration_seconds']}s"]
for c in costs]
click.echo(f"Cost summary (last {days} days):\n")
click.echo(_table(
["Project", "Name", "Runs", "Tokens", "Cost", "Time"],
rows,
))
total = sum(c["total_cost_usd"] for c in costs)
click.echo(f"\nTotal: ${total:.4f}")
# ===========================================================================
# approve
# ===========================================================================
@cli.command("approve")
@click.argument("task_id")
@click.option("--followup", is_flag=True, help="Generate follow-up tasks from pipeline results")
@click.option("--decision", "decision_text", default=None, help="Record a decision with this text")
@click.pass_context
def approve_task(ctx, task_id, followup, decision_text):
"""Approve a task (set status=done). Optionally generate follow-ups."""
from core.followup import generate_followups, resolve_pending_action
conn = ctx.obj["conn"]
task = models.get_task(conn, task_id)
if not task:
click.echo(f"Task '{task_id}' not found.", err=True)
raise SystemExit(1)
models.update_task(conn, task_id, status="done")
click.echo(f"Approved: {task_id} → done")
if decision_text:
models.add_decision(
conn, task["project_id"], "decision", decision_text, decision_text,
task_id=task_id,
)
click.echo(f"Decision recorded.")
if followup:
click.echo("Generating follow-up tasks...")
result = generate_followups(conn, task_id)
created = result["created"]
pending = result["pending_actions"]
if created:
click.echo(f"Created {len(created)} follow-up tasks:")
for t in created:
click.echo(f" {t['id']}: {t['title']} (pri {t['priority']})")
for action in pending:
click.echo(f"\nPermission issue: {action['description']}")
click.echo(" 1. Rerun with --dangerously-skip-permissions")
click.echo(" 2. Create task for manual fix")
click.echo(" 3. Skip")
choice_input = click.prompt("Choice", type=click.Choice(["1", "2", "3"]), default="2")
choice_map = {"1": "rerun", "2": "manual_task", "3": "skip"}
choice = choice_map[choice_input]
result = resolve_pending_action(conn, task_id, action, choice)
if choice == "rerun" and result:
rr = result.get("rerun_result", {})
if rr.get("success"):
click.echo(" Re-run completed successfully.")
else:
click.echo(f" Re-run failed: {rr.get('error', 'unknown')}")
elif choice == "manual_task" and result:
click.echo(f" Created: {result['id']}: {result['title']}")
elif choice == "skip":
click.echo(" Skipped.")
if not created and not pending:
click.echo("No follow-up tasks generated.")
# ===========================================================================
# run
# ===========================================================================
@cli.command("run")
@click.argument("task_id")
@click.option("--dry-run", is_flag=True, help="Show pipeline plan without executing")
@click.pass_context
def run_task(ctx, task_id, dry_run):
"""Run a task through the agent pipeline.
PM decomposes the task into specialist steps, then the pipeline executes.
With --dry-run, shows the plan without running agents.
"""
from agents.runner import run_agent, run_pipeline
conn = ctx.obj["conn"]
task = models.get_task(conn, task_id)
if not task:
click.echo(f"Task '{task_id}' not found.", err=True)
raise SystemExit(1)
project_id = task["project_id"]
click.echo(f"Task: {task['id']}{task['title']}")
# Step 1: PM decomposes
click.echo("Running PM to decompose task...")
pm_result = run_agent(
conn, "pm", task_id, project_id,
model="sonnet", dry_run=dry_run,
)
if dry_run:
click.echo("\n--- PM Prompt (dry-run) ---")
click.echo(pm_result.get("prompt", "")[:2000])
click.echo("\n(Dry-run: PM would produce a pipeline JSON)")
return
if not pm_result["success"]:
click.echo(f"PM failed: {pm_result.get('output', 'unknown error')}", err=True)
raise SystemExit(1)
# Parse PM output for pipeline
output = pm_result.get("output")
if isinstance(output, str):
try:
output = json.loads(output)
except json.JSONDecodeError:
click.echo(f"PM returned non-JSON output:\n{output[:500]}", err=True)
raise SystemExit(1)
if not isinstance(output, dict) or "pipeline" not in output:
click.echo(f"PM output missing 'pipeline' key:\n{json.dumps(output, indent=2)[:500]}", err=True)
raise SystemExit(1)
pipeline_steps = output["pipeline"]
analysis = output.get("analysis", "")
click.echo(f"\nAnalysis: {analysis}")
click.echo(f"Pipeline ({len(pipeline_steps)} steps):")
for i, step in enumerate(pipeline_steps, 1):
click.echo(f" {i}. {step['role']} ({step.get('model', 'sonnet')}): {step.get('brief', '')}")
if not click.confirm("\nExecute pipeline?"):
click.echo("Aborted.")
return
# Step 2: Execute pipeline
click.echo("\nExecuting pipeline...")
result = run_pipeline(conn, task_id, pipeline_steps)
if result["success"]:
click.echo(f"\nPipeline completed: {result['steps_completed']} steps")
else:
click.echo(f"\nPipeline failed at step: {result.get('error', 'unknown')}", err=True)
if result.get("total_cost_usd"):
click.echo(f"Cost: ${result['total_cost_usd']:.4f}")
if result.get("total_duration_seconds"):
click.echo(f"Duration: {result['total_duration_seconds']}s")
# ===========================================================================
# bootstrap
# ===========================================================================
@cli.command("bootstrap")
@click.argument("path", type=click.Path(exists=True))
@click.option("--id", "project_id", required=True, help="Short project ID (e.g. vdol)")
@click.option("--name", required=True, help="Project display name")
@click.option("--vault", "vault_path", type=click.Path(), default=None,
help="Obsidian vault path (auto-detected if omitted)")
@click.option("-y", "--yes", is_flag=True, help="Skip confirmation")
@click.pass_context
def bootstrap(ctx, path, project_id, name, vault_path, yes):
"""Auto-detect project stack, modules, decisions and import into Kin."""
conn = ctx.obj["conn"]
project_path = Path(path).expanduser().resolve()
# Check if project already exists
existing = models.get_project(conn, project_id)
if existing:
click.echo(f"Project '{project_id}' already exists. Use 'kin project show {project_id}'.", err=True)
raise SystemExit(1)
# Detect everything
click.echo(f"Scanning {project_path} ...")
tech_stack = detect_tech_stack(project_path)
modules = detect_modules(project_path)
decisions = extract_decisions_from_claude_md(project_path, project_id, name)
# Obsidian
obsidian = None
vault_root = find_vault_root(Path(vault_path) if vault_path else None)
if vault_root:
dir_name = project_path.name
obsidian = scan_obsidian(vault_root, project_id, name, dir_name)
if not obsidian["tasks"] and not obsidian["decisions"]:
obsidian = None # Nothing found, don't clutter output
# Preview
click.echo("")
click.echo(format_preview(
project_id, name, str(project_path), tech_stack,
modules, decisions, obsidian,
))
click.echo("")
if not yes:
if not click.confirm("Save to kin.db?"):
click.echo("Aborted.")
return
save_to_db(conn, project_id, name, str(project_path),
tech_stack, modules, decisions, obsidian)
# Summary
task_count = 0
dec_count = len(decisions)
if obsidian:
task_count += len(obsidian.get("tasks", []))
dec_count += len(obsidian.get("decisions", []))
click.echo(f"Saved: 1 project, {len(modules)} modules, "
f"{dec_count} decisions, {task_count} tasks.")
# ===========================================================================
# Entry point
# ===========================================================================
if __name__ == "__main__":
cli()
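The `run` command parses and validates the PM agent's output before executing the pipeline. That validation step can be sketched as a standalone helper (a sketch only; `parse_pm_output` is a hypothetical name, not part of the codebase):

```python
import json

def parse_pm_output(raw):
    """Validate PM output: must be JSON (or a dict) with a 'pipeline' list of steps."""
    if isinstance(raw, str):
        try:
            raw = json.loads(raw)
        except json.JSONDecodeError as exc:
            raise ValueError(f"PM returned non-JSON output: {exc}")
    if not isinstance(raw, dict) or "pipeline" not in raw:
        raise ValueError("PM output missing 'pipeline' key")
    steps = raw["pipeline"]
    if not isinstance(steps, list):
        raise ValueError("'pipeline' must be a list of steps")
    return steps

# Well-formed output passes through; anything else raises before agents run.
steps = parse_pm_output('{"analysis": "ok", "pipeline": [{"role": "backend_dev"}]}')
```

Failing fast here keeps a malformed PM response from ever reaching `run_pipeline`.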

core/__init__.py Normal file (0 lines)

core/context_builder.py Normal file

@@ -0,0 +1,222 @@
"""
Kin context builder assembles role-specific context from DB for agent prompts.
Each role gets only the information it needs, keeping prompts focused.
"""
import json
import sqlite3
from pathlib import Path
from core import models
PROMPTS_DIR = Path(__file__).parent.parent / "agents" / "prompts"
SPECIALISTS_PATH = Path(__file__).parent.parent / "agents" / "specialists.yaml"
def _load_specialists() -> dict:
"""Load specialists.yaml (lazy, no pyyaml dependency — simple parser)."""
path = SPECIALISTS_PATH
if not path.exists():
return {}
import yaml
return yaml.safe_load(path.read_text())
def build_context(
conn: sqlite3.Connection,
task_id: str,
role: str,
project_id: str,
) -> dict:
"""Build role-specific context from DB.
Returns a dict with keys: task, project, and role-specific data.
"""
task = models.get_task(conn, task_id)
project = models.get_project(conn, project_id)
ctx = {
"task": _slim_task(task) if task else None,
"project": _slim_project(project) if project else None,
"role": role,
}
if role == "pm":
ctx["modules"] = models.get_modules(conn, project_id)
ctx["decisions"] = models.get_decisions(conn, project_id)
ctx["active_tasks"] = models.list_tasks(conn, project_id=project_id, status="in_progress")
try:
specs = _load_specialists()
ctx["available_specialists"] = list(specs.get("specialists", {}).keys())
ctx["routes"] = specs.get("routes", {})
except Exception:
ctx["available_specialists"] = []
ctx["routes"] = {}
elif role == "architect":
ctx["modules"] = models.get_modules(conn, project_id)
ctx["decisions"] = models.get_decisions(conn, project_id)
elif role == "debugger":
ctx["decisions"] = models.get_decisions(
conn, project_id, types=["gotcha", "workaround"],
)
ctx["module_hint"] = _extract_module_hint(task)
elif role in ("frontend_dev", "backend_dev"):
ctx["decisions"] = models.get_decisions(
conn, project_id, types=["gotcha", "workaround", "convention"],
)
elif role == "reviewer":
ctx["decisions"] = models.get_decisions(
conn, project_id, types=["convention"],
)
elif role == "tester":
# Minimal context — just the task spec
pass
elif role == "security":
ctx["decisions"] = models.get_decisions(
conn, project_id, category="security",
)
else:
# Unknown role — give decisions as fallback
ctx["decisions"] = models.get_decisions(conn, project_id, limit=20)
return ctx
def _slim_task(task: dict) -> dict:
"""Extract only relevant fields from a task for the prompt."""
return {
"id": task["id"],
"title": task["title"],
"status": task["status"],
"priority": task["priority"],
"assigned_role": task.get("assigned_role"),
"brief": task.get("brief"),
"spec": task.get("spec"),
}
def _slim_project(project: dict) -> dict:
"""Extract only relevant fields from a project."""
return {
"id": project["id"],
"name": project["name"],
"path": project["path"],
"tech_stack": project.get("tech_stack"),
"language": project.get("language", "ru"),
}
def _extract_module_hint(task: dict | None) -> str | None:
"""Try to extract module name from task brief."""
if not task:
return None
brief = task.get("brief")
if isinstance(brief, dict):
return brief.get("module")
return None
def format_prompt(context: dict, role: str, prompt_template: str | None = None) -> str:
"""Format a prompt by injecting context into a role template.
If prompt_template is None, loads from agents/prompts/{role}.md.
"""
if prompt_template is None:
prompt_path = PROMPTS_DIR / f"{role}.md"
if prompt_path.exists():
prompt_template = prompt_path.read_text()
else:
prompt_template = f"You are a {role}. Complete the task described below."
sections = [prompt_template, ""]
# Project info
proj = context.get("project")
if proj:
sections.append(f"## Project: {proj['id']}{proj['name']}")
if proj.get("tech_stack"):
sections.append(f"Tech stack: {', '.join(proj['tech_stack'])}")
sections.append(f"Path: {proj['path']}")
sections.append("")
# Task info
task = context.get("task")
if task:
sections.append(f"## Task: {task['id']}{task['title']}")
sections.append(f"Status: {task['status']}, Priority: {task['priority']}")
if task.get("brief"):
sections.append(f"Brief: {json.dumps(task['brief'], ensure_ascii=False)}")
if task.get("spec"):
sections.append(f"Spec: {json.dumps(task['spec'], ensure_ascii=False)}")
sections.append("")
# Decisions
decisions = context.get("decisions")
if decisions:
sections.append(f"## Known decisions ({len(decisions)}):")
for d in decisions[:30]: # Cap at 30 to avoid token bloat
tags = f" [{', '.join(d['tags'])}]" if d.get("tags") else ""
sections.append(f"- #{d['id']} [{d['type']}] {d['title']}{tags}")
sections.append("")
# Modules
modules = context.get("modules")
if modules:
sections.append(f"## Modules ({len(modules)}):")
for m in modules:
sections.append(f"- {m['name']} ({m['type']}) — {m['path']}")
sections.append("")
# Active tasks (PM)
active = context.get("active_tasks")
if active:
sections.append(f"## Active tasks ({len(active)}):")
for t in active:
sections.append(f"- {t['id']}: {t['title']} [{t['status']}]")
sections.append("")
# Available specialists (PM)
specialists = context.get("available_specialists")
if specialists:
sections.append(f"## Available specialists: {', '.join(specialists)}")
sections.append("")
# Routes (PM)
routes = context.get("routes")
if routes:
sections.append("## Route templates:")
for name, route in routes.items():
steps = "".join(route.get("steps", []))
sections.append(f"- {name}: {steps}")
sections.append("")
# Module hint (debugger)
hint = context.get("module_hint")
if hint:
sections.append(f"## Target module: {hint}")
sections.append("")
# Previous step output (pipeline chaining)
prev = context.get("previous_output")
if prev:
sections.append("## Previous step output:")
sections.append(prev if isinstance(prev, str) else json.dumps(prev, ensure_ascii=False))
sections.append("")
# Language instruction — always last so it's fresh in context
proj = context.get("project")
language = proj.get("language", "ru") if proj else "ru"
_LANG_NAMES = {"ru": "Russian", "en": "English", "es": "Spanish", "de": "German", "fr": "French"}
lang_name = _LANG_NAMES.get(language, language)
sections.append(f"## Language")
sections.append(f"ALWAYS respond in {lang_name}. All summaries, analysis, comments, and recommendations must be in {lang_name}.")
sections.append("")
return "\n".join(sections)

core/db.py Normal file

@@ -0,0 +1,192 @@
"""
Kin SQLite database schema and connection management.
All tables from DESIGN.md section 3.5 State Management.
"""
import sqlite3
from pathlib import Path
DB_PATH = Path(__file__).parent.parent / "kin.db"
SCHEMA = """
-- Projects (central registry)
CREATE TABLE IF NOT EXISTS projects (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
path TEXT NOT NULL,
tech_stack JSON,
status TEXT DEFAULT 'active',
priority INTEGER DEFAULT 5,
pm_prompt TEXT,
claude_md_path TEXT,
forgejo_repo TEXT,
language TEXT DEFAULT 'ru',
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Tasks (linked to a project)
CREATE TABLE IF NOT EXISTS tasks (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL REFERENCES projects(id),
title TEXT NOT NULL,
status TEXT DEFAULT 'pending',
priority INTEGER DEFAULT 5,
assigned_role TEXT,
parent_task_id TEXT REFERENCES tasks(id),
brief JSON,
spec JSON,
review JSON,
test_result JSON,
security_result JSON,
forgejo_issue_id INTEGER,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Decisions and gotchas (the PM agent's external memory)
CREATE TABLE IF NOT EXISTS decisions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
task_id TEXT REFERENCES tasks(id),
type TEXT NOT NULL,
category TEXT,
title TEXT NOT NULL,
description TEXT NOT NULL,
tags JSON,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Agent logs (debugging, learning, cost tracking)
CREATE TABLE IF NOT EXISTS agent_logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
task_id TEXT REFERENCES tasks(id),
agent_role TEXT NOT NULL,
session_id TEXT,
action TEXT NOT NULL,
input_summary TEXT,
output_summary TEXT,
tokens_used INTEGER,
model TEXT,
cost_usd REAL,
success BOOLEAN,
error_message TEXT,
duration_seconds INTEGER,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Project modules (a map for the PM)
CREATE TABLE IF NOT EXISTS modules (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
name TEXT NOT NULL,
type TEXT NOT NULL,
path TEXT NOT NULL,
description TEXT,
owner_role TEXT,
dependencies JSON,
UNIQUE(project_id, name)
);
-- Pipelines (run history)
CREATE TABLE IF NOT EXISTS pipelines (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_id TEXT NOT NULL REFERENCES tasks(id),
project_id TEXT NOT NULL REFERENCES projects(id),
route_type TEXT NOT NULL,
steps JSON NOT NULL,
status TEXT DEFAULT 'running',
total_cost_usd REAL,
total_tokens INTEGER,
total_duration_seconds INTEGER,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
completed_at DATETIME
);
-- Cross-project dependencies
CREATE TABLE IF NOT EXISTS project_links (
id INTEGER PRIMARY KEY AUTOINCREMENT,
from_project TEXT NOT NULL REFERENCES projects(id),
to_project TEXT NOT NULL REFERENCES projects(id),
type TEXT NOT NULL,
description TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Support tickets from users
CREATE TABLE IF NOT EXISTS support_tickets (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
source TEXT NOT NULL,
client_id TEXT,
client_message TEXT NOT NULL,
classification TEXT,
guard_result TEXT,
guard_reason TEXT,
anamnesis JSON,
task_id TEXT REFERENCES tasks(id),
response TEXT,
response_approved BOOLEAN DEFAULT FALSE,
status TEXT DEFAULT 'new',
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
resolved_at DATETIME
);
-- Support bot settings per project
CREATE TABLE IF NOT EXISTS support_bot_config (
project_id TEXT PRIMARY KEY REFERENCES projects(id),
telegram_bot_token TEXT,
welcome_message TEXT,
faq JSON,
auto_reply BOOLEAN DEFAULT FALSE,
require_approval BOOLEAN DEFAULT TRUE,
brand_voice TEXT,
forbidden_topics JSON,
escalation_keywords JSON
);
-- Indexes
CREATE INDEX IF NOT EXISTS idx_tasks_project_status ON tasks(project_id, status);
CREATE INDEX IF NOT EXISTS idx_decisions_project ON decisions(project_id);
CREATE INDEX IF NOT EXISTS idx_decisions_tags ON decisions(tags);
CREATE INDEX IF NOT EXISTS idx_agent_logs_project ON agent_logs(project_id, created_at);
CREATE INDEX IF NOT EXISTS idx_agent_logs_cost ON agent_logs(project_id, cost_usd);
CREATE INDEX IF NOT EXISTS idx_tickets_project ON support_tickets(project_id, status);
CREATE INDEX IF NOT EXISTS idx_tickets_client ON support_tickets(client_id);
"""
def get_connection(db_path: Path = DB_PATH) -> sqlite3.Connection:
conn = sqlite3.connect(str(db_path))
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA foreign_keys=ON")
conn.row_factory = sqlite3.Row
return conn
def _migrate(conn: sqlite3.Connection):
"""Run migrations for existing databases."""
# Check if language column exists on projects
cols = {r[1] for r in conn.execute("PRAGMA table_info(projects)").fetchall()}
if "language" not in cols:
conn.execute("ALTER TABLE projects ADD COLUMN language TEXT DEFAULT 'ru'")
conn.commit()
def init_db(db_path: Path = DB_PATH) -> sqlite3.Connection:
conn = get_connection(db_path)
conn.executescript(SCHEMA)
conn.commit()
_migrate(conn)
return conn
if __name__ == "__main__":
conn = init_db()
tables = conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
).fetchall()
print(f"Initialized {len(tables)} tables:")
for t in tables:
print(f" - {t['name']}")
conn.close()
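The `_migrate` pattern above is idempotent: it inspects `PRAGMA table_info` and only issues `ALTER TABLE` when a column is actually missing, so it is safe to run on every startup. A self-contained demonstration against an in-memory database:

```python
import sqlite3

def migrate(conn):
    """Add the 'language' column to projects if an older schema lacks it."""
    cols = {row[1] for row in conn.execute("PRAGMA table_info(projects)")}
    if "language" not in cols:
        conn.execute("ALTER TABLE projects ADD COLUMN language TEXT DEFAULT 'ru'")
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (id TEXT PRIMARY KEY, name TEXT)")  # old schema
migrate(conn)  # adds the column
migrate(conn)  # running again is a no-op
cols = {row[1] for row in conn.execute("PRAGMA table_info(projects)")}
```

Existing rows pick up the column's `DEFAULT` value, so no data backfill is needed for this kind of additive migration.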

core/followup.py Normal file

@@ -0,0 +1,232 @@
"""
Kin follow-up generator analyzes pipeline output and creates follow-up tasks.
Runs a PM agent to parse results and produce actionable task list.
Detects permission-blocked items and returns them as pending actions.
"""
import json
import re
import sqlite3
from core import models
from core.context_builder import format_prompt, PROMPTS_DIR
_PERMISSION_PATTERNS = [
    r"(?i)permission\s+denied",
    r"(?i)ручное\s+применение",                # "manual application"
    r"(?i)не\s+получил[иа]?\s+разрешени[ея]",  # "did not get permission"
    r"(?i)cannot\s+write",
    r"(?i)read[- ]?only",
    r"(?i)нет\s+прав\s+на\s+запись",           # "no write permission"
    r"(?i)manually\s+appl",
    r"(?i)apply\s+manually",
    r"(?i)требуется\s+ручн",                   # "manual ... required"
]
def _is_permission_blocked(item: dict) -> bool:
"""Check if a follow-up item describes a permission/write failure."""
text = f"{item.get('title', '')} {item.get('brief', '')}".lower()
return any(re.search(p, text) for p in _PERMISSION_PATTERNS)
def _collect_pipeline_output(conn: sqlite3.Connection, task_id: str) -> str:
"""Collect all pipeline step outputs for a task into a single string."""
rows = conn.execute(
"""SELECT agent_role, output_summary, success
FROM agent_logs WHERE task_id = ? ORDER BY created_at""",
(task_id,),
).fetchall()
if not rows:
return ""
parts = []
for r in rows:
status = "OK" if r["success"] else "FAILED"
parts.append(f"=== {r['agent_role']} [{status}] ===")
parts.append(r["output_summary"] or "(no output)")
parts.append("")
return "\n".join(parts)
def _next_task_id(conn: sqlite3.Connection, project_id: str) -> str:
"""Generate the next sequential task ID for a project."""
prefix = project_id.upper()
existing = models.list_tasks(conn, project_id=project_id)
max_num = 0
for t in existing:
tid = t["id"]
if tid.startswith(prefix + "-"):
try:
num = int(tid.split("-", 1)[1])
max_num = max(max_num, num)
except ValueError:
pass
return f"{prefix}-{max_num + 1:03d}"
def generate_followups(
conn: sqlite3.Connection,
task_id: str,
dry_run: bool = False,
) -> dict:
"""Analyze pipeline output and create follow-up tasks.
Returns dict:
{
"created": [task, ...], # tasks created immediately
"pending_actions": [action, ...], # items needing user decision
}
A pending_action looks like:
{
"type": "permission_fix",
"description": "...",
"original_item": {...}, # raw item from PM
"options": ["rerun", "manual_task", "skip"],
}
"""
task = models.get_task(conn, task_id)
if not task:
return {"created": [], "pending_actions": []}
project_id = task["project_id"]
project = models.get_project(conn, project_id)
if not project:
return {"created": [], "pending_actions": []}
pipeline_output = _collect_pipeline_output(conn, task_id)
if not pipeline_output:
return {"created": [], "pending_actions": []}
# Build context for followup agent
language = project.get("language", "ru")
context = {
"project": {
"id": project["id"],
"name": project["name"],
"path": project["path"],
"tech_stack": project.get("tech_stack"),
"language": language,
},
"task": {
"id": task["id"],
"title": task["title"],
"status": task["status"],
"priority": task["priority"],
"brief": task.get("brief"),
"spec": task.get("spec"),
},
"previous_output": pipeline_output,
}
prompt = format_prompt(context, "followup")
if dry_run:
return {"created": [{"_dry_run": True, "_prompt": prompt}], "pending_actions": []}
# Run followup agent
from agents.runner import _run_claude, _try_parse_json
result = _run_claude(prompt, model="sonnet")
output = result.get("output", "")
# Parse the task list from output
parsed = _try_parse_json(output)
if not isinstance(parsed, list):
if isinstance(parsed, dict):
parsed = parsed.get("tasks") or parsed.get("followups") or []
else:
return {"created": [], "pending_actions": []}
# Separate permission-blocked items from normal ones
created = []
pending_actions = []
for item in parsed:
if not isinstance(item, dict) or "title" not in item:
continue
if _is_permission_blocked(item):
pending_actions.append({
"type": "permission_fix",
"description": item["title"],
"original_item": item,
"options": ["rerun", "manual_task", "skip"],
})
else:
new_id = _next_task_id(conn, project_id)
brief_dict = {"source": f"followup:{task_id}"}
if item.get("type"):
brief_dict["route_type"] = item["type"]
if item.get("brief"):
brief_dict["description"] = item["brief"]
t = models.create_task(
conn, new_id, project_id,
title=item["title"],
priority=item.get("priority", 5),
parent_task_id=task_id,
brief=brief_dict,
)
created.append(t)
# Log the followup generation
models.log_agent_run(
conn, project_id, "followup_pm", "generate_followups",
task_id=task_id,
output_summary=json.dumps({
"created": [{"id": t["id"], "title": t["title"]} for t in created],
"pending": len(pending_actions),
}, ensure_ascii=False),
success=True,
)
return {"created": created, "pending_actions": pending_actions}
def resolve_pending_action(
conn: sqlite3.Connection,
task_id: str,
action: dict,
choice: str,
) -> dict | None:
"""Resolve a single pending action.
choice: "rerun" | "manual_task" | "skip"
Returns created task dict for "manual_task", None otherwise.
"""
task = models.get_task(conn, task_id)
if not task:
return None
project_id = task["project_id"]
item = action.get("original_item", {})
if choice == "skip":
return None
if choice == "manual_task":
new_id = _next_task_id(conn, project_id)
brief_dict = {"source": f"followup:{task_id}"}
if item.get("type"):
brief_dict["route_type"] = item["type"]
if item.get("brief"):
brief_dict["description"] = item["brief"]
return models.create_task(
conn, new_id, project_id,
title=item.get("title", "Manual fix required"),
priority=item.get("priority", 5),
parent_task_id=task_id,
brief=brief_dict,
)
if choice == "rerun":
# Re-run pipeline for the parent task with allow_write
from agents.runner import run_pipeline
steps = [{"role": item.get("type", "frontend_dev"),
"brief": item.get("brief", item.get("title", "")),
"model": "sonnet"}]
result = run_pipeline(conn, task_id, steps, allow_write=True)
return {"rerun_result": result}
return None
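The detection in `_is_permission_blocked` is plain regex matching over an item's title and brief. A standalone sketch using a subset of the patterns above (translations of the Russian ones noted in comments):

```python
import re

# Subset of the permission-failure patterns from _PERMISSION_PATTERNS.
PATTERNS = [
    r"(?i)permission\s+denied",
    r"(?i)ручное\s+применение",  # Russian: "manual application"
    r"(?i)apply\s+manually",
]

def is_permission_blocked(item):
    """True if the item's title or brief mentions a permission/write failure."""
    text = f"{item.get('title', '')} {item.get('brief', '')}"
    return any(re.search(p, text) for p in PATTERNS)

blocked = is_permission_blocked(
    {"title": "Fix config", "brief": "got Permission denied on target file"})
clean = is_permission_blocked({"title": "Add unit tests", "brief": ""})
```

Because each pattern carries its own `(?i)` flag, matching is case-insensitive without lowercasing the input first.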

core/models.py Normal file

@@ -0,0 +1,447 @@
"""
Kin data access functions for all tables.
Pure functions: (conn, params) -> dict | list[dict]. No ORM, no classes.
"""
import json
import sqlite3
from datetime import datetime
from typing import Any
def _row_to_dict(row: sqlite3.Row | None) -> dict | None:
"""Convert sqlite3.Row to dict with JSON fields decoded."""
if row is None:
return None
d = dict(row)
for key, val in d.items():
if isinstance(val, str) and val.startswith(("[", "{")):
try:
d[key] = json.loads(val)
except (json.JSONDecodeError, ValueError):
pass
return d
def _rows_to_list(rows: list[sqlite3.Row]) -> list[dict]:
"""Convert list of sqlite3.Row to list of dicts."""
return [_row_to_dict(r) for r in rows]
def _json_encode(val: Any) -> Any:
"""Encode lists/dicts to JSON strings for storage."""
if isinstance(val, (list, dict)):
return json.dumps(val, ensure_ascii=False)
return val
# ---------------------------------------------------------------------------
# Projects
# ---------------------------------------------------------------------------
def create_project(
conn: sqlite3.Connection,
id: str,
name: str,
path: str,
tech_stack: list | None = None,
status: str = "active",
priority: int = 5,
pm_prompt: str | None = None,
claude_md_path: str | None = None,
forgejo_repo: str | None = None,
language: str = "ru",
) -> dict:
"""Create a new project and return it as dict."""
conn.execute(
"""INSERT INTO projects (id, name, path, tech_stack, status, priority,
pm_prompt, claude_md_path, forgejo_repo, language)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(id, name, path, _json_encode(tech_stack), status, priority,
pm_prompt, claude_md_path, forgejo_repo, language),
)
conn.commit()
return get_project(conn, id)
def get_project(conn: sqlite3.Connection, id: str) -> dict | None:
"""Get project by id."""
row = conn.execute("SELECT * FROM projects WHERE id = ?", (id,)).fetchone()
return _row_to_dict(row)
def list_projects(conn: sqlite3.Connection, status: str | None = None) -> list[dict]:
"""List projects, optionally filtered by status."""
if status:
rows = conn.execute(
"SELECT * FROM projects WHERE status = ? ORDER BY priority, name",
(status,),
).fetchall()
else:
rows = conn.execute(
"SELECT * FROM projects ORDER BY priority, name"
).fetchall()
return _rows_to_list(rows)
def update_project(conn: sqlite3.Connection, id: str, **fields) -> dict:
"""Update project fields. Returns updated project."""
if not fields:
return get_project(conn, id)
for key in ("tech_stack",):
if key in fields:
fields[key] = _json_encode(fields[key])
sets = ", ".join(f"{k} = ?" for k in fields)
vals = list(fields.values()) + [id]
conn.execute(f"UPDATE projects SET {sets} WHERE id = ?", vals)
conn.commit()
return get_project(conn, id)
# ---------------------------------------------------------------------------
# Tasks
# ---------------------------------------------------------------------------
def create_task(
conn: sqlite3.Connection,
id: str,
project_id: str,
title: str,
status: str = "pending",
priority: int = 5,
assigned_role: str | None = None,
parent_task_id: str | None = None,
brief: dict | None = None,
spec: dict | None = None,
forgejo_issue_id: int | None = None,
) -> dict:
"""Create a task linked to a project."""
conn.execute(
"""INSERT INTO tasks (id, project_id, title, status, priority,
assigned_role, parent_task_id, brief, spec, forgejo_issue_id)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(id, project_id, title, status, priority, assigned_role,
parent_task_id, _json_encode(brief), _json_encode(spec),
forgejo_issue_id),
)
conn.commit()
return get_task(conn, id)
def get_task(conn: sqlite3.Connection, id: str) -> dict | None:
"""Get task by id."""
row = conn.execute("SELECT * FROM tasks WHERE id = ?", (id,)).fetchone()
return _row_to_dict(row)
def list_tasks(
conn: sqlite3.Connection,
project_id: str | None = None,
status: str | None = None,
) -> list[dict]:
"""List tasks with optional project/status filters."""
query = "SELECT * FROM tasks WHERE 1=1"
params: list = []
if project_id:
query += " AND project_id = ?"
params.append(project_id)
if status:
query += " AND status = ?"
params.append(status)
query += " ORDER BY priority, created_at"
return _rows_to_list(conn.execute(query, params).fetchall())
def update_task(conn: sqlite3.Connection, id: str, **fields) -> dict:
"""Update task fields. Auto-sets updated_at."""
if not fields:
return get_task(conn, id)
json_cols = ("brief", "spec", "review", "test_result", "security_result")
for key in json_cols:
if key in fields:
fields[key] = _json_encode(fields[key])
fields["updated_at"] = datetime.now().isoformat()
sets = ", ".join(f"{k} = ?" for k in fields)
vals = list(fields.values()) + [id]
conn.execute(f"UPDATE tasks SET {sets} WHERE id = ?", vals)
conn.commit()
return get_task(conn, id)
# ---------------------------------------------------------------------------
# Decisions
# ---------------------------------------------------------------------------
def add_decision(
conn: sqlite3.Connection,
project_id: str,
type: str,
title: str,
description: str,
category: str | None = None,
tags: list | None = None,
task_id: str | None = None,
) -> dict:
"""Record a decision, gotcha, or convention for a project."""
cur = conn.execute(
"""INSERT INTO decisions (project_id, task_id, type, category,
title, description, tags)
VALUES (?, ?, ?, ?, ?, ?, ?)""",
(project_id, task_id, type, category, title, description,
_json_encode(tags)),
)
conn.commit()
row = conn.execute(
"SELECT * FROM decisions WHERE id = ?", (cur.lastrowid,)
).fetchone()
return _row_to_dict(row)
def get_decisions(
conn: sqlite3.Connection,
project_id: str,
category: str | None = None,
tags: list | None = None,
types: list | None = None,
limit: int | None = None,
) -> list[dict]:
"""Query decisions for a project with optional filters.
tags: matches if ANY tag is present (OR logic via json_each).
types: filter by decision type (decision, gotcha, workaround, etc).
"""
query = "SELECT DISTINCT d.* FROM decisions d WHERE d.project_id = ?"
params: list = [project_id]
if category:
query += " AND d.category = ?"
params.append(category)
if types:
placeholders = ", ".join("?" for _ in types)
query += f" AND d.type IN ({placeholders})"
params.extend(types)
if tags:
query += """ AND d.id IN (
SELECT d2.id FROM decisions d2, json_each(d2.tags) AS t
WHERE t.value IN ({})
)""".format(", ".join("?" for _ in tags))
params.extend(tags)
query += " ORDER BY d.created_at DESC"
if limit:
query += " LIMIT ?"
params.append(limit)
return _rows_to_list(conn.execute(query, params).fetchall())
# ---------------------------------------------------------------------------
# Modules
# ---------------------------------------------------------------------------
def add_module(
conn: sqlite3.Connection,
project_id: str,
name: str,
type: str,
path: str,
description: str | None = None,
owner_role: str | None = None,
dependencies: list | None = None,
) -> dict:
"""Register a project module."""
cur = conn.execute(
"""INSERT INTO modules (project_id, name, type, path, description,
owner_role, dependencies)
VALUES (?, ?, ?, ?, ?, ?, ?)""",
(project_id, name, type, path, description, owner_role,
_json_encode(dependencies)),
)
conn.commit()
row = conn.execute(
"SELECT * FROM modules WHERE id = ?", (cur.lastrowid,)
).fetchone()
return _row_to_dict(row)
def get_modules(conn: sqlite3.Connection, project_id: str) -> list[dict]:
"""Get all modules for a project."""
rows = conn.execute(
"SELECT * FROM modules WHERE project_id = ? ORDER BY type, name",
(project_id,),
).fetchall()
return _rows_to_list(rows)
# ---------------------------------------------------------------------------
# Agent Logs
# ---------------------------------------------------------------------------
def log_agent_run(
conn: sqlite3.Connection,
project_id: str,
agent_role: str,
action: str,
task_id: str | None = None,
session_id: str | None = None,
input_summary: str | None = None,
output_summary: str | None = None,
tokens_used: int | None = None,
model: str | None = None,
cost_usd: float | None = None,
success: bool = True,
error_message: str | None = None,
duration_seconds: int | None = None,
) -> dict:
"""Log an agent execution run."""
cur = conn.execute(
"""INSERT INTO agent_logs (project_id, task_id, agent_role, session_id,
action, input_summary, output_summary, tokens_used, model,
cost_usd, success, error_message, duration_seconds)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(project_id, task_id, agent_role, session_id, action, input_summary,
output_summary, tokens_used, model, cost_usd, success,
error_message, duration_seconds),
)
conn.commit()
row = conn.execute(
"SELECT * FROM agent_logs WHERE id = ?", (cur.lastrowid,)
).fetchone()
return _row_to_dict(row)
# ---------------------------------------------------------------------------
# Pipelines
# ---------------------------------------------------------------------------
def create_pipeline(
conn: sqlite3.Connection,
task_id: str,
project_id: str,
route_type: str,
steps: list | dict,
) -> dict:
"""Create a new pipeline run."""
cur = conn.execute(
"""INSERT INTO pipelines (task_id, project_id, route_type, steps)
VALUES (?, ?, ?, ?)""",
(task_id, project_id, route_type, _json_encode(steps)),
)
conn.commit()
row = conn.execute(
"SELECT * FROM pipelines WHERE id = ?", (cur.lastrowid,)
).fetchone()
return _row_to_dict(row)
def update_pipeline(
conn: sqlite3.Connection,
id: int,
status: str | None = None,
total_cost_usd: float | None = None,
total_tokens: int | None = None,
total_duration_seconds: int | None = None,
) -> dict:
"""Update pipeline status and stats."""
fields: dict[str, Any] = {}
if status is not None:
fields["status"] = status
if status in ("completed", "failed", "cancelled"):
fields["completed_at"] = datetime.now().isoformat()
if total_cost_usd is not None:
fields["total_cost_usd"] = total_cost_usd
if total_tokens is not None:
fields["total_tokens"] = total_tokens
if total_duration_seconds is not None:
fields["total_duration_seconds"] = total_duration_seconds
if fields:
sets = ", ".join(f"{k} = ?" for k in fields)
vals = list(fields.values()) + [id]
conn.execute(f"UPDATE pipelines SET {sets} WHERE id = ?", vals)
conn.commit()
row = conn.execute(
"SELECT * FROM pipelines WHERE id = ?", (id,)
).fetchone()
return _row_to_dict(row)
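The create/update pair above builds its UPDATE statement dynamically, so only the columns actually passed get touched, and `completed_at` is stamped automatically for terminal statuses. A minimal self-contained sketch of that lifecycle, using a throwaway in-memory schema (the real table lives in core/db.py and has more columns):

```python
import sqlite3
from datetime import datetime

# Throwaway stand-in for the pipelines table; illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pipelines (
    id INTEGER PRIMARY KEY,
    task_id TEXT, project_id TEXT, route_type TEXT, steps TEXT,
    status TEXT DEFAULT 'running',
    total_cost_usd REAL, completed_at TEXT)""")

cur = conn.execute(
    "INSERT INTO pipelines (task_id, project_id, route_type, steps) VALUES (?, ?, ?, ?)",
    ("P1-001", "p1", "debug", '[{"role": "debugger"}]'),
)
pipeline_id = cur.lastrowid

# Same dynamic-SET pattern as update_pipeline: only the fields present
# in the dict are written; untouched columns keep their values.
fields = {"status": "completed", "total_cost_usd": 0.42,
          "completed_at": datetime.now().isoformat()}
sets = ", ".join(f"{k} = ?" for k in fields)
conn.execute(f"UPDATE pipelines SET {sets} WHERE id = ?",
             list(fields.values()) + [pipeline_id])
conn.commit()

row = conn.execute("SELECT status, total_cost_usd FROM pipelines WHERE id = ?",
                   (pipeline_id,)).fetchone()
print(row)  # ('completed', 0.42)
```

Because column names come from a fixed keyword-argument whitelist rather than caller input, the f-string SET clause stays safe; values still go through `?` placeholders.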
# ---------------------------------------------------------------------------
# Support
# ---------------------------------------------------------------------------
def create_ticket(
conn: sqlite3.Connection,
project_id: str,
source: str,
client_message: str,
client_id: str | None = None,
classification: str | None = None,
) -> dict:
"""Create a support ticket."""
cur = conn.execute(
"""INSERT INTO support_tickets (project_id, source, client_id,
client_message, classification)
VALUES (?, ?, ?, ?, ?)""",
(project_id, source, client_id, client_message, classification),
)
conn.commit()
row = conn.execute(
"SELECT * FROM support_tickets WHERE id = ?", (cur.lastrowid,)
).fetchone()
return _row_to_dict(row)
def list_tickets(
conn: sqlite3.Connection,
project_id: str | None = None,
status: str | None = None,
) -> list[dict]:
"""List support tickets with optional filters."""
query = "SELECT * FROM support_tickets WHERE 1=1"
params: list = []
if project_id:
query += " AND project_id = ?"
params.append(project_id)
if status:
query += " AND status = ?"
params.append(status)
query += " ORDER BY created_at DESC"
return _rows_to_list(conn.execute(query, params).fetchall())
# ---------------------------------------------------------------------------
# Statistics / Dashboard
# ---------------------------------------------------------------------------
def get_project_summary(conn: sqlite3.Connection) -> list[dict]:
"""Get all projects with task counts by status."""
rows = conn.execute("""
SELECT p.*,
COUNT(t.id) AS total_tasks,
SUM(CASE WHEN t.status = 'done' THEN 1 ELSE 0 END) AS done_tasks,
SUM(CASE WHEN t.status = 'in_progress' THEN 1 ELSE 0 END) AS active_tasks,
SUM(CASE WHEN t.status = 'blocked' THEN 1 ELSE 0 END) AS blocked_tasks,
SUM(CASE WHEN t.status = 'review' THEN 1 ELSE 0 END) AS review_tasks
FROM projects p
LEFT JOIN tasks t ON t.project_id = p.id
GROUP BY p.id
ORDER BY p.priority, p.name
""").fetchall()
return _rows_to_list(rows)
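The summary query relies on LEFT JOIN plus `SUM(CASE ...)` so that projects with zero tasks still appear with zero counts instead of being dropped by the join. A compact sketch of that counting pattern on an illustrative two-table schema:

```python
import sqlite3

# Stand-in schema; the real projects/tasks tables have more columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE projects (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE tasks (id TEXT PRIMARY KEY, project_id TEXT, status TEXT);
    INSERT INTO projects VALUES ('p1', 'P1'), ('p2', 'P2');
    INSERT INTO tasks VALUES ('P1-001', 'p1', 'done'),
                             ('P1-002', 'p1', 'in_progress');
""")
rows = conn.execute("""
    SELECT p.id,
           COUNT(t.id)                                        AS total_tasks,
           SUM(CASE WHEN t.status = 'done' THEN 1 ELSE 0 END) AS done_tasks
    FROM projects p
    LEFT JOIN tasks t ON t.project_id = p.id
    GROUP BY p.id
    ORDER BY p.id
""").fetchall()
print(rows)  # [('p1', 2, 1), ('p2', 0, 0)]
```

`COUNT(t.id)` counts only non-NULL joined rows, so 'p2' reports 0 tasks while still showing up in the dashboard list.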
def get_cost_summary(conn: sqlite3.Connection, days: int = 7) -> list[dict]:
"""Get cost summary by project for the last N days."""
rows = conn.execute("""
SELECT
p.id AS project_id,
p.name AS project_name,
COUNT(a.id) AS runs,
COALESCE(SUM(a.tokens_used), 0) AS total_tokens,
COALESCE(SUM(a.cost_usd), 0) AS total_cost_usd,
COALESCE(SUM(a.duration_seconds), 0) AS total_duration_seconds
FROM projects p
LEFT JOIN agent_logs a ON a.project_id = p.id
AND a.created_at >= datetime('now', ?)
GROUP BY p.id
HAVING runs > 0
ORDER BY total_cost_usd DESC
""", (f"-{days} days",)).fetchall()
return _rows_to_list(rows)
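Note how the time window is built: the SQLite modifier string `-{days} days` is passed as a bound parameter to `datetime('now', ?)`, so the integer never gets interpolated into SQL. A minimal sketch of that filter on a stand-in log table (schema names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE agent_logs (
    id INTEGER PRIMARY KEY, project_id TEXT, cost_usd REAL,
    created_at TEXT DEFAULT (datetime('now')))""")
conn.execute("INSERT INTO agent_logs (project_id, cost_usd) VALUES ('p1', 0.10)")
# A stale row, 30 days old — should fall outside the 7-day window.
conn.execute("INSERT INTO agent_logs (project_id, cost_usd, created_at) "
             "VALUES ('p1', 9.99, datetime('now', '-30 days'))")

days = 7
total = conn.execute(
    "SELECT COALESCE(SUM(cost_usd), 0) FROM agent_logs "
    "WHERE created_at >= datetime('now', ?)",
    (f"-{days} days",),
).fetchone()[0]
print(total)  # 0.1 — the 30-day-old row is excluded
```

The string comparison works because both sides are ISO-8601 timestamps, which sort lexicographically in chronological order.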

16
pyproject.toml Normal file

@@ -0,0 +1,16 @@
[build-system]
requires = ["setuptools>=68.0"]
build-backend = "setuptools.build_meta"
[project]
name = "kin"
version = "0.1.0"
description = "Multi-agent project orchestrator"
requires-python = ">=3.11"
dependencies = ["click>=8.0", "fastapi>=0.110", "uvicorn>=0.29"]
[project.scripts]
kin = "cli.main:cli"
[tool.pytest.ini_options]
testpaths = ["tests"]

0
tests/__init__.py Normal file

185
tests/test_api.py Normal file

@@ -0,0 +1,185 @@
"""Tests for web/api.py — new task endpoints (pipeline, approve, reject, full)."""
import pytest
from pathlib import Path
from fastapi.testclient import TestClient
# Patch DB_PATH before importing app
import web.api as api_module
@pytest.fixture
def client(tmp_path):
db_path = tmp_path / "test.db"
api_module.DB_PATH = db_path
from web.api import app
c = TestClient(app)
# Seed data
c.post("/api/projects", json={"id": "p1", "name": "P1", "path": "/p1"})
c.post("/api/tasks", json={"project_id": "p1", "title": "Fix bug"})
return c
def test_get_task(client):
r = client.get("/api/tasks/P1-001")
assert r.status_code == 200
assert r.json()["title"] == "Fix bug"
def test_get_task_not_found(client):
r = client.get("/api/tasks/NOPE")
assert r.status_code == 404
def test_task_pipeline_empty(client):
r = client.get("/api/tasks/P1-001/pipeline")
assert r.status_code == 200
assert r.json() == []
def test_task_pipeline_with_logs(client):
# Insert agent logs directly
from core.db import init_db
from core import models
conn = init_db(api_module.DB_PATH)
models.log_agent_run(conn, "p1", "debugger", "execute",
task_id="P1-001", output_summary="Found bug",
tokens_used=1000, duration_seconds=5, success=True)
models.log_agent_run(conn, "p1", "tester", "execute",
task_id="P1-001", output_summary="Tests pass",
tokens_used=500, duration_seconds=3, success=True)
conn.close()
r = client.get("/api/tasks/P1-001/pipeline")
assert r.status_code == 200
steps = r.json()
assert len(steps) == 2
assert steps[0]["agent_role"] == "debugger"
assert steps[0]["output_summary"] == "Found bug"
assert steps[1]["agent_role"] == "tester"
def test_task_full(client):
r = client.get("/api/tasks/P1-001/full")
assert r.status_code == 200
data = r.json()
assert data["id"] == "P1-001"
assert "pipeline_steps" in data
assert "related_decisions" in data
def test_task_full_not_found(client):
r = client.get("/api/tasks/NOPE/full")
assert r.status_code == 404
def test_approve_task(client):
# First set task to review
from core.db import init_db
from core import models
conn = init_db(api_module.DB_PATH)
models.update_task(conn, "P1-001", status="review")
conn.close()
r = client.post("/api/tasks/P1-001/approve", json={})
assert r.status_code == 200
assert r.json()["status"] == "done"
# Verify task is done
r = client.get("/api/tasks/P1-001")
assert r.json()["status"] == "done"
def test_approve_with_decision(client):
r = client.post("/api/tasks/P1-001/approve", json={
"decision_title": "Use AbortController",
"decision_description": "Fix race condition with AbortController",
"decision_type": "decision",
})
assert r.status_code == 200
assert r.json()["decision"] is not None
assert r.json()["decision"]["title"] == "Use AbortController"
def test_approve_not_found(client):
r = client.post("/api/tasks/NOPE/approve", json={})
assert r.status_code == 404
def test_reject_task(client):
from core.db import init_db
from core import models
conn = init_db(api_module.DB_PATH)
models.update_task(conn, "P1-001", status="review")
conn.close()
r = client.post("/api/tasks/P1-001/reject", json={
"reason": "Didn't fix the root cause"
})
assert r.status_code == 200
assert r.json()["status"] == "pending"
# Verify task is pending with review reason
r = client.get("/api/tasks/P1-001")
data = r.json()
assert data["status"] == "pending"
assert data["review"]["rejected"] == "Didn't fix the root cause"
def test_reject_not_found(client):
r = client.post("/api/tasks/NOPE/reject", json={"reason": "bad"})
assert r.status_code == 404
def test_task_pipeline_not_found(client):
r = client.get("/api/tasks/NOPE/pipeline")
assert r.status_code == 404
def test_running_endpoint_no_pipeline(client):
r = client.get("/api/tasks/P1-001/running")
assert r.status_code == 200
assert r.json()["running"] is False
def test_running_endpoint_with_pipeline(client):
from core.db import init_db
from core import models
conn = init_db(api_module.DB_PATH)
models.create_pipeline(conn, "P1-001", "p1", "debug",
[{"role": "debugger"}])
conn.close()
r = client.get("/api/tasks/P1-001/running")
assert r.status_code == 200
assert r.json()["running"] is True
def test_running_endpoint_not_found(client):
r = client.get("/api/tasks/NOPE/running")
assert r.status_code == 404
def test_run_sets_in_progress(client):
"""POST /run should set task to in_progress immediately."""
r = client.post("/api/tasks/P1-001/run")
assert r.status_code == 202
r = client.get("/api/tasks/P1-001")
assert r.json()["status"] == "in_progress"
def test_run_not_found(client):
r = client.post("/api/tasks/NOPE/run")
assert r.status_code == 404
def test_project_summary_includes_review(client):
from core.db import init_db
from core import models
conn = init_db(api_module.DB_PATH)
models.update_task(conn, "P1-001", status="review")
conn.close()
r = client.get("/api/projects")
projects = r.json()
assert projects[0]["review_tasks"] == 1

427
tests/test_bootstrap.py Normal file

@@ -0,0 +1,427 @@
"""Tests for agents/bootstrap.py — tech detection, modules, decisions, obsidian."""
import json
import pytest
from pathlib import Path
from click.testing import CliRunner
from agents.bootstrap import (
detect_tech_stack, detect_modules, extract_decisions_from_claude_md,
find_vault_root, scan_obsidian, format_preview, save_to_db,
)
from core.db import init_db
from core import models
from cli.main import cli
# ---------------------------------------------------------------------------
# Tech stack detection
# ---------------------------------------------------------------------------
def test_detect_node_project(tmp_path):
(tmp_path / "package.json").write_text(json.dumps({
"dependencies": {"vue": "^3.4", "pinia": "^2.0"},
"devDependencies": {"typescript": "^5.0", "vite": "^5.0"},
}))
(tmp_path / "tsconfig.json").write_text("{}")
(tmp_path / "nuxt.config.ts").write_text("export default {}")
stack = detect_tech_stack(tmp_path)
assert "vue3" in stack
assert "typescript" in stack
assert "nuxt3" in stack
assert "pinia" in stack
assert "vite" in stack
def test_detect_python_project(tmp_path):
(tmp_path / "requirements.txt").write_text("fastapi==0.104\npydantic>=2.0\n")
(tmp_path / "pyproject.toml").write_text("[project]\nname='x'\n")
stack = detect_tech_stack(tmp_path)
assert "python" in stack
assert "fastapi" in stack
assert "pydantic" in stack
def test_detect_go_project(tmp_path):
(tmp_path / "go.mod").write_text("module example.com/foo\nrequire github.com/gin-gonic/gin v1.9.1\n")
stack = detect_tech_stack(tmp_path)
assert "go" in stack
assert "gin" in stack
def test_detect_monorepo(tmp_path):
fe = tmp_path / "frontend"
fe.mkdir()
(fe / "package.json").write_text(json.dumps({
"dependencies": {"vue": "^3.0"},
}))
be = tmp_path / "backend"
be.mkdir()
(be / "requirements.txt").write_text("fastapi\n")
stack = detect_tech_stack(tmp_path)
assert "vue3" in stack
assert "fastapi" in stack
def test_detect_deep_monorepo(tmp_path):
"""Test that files nested 2-3 levels deep are found (like vdolipoperek)."""
fe = tmp_path / "frontend" / "src"
fe.mkdir(parents=True)
(tmp_path / "frontend" / "package.json").write_text(json.dumps({
"dependencies": {"vue": "^3.4"},
"devDependencies": {"vite": "^5.0", "tailwindcss": "^3.4"},
}))
(tmp_path / "frontend" / "vite.config.js").write_text("export default {}")
(tmp_path / "frontend" / "tailwind.config.js").write_text("module.exports = {}")
be = tmp_path / "backend-pg" / "src"
be.mkdir(parents=True)
(be / "index.js").write_text("const express = require('express');")
stack = detect_tech_stack(tmp_path)
assert "vue3" in stack
assert "vite" in stack
assert "tailwind" in stack
def test_detect_empty_dir(tmp_path):
assert detect_tech_stack(tmp_path) == []
# ---------------------------------------------------------------------------
# Module detection
# ---------------------------------------------------------------------------
def test_detect_modules_vue(tmp_path):
src = tmp_path / "src"
(src / "components" / "search").mkdir(parents=True)
(src / "components" / "search" / "Search.vue").write_text("<template></template>")
(src / "components" / "search" / "SearchFilter.vue").write_text("<template></template>")
(src / "api" / "auth").mkdir(parents=True)
(src / "api" / "auth" / "login.ts").write_text("import express from 'express';\nconst router = express.Router();")
modules = detect_modules(tmp_path)
names = {m["name"] for m in modules}
assert "components" in names or "search" in names
assert "api" in names or "auth" in names
def test_detect_modules_empty(tmp_path):
assert detect_modules(tmp_path) == []
def test_detect_modules_backend_pg(tmp_path):
"""Test detection in backend-pg/src/ pattern (like vdolipoperek)."""
src = tmp_path / "backend-pg" / "src" / "services"
src.mkdir(parents=True)
(src / "tourMapper.js").write_text("const express = require('express');")
(src / "dbService.js").write_text("module.exports = { query };")
modules = detect_modules(tmp_path)
assert any(m["name"] == "services" for m in modules)
def test_detect_modules_monorepo(tmp_path):
"""Full monorepo: frontend/src/ + backend-pg/src/."""
# Frontend
fe_views = tmp_path / "frontend" / "src" / "views"
fe_views.mkdir(parents=True)
(fe_views / "Hotel.vue").write_text("<template></template>")
fe_comp = tmp_path / "frontend" / "src" / "components"
fe_comp.mkdir(parents=True)
(fe_comp / "Search.vue").write_text("<template></template>")
# Backend
be_svc = tmp_path / "backend-pg" / "src" / "services"
be_svc.mkdir(parents=True)
(be_svc / "db.js").write_text("const express = require('express');")
be_routes = tmp_path / "backend-pg" / "src" / "routes"
be_routes.mkdir(parents=True)
(be_routes / "api.js").write_text("const router = require('express').Router();")
modules = detect_modules(tmp_path)
names = {m["name"] for m in modules}
assert "views" in names
assert "components" in names
assert "services" in names
assert "routes" in names
# Check types
types = {m["name"]: m["type"] for m in modules}
assert types["views"] == "frontend"
assert types["components"] == "frontend"
# ---------------------------------------------------------------------------
# Decisions from CLAUDE.md
# ---------------------------------------------------------------------------
def test_extract_decisions(tmp_path):
(tmp_path / "CLAUDE.md").write_text("""# Project
## Rules
- Use WAL mode for SQLite
ВАЖНО: docker-compose v1 глючит только raw docker commands
WORKAROUND: position:fixed breaks on iOS Safari, use transform instead
GOTCHA: Sletat API бан при параллельных запросах
FIXME: race condition in useSearch composable
## Known Issues
- Mobile bottom-sheet не работает в landscape mode
- CSS grid fallback для IE11 (но мы его не поддерживаем)
""")
decisions = extract_decisions_from_claude_md(tmp_path, "myproj", "My Project")
assert len(decisions) >= 4
types = {d["type"] for d in decisions}
assert "gotcha" in types
assert "workaround" in types
def test_extract_decisions_no_claude_md(tmp_path):
assert extract_decisions_from_claude_md(tmp_path) == []
def test_extract_decisions_filters_unrelated_sections(tmp_path):
"""Sections about Jitsi, Nextcloud, Prosody should be skipped."""
(tmp_path / "CLAUDE.md").write_text("""# vdolipoperek
## Known Issues
1. **Hotel ID mismatch** Sletat GetTours vs GetHotels разные ID
2. **db.js export** module.exports = pool (НЕ { pool })
## Jitsi + Nextcloud интеграция (2026-03-04)
ВАЖНО: JWT_APP_SECRET must be synced between Prosody and Nextcloud
GOTCHA: focus.meet.jitsi must be pinned in custom-config.js
## Prosody config
ВАЖНО: conf.d files принадлежат root писать через docker exec
## Git Sync (2026-03-03)
ВАЖНО: Все среды синхронизированы на коммите 4ee5603
""")
decisions = extract_decisions_from_claude_md(tmp_path, "vdol", "vdolipoperek")
titles = [d["title"] for d in decisions]
# Should have the real known issues
assert any("Hotel ID mismatch" in t for t in titles)
assert any("db.js export" in t for t in titles)
# Should NOT have Jitsi/Prosody/Nextcloud noise
assert not any("JWT_APP_SECRET" in t for t in titles)
assert not any("focus.meet.jitsi" in t for t in titles)
assert not any("conf.d files" in t for t in titles)
def test_extract_decisions_filters_noise(tmp_path):
"""Commit hashes and shell commands should not be decisions."""
(tmp_path / "CLAUDE.md").write_text("""# Project
## Known Issues
1. **Real bug** actual architectural issue that matters
- docker exec -it prosody bash
- ssh dev "cd /opt/project && git pull"
""")
decisions = extract_decisions_from_claude_md(tmp_path)
titles = [d["title"] for d in decisions]
assert any("Real bug" in t for t in titles)
# Shell commands should be filtered
assert not any("docker exec" in t for t in titles)
assert not any("ssh dev" in t for t in titles)
# ---------------------------------------------------------------------------
# Obsidian vault
# ---------------------------------------------------------------------------
def test_scan_obsidian(tmp_path):
# Create a mock vault
vault = tmp_path / "vault"
proj_dir = vault / "myproject"
proj_dir.mkdir(parents=True)
(proj_dir / "kanban.md").write_text("""---
kanban-plugin: board
---
## В работе
- [ ] Fix login page
- [ ] Add search filter
- [x] Setup CI/CD
## Done
- [x] Initial deploy
**ВАЖНО:** Не забыть обновить SSL сертификат
""")
(proj_dir / "notes.md").write_text("""# Notes
GOTCHA: API rate limit is 10 req/s
- [ ] Write tests for auth module
""")
result = scan_obsidian(vault, "myproject", "My Project", "myproject")
assert result["files_scanned"] == 2
assert len(result["tasks"]) >= 4 # 3 pending + at least 1 done
assert len(result["decisions"]) >= 1 # At least the ВАЖНО one
pending = [t for t in result["tasks"] if not t["done"]]
done = [t for t in result["tasks"] if t["done"]]
assert len(pending) >= 3
assert len(done) >= 1
def test_scan_obsidian_no_match(tmp_path):
vault = tmp_path / "vault"
vault.mkdir()
(vault / "other.md").write_text("# Unrelated note\nSomething else.")
result = scan_obsidian(vault, "myproject", "My Project")
assert result["files_scanned"] == 0
assert result["tasks"] == []
def test_find_vault_root_explicit(tmp_path):
vault = tmp_path / "vault"
vault.mkdir()
assert find_vault_root(vault) == vault
def test_find_vault_root_none():
assert find_vault_root(Path("/nonexistent/path")) is None
# ---------------------------------------------------------------------------
# Save to DB
# ---------------------------------------------------------------------------
def test_save_to_db(tmp_path):
conn = init_db(":memory:")
save_to_db(
conn,
project_id="test",
name="Test Project",
path=str(tmp_path),
tech_stack=["python", "fastapi"],
modules=[
{"name": "api", "type": "backend", "path": "src/api/", "file_count": 5},
{"name": "ui", "type": "frontend", "path": "src/ui/", "file_count": 8},
],
decisions=[
{"type": "gotcha", "title": "Bug X", "description": "desc",
"category": "ui"},
],
obsidian={
"tasks": [
{"title": "Fix login", "done": False, "source": "kanban"},
{"title": "Setup CI", "done": True, "source": "kanban"},
],
"decisions": [
{"type": "gotcha", "title": "API limit", "description": "10 req/s",
"category": "api", "source": "notes"},
],
"files_scanned": 2,
},
)
p = models.get_project(conn, "test")
assert p is not None
assert p["tech_stack"] == ["python", "fastapi"]
mods = models.get_modules(conn, "test")
assert len(mods) == 2
decs = models.get_decisions(conn, "test")
assert len(decs) == 2 # 1 from CLAUDE.md + 1 from Obsidian
tasks = models.list_tasks(conn, project_id="test")
assert len(tasks) == 2 # 2 from Obsidian
assert any(t["status"] == "done" for t in tasks)
assert any(t["status"] == "pending" for t in tasks)
conn.close()
# ---------------------------------------------------------------------------
# format_preview
# ---------------------------------------------------------------------------
def test_format_preview():
text = format_preview(
"vdol", "ВДОЛЬ", "~/projects/vdol",
["vue3", "typescript"],
[{"name": "search", "type": "frontend", "path": "src/search/", "file_count": 4}],
[{"type": "gotcha", "title": "Safari bug"}],
{"files_scanned": 3, "tasks": [
{"title": "Fix X", "done": False, "source": "kb"},
], "decisions": []},
)
assert "vue3" in text
assert "search" in text
assert "Safari bug" in text
assert "Fix X" in text
# ---------------------------------------------------------------------------
# CLI integration
# ---------------------------------------------------------------------------
def test_cli_bootstrap(tmp_path):
# Create a minimal project to bootstrap
proj = tmp_path / "myproj"
proj.mkdir()
(proj / "package.json").write_text(json.dumps({
"dependencies": {"vue": "^3.0"},
}))
src = proj / "src" / "components"
src.mkdir(parents=True)
(src / "App.vue").write_text("<template></template>")
db_path = tmp_path / "test.db"
runner = CliRunner()
result = runner.invoke(cli, [
"--db", str(db_path),
"bootstrap", str(proj),
"--id", "myproj",
"--name", "My Project",
"--vault", str(tmp_path / "nonexistent_vault"),
"-y",
])
assert result.exit_code == 0
assert "vue3" in result.output
assert "Saved:" in result.output
# Verify in DB
conn = init_db(db_path)
p = models.get_project(conn, "myproj")
assert p is not None
assert "vue3" in p["tech_stack"]
conn.close()
def test_cli_bootstrap_already_exists(tmp_path):
proj = tmp_path / "myproj"
proj.mkdir()
db_path = tmp_path / "test.db"
runner = CliRunner()
# Create project first
runner.invoke(cli, ["--db", str(db_path), "project", "add", "myproj", "X", str(proj)])
# Try bootstrap — should fail
result = runner.invoke(cli, [
"--db", str(db_path),
"bootstrap", str(proj),
"--id", "myproj", "--name", "X", "-y",
])
assert result.exit_code == 1
assert "already exists" in result.output

207
tests/test_cli.py Normal file

@@ -0,0 +1,207 @@
"""Tests for cli/main.py using click's CliRunner with a per-test temp-file DB."""
import json
import tempfile
from pathlib import Path
import pytest
from click.testing import CliRunner
from cli.main import cli
@pytest.fixture
def runner(tmp_path):
"""CliRunner that uses a temp DB file."""
db_path = tmp_path / "test.db"
return CliRunner(), ["--db", str(db_path)]
def invoke(runner_tuple, args):
runner, base = runner_tuple
result = runner.invoke(cli, base + args)
return result
# -- project --
def test_project_add_and_list(runner):
r = invoke(runner, ["project", "add", "vdol", "В долю поперёк",
"~/projects/vdolipoperek", "--tech-stack", '["vue3","nuxt"]'])
assert r.exit_code == 0
assert "vdol" in r.output
r = invoke(runner, ["project", "list"])
assert r.exit_code == 0
assert "vdol" in r.output
assert "В долю поперёк" in r.output
def test_project_list_empty(runner):
r = invoke(runner, ["project", "list"])
assert r.exit_code == 0
assert "No projects" in r.output
def test_project_list_filter_status(runner):
invoke(runner, ["project", "add", "a", "A", "/a", "--status", "active"])
invoke(runner, ["project", "add", "b", "B", "/b", "--status", "paused"])
r = invoke(runner, ["project", "list", "--status", "active"])
assert "a" in r.output
assert "b" not in r.output
def test_project_show(runner):
invoke(runner, ["project", "add", "vdol", "В долю", "/vdol",
"--tech-stack", '["vue3"]', "--priority", "2"])
r = invoke(runner, ["project", "show", "vdol"])
assert r.exit_code == 0
assert "vue3" in r.output
assert "Priority: 2" in r.output
def test_project_show_not_found(runner):
r = invoke(runner, ["project", "show", "nope"])
assert r.exit_code == 1
assert "not found" in r.output
# -- task --
def test_task_add_and_list(runner):
invoke(runner, ["project", "add", "p1", "P1", "/p1"])
r = invoke(runner, ["task", "add", "p1", "Fix login bug", "--type", "debug"])
assert r.exit_code == 0
assert "P1-001" in r.output
r = invoke(runner, ["task", "add", "p1", "Add search"])
assert "P1-002" in r.output
r = invoke(runner, ["task", "list"])
assert "P1-001" in r.output
assert "P1-002" in r.output
def test_task_add_project_not_found(runner):
r = invoke(runner, ["task", "add", "nope", "Some task"])
assert r.exit_code == 1
assert "not found" in r.output
def test_task_list_filter(runner):
invoke(runner, ["project", "add", "p1", "P1", "/p1"])
invoke(runner, ["project", "add", "p2", "P2", "/p2"])
invoke(runner, ["task", "add", "p1", "A"])
invoke(runner, ["task", "add", "p2", "B"])
r = invoke(runner, ["task", "list", "--project", "p1"])
assert "P1-001" in r.output
assert "P2-001" not in r.output
def test_task_show(runner):
invoke(runner, ["project", "add", "p1", "P1", "/p1"])
invoke(runner, ["task", "add", "p1", "Fix bug", "--type", "debug"])
r = invoke(runner, ["task", "show", "P1-001"])
assert r.exit_code == 0
assert "Fix bug" in r.output
def test_task_show_not_found(runner):
r = invoke(runner, ["task", "show", "X-999"])
assert r.exit_code == 1
assert "not found" in r.output
# -- decision --
def test_decision_add_and_list(runner):
invoke(runner, ["project", "add", "p1", "P1", "/p1"])
r = invoke(runner, ["decision", "add", "p1", "gotcha",
"Safari bug", "position:fixed breaks",
"--category", "ui", "--tags", '["ios","css"]'])
assert r.exit_code == 0
assert "gotcha" in r.output
r = invoke(runner, ["decision", "list", "p1"])
assert "Safari bug" in r.output
def test_decision_list_filter(runner):
invoke(runner, ["project", "add", "p1", "P1", "/p1"])
invoke(runner, ["decision", "add", "p1", "gotcha", "A", "a", "--category", "ui"])
invoke(runner, ["decision", "add", "p1", "decision", "B", "b", "--category", "arch"])
r = invoke(runner, ["decision", "list", "p1", "--type", "gotcha"])
assert "A" in r.output
assert "B" not in r.output
# -- module --
def test_module_add_and_list(runner):
invoke(runner, ["project", "add", "p1", "P1", "/p1"])
r = invoke(runner, ["module", "add", "p1", "search", "frontend", "src/search/",
"--description", "Search UI"])
assert r.exit_code == 0
assert "search" in r.output
r = invoke(runner, ["module", "list", "p1"])
assert "search" in r.output
assert "Search UI" in r.output
# -- status --
def test_status_all(runner):
invoke(runner, ["project", "add", "p1", "P1", "/p1"])
invoke(runner, ["task", "add", "p1", "A"])
invoke(runner, ["task", "add", "p1", "B"])
r = invoke(runner, ["status"])
assert r.exit_code == 0
assert "p1" in r.output
assert "2" in r.output # total tasks
def test_status_single_project(runner):
invoke(runner, ["project", "add", "p1", "P1", "/p1"])
invoke(runner, ["task", "add", "p1", "A"])
r = invoke(runner, ["status", "p1"])
assert r.exit_code == 0
assert "P1-001" in r.output
assert "pending" in r.output
def test_status_not_found(runner):
r = invoke(runner, ["status", "nope"])
assert r.exit_code == 1
assert "not found" in r.output
# -- cost --
def test_cost_empty(runner):
r = invoke(runner, ["cost"])
assert r.exit_code == 0
assert "No agent runs" in r.output
def test_cost_with_data(runner):
invoke(runner, ["project", "add", "p1", "P1", "/p1"])
# Insert agent log directly via models (no CLI command for this)
from core.db import init_db
from core import models as m
# Re-open the same DB the runner uses
db_path = runner[1][1]
conn = init_db(Path(db_path))
m.log_agent_run(conn, "p1", "dev", "implement",
cost_usd=0.10, tokens_used=5000)
conn.close()
r = invoke(runner, ["cost", "--last", "7d"])
assert r.exit_code == 0
assert "p1" in r.output
assert "$0.1000" in r.output

163
tests/test_context_builder.py Normal file

@@ -0,0 +1,163 @@
"""Tests for core/context_builder.py — context assembly per role."""
import pytest
from core.db import init_db
from core import models
from core.context_builder import build_context, format_prompt
@pytest.fixture
def conn():
c = init_db(":memory:")
# Seed project, modules, decisions, tasks
models.create_project(c, "vdol", "ВДОЛЬ и ПОПЕРЕК", "~/projects/vdolipoperek",
tech_stack=["vue3", "typescript", "nodejs"])
models.add_module(c, "vdol", "search", "frontend", "src/search/")
models.add_module(c, "vdol", "api", "backend", "src/api/")
models.add_decision(c, "vdol", "gotcha", "Safari bug",
"position:fixed breaks", category="ui", tags=["ios"])
models.add_decision(c, "vdol", "workaround", "API rate limit",
"10 req/s max", category="api")
models.add_decision(c, "vdol", "convention", "Use WAL mode",
"Always use WAL for SQLite", category="architecture")
models.add_decision(c, "vdol", "decision", "Auth required",
"All endpoints need auth", category="security")
models.create_task(c, "VDOL-001", "vdol", "Fix search filters",
brief={"module": "search", "route_type": "debug"})
models.create_task(c, "VDOL-002", "vdol", "Add payments",
status="in_progress")
yield c
c.close()
class TestBuildContext:
def test_pm_gets_everything(self, conn):
ctx = build_context(conn, "VDOL-001", "pm", "vdol")
assert ctx["task"]["id"] == "VDOL-001"
assert ctx["project"]["id"] == "vdol"
assert len(ctx["modules"]) == 2
assert len(ctx["decisions"]) == 4 # all decisions
assert len(ctx["active_tasks"]) == 1 # VDOL-002 in_progress
assert "pm" in ctx["available_specialists"]
def test_architect_gets_all_decisions_and_modules(self, conn):
ctx = build_context(conn, "VDOL-001", "architect", "vdol")
assert len(ctx["modules"]) == 2
assert len(ctx["decisions"]) == 4
def test_debugger_gets_only_gotcha_workaround(self, conn):
ctx = build_context(conn, "VDOL-001", "debugger", "vdol")
types = {d["type"] for d in ctx["decisions"]}
assert types <= {"gotcha", "workaround"}
assert "convention" not in types
assert "decision" not in types
assert ctx["module_hint"] == "search"
def test_frontend_dev_gets_gotcha_workaround_convention(self, conn):
ctx = build_context(conn, "VDOL-001", "frontend_dev", "vdol")
types = {d["type"] for d in ctx["decisions"]}
assert "gotcha" in types
assert "workaround" in types
assert "convention" in types
assert "decision" not in types # plain decisions excluded
def test_backend_dev_same_as_frontend(self, conn):
ctx = build_context(conn, "VDOL-001", "backend_dev", "vdol")
types = {d["type"] for d in ctx["decisions"]}
assert types == {"gotcha", "workaround", "convention"}
def test_reviewer_gets_only_conventions(self, conn):
ctx = build_context(conn, "VDOL-001", "reviewer", "vdol")
types = {d["type"] for d in ctx["decisions"]}
assert types == {"convention"}
def test_tester_gets_minimal_context(self, conn):
ctx = build_context(conn, "VDOL-001", "tester", "vdol")
assert ctx["task"] is not None
assert ctx["project"] is not None
assert "decisions" not in ctx
assert "modules" not in ctx
def test_security_gets_security_decisions(self, conn):
ctx = build_context(conn, "VDOL-001", "security", "vdol")
categories = {d.get("category") for d in ctx["decisions"]}
assert categories == {"security"}
def test_unknown_role_gets_fallback(self, conn):
ctx = build_context(conn, "VDOL-001", "unknown_role", "vdol")
assert "decisions" in ctx
assert len(ctx["decisions"]) > 0
class TestFormatPrompt:
def test_format_with_template(self, conn):
ctx = build_context(conn, "VDOL-001", "debugger", "vdol")
prompt = format_prompt(ctx, "debugger", "You are a debugger. Find bugs.")
assert "You are a debugger" in prompt
assert "VDOL-001" in prompt
assert "Fix search filters" in prompt
assert "vdol" in prompt
assert "vue3" in prompt
def test_format_includes_decisions(self, conn):
ctx = build_context(conn, "VDOL-001", "debugger", "vdol")
prompt = format_prompt(ctx, "debugger", "Debug this.")
assert "Safari bug" in prompt
assert "API rate limit" in prompt
# Convention should NOT be here (debugger doesn't get it)
assert "WAL mode" not in prompt
def test_format_pm_includes_specialists(self, conn):
ctx = build_context(conn, "VDOL-001", "pm", "vdol")
prompt = format_prompt(ctx, "pm", "You are PM.")
assert "Available specialists" in prompt
assert "debugger" in prompt
assert "Active tasks" in prompt
assert "VDOL-002" in prompt
def test_format_with_previous_output(self, conn):
ctx = build_context(conn, "VDOL-001", "tester", "vdol")
ctx["previous_output"] = "Found race condition in useSearch.ts"
prompt = format_prompt(ctx, "tester", "Write tests.")
assert "Previous step output" in prompt
assert "race condition" in prompt
def test_format_loads_prompt_file(self, conn):
ctx = build_context(conn, "VDOL-001", "pm", "vdol")
prompt = format_prompt(ctx, "pm") # Should load from agents/prompts/pm.md
assert "decompose" in prompt.lower() or "pipeline" in prompt.lower()
def test_format_missing_prompt_file(self, conn):
ctx = build_context(conn, "VDOL-001", "analyst", "vdol")
prompt = format_prompt(ctx, "analyst") # No analyst.md exists
assert "analyst" in prompt.lower()
def test_format_includes_language_ru(self, conn):
ctx = build_context(conn, "VDOL-001", "debugger", "vdol")
prompt = format_prompt(ctx, "debugger", "Debug.")
assert "## Language" in prompt
assert "Russian" in prompt
assert "ALWAYS respond in Russian" in prompt
def test_format_includes_language_en(self, conn):
# Update project language to en
conn.execute("UPDATE projects SET language='en' WHERE id='vdol'")
conn.commit()
ctx = build_context(conn, "VDOL-001", "debugger", "vdol")
prompt = format_prompt(ctx, "debugger", "Debug.")
assert "ALWAYS respond in English" in prompt
class TestLanguageInProject:
def test_project_has_language_default(self, conn):
p = models.get_project(conn, "vdol")
assert p["language"] == "ru"
def test_create_project_with_language(self, conn):
p = models.create_project(conn, "en-proj", "English Project", "/en",
language="en")
assert p["language"] == "en"
def test_context_carries_language(self, conn):
ctx = build_context(conn, "VDOL-001", "pm", "vdol")
assert ctx["project"]["language"] == "ru"
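The language tests above pin down an output contract for `format_prompt`: a `## Language` heading plus an "ALWAYS respond in …" directive derived from the project's `language` field. A minimal sketch of that contract (illustrative only; the helper name and mapping are assumptions, the real logic lives in the context/prompt module):

```python
# Hypothetical helper reproducing the "## Language" section the tests
# above assert on; not the actual implementation.
LANG_NAMES = {"ru": "Russian", "en": "English"}

def append_language_section(prompt: str, lang_code: str) -> str:
    """Append a language directive matching the test expectations."""
    name = LANG_NAMES.get(lang_code, "English")
    return f"{prompt}\n\n## Language\nALWAYS respond in {name}."

p = append_language_section("You are a debugger.", "ru")
assert "## Language" in p and "ALWAYS respond in Russian" in p
```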

tests/test_followup.py (new file, +224)

@@ -0,0 +1,224 @@

"""Tests for core/followup.py — follow-up task generation with permission handling."""
import json
import pytest
from unittest.mock import patch, MagicMock
from core.db import init_db
from core import models
from core.followup import (
generate_followups, resolve_pending_action,
_collect_pipeline_output, _next_task_id, _is_permission_blocked,
)
@pytest.fixture
def conn():
c = init_db(":memory:")
models.create_project(c, "vdol", "ВДОЛЬ", "~/projects/vdolipoperek",
tech_stack=["vue3"], language="ru")
models.create_task(c, "VDOL-001", "vdol", "Security audit",
status="done", brief={"route_type": "security_audit"})
models.log_agent_run(c, "vdol", "security", "execute",
task_id="VDOL-001",
output_summary=json.dumps({
"summary": "8 уязвимостей найдено",
"findings": [
{"severity": "HIGH", "title": "Admin endpoint без auth",
"file": "index.js", "line": 42},
{"severity": "MEDIUM", "title": "Нет rate limiting на login",
"file": "auth.js", "line": 15},
],
}, ensure_ascii=False),
success=True)
yield c
c.close()
class TestCollectPipelineOutput:
def test_collects_all_steps(self, conn):
output = _collect_pipeline_output(conn, "VDOL-001")
assert "security" in output
assert "Admin endpoint" in output
def test_empty_for_no_logs(self, conn):
assert _collect_pipeline_output(conn, "NONEXISTENT") == ""
class TestNextTaskId:
def test_increments(self, conn):
assert _next_task_id(conn, "vdol") == "VDOL-002"
def test_handles_obs_ids(self, conn):
models.create_task(conn, "VDOL-OBS-001", "vdol", "Obsidian task")
assert _next_task_id(conn, "vdol") == "VDOL-002"
class TestIsPermissionBlocked:
def test_detects_permission_denied(self):
assert _is_permission_blocked({"title": "Fix X", "brief": "permission denied on write"})
def test_detects_manual_application_ru(self):
assert _is_permission_blocked({"title": "Ручное применение фикса для auth.js"})
def test_detects_no_write_permission_ru(self):
assert _is_permission_blocked({"title": "X", "brief": "не получили разрешение на запись"})
def test_detects_read_only(self):
assert _is_permission_blocked({"title": "Apply manually", "brief": "file is read-only"})
def test_normal_item_not_blocked(self):
assert not _is_permission_blocked({"title": "Fix admin auth", "brief": "Add requireAuth"})
def test_empty_item(self):
assert not _is_permission_blocked({})
class TestGenerateFollowups:
@patch("agents.runner._run_claude")
def test_creates_followup_tasks(self, mock_claude, conn):
mock_claude.return_value = {
"output": json.dumps([
{"title": "Fix admin auth", "type": "hotfix", "priority": 2,
"brief": "Add requireAuth to admin endpoints"},
{"title": "Add rate limiting", "type": "feature", "priority": 4,
"brief": "Rate limit login to 5/15min"},
]),
"returncode": 0,
}
result = generate_followups(conn, "VDOL-001")
assert len(result["created"]) == 2
assert len(result["pending_actions"]) == 0
assert result["created"][0]["id"] == "VDOL-002"
assert result["created"][0]["parent_task_id"] == "VDOL-001"
@patch("agents.runner._run_claude")
def test_separates_permission_items(self, mock_claude, conn):
mock_claude.return_value = {
"output": json.dumps([
{"title": "Fix admin auth", "type": "hotfix", "priority": 2,
"brief": "Add requireAuth"},
{"title": "Ручное применение .dockerignore",
"type": "hotfix", "priority": 3,
"brief": "Не получили разрешение на запись в файл"},
{"title": "Apply CSP headers manually",
"type": "feature", "priority": 4,
"brief": "Permission denied writing nginx.conf"},
]),
"returncode": 0,
}
result = generate_followups(conn, "VDOL-001")
assert len(result["created"]) == 1 # Only "Fix admin auth"
assert result["created"][0]["title"] == "Fix admin auth"
assert len(result["pending_actions"]) == 2
assert result["pending_actions"][0]["type"] == "permission_fix"
assert "options" in result["pending_actions"][0]
assert "rerun" in result["pending_actions"][0]["options"]
@patch("agents.runner._run_claude")
def test_handles_empty_response(self, mock_claude, conn):
mock_claude.return_value = {"output": "[]", "returncode": 0}
result = generate_followups(conn, "VDOL-001")
assert result["created"] == []
assert result["pending_actions"] == []
@patch("agents.runner._run_claude")
def test_handles_wrapped_response(self, mock_claude, conn):
mock_claude.return_value = {
"output": json.dumps({"tasks": [
{"title": "Fix X", "priority": 3},
]}),
"returncode": 0,
}
result = generate_followups(conn, "VDOL-001")
assert len(result["created"]) == 1
@patch("agents.runner._run_claude")
def test_handles_invalid_json(self, mock_claude, conn):
mock_claude.return_value = {"output": "not json", "returncode": 0}
result = generate_followups(conn, "VDOL-001")
assert result["created"] == []
def test_no_logs_returns_empty(self, conn):
models.create_task(conn, "VDOL-999", "vdol", "Empty task")
result = generate_followups(conn, "VDOL-999")
assert result["created"] == []
def test_nonexistent_task(self, conn):
result = generate_followups(conn, "NOPE")
assert result["created"] == []
def test_dry_run(self, conn):
result = generate_followups(conn, "VDOL-001", dry_run=True)
assert len(result["created"]) == 1
assert result["created"][0]["_dry_run"] is True
@patch("agents.runner._run_claude")
def test_logs_generation(self, mock_claude, conn):
mock_claude.return_value = {
"output": json.dumps([{"title": "Fix A", "priority": 2}]),
"returncode": 0,
}
generate_followups(conn, "VDOL-001")
logs = conn.execute(
"SELECT * FROM agent_logs WHERE agent_role='followup_pm'"
).fetchall()
assert len(logs) == 1
@patch("agents.runner._run_claude")
def test_prompt_includes_language(self, mock_claude, conn):
mock_claude.return_value = {"output": "[]", "returncode": 0}
generate_followups(conn, "VDOL-001")
prompt = mock_claude.call_args[0][0]
assert "Russian" in prompt
class TestResolvePendingAction:
def test_skip_returns_none(self, conn):
action = {"type": "permission_fix", "original_item": {"title": "X"}}
assert resolve_pending_action(conn, "VDOL-001", action, "skip") is None
def test_manual_task_creates_task(self, conn):
action = {
"type": "permission_fix",
"original_item": {"title": "Fix .dockerignore", "type": "hotfix",
"priority": 3, "brief": "Create .dockerignore"},
}
result = resolve_pending_action(conn, "VDOL-001", action, "manual_task")
assert result is not None
assert result["title"] == "Fix .dockerignore"
assert result["parent_task_id"] == "VDOL-001"
assert result["priority"] == 3
@patch("agents.runner._run_claude")
def test_rerun_launches_pipeline(self, mock_claude, conn):
mock_claude.return_value = {
"output": json.dumps({"result": "applied fix"}),
"returncode": 0,
}
action = {
"type": "permission_fix",
"original_item": {"title": "Fix X", "type": "frontend_dev",
"brief": "Apply the fix"},
}
result = resolve_pending_action(conn, "VDOL-001", action, "rerun")
assert "rerun_result" in result
# _run_claude is mocked above the subprocess layer, so the
# --dangerously-skip-permissions flag itself is not observable here.
# The rerun path calls run_agent(..., allow_write=True), which passes
# allow_write=True down to _run_claude, where the flag is added.
assert mock_claude.called
assert result["rerun_result"]["success"] is True
def test_nonexistent_task(self, conn):
action = {"type": "permission_fix", "original_item": {}}
assert resolve_pending_action(conn, "NOPE", action, "skip") is None
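The `TestIsPermissionBlocked` cases above define the detection behaviour by example. A sketch of how a regex-based `_is_permission_blocked` could satisfy them (hypothetical: the real function in `core/followup.py` matches 9 patterns, only the ones exercised by these tests are reproduced):

```python
import re

# Illustrative pattern list; a subset of what the real detector uses.
_PERMISSION_PATTERNS = [
    r"permission denied",
    r"ручное\s+применение",          # "manual application"
    r"не\s+получили\s+разрешение",   # "did not get permission"
    r"read-only",
    r"apply\s+manually",
]

def is_permission_blocked(item: dict) -> bool:
    """True if the item's title/brief mentions a permission blocker."""
    text = f"{item.get('title', '')} {item.get('brief', '')}".lower()
    return any(re.search(p, text) for p in _PERMISSION_PATTERNS)

assert is_permission_blocked({"title": "Ручное применение фикса"})
assert not is_permission_blocked({"title": "Fix admin auth", "brief": "Add requireAuth"})
```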

tests/test_models.py (new file, +240)

@@ -0,0 +1,240 @@
"""Tests for core/models.py — all functions, in-memory SQLite."""
import pytest
from core.db import init_db
from core import models
@pytest.fixture
def conn():
"""Fresh in-memory DB for each test."""
c = init_db(db_path=":memory:")
yield c
c.close()
# -- Projects --
def test_create_and_get_project(conn):
p = models.create_project(conn, "vdol", "В долю поперёк", "~/projects/vdolipoperek",
tech_stack=["vue3", "nuxt"])
assert p["id"] == "vdol"
assert p["tech_stack"] == ["vue3", "nuxt"]
assert p["status"] == "active"
fetched = models.get_project(conn, "vdol")
assert fetched["name"] == "В долю поперёк"
def test_get_project_not_found(conn):
assert models.get_project(conn, "nope") is None
def test_list_projects_filter(conn):
models.create_project(conn, "a", "A", "/a", status="active")
models.create_project(conn, "b", "B", "/b", status="paused")
models.create_project(conn, "c", "C", "/c", status="active")
assert len(models.list_projects(conn)) == 3
assert len(models.list_projects(conn, status="active")) == 2
assert len(models.list_projects(conn, status="paused")) == 1
def test_update_project(conn):
models.create_project(conn, "x", "X", "/x", priority=5)
updated = models.update_project(conn, "x", priority=1, status="maintenance")
assert updated["priority"] == 1
assert updated["status"] == "maintenance"
def test_update_project_tech_stack_json(conn):
models.create_project(conn, "x", "X", "/x", tech_stack=["python"])
updated = models.update_project(conn, "x", tech_stack=["python", "fastapi"])
assert updated["tech_stack"] == ["python", "fastapi"]
# -- Tasks --
def test_create_and_get_task(conn):
models.create_project(conn, "p1", "P1", "/p1")
t = models.create_task(conn, "P1-001", "p1", "Fix bug",
brief={"summary": "broken login"})
assert t["id"] == "P1-001"
assert t["brief"] == {"summary": "broken login"}
assert t["status"] == "pending"
def test_list_tasks_filters(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.create_project(conn, "p2", "P2", "/p2")
models.create_task(conn, "P1-001", "p1", "Task A", status="pending")
models.create_task(conn, "P1-002", "p1", "Task B", status="done")
models.create_task(conn, "P2-001", "p2", "Task C", status="pending")
assert len(models.list_tasks(conn)) == 3
assert len(models.list_tasks(conn, project_id="p1")) == 2
assert len(models.list_tasks(conn, status="pending")) == 2
assert len(models.list_tasks(conn, project_id="p1", status="done")) == 1
def test_update_task(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.create_task(conn, "P1-001", "p1", "Task")
updated = models.update_task(conn, "P1-001", status="in_progress",
spec={"steps": [1, 2, 3]})
assert updated["status"] == "in_progress"
assert updated["spec"] == {"steps": [1, 2, 3]}
assert updated["updated_at"] is not None
def test_subtask(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.create_task(conn, "P1-001", "p1", "Parent")
child = models.create_task(conn, "P1-001a", "p1", "Child",
parent_task_id="P1-001")
assert child["parent_task_id"] == "P1-001"
# -- Decisions --
def test_add_and_get_decisions(conn):
models.create_project(conn, "p1", "P1", "/p1")
d = models.add_decision(conn, "p1", "gotcha", "iOS Safari bottom sheet",
"position:fixed breaks on iOS Safari",
category="ui", tags=["ios-safari", "css"])
assert d["type"] == "gotcha"
assert d["tags"] == ["ios-safari", "css"]
results = models.get_decisions(conn, "p1")
assert len(results) == 1
def test_decisions_filter_by_category(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.add_decision(conn, "p1", "decision", "Use WAL", "perf",
category="architecture")
models.add_decision(conn, "p1", "gotcha", "Safari bug", "css",
category="ui")
assert len(models.get_decisions(conn, "p1", category="ui")) == 1
def test_decisions_filter_by_tags(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.add_decision(conn, "p1", "gotcha", "Bug A", "desc",
tags=["safari", "css"])
models.add_decision(conn, "p1", "gotcha", "Bug B", "desc",
tags=["chrome", "js"])
models.add_decision(conn, "p1", "gotcha", "Bug C", "desc",
tags=["safari", "js"])
assert len(models.get_decisions(conn, "p1", tags=["safari"])) == 2
assert len(models.get_decisions(conn, "p1", tags=["js"])) == 2
assert len(models.get_decisions(conn, "p1", tags=["css"])) == 1
def test_decisions_filter_by_types(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.add_decision(conn, "p1", "decision", "A", "a")
models.add_decision(conn, "p1", "gotcha", "B", "b")
models.add_decision(conn, "p1", "workaround", "C", "c")
assert len(models.get_decisions(conn, "p1", types=["gotcha", "workaround"])) == 2
def test_decisions_limit(conn):
models.create_project(conn, "p1", "P1", "/p1")
for i in range(10):
models.add_decision(conn, "p1", "decision", f"D{i}", f"desc{i}")
assert len(models.get_decisions(conn, "p1", limit=3)) == 3
# -- Modules --
def test_add_and_get_modules(conn):
models.create_project(conn, "p1", "P1", "/p1")
m = models.add_module(conn, "p1", "search", "frontend", "src/search/",
description="Search UI", dependencies=["auth"])
assert m["name"] == "search"
assert m["dependencies"] == ["auth"]
mods = models.get_modules(conn, "p1")
assert len(mods) == 1
# -- Agent Logs --
def test_log_agent_run(conn):
models.create_project(conn, "p1", "P1", "/p1")
log = models.log_agent_run(conn, "p1", "developer", "implement",
tokens_used=5000, model="sonnet",
cost_usd=0.015, duration_seconds=45)
assert log["agent_role"] == "developer"
assert log["cost_usd"] == 0.015
assert log["success"] == 1 # SQLite boolean
# -- Pipelines --
def test_create_and_update_pipeline(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.create_task(conn, "P1-001", "p1", "Task")
pipe = models.create_pipeline(conn, "P1-001", "p1", "feature",
[{"step": "architect"}, {"step": "dev"}])
assert pipe["status"] == "running"
assert pipe["steps"] == [{"step": "architect"}, {"step": "dev"}]
updated = models.update_pipeline(conn, pipe["id"], status="completed",
total_cost_usd=0.05, total_tokens=10000)
assert updated["status"] == "completed"
assert updated["completed_at"] is not None
# -- Support --
def test_create_and_list_tickets(conn):
models.create_project(conn, "p1", "P1", "/p1")
t = models.create_ticket(conn, "p1", "telegram_bot", "Не работает поиск",
client_id="tg:12345", classification="bug")
assert t["source"] == "telegram_bot"
assert t["status"] == "new"
tickets = models.list_tickets(conn, project_id="p1")
assert len(tickets) == 1
assert len(models.list_tickets(conn, status="resolved")) == 0
# -- Statistics --
def test_project_summary(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.create_task(conn, "P1-001", "p1", "A", status="done")
models.create_task(conn, "P1-002", "p1", "B", status="in_progress")
models.create_task(conn, "P1-003", "p1", "C", status="blocked")
summary = models.get_project_summary(conn)
assert len(summary) == 1
s = summary[0]
assert s["total_tasks"] == 3
assert s["done_tasks"] == 1
assert s["active_tasks"] == 1
assert s["blocked_tasks"] == 1
def test_cost_summary(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.log_agent_run(conn, "p1", "dev", "implement",
cost_usd=0.10, tokens_used=5000)
models.log_agent_run(conn, "p1", "reviewer", "review",
cost_usd=0.05, tokens_used=2000)
costs = models.get_cost_summary(conn, days=1)
assert len(costs) == 1
assert costs[0]["total_cost_usd"] == pytest.approx(0.15)
assert costs[0]["total_tokens"] == 7000
assert costs[0]["runs"] == 2
def test_cost_summary_empty(conn):
models.create_project(conn, "p1", "P1", "/p1")
assert models.get_cost_summary(conn, days=7) == []
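The tag-filter tests above imply overlap semantics: a decision matches when it shares at least one tag with the query. A minimal sketch of that rule (illustrative; the real `get_decisions` applies the filter in SQL, not in Python):

```python
# Hypothetical in-memory version of the tag-overlap match the
# filter tests rely on.
def match_tags(decision_tags: list[str], query_tags: list[str]) -> bool:
    return bool(set(decision_tags) & set(query_tags))

rows = [["safari", "css"], ["chrome", "js"], ["safari", "js"]]
assert sum(match_tags(t, ["safari"]) for t in rows) == 2
assert sum(match_tags(t, ["js"]) for t in rows) == 2
assert sum(match_tags(t, ["css"]) for t in rows) == 1
```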

tests/test_runner.py (new file, +276)

@@ -0,0 +1,276 @@
"""Tests for agents/runner.py — agent execution with mocked claude CLI."""
import json
import pytest
from unittest.mock import patch, MagicMock
from core.db import init_db
from core import models
from agents.runner import run_agent, run_pipeline, _try_parse_json
@pytest.fixture
def conn():
c = init_db(":memory:")
models.create_project(c, "vdol", "ВДОЛЬ", "~/projects/vdolipoperek",
tech_stack=["vue3"])
models.create_task(c, "VDOL-001", "vdol", "Fix bug",
brief={"route_type": "debug"})
yield c
c.close()
def _mock_claude_success(output_data):
"""Create a mock subprocess result with successful claude output."""
mock = MagicMock()
mock.stdout = json.dumps(output_data) if isinstance(output_data, dict) else output_data
mock.stderr = ""
mock.returncode = 0
return mock
def _mock_claude_failure(error_msg):
mock = MagicMock()
mock.stdout = ""
mock.stderr = error_msg
mock.returncode = 1
return mock
# ---------------------------------------------------------------------------
# run_agent
# ---------------------------------------------------------------------------
class TestRunAgent:
@patch("agents.runner.subprocess.run")
def test_successful_agent_run(self, mock_run, conn):
mock_run.return_value = _mock_claude_success({
"result": "Found race condition in useSearch.ts",
"usage": {"total_tokens": 5000},
"cost_usd": 0.015,
})
result = run_agent(conn, "debugger", "VDOL-001", "vdol")
assert result["success"] is True
assert result["role"] == "debugger"
assert result["model"] == "sonnet"
assert result["duration_seconds"] >= 0
# Verify claude was called with right args
call_args = mock_run.call_args
cmd = call_args[0][0]
assert "claude" in cmd[0]
assert "-p" in cmd
assert "--output-format" in cmd
assert "json" in cmd
@patch("agents.runner.subprocess.run")
def test_failed_agent_run(self, mock_run, conn):
mock_run.return_value = _mock_claude_failure("API error")
result = run_agent(conn, "debugger", "VDOL-001", "vdol")
assert result["success"] is False
# Should be logged in agent_logs
logs = conn.execute("SELECT * FROM agent_logs WHERE task_id='VDOL-001'").fetchall()
assert len(logs) == 1
assert logs[0]["success"] == 0
def test_dry_run_returns_prompt(self, conn):
result = run_agent(conn, "debugger", "VDOL-001", "vdol", dry_run=True)
assert result["dry_run"] is True
assert result["prompt"] is not None
assert "VDOL-001" in result["prompt"]
assert result["output"] is None
@patch("agents.runner.subprocess.run")
def test_agent_logs_to_db(self, mock_run, conn):
mock_run.return_value = _mock_claude_success({"result": "ok"})
run_agent(conn, "tester", "VDOL-001", "vdol")
logs = conn.execute("SELECT * FROM agent_logs WHERE agent_role='tester'").fetchall()
assert len(logs) == 1
assert logs[0]["project_id"] == "vdol"
@patch("agents.runner.subprocess.run")
def test_full_output_saved_to_db(self, mock_run, conn):
"""Bug fix: output_summary must contain the FULL output, not truncated."""
long_json = json.dumps({
"result": json.dumps({
"summary": "Security audit complete",
"findings": [{"title": f"Finding {i}", "severity": "HIGH"} for i in range(50)],
}),
})
mock = MagicMock()
mock.stdout = long_json
mock.stderr = ""
mock.returncode = 0
mock_run.return_value = mock
run_agent(conn, "security", "VDOL-001", "vdol")
logs = conn.execute("SELECT output_summary FROM agent_logs WHERE agent_role='security'").fetchall()
assert len(logs) == 1
output = logs[0]["output_summary"]
assert output is not None
assert len(output) > 1000 # Must not be truncated
# Should contain all 50 findings
assert "Finding 49" in output
@patch("agents.runner.subprocess.run")
def test_dict_output_saved_as_json_string(self, mock_run, conn):
"""When claude returns structured JSON, it must be saved as string."""
mock_run.return_value = _mock_claude_success({
"result": {"status": "ok", "files": ["a.py", "b.py"]},
})
result = run_agent(conn, "debugger", "VDOL-001", "vdol")
# output should be a string (JSON serialized), not a dict
assert isinstance(result["raw_output"], str)
logs = conn.execute("SELECT output_summary FROM agent_logs WHERE agent_role='debugger'").fetchall()
saved = logs[0]["output_summary"]
assert isinstance(saved, str)
assert "a.py" in saved
@patch("agents.runner.subprocess.run")
def test_previous_output_passed(self, mock_run, conn):
mock_run.return_value = _mock_claude_success({"result": "tests pass"})
run_agent(conn, "tester", "VDOL-001", "vdol",
previous_output="Found bug in line 42")
call_args = mock_run.call_args
prompt = call_args[0][0][2] # -p argument
assert "line 42" in prompt
# ---------------------------------------------------------------------------
# run_pipeline
# ---------------------------------------------------------------------------
class TestRunPipeline:
@patch("agents.runner.subprocess.run")
def test_successful_pipeline(self, mock_run, conn):
mock_run.return_value = _mock_claude_success({"result": "done"})
steps = [
{"role": "debugger", "brief": "find bug"},
{"role": "tester", "depends_on": "debugger", "brief": "verify"},
]
result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is True
assert result["steps_completed"] == 2
assert len(result["results"]) == 2
# Pipeline created in DB
pipe = conn.execute("SELECT * FROM pipelines WHERE task_id='VDOL-001'").fetchone()
assert pipe is not None
assert pipe["status"] == "completed"
# Task updated to review
task = models.get_task(conn, "VDOL-001")
assert task["status"] == "review"
@patch("agents.runner.subprocess.run")
def test_pipeline_fails_on_step(self, mock_run, conn):
# First step succeeds, second fails
mock_run.side_effect = [
_mock_claude_success({"result": "found bug"}),
_mock_claude_failure("compilation error"),
]
steps = [
{"role": "debugger", "brief": "find"},
{"role": "frontend_dev", "brief": "fix"},
{"role": "tester", "brief": "test"},
]
result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is False
assert result["steps_completed"] == 1 # Only debugger completed
assert "frontend_dev" in result["error"]
# Pipeline marked as failed
pipe = conn.execute("SELECT * FROM pipelines WHERE task_id='VDOL-001'").fetchone()
assert pipe["status"] == "failed"
# Task marked as blocked
task = models.get_task(conn, "VDOL-001")
assert task["status"] == "blocked"
def test_pipeline_dry_run(self, conn):
steps = [
{"role": "debugger", "brief": "find"},
{"role": "tester", "brief": "verify"},
]
result = run_pipeline(conn, "VDOL-001", steps, dry_run=True)
assert result["dry_run"] is True
assert result["success"] is True
assert result["steps_completed"] == 2
# No pipeline created in DB
pipes = conn.execute("SELECT * FROM pipelines").fetchall()
assert len(pipes) == 0
@patch("agents.runner.subprocess.run")
def test_pipeline_chains_output(self, mock_run, conn):
"""Output from step N is passed as previous_output to step N+1."""
call_count = [0]
def side_effect(*args, **kwargs):
call_count[0] += 1
if call_count[0] == 1:
return _mock_claude_success({"result": "bug is in line 42"})
return _mock_claude_success({"result": "test written"})
mock_run.side_effect = side_effect
steps = [
{"role": "debugger", "brief": "find"},
{"role": "tester", "brief": "write test"},
]
run_pipeline(conn, "VDOL-001", steps)
# Second call should include first step's output in prompt
second_call = mock_run.call_args_list[1]
prompt = second_call[0][0][2] # -p argument
assert "line 42" in prompt or "bug" in prompt
def test_pipeline_task_not_found(self, conn):
result = run_pipeline(conn, "NONEXISTENT", [{"role": "debugger"}])
assert result["success"] is False
assert "not found" in result["error"]
# ---------------------------------------------------------------------------
# JSON parsing
# ---------------------------------------------------------------------------
class TestTryParseJson:
def test_direct_json(self):
assert _try_parse_json('{"a": 1}') == {"a": 1}
def test_json_in_code_fence(self):
text = 'Some text\n```json\n{"a": 1}\n```\nMore text'
assert _try_parse_json(text) == {"a": 1}
def test_json_embedded_in_text(self):
text = 'Here is the result: {"status": "ok", "count": 42} and more'
result = _try_parse_json(text)
assert result == {"status": "ok", "count": 42}
def test_empty_string(self):
assert _try_parse_json("") is None
def test_no_json(self):
assert _try_parse_json("just plain text") is None
def test_json_array(self):
assert _try_parse_json('[1, 2, 3]') == [1, 2, 3]
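`TestTryParseJson` above pins down three fallbacks: direct parse, fenced ```json block, and JSON embedded in prose. A sketch that passes those cases (illustrative; the real `_try_parse_json` is in `agents/runner.py` and may differ in detail):

```python
import json
import re

def try_parse_json(text: str):
    """Best-effort JSON extraction: direct, code-fenced, then embedded."""
    if not text:
        return None
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # ```json ... ``` fenced block
    m = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if m:
        try:
            return json.loads(m.group(1))
        except json.JSONDecodeError:
            pass
    # first {...} or [...] span embedded in surrounding prose
    m = re.search(r"\{.*\}|\[.*\]", text, re.DOTALL)
    if m:
        try:
            return json.loads(m.group(0))
        except json.JSONDecodeError:
            return None
    return None

assert try_parse_json('Result: {"status": "ok", "count": 42} done') == {"status": "ok", "count": 42}
```

The greedy `\{.*\}` span is fine for a single embedded object, which is all the tests require.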

web/api.py (new file, +416)

@@ -0,0 +1,416 @@
"""
Kin Web API: FastAPI backend reading ~/.kin/kin.db via core.models.
Run: uvicorn web.api:app --reload --port 8420
"""
import subprocess
import sys
from pathlib import Path
# Ensure project root on sys.path
sys.path.insert(0, str(Path(__file__).parent.parent))
from fastapi import FastAPI, HTTPException, Query
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from pydantic import BaseModel
from core.db import init_db
from core import models
from agents.bootstrap import (
detect_tech_stack, detect_modules, extract_decisions_from_claude_md,
find_vault_root, scan_obsidian, save_to_db,
)
DB_PATH = Path.home() / ".kin" / "kin.db"
app = FastAPI(title="Kin API", version="0.1.0")
app.add_middleware(
CORSMiddleware,
allow_origins=["http://localhost:5173", "http://127.0.0.1:5173"],
allow_methods=["*"],
allow_headers=["*"],
)
def get_conn():
return init_db(DB_PATH)
# ---------------------------------------------------------------------------
# Projects
# ---------------------------------------------------------------------------
@app.get("/api/projects")
def list_projects(status: str | None = None):
conn = get_conn()
summary = models.get_project_summary(conn)
if status:
summary = [s for s in summary if s["status"] == status]
conn.close()
return summary
@app.get("/api/projects/{project_id}")
def get_project(project_id: str):
conn = get_conn()
p = models.get_project(conn, project_id)
if not p:
conn.close()
raise HTTPException(404, f"Project '{project_id}' not found")
tasks = models.list_tasks(conn, project_id=project_id)
mods = models.get_modules(conn, project_id)
decisions = models.get_decisions(conn, project_id)
conn.close()
return {**p, "tasks": tasks, "modules": mods, "decisions": decisions}
class ProjectCreate(BaseModel):
id: str
name: str
path: str
tech_stack: list[str] | None = None
status: str = "active"
priority: int = 5
@app.post("/api/projects")
def create_project(body: ProjectCreate):
conn = get_conn()
if models.get_project(conn, body.id):
conn.close()
raise HTTPException(409, f"Project '{body.id}' already exists")
p = models.create_project(
conn, body.id, body.name, body.path,
tech_stack=body.tech_stack, status=body.status, priority=body.priority,
)
conn.close()
return p
# ---------------------------------------------------------------------------
# Tasks
# ---------------------------------------------------------------------------
@app.get("/api/tasks/{task_id}")
def get_task(task_id: str):
conn = get_conn()
t = models.get_task(conn, task_id)
conn.close()
if not t:
raise HTTPException(404, f"Task '{task_id}' not found")
return t
class TaskCreate(BaseModel):
project_id: str
title: str
priority: int = 5
route_type: str | None = None
@app.post("/api/tasks")
def create_task(body: TaskCreate):
conn = get_conn()
p = models.get_project(conn, body.project_id)
if not p:
conn.close()
raise HTTPException(404, f"Project '{body.project_id}' not found")
# Auto-generate task ID
existing = models.list_tasks(conn, project_id=body.project_id)
prefix = body.project_id.upper()
max_num = 0
for t in existing:
if t["id"].startswith(prefix + "-"):
try:
num = int(t["id"].split("-", 1)[1])
max_num = max(max_num, num)
except ValueError:
pass
task_id = f"{prefix}-{max_num + 1:03d}"
brief = {"route_type": body.route_type} if body.route_type else None
t = models.create_task(conn, task_id, body.project_id, body.title,
priority=body.priority, brief=brief)
conn.close()
return t
@app.get("/api/tasks/{task_id}/pipeline")
def get_task_pipeline(task_id: str):
"""Get agent_logs for a task (pipeline steps)."""
conn = get_conn()
t = models.get_task(conn, task_id)
if not t:
conn.close()
raise HTTPException(404, f"Task '{task_id}' not found")
rows = conn.execute(
"""SELECT id, agent_role, action, output_summary, success,
duration_seconds, tokens_used, model, cost_usd, created_at
FROM agent_logs WHERE task_id = ? ORDER BY created_at""",
(task_id,),
).fetchall()
steps = [dict(r) for r in rows]
conn.close()
return steps
@app.get("/api/tasks/{task_id}/full")
def get_task_full(task_id: str):
"""Task + pipeline steps + related decisions."""
conn = get_conn()
t = models.get_task(conn, task_id)
if not t:
conn.close()
raise HTTPException(404, f"Task '{task_id}' not found")
rows = conn.execute(
"""SELECT id, agent_role, action, output_summary, success,
duration_seconds, tokens_used, model, cost_usd, created_at
FROM agent_logs WHERE task_id = ? ORDER BY created_at""",
(task_id,),
).fetchall()
steps = [dict(r) for r in rows]
decisions = models.get_decisions(conn, t["project_id"])
# Filter to decisions linked to this task
task_decisions = [d for d in decisions if d.get("task_id") == task_id]
conn.close()
return {**t, "pipeline_steps": steps, "related_decisions": task_decisions}
class TaskApprove(BaseModel):
decision_title: str | None = None
decision_description: str | None = None
decision_type: str = "decision"
create_followups: bool = False
@app.post("/api/tasks/{task_id}/approve")
def approve_task(task_id: str, body: TaskApprove | None = None):
"""Approve a task: set status=done, optionally add decision and create follow-ups."""
from core.followup import generate_followups
conn = get_conn()
t = models.get_task(conn, task_id)
if not t:
conn.close()
raise HTTPException(404, f"Task '{task_id}' not found")
models.update_task(conn, task_id, status="done")
decision = None
if body and body.decision_title:
decision = models.add_decision(
conn, t["project_id"], body.decision_type,
body.decision_title, body.decision_description or body.decision_title,
task_id=task_id,
)
followup_tasks = []
pending_actions = []
if body and body.create_followups:
result = generate_followups(conn, task_id)
followup_tasks = result["created"]
pending_actions = result["pending_actions"]
conn.close()
return {
"status": "done",
"decision": decision,
"followup_tasks": followup_tasks,
"needs_decision": len(pending_actions) > 0,
"pending_actions": pending_actions,
}
class ResolveAction(BaseModel):
action: dict
choice: str # "rerun" | "manual_task" | "skip"
@app.post("/api/tasks/{task_id}/resolve")
def resolve_action(task_id: str, body: ResolveAction):
"""Resolve a pending permission action from follow-up generation."""
from core.followup import resolve_pending_action
if body.choice not in ("rerun", "manual_task", "skip"):
raise HTTPException(400, f"Invalid choice: {body.choice}")
conn = get_conn()
t = models.get_task(conn, task_id)
if not t:
conn.close()
raise HTTPException(404, f"Task '{task_id}' not found")
result = resolve_pending_action(conn, task_id, body.action, body.choice)
conn.close()
return {"choice": body.choice, "result": result}
class TaskReject(BaseModel):
reason: str
@app.post("/api/tasks/{task_id}/reject")
def reject_task(task_id: str, body: TaskReject):
"""Reject a task: set status=pending with reason in review field."""
conn = get_conn()
t = models.get_task(conn, task_id)
if not t:
conn.close()
raise HTTPException(404, f"Task '{task_id}' not found")
models.update_task(conn, task_id, status="pending", review={"rejected": body.reason})
conn.close()
return {"status": "pending", "reason": body.reason}

@app.get("/api/tasks/{task_id}/running")
def is_task_running(task_id: str):
    """Check if the task has an active (running) pipeline."""
    conn = get_conn()
    t = models.get_task(conn, task_id)
    if not t:
        conn.close()
        raise HTTPException(404, f"Task '{task_id}' not found")
    row = conn.execute(
        "SELECT id, status FROM pipelines WHERE task_id = ? ORDER BY created_at DESC LIMIT 1",
        (task_id,),
    ).fetchone()
    conn.close()
    if row and row["status"] == "running":
        return {"running": True, "pipeline_id": row["id"]}
    return {"running": False}
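
The endpoint deliberately inspects only the most recent pipeline row for the task, so an old failed run can never be mistaken for a live one, and an old running row can't mask a newer finished run. A self-contained sketch of that query pattern against an in-memory table (schema columns assumed from the query above):

```python
import sqlite3

# Minimal in-memory stand-in for the `pipelines` table.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE pipelines (id TEXT, task_id TEXT, status TEXT, created_at TEXT)")
conn.executemany(
    "INSERT INTO pipelines VALUES (?, ?, ?, ?)",
    [
        ("p1", "T-1", "failed", "2026-03-15 10:00"),   # older run
        ("p2", "T-1", "running", "2026-03-15 11:00"),  # latest run
    ],
)

def is_running(conn, task_id):
    """Only the newest pipeline row (ORDER BY created_at DESC LIMIT 1)
    decides whether the task counts as running."""
    row = conn.execute(
        "SELECT id, status FROM pipelines WHERE task_id = ? "
        "ORDER BY created_at DESC LIMIT 1",
        (task_id,),
    ).fetchone()
    if row and row["status"] == "running":
        return {"running": True, "pipeline_id": row["id"]}
    return {"running": False}
```

With the rows above, `is_running(conn, "T-1")` reports the newer `p2` as running even though an earlier run failed.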

@app.post("/api/tasks/{task_id}/run")
def run_task(task_id: str):
    """Launch the pipeline for a task in the background. Returns 202."""
    conn = get_conn()
    t = models.get_task(conn, task_id)
    if not t:
        conn.close()
        raise HTTPException(404, f"Task '{task_id}' not found")
    # Set task to in_progress immediately so the UI updates
    models.update_task(conn, task_id, status="in_progress")
    conn.close()
    # Launch `kin run` in a background subprocess
    kin_root = Path(__file__).parent.parent
    try:
        proc = subprocess.Popen(
            [sys.executable, "-m", "cli.main", "--db", str(DB_PATH),
             "run", task_id],
            cwd=str(kin_root),
            stdout=subprocess.DEVNULL,
        )
        import logging
        logging.getLogger("kin").info(f"Pipeline started for {task_id}, pid={proc.pid}")
    except Exception as e:
        raise HTTPException(500, f"Failed to start pipeline: {e}")
    return JSONResponse({"status": "started", "task_id": task_id}, status_code=202)
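
Because `/run` returns 202 immediately, a client has to poll `/running` to learn when the pipeline actually finishes; this mirrors the frontend's 3-second polling loop. A sketch of that flow with injected transports so it can be tested without a live server (the `run_and_wait`, `post`, and `get` names are hypothetical helpers, not part of the codebase):

```python
import time

def run_and_wait(task_id, post, get, poll_interval=3.0, timeout=60.0):
    """Start a task via POST /api/tasks/{id}/run, then poll
    GET /api/tasks/{id}/running until the pipeline stops or the
    timeout elapses. `post` and `get` are injected callables
    (e.g. thin wrappers around an HTTP client)."""
    post(f"/api/tasks/{task_id}/run")
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not get(f"/api/tasks/{task_id}/running")["running"]:
            return True  # pipeline finished
        time.sleep(poll_interval)
    return False  # still running at timeout
```

Injecting the transport keeps the control flow (start, poll, stop) testable with stubs, the same way the frontend's timer logic is exercised independently of the network.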
# ---------------------------------------------------------------------------
# Decisions
# ---------------------------------------------------------------------------
@app.get("/api/decisions")
def list_decisions(
    project: str = Query(...),
    category: str | None = None,
    tag: list[str] | None = Query(None),
    type: list[str] | None = Query(None),
):
    conn = get_conn()
    decisions = models.get_decisions(
        conn, project, category=category, tags=tag, types=type,
    )
    conn.close()
    return decisions

class DecisionCreate(BaseModel):
    project_id: str
    type: str
    title: str
    description: str
    category: str | None = None
    tags: list[str] | None = None
    task_id: str | None = None


@app.post("/api/decisions")
def create_decision(body: DecisionCreate):
    conn = get_conn()
    p = models.get_project(conn, body.project_id)
    if not p:
        conn.close()
        raise HTTPException(404, f"Project '{body.project_id}' not found")
    d = models.add_decision(
        conn, body.project_id, body.type, body.title, body.description,
        category=body.category, tags=body.tags, task_id=body.task_id,
    )
    conn.close()
    return d
# ---------------------------------------------------------------------------
# Cost
# ---------------------------------------------------------------------------
@app.get("/api/cost")
def cost_summary(days: int = 7):
    conn = get_conn()
    costs = models.get_cost_summary(conn, days=days)
    conn.close()
    return costs
# ---------------------------------------------------------------------------
# Support
# ---------------------------------------------------------------------------
@app.get("/api/support/tickets")
def list_tickets(project: str | None = None, status: str | None = None):
    conn = get_conn()
    tickets = models.list_tickets(conn, project_id=project, status=status)
    conn.close()
    return tickets
# ---------------------------------------------------------------------------
# Bootstrap
# ---------------------------------------------------------------------------
class BootstrapRequest(BaseModel):
    path: str
    id: str
    name: str
    vault_path: str | None = None


@app.post("/api/bootstrap")
def bootstrap(body: BootstrapRequest):
    project_path = Path(body.path).expanduser().resolve()
    if not project_path.is_dir():
        raise HTTPException(400, f"Path '{body.path}' is not a directory")
    conn = get_conn()
    if models.get_project(conn, body.id):
        conn.close()
        raise HTTPException(409, f"Project '{body.id}' already exists")
    tech_stack = detect_tech_stack(project_path)
    modules = detect_modules(project_path)
    decisions = extract_decisions_from_claude_md(project_path, body.id, body.name)
    obsidian = None
    vault_root = find_vault_root(Path(body.vault_path) if body.vault_path else None)
    if vault_root:
        dir_name = project_path.name
        obs = scan_obsidian(vault_root, body.id, body.name, dir_name)
        if obs["tasks"] or obs["decisions"]:
            obsidian = obs
    save_to_db(conn, body.id, body.name, str(project_path),
               tech_stack, modules, decisions, obsidian)
    p = models.get_project(conn, body.id)
    conn.close()
    return {
        "project": p,
        "modules_count": len(modules),
        "decisions_count": len(decisions) + len((obsidian or {}).get("decisions", [])),
        "tasks_count": len((obsidian or {}).get("tasks", [])),
    }

web/frontend/.gitignore vendored Normal file
@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
dist
dist-ssr
*.local

# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

web/frontend/index.html Normal file
@@ -0,0 +1,12 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Kin</title>
  </head>
  <body>
    <div id="app"></div>
    <script type="module" src="/src/main.ts"></script>
  </body>
</html>

web/frontend/package-lock.json generated Normal file
File diff suppressed because it is too large.

web/frontend/package.json Normal file
@@ -0,0 +1,26 @@
{
  "name": "frontend",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "vue-tsc -b && vite build",
    "preview": "vite preview"
  },
  "dependencies": {
    "vue": "^3.5.30",
    "vue-router": "^4.6.4"
  },
  "devDependencies": {
    "@types/node": "^24.12.0",
    "@vitejs/plugin-vue": "^6.0.5",
    "@vue/tsconfig": "^0.9.0",
    "autoprefixer": "^10.4.27",
    "postcss": "^8.5.8",
    "tailwindcss": "^3.4.19",
    "typescript": "~5.9.3",
    "vite": "^8.0.0",
    "vue-tsc": "^3.2.5"
  }
}

web/frontend/postcss.config.js Normal file
@@ -0,0 +1,6 @@
export default {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
}

web/frontend/src/App.vue Normal file
@@ -0,0 +1,16 @@
<script setup lang="ts">
</script>

<template>
  <div class="min-h-screen">
    <header class="border-b border-gray-800 px-6 py-4 flex items-center justify-between">
      <router-link to="/" class="text-lg font-bold text-gray-100 hover:text-white no-underline">
        Kin
      </router-link>
      <span class="text-xs text-gray-600">multi-agent orchestrator</span>
    </header>
    <main class="max-w-6xl mx-auto px-6 py-6">
      <router-view />
    </main>
  </div>
</template>

web/frontend/src/api.ts Normal file
@@ -0,0 +1,132 @@
const BASE = 'http://localhost:8420/api'

async function get<T>(path: string): Promise<T> {
  const res = await fetch(`${BASE}${path}`)
  if (!res.ok) throw new Error(`${res.status} ${res.statusText}`)
  return res.json()
}

async function post<T>(path: string, body: unknown): Promise<T> {
  const res = await fetch(`${BASE}${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  })
  if (!res.ok) throw new Error(`${res.status} ${res.statusText}`)
  return res.json()
}

export interface Project {
  id: string
  name: string
  path: string
  status: string
  priority: number
  tech_stack: string[] | null
  created_at: string
  total_tasks: number
  done_tasks: number
  active_tasks: number
  blocked_tasks: number
  review_tasks: number
}

export interface ProjectDetail extends Project {
  tasks: Task[]
  modules: Module[]
  decisions: Decision[]
}

export interface Task {
  id: string
  project_id: string
  title: string
  status: string
  priority: number
  assigned_role: string | null
  parent_task_id: string | null
  brief: Record<string, unknown> | null
  spec: Record<string, unknown> | null
  created_at: string
  updated_at: string
}

export interface Decision {
  id: number
  project_id: string
  task_id: string | null
  type: string
  category: string | null
  title: string
  description: string
  tags: string[] | null
  created_at: string
}

export interface Module {
  id: number
  project_id: string
  name: string
  type: string
  path: string
  description: string | null
  owner_role: string | null
  dependencies: string[] | null
}

export interface PipelineStep {
  id: number
  agent_role: string
  action: string
  output_summary: string | null
  success: boolean | number
  duration_seconds: number | null
  tokens_used: number | null
  model: string | null
  cost_usd: number | null
  created_at: string
}

export interface TaskFull extends Task {
  pipeline_steps: PipelineStep[]
  related_decisions: Decision[]
}

export interface PendingAction {
  type: string
  description: string
  original_item: Record<string, unknown>
  options: string[]
}

export interface CostEntry {
  project_id: string
  project_name: string
  runs: number
  total_tokens: number
  total_cost_usd: number
  total_duration_seconds: number
}

export const api = {
  projects: () => get<Project[]>('/projects'),
  project: (id: string) => get<ProjectDetail>(`/projects/${id}`),
  task: (id: string) => get<Task>(`/tasks/${id}`),
  taskFull: (id: string) => get<TaskFull>(`/tasks/${id}/full`),
  taskPipeline: (id: string) => get<PipelineStep[]>(`/tasks/${id}/pipeline`),
  cost: (days = 7) => get<CostEntry[]>(`/cost?days=${days}`),
  createProject: (data: { id: string; name: string; path: string; tech_stack?: string[]; priority?: number }) =>
    post<Project>('/projects', data),
  createTask: (data: { project_id: string; title: string; priority?: number; route_type?: string }) =>
    post<Task>('/tasks', data),
  approveTask: (id: string, data?: { decision_title?: string; decision_description?: string; decision_type?: string; create_followups?: boolean }) =>
    post<{ status: string; followup_tasks: Task[]; needs_decision: boolean; pending_actions: PendingAction[] }>(`/tasks/${id}/approve`, data || {}),
  resolveAction: (id: string, action: PendingAction, choice: string) =>
    post<{ choice: string; result: unknown }>(`/tasks/${id}/resolve`, { action, choice }),
  rejectTask: (id: string, reason: string) =>
    post<{ status: string }>(`/tasks/${id}/reject`, { reason }),
  runTask: (id: string) =>
    post<{ status: string }>(`/tasks/${id}/run`, {}),
  bootstrap: (data: { path: string; id: string; name: string }) =>
    post<{ project: Project }>('/bootstrap', data),
}

web/frontend/src/components/Badge.vue Normal file
@@ -0,0 +1,19 @@
<script setup lang="ts">
defineProps<{ color?: string; text: string }>()

const colors: Record<string, string> = {
  green: 'bg-green-900/50 text-green-400 border-green-800',
  blue: 'bg-blue-900/50 text-blue-400 border-blue-800',
  red: 'bg-red-900/50 text-red-400 border-red-800',
  yellow: 'bg-yellow-900/50 text-yellow-400 border-yellow-800',
  gray: 'bg-gray-800/50 text-gray-400 border-gray-700',
  purple: 'bg-purple-900/50 text-purple-400 border-purple-800',
  orange: 'bg-orange-900/50 text-orange-400 border-orange-800',
}
</script>

<template>
  <span class="text-xs px-2 py-0.5 rounded border" :class="colors[color || 'gray']">
    {{ text }}
  </span>
</template>

web/frontend/src/components/Modal.vue Normal file
@@ -0,0 +1,18 @@
<script setup lang="ts">
defineProps<{ title: string }>()
const emit = defineEmits<{ close: [] }>()
</script>

<template>
  <div class="fixed inset-0 z-50 flex items-center justify-center bg-black/60" @click.self="emit('close')">
    <div class="bg-gray-900 border border-gray-700 rounded-lg w-full max-w-lg mx-4 shadow-2xl">
      <div class="flex items-center justify-between px-5 py-3 border-b border-gray-800">
        <h3 class="text-sm font-semibold text-gray-200">{{ title }}</h3>
        <button @click="emit('close')" class="text-gray-500 hover:text-gray-300 text-lg leading-none">&times;</button>
      </div>
      <div class="px-5 py-4">
        <slot />
      </div>
    </div>
  </div>
</template>

web/frontend/src/main.ts Normal file
@@ -0,0 +1,18 @@
import { createApp } from 'vue'
import { createRouter, createWebHistory } from 'vue-router'
import './style.css'
import App from './App.vue'
import Dashboard from './views/Dashboard.vue'
import ProjectView from './views/ProjectView.vue'
import TaskDetail from './views/TaskDetail.vue'

const router = createRouter({
  history: createWebHistory(),
  routes: [
    { path: '/', component: Dashboard },
    { path: '/project/:id', component: ProjectView, props: true },
    { path: '/task/:id', component: TaskDetail, props: true },
  ],
})

createApp(App).use(router).mount('#app')

web/frontend/src/style.css Normal file
@@ -0,0 +1,8 @@
@tailwind base;
@tailwind components;
@tailwind utilities;

body {
  @apply bg-gray-950 text-gray-100;
  font-family: ui-monospace, 'SF Mono', Consolas, monospace;
}

web/frontend/src/views/Dashboard.vue Normal file
@@ -0,0 +1,187 @@
<script setup lang="ts">
import { ref, onMounted, onUnmounted, computed } from 'vue'
import { api, type Project, type CostEntry } from '../api'
import Badge from '../components/Badge.vue'
import Modal from '../components/Modal.vue'

const projects = ref<Project[]>([])
const costs = ref<CostEntry[]>([])
const loading = ref(true)
const error = ref('')

// Add project modal
const showAdd = ref(false)
const form = ref({ id: '', name: '', path: '', tech_stack: '', priority: 5 })
const formError = ref('')

// Bootstrap modal
const showBootstrap = ref(false)
const bsForm = ref({ id: '', name: '', path: '' })
const bsError = ref('')
const bsResult = ref('')

async function load() {
  try {
    loading.value = true
    ;[projects.value, costs.value] = await Promise.all([api.projects(), api.cost(7)])
  } catch (e: any) {
    error.value = e.message
  } finally {
    loading.value = false
  }
}

let dashPollTimer: ReturnType<typeof setInterval> | null = null

onMounted(async () => {
  await load()
  // Poll if there are running tasks
  checkAndPoll()
})

function checkAndPoll() {
  const hasRunning = projects.value.some(p => p.active_tasks > 0)
  if (hasRunning && !dashPollTimer) {
    // Re-check after every poll so the timer actually stops
    // once no project has active tasks anymore
    dashPollTimer = setInterval(async () => {
      await load()
      checkAndPoll()
    }, 5000)
  } else if (!hasRunning && dashPollTimer) {
    clearInterval(dashPollTimer)
    dashPollTimer = null
  }
}

onUnmounted(() => {
  // Cleanup: don't leak the poll timer across navigations
  if (dashPollTimer) clearInterval(dashPollTimer)
})

const costMap = computed(() => {
  const m: Record<string, number> = {}
  for (const c of costs.value) m[c.project_id] = c.total_cost_usd
  return m
})

const totalCost = computed(() => costs.value.reduce((s, c) => s + c.total_cost_usd, 0))

function statusColor(s: string) {
  if (s === 'active') return 'green'
  if (s === 'paused') return 'yellow'
  if (s === 'maintenance') return 'orange'
  return 'gray'
}

async function addProject() {
  formError.value = ''
  try {
    const ts = form.value.tech_stack ? form.value.tech_stack.split(',').map(s => s.trim()).filter(Boolean) : undefined
    await api.createProject({ ...form.value, tech_stack: ts, priority: form.value.priority })
    showAdd.value = false
    form.value = { id: '', name: '', path: '', tech_stack: '', priority: 5 }
    await load()
  } catch (e: any) {
    formError.value = e.message
  }
}

async function runBootstrap() {
  bsError.value = ''
  bsResult.value = ''
  try {
    const res = await api.bootstrap(bsForm.value)
    bsResult.value = `Created: ${res.project.id} (${res.project.name})`
    await load()
  } catch (e: any) {
    bsError.value = e.message
  }
}
</script>
<template>
  <div>
    <div class="flex items-center justify-between mb-6">
      <div>
        <h1 class="text-xl font-bold text-gray-100">Dashboard</h1>
        <p class="text-sm text-gray-500" v-if="totalCost > 0">Cost this week: ${{ totalCost.toFixed(2) }}</p>
      </div>
      <div class="flex gap-2">
        <button @click="showBootstrap = true"
                class="px-3 py-1.5 text-xs bg-purple-900/50 text-purple-400 border border-purple-800 rounded hover:bg-purple-900">
          Bootstrap
        </button>
        <button @click="showAdd = true"
                class="px-3 py-1.5 text-xs bg-gray-800 text-gray-300 border border-gray-700 rounded hover:bg-gray-700">
          + Project
        </button>
      </div>
    </div>

    <p v-if="loading" class="text-gray-500 text-sm">Loading...</p>
    <p v-else-if="error" class="text-red-400 text-sm">{{ error }}</p>

    <div v-else class="grid gap-3">
      <router-link
        v-for="p in projects" :key="p.id"
        :to="`/project/${p.id}`"
        class="block border border-gray-800 rounded-lg p-4 hover:border-gray-600 transition-colors no-underline"
      >
        <div class="flex items-center justify-between mb-2">
          <div class="flex items-center gap-2">
            <span class="text-sm font-semibold text-gray-200">{{ p.id }}</span>
            <Badge :text="p.status" :color="statusColor(p.status)" />
            <span class="text-sm text-gray-400">{{ p.name }}</span>
          </div>
          <div class="flex items-center gap-3 text-xs text-gray-500">
            <span v-if="costMap[p.id]">${{ costMap[p.id]?.toFixed(2) }}/wk</span>
            <span>pri {{ p.priority }}</span>
          </div>
        </div>
        <div class="flex gap-4 text-xs">
          <span class="text-gray-500">{{ p.total_tasks }} tasks</span>
          <span v-if="p.active_tasks" class="text-blue-400">
            <span class="inline-block w-1.5 h-1.5 bg-blue-500 rounded-full animate-pulse mr-0.5"></span>
            {{ p.active_tasks }} active
          </span>
          <span v-if="p.review_tasks" class="text-yellow-400">{{ p.review_tasks }} awaiting review</span>
          <span v-if="p.blocked_tasks" class="text-red-400">{{ p.blocked_tasks }} blocked</span>
          <span v-if="p.done_tasks" class="text-green-500">{{ p.done_tasks }} done</span>
          <span v-if="p.total_tasks - p.done_tasks - p.active_tasks - p.blocked_tasks - (p.review_tasks || 0) > 0" class="text-gray-500">
            {{ p.total_tasks - p.done_tasks - p.active_tasks - p.blocked_tasks - (p.review_tasks || 0) }} pending
          </span>
        </div>
      </router-link>
    </div>

    <!-- Add Project Modal -->
    <Modal v-if="showAdd" title="Add Project" @close="showAdd = false">
      <form @submit.prevent="addProject" class="space-y-3">
        <input v-model="form.id" placeholder="ID (e.g. vdol)" required
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <input v-model="form.name" placeholder="Name" required
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <input v-model="form.path" placeholder="Path (e.g. ~/projects/myproj)" required
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <input v-model="form.tech_stack" placeholder="Tech stack (comma-separated)"
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <input v-model.number="form.priority" type="number" min="1" max="10" placeholder="Priority (1-10)"
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <p v-if="formError" class="text-red-400 text-xs">{{ formError }}</p>
        <button type="submit"
                class="w-full py-2 bg-blue-900/50 text-blue-400 border border-blue-800 rounded text-sm hover:bg-blue-900">
          Create
        </button>
      </form>
    </Modal>

    <!-- Bootstrap Modal -->
    <Modal v-if="showBootstrap" title="Bootstrap Project" @close="showBootstrap = false">
      <form @submit.prevent="runBootstrap" class="space-y-3">
        <input v-model="bsForm.path" placeholder="Project path (e.g. ~/projects/vdolipoperek)" required
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <input v-model="bsForm.id" placeholder="ID (e.g. vdol)" required
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <input v-model="bsForm.name" placeholder="Name" required
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <p v-if="bsError" class="text-red-400 text-xs">{{ bsError }}</p>
        <p v-if="bsResult" class="text-green-400 text-xs">{{ bsResult }}</p>
        <button type="submit"
                class="w-full py-2 bg-purple-900/50 text-purple-400 border border-purple-800 rounded text-sm hover:bg-purple-900">
          Bootstrap
        </button>
      </form>
    </Modal>
  </div>
</template>

web/frontend/src/views/ProjectView.vue Normal file
@@ -0,0 +1,332 @@
<script setup lang="ts">
import { ref, onMounted, computed } from 'vue'
import { api, type ProjectDetail } from '../api'
import Badge from '../components/Badge.vue'
import Modal from '../components/Modal.vue'

const props = defineProps<{ id: string }>()
const project = ref<ProjectDetail | null>(null)
const loading = ref(true)
const error = ref('')
const activeTab = ref<'tasks' | 'decisions' | 'modules'>('tasks')

// Filters
const taskStatusFilter = ref('')
const decisionTypeFilter = ref('')
const decisionSearch = ref('')

// Add task modal
const showAddTask = ref(false)
const taskForm = ref({ title: '', priority: 5, route_type: '' })
const taskFormError = ref('')

// Add decision modal
const showAddDecision = ref(false)
const decForm = ref({ type: 'decision', title: '', description: '', category: '', tags: '' })
const decFormError = ref('')

async function load() {
  try {
    loading.value = true
    project.value = await api.project(props.id)
  } catch (e: any) {
    error.value = e.message
  } finally {
    loading.value = false
  }
}

onMounted(load)

const filteredTasks = computed(() => {
  if (!project.value) return []
  let tasks = project.value.tasks
  if (taskStatusFilter.value) tasks = tasks.filter(t => t.status === taskStatusFilter.value)
  return tasks
})

const filteredDecisions = computed(() => {
  if (!project.value) return []
  let decs = project.value.decisions
  if (decisionTypeFilter.value) decs = decs.filter(d => d.type === decisionTypeFilter.value)
  if (decisionSearch.value) {
    const q = decisionSearch.value.toLowerCase()
    decs = decs.filter(d => d.title.toLowerCase().includes(q) || d.description.toLowerCase().includes(q))
  }
  return decs
})

function taskStatusColor(s: string) {
  const m: Record<string, string> = {
    pending: 'gray', in_progress: 'blue', review: 'purple',
    done: 'green', blocked: 'red', decomposed: 'yellow',
  }
  return m[s] || 'gray'
}

function decTypeColor(t: string) {
  const m: Record<string, string> = {
    decision: 'blue', gotcha: 'red', workaround: 'yellow',
    rejected_approach: 'gray', convention: 'purple',
  }
  return m[t] || 'gray'
}

function modTypeColor(t: string) {
  const m: Record<string, string> = {
    frontend: 'blue', backend: 'green', shared: 'purple', infra: 'orange',
  }
  return m[t] || 'gray'
}

const taskStatuses = computed(() => {
  if (!project.value) return []
  const s = new Set(project.value.tasks.map(t => t.status))
  return Array.from(s).sort()
})

const decTypes = computed(() => {
  if (!project.value) return []
  const s = new Set(project.value.decisions.map(d => d.type))
  return Array.from(s).sort()
})

async function addTask() {
  taskFormError.value = ''
  try {
    await api.createTask({
      project_id: props.id,
      title: taskForm.value.title,
      priority: taskForm.value.priority,
      route_type: taskForm.value.route_type || undefined,
    })
    showAddTask.value = false
    taskForm.value = { title: '', priority: 5, route_type: '' }
    await load()
  } catch (e: any) {
    taskFormError.value = e.message
  }
}

async function runTask(taskId: string, event: Event) {
  event.preventDefault()
  event.stopPropagation()
  if (!confirm(`Run pipeline for ${taskId}?`)) return
  try {
    await api.runTask(taskId)
    await load()
  } catch (e: any) {
    error.value = e.message
  }
}

async function addDecision() {
  decFormError.value = ''
  try {
    const tags = decForm.value.tags ? decForm.value.tags.split(',').map(s => s.trim()).filter(Boolean) : undefined
    const body = {
      project_id: props.id,
      type: decForm.value.type,
      title: decForm.value.title,
      description: decForm.value.description,
      category: decForm.value.category || undefined,
      tags,
    }
    const res = await fetch('http://localhost:8420/api/decisions', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    })
    // Surface the actual status instead of a bare 'Failed'
    if (!res.ok) throw new Error(`${res.status} ${res.statusText}`)
    showAddDecision.value = false
    decForm.value = { type: 'decision', title: '', description: '', category: '', tags: '' }
    await load()
  } catch (e: any) {
    decFormError.value = e.message
  }
}
</script>
<template>
  <div v-if="loading" class="text-gray-500 text-sm">Loading...</div>
  <div v-else-if="error" class="text-red-400 text-sm">{{ error }}</div>
  <div v-else-if="project">
    <!-- Header -->
    <div class="mb-6">
      <div class="flex items-center gap-2 mb-1">
        <router-link to="/" class="text-gray-600 hover:text-gray-400 text-sm no-underline">&larr; back</router-link>
      </div>
      <div class="flex items-center gap-3 mb-2">
        <h1 class="text-xl font-bold text-gray-100">{{ project.id }}</h1>
        <span class="text-gray-400">{{ project.name }}</span>
        <Badge :text="project.status" :color="project.status === 'active' ? 'green' : 'gray'" />
      </div>
      <div class="flex gap-2 flex-wrap mb-2" v-if="project.tech_stack?.length">
        <Badge v-for="t in project.tech_stack" :key="t" :text="t" color="purple" />
      </div>
      <p class="text-xs text-gray-600">{{ project.path }}</p>
    </div>

    <!-- Tabs -->
    <div class="flex gap-1 mb-4 border-b border-gray-800">
      <button v-for="tab in (['tasks', 'decisions', 'modules'] as const)" :key="tab"
              @click="activeTab = tab"
              class="px-4 py-2 text-sm border-b-2 transition-colors"
              :class="activeTab === tab
                ? 'text-gray-200 border-blue-500'
                : 'text-gray-500 border-transparent hover:text-gray-300'">
        {{ tab.charAt(0).toUpperCase() + tab.slice(1) }}
        <span class="text-xs text-gray-600 ml-1">
          {{ tab === 'tasks' ? project.tasks.length
            : tab === 'decisions' ? project.decisions.length
            : project.modules.length }}
        </span>
      </button>
    </div>

    <!-- Tasks Tab -->
    <div v-if="activeTab === 'tasks'">
      <div class="flex items-center justify-between mb-3">
        <div class="flex gap-2">
          <select v-model="taskStatusFilter"
                  class="bg-gray-800 border border-gray-700 rounded px-2 py-1 text-xs text-gray-300">
            <option value="">All statuses</option>
            <option v-for="s in taskStatuses" :key="s" :value="s">{{ s }}</option>
          </select>
        </div>
        <button @click="showAddTask = true"
                class="px-3 py-1 text-xs bg-gray-800 text-gray-300 border border-gray-700 rounded hover:bg-gray-700">
          + Task
        </button>
      </div>
      <div v-if="filteredTasks.length === 0" class="text-gray-600 text-sm">No tasks.</div>
      <div v-else class="space-y-1">
        <router-link v-for="t in filteredTasks" :key="t.id"
                     :to="`/task/${t.id}`"
                     class="flex items-center justify-between px-3 py-2 border border-gray-800 rounded text-sm hover:border-gray-600 no-underline block transition-colors">
          <div class="flex items-center gap-2 min-w-0">
            <span class="text-gray-500 shrink-0 w-24">{{ t.id }}</span>
            <Badge :text="t.status" :color="taskStatusColor(t.status)" />
            <span class="text-gray-300 truncate">{{ t.title }}</span>
            <span v-if="t.parent_task_id" class="text-[10px] text-gray-600 shrink-0">from {{ t.parent_task_id }}</span>
          </div>
          <div class="flex items-center gap-2 text-xs text-gray-600 shrink-0">
            <span v-if="t.assigned_role">{{ t.assigned_role }}</span>
            <span>pri {{ t.priority }}</span>
            <button v-if="t.status === 'pending'"
                    @click="runTask(t.id, $event)"
                    class="px-2 py-0.5 bg-blue-900/40 text-blue-400 border border-blue-800 rounded hover:bg-blue-900 text-[10px]"
                    title="Run pipeline">&#9654;</button>
            <span v-if="t.status === 'in_progress'"
                  class="inline-block w-2 h-2 bg-blue-500 rounded-full animate-pulse" title="Running"></span>
          </div>
        </router-link>
      </div>
    </div>

    <!-- Decisions Tab -->
    <div v-if="activeTab === 'decisions'">
      <div class="flex items-center justify-between mb-3">
        <div class="flex gap-2">
          <select v-model="decisionTypeFilter"
                  class="bg-gray-800 border border-gray-700 rounded px-2 py-1 text-xs text-gray-300">
            <option value="">All types</option>
            <option v-for="t in decTypes" :key="t" :value="t">{{ t }}</option>
          </select>
          <input v-model="decisionSearch" placeholder="Search..."
                 class="bg-gray-800 border border-gray-700 rounded px-2 py-1 text-xs text-gray-300 placeholder-gray-600 w-48" />
        </div>
        <button @click="showAddDecision = true"
                class="px-3 py-1 text-xs bg-gray-800 text-gray-300 border border-gray-700 rounded hover:bg-gray-700">
          + Decision
        </button>
      </div>
      <div v-if="filteredDecisions.length === 0" class="text-gray-600 text-sm">No decisions.</div>
      <div v-else class="space-y-2">
        <div v-for="d in filteredDecisions" :key="d.id"
             class="px-3 py-2 border border-gray-800 rounded hover:border-gray-700">
          <div class="flex items-center gap-2 mb-1">
            <span class="text-gray-600 text-xs">#{{ d.id }}</span>
            <Badge :text="d.type" :color="decTypeColor(d.type)" />
            <Badge v-if="d.category" :text="d.category" color="gray" />
          </div>
          <div class="text-sm text-gray-300">{{ d.title }}</div>
          <div v-if="d.description !== d.title" class="text-xs text-gray-500 mt-1">{{ d.description }}</div>
          <div v-if="d.tags?.length" class="flex gap-1 mt-1">
            <Badge v-for="tag in d.tags" :key="tag" :text="tag" color="purple" />
          </div>
        </div>
      </div>
    </div>

    <!-- Modules Tab -->
    <div v-if="activeTab === 'modules'">
      <div v-if="project.modules.length === 0" class="text-gray-600 text-sm">No modules.</div>
      <div v-else class="space-y-1">
        <div v-for="m in project.modules" :key="m.id"
             class="flex items-center justify-between px-3 py-2 border border-gray-800 rounded text-sm hover:border-gray-700">
          <div class="flex items-center gap-2">
            <span class="text-gray-300 font-medium">{{ m.name }}</span>
            <Badge :text="m.type" :color="modTypeColor(m.type)" />
          </div>
          <div class="flex items-center gap-3 text-xs text-gray-600">
            <span>{{ m.path }}</span>
            <span v-if="m.owner_role">{{ m.owner_role }}</span>
            <span v-if="m.description">{{ m.description }}</span>
          </div>
        </div>
      </div>
    </div>

    <!-- Add Task Modal -->
    <Modal v-if="showAddTask" title="Add Task" @close="showAddTask = false">
      <form @submit.prevent="addTask" class="space-y-3">
        <input v-model="taskForm.title" placeholder="Task title" required
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <select v-model="taskForm.route_type"
                class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-300">
          <option value="">No type</option>
          <option value="debug">debug</option>
          <option value="feature">feature</option>
          <option value="refactor">refactor</option>
          <option value="hotfix">hotfix</option>
        </select>
        <input v-model.number="taskForm.priority" type="number" min="1" max="10" placeholder="Priority"
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <p v-if="taskFormError" class="text-red-400 text-xs">{{ taskFormError }}</p>
        <button type="submit"
                class="w-full py-2 bg-blue-900/50 text-blue-400 border border-blue-800 rounded text-sm hover:bg-blue-900">
          Create
        </button>
      </form>
    </Modal>

    <!-- Add Decision Modal -->
    <Modal v-if="showAddDecision" title="Add Decision" @close="showAddDecision = false">
      <form @submit.prevent="addDecision" class="space-y-3">
        <select v-model="decForm.type" required
                class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-300">
          <option value="decision">decision</option>
          <option value="gotcha">gotcha</option>
          <option value="workaround">workaround</option>
          <option value="convention">convention</option>
          <option value="rejected_approach">rejected_approach</option>
        </select>
        <input v-model="decForm.title" placeholder="Title" required
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <textarea v-model="decForm.description" placeholder="Description" rows="3" required
                  class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600 resize-y"></textarea>
        <input v-model="decForm.category" placeholder="Category (e.g. ui, api, security)"
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <input v-model="decForm.tags" placeholder="Tags (comma-separated)"
               class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
        <p v-if="decFormError" class="text-red-400 text-xs">{{ decFormError }}</p>
        <button type="submit"
                class="w-full py-2 bg-blue-900/50 text-blue-400 border border-blue-800 rounded text-sm hover:bg-blue-900">
          Create
        </button>
      </form>
    </Modal>
  </div>
</template>

web/frontend/src/views/TaskDetail.vue Normal file
@@ -0,0 +1,351 @@
<script setup lang="ts">
import { ref, onMounted, onUnmounted, computed } from 'vue'
import { api, type TaskFull, type PipelineStep, type PendingAction } from '../api'
import Badge from '../components/Badge.vue'
import Modal from '../components/Modal.vue'

const props = defineProps<{ id: string }>()
const task = ref<TaskFull | null>(null)
const loading = ref(true)
const error = ref('')
const selectedStep = ref<PipelineStep | null>(null)
const polling = ref(false)
let pollTimer: ReturnType<typeof setInterval> | null = null

// Approve modal
const showApprove = ref(false)
const approveForm = ref({ title: '', description: '', type: 'decision', createFollowups: true })
const approveLoading = ref(false)
const followupResults = ref<{ id: string; title: string }[]>([])
const pendingActions = ref<PendingAction[]>([])
const resolvingAction = ref(false)

// Reject modal
const showReject = ref(false)
const rejectReason = ref('')

async function load() {
  try {
    const prev = task.value
    task.value = await api.taskFull(props.id)
    // Auto-start polling if the task is in_progress
    if (task.value.status === 'in_progress' && !polling.value) {
      startPolling()
    }
    // Stop polling when the pipeline is done
    if (prev?.status === 'in_progress' && task.value.status !== 'in_progress') {
      stopPolling()
    }
  } catch (e: any) {
    error.value = e.message
  } finally {
    loading.value = false
  }
}

function startPolling() {
  if (polling.value) return
  polling.value = true
  pollTimer = setInterval(load, 3000)
}

function stopPolling() {
  polling.value = false
  if (pollTimer) { clearInterval(pollTimer); pollTimer = null }
}

onMounted(load)
onUnmounted(stopPolling)

function statusColor(s: string) {
  const m: Record<string, string> = {
    pending: 'gray', in_progress: 'blue', review: 'yellow',
    done: 'green', blocked: 'red', decomposed: 'purple',
  }
  return m[s] || 'gray'
}

const roleIcons: Record<string, string> = {
  pm: '\u{1F9E0}', security: '\u{1F6E1}', debugger: '\u{1F50D}',
  frontend_dev: '\u{1F4BB}', backend_dev: '\u{2699}', tester: '\u{2705}',
  reviewer: '\u{1F4CB}', architect: '\u{1F3D7}', followup_pm: '\u{1F4DD}',
}

function stepStatusClass(step: PipelineStep) {
  if (step.success) return 'border-green-700 bg-green-950/30'
  return 'border-red-700 bg-red-950/30'
}

function stepStatusIcon(step: PipelineStep) {
  return step.success ? '\u2713' : '\u2717'
}

function stepStatusColor(step: PipelineStep) {
  return step.success ? 'text-green-400' : 'text-red-400'
}

function formatOutput(text: string | null): string {
  if (!text) return ''
  try {
    const parsed = JSON.parse(text)
    return JSON.stringify(parsed, null, 2)
  } catch {
    return text
  }
}

async function approve() {
  if (!task.value) return
  approveLoading.value = true
  followupResults.value = []
  pendingActions.value = []
  try {
    const data: Record<string, unknown> = {
      create_followups: approveForm.value.createFollowups,
    }
    if (approveForm.value.title) {
      data.decision_title = approveForm.value.title
data.decision_description = approveForm.value.description
data.decision_type = approveForm.value.type
}
const res = await api.approveTask(props.id, data as any)
if (res.followup_tasks?.length) {
followupResults.value = res.followup_tasks.map(t => ({ id: t.id, title: t.title }))
}
if (res.pending_actions?.length) {
pendingActions.value = res.pending_actions
}
if (!res.followup_tasks?.length && !res.pending_actions?.length) {
showApprove.value = false
}
approveForm.value = { title: '', description: '', type: 'decision', createFollowups: true }
await load()
} catch (e: any) {
error.value = e.message
} finally {
approveLoading.value = false
}
}
async function resolveAction(action: PendingAction, choice: string) {
resolvingAction.value = true
try {
const res = await api.resolveAction(props.id, action, choice)
pendingActions.value = pendingActions.value.filter(a => a !== action)
if (choice === 'manual_task' && res.result && typeof res.result === 'object' && 'id' in res.result) {
followupResults.value.push({ id: (res.result as any).id, title: (res.result as any).title })
}
if (choice === 'rerun') {
await load()
}
} catch (e: any) {
error.value = e.message
} finally {
resolvingAction.value = false
}
}
async function reject() {
if (!task.value || !rejectReason.value) return
try {
await api.rejectTask(props.id, rejectReason.value)
showReject.value = false
rejectReason.value = ''
await load()
} catch (e: any) {
error.value = e.message
}
}
async function runPipeline() {
try {
await api.runTask(props.id)
startPolling()
await load()
} catch (e: any) {
error.value = e.message
}
}
const hasSteps = computed(() => (task.value?.pipeline_steps?.length ?? 0) > 0)
const isRunning = computed(() => task.value?.status === 'in_progress')
</script>
<template>
<div v-if="loading && !task" class="text-gray-500 text-sm">Loading...</div>
<div v-else-if="error && !task" class="text-red-400 text-sm">{{ error }}</div>
<div v-else-if="task">
<!-- Header -->
<div class="mb-6">
<div class="flex items-center gap-2 mb-1">
<router-link :to="`/project/${task.project_id}`" class="text-gray-600 hover:text-gray-400 text-sm no-underline">
&larr; {{ task.project_id }}
</router-link>
</div>
<div class="flex items-center gap-3 mb-2">
<h1 class="text-xl font-bold text-gray-100">{{ task.id }}</h1>
<span class="text-gray-400">{{ task.title }}</span>
<Badge :text="task.status" :color="statusColor(task.status)" />
<span v-if="isRunning" class="inline-block w-2 h-2 bg-blue-500 rounded-full animate-pulse"></span>
<span class="text-xs text-gray-600">pri {{ task.priority }}</span>
</div>
<div v-if="task.brief" class="text-xs text-gray-500 mb-1">
Brief: {{ JSON.stringify(task.brief) }}
</div>
<div v-if="task.assigned_role" class="text-xs text-gray-500">
Assigned: {{ task.assigned_role }}
</div>
</div>
<!-- Pipeline Graph -->
<div v-if="hasSteps || isRunning" class="mb-6">
<h2 class="text-sm font-semibold text-gray-300 mb-3">
Pipeline
<span v-if="isRunning" class="text-blue-400 text-xs font-normal ml-2 animate-pulse">running...</span>
</h2>
<div class="flex items-center gap-1 overflow-x-auto pb-2">
<template v-for="(step, i) in task.pipeline_steps" :key="step.id">
<div v-if="i > 0" class="text-gray-600 text-lg shrink-0 px-1">&rarr;</div>
<button
@click="selectedStep = selectedStep?.id === step.id ? null : step"
class="border rounded-lg px-3 py-2 min-w-[120px] text-left transition-all shrink-0"
:class="[
stepStatusClass(step),
selectedStep?.id === step.id ? 'ring-1 ring-blue-500' : '',
]"
>
<div class="flex items-center gap-1.5 mb-1">
<span class="text-base">{{ roleIcons[step.agent_role] || '\u{1F916}' }}</span>
<span class="text-xs font-medium text-gray-300">{{ step.agent_role }}</span>
<span :class="stepStatusColor(step)" class="text-xs ml-auto">{{ stepStatusIcon(step) }}</span>
</div>
<div class="flex gap-2 text-[10px] text-gray-500">
<span v-if="step.duration_seconds">{{ step.duration_seconds }}s</span>
<span v-if="step.tokens_used">{{ step.tokens_used?.toLocaleString() }}tk</span>
<span v-if="step.cost_usd">${{ step.cost_usd?.toFixed(3) }}</span>
</div>
</button>
</template>
</div>
</div>
<!-- No pipeline -->
<div v-if="!hasSteps && !isRunning" class="mb-6 text-sm text-gray-600">
No pipeline steps yet.
</div>
<!-- Selected step output -->
<div v-if="selectedStep" class="mb-6">
<h2 class="text-sm font-semibold text-gray-300 mb-2">
Output: {{ selectedStep.agent_role }}
<span class="text-xs text-gray-600 font-normal ml-2">{{ selectedStep.created_at }}</span>
</h2>
<div class="border border-gray-800 rounded-lg bg-gray-900/50 overflow-hidden">
<pre class="p-4 text-xs text-gray-300 overflow-x-auto whitespace-pre-wrap max-h-[600px] overflow-y-auto">{{ formatOutput(selectedStep.output_summary) }}</pre>
</div>
</div>
<!-- Related Decisions -->
<div v-if="task.related_decisions?.length" class="mb-6">
<h2 class="text-sm font-semibold text-gray-300 mb-2">Related Decisions</h2>
<div class="space-y-1">
<div v-for="d in task.related_decisions" :key="d.id"
class="px-3 py-2 border border-gray-800 rounded text-xs">
<Badge :text="d.type" :color="d.type === 'gotcha' ? 'red' : 'blue'" />
<span class="text-gray-300 ml-2">{{ d.title }}</span>
</div>
</div>
</div>
<!-- Actions Bar -->
<div class="sticky bottom-0 bg-gray-950 border-t border-gray-800 py-3 flex gap-3 -mx-6 px-6 mt-8">
<button v-if="task.status === 'review'"
@click="showApprove = true"
class="px-4 py-2 text-sm bg-green-900/50 text-green-400 border border-green-800 rounded hover:bg-green-900">
&#10003; Approve
</button>
<button v-if="task.status === 'review' || task.status === 'in_progress'"
@click="showReject = true"
class="px-4 py-2 text-sm bg-red-900/50 text-red-400 border border-red-800 rounded hover:bg-red-900">
&#10007; Reject
</button>
<button v-if="task.status === 'pending' || task.status === 'blocked'"
@click="runPipeline"
:disabled="polling"
class="px-4 py-2 text-sm bg-blue-900/50 text-blue-400 border border-blue-800 rounded hover:bg-blue-900 disabled:opacity-50">
<span v-if="polling" class="inline-block w-3 h-3 border-2 border-blue-400 border-t-transparent rounded-full animate-spin mr-1"></span>
{{ polling ? 'Pipeline running...' : '\u25B6 Run Pipeline' }}
</button>
</div>
<!-- Approve Modal -->
<Modal v-if="showApprove" title="Approve Task" @close="showApprove = false; followupResults = []; pendingActions = []">
<!-- Pending permission actions -->
<div v-if="pendingActions.length" class="space-y-3">
<p class="text-sm text-yellow-400">Permission issues need your decision:</p>
<div v-for="(action, i) in pendingActions" :key="i"
class="border border-yellow-900/50 rounded-lg p-3 space-y-2">
<p class="text-sm text-gray-300">{{ action.description }}</p>
<div class="flex gap-2">
<button @click="resolveAction(action, 'rerun')" :disabled="resolvingAction"
class="px-3 py-1 text-xs bg-blue-900/50 text-blue-400 border border-blue-800 rounded hover:bg-blue-900 disabled:opacity-50">
Rerun (skip permissions)
</button>
<button @click="resolveAction(action, 'manual_task')" :disabled="resolvingAction"
class="px-3 py-1 text-xs bg-gray-800 text-gray-300 border border-gray-700 rounded hover:bg-gray-700 disabled:opacity-50">
Create task
</button>
<button @click="resolveAction(action, 'skip')" :disabled="resolvingAction"
class="px-3 py-1 text-xs bg-gray-800 text-gray-500 border border-gray-700 rounded hover:bg-gray-700 disabled:opacity-50">
Skip
</button>
</div>
</div>
</div>
<!-- Follow-up results -->
<div v-if="followupResults.length && !pendingActions.length" class="space-y-3">
<p class="text-sm text-green-400">Task approved. Created {{ followupResults.length }} follow-up tasks:</p>
<div class="space-y-1">
<router-link v-for="f in followupResults" :key="f.id" :to="`/task/${f.id}`"
class="block px-3 py-2 border border-gray-800 rounded text-sm text-gray-300 hover:border-gray-600 no-underline">
<span class="text-gray-500">{{ f.id }}</span> {{ f.title }}
</router-link>
</div>
<button @click="showApprove = false; followupResults = []"
class="w-full py-2 bg-gray-800 text-gray-300 border border-gray-700 rounded text-sm hover:bg-gray-700">
Close
</button>
</div>
<!-- Approve form -->
<form v-if="!followupResults.length && !pendingActions.length" @submit.prevent="approve" class="space-y-3">
<label class="flex items-center gap-2 text-sm text-gray-300 cursor-pointer">
<input type="checkbox" v-model="approveForm.createFollowups"
class="rounded border-gray-600 bg-gray-800 text-blue-500" />
Create follow-up tasks from pipeline results
</label>
<p class="text-xs text-gray-500">Optionally record a decision:</p>
<input v-model="approveForm.title" placeholder="Decision title (optional)"
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<textarea v-if="approveForm.title" v-model="approveForm.description" placeholder="Description"
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600 resize-y" rows="2"></textarea>
<button type="submit" :disabled="approveLoading"
class="w-full py-2 bg-green-900/50 text-green-400 border border-green-800 rounded text-sm hover:bg-green-900 disabled:opacity-50">
{{ approveLoading ? 'Processing...' : 'Approve & mark done' }}
</button>
</form>
</Modal>
<!-- Reject Modal -->
<Modal v-if="showReject" title="Reject Task" @close="showReject = false">
<form @submit.prevent="reject" class="space-y-3">
<textarea v-model="rejectReason" placeholder="Why are you rejecting this?" rows="3" required
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600 resize-y"></textarea>
<button type="submit"
class="w-full py-2 bg-red-900/50 text-red-400 border border-red-800 rounded text-sm hover:bg-red-900">
Reject &amp; return to pending
</button>
</form>
</Modal>
</div>
</template>
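
For context between files: `TaskDetail.vue` above imports `api` and the `TaskFull` / `PipelineStep` / `PendingAction` types from `../api`, which this compare view does not show. The sketch below reconstructs roughly what the component relies on — field names are inferred from the template, the `/run` endpoint path comes from the commit message, and everything else (`/full`, the `api` method signatures) is a guess, not the real module:

```typescript
// Sketch of ../api as consumed by TaskDetail.vue — inferred, not the actual module.
export interface PipelineStep {
  id: string
  agent_role: string
  success: boolean
  duration_seconds: number | null
  tokens_used: number | null
  cost_usd: number | null
  output_summary: string | null
  created_at: string
}

export interface PendingAction {
  description: string
  // plus whatever payload resolve_pending_action() needs — not visible in this diff
}

export interface TaskFull {
  id: string
  project_id: string
  title: string
  status: string
  priority: number
  brief: unknown | null
  assigned_role: string | null
  pipeline_steps: PipelineStep[]
  related_decisions: { id: string; type: string; title: string }[]
}

// Pure path helper so the endpoint strings live in one place.
export const taskPath = (id: string, suffix = ''): string =>
  `/api/tasks/${encodeURIComponent(id)}${suffix}`

// Thin JSON wrapper over fetch; surfaces non-2xx responses as errors.
async function request<T>(method: string, url: string, body?: unknown): Promise<T> {
  const res = await fetch(url, {
    method,
    headers: body !== undefined ? { 'Content-Type': 'application/json' } : undefined,
    body: body !== undefined ? JSON.stringify(body) : undefined,
  })
  if (!res.ok) throw new Error(`${res.status}: ${await res.text()}`)
  return res.json() as Promise<T>
}

export const api = {
  // '/full' is a hypothetical path; only POST /api/tasks/{id}/run is documented above.
  taskFull: (id: string) => request<TaskFull>('GET', taskPath(id, '/full')),
  runTask: (id: string) => request<unknown>('POST', taskPath(id, '/run')),
}
```

The point of `taskPath` is that the component's polling loop re-requests the same URLs every 3 seconds, so centralizing path construction keeps the client and the FastAPI-style routes from drifting apart.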


@ -0,0 +1,10 @@
/** @type {import('tailwindcss').Config} */
export default {
darkMode: 'class',
content: ["./index.html", "./src/**/*.{vue,ts}"],
theme: {
extend: {},
},
plugins: [],
}


@ -0,0 +1,16 @@
{
"extends": "@vue/tsconfig/tsconfig.dom.json",
"compilerOptions": {
"tsBuildInfoFile": "./node_modules/.tmp/tsconfig.app.tsbuildinfo",
"types": ["vite/client"],
/* Linting */
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"erasableSyntaxOnly": true,
"noFallthroughCasesInSwitch": true,
"noUncheckedSideEffectImports": true
},
"include": ["src/**/*.ts", "src/**/*.tsx", "src/**/*.vue"]
}


@ -0,0 +1,7 @@
{
"files": [],
"references": [
{ "path": "./tsconfig.app.json" },
{ "path": "./tsconfig.node.json" }
]
}


@ -0,0 +1,26 @@
{
"compilerOptions": {
"tsBuildInfoFile": "./node_modules/.tmp/tsconfig.node.tsbuildinfo",
"target": "ES2023",
"lib": ["ES2023"],
"module": "ESNext",
"types": ["node"],
"skipLibCheck": true,
/* Bundler mode */
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"verbatimModuleSyntax": true,
"moduleDetection": "force",
"noEmit": true,
/* Linting */
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"erasableSyntaxOnly": true,
"noFallthroughCasesInSwitch": true,
"noUncheckedSideEffectImports": true
},
"include": ["vite.config.ts"]
}


@ -0,0 +1,7 @@
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
// https://vite.dev/config/
export default defineConfig({
plugins: [vue()],
})
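
Note that the frontend calls the API with relative `/api/...` paths, while the `vite.config.ts` in this diff has no proxy entry. In development that usually means either serving the built assets from the backend or adding a `server.proxy` block; a minimal sketch of the latter (this is an assumption, not part of the diff, and the backend port `8000` is hypothetical):

```typescript
// Hypothetical dev-server proxy — NOT part of this diff.
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
  plugins: [vue()],
  server: {
    proxy: {
      // Forward /api/* requests to the backend during `vite dev`
      '/api': 'http://localhost:8000',
    },
  },
})
```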