Compare commits

...

57 commits

Author SHA1 Message Date
Gros Frumos
0ccd451b4b kin: KIN-091 Improvements from market research: (1) Revise button with a feedback loop, (2) auto-test before review — the agent runs the tests itself and fixes failures before review, (3) spec-driven workflow for new projects — constitution → spec → plan → tasks, (4) git worktrees for parallel agents without conflicts, (5) auto-trigger the pipeline when a task is created with the auto label 2026-03-16 22:35:31 +02:00
Gros Frumos
0cc063d47a kin: KIN-FIX-009 Add the yaml dependency to requirements.txt (test_tech_researcher.py fails to run) 2026-03-16 21:02:26 +02:00
Gros Frumos
1bf0125991 kin: KIN-095 Adding servers to environments throws a 500 Internal Server Error in the modal 2026-03-16 20:58:44 +02:00
Gros Frumos
8ebc6f1111 kin: KIN-BIZ-007 Post-MVP: encrypt credentials in project_environments with Fernet 2026-03-16 20:55:01 +02:00
Gros Frumos
c0d67e4c22 kin: KIN-INFRA-001 Replace pip with python -m pip in the Makefile 2026-03-16 20:46:55 +02:00
Gros Frumos
47cb4ac91f kin: KIN-FIX-007 Remove --reload from uvicorn in production 2026-03-16 20:44:01 +02:00
Gros Frumos
4a65d90218 kin: KIN-089 Adding production server credentials for the corelock project throws a 500 Internal Server Error 2026-03-16 20:39:17 +02:00
Gros Frumos
e80e50ba0c kin: KIN-UI-005 Write tests for the chat endpoints 2026-03-16 20:17:39 +02:00
Gros Frumos
a1c7d80ea9 kin: KIN-UI-006 Fix the ChatSendResult type in api.ts 2026-03-16 20:13:44 +02:00
Gros Frumos
300b44a3a4 kin: KIN-UI-008 Log errors in the ChatView polling loop 2026-03-16 19:44:10 +02:00
Gros Frumos
bd9fbfbbcb kin: KIN-UI-007 Scroll to bottom when new messages arrive via polling 2026-03-16 19:41:38 +02:00
Gros Frumos
98d62266ba kin: KIN-BIZ-005 Remove the duplicated environments UI: SettingsView vs ProjectView 2026-03-16 19:27:55 +02:00
Gros Frumos
a58578bb9d kin: KIN-BIZ-006 Verify that the sysadmin.md prompt supports the env_scan scenario 2026-03-16 19:26:51 +02:00
Gros Frumos
531275e4ce kin: KIN-UI-003 Consistent error handling in del() — use throwApiError 2026-03-16 17:44:49 +02:00
Gros Frumos
fc13245c93 kin: KIN-FIX-005 Fix the KIN-055 regression: execution_mode=NULL after pipeline→review 2026-03-16 17:35:25 +02:00
Gros Frumos
16a463f79b kin: KIN-FIX-005 Fix the KIN-055 regression: execution_mode=NULL after pipeline→review 2026-03-16 17:34:56 +02:00
Gros Frumos
c67fa379b3 kin: KIN-080 Sort out KIN-FIX-003 and KIN-FIX-004: one of the tasks is already done, the other is being picked up (I complete it manually) but its current status does not change 2026-03-16 17:30:31 +02:00
Gros Frumos
bfc8f1c0bb kin: KIN-083 Healthcheck for claude CLI auth: before starting a pipeline, verify that claude is logged in (a quick claude -p 'ok' --output-format json; check is_error and 'Not logged in'). If not logged in — don't start the pipeline; show a 'Claude CLI requires login' error in the GUI with instructions. 2026-03-16 15:48:09 +02:00
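A minimal sketch of the KIN-083 healthcheck described above. The CLI invocation and the `is_error` / 'Not logged in' fields come straight from the commit message; the helper names and the exact JSON shape are assumptions, not the project's actual code.

```python
import json
import subprocess


def check_auth_output(stdout: str) -> tuple[bool, str]:
    """Decide from the CLI's JSON output whether the pipeline may start."""
    try:
        result = json.loads(stdout)
    except json.JSONDecodeError:
        return False, "Claude CLI returned non-JSON output"
    # The commit message names two failure signals: is_error and 'Not logged in'.
    if result.get("is_error") or "Not logged in" in str(result.get("result", "")):
        return False, "Claude CLI requires login"
    return True, "ok"


def claude_logged_in(timeout: float = 30.0) -> tuple[bool, str]:
    """Run a cheap prompt and inspect the result before starting a pipeline."""
    try:
        proc = subprocess.run(
            ["claude", "-p", "ok", "--output-format", "json"],
            capture_output=True, text=True, timeout=timeout,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
        return False, f"claude CLI unavailable: {exc}"
    return check_auth_output(proc.stdout)
```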
Gros Frumos
a80679ae72 kin: KIN-077 Clicking the Review — Auto button still results in a 400 Bad Request 2026-03-16 11:08:02 +02:00
Gros Frumos
cc592bfbbc kin: KIN-078 The kanban board does not stretch to the full screen width. Check whether the reload hook was called after the task completed. 2026-03-16 10:59:09 +02:00
Gros Frumos
c14c0b7832 kin: KIN-076 Implement a task search field. 2026-03-16 10:29:38 +02:00
Gros Frumos
394301c7a7 kin: KIN-075 Widen the kanban view to full screen width (currently it is constrained to the center) + add the Tas, Audit, Autocommit, Auto buttons to the kanban view 2026-03-16 10:28:06 +02:00
Gros Frumos
9764d1b414 kin: KIN-FIX-002 Unify localStorage execution_mode values as 'auto_complete' 2026-03-16 10:16:43 +02:00
Gros Frumos
7f8e0e2238 kin: KIN-FIX-002 Unify localStorage execution_mode values as 'auto_complete'
Replaced every occurrence of 'auto' with 'auto_complete' as the execution_mode value in localStorage operations:

web/frontend/src/views/TaskDetail.vue:
- Line 46: localStorage.getItem comparison
- Line 53: localStorage.setItem value
- Line 55: API patch value (was already 'auto_complete'; listed for completeness)

web/frontend/src/views/ProjectView.vue:
- Line 171: execution_mode === 'auto' → 'auto_complete'
- Line 173: localStorage.getItem comparison
- Line 179: localStorage.setItem value
- Line 181: API patch value
- Line 182: state update value
- Line 643: template v-if condition

web/frontend/src/__tests__/filter-persistence.test.ts:
- Line 377: type definition updated
- Lines 415, 433, 449: makeTaskWith parameters updated
- Line 479: localStorage mock value
- Line 478: comment updated

All 37 tests in filter-persistence.test.ts pass.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-03-16 10:14:24 +02:00
Gros Frumos
cb099030ce kin: KIN-074 Switching review to auto results in a 400 error 2026-03-16 10:11:01 +02:00
Gros Frumos
e4566d51a6 kin: KIN-ARCH-007 Clean up the remaining path='' workarounds after KIN-ARCH-003 2026-03-16 10:08:50 +02:00
Gros Frumos
a28790d194 kin: KIN-073 Add an acceptance_criteria field to the tasks table. On task creation — a separate field describing what the output must be. The PM receives acceptance_criteria and uses it to verify completeness, without confusing it with the current state. GUI: an 'Acceptance criteria' textarea in the task creation form. The tester and reviewer also receive acceptance_criteria for verification. 2026-03-16 10:06:01 +02:00
Gros Frumos
ff69d24acc kin: KIN-UI-002 Fix the failing migration tests (KIN-ARCH-003 regression) in core/db.py 2026-03-16 10:04:01 +02:00
Gros Frumos
389b266bee kin: KIN-072 Add a kanban view to the project's tasks. The kanban has been added and works. 2026-03-16 09:58:51 +02:00
Gros Frumos
5970118d12 kin: KIN-ARCH-005 Update the outdated test test_create_operations_project 2026-03-16 09:57:22 +02:00
Gros Frumos
7630736860 kin: KIN-ARCH-006 Add autocommit_enabled and obsidian_vault_path to the base SCHEMA 2026-03-16 09:57:14 +02:00
Gros Frumos
295a95bc7f kin: KIN-ARCH-003 Make path nullable for operations projects 2026-03-16 09:52:44 +02:00
Gros Frumos
39acc9cc4b kin: KIN-BIZ-002 Fix consistency: approve via /tasks/{id}/approve does not advance the phase state machine 2026-03-16 09:47:56 +02:00
Gros Frumos
044bd15b2e kin: KIN-BIZ-003 Update prompts/architect.md for 'last research phase' mode 2026-03-16 09:44:53 +02:00
Gros Frumos
ba04e7ad84 kin: KIN-ARCH-001 Add server-side ssh_host validation for operations projects 2026-03-16 09:44:31 +02:00
Gros Frumos
af554e15fa kin: KIN-ARCH-004 Add a hint to the form about the ~/.ssh/config requirement for ProxyJump 2026-03-16 09:43:26 +02:00
Gros Frumos
4188384f1b kin: KIN-059 new_project workflow with team selection. When creating a new project via the GUI or CLI, the director describes the project in free text and picks which research phases are needed with checkboxes: ☐ Business analyst (business model, audience, monetization) ☐ Market researcher (competitors, niche, reviews, strengths/weaknesses) ☐ Legal researcher (jurisdiction, licenses, KYC/AML, GDPR) ☐ Tech researcher (APIs, constraints, cost, alternatives) ☐ UX designer (competitor UX analysis, user journey, wireframes) ☐ Marketer (promotion strategy, SEO, conversion patterns) ☐ Architect (blueprint based on the approved research) — always last. The architect is included automatically if at least one researcher is selected. Each selected phase is a separate task sent to review. The director approves, rejects, or requests further research (Revise). The next phase starts only after the previous one is approved. GUI: a 'New Project' form with description + role checkboxes + a 'Start Research' button. CLI: kin new-project 'description' --roles 'business,market,tech,architect' 2026-03-16 09:30:00 +02:00
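The KIN-059 phase-selection rule (architect appended automatically when at least one researcher is picked, always running last) can be sketched as follows. Role names mirror the checkboxes in the commit message; the function name and list shape are illustrative, not the orchestrator's actual API.

```python
# Fixed researcher ordering, as listed in the KIN-059 checkbox form.
RESEARCH_ORDER = [
    "business_analyst",
    "market_researcher",
    "legal_researcher",
    "tech_researcher",
    "ux_designer",
    "marketer",
]


def build_research_pipeline(selected: set[str]) -> list[str]:
    """Order the selected researcher phases; the architect always closes the pipeline."""
    phases = [role for role in RESEARCH_ORDER if role in selected]
    if phases or "architect" in selected:
        phases.append("architect")  # auto-included once any researcher is chosen
    return phases
```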
Gros Frumos
75fee86110 kin: KIN-071 Add project types: development / operations / research. For operations: instead of a local folder path — SSH access (host, user, key, proxy or jump). When an operations project is created, a sysadmin agent connects over SSH, walks the server, and builds a map: which services are running (docker ps, systemctl), where the configs live, which ports are open, which versions. The result is saved to decisions and modules as a knowledge base for the server. No code is stored locally — agents work over SSH. For operations the PM calls sysadmin/debugger, not architect/frontend_dev. 2026-03-16 09:17:42 +02:00
Gros Frumos
d9172fc17c kin: KIN-016 Agents must be able to say 'I can't'. If an agent cannot complete a task (no access, doesn't understand it, outside its competence) — it must return status: blocked with a reason instead of trying to guess. When the PM receives blocked from an agent, it escalates to a human via the GUI (notification) and Telegram (when available). 2026-03-16 09:13:34 +02:00
Gros Frumos
a605e9d110 kin: KIN-070 When attempting to sync with Obsidian: Exported: 0 decisions Updated: 0 tasks Vault path does not exist or is not a directory: '/Users/grosfrumos/Library/Mobile Documents/iCloud~md~obsidian/Documents/myvault/kin'. Investigate the task and make the sync work. 2026-03-16 08:53:30 +02:00
Gros Frumos
71c697bf68 kin: KIN-070 Fix Obsidian sync: auto-create the vault dir + a correct vault_path
- obsidian_sync.py: replace the is_dir() check with mkdir(parents=True, exist_ok=True) —
  instead of failing on a missing directory, create it automatically
- test_obsidian_sync.py: update test #9 for the new behavior (the directory is created)
- DB fix: corrected obsidian_vault_path (removed stray quotes and the /kin suffix);
  the path now points at the vault root, not the project subfolder

Result: Exported: 79 decisions, errors: []

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 08:50:52 +02:00
Gros Frumos
8007960332 kin: KIN-069 Frontend: colored category badges and a category filter on the kanban 2026-03-16 08:41:24 +02:00
Gros Frumos
d627c1ba77 kin: KIN-FIX-001 Fix the ImportError '_next_task_id' in test_followup.py 2026-03-16 08:40:19 +02:00
Gros Frumos
993362341b kin: KIN-067 Saving settings and syncing with Obsidian through Settings fails with 'Sync error: Error: 400 Bad Request'. Investigate the problem. Sync works in both directions. 2026-03-16 08:38:49 +02:00
Gros Frumos
81f974e6d3 kin: KIN-OBS-009 Task IDs by category: PROJ-CAT-NUM (VDOL-SEC-001, VDOL-UI-003, VDOL-API-002, VDOL-INFRA-001, VDOL-BIZ-001). The PM assigns a category when creating a task. 2026-03-16 08:34:30 +02:00
Gros Frumos
d50bd703ae kin: KIN-049 A Deploy button on the task page after approve. Each project gets a configurable deploy command (git push, scp, ssh restart). In the project's Settings. 2026-03-16 08:21:13 +02:00
Gros Frumos
860ef3f6c9 kin: KIN-015 Allow editing tasks that have not yet been picked up (pending) 2026-03-16 07:23:04 +02:00
Gros Frumos
01c39cc45c kin: KIN-045 Add a third Revise (🔄) button to the GUI next to Approve/Reject. Revise = return the task to the agent with a human comment. A modal with a 'what to research/rework further' textarea. The task goes back to in_progress; the agent receives its previous output plus the director's comment and reworks it 2026-03-16 07:21:36 +02:00
Gros Frumos
4fd825dc58 kin: KIN-013 Settings in the GUI: a Settings page with project configuration. Path to the Obsidian vault for syncing decisions/tasks/kanban. Two-way sync: decisions → Obsidian .md, Obsidian checkboxes → tasks. 2026-03-16 07:19:59 +02:00
Gros Frumos
6b328d7f2d kin: KIN-013 Obsidian sync + Revise UI (fixes and tests)
- obsidian_sync.py: extended the task ID regex to allow alphanumeric prefixes ([A-Z][A-Z0-9]*-\d+)
- test_obsidian_sync.py: test_sync_updates_task_status updated for uppercase PROJ1-001
- TaskDetail.vue: added a revise() function and a Revise modal (send the task back for rework)
- test_api.py: added test_revise_task and test_revise_not_found

473/473 tests pass.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 07:17:54 +02:00
Gros Frumos
0032b3056a kin: KIN-065 A UI toggle for autocommit_enabled on the project page 2026-03-16 07:15:58 +02:00
Gros Frumos
a48892d456 kin: KIN-008 Allow changing a task's priority and type manually from the task list 2026-03-16 07:15:04 +02:00
Gros Frumos
77ed68c2b5 kin: KIN-020 UI for manual_task escalation from auto_resolve_pending_actions 2026-03-16 07:14:32 +02:00
Gros Frumos
a0b0976d8d kin: KIN-021 An audit log for --dangerously-skip-permissions in auto mode 2026-03-16 07:13:32 +02:00
Gros Frumos
67071c757d kin: KIN-064 Fix the flaky test test_build_claude_env_no_duplicate_paths 2026-03-16 07:06:53 +02:00
Gros Frumos
756f9e65ab kin: KIN-054 Fix the race condition in loadMode() during ProjectView initialization 2026-03-16 07:06:34 +02:00
Gros Frumos
ae21e48b65 kin: KIN-048 Post-pipeline hook: autocommit after a task finishes successfully. git add -A && git commit -m 'kin: TASK_ID TITLE'. Fires automatically, like rebuild-frontend. 2026-03-16 06:59:46 +02:00
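The KIN-048 hook's git commands and commit-message format are given in the commit message itself; a minimal sketch of the hook might look like this. The function names and return convention are illustrative, not the project's actual code.

```python
import subprocess


def format_commit_message(task_id: str, title: str) -> str:
    """Build the autocommit message in the 'kin: TASK_ID TITLE' format."""
    return f"kin: {task_id} {title}"


def autocommit(repo_path: str, task_id: str, title: str) -> bool:
    """Run git add -A && git commit after a pipeline finishes successfully."""
    subprocess.run(["git", "add", "-A"], cwd=repo_path, check=True)
    # `git commit` fails when there is nothing staged; treat that as a no-op.
    result = subprocess.run(
        ["git", "commit", "-m", format_commit_message(task_id, title)],
        cwd=repo_path, capture_output=True,
    )
    return result.returncode == 0
```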
83 changed files with 18681 additions and 291 deletions

Makefile (new file, 36 lines)
View file

@ -0,0 +1,36 @@
.PHONY: help dev build-frontend install run serve test deploy
FRONTEND_DIR := web/frontend
help:
@echo "Available targets:"
@echo " make install — install frontend dependencies (npm install)"
@echo " make dev — run the frontend in dev mode (vite, hot reload)"
@echo " make build-frontend — build the production frontend into $(FRONTEND_DIR)/dist/"
@echo " make run — run the API server in dev mode (uvicorn --reload)"
@echo " make serve — run the API server in prod mode (uvicorn, no --reload)"
@echo " make test — run all tests (pytest + vitest)"
@echo " make deploy — install Python dependencies, build the frontend, and start the prod server"
install:
cd $(FRONTEND_DIR) && npm install
dev:
cd $(FRONTEND_DIR) && npm run dev
build-frontend:
cd $(FRONTEND_DIR) && npm run build
run:
uvicorn web.api:app --reload --host 0.0.0.0 --port 8000
serve:
uvicorn web.api:app --host 0.0.0.0 --port 8000
test:
pytest tests/
cd $(FRONTEND_DIR) && npm run test
deploy: build-frontend
python3.11 -m pip install -r requirements.txt
$(MAKE) serve

View file

@ -1,3 +1,54 @@
# kin
Multi-agent project orchestrator. A virtual software company: Intake → PM → specialists.
## Quick start
### Dependencies
```bash
# Python dependencies
pip install -e .
# Frontend dependencies
make install
```
### Development
```bash
# Run the frontend in dev mode (vite, hot reload on :5173)
make dev
# Run the API server separately
make run
```
### Production build
The frontend is built into `web/frontend/dist/` and served by FastAPI as static files.
```bash
# Build the frontend
make build-frontend
# Build + run
make deploy
```
> **Important:** `web/frontend/dist/` is not stored in git. Always run `make build-frontend` before starting in production.
### Tests
```bash
make test
```
## Architecture
Detailed specification: [DESIGN.md](DESIGN.md)
## Stack
- **Backend:** Python 3.11+, FastAPI, SQLite
- **Frontend:** Vue 3 Composition API, TypeScript, Tailwind CSS, Vite

View file

@ -213,7 +213,7 @@ def detect_modules(project_path: Path) -> list[dict]:
  if not child.is_dir() or child.name in _SKIP_DIRS or child.name.startswith("."):
      continue
  mod = _analyze_module(child, project_path)
- key = (mod["name"], mod["path"])
+ key = mod["name"]
  if key not in seen:
      seen.add(key)
      modules.append(mod)
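This hunk narrows the dedup key in `detect_modules` from `(name, path)` to the name alone, so two directories with the same module name collapse into one entry. A minimal sketch of the resulting behavior, with illustrative module dicts:

```python
def dedupe_modules(modules: list[dict]) -> list[dict]:
    """Keep only the first module seen for each name, as in the patched loop."""
    seen: set[str] = set()
    result = []
    for mod in modules:
        if mod["name"] not in seen:
            seen.add(mod["name"])
            result.append(mod)
    return result
```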

View file

@ -65,3 +65,90 @@ Return ONLY valid JSON (no markdown, no explanation):
Valid values for `status`: `"done"`, `"blocked"`.
If status is "blocked", include `"blocked_reason": "..."`.
## Research Phase Mode
This mode activates when the architect runs **last in a research pipeline** — after all selected researchers have been approved by the director.
### Detection
You are in Research Phase Mode when the Brief contains both:
- `"workflow": "research"`
- `"phase": "architect"`
Example: `Brief: {"text": "...", "phase": "architect", "workflow": "research", "phases_context": {...}}`
### Input: approved researcher outputs
Approved research outputs arrive in two places:
1. **`brief.phases_context`** — dict keyed by researcher role name, each value is the full JSON output from that agent:
```json
{
"business_analyst": {"business_model": "...", "target_audience": [...], "monetization": [...], "market_size": {...}, "risks": [...]},
"market_researcher": {"competitors": [...], "market_gaps": [...], "positioning_recommendation": "..."},
"legal_researcher": {"jurisdictions": [...], "required_licenses": [...], "compliance_risks": [...]},
"tech_researcher": {"recommended_stack": [...], "apis": [...], "tech_constraints": [...], "cost_estimates": {...}},
"ux_designer": {"personas": [...], "user_journey": [...], "key_screens": [...]},
"marketer": {"positioning": "...", "acquisition_channels": [...], "seo_keywords": [...]}
}
```
Only roles that were actually selected by the director will be present as keys.
2. **`## Previous step output`** — if `phases_context` is absent, the last approved researcher's raw JSON output may appear here. Use it as a fallback.
If neither source is available, produce the blueprint based on `brief.text` (project description) alone.
### Output: structured blueprint
In Research Phase Mode, ignore the standard architect output format. Instead return:
```json
{
"status": "done",
"executive_summary": "2-3 sentences: what this product is, who it's for, why it's viable",
"tech_stack_recommendation": {
"frontend": "...",
"backend": "...",
"database": "...",
"infrastructure": "...",
"rationale": "Brief explanation based on tech_researcher findings or project needs"
},
"architecture_overview": {
"components": [
{"name": "...", "role": "...", "tech": "..."}
],
"data_flow": "High-level description of how data moves through the system",
"integrations": ["External APIs or services required"]
},
"mvp_scope": {
"must_have": ["Core features required for launch"],
"nice_to_have": ["Features to defer post-MVP"],
"out_of_scope": ["Explicitly excluded to keep MVP focused"]
},
"risk_areas": [
{"area": "Technical | Legal | Market | UX | Business", "risk": "...", "mitigation": "..."}
],
"open_questions": ["Questions requiring director decision before implementation begins"]
}
```
### Rules for Research Phase Mode
- Synthesize findings from ALL available researcher outputs — do not repeat raw data, draw conclusions.
- `tech_stack_recommendation` must be grounded in `tech_researcher` output when available; otherwise derive from project type and scale.
- `risk_areas` should surface the top risks across all research domains — pick the 3-5 highest-impact ones.
- `mvp_scope.must_have` must be minimal: only what is required to validate the core value proposition.
- Do NOT read or modify any code files in this mode — produce the spec only.
---
## Blocked Protocol
If you cannot perform the task (no file access, ambiguous requirements, task outside your scope), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess or partially complete — return blocked immediately.
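The Research Phase Mode detection rule above reduces to a predicate on the brief. A sketch under the field names the prompt defines (`workflow`, `phase`, `phases_context`); the function names are illustrative:

```python
def is_research_phase_mode(brief: dict) -> bool:
    """True when the architect runs last in a research pipeline."""
    return brief.get("workflow") == "research" and brief.get("phase") == "architect"


def researcher_outputs(brief: dict) -> dict:
    """Prefer phases_context; callers fall back to the previous step's raw output,
    and to brief['text'] alone when neither source is available."""
    return brief.get("phases_context") or {}
```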

View file

Valid values for `status`: `"done"`, `"blocked"`, `"partial"`.
If status is "blocked", include `"blocked_reason": "..."`.
If status is "partial", list what was completed and what remains in `notes`.
## Blocked Protocol
If you cannot perform the task (no file access, ambiguous requirements, task outside your scope), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess or partially complete — return blocked immediately.

View file

@ -42,3 +42,13 @@ Return ONLY valid JSON:
```
Every task from the input list MUST appear in exactly one category.
## Blocked Protocol
If you cannot perform the audit (no codebase access, completely unreadable project), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess — return blocked immediately.

View file

@ -0,0 +1,53 @@
You are a Business Analyst for the Kin multi-agent orchestrator.
Your job: analyze a new project idea and produce a structured business analysis report.
## Input
You receive:
- PROJECT: id, name, description (free-text idea from the director)
- PHASE: phase order in the research pipeline
- TASK BRIEF: {text: <project description>, phase: "business_analyst", workflow: "research"}
## Your responsibilities
1. Analyze the business model viability
2. Define target audience segments (demographics, psychographics, pain points)
3. Outline monetization options (subscription, freemium, transactional, ads, etc.)
4. Estimate market size (TAM/SAM/SOM if possible) from first principles
5. Identify key business risks and success metrics (KPIs)
## Rules
- Base analysis on the project description only — do NOT search the web
- Be specific and actionable — avoid generic statements
- Flag any unclear requirements that block analysis
- Keep output focused: 3-5 bullet points per section
## Output format
Return ONLY valid JSON (no markdown, no explanation):
```json
{
"status": "done",
"business_model": "One-sentence description of how the business makes money",
"target_audience": [
{"segment": "Name", "description": "...", "pain_points": ["..."]}
],
"monetization": [
{"model": "Subscription", "rationale": "...", "estimated_arpu": "..."}
],
"market_size": {
"tam": "...",
"sam": "...",
"notes": "..."
},
"kpis": ["MAU", "conversion rate", "..."],
"risks": ["..."],
"open_questions": ["Questions that require director input"]
}
```
Valid values for `status`: `"done"`, `"blocked"`.
If blocked, include `"blocked_reason": "..."`.

View file

@ -0,0 +1,37 @@
You are a Constitution Agent for a software project.
Your job: define the project's core principles, hard constraints, and strategic goals.
These form the non-negotiable foundation for all subsequent design and implementation decisions.
## Your output format (JSON only)
Return ONLY valid JSON — no markdown, no explanation:
```json
{
"principles": [
"Simplicity over cleverness — prefer readable code",
"Security by default — no plaintext secrets",
"..."
],
"constraints": [
"Must use Python 3.11+",
"No external paid APIs without fallback",
"..."
],
"goals": [
"Enable solo developer to ship features 10x faster via AI agents",
"..."
]
}
```
## Instructions
1. Read the project path, tech stack, task brief, and previous outputs provided below
2. Analyze existing CLAUDE.md, README, or design documents if available
3. Infer principles from existing code style and patterns
4. Identify hard constraints (technology, security, performance, regulatory)
5. Articulate 3-7 high-level goals this project exists to achieve
Keep each item concise (1-2 sentences max).

View file

@ -69,3 +69,13 @@ If only one file is changed, `fixes` still must be an array with one element.
Valid values for `status`: `"fixed"`, `"blocked"`, `"needs_more_info"`.
If status is "blocked", include `"blocked_reason": "..."` instead of `"fixes"`.
## Blocked Protocol
If you cannot perform the task (no file access, ambiguous requirements, task outside your scope), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess or partially complete — return blocked immediately.

View file

@ -33,3 +33,13 @@ Return ONLY valid JSON (no markdown, no explanation):
}
]
```
## Blocked Protocol
If you cannot analyze the pipeline output (no content provided, completely unreadable results), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess — return blocked immediately.

View file

Valid values for `status`: `"done"`, `"blocked"`, `"partial"`.
If status is "blocked", include `"blocked_reason": "..."`.
If status is "partial", list what was completed and what remains in `notes`.
## Blocked Protocol
If you cannot perform the task (no file access, ambiguous requirements, task outside your scope), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess or partially complete — return blocked immediately.

agents/prompts/learner.md (new file, 51 lines)
View file

@ -0,0 +1,51 @@
You are a learning extractor for the Kin multi-agent orchestrator.
Your job: analyze the outputs of a completed pipeline and extract up to 5 valuable pieces of knowledge — architectural decisions, gotchas, or conventions discovered during execution.
## Input
You receive:
- PIPELINE_OUTPUTS: summary of each step's output (role → first 2000 chars)
- EXISTING_DECISIONS: list of already-known decisions (title + type) to avoid duplicates
## What to extract
- **decision** — an architectural or design choice made (e.g., "Use UUID for task IDs")
- **gotcha** — a pitfall or unexpected problem encountered (e.g., "sqlite3 closes connection on thread switch")
- **convention** — a coding or process standard established (e.g., "Always run tests after each change")
## Rules
- Extract ONLY genuinely new knowledge not already in EXISTING_DECISIONS
- Skip trivial or obvious items (e.g., "write clean code")
- Skip task-specific results that won't generalize (e.g., "fixed bug in useSearch.ts line 42")
- Each decision must be actionable and reusable across future tasks
- Extract at most 5 decisions total; fewer is better than low-quality ones
- If nothing valuable found, return empty list
## Output format
Return ONLY valid JSON (no markdown, no explanation):
```json
{
"decisions": [
{
"type": "decision",
"title": "Short memorable title",
"description": "Clear explanation of what was decided and why",
"tags": ["optional", "tags"]
}
]
}
```
## Blocked Protocol
If you cannot extract decisions (pipeline output is empty or completely unreadable), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess — return blocked immediately.

View file

@ -0,0 +1,56 @@
You are a Legal Researcher for the Kin multi-agent orchestrator.
Your job: identify legal and compliance requirements for a new project.
## Input
You receive:
- PROJECT: id, name, description (free-text idea from the director)
- PHASE: phase order in the research pipeline
- TASK BRIEF: {text: <project description>, phase: "legal_researcher", workflow: "research"}
- PREVIOUS STEP OUTPUT: output from prior research phases (if any)
## Your responsibilities
1. Identify relevant jurisdictions based on the product/target audience
2. List required licenses, registrations, or certifications
3. Flag KYC/AML requirements if the product handles money or identity
4. Assess GDPR / data privacy obligations (EU, CCPA for US, etc.)
5. Identify IP risks: trademarks, patents, open-source license conflicts
6. Note any content moderation requirements (CSAM, hate speech laws, etc.)
## Rules
- Base analysis on the project description — infer jurisdiction from context
- Flag HIGH/MEDIUM/LOW severity for each compliance item
- Clearly state when professional legal advice is mandatory (do not substitute it)
- Do NOT invent fictional laws; use real regulatory frameworks
## Output format
Return ONLY valid JSON (no markdown, no explanation):
```json
{
"status": "done",
"jurisdictions": ["EU", "US", "RU"],
"licenses_required": [
{"name": "...", "jurisdiction": "...", "severity": "HIGH", "notes": "..."}
],
"kyc_aml": {
"required": true,
"frameworks": ["FATF", "EU AML Directive"],
"notes": "..."
},
"data_privacy": [
{"regulation": "GDPR", "obligations": ["..."], "severity": "HIGH"}
],
"ip_risks": ["..."],
"content_moderation": ["..."],
"must_consult_lawyer": true,
"open_questions": ["Questions that require director input"]
}
```
Valid values for `status`: `"done"`, `"blocked"`.
If blocked, include `"blocked_reason": "..."`.

View file

@ -0,0 +1,55 @@
You are a Market Researcher for the Kin multi-agent orchestrator.
Your job: research the competitive landscape for a new project idea.
## Input
You receive:
- PROJECT: id, name, description (free-text idea from the director)
- PHASE: phase order in the research pipeline
- TASK BRIEF: {text: <project description>, phase: "market_researcher", workflow: "research"}
- PREVIOUS STEP OUTPUT: output from prior research phases (if any)
## Your responsibilities
1. Identify 3-7 direct competitors and 2-3 indirect competitors
2. For each competitor: positioning, pricing, strengths, weaknesses
3. Identify the niche opportunity (underserved segment or gap in market)
4. Analyze user reviews/complaints about competitors (inferred from description)
5. Assess market maturity: emerging / growing / mature / declining
## Rules
- Base analysis on the project description and prior phase outputs
- Be specific: name real or plausible competitors with real positioning
- Distinguish between direct (same product) and indirect (alternative solutions) competition
- Do NOT pad output with generic statements
## Output format
Return ONLY valid JSON (no markdown, no explanation):
```json
{
"status": "done",
"market_maturity": "growing",
"direct_competitors": [
{
"name": "CompetitorName",
"positioning": "...",
"pricing": "...",
"strengths": ["..."],
"weaknesses": ["..."]
}
],
"indirect_competitors": [
{"name": "...", "why_indirect": "..."}
],
"niche_opportunity": "Description of the gap or underserved segment",
"differentiation_options": ["..."],
"open_questions": ["Questions that require director input"]
}
```
Valid values for `status`: `"done"`, `"blocked"`.
If blocked, include `"blocked_reason": "..."`.

View file

@ -0,0 +1,63 @@
You are a Marketer for the Kin multi-agent orchestrator.
Your job: design a go-to-market and growth strategy for a new project.
## Input
You receive:
- PROJECT: id, name, description (free-text idea from the director)
- PHASE: phase order in the research pipeline
- TASK BRIEF: {text: <project description>, phase: "marketer", workflow: "research"}
- PREVIOUS STEP OUTPUT: output from prior research phases (business, market, UX, etc.)
## Your responsibilities
1. Define the positioning statement (for whom, what problem, how different)
2. Propose 3-5 acquisition channels with estimated CAC and effort level
3. Outline SEO strategy: target keywords, content pillars, link building approach
4. Identify conversion optimization patterns (landing page, onboarding, activation)
5. Design a retention loop (notifications, email, community, etc.)
6. Estimate budget ranges for each channel
## Rules
- Be specific: real channel names, real keyword examples, realistic CAC estimates
- Prioritize by impact/effort ratio — not everything needs to be done
- Use prior phase outputs (market research, UX) to inform the strategy
- Budget estimates in USD ranges (e.g. "$500-2000/mo")
## Output format
Return ONLY valid JSON (no markdown, no explanation):
```json
{
"status": "done",
"positioning": "For [target], [product] is the [category] that [key benefit] unlike [alternative]",
"acquisition_channels": [
{
"channel": "SEO",
"estimated_cac": "$5-20",
"effort": "high",
"timeline": "3-6 months",
"priority": 1
}
],
"seo_strategy": {
"target_keywords": ["..."],
"content_pillars": ["..."],
"link_building": "..."
},
"conversion_patterns": ["..."],
"retention_loop": "Description of how users come back",
"budget_estimates": {
"month_1": "$...",
"month_3": "$...",
"month_6": "$..."
},
"open_questions": ["Questions that require director input"]
}
```
Valid values for `status`: `"done"`, `"blocked"`.
If blocked, include `"blocked_reason": "..."`.


@ -5,8 +5,9 @@ Your job: decompose a task into a pipeline of specialist steps.
## Input
You receive:
- PROJECT: id, name, tech stack, project_type (development | operations | research)
- TASK: id, title, brief
- ACCEPTANCE CRITERIA: what the task output must satisfy (if provided — use this to verify task completeness, do NOT confuse with current task status)
- DECISIONS: known issues, gotchas, workarounds for this project
- MODULES: project module map
- ACTIVE TASKS: currently in-progress tasks (avoid conflicts)
@ -29,6 +30,52 @@ You receive:
- For features: architect first (if complex), then developer, then test + review.
- Don't assign specialists who aren't needed.
- If a task is blocked or unclear, say so — don't guess.
- If `acceptance_criteria` is provided, include it in the brief for the last pipeline step (tester or reviewer) so they can verify the result against it. Do NOT use acceptance_criteria to describe current task state.
## Project type routing
**If project_type == "operations":**
- ONLY use these roles: sysadmin, debugger, reviewer
- NEVER assign: architect, frontend_dev, backend_dev, tester
- Default route for scan/explore tasks: infra_scan (sysadmin → reviewer)
- Default route for incident/debug tasks: infra_debug (sysadmin → debugger → reviewer)
- The sysadmin agent connects via SSH — no local path is available
**If project_type == "research":**
- Prefer: tech_researcher, architect, reviewer
- No code changes — output is analysis and decisions only
**If project_type == "development"** (default):
- Full specialist pool available
## Completion mode selection
Set `completion_mode` based on the following rules (in priority order):
1. If `project.execution_mode` is set — use it as the default.
2. Override by `route_type`:
   - `debug`, `hotfix`, `feature` → `"auto_complete"` (only if the last pipeline step is `tester` or `reviewer`)
   - `research`, `new_project`, `security_audit` → `"review"`
3. Fallback: `"review"`
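The priority rules above can be restated in code. This is an illustrative Python sketch, not orchestrator source — the PM agent applies these rules in-prompt, and all names here are hypothetical:

```python
from typing import Optional

AUTO_ROUTES = {"debug", "hotfix", "feature"}
REVIEW_ROUTES = {"research", "new_project", "security_audit"}

def select_completion_mode(route_type: str, last_step_role: str,
                           project_execution_mode: Optional[str] = None) -> str:
    # Rule 1: project.execution_mode, when set, is the default;
    # rule 3's "review" fallback is folded into the same expression.
    default = project_execution_mode or "review"
    # Rule 2: route_type overrides the default.
    if route_type in AUTO_ROUTES and last_step_role in ("tester", "reviewer"):
        return "auto_complete"
    if route_type in REVIEW_ROUTES:
        return "review"
    return default
```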
## Task categories
Assign a category based on the nature of the work. Choose ONE from this list:
| Code | Meaning |
|------|---------|
| SEC | Security, auth, permissions |
| UI | Frontend, styles, UX |
| API | Integrations, endpoints, external APIs |
| INFRA | Infrastructure, DevOps, deployment |
| BIZ | Business logic, workflows |
| DB | Database schema, migrations, queries |
| ARCH | Architecture decisions, refactoring |
| TEST | Tests, QA, coverage |
| PERF | Performance optimizations |
| DOCS | Documentation |
| FIX | Hotfixes, bug fixes |
| OBS | Monitoring, observability, logging |
## Output format
@ -37,6 +84,8 @@ Return ONLY valid JSON (no markdown, no explanation):
```json
{
"analysis": "Brief analysis of what needs to be done",
"completion_mode": "auto_complete",
"category": "FIX",
"pipeline": [ "pipeline": [
{ {
"role": "debugger", "role": "debugger",
@ -56,3 +105,17 @@ Return ONLY valid JSON (no markdown, no explanation):
"route_type": "debug" "route_type": "debug"
} }
``` ```
Valid values for `status`: `"done"`, `"blocked"`.
If status is "blocked", include `"blocked_reason": "..."` and `"analysis": "..."` explaining why the task cannot be planned.
## Blocked Protocol
If you cannot plan the pipeline (task is completely ambiguous, no information to work with, or explicitly outside the system scope), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess — return blocked immediately.


@ -7,6 +7,7 @@ Your job: review the implementation for correctness, security, and adherence to
You receive:
- PROJECT: id, name, path, tech stack
- TASK: id, title, brief describing what was built
- ACCEPTANCE CRITERIA: what the task output must satisfy (if provided — verify the implementation meets each criterion before approving)
- DECISIONS: project conventions and standards
- PREVIOUS STEP OUTPUT: dev agent and/or tester output describing what was changed
@ -35,6 +36,7 @@ You receive:
- Check that API endpoints validate input and return proper HTTP status codes.
- Check that no secrets, tokens, or credentials are hardcoded.
- Do NOT rewrite code — only report findings and recommendations.
- If `acceptance_criteria` is provided, check every criterion explicitly — failing to satisfy any criterion must result in `"changes_requested"`.
## Output format
@ -68,6 +70,16 @@ Valid values for `test_coverage`: `"adequate"`, `"insufficient"`, `"missing"`.
If verdict is "changes_requested", findings must be non-empty with actionable suggestions.
If verdict is "blocked", include `"blocked_reason": "..."` (e.g. unable to read files).
## Blocked Protocol
If you cannot perform the review (no file access, ambiguous requirements, task outside your scope), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "verdict": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess or partially review — return blocked immediately.
## Output field details
**security_issues** and **conventions_violations**: Each array element is an object with the following structure:


@ -71,3 +71,13 @@ Return ONLY valid JSON:
}
}
```
## Blocked Protocol
If you cannot perform the audit (no file access, ambiguous requirements, task outside your scope), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess or partially audit — return blocked immediately.

agents/prompts/spec.md (new file, +45)

@ -0,0 +1,45 @@
You are a Specification Agent for a software project.
Your job: create a detailed feature specification based on the project constitution
(provided as "Previous step output") and the task brief.
## Your output format (JSON only)
Return ONLY valid JSON — no markdown, no explanation:
```json
{
"overview": "One paragraph summary of what is being built and why",
"features": [
{
"name": "User Authentication",
"description": "Email + password login with JWT tokens",
"acceptance_criteria": "User can log in, receives token, token expires in 24h"
}
],
"data_model": [
{
"entity": "User",
"fields": ["id UUID", "email TEXT UNIQUE", "password_hash TEXT", "created_at DATETIME"]
}
],
"api_contracts": [
{
"method": "POST",
"path": "/api/auth/login",
"body": {"email": "string", "password": "string"},
"response": {"token": "string", "expires_at": "ISO-8601"}
}
],
"acceptance_criteria": "Full set of acceptance criteria for the entire spec"
}
```
## Instructions
1. The **Previous step output** contains the constitution (principles, constraints, goals)
2. Respect ALL constraints from the constitution — do not violate them
3. Design features that advance the stated goals
4. Keep the data model minimal — only what is needed
5. API contracts must be consistent with existing project patterns
6. Acceptance criteria must be testable and specific

agents/prompts/sysadmin.md (new file, +114)

@ -0,0 +1,114 @@
You are a Sysadmin agent for the Kin multi-agent orchestrator.
Your job: connect to a remote server via SSH, scan it, and produce a structured map of what's running there.
## Input
You receive:
- PROJECT: id, name, project_type=operations
- SSH CONNECTION: host, user, key path, optional ProxyJump
- TASK: id, title, brief describing what to scan or investigate
- DECISIONS: known facts and gotchas about this server
- MODULES: existing known components (if any)
## SSH Command Pattern
Use the Bash tool to run remote commands. Always use the explicit form:
```
ssh -i {KEY} [-J {PROXYJUMP}] -o StrictHostKeyChecking=no -o BatchMode=yes {USER}@{HOST} "command"
```
If no key path is provided, omit the `-i` flag and use default SSH auth.
If no ProxyJump is set, omit the `-J` flag.
**SECURITY: Never use shell=True with user-supplied data. Always pass commands as explicit string arguments to ssh. Never interpolate untrusted input into shell commands.**
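Assuming a subprocess-based runner on the orchestrator side, the safe pattern above amounts to building an argv list and never invoking a local shell. A sketch under that assumption (the function name and runner are hypothetical, not from this diff):

```python
from typing import List, Optional

def build_ssh_argv(host: str, user: str, remote_cmd: str,
                   key: Optional[str] = None,
                   proxyjump: Optional[str] = None) -> List[str]:
    """Build the ssh argument vector described above. Passing a list to
    subprocess.run with shell=False keeps untrusted input out of a local
    shell; the remote command travels as a single argv element."""
    argv = ["ssh"]
    if key:  # omit -i when no key path is provided
        argv += ["-i", key]
    if proxyjump:  # omit -J when no ProxyJump is set
        argv += ["-J", proxyjump]
    argv += ["-o", "StrictHostKeyChecking=no", "-o", "BatchMode=yes",
             f"{user}@{host}", remote_cmd]
    return argv

# e.g. subprocess.run(build_ssh_argv("10.0.0.5", "root", "uname -a"),
#                     capture_output=True, text=True, timeout=30)
```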
## Scan sequence
Run these commands one by one. Analyze each result before proceeding:
1. `uname -a && cat /etc/os-release` — OS version and kernel
2. `docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}'` — running containers
3. `systemctl list-units --state=running --no-pager --plain --type=service 2>/dev/null | head -40` — running services
4. `ss -tlnp 2>/dev/null || netstat -tlnp 2>/dev/null` — open ports
5. `find /etc -maxdepth 3 -name "*.conf" -o -name "*.yaml" -o -name "*.yml" -o -name "*.env" 2>/dev/null | head -30` — config files
6. `docker compose ls 2>/dev/null || docker-compose ls 2>/dev/null` — docker-compose projects
7. If docker is present: `docker inspect $(docker ps -q) 2>/dev/null | python3 -c "import json,sys; [print(c['Name'], c.get('HostConfig',{}).get('Binds',[])) for c in json.load(sys.stdin)]" 2>/dev/null` — volume mounts
8. For each key config found — read with `ssh ... "cat /path/to/config"` (skip files with obvious secrets unless needed for the task)
9. `find /opt /home /root /srv -maxdepth 4 -name '.git' -type d 2>/dev/null | head -10` — find git repositories; for each: `git -C <path> remote -v && git -C <path> log --oneline -3 2>/dev/null` — remote origin and latest commits
10. `ls -la ~/.ssh/ 2>/dev/null && cat ~/.ssh/authorized_keys 2>/dev/null` — list installed SSH keys. Do not read private keys (id_rsa, id_ed25519 without .pub)
## Rules
- Run commands one by one — do NOT batch unrelated commands in one ssh call
- Analyze output before next step — skip irrelevant follow-up commands
- If a command fails (permission denied, not found) — note it and continue
- If the task is specific (e.g. "find nginx config") — focus on relevant commands only
- Never read files that clearly contain secrets (private keys, .env with passwords) unless the task explicitly requires it
- If SSH connection fails entirely — return status "blocked" with the error
## Output format
Return ONLY valid JSON (no markdown, no explanation):
```json
{
"status": "done",
"summary": "Brief description of what was found",
"os": "Ubuntu 22.04 LTS, kernel 5.15.0",
"services": [
{"name": "nginx", "type": "systemd", "status": "running", "note": "web proxy"},
{"name": "myapp", "type": "docker", "image": "myapp:1.2.3", "ports": ["80:8080"]}
],
"open_ports": [
{"port": 80, "proto": "tcp", "process": "nginx"},
{"port": 443, "proto": "tcp", "process": "nginx"},
{"port": 5432, "proto": "tcp", "process": "postgres"}
],
"key_configs": [
{"path": "/etc/nginx/nginx.conf", "note": "main nginx config"},
{"path": "/opt/myapp/docker-compose.yml", "note": "app stack"}
],
"versions": {
"docker": "24.0.5",
"nginx": "1.24.0",
"postgres": "15.3"
},
"decisions": [
{
"type": "gotcha",
"title": "Brief title of discovered fact",
"description": "Detailed description of the finding",
"tags": ["server", "relevant-tag"]
}
],
"modules": [
{
"name": "nginx",
"type": "service",
"path": "/etc/nginx",
"description": "Reverse proxy, serving ports 80/443",
"owner_role": "sysadmin"
}
],
"git_repos": [
{"path": "/opt/myapp", "remote": "git@github.com:org/myapp.git", "last_commits": ["abc1234 fix: hotfix", "def5678 feat: new endpoint"]}
],
"ssh_authorized_keys": [
"ssh-ed25519 AAAA... user@host",
"ssh-rsa AAAA... deploy-key"
],
"files_read": ["/etc/nginx/nginx.conf"],
"commands_run": ["uname -a", "docker ps"],
"notes": "Any important caveats, things to investigate further, or follow-up tasks needed"
}
```
Valid status values: `"done"`, `"partial"` (if some commands failed), `"blocked"` (if SSH connection failed entirely).
If blocked, include `"blocked_reason": "..."` field.
The `decisions` array: add entries for every significant discovery — running services, non-standard configs, open ports, version info, gotchas. These will be saved to the project's knowledge base.
The `modules` array: add one entry per distinct service or component found. These will be registered as project modules.


@ -0,0 +1,43 @@
You are a Task Decomposer Agent for a software project.
Your job: take an architect's implementation plan (provided as "Previous step output")
and break it down into concrete, actionable implementation tasks.
## Your output format (JSON only)
Return ONLY valid JSON — no markdown, no explanation:
```json
{
"tasks": [
{
"title": "Add user_sessions table to core/db.py",
"brief": "Create table with columns: id, user_id, token_hash, expires_at, created_at. Add migration in _migrate().",
"priority": 3,
"category": "DB",
"acceptance_criteria": "Table created in SQLite, migration idempotent, existing DB unaffected"
},
{
"title": "Implement POST /api/auth/login endpoint",
"brief": "Validate email/password, generate JWT, store session, return token. Use bcrypt for password verification.",
"priority": 3,
"category": "API",
"acceptance_criteria": "Returns 200 with token on valid credentials, 401 on invalid, 422 on missing fields"
}
]
}
```
## Valid categories
DB, API, UI, INFRA, SEC, BIZ, ARCH, TEST, PERF, DOCS, FIX, OBS
## Instructions
1. The **Previous step output** contains the architect's implementation plan
2. Create one task per discrete implementation unit (file, function group, endpoint)
3. Tasks should be independent and completable in a single agent session
4. Priority: 1 = critical, 3 = normal, 5 = low
5. Each task must have clear, testable acceptance criteria
6. Do NOT include tasks for writing documentation unless explicitly in the spec
7. Aim for 3-10 tasks — if you need more, group related items


@ -90,3 +90,13 @@ Valid values for `status`: `"done"`, `"partial"`, `"blocked"`.
- `"blocked"` — unable to proceed; include `"blocked_reason": "..."`. - `"blocked"` — unable to proceed; include `"blocked_reason": "..."`.
If status is "partial", include `"partial_reason": "..."` explaining what was skipped. If status is "partial", include `"partial_reason": "..."` explaining what was skipped.
## Blocked Protocol
If you cannot perform the task (no file access, ambiguous requirements, task outside your scope), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess or partially complete — return blocked immediately.


@ -7,6 +7,7 @@ Your job: write or update tests that verify the implementation is correct and re
You receive:
- PROJECT: id, name, path, tech stack
- TASK: id, title, brief describing what was implemented
- ACCEPTANCE CRITERIA: what the task output must satisfy (if provided — verify tests cover these criteria explicitly)
- PREVIOUS STEP OUTPUT: dev agent output describing what was changed (required)
## Your responsibilities
@ -38,6 +39,7 @@ For a specific test file: `python -m pytest tests/test_models.py -v`
- One test per behavior — don't combine multiple assertions in one test without clear reason.
- Test names must describe the scenario: `test_update_task_sets_updated_at`, not `test_task`.
- Do NOT test implementation internals — test observable behavior and return values.
- If `acceptance_criteria` is provided in the task, ensure your tests explicitly verify each criterion.
## Output format
@ -65,3 +67,13 @@ Valid values for `status`: `"passed"`, `"failed"`, `"blocked"`.
If status is "failed", populate `"failures"` with `[{"test": "...", "error": "..."}]`.
If status is "blocked", include `"blocked_reason": "..."`.
## Blocked Protocol
If you cannot perform the task (no file access, ambiguous requirements, task outside your scope), return this JSON **instead of** the normal output:
```json
{"status": "blocked", "reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```
Use current datetime for `blocked_at`. Do NOT guess or partially complete — return blocked immediately.


@ -0,0 +1,57 @@
You are a UX Designer for the Kin multi-agent orchestrator.
Your job: analyze UX patterns and design the user experience for a new project.
## Input
You receive:
- PROJECT: id, name, description (free-text idea from the director)
- PHASE: phase order in the research pipeline
- TASK BRIEF: {text: <project description>, phase: "ux_designer", workflow: "research"}
- PREVIOUS STEP OUTPUT: output from prior research phases (market research, etc.)
## Your responsibilities
1. Identify 2-3 user personas with goals, frustrations, and tech savviness
2. Map the primary user journey (5-8 steps: Awareness → Onboarding → Core Value → Retention)
3. Analyze UX patterns from competitors (from market research output if available)
4. Identify the 3 most critical UX risks
5. Propose key screens/flows as text wireframes (ASCII or numbered descriptions)
## Rules
- Focus on the most important user flows first — do not over-engineer
- Base competitor UX analysis on prior research phase output
- Wireframes must be text-based (no images), concise, actionable
- Highlight where the UX must differentiate from competitors
## Output format
Return ONLY valid JSON (no markdown, no explanation):
```json
{
"status": "done",
"personas": [
{
"name": "...",
"role": "...",
"goals": ["..."],
"frustrations": ["..."],
"tech_savviness": "medium"
}
],
"user_journey": [
{"step": 1, "name": "Awareness", "action": "...", "emotion": "..."}
],
"competitor_ux_analysis": "Summary of what competitors do well/poorly",
"ux_risks": ["..."],
"key_screens": [
{"name": "Onboarding", "wireframe": "Step 1: ... Step 2: ..."}
],
"open_questions": ["Questions that require director input"]
}
```
Valid values for `status`: `"done"`, `"blocked"`.
If blocked, include `"blocked_reason": "..."`.

File diff suppressed because it is too large.


@ -81,6 +81,16 @@ specialists:
    context_rules:
      decisions_category: security

  sysadmin:
    name: "Sysadmin"
    model: sonnet
    tools: [Bash, Read]
    description: "SSH-based server scanner: maps running services, open ports, configs, versions via remote commands"
    permissions: read_bash
    context_rules:
      decisions: all
      modules: all

  tech_researcher:
    name: "Tech Researcher"
    model: sonnet
@ -101,6 +111,46 @@ specialists:
      codebase_diff: "array of { file, line_hint, issue, suggestion }"
      notes: string

  constitution:
    name: "Constitution Agent"
    model: sonnet
    tools: [Read, Grep, Glob]
    description: "Defines project principles, constraints, and non-negotiables. First step in spec-driven workflow."
    permissions: read_only
    context_rules:
      decisions: all
    output_schema:
      principles: "array of strings"
      constraints: "array of strings"
      goals: "array of strings"

  spec:
    name: "Spec Agent"
    model: sonnet
    tools: [Read, Grep, Glob]
    description: "Creates detailed feature specification from constitution output. Second step in spec-driven workflow."
    permissions: read_only
    context_rules:
      decisions: all
    output_schema:
      overview: string
      features: "array of { name, description, acceptance_criteria }"
      data_model: "array of { entity, fields }"
      api_contracts: "array of { method, path, body, response }"
      acceptance_criteria: string

  task_decomposer:
    name: "Task Decomposer"
    model: sonnet
    tools: [Read, Grep, Glob]
    description: "Decomposes architect output into concrete implementation tasks. Creates child tasks in DB."
    permissions: read_only
    context_rules:
      decisions: all
      modules: all
    output_schema:
      tasks: "array of { title, brief, priority, category, acceptance_criteria }"
# Route templates — PM uses these to build pipelines
routes:
  debug:
@ -126,3 +176,15 @@ routes:
  api_research:
    steps: [tech_researcher, architect]
    description: "Study external API → integration plan"

  infra_scan:
    steps: [sysadmin, reviewer]
    description: "SSH scan server → map services/ports/configs → review findings"
  infra_debug:
    steps: [sysadmin, debugger, reviewer]
    description: "SSH diagnose → find root cause → verify fix plan"
  spec_driven:
    steps: [constitution, spec, architect, task_decomposer]
    description: "Constitution → spec → implementation plan → decompose into tasks"


@ -53,21 +53,6 @@ def _table(headers: list[str], rows: list[list[str]], min_width: int = 6):
    return "\n".join(lines)
def _auto_task_id(conn, project_id: str) -> str:
    """Generate next task ID like PROJ-001."""
    prefix = project_id.upper()
    existing = models.list_tasks(conn, project_id=project_id)
    max_num = 0
    for t in existing:
        tid = t["id"]
        if tid.startswith(prefix + "-"):
            try:
                num = int(tid.split("-", 1)[1])
                max_num = max(max_num, num)
            except ValueError:
                pass
    return f"{prefix}-{max_num + 1:03d}"
# ===========================================================================
# Root group
@ -111,6 +96,74 @@ def project_add(ctx, id, name, path, tech_stack, status, priority, language):
    click.echo(f"Created project: {p['id']} ({p['name']})")

@cli.command("new-project")
@click.argument("description")
@click.option("--id", "project_id", required=True, help="Project ID")
@click.option("--name", required=True, help="Project name")
@click.option("--path", required=True, help="Project path")
@click.option("--roles", default="business,market,tech", show_default=True,
              help="Comma-separated roles: business,market,legal,tech,ux,marketer")
@click.option("--tech-stack", default=None, help="Comma-separated tech stack")
@click.option("--priority", type=int, default=5, show_default=True)
@click.option("--language", default="ru", show_default=True)
@click.pass_context
def new_project(ctx, description, project_id, name, path, roles, tech_stack, priority, language):
    """Create a new project with a sequential research phase pipeline.

    DESCRIPTION — free-text project description for the agents.

    Role aliases: business=business_analyst, market=market_researcher,
    legal=legal_researcher, tech=tech_researcher, ux=ux_designer, marketer=marketer.
    Architect is added automatically as the last phase.
    """
    from core.phases import create_project_with_phases, validate_roles, ROLE_LABELS

    _ALIASES = {
        "business": "business_analyst",
        "market": "market_researcher",
        "legal": "legal_researcher",
        "tech": "tech_researcher",
        "ux": "ux_designer",
    }
    raw_roles = [r.strip().lower() for r in roles.split(",") if r.strip()]
    expanded = [_ALIASES.get(r, r) for r in raw_roles]
    clean_roles = validate_roles(expanded)
    if not clean_roles:
        click.echo("Error: no valid research roles specified.", err=True)
        raise SystemExit(1)
    ts = [s.strip() for s in tech_stack.split(",") if s.strip()] if tech_stack else None
    conn = ctx.obj["conn"]
    if models.get_project(conn, project_id):
        click.echo(f"Error: project '{project_id}' already exists.", err=True)
        raise SystemExit(1)
    try:
        result = create_project_with_phases(
            conn, project_id, name, path,
            description=description,
            selected_roles=clean_roles,
            tech_stack=ts,
            priority=priority,
            language=language,
        )
    except ValueError as e:
        click.echo(f"Error: {e}", err=True)
        raise SystemExit(1)
    click.echo(f"Created project: {result['project']['id']} ({result['project']['name']})")
    click.echo(f"Description: {description}")
    click.echo("")
    phases = result["phases"]
    rows = [
        [str(p["id"]), str(p["phase_order"] + 1), p["role"], p["status"], p.get("task_id") or ""]
        for p in phases
    ]
    click.echo(_table(["ID", "#", "Role", "Status", "Task"], rows))

@project.command("list")
@click.option("--status", default=None)
@click.pass_context
@ -178,18 +231,28 @@ def task():
@click.argument("title")
@click.option("--type", "route_type", type=click.Choice(["debug", "feature", "refactor", "hotfix"]), default=None)
@click.option("--priority", type=int, default=5)
@click.option("--category", "-c", default=None,
              help=f"Task category: {', '.join(models.TASK_CATEGORIES)}")
@click.pass_context
def task_add(ctx, project_id, title, route_type, priority, category):
    """Add a task to a project. ID is auto-generated (PROJ-001 or PROJ-CAT-001)."""
    conn = ctx.obj["conn"]
    p = models.get_project(conn, project_id)
    if not p:
        click.echo(f"Project '{project_id}' not found.", err=True)
        raise SystemExit(1)
    if category:
        category = category.upper()
        if category not in models.TASK_CATEGORIES:
            click.echo(
                f"Invalid category '{category}'. Must be one of: {', '.join(models.TASK_CATEGORIES)}",
                err=True,
            )
            raise SystemExit(1)
    task_id = models.next_task_id(conn, project_id, category=category)
    brief = {"route_type": route_type} if route_type else None
    t = models.create_task(conn, task_id, project_id, title,
                           priority=priority, brief=brief, category=category)
    click.echo(f"Created task: {t['id']} — {t['title']}")
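`models.next_task_id` replaces the old `_auto_task_id` helper, but its implementation is not included in this diff. A hypothetical sketch of the ID scheme it implies (PROJ-001 without a category, PROJ-CAT-001 with one), shown over a plain list of IDs rather than a DB connection:

```python
from typing import Iterable, Optional

def next_task_id(existing_ids: Iterable[str], project_id: str,
                 category: Optional[str] = None) -> str:
    """Hypothetical sketch of the scheme behind models.next_task_id
    (the real function queries the DB; its source is not in this diff)."""
    prefix = project_id.upper() + (f"-{category.upper()}" if category else "")
    max_num = 0
    for tid in existing_ids:
        if tid.startswith(prefix + "-"):
            tail = tid[len(prefix) + 1:]
            # skip IDs of other categories: for prefix KIN, the tail of
            # KIN-FIX-002 is "FIX-002", which is not a pure number
            if tail.isdigit():
                max_num = max(max_num, int(tail))
    return f"{prefix}-{max_num + 1:03d}"
```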
@ -586,6 +649,30 @@ def run_task(ctx, task_id, dry_run, allow_write):
    pipeline_steps = output["pipeline"]
    analysis = output.get("analysis", "")

    # Save completion_mode from PM output to task (only if not already set by user)
    task_current = models.get_task(conn, task_id)
    update_fields = {}
    if not task_current.get("execution_mode"):
        pm_completion_mode = models.validate_completion_mode(
            output.get("completion_mode", "review")
        )
        update_fields["execution_mode"] = pm_completion_mode
        import logging
        logging.getLogger("kin").info(
            "PM set completion_mode=%s for task %s", pm_completion_mode, task_id
        )

    # Save category from PM output (only if task has no category yet)
    if not task_current.get("category"):
        pm_category = output.get("category")
        if pm_category and isinstance(pm_category, str):
            pm_category = pm_category.upper()
            if pm_category in models.TASK_CATEGORIES:
                update_fields["category"] = pm_category

    if update_fields:
        models.update_task(conn, task_id, **update_fields)

    click.echo(f"\nAnalysis: {analysis}")
    click.echo(f"Pipeline ({len(pipeline_steps)} steps):")
    for i, step in enumerate(pipeline_steps, 1):

core/chat_intent.py (new file, +48)

@ -0,0 +1,48 @@
"""Kin — chat intent classifier (heuristic, no LLM).

classify_intent(text) → 'task_request' | 'status_query' | 'question'
"""
import re
from typing import Literal

_STATUS_PATTERNS = [
    r'что сейчас',
    r'в работе',
    r'\bстатус\b',
    r'список задач',
    r'покажи задачи',
    r'покажи список',
    r'какие задачи',
    r'что идёт',
    r'что делается',
    r'что висит',
]

_QUESTION_STARTS = (
    'почему', 'зачем', 'как ', 'что такое', 'что значит',
    'объясни', 'расскажи', 'что делает', 'как работает',
    'в чём', 'когда', 'кто',
)


def classify_intent(text: str) -> Literal['task_request', 'status_query', 'question']:
    """Classify user message intent.

    Returns:
        'status_query' → user is asking about current project status/tasks
        'question' → user is asking a question (no action implied)
        'task_request' → everything else; default: create a task and run pipeline
    """
    lower = text.lower().strip()
    for pattern in _STATUS_PATTERNS:
        if re.search(pattern, lower):
            return 'status_query'
    if lower.endswith('?'):
        for word in _QUESTION_STARTS:
            if lower.startswith(word):
                return 'question'
    return 'task_request'


@ -41,6 +41,22 @@ def build_context(
        "role": role,
    }

    # Attachments — all roles get them so debugger sees screenshots, UX sees mockups, etc.
    # Initialize before conditional to guarantee key presence in ctx (#213)
    attachments = models.list_attachments(conn, task_id)
    ctx["attachments"] = attachments

    # If task has a revise comment, fetch the last agent output for context
    if task and task.get("revise_comment"):
        row = conn.execute(
            """SELECT output_summary FROM agent_logs
               WHERE task_id = ? AND success = 1
               ORDER BY created_at DESC LIMIT 1""",
            (task_id,),
        ).fetchone()
        if row and row["output_summary"]:
            ctx["last_agent_output"] = row["output_summary"]

    if role == "pm":
        ctx["modules"] = models.get_modules(conn, project_id)
        ctx["decisions"] = models.get_decisions(conn, project_id)
@ -73,10 +89,23 @@ def build_context(
            conn, project_id, types=["convention"],
        )

    elif role == "sysadmin":
        ctx["decisions"] = models.get_decisions(conn, project_id)
        ctx["modules"] = models.get_modules(conn, project_id)

    elif role == "tester":
        # Minimal context — just the task spec
        pass

    elif role in ("constitution", "spec"):
        ctx["modules"] = models.get_modules(conn, project_id)
        ctx["decisions"] = models.get_decisions(conn, project_id)

    elif role == "task_decomposer":
        ctx["modules"] = models.get_modules(conn, project_id)
        ctx["decisions"] = models.get_decisions(conn, project_id)
        ctx["active_tasks"] = models.list_tasks(conn, project_id=project_id, status="in_progress")

    elif role == "security":
        ctx["decisions"] = models.get_decisions(
            conn, project_id, category="security",
@@ -91,7 +120,7 @@ def build_context(
def _slim_task(task: dict) -> dict:
"""Extract only relevant fields from a task for the prompt."""
result = {
"id": task["id"],
"title": task["title"],
"status": task["status"],
@@ -100,17 +129,31 @@ def _slim_task(task: dict) -> dict:
"brief": task.get("brief"), "brief": task.get("brief"),
"spec": task.get("spec"), "spec": task.get("spec"),
} }
if task.get("revise_comment"):
result["revise_comment"] = task["revise_comment"]
if task.get("acceptance_criteria"):
result["acceptance_criteria"] = task["acceptance_criteria"]
return result
def _slim_project(project: dict) -> dict:
"""Extract only relevant fields from a project."""
result = {
"id": project["id"],
"name": project["name"],
"path": project["path"],
"tech_stack": project.get("tech_stack"),
"language": project.get("language", "ru"),
"execution_mode": project.get("execution_mode"),
"project_type": project.get("project_type", "development"),
}
# Include SSH fields for operations projects
if project.get("project_type") == "operations":
result["ssh_host"] = project.get("ssh_host")
result["ssh_user"] = project.get("ssh_user")
result["ssh_key_path"] = project.get("ssh_key_path")
result["ssh_proxy_jump"] = project.get("ssh_proxy_jump")
return result
def _extract_module_hint(task: dict | None) -> str | None:
@@ -144,6 +187,25 @@ def format_prompt(context: dict, role: str, prompt_template: str | None = None)
if proj.get("tech_stack"): if proj.get("tech_stack"):
sections.append(f"Tech stack: {', '.join(proj['tech_stack'])}") sections.append(f"Tech stack: {', '.join(proj['tech_stack'])}")
sections.append(f"Path: {proj['path']}") sections.append(f"Path: {proj['path']}")
project_type = proj.get("project_type", "development")
sections.append(f"Project type: {project_type}")
sections.append("")
# SSH connection info for operations projects
if proj and proj.get("project_type") == "operations":
ssh_host = proj.get("ssh_host") or ""
ssh_user = proj.get("ssh_user") or ""
ssh_key = proj.get("ssh_key_path") or ""
ssh_proxy = proj.get("ssh_proxy_jump") or ""
sections.append("## SSH Connection")
if ssh_host:
sections.append(f"Host: {ssh_host}")
if ssh_user:
sections.append(f"User: {ssh_user}")
if ssh_key:
sections.append(f"Key: {ssh_key}")
if ssh_proxy:
sections.append(f"ProxyJump: {ssh_proxy}")
sections.append("") sections.append("")
# Task info # Task info
@@ -157,6 +219,12 @@ def format_prompt(context: dict, role: str, prompt_template: str | None = None)
sections.append(f"Spec: {json.dumps(task['spec'], ensure_ascii=False)}")
sections.append("")
# Acceptance criteria — shown as a dedicated section so agents use it for completeness check
if task and task.get("acceptance_criteria"):
sections.append("## Acceptance Criteria")
sections.append(task["acceptance_criteria"])
sections.append("")
# Decisions
decisions = context.get("decisions")
if decisions:
@@ -203,6 +271,41 @@ def format_prompt(context: dict, role: str, prompt_template: str | None = None)
sections.append(f"## Target module: {hint}") sections.append(f"## Target module: {hint}")
sections.append("") sections.append("")
# Revision context: director's comment + agent's previous output
task = context.get("task")
if task and task.get("revise_comment"):
sections.append("## Director's revision request:")
sections.append(task["revise_comment"])
sections.append("")
last_output = context.get("last_agent_output")
if last_output:
sections.append("## Your previous output (before revision):")
sections.append(last_output)
sections.append("")
# Attachments
attachments = context.get("attachments")
if attachments:
sections.append(f"## Attachments ({len(attachments)}):")
for a in attachments:
mime = a.get("mime_type", "")
size = a.get("size", 0)
sections.append(f"- {a['filename']} ({mime}, {size} bytes): {a['path']}")
# Inline content for small text-readable files (<= 32 KB) so PM can use them immediately
_TEXT_TYPES = {"text/", "application/json", "application/xml", "application/yaml"}
_TEXT_EXTS = {".txt", ".md", ".json", ".yaml", ".yml", ".csv", ".log", ".xml", ".toml", ".ini", ".env"}
is_text = (
any(mime.startswith(t) if t.endswith("/") else mime == t for t in _TEXT_TYPES)
or Path(a["filename"]).suffix.lower() in _TEXT_EXTS
)
if is_text and 0 < size <= 32 * 1024:
try:
content = Path(a["path"]).read_text(encoding="utf-8", errors="replace")
sections.append(f"```\n{content}\n```")
except Exception:
pass
sections.append("")
# Previous step output (pipeline chaining)
prev = context.get("previous_output")
if prev:
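The text-attachment heuristic used in the attachments block above (mime-prefix or extension match, 32 KB size cap) can be pulled out for a standalone check; `is_text_attachment` is a hypothetical helper name, not a function in the diff:

```python
from pathlib import Path

_TEXT_TYPES = {"text/", "application/json", "application/xml", "application/yaml"}
_TEXT_EXTS = {".txt", ".md", ".json", ".yaml", ".yml", ".csv", ".log", ".xml", ".toml", ".ini", ".env"}

def is_text_attachment(filename: str, mime: str, size: int) -> bool:
    # Prefix entries (ending in "/") match any subtype; exact entries must match fully
    by_mime = any(mime.startswith(t) if t.endswith("/") else mime == t for t in _TEXT_TYPES)
    by_ext = Path(filename).suffix.lower() in _TEXT_EXTS
    return (by_mime or by_ext) and 0 < size <= 32 * 1024

print(is_text_attachment("notes.md", "text/markdown", 512))    # True
print(is_text_attachment("shot.png", "image/png", 2048))       # False
print(is_text_attachment("big.log", "text/plain", 64 * 1024))  # False (over the 32 KB cap)
```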

View file

@@ -13,7 +13,7 @@ SCHEMA = """
CREATE TABLE IF NOT EXISTS projects (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
path TEXT CHECK (path IS NOT NULL OR project_type = 'operations'),
tech_stack JSON,
status TEXT DEFAULT 'active',
priority INTEGER DEFAULT 5,
@@ -22,6 +22,17 @@ CREATE TABLE IF NOT EXISTS projects (
forgejo_repo TEXT,
language TEXT DEFAULT 'ru',
execution_mode TEXT NOT NULL DEFAULT 'review',
deploy_command TEXT,
project_type TEXT DEFAULT 'development',
ssh_host TEXT,
ssh_user TEXT,
ssh_key_path TEXT,
ssh_proxy_jump TEXT,
description TEXT,
autocommit_enabled INTEGER DEFAULT 0,
obsidian_vault_path TEXT,
worktrees_enabled INTEGER DEFAULT 0,
auto_test_enabled INTEGER DEFAULT 0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
@@ -42,6 +53,17 @@ CREATE TABLE IF NOT EXISTS tasks (
forgejo_issue_id INTEGER,
execution_mode TEXT,
blocked_reason TEXT,
blocked_at DATETIME,
blocked_agent_role TEXT,
blocked_pipeline_step TEXT,
dangerously_skipped BOOLEAN DEFAULT 0,
revise_comment TEXT,
revise_count INTEGER DEFAULT 0,
revise_target_role TEXT DEFAULT NULL,
labels JSON,
category TEXT DEFAULT NULL,
telegram_sent BOOLEAN DEFAULT 0,
acceptance_criteria TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
@@ -91,6 +113,22 @@ CREATE TABLE IF NOT EXISTS modules (
UNIQUE(project_id, name)
);
-- Research phases for a new project (research workflow KIN-059)
CREATE TABLE IF NOT EXISTS project_phases (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
role TEXT NOT NULL,
phase_order INTEGER NOT NULL,
status TEXT DEFAULT 'pending',
task_id TEXT REFERENCES tasks(id),
revise_count INTEGER DEFAULT 0,
revise_comment TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_phases_project ON project_phases(project_id, phase_order);
-- Pipelines (run history)
CREATE TABLE IF NOT EXISTS pipelines (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -135,6 +173,20 @@ CREATE TABLE IF NOT EXISTS hook_logs (
created_at TEXT DEFAULT (datetime('now'))
);
-- Audit log for dangerous operations (dangerously-skip-permissions)
CREATE TABLE IF NOT EXISTS audit_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
task_id TEXT REFERENCES tasks(id),
step_id TEXT,
event_type TEXT NOT NULL DEFAULT 'dangerous_skip',
reason TEXT,
project_id TEXT REFERENCES projects(id)
);
CREATE INDEX IF NOT EXISTS idx_audit_log_task ON audit_log(task_id);
CREATE INDEX IF NOT EXISTS idx_audit_log_event ON audit_log(event_type, timestamp);
-- Cross-project dependencies
CREATE TABLE IF NOT EXISTS project_links (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -177,6 +229,24 @@ CREATE TABLE IF NOT EXISTS support_bot_config (
escalation_keywords JSON
);
-- Project deployment environments (prod/dev servers)
CREATE TABLE IF NOT EXISTS project_environments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
name TEXT NOT NULL,
host TEXT NOT NULL,
port INTEGER DEFAULT 22,
username TEXT NOT NULL,
auth_type TEXT NOT NULL DEFAULT 'password',
auth_value TEXT,
is_installed INTEGER NOT NULL DEFAULT 0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
UNIQUE(project_id, name)
);
CREATE INDEX IF NOT EXISTS idx_environments_project ON project_environments(project_id);
-- Indexes
CREATE INDEX IF NOT EXISTS idx_tasks_project_status ON tasks(project_id, status);
CREATE INDEX IF NOT EXISTS idx_decisions_project ON decisions(project_id);
@@ -185,6 +255,32 @@ CREATE INDEX IF NOT EXISTS idx_agent_logs_project ON agent_logs(project_id, crea
CREATE INDEX IF NOT EXISTS idx_agent_logs_cost ON agent_logs(project_id, cost_usd);
CREATE INDEX IF NOT EXISTS idx_tickets_project ON support_tickets(project_id, status);
CREATE INDEX IF NOT EXISTS idx_tickets_client ON support_tickets(client_id);
-- Chat messages (KIN-OBS-012)
CREATE TABLE IF NOT EXISTS chat_messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
role TEXT NOT NULL,
content TEXT NOT NULL,
message_type TEXT DEFAULT 'text',
task_id TEXT REFERENCES tasks(id),
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_chat_messages_project ON chat_messages(project_id, created_at);
-- Task attachments (KIN-090)
CREATE TABLE IF NOT EXISTS task_attachments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_id TEXT NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
filename TEXT NOT NULL,
path TEXT NOT NULL,
mime_type TEXT NOT NULL,
size INTEGER NOT NULL,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_task_attachments_task ON task_attachments(task_id);
""" """
@@ -216,12 +312,356 @@ def _migrate(conn: sqlite3.Connection):
conn.execute("ALTER TABLE tasks ADD COLUMN blocked_reason TEXT") conn.execute("ALTER TABLE tasks ADD COLUMN blocked_reason TEXT")
conn.commit() conn.commit()
if "autocommit_enabled" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN autocommit_enabled INTEGER DEFAULT 0")
conn.commit()
if "dangerously_skipped" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN dangerously_skipped BOOLEAN DEFAULT 0")
conn.commit()
if "revise_comment" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN revise_comment TEXT")
conn.commit()
if "category" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN category TEXT DEFAULT NULL")
conn.commit()
if "blocked_at" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN blocked_at DATETIME")
conn.commit()
if "blocked_agent_role" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN blocked_agent_role TEXT")
conn.commit()
if "blocked_pipeline_step" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN blocked_pipeline_step TEXT")
conn.commit()
if "telegram_sent" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN telegram_sent BOOLEAN DEFAULT 0")
conn.commit()
if "acceptance_criteria" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN acceptance_criteria TEXT")
conn.commit()
if "revise_count" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN revise_count INTEGER DEFAULT 0")
conn.commit()
if "labels" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN labels JSON DEFAULT NULL")
conn.commit()
if "revise_target_role" not in task_cols:
conn.execute("ALTER TABLE tasks ADD COLUMN revise_target_role TEXT DEFAULT NULL")
conn.commit()
if "obsidian_vault_path" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN obsidian_vault_path TEXT")
conn.commit()
if "worktrees_enabled" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN worktrees_enabled INTEGER DEFAULT 0")
conn.commit()
if "auto_test_enabled" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN auto_test_enabled INTEGER DEFAULT 0")
conn.commit()
if "deploy_command" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN deploy_command TEXT")
conn.commit()
if "project_type" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN project_type TEXT DEFAULT 'development'")
conn.commit()
if "ssh_host" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN ssh_host TEXT")
conn.commit()
if "ssh_user" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN ssh_user TEXT")
conn.commit()
if "ssh_key_path" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN ssh_key_path TEXT")
conn.commit()
if "ssh_proxy_jump" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN ssh_proxy_jump TEXT")
conn.commit()
if "description" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN description TEXT")
conn.commit()
# Migrate audit_log + project_phases tables
existing_tables = {r[0] for r in conn.execute(
"SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()}
if "project_environments" not in existing_tables:
conn.executescript("""
CREATE TABLE IF NOT EXISTS project_environments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
name TEXT NOT NULL,
host TEXT NOT NULL,
port INTEGER DEFAULT 22,
username TEXT NOT NULL,
auth_type TEXT NOT NULL DEFAULT 'password',
auth_value TEXT,
is_installed INTEGER NOT NULL DEFAULT 0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
UNIQUE(project_id, name)
);
CREATE INDEX IF NOT EXISTS idx_environments_project ON project_environments(project_id);
""")
conn.commit()
# Migrate project_environments: old schema used label/login/credential,
# new schema uses name/username/auth_value (KIN-087 column rename).
env_cols = {r[1] for r in conn.execute("PRAGMA table_info(project_environments)").fetchall()}
if "name" not in env_cols and "label" in env_cols:
conn.executescript("""
PRAGMA foreign_keys=OFF;
CREATE TABLE project_environments_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
name TEXT NOT NULL,
host TEXT NOT NULL,
port INTEGER DEFAULT 22,
username TEXT NOT NULL,
auth_type TEXT NOT NULL DEFAULT 'password',
auth_value TEXT,
is_installed INTEGER NOT NULL DEFAULT 0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
UNIQUE(project_id, name)
);
INSERT INTO project_environments_new
SELECT id, project_id, label, host, port, login, auth_type,
credential, is_installed, created_at, updated_at
FROM project_environments;
DROP TABLE project_environments;
ALTER TABLE project_environments_new RENAME TO project_environments;
CREATE INDEX IF NOT EXISTS idx_environments_project ON project_environments(project_id);
PRAGMA foreign_keys=ON;
""")
conn.commit()
if "project_phases" not in existing_tables:
conn.executescript("""
CREATE TABLE IF NOT EXISTS project_phases (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
role TEXT NOT NULL,
phase_order INTEGER NOT NULL,
status TEXT DEFAULT 'pending',
task_id TEXT REFERENCES tasks(id),
revise_count INTEGER DEFAULT 0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_phases_project ON project_phases(project_id, phase_order);
""")
conn.commit()
# Migrate project_phases columns (table may already exist without revise_comment)
phase_cols = {r[1] for r in conn.execute("PRAGMA table_info(project_phases)").fetchall()}
if "revise_comment" not in phase_cols:
conn.execute("ALTER TABLE project_phases ADD COLUMN revise_comment TEXT")
conn.commit()
if "audit_log" not in existing_tables:
conn.executescript("""
CREATE TABLE IF NOT EXISTS audit_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
task_id TEXT REFERENCES tasks(id),
step_id TEXT,
event_type TEXT NOT NULL DEFAULT 'dangerous_skip',
reason TEXT,
project_id TEXT REFERENCES projects(id)
);
CREATE INDEX IF NOT EXISTS idx_audit_log_task ON audit_log(task_id);
CREATE INDEX IF NOT EXISTS idx_audit_log_event ON audit_log(event_type, timestamp);
""")
conn.commit()
# Migrate columns that must exist before table recreation (KIN-UI-002)
# These columns are referenced in the INSERT SELECT below but were not added
# by any prior ALTER TABLE in this chain — causing OperationalError on minimal schemas.
if "tech_stack" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN tech_stack JSON DEFAULT NULL")
conn.commit()
if "priority" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN priority INTEGER DEFAULT 5")
conn.commit()
if "pm_prompt" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN pm_prompt TEXT DEFAULT NULL")
conn.commit()
if "claude_md_path" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN claude_md_path TEXT DEFAULT NULL")
conn.commit()
if "forgejo_repo" not in proj_cols:
conn.execute("ALTER TABLE projects ADD COLUMN forgejo_repo TEXT DEFAULT NULL")
conn.commit()
if "created_at" not in proj_cols:
# SQLite ALTER TABLE does not allow non-constant defaults like CURRENT_TIMESTAMP
conn.execute("ALTER TABLE projects ADD COLUMN created_at DATETIME DEFAULT NULL")
conn.commit()
# Migrate projects.path from NOT NULL to nullable (KIN-ARCH-003)
# SQLite doesn't support ALTER COLUMN, so we recreate the table.
path_col_rows = conn.execute("PRAGMA table_info(projects)").fetchall()
path_col = next((r for r in path_col_rows if r[1] == "path"), None)
if path_col and path_col[3] == 1: # notnull == 1, migration needed
conn.executescript("""
PRAGMA foreign_keys=OFF;
CREATE TABLE projects_new (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
path TEXT CHECK (path IS NOT NULL OR project_type = 'operations'),
tech_stack JSON,
status TEXT DEFAULT 'active',
priority INTEGER DEFAULT 5,
pm_prompt TEXT,
claude_md_path TEXT,
forgejo_repo TEXT,
language TEXT DEFAULT 'ru',
execution_mode TEXT NOT NULL DEFAULT 'review',
deploy_command TEXT,
project_type TEXT DEFAULT 'development',
ssh_host TEXT,
ssh_user TEXT,
ssh_key_path TEXT,
ssh_proxy_jump TEXT,
description TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
autocommit_enabled INTEGER DEFAULT 0,
obsidian_vault_path TEXT
);
INSERT INTO projects_new
SELECT id, name, path, tech_stack, status, priority,
pm_prompt, claude_md_path, forgejo_repo, language,
execution_mode, deploy_command, project_type,
ssh_host, ssh_user, ssh_key_path, ssh_proxy_jump,
description, created_at, autocommit_enabled, obsidian_vault_path
FROM projects;
DROP TABLE projects;
ALTER TABLE projects_new RENAME TO projects;
PRAGMA foreign_keys=ON;
""")
if "chat_messages" not in existing_tables:
conn.executescript("""
CREATE TABLE IF NOT EXISTS chat_messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
role TEXT NOT NULL,
content TEXT NOT NULL,
message_type TEXT DEFAULT 'text',
task_id TEXT REFERENCES tasks(id),
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_chat_messages_project ON chat_messages(project_id, created_at);
""")
conn.commit()
if "task_attachments" not in existing_tables:
conn.executescript("""
CREATE TABLE IF NOT EXISTS task_attachments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_id TEXT NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
filename TEXT NOT NULL,
path TEXT NOT NULL,
mime_type TEXT NOT NULL,
size INTEGER NOT NULL,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_task_attachments_task ON task_attachments(task_id);
""")
conn.commit()
# Rename legacy 'auto' → 'auto_complete' (KIN-063)
conn.execute(
"UPDATE projects SET execution_mode = 'auto_complete' WHERE execution_mode = 'auto'"
)
conn.execute(
"UPDATE tasks SET execution_mode = 'auto_complete' WHERE execution_mode = 'auto'"
)
conn.commit()
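The column migrations above all follow one idempotent idiom: read the existing columns via `PRAGMA table_info`, then `ALTER TABLE` only when the column is missing. A minimal sketch against an in-memory database; `add_column_if_missing` is a hypothetical name for the pattern, not a function in the diff:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (id TEXT PRIMARY KEY, name TEXT NOT NULL)")

def add_column_if_missing(conn: sqlite3.Connection, table: str, column: str, decl: str) -> None:
    # PRAGMA table_info returns one row per column; index 1 is the column name
    cols = {r[1] for r in conn.execute(f"PRAGMA table_info({table})").fetchall()}
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")
        conn.commit()

add_column_if_missing(conn, "projects", "ssh_host", "TEXT")
add_column_if_missing(conn, "projects", "ssh_host", "TEXT")  # second call is a no-op
cols = [r[1] for r in conn.execute("PRAGMA table_info(projects)").fetchall()]
print(cols)  # ['id', 'name', 'ssh_host']
```

SQLite's `ALTER TABLE` cannot change constraints on an existing column, which is why the `projects.path` NOT NULL migration later in this file recreates the table instead.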
def _seed_default_hooks(conn: sqlite3.Connection):
"""Seed default hooks for the kin project (idempotent).
Creates rebuild-frontend hook only when:
- project 'kin' exists in the projects table
- the hook doesn't already exist (no duplicate)
Also updates existing hooks to the correct command/config if outdated.
"""
kin_row = conn.execute(
"SELECT path FROM projects WHERE id = 'kin'"
).fetchone()
if not kin_row or not kin_row["path"]:
return
_PROJECT_PATH = kin_row["path"].rstrip("/")
_REBUILD_SCRIPT = f"{_PROJECT_PATH}/scripts/rebuild-frontend.sh"
_REBUILD_TRIGGER = "web/frontend/*"
_REBUILD_WORKDIR = _PROJECT_PATH
exists = conn.execute(
"SELECT 1 FROM hooks"
" WHERE project_id = 'kin'"
" AND name = 'rebuild-frontend'"
" AND event = 'pipeline_completed'"
).fetchone()
if not exists:
conn.execute(
"""INSERT INTO hooks
(project_id, name, event, trigger_module_path, command,
working_dir, timeout_seconds, enabled)
VALUES ('kin', 'rebuild-frontend', 'pipeline_completed',
?, ?, ?, 300, 1)""",
(_REBUILD_TRIGGER, _REBUILD_SCRIPT, _REBUILD_WORKDIR),
)
else:
# Migrate existing hook: set trigger_module_path, correct command, working_dir
conn.execute(
"""UPDATE hooks
SET trigger_module_path = ?,
command = ?,
working_dir = ?,
timeout_seconds = 300
WHERE project_id = 'kin' AND name = 'rebuild-frontend'""",
(_REBUILD_TRIGGER, _REBUILD_SCRIPT, _REBUILD_WORKDIR),
)
conn.commit()
# Enable autocommit for kin project (opt-in, idempotent)
conn.execute(
"UPDATE projects SET autocommit_enabled=1 WHERE id='kin' AND autocommit_enabled=0"
)
conn.commit()
def init_db(db_path: Path = DB_PATH) -> sqlite3.Connection:
conn = get_connection(db_path)
conn.executescript(SCHEMA)
conn.commit()
_migrate(conn)
_seed_default_hooks(conn)
return conn

View file

@@ -24,6 +24,15 @@ PERMISSION_PATTERNS = [
]
def _next_task_id(
conn: sqlite3.Connection,
project_id: str,
category: str | None = None,
) -> str:
"""Thin wrapper around models.next_task_id for testability."""
return models.next_task_id(conn, project_id, category=category)
def _is_permission_blocked(item: dict) -> bool:
"""Check if a follow-up item describes a permission/write failure."""
text = f"{item.get('title', '')} {item.get('brief', '')}".lower()
@@ -48,21 +57,6 @@ def _collect_pipeline_output(conn: sqlite3.Connection, task_id: str) -> str:
return "\n".join(parts) return "\n".join(parts)
def _next_task_id(conn: sqlite3.Connection, project_id: str) -> str:
"""Generate the next sequential task ID for a project."""
prefix = project_id.upper()
existing = models.list_tasks(conn, project_id=project_id)
max_num = 0
for t in existing:
tid = t["id"]
if tid.startswith(prefix + "-"):
try:
num = int(tid.split("-", 1)[1])
max_num = max(max_num, num)
except ValueError:
pass
return f"{prefix}-{max_num + 1:03d}"
def generate_followups(
conn: sqlite3.Connection,
@@ -154,7 +148,7 @@ def generate_followups(
"options": ["rerun", "manual_task", "skip"], "options": ["rerun", "manual_task", "skip"],
}) })
else: else:
new_id = _next_task_id(conn, project_id) new_id = _next_task_id(conn, project_id, category=task.get("category"))
brief_dict = {"source": f"followup:{task_id}"} brief_dict = {"source": f"followup:{task_id}"}
if item.get("type"): if item.get("type"):
brief_dict["route_type"] = item["type"] brief_dict["route_type"] = item["type"]
@@ -167,6 +161,7 @@ def generate_followups(
priority=item.get("priority", 5), priority=item.get("priority", 5),
parent_task_id=task_id, parent_task_id=task_id,
brief=brief_dict, brief=brief_dict,
category=task.get("category"),
)
created.append(t)
@@ -206,8 +201,8 @@ def resolve_pending_action(
return None
if choice == "manual_task":
new_id = _next_task_id(conn, project_id, category=task.get("category"))
brief_dict = {"source": f"followup:{task_id}", "task_type": "manual_escalation"}
if item.get("type"):
brief_dict["route_type"] = item["type"]
if item.get("brief"):
@@ -218,6 +213,7 @@ def resolve_pending_action(
priority=item.get("priority", 5), priority=item.get("priority", 5),
parent_task_id=task_id, parent_task_id=task_id,
brief=brief_dict, brief=brief_dict,
category=task.get("category"),
)
if choice == "rerun":

View file

@@ -115,9 +115,14 @@ def run_hooks(
task_id: str | None,
event: str,
task_modules: list[dict],
changed_files: list[str] | None = None,
) -> list[HookResult]:
"""Run matching hooks for the given event and module list.
If changed_files is provided, trigger_module_path is matched against
the actual git-changed file paths (more precise than task_modules).
Falls back to task_modules matching when changed_files is None.
Never raises; hook failures are logged but don't affect the pipeline.
"""
hooks = get_hooks(conn, project_id, event=event)
@@ -125,10 +130,13 @@ def run_hooks(
for hook in hooks:
if hook["trigger_module_path"] is not None:
pattern = hook["trigger_module_path"]
if changed_files is not None:
matched = any(fnmatch.fnmatch(f, pattern) for f in changed_files)
else:
matched = any(
fnmatch.fnmatch(m.get("path", ""), pattern)
for m in task_modules
)
if not matched:
continue

View file

@@ -3,7 +3,9 @@ Kin — data access functions for all tables.
Pure functions: (conn, params) -> dict | list[dict]. No ORM, no classes.
"""
import base64
import json
import os
import sqlite3
from datetime import datetime
from typing import Any
@@ -14,6 +16,20 @@ VALID_TASK_STATUSES = [
"blocked", "decomposed", "cancelled", "blocked", "decomposed", "cancelled",
] ]
VALID_COMPLETION_MODES = {"auto_complete", "review"}
TASK_CATEGORIES = [
"SEC", "UI", "API", "INFRA", "BIZ", "DB",
"ARCH", "TEST", "PERF", "DOCS", "FIX", "OBS",
]
def validate_completion_mode(value: str) -> str:
"""Validate completion mode from LLM output. Falls back to 'review' if invalid."""
if value in VALID_COMPLETION_MODES:
return value
return "review"
def _row_to_dict(row: sqlite3.Row | None) -> dict | None:
"""Convert sqlite3.Row to dict with JSON fields decoded."""
@@ -49,7 +65,7 @@ def create_project(
conn: sqlite3.Connection,
id: str,
name: str,
path: str | None = None,
tech_stack: list | None = None,
status: str = "active",
priority: int = 5,
@@ -58,14 +74,22 @@ def create_project(
forgejo_repo: str | None = None,
language: str = "ru",
execution_mode: str = "review",
project_type: str = "development",
ssh_host: str | None = None,
ssh_user: str | None = None,
ssh_key_path: str | None = None,
ssh_proxy_jump: str | None = None,
description: str | None = None,
) -> dict:
"""Create a new project and return it as dict."""
conn.execute(
"""INSERT INTO projects (id, name, path, tech_stack, status, priority,
pm_prompt, claude_md_path, forgejo_repo, language, execution_mode,
project_type, ssh_host, ssh_user, ssh_key_path, ssh_proxy_jump, description)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(id, name, path, _json_encode(tech_stack), status, priority,
pm_prompt, claude_md_path, forgejo_repo, language, execution_mode,
project_type, ssh_host, ssh_user, ssh_key_path, ssh_proxy_jump, description),
)
conn.commit()
return get_project(conn, id)
@@ -77,6 +101,16 @@ def get_project(conn: sqlite3.Connection, id: str) -> dict | None:
return _row_to_dict(row)
def delete_project(conn: sqlite3.Connection, id: str) -> None:
"""Delete a project and all its related data (modules, decisions, tasks, phases)."""
# Delete tables that have FK references to tasks BEFORE deleting tasks
# project_environments must come before tasks (FK on project_id)
for table in ("modules", "agent_logs", "decisions", "pipelines", "project_phases", "project_environments", "chat_messages", "tasks"):
conn.execute(f"DELETE FROM {table} WHERE project_id = ?", (id,))
conn.execute("DELETE FROM projects WHERE id = ?", (id,))
conn.commit()
def get_effective_mode(conn: sqlite3.Connection, project_id: str, task_id: str) -> str:
"""Return effective execution mode: 'auto' or 'review'.
@@ -123,6 +157,44 @@ def update_project(conn: sqlite3.Connection, id: str, **fields) -> dict:
# Tasks
# ---------------------------------------------------------------------------
def next_task_id(
conn: sqlite3.Connection,
project_id: str,
category: str | None = None,
) -> str:
"""Generate next task ID.
Without category: PROJ-001 (backward-compatible old format)
With category: PROJ-CAT-001 (new format, per-category counter)
"""
prefix = project_id.upper()
existing = list_tasks(conn, project_id=project_id)
if category:
cat_prefix = f"{prefix}-{category}-"
max_num = 0
for t in existing:
tid = t["id"]
if tid.startswith(cat_prefix):
try:
max_num = max(max_num, int(tid[len(cat_prefix):]))
except ValueError:
pass
return f"{prefix}-{category}-{max_num + 1:03d}"
else:
# Old format: global max across project (integers only, skip CAT-NNN)
max_num = 0
for t in existing:
tid = t["id"]
if tid.startswith(prefix + "-"):
suffix = tid[len(prefix) + 1:]
try:
max_num = max(max_num, int(suffix))
except ValueError:
pass
return f"{prefix}-{max_num + 1:03d}"
def create_task(
conn: sqlite3.Connection,
id: str,

@@ -136,16 +208,20 @@ def create_task(
spec: dict | None = None,
forgejo_issue_id: int | None = None,
execution_mode: str | None = None,
category: str | None = None,
acceptance_criteria: str | None = None,
labels: list | None = None,
) -> dict:
"""Create a task linked to a project."""
conn.execute(
"""INSERT INTO tasks (id, project_id, title, status, priority,
assigned_role, parent_task_id, brief, spec, forgejo_issue_id,
execution_mode, category, acceptance_criteria, labels)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(id, project_id, title, status, priority, assigned_role,
parent_task_id, _json_encode(brief), _json_encode(spec),
forgejo_issue_id, execution_mode, category, acceptance_criteria,
_json_encode(labels)),
)
conn.commit()
return get_task(conn, id)
@@ -179,7 +255,7 @@ def update_task(conn: sqlite3.Connection, id: str, **fields) -> dict:
"""Update task fields. Auto-sets updated_at."""
if not fields:
return get_task(conn, id)
json_cols = ("brief", "spec", "review", "test_result", "security_result", "labels")
for key in json_cols:
if key in fields:
fields[key] = _json_encode(fields[key])

@@ -191,6 +267,15 @@ def update_task(conn: sqlite3.Connection, id: str, **fields) -> dict:
return get_task(conn, id)
def mark_telegram_sent(conn: sqlite3.Connection, task_id: str) -> None:
"""Mark that a Telegram escalation was sent for this task."""
conn.execute(
"UPDATE tasks SET telegram_sent = 1 WHERE id = ?",
(task_id,),
)
conn.commit()
# ---------------------------------------------------------------------------
# Decisions
# ---------------------------------------------------------------------------

@@ -220,6 +305,32 @@ def add_decision(
return _row_to_dict(row)
def add_decision_if_new(
conn: sqlite3.Connection,
project_id: str,
type: str,
title: str,
description: str,
category: str | None = None,
tags: list | None = None,
task_id: str | None = None,
) -> dict | None:
"""Add a decision only if no existing one matches (project_id, type, normalized title).
Returns the new decision dict, or None if skipped as duplicate.
"""
existing = conn.execute(
"""SELECT id FROM decisions
WHERE project_id = ? AND type = ?
AND lower(trim(title)) = lower(trim(?))""",
(project_id, type, title),
).fetchone()
if existing:
return None
return add_decision(conn, project_id, type, title, description,
category=category, tags=tags, task_id=task_id)
def get_decisions(
conn: sqlite3.Connection,
project_id: str,

@@ -284,17 +395,26 @@ def add_module(
) -> dict:
"""Register a project module."""
cur = conn.execute(
"""INSERT OR IGNORE INTO modules (project_id, name, type, path, description,
owner_role, dependencies)
VALUES (?, ?, ?, ?, ?, ?, ?)""",
(project_id, name, type, path, description, owner_role,
_json_encode(dependencies)),
)
created = cur.rowcount > 0
conn.commit()
if cur.lastrowid:
row = conn.execute(
"SELECT * FROM modules WHERE id = ?", (cur.lastrowid,)
).fetchone()
else:
row = conn.execute(
"SELECT * FROM modules WHERE project_id = ? AND name = ?",
(project_id, name),
).fetchone()
result = _row_to_dict(row)
result["_created"] = created
return result

def get_modules(conn: sqlite3.Connection, project_id: str) -> list[dict]:

@@ -442,6 +562,58 @@ def list_tickets(
return _rows_to_list(conn.execute(query, params).fetchall())
# ---------------------------------------------------------------------------
# Audit Log
# ---------------------------------------------------------------------------
def log_audit_event(
conn: sqlite3.Connection,
event_type: str,
task_id: str | None = None,
step_id: str | None = None,
reason: str | None = None,
project_id: str | None = None,
) -> dict:
"""Log a security-sensitive event to audit_log.
event_type='dangerous_skip' is used when --dangerously-skip-permissions is invoked.
"""
cur = conn.execute(
"""INSERT INTO audit_log (event_type, task_id, step_id, reason, project_id)
VALUES (?, ?, ?, ?, ?)""",
(event_type, task_id, step_id, reason, project_id),
)
conn.commit()
row = conn.execute(
"SELECT * FROM audit_log WHERE id = ?", (cur.lastrowid,)
).fetchone()
return _row_to_dict(row)
def get_audit_log(
conn: sqlite3.Connection,
task_id: str | None = None,
project_id: str | None = None,
event_type: str | None = None,
limit: int = 100,
) -> list[dict]:
"""Query audit log entries with optional filters."""
query = "SELECT * FROM audit_log WHERE 1=1"
params: list = []
if task_id:
query += " AND task_id = ?"
params.append(task_id)
if project_id:
query += " AND project_id = ?"
params.append(project_id)
if event_type:
query += " AND event_type = ?"
params.append(event_type)
query += " ORDER BY timestamp DESC LIMIT ?"
params.append(limit)
return _rows_to_list(conn.execute(query, params).fetchall())
# ---------------------------------------------------------------------------
# Statistics / Dashboard
# ---------------------------------------------------------------------------

@@ -481,3 +653,291 @@ def get_cost_summary(conn: sqlite3.Connection, days: int = 7) -> list[dict]:
ORDER BY total_cost_usd DESC
""", (f"-{days} days",)).fetchall()
return _rows_to_list(rows)
# ---------------------------------------------------------------------------
# Project Phases (KIN-059)
# ---------------------------------------------------------------------------
def create_phase(
conn: sqlite3.Connection,
project_id: str,
role: str,
phase_order: int,
) -> dict:
"""Create a research phase for a project."""
cur = conn.execute(
"""INSERT INTO project_phases (project_id, role, phase_order, status)
VALUES (?, ?, ?, 'pending')""",
(project_id, role, phase_order),
)
conn.commit()
row = conn.execute(
"SELECT * FROM project_phases WHERE id = ?", (cur.lastrowid,)
).fetchone()
return _row_to_dict(row)
def get_phase(conn: sqlite3.Connection, phase_id: int) -> dict | None:
"""Get a project phase by id."""
row = conn.execute(
"SELECT * FROM project_phases WHERE id = ?", (phase_id,)
).fetchone()
return _row_to_dict(row)
def list_phases(conn: sqlite3.Connection, project_id: str) -> list[dict]:
"""List all phases for a project ordered by phase_order."""
rows = conn.execute(
"SELECT * FROM project_phases WHERE project_id = ? ORDER BY phase_order",
(project_id,),
).fetchall()
return _rows_to_list(rows)
def update_phase(conn: sqlite3.Connection, phase_id: int, **fields) -> dict:
"""Update phase fields. Auto-sets updated_at."""
if not fields:
return get_phase(conn, phase_id)
fields["updated_at"] = datetime.now().isoformat()
sets = ", ".join(f"{k} = ?" for k in fields)
vals = list(fields.values()) + [phase_id]
conn.execute(f"UPDATE project_phases SET {sets} WHERE id = ?", vals)
conn.commit()
return get_phase(conn, phase_id)
# ---------------------------------------------------------------------------
# Project Environments (KIN-087)
# ---------------------------------------------------------------------------
def _get_fernet():
"""Get Fernet instance using KIN_SECRET_KEY env var.
Raises RuntimeError if KIN_SECRET_KEY is not set.
"""
key = os.environ.get("KIN_SECRET_KEY")
if not key:
raise RuntimeError(
"KIN_SECRET_KEY environment variable is not set. "
"Generate with: python -c \"from cryptography.fernet import Fernet; "
"print(Fernet.generate_key().decode())\""
)
from cryptography.fernet import Fernet
return Fernet(key.encode())
def _encrypt_auth(value: str) -> str:
"""Encrypt auth_value using Fernet (AES-128-CBC + HMAC-SHA256)."""
return _get_fernet().encrypt(value.encode()).decode()
def _decrypt_auth(
stored: str,
conn: sqlite3.Connection | None = None,
env_id: int | None = None,
) -> str:
"""Decrypt auth_value. Handles migration from legacy base64 obfuscation.
If stored value uses the old b64: prefix, decodes it and re-encrypts
in the DB (re-encrypt on read) if conn and env_id are provided.
"""
if not stored:
return stored
try:
return _get_fernet().decrypt(stored.encode()).decode()
except Exception:  # InvalidToken or any other decryption failure
# Legacy b64: format — migrate on read
if stored.startswith("b64:"):
plaintext = base64.b64decode(stored[4:]).decode()
if conn is not None and env_id is not None:
new_encrypted = _encrypt_auth(plaintext)
conn.execute(
"UPDATE project_environments SET auth_value = ? WHERE id = ?",
(new_encrypted, env_id),
)
conn.commit()
return plaintext
return stored
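The migrate-on-read flow can be illustrated without the cryptography dependency by swapping Fernet for a toy reversible cipher; all names below (`toy_encrypt`, `toy_decrypt_auth`, the dict-backed store) are hypothetical stand-ins:

```python
import base64

def toy_encrypt(value: str) -> str:
    # Toy stand-in for Fernet: reversed string with a marker prefix.
    return "enc:" + value[::-1]

def toy_decrypt_auth(stored: str, store: dict, env_id: int) -> str:
    # Try "real" decryption first.
    if stored.startswith("enc:"):
        return stored[4:][::-1]
    # Legacy b64: format: decode, then re-encrypt in place (migrate on read).
    if stored.startswith("b64:"):
        plaintext = base64.b64decode(stored[4:]).decode()
        store[env_id] = toy_encrypt(plaintext)
        return plaintext
    return stored

store = {1: "b64:" + base64.b64encode(b"s3cret").decode()}
value = toy_decrypt_auth(store[1], store, 1)  # returns plaintext, upgrades store
```

After the first read, the stored value is in the new format, so subsequent reads never touch the legacy branch.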
def create_environment(
conn: sqlite3.Connection,
project_id: str,
name: str,
host: str,
username: str,
port: int = 22,
auth_type: str = "password",
auth_value: str | None = None,
is_installed: bool = False,
) -> dict:
"""Create a project environment. auth_value stored Fernet-encrypted; returned as None."""
encrypted = _encrypt_auth(auth_value) if auth_value else None
cur = conn.execute(
"""INSERT INTO project_environments
(project_id, name, host, port, username, auth_type, auth_value, is_installed)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
(project_id, name, host, port, username, auth_type, encrypted, int(is_installed)),
)
conn.commit()
row = conn.execute(
"SELECT * FROM project_environments WHERE id = ?", (cur.lastrowid,)
).fetchone()
result = _row_to_dict(row)
result["auth_value"] = None # never expose in API responses
return result
def get_environment(conn: sqlite3.Connection, env_id: int) -> dict | None:
"""Get environment by id. auth_value is returned decrypted (for internal use)."""
row = conn.execute(
"SELECT * FROM project_environments WHERE id = ?", (env_id,)
).fetchone()
result = _row_to_dict(row)
if result and result.get("auth_value"):
result["auth_value"] = _decrypt_auth(result["auth_value"], conn=conn, env_id=env_id)
return result
def list_environments(conn: sqlite3.Connection, project_id: str) -> list[dict]:
"""List all environments for a project. auth_value is always None in response."""
rows = conn.execute(
"SELECT * FROM project_environments WHERE project_id = ? ORDER BY created_at",
(project_id,),
).fetchall()
result = _rows_to_list(rows)
for env in result:
env["auth_value"] = None
return result
def update_environment(conn: sqlite3.Connection, env_id: int, **fields) -> dict:
"""Update environment fields. Auto-sets updated_at. Returns record with auth_value=None."""
if not fields:
result = get_environment(conn, env_id)
if result:
result["auth_value"] = None
return result
if "auth_value" in fields and fields["auth_value"]:
fields["auth_value"] = _encrypt_auth(fields["auth_value"])
elif "auth_value" in fields:
del fields["auth_value"] # empty/None = don't update auth_value
fields["updated_at"] = datetime.now().isoformat()
sets = ", ".join(f"{k} = ?" for k in fields)
vals = list(fields.values()) + [env_id]
conn.execute(f"UPDATE project_environments SET {sets} WHERE id = ?", vals)
conn.commit()
result = get_environment(conn, env_id)
if result:
result["auth_value"] = None
return result
def delete_environment(conn: sqlite3.Connection, env_id: int) -> bool:
"""Delete environment by id. Returns True if deleted, False if not found."""
cur = conn.execute(
"DELETE FROM project_environments WHERE id = ?", (env_id,)
)
conn.commit()
return cur.rowcount > 0
# ---------------------------------------------------------------------------
# Chat Messages (KIN-OBS-012)
# ---------------------------------------------------------------------------
def add_chat_message(
conn: sqlite3.Connection,
project_id: str,
role: str,
content: str,
message_type: str = "text",
task_id: str | None = None,
) -> dict:
"""Add a chat message and return it as dict.
role: 'user' | 'assistant' | 'system'
message_type: 'text' | 'task_created' | 'error'
task_id: set for message_type='task_created' to link to the created task.
"""
cur = conn.execute(
"""INSERT INTO chat_messages (project_id, role, content, message_type, task_id)
VALUES (?, ?, ?, ?, ?)""",
(project_id, role, content, message_type, task_id),
)
conn.commit()
row = conn.execute(
"SELECT * FROM chat_messages WHERE id = ?", (cur.lastrowid,)
).fetchone()
return _row_to_dict(row)
# ---------------------------------------------------------------------------
# Task Attachments (KIN-090)
# ---------------------------------------------------------------------------
def create_attachment(
conn: sqlite3.Connection,
task_id: str,
filename: str,
path: str,
mime_type: str,
size: int,
) -> dict:
"""Create a task attachment record. path must be absolute."""
cur = conn.execute(
"""INSERT INTO task_attachments (task_id, filename, path, mime_type, size)
VALUES (?, ?, ?, ?, ?)""",
(task_id, filename, path, mime_type, size),
)
conn.commit()
row = conn.execute(
"SELECT * FROM task_attachments WHERE id = ?", (cur.lastrowid,)
).fetchone()
return _row_to_dict(row)
def list_attachments(conn: sqlite3.Connection, task_id: str) -> list[dict]:
"""List all attachments for a task ordered by creation time."""
rows = conn.execute(
"SELECT * FROM task_attachments WHERE task_id = ? ORDER BY created_at",
(task_id,),
).fetchall()
return _rows_to_list(rows)
def get_attachment(conn: sqlite3.Connection, attachment_id: int) -> dict | None:
"""Get a single attachment by id."""
row = conn.execute(
"SELECT * FROM task_attachments WHERE id = ?", (attachment_id,)
).fetchone()
return _row_to_dict(row)
def delete_attachment(conn: sqlite3.Connection, attachment_id: int) -> bool:
"""Delete attachment record. Returns True if deleted, False if not found."""
cur = conn.execute("DELETE FROM task_attachments WHERE id = ?", (attachment_id,))
conn.commit()
return cur.rowcount > 0
def get_chat_messages(
conn: sqlite3.Connection,
project_id: str,
limit: int = 50,
before_id: int | None = None,
) -> list[dict]:
"""Get chat messages for a project in chronological order (oldest first).
before_id: pagination cursor; only messages with id < before_id are returned.
"""
query = "SELECT * FROM chat_messages WHERE project_id = ?"
params: list = [project_id]
if before_id is not None:
query += " AND id < ?"
params.append(before_id)
query += " ORDER BY created_at ASC, id ASC LIMIT ?"
params.append(limit)
return _rows_to_list(conn.execute(query, params).fetchall())
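The cursor pagination above can be exercised against an in-memory SQLite table; the schema here is a hypothetical reduction of `chat_messages` to the columns the query needs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE chat_messages ("
    "id INTEGER PRIMARY KEY, project_id TEXT, content TEXT)"
)
for i in range(1, 6):
    conn.execute(
        "INSERT INTO chat_messages (project_id, content) VALUES (?, ?)",
        ("kin", f"msg {i}"),
    )

def page(conn, project_id, limit, before_id=None):
    # Same shape as get_chat_messages: optional cursor, oldest first.
    query = "SELECT id FROM chat_messages WHERE project_id = ?"
    params = [project_id]
    if before_id is not None:
        query += " AND id < ?"
        params.append(before_id)
    query += " ORDER BY id ASC LIMIT ?"
    params.append(limit)
    return [r[0] for r in conn.execute(query, params)]

print(page(conn, "kin", 2))               # [1, 2]
print(page(conn, "kin", 10, before_id=4)) # [1, 2, 3]
```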
core/obsidian_sync.py (new file, 183 lines)

@@ -0,0 +1,183 @@
"""
Kin двусторонний sync с Obsidian vault.
Export: decisions .md-файлы с YAML frontmatter
Import: чекбоксы в .md-файлах статус задач
"""
import re
import sqlite3
from pathlib import Path
from typing import Optional
from core import models
def _slug(title: str) -> str:
"""Генерирует slug из заголовка для имени файла."""
s = title.lower()
s = re.sub(r"[^a-zа-я0-9\s-]", "", s)
s = re.sub(r"\s+", "-", s.strip())
return s[:50]
def _decision_to_md(decision: dict) -> str:
"""Форматирует decision как .md файл с YAML frontmatter."""
tags = decision.get("tags") or []
if isinstance(tags, str):
try:
import json
tags = json.loads(tags)
except Exception:
tags = []
tags_str = "[" + ", ".join(str(t) for t in tags) + "]"
created_at = (decision.get("created_at") or "")[:10]  # date only
frontmatter = (
"---\n"
f"kin_decision_id: {decision['id']}\n"
f"project: {decision['project_id']}\n"
f"type: {decision['type']}\n"
f"category: {decision.get('category') or ''}\n"
f"tags: {tags_str}\n"
f"created_at: {created_at}\n"
"---\n"
)
body = f"\n# {decision['title']}\n\n{decision['description']}\n"
return frontmatter + body
def _parse_frontmatter(text: str) -> dict:
"""Парсит YAML frontmatter из .md файла (упрощённый парсер через re)."""
result = {}
match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
if not match:
return result
for line in match.group(1).splitlines():
if ":" in line:
key, _, val = line.partition(":")
result[key.strip()] = val.strip()
return result
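A quick round-trip of the simplified parser on a sample body shaped like `_decision_to_md` output (the field values here are made up for illustration):

```python
import re

sample = (
    "---\n"
    "kin_decision_id: 42\n"
    "project: kin\n"
    "type: tech\n"
    "---\n"
    "\n# Use SQLite\n"
)

def parse_frontmatter(text: str) -> dict:
    # Same logic as _parse_frontmatter: grab the --- block, split on first colon.
    result = {}
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return result
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            result[key.strip()] = val.strip()
    return result

meta = parse_frontmatter(sample)
print(meta)  # {'kin_decision_id': '42', 'project': 'kin', 'type': 'tech'}
```

All values come back as strings; callers that need the numeric `kin_decision_id` must cast it themselves.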
def export_decisions_to_md(
project_id: str,
decisions: list[dict],
vault_path: Path,
) -> list[Path]:
"""Экспортирует decisions в .md-файлы Obsidian. Возвращает список созданных файлов."""
out_dir = vault_path / project_id / "decisions"
out_dir.mkdir(parents=True, exist_ok=True)
created: list[Path] = []
for d in decisions:
slug = _slug(d["title"])
fname = f"{d['id']}-{slug}.md"
fpath = out_dir / fname
fpath.write_text(_decision_to_md(d), encoding="utf-8")
created.append(fpath)
return created
def parse_task_checkboxes(
vault_path: Path,
project_id: str,
) -> list[dict]:
"""Парсит *.md-файлы в vault/{project_id}/tasks/ и {project_id}/ на чекбоксы с task ID.
Returns: [{"task_id": "KIN-013", "done": True, "title": "..."}]
"""
pattern = re.compile(r"^[-*]\s+\[([xX ])\]\s+([A-Z][A-Z0-9]*-(?:[A-Z][A-Z0-9]*-)?\d+)\s+(.+)$")
results: list[dict] = []
search_dirs = [
vault_path / project_id / "tasks",
vault_path / project_id,
]
for search_dir in search_dirs:
if not search_dir.is_dir():
continue
for md_file in search_dir.glob("*.md"):
try:
text = md_file.read_text(encoding="utf-8")
except OSError:
continue
for line in text.splitlines():
m = pattern.match(line.strip())
if m:
check_char, task_id, title = m.group(1), m.group(2), m.group(3)
results.append({
"task_id": task_id,
"done": check_char.lower() == "x",
"title": title.strip(),
})
return results
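The checkbox regex accepts both the old `PROJ-NNN` and the category `PROJ-CAT-NNN` formats; a small sketch exercising it on sample lines:

```python
import re

# Same pattern as parse_task_checkboxes.
pattern = re.compile(
    r"^[-*]\s+\[([xX ])\]\s+([A-Z][A-Z0-9]*-(?:[A-Z][A-Z0-9]*-)?\d+)\s+(.+)$"
)

lines = [
    "- [x] KIN-013 Fix login bug",        # old format, done
    "* [ ] KIN-UI-007 Scroll to bottom",  # category format, not done
    "- [x] no task id here",              # no task ID: ignored
]
parsed = []
for line in lines:
    m = pattern.match(line)
    if m:
        parsed.append((m.group(2), m.group(1).lower() == "x"))

print(parsed)  # [('KIN-013', True), ('KIN-UI-007', False)]
```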
def sync_obsidian(conn: sqlite3.Connection, project_id: str) -> dict:
"""Оркестратор: export decisions + import checkboxes.
Returns:
{
"exported_decisions": int,
"tasks_updated": int,
"errors": list[str],
"vault_path": str
}
"""
project = models.get_project(conn, project_id)
if not project:
raise ValueError(f"Project '{project_id}' not found")
vault_path_str: Optional[str] = project.get("obsidian_vault_path")
if not vault_path_str:
raise ValueError(f"obsidian_vault_path not set for project '{project_id}'")
vault_path = Path(vault_path_str)
errors: list[str] = []
# --- Create vault_path if it does not exist ---
try:
vault_path.mkdir(parents=True, exist_ok=True)
except Exception as e:
errors.append(f"Cannot create vault path {vault_path_str}: {e}")
return {"exported_decisions": 0, "tasks_updated": 0, "errors": errors, "vault_path": vault_path_str}
# --- Export decisions ---
exported_count = 0
try:
decisions = models.get_decisions(conn, project_id)
created_files = export_decisions_to_md(project_id, decisions, vault_path)
exported_count = len(created_files)
except Exception as e:
errors.append(f"Export error: {e}")
# --- Import checkboxes ---
tasks_updated = 0
try:
checkboxes = parse_task_checkboxes(vault_path, project_id)
for item in checkboxes:
if not item["done"]:
continue
task = models.get_task(conn, item["task_id"])
if task is None:
continue
if task.get("project_id") != project_id:
continue
if task.get("status") != "done":
models.update_task(conn, item["task_id"], status="done")
tasks_updated += 1
except Exception as e:
errors.append(f"Import error: {e}")
return {
"exported_decisions": exported_count,
"tasks_updated": tasks_updated,
"errors": errors,
"vault_path": vault_path_str,
}
core/phases.py (new file, 210 lines)

@@ -0,0 +1,210 @@
"""
Kin Research Phase Pipeline (KIN-059).
Sequential workflow: Director describes a new project, picks researcher roles,
each phase produces a task for review; after approval, the next phase activates.
Architect always runs last (auto-added when any researcher is selected).
"""
import sqlite3
from core import models
# Canonical order of research roles (architect always last)
RESEARCH_ROLES = [
"business_analyst",
"market_researcher",
"legal_researcher",
"tech_researcher",
"ux_designer",
"marketer",
"architect",
]
# Human-readable labels
ROLE_LABELS = {
"business_analyst": "Business Analyst",
"market_researcher": "Market Researcher",
"legal_researcher": "Legal Researcher",
"tech_researcher": "Tech Researcher",
"ux_designer": "UX Designer",
"marketer": "Marketer",
"architect": "Architect",
}
def validate_roles(roles: list[str]) -> list[str]:
"""Filter unknown roles, remove duplicates, strip 'architect' (auto-added later)."""
seen: set[str] = set()
result = []
for r in roles:
r = r.strip().lower()
if r == "architect":
continue
if r in RESEARCH_ROLES and r not in seen:
seen.add(r)
result.append(r)
return result
def build_phase_order(selected_roles: list[str]) -> list[str]:
"""Return roles in canonical RESEARCH_ROLES order, append architect if any selected."""
ordered = [r for r in RESEARCH_ROLES if r in selected_roles and r != "architect"]
if ordered:
ordered.append("architect")
return ordered
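The ordering rule is easiest to see in isolation; the role list is copied from above, and the helper name `build_order` is hypothetical:

```python
RESEARCH_ROLES = [
    "business_analyst",
    "market_researcher",
    "legal_researcher",
    "tech_researcher",
    "ux_designer",
    "marketer",
    "architect",
]

def build_order(selected: list[str]) -> list[str]:
    # Canonical order wins over selection order; architect is always appended.
    ordered = [r for r in RESEARCH_ROLES if r in selected and r != "architect"]
    if ordered:
        ordered.append("architect")
    return ordered

print(build_order(["ux_designer", "business_analyst"]))
# ['business_analyst', 'ux_designer', 'architect']
```

Selection order is intentionally discarded, so the Director can pick roles in any order without affecting the pipeline sequence.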
def create_project_with_phases(
conn: sqlite3.Connection,
id: str,
name: str,
path: str | None = None,
*,
description: str,
selected_roles: list[str],
tech_stack: list | None = None,
priority: int = 5,
language: str = "ru",
) -> dict:
"""Create project + sequential research phases.
Returns {project, phases}.
"""
clean_roles = validate_roles(selected_roles)
ordered_roles = build_phase_order(clean_roles)
if not ordered_roles:
raise ValueError("At least one research role must be selected")
project = models.create_project(
conn, id, name, path,
tech_stack=tech_stack, priority=priority, language=language,
description=description,
)
phases = []
for idx, role in enumerate(ordered_roles):
phase = models.create_phase(conn, id, role, idx)
phases.append(phase)
# Activate the first phase immediately
if phases:
phases[0] = activate_phase(conn, phases[0]["id"])
return {"project": project, "phases": phases}
def activate_phase(conn: sqlite3.Connection, phase_id: int) -> dict:
"""Create a task for the phase and set it to active.
Task brief includes project description + phase context.
"""
phase = models.get_phase(conn, phase_id)
if not phase:
raise ValueError(f"Phase {phase_id} not found")
project = models.get_project(conn, phase["project_id"])
if not project:
raise ValueError(f"Project {phase['project_id']} not found")
task_id = models.next_task_id(conn, phase["project_id"], category=None)
brief = {
"text": project.get("description") or project["name"],
"phase": phase["role"],
"phase_order": phase["phase_order"],
"workflow": "research",
}
task = models.create_task(
conn, task_id, phase["project_id"],
title=f"[Research] {ROLE_LABELS.get(phase['role'], phase['role'])}",
assigned_role=phase["role"],
brief=brief,
status="pending",
category=None,
)
updated = models.update_phase(conn, phase_id, task_id=task["id"], status="active")
return updated
def approve_phase(conn: sqlite3.Connection, phase_id: int) -> dict:
"""Approve a phase, activate the next one (or finish workflow).
Returns {phase, next_phase|None}.
"""
phase = models.get_phase(conn, phase_id)
if not phase:
raise ValueError(f"Phase {phase_id} not found")
if phase["status"] != "active":
raise ValueError(f"Phase {phase_id} is not active (current: {phase['status']})")
updated = models.update_phase(conn, phase_id, status="approved")
# Find next pending phase
all_phases = models.list_phases(conn, phase["project_id"])
next_phase = None
for p in all_phases:
if p["phase_order"] > phase["phase_order"] and p["status"] == "pending":
next_phase = p
break
if next_phase:
activated = activate_phase(conn, next_phase["id"])
return {"phase": updated, "next_phase": activated}
return {"phase": updated, "next_phase": None}
def reject_phase(conn: sqlite3.Connection, phase_id: int, reason: str) -> dict:
"""Reject a phase (director rejects the research output entirely)."""
phase = models.get_phase(conn, phase_id)
if not phase:
raise ValueError(f"Phase {phase_id} not found")
if phase["status"] != "active":
raise ValueError(f"Phase {phase_id} is not active (current: {phase['status']})")
return models.update_phase(conn, phase_id, status="rejected")
def revise_phase(conn: sqlite3.Connection, phase_id: int, comment: str) -> dict:
"""Request revision: create a new task for the same role with the comment.
Returns {phase, new_task}.
"""
phase = models.get_phase(conn, phase_id)
if not phase:
raise ValueError(f"Phase {phase_id} not found")
if phase["status"] not in ("active", "revising"):
raise ValueError(
f"Phase {phase_id} cannot be revised (current: {phase['status']})"
)
project = models.get_project(conn, phase["project_id"])
if not project:
raise ValueError(f"Project {phase['project_id']} not found")
new_task_id = models.next_task_id(conn, phase["project_id"], category=None)
brief = {
"text": project.get("description") or project["name"],
"phase": phase["role"],
"phase_order": phase["phase_order"],
"workflow": "research",
"revise_comment": comment,
"revise_count": (phase.get("revise_count") or 0) + 1,
}
new_task = models.create_task(
conn, new_task_id, phase["project_id"],
title=f"[Research Revise] {ROLE_LABELS.get(phase['role'], phase['role'])}",
assigned_role=phase["role"],
brief=brief,
status="pending",
category=None,
)
new_revise_count = (phase.get("revise_count") or 0) + 1
updated = models.update_phase(
conn, phase_id,
status="revising",
task_id=new_task["id"],
revise_count=new_revise_count,
revise_comment=comment,
)
return {"phase": updated, "new_task": new_task}
core/telegram.py (new file, 102 lines)

@@ -0,0 +1,102 @@
"""
Kin Telegram escalation notifications.
Sends a message when a PM agent detects a blocked agent.
Bot token is read from /Volumes/secrets/env/projects.env [kin] section.
Chat ID is read from KIN_TG_CHAT_ID env var.
"""
import configparser
import json
import logging
import os
import urllib.error
import urllib.parse
import urllib.request
from pathlib import Path
_logger = logging.getLogger("kin.telegram")
_SECRETS_PATH = Path("/Volumes/secrets/env/projects.env")
_TELEGRAM_API = "https://api.telegram.org/bot{token}/sendMessage"
def _load_kin_config() -> dict:
"""Load [kin] section from projects.env. Returns dict with available keys."""
if not _SECRETS_PATH.exists():
_logger.warning("secrets not mounted: %s", _SECRETS_PATH)
return {}
parser = configparser.ConfigParser()
parser.read(str(_SECRETS_PATH))
if "kin" not in parser:
_logger.warning("No [kin] section in projects.env")
return {}
return dict(parser["kin"])
def send_telegram_escalation(
task_id: str,
project_name: str,
agent_role: str,
reason: str,
pipeline_step: str | None,
) -> bool:
"""Send a Telegram escalation message for a blocked agent.
Returns True if message was sent successfully, False otherwise.
Never raises: escalation errors must never block the pipeline.
"""
config = _load_kin_config()
bot_token = config.get("tg_bot") or os.environ.get("KIN_TG_BOT_TOKEN")
if not bot_token:
_logger.warning("Telegram bot token not configured; skipping escalation for %s", task_id)
return False
chat_id = os.environ.get("KIN_TG_CHAT_ID")
if not chat_id:
_logger.warning("KIN_TG_CHAT_ID not set; skipping Telegram escalation for %s", task_id)
return False
step_info = f" (шаг {pipeline_step})" if pipeline_step else ""
text = (
f"🚨 *Эскалация* — агент заблокирован\n\n"
f"*Проект:* {_escape_md(project_name)}\n"
f"*Задача:* `{task_id}`\n"
f"*Агент:* `{agent_role}{step_info}`\n"
f"*Причина:*\n{_escape_md(reason or '')}"
)
payload = json.dumps({
"chat_id": chat_id,
"text": text,
"parse_mode": "Markdown",
}).encode("utf-8")
url = _TELEGRAM_API.format(token=bot_token)
req = urllib.request.Request(
url,
data=payload,
headers={"Content-Type": "application/json"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=10) as resp:
if resp.status == 200:
_logger.info("Telegram escalation sent for task %s", task_id)
return True
_logger.warning("Telegram API returned status %d for task %s", resp.status, task_id)
return False
except urllib.error.URLError as exc:
_logger.warning("Telegram send failed for task %s: %s", task_id, exc)
return False
except Exception as exc:
_logger.warning("Unexpected Telegram error for task %s: %s", task_id, exc)
return False
def _escape_md(text: str) -> str:
"""Escape Markdown special characters for Telegram MarkdownV1."""
# MarkdownV1 is lenient — only escape backtick/asterisk/underscore in free text
for ch in ("*", "_", "`"):
text = text.replace(ch, f"\\{ch}")
return text
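A small sketch of the escaping behavior (standalone copy of the function):

```python
def escape_md(text: str) -> str:
    # Escape the three characters Telegram MarkdownV1 treats as formatting.
    for ch in ("*", "_", "`"):
        text = text.replace(ch, f"\\{ch}")
    return text

print(escape_md("run `make test` on *main*"))
# run \`make test\` on \*main\*
```

Task IDs and roles are deliberately not escaped in `send_telegram_escalation`, since they are wrapped in backticks and rendered verbatim.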
core/worktree.py (new file, 149 lines)

@@ -0,0 +1,149 @@
"""
Kin Git worktree management for isolated agent execution.
Each eligible agent step gets its own worktree in {project_path}/.kin_worktrees/
to prevent file-write conflicts between parallel or sequential agents.
All functions are defensive: never raise, always log warnings on error.
"""
import logging
import shutil
import subprocess
from pathlib import Path
_logger = logging.getLogger("kin.worktree")
def _git(project_path: str) -> str:
"""Resolve git executable, preferring extended PATH."""
try:
from agents.runner import _build_claude_env
env = _build_claude_env()
found = shutil.which("git", path=env["PATH"])
return found or "git"
except Exception:
return shutil.which("git") or "git"
def create_worktree(project_path: str, task_id: str, step_name: str = "step") -> str | None:
"""Create a git worktree for isolated agent execution.
Creates: {project_path}/.kin_worktrees/{task_id}-{step_name}
Branch name equals the worktree directory name.
Returns the absolute worktree path, or None on any failure.
"""
git = _git(project_path)
safe_step = step_name.replace("/", "_").replace(" ", "_")
branch_name = f"{task_id}-{safe_step}"
worktrees_dir = Path(project_path) / ".kin_worktrees"
worktree_path = worktrees_dir / branch_name
try:
worktrees_dir.mkdir(exist_ok=True)
r = subprocess.run(
[git, "worktree", "add", "-b", branch_name, str(worktree_path), "HEAD"],
cwd=project_path,
capture_output=True,
text=True,
timeout=30,
)
if r.returncode != 0:
_logger.warning("git worktree add failed for %s: %s", branch_name, r.stderr.strip())
return None
_logger.info("Created worktree: %s", worktree_path)
return str(worktree_path)
except Exception as exc:
_logger.warning("create_worktree error for %s: %s", branch_name, exc)
return None
def merge_worktree(worktree_path: str, project_path: str) -> dict:
"""Merge the worktree branch back into current HEAD of project_path.
Branch name is derived from the worktree directory name.
On conflict: aborts merge and returns success=False with conflict list.
Returns {success: bool, conflicts: list[str], merged_files: list[str]}
"""
git = _git(project_path)
branch_name = Path(worktree_path).name
try:
merge_result = subprocess.run(
[git, "-C", project_path, "merge", "--no-ff", branch_name],
capture_output=True,
text=True,
timeout=60,
)
if merge_result.returncode == 0:
diff_result = subprocess.run(
[git, "-C", project_path, "diff", "HEAD~1", "HEAD", "--name-only"],
capture_output=True,
text=True,
timeout=10,
)
merged_files = [
f.strip() for f in diff_result.stdout.splitlines() if f.strip()
]
_logger.info("Merged worktree %s: %d files", branch_name, len(merged_files))
return {"success": True, "conflicts": [], "merged_files": merged_files}
# Merge failed — collect conflicts and abort
conflict_result = subprocess.run(
[git, "-C", project_path, "diff", "--name-only", "--diff-filter=U"],
capture_output=True,
text=True,
timeout=10,
)
conflicts = [f.strip() for f in conflict_result.stdout.splitlines() if f.strip()]
subprocess.run(
[git, "-C", project_path, "merge", "--abort"],
capture_output=True,
timeout=10,
)
_logger.warning("Merge conflict in worktree %s: %s", branch_name, conflicts)
return {"success": False, "conflicts": conflicts, "merged_files": []}
except Exception as exc:
_logger.warning("merge_worktree error for %s: %s", branch_name, exc)
return {"success": False, "conflicts": [], "merged_files": [], "error": str(exc)}
def cleanup_worktree(worktree_path: str, project_path: str) -> None:
"""Remove the git worktree and its branch. Never raises."""
git = _git(project_path)
branch_name = Path(worktree_path).name
try:
subprocess.run(
[git, "-C", project_path, "worktree", "remove", "--force", worktree_path],
capture_output=True,
timeout=30,
)
subprocess.run(
[git, "-C", project_path, "branch", "-D", branch_name],
capture_output=True,
timeout=10,
)
_logger.info("Cleaned up worktree: %s", worktree_path)
except Exception as exc:
_logger.warning("cleanup_worktree error for %s: %s", branch_name, exc)
def ensure_gitignore(project_path: str) -> None:
"""Ensure .kin_worktrees/ is in project's .gitignore. Never raises."""
entry = ".kin_worktrees/"
gitignore = Path(project_path) / ".gitignore"
try:
if gitignore.exists():
content = gitignore.read_text()
if entry not in content:
with gitignore.open("a") as f:
f.write(f"\n{entry}\n")
else:
gitignore.write_text(f"{entry}\n")
except Exception as exc:
_logger.warning("ensure_gitignore error: %s", exc)

@@ -7,7 +7,7 @@ name = "kin"
 version = "0.1.0"
 description = "Multi-agent project orchestrator"
 requires-python = ">=3.11"
-dependencies = ["click>=8.0", "fastapi>=0.110", "uvicorn>=0.29"]
+dependencies = ["click>=8.0", "fastapi>=0.110", "uvicorn>=0.29", "cryptography>=41.0", "python-multipart>=0.0.9", "PyYAML>=6.0"]
 [project.scripts]
 kin = "cli.main:cli"

requirements.txt (new file, +6 lines)

@@ -0,0 +1,6 @@
click>=8.0
fastapi>=0.110
uvicorn>=0.29
cryptography>=41.0
python-multipart>=0.0.9
PyYAML>=6.0

@@ -19,20 +19,13 @@ npm run build
 echo "[rebuild-frontend] Build complete."
 # Restart API server if it's currently running.
+# API is managed by launchctl with KeepAlive=true — just kill it, launchctl restarts it.
 # pgrep returns 1 if no match; || true prevents set -e from exiting.
 API_PID=$(pgrep -f "uvicorn web.api" 2>/dev/null || true)
 if [ -n "$API_PID" ]; then
-  echo "[rebuild-frontend] Stopping API server (PID: $API_PID) ..."
+  echo "[rebuild-frontend] Restarting API server (PID: $API_PID) — launchctl will auto-restart ..."
   kill "$API_PID" 2>/dev/null || true
-  # Wait for port 8420 to free up (up to 5 s)
-  for i in $(seq 1 5); do
-    pgrep -f "uvicorn web.api" > /dev/null 2>&1 || break
-    sleep 1
-  done
-  echo "[rebuild-frontend] Starting API server ..."
-  cd "$PROJECT_ROOT"
-  nohup python -m uvicorn web.api:app --port 8420 >> /tmp/kin-api.log 2>&1 &
-  echo "[rebuild-frontend] API server started (PID: $!)."
+  echo "[rebuild-frontend] API server restarted (launchctl KeepAlive=true)."
 else
   echo "[rebuild-frontend] API server not running; skipping restart."
 fi

tasks/KIN-013-spec.md (new file, +266 lines)

@@ -0,0 +1,266 @@
# KIN-013 — Settings + Obsidian Sync: Technical Specification

## Context

The feature adds:
1. A Settings page in the GUI for managing project configuration
2. Two-way Obsidian sync: decisions → .md files, Obsidian checkboxes → task status

Sync is invoked explicitly via a button (not a daemon), through an API endpoint.

---

## 1. Data schema

### Change to the `projects` table

Add a column:
```sql
ALTER TABLE projects ADD COLUMN obsidian_vault_path TEXT;
```
**Migration**: in `core/db.py` → `_migrate()`, following the pattern of the existing migrations:
```python
if "obsidian_vault_path" not in proj_cols:
    conn.execute("ALTER TABLE projects ADD COLUMN obsidian_vault_path TEXT")
    conn.commit()
```
**Semantics**: path to the root folder of the Obsidian vault for this project.
Example: `/Users/grosfrumos/Library/Mobile Documents/iCloud~md~obsidian/Documents/MyVault`
---

## 2. .md file format for decisions

### Location
```
{vault_path}/{project_id}/decisions/{id}-{slug}.md
```
Example: `.../kin/decisions/42-proxy-jump-ssh-gotcha.md`

### File format (YAML frontmatter + Markdown body)
```markdown
---
kin_decision_id: 42
project: kin
type: gotcha
category: testing
tags: [testing, mock, subprocess]
created_at: 2026-03-10
---
# Proxy over SSH does not work without ssh-agent
Description: the full description text from the DB.
```
**Rationale for frontmatter**:
- Identifies the file on import (the `kin_decision_id` field)
- Lets Obsidian show the metadata in its Properties panel
- Supports round-trip sync without parsing the file name

### Slug from the title
```python
import re

def _slug(title: str) -> str:
    s = title.lower()
    s = re.sub(r"[^a-zа-я0-9\s-]", "", s)
    s = re.sub(r"\s+", "-", s.strip())
    return s[:50]
```
---

## 3. Two-way sync mechanism

### 3.1 Decisions → Obsidian (export)
- Create/overwrite an `.md` file for every decision in the project
- The directory is created automatically (`mkdir -p`)
- If a file for this `kin_decision_id` already exists, overwrite it (idempotent)
- Decisions deleted from the DB → files are NOT deleted (safe)

### 3.2 Obsidian checkboxes → Tasks (import)
**Source**: `*.md` files in `{vault_path}/{project_id}/tasks/`
Additionally: `{vault_path}/{project_id}/*.md` files

**Task line format**:
```
- [x] KIN-013 Title of the task
- [ ] KIN-014 Another task
```
**Algorithm**:
1. Find lines matching the pattern: `^[-*]\s+\[([xX ])\]\s+([A-Z]+-\d+)\s+(.+)$`
2. Extract: `done` (bool), `task_id` (str), `title` (str)
3. Look up the task in the DB by `task_id`
4. If `done=True` and `task.status != 'done'` → `update_task(conn, task_id, status='done')`
5. If `done=False` → leave the task untouched (no status rollback)
6. If the task is not found → skip it (do not create one)

**Rationale**: strict mapping by task_id only rules out accidental creation of junk tasks.
### 3.3 The `sync_obsidian` function
```python
def sync_obsidian(conn, project_id: str) -> dict:
    """
    Returns:
        {
            "exported_decisions": int,
            "tasks_updated": int,
            "errors": list[str],
            "vault_path": str
        }
    """
```

---

## 4. The `core/obsidian_sync.py` module

### Public module API
```python
def export_decisions_to_md(
    project_id: str,
    decisions: list[dict],
    vault_path: Path,
) -> list[Path]:
    """Exports decisions to Obsidian .md files. Returns the list of created files."""

def parse_task_checkboxes(
    vault_path: Path,
    project_id: str,
) -> list[dict]:
    """Parses *.md files in vault/{project_id}/tasks/ for checkboxes with a task ID.
    Returns: [{"task_id": "KIN-013", "done": True, "title": "..."}]
    """

def sync_obsidian(conn, project_id: str) -> dict:
    """Orchestrator: export + import. Returns statistics."""
```
### Private helpers
```python
def _slug(title: str) -> str                 # slug for the file name
def _decision_to_md(decision: dict) -> str   # formats the .md with frontmatter
def _parse_frontmatter(text: str) -> dict    # for future round-trip
```
### Dependencies
- Python standard library only: `pathlib`, `re`; `yaml` (via `import yaml`) or a hand-rolled YAML parser
- Important: PyYAML may not be installed → emit the YAML frontmatter by hand and parse it with `re`
- Imports from `core.models`: `get_project`, `get_decisions`, `get_task`, `update_task`
- Imports from `core.db`: `get_connection` → NOT needed, conn is passed in from outside
---

## 5. API endpoints

### 5.1 PATCH /api/projects/{project_id} — extend
Add to `ProjectPatch`:
```python
class ProjectPatch(BaseModel):
    execution_mode: str | None = None
    autocommit_enabled: bool | None = None
    obsidian_vault_path: str | None = None  # new field
```
Update the handler: if `obsidian_vault_path` is provided → `update_project(conn, id, obsidian_vault_path=...)`
Adjust the "Nothing to update" check → include `obsidian_vault_path` in the condition.

### 5.2 POST /api/projects/{project_id}/sync/obsidian — new
```python
@app.post("/api/projects/{project_id}/sync/obsidian")
def sync_obsidian_endpoint(project_id: str):
    conn = get_conn()
    p = models.get_project(conn, project_id)
    if not p:
        conn.close()
        raise HTTPException(404, ...)
    if not p.get("obsidian_vault_path"):
        conn.close()
        raise HTTPException(400, "obsidian_vault_path not set for this project")
    from core.obsidian_sync import sync_obsidian
    result = sync_obsidian(conn, project_id)
    conn.close()
    return result
```
---

## 6. Frontend: Settings page

### Route
- Path: `/settings`
- Component: `web/frontend/src/views/SettingsView.vue`
- Registered in `main.ts`

### Navigation in `App.vue`
Add a `Settings` link to the header next to `Kin`.

### SettingsView.vue — structure
```
Settings
├── Project list (v-for)
│   ├── Name + id
│   ├── Input: obsidian_vault_path (text input)
│   ├── [Save] button → PATCH /api/projects/{id}
│   └── [Sync Obsidian] button → POST /api/projects/{id}/sync/obsidian
│       └── Shows the result: "Exported: 5 decisions, Updated: 2 tasks"
```
### api.ts — methods to add
```typescript
// Update project settings
patchProject(id: string, data: { obsidian_vault_path?: string, execution_mode?: string, autocommit_enabled?: boolean })

// Run Obsidian sync
syncObsidian(projectId: string): Promise<{ exported_decisions: number, tasks_updated: number, errors: string[] }>
```
---

## 7. Tests

### `tests/test_obsidian_sync.py`
Required cases:
1. `test_export_decisions_creates_md_files` — export creates files with the correct frontmatter
2. `test_export_idempotent` — a repeated export overwrites, it does not duplicate
3. `test_parse_task_checkboxes_done` — `- [x] KIN-001 Title` → `{"task_id": "KIN-001", "done": True}`
4. `test_parse_task_checkboxes_pending` — `- [ ] KIN-002 Title` → `done: False`
5. `test_parse_task_checkboxes_no_id` — lines without a task ID are skipped
6. `test_sync_updates_task_status` — sync updates the task status when `done=True`
7. `test_sync_no_vault_path` — sync without a vault_path raises ValueError

---

## 8. Risks and limitations
1. **PyYAML is not in the dependencies** → generate the YAML frontmatter string by hand, parse with `re`
2. **The vault path may be unavailable** → sync reports the error in `errors[]` instead of crashing
3. **Conflict on decision rename** → the file with the old slug remains and a new one is created. Acceptable for v1
4. **Large vault** → scan only `{vault_path}/{project_id}/tasks/`, not the whole vault
5. **Concurrent sync** → no locking (SQLite WAL + file system). Good enough for v1

---

## 9. Implementation order (for the dev agent)
1. `core/db.py` — add `obsidian_vault_path` to `_migrate()`
2. `core/obsidian_sync.py` — implement `export_decisions_to_md`, `parse_task_checkboxes`, `sync_obsidian`
3. `tests/test_obsidian_sync.py` — write the tests (the 7 cases above)
4. `web/api.py` — extend `ProjectPatch`, add the `/sync/obsidian` endpoint
5. `web/frontend/src/api.ts` — add the `patchProject` update and `syncObsidian`
6. `web/frontend/src/views/SettingsView.vue` — create the component
7. `web/frontend/src/main.ts` — register the `/settings` route
8. `web/frontend/src/App.vue` — add the Settings link to the nav

tests/conftest.py (new file, +27 lines)

@@ -0,0 +1,27 @@
"""Shared pytest fixtures for Kin test suite."""
import pytest
from unittest.mock import patch
@pytest.fixture(autouse=True)
def _set_kin_secret_key(monkeypatch):
"""Set KIN_SECRET_KEY for all tests (required by _encrypt_auth/_decrypt_auth)."""
from cryptography.fernet import Fernet
monkeypatch.setenv("KIN_SECRET_KEY", Fernet.generate_key().decode())
@pytest.fixture(autouse=True)
def _mock_check_claude_auth():
"""Авто-мок agents.runner.check_claude_auth для всех тестов.
run_pipeline() вызывает check_claude_auth() перед запуском агентов.
Без мока тесты, использующие side_effect-очереди для subprocess.run,
ломаются: первый вызов (auth-check) потребляет элемент очереди.
Тесты TestCheckClaudeAuth (test_runner.py) НЕ затрагиваются:
они вызывают check_claude_auth через напрямую импортированную ссылку
(bound at module load time), а не через agents.runner.check_claude_auth.
"""
with patch("agents.runner.check_claude_auth"):
yield

File diff suppressed because it is too large.

@@ -0,0 +1,304 @@
"""
KIN-090: Integration tests for task attachment API endpoints.
Tests cover:
AC1 upload saves file to {project_path}/.kin/attachments/{task_id}/
AC3 file available for download via GET /api/attachments/{id}/file
AC4 data persists in SQLite
Integration: upload list verify agent context (build_context)
"""
import io
import pytest
from pathlib import Path
from fastapi.testclient import TestClient
import web.api as api_module
@pytest.fixture
def client(tmp_path):
"""TestClient with isolated DB and a seeded project+task.
Project path set to tmp_path so attachment dirs are created there
and cleaned up automatically.
"""
db_path = tmp_path / "test.db"
api_module.DB_PATH = db_path
from web.api import app
c = TestClient(app)
project_path = str(tmp_path / "myproject")
c.post("/api/projects", json={
"id": "prj",
"name": "My Project",
"path": project_path,
})
c.post("/api/tasks", json={"project_id": "prj", "title": "Fix login bug"})
return c
def _png_bytes() -> bytes:
"""Minimal valid 1x1 PNG image."""
import base64
# 1x1 red pixel PNG (base64-encoded)
data = (
b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01"
b"\x08\x02\x00\x00\x00\x90wS\xde\x00\x00\x00\x0cIDATx\x9cc\xf8\x0f\x00"
b"\x00\x01\x01\x00\x05\x18\xd8N\x00\x00\x00\x00IEND\xaeB`\x82"
)
return data
# ---------------------------------------------------------------------------
# Upload
# ---------------------------------------------------------------------------
def test_upload_attachment_returns_201(client):
"""KIN-090: POST /api/tasks/{id}/attachments возвращает 201 и данные вложения."""
r = client.post(
"/api/tasks/PRJ-001/attachments",
files={"file": ("bug.png", io.BytesIO(_png_bytes()), "image/png")},
)
assert r.status_code == 201
data = r.json()
assert data["task_id"] == "PRJ-001"
assert data["filename"] == "bug.png"
assert data["mime_type"] == "image/png"
assert data["size"] == len(_png_bytes())
assert data["id"] is not None
def test_upload_attachment_saves_file_to_correct_path(client, tmp_path):
"""KIN-090: AC1 — файл сохраняется в {project_path}/.kin/attachments/{task_id}/."""
r = client.post(
"/api/tasks/PRJ-001/attachments",
files={"file": ("shot.png", io.BytesIO(_png_bytes()), "image/png")},
)
assert r.status_code == 201
saved_path = Path(r.json()["path"])
# Path structure: <project_path>/.kin/attachments/PRJ-001/shot.png
assert saved_path.name == "shot.png"
assert saved_path.parent.name == "PRJ-001"
assert saved_path.parent.parent.name == "attachments"
assert saved_path.parent.parent.parent.name == ".kin"
assert saved_path.exists()
def test_upload_attachment_file_content_matches(client):
"""KIN-090: содержимое сохранённого файла совпадает с загруженным."""
content = _png_bytes()
r = client.post(
"/api/tasks/PRJ-001/attachments",
files={"file": ("img.png", io.BytesIO(content), "image/png")},
)
assert r.status_code == 201
saved_path = Path(r.json()["path"])
assert saved_path.read_bytes() == content
def test_upload_attachment_persists_in_sqlite(client, tmp_path):
"""KIN-090: AC4 — запись о вложении сохраняется в SQLite и доступна через list."""
client.post(
"/api/tasks/PRJ-001/attachments",
files={"file": ("db_test.png", io.BytesIO(_png_bytes()), "image/png")},
)
# Verify via list endpoint (reads from DB)
r = client.get("/api/tasks/PRJ-001/attachments")
assert r.status_code == 200
assert any(a["filename"] == "db_test.png" for a in r.json())
def test_upload_attachment_task_not_found_returns_404(client):
"""KIN-090: 404 если задача не существует."""
r = client.post(
"/api/tasks/PRJ-999/attachments",
files={"file": ("x.png", io.BytesIO(_png_bytes()), "image/png")},
)
assert r.status_code == 404
def test_upload_attachment_operations_project_returns_400(client, tmp_path):
"""KIN-090: 400 для operations-проекта (нет project path)."""
db_path = tmp_path / "test2.db"
api_module.DB_PATH = db_path
from web.api import app
c = TestClient(app)
c.post("/api/projects", json={
"id": "ops",
"name": "Ops Server",
"project_type": "operations",
"ssh_host": "10.0.0.1",
})
c.post("/api/tasks", json={"project_id": "ops", "title": "Reboot server"})
r = c.post(
"/api/tasks/OPS-001/attachments",
files={"file": ("x.png", io.BytesIO(_png_bytes()), "image/png")},
)
assert r.status_code == 400
def test_upload_oversized_file_returns_413(client):
"""KIN-090: 413 если файл превышает 10 MB."""
big_content = b"x" * (10 * 1024 * 1024 + 1)
r = client.post(
"/api/tasks/PRJ-001/attachments",
files={"file": ("huge.png", io.BytesIO(big_content), "image/png")},
)
assert r.status_code == 413
# ---------------------------------------------------------------------------
# List
# ---------------------------------------------------------------------------
def test_list_attachments_empty_for_new_task(client):
"""KIN-090: GET /api/tasks/{id}/attachments возвращает [] для задачи без вложений."""
r = client.get("/api/tasks/PRJ-001/attachments")
assert r.status_code == 200
assert r.json() == []
def test_list_attachments_returns_all_uploaded(client):
"""KIN-090: список содержит все загруженные вложения."""
client.post("/api/tasks/PRJ-001/attachments",
files={"file": ("a.png", io.BytesIO(_png_bytes()), "image/png")})
client.post("/api/tasks/PRJ-001/attachments",
files={"file": ("b.jpg", io.BytesIO(_png_bytes()), "image/jpeg")})
r = client.get("/api/tasks/PRJ-001/attachments")
assert r.status_code == 200
filenames = {a["filename"] for a in r.json()}
assert "a.png" in filenames
assert "b.jpg" in filenames
def test_list_attachments_task_not_found_returns_404(client):
"""KIN-090: 404 если задача не существует."""
r = client.get("/api/tasks/PRJ-999/attachments")
assert r.status_code == 404
# ---------------------------------------------------------------------------
# Delete
# ---------------------------------------------------------------------------
def test_delete_attachment_returns_204(client):
"""KIN-090: DELETE возвращает 204."""
r = client.post("/api/tasks/PRJ-001/attachments",
files={"file": ("del.png", io.BytesIO(_png_bytes()), "image/png")})
att_id = r.json()["id"]
r = client.delete(f"/api/tasks/PRJ-001/attachments/{att_id}")
assert r.status_code == 204
def test_delete_attachment_removes_from_list(client):
"""KIN-090: после удаления вложение не появляется в списке."""
r = client.post("/api/tasks/PRJ-001/attachments",
files={"file": ("rm.png", io.BytesIO(_png_bytes()), "image/png")})
att_id = r.json()["id"]
client.delete(f"/api/tasks/PRJ-001/attachments/{att_id}")
attachments = client.get("/api/tasks/PRJ-001/attachments").json()
assert not any(a["id"] == att_id for a in attachments)
def test_delete_attachment_removes_file_from_disk(client):
"""KIN-090: удаление вложения удаляет файл с диска."""
r = client.post("/api/tasks/PRJ-001/attachments",
files={"file": ("disk.png", io.BytesIO(_png_bytes()), "image/png")})
saved_path = Path(r.json()["path"])
att_id = r.json()["id"]
assert saved_path.exists()
client.delete(f"/api/tasks/PRJ-001/attachments/{att_id}")
assert not saved_path.exists()
def test_delete_attachment_not_found_returns_404(client):
"""KIN-090: 404 если вложение не существует."""
r = client.delete("/api/tasks/PRJ-001/attachments/99999")
assert r.status_code == 404
# ---------------------------------------------------------------------------
# Download
# ---------------------------------------------------------------------------
def test_download_attachment_file_returns_correct_content(client):
"""KIN-090: AC3 — GET /api/attachments/{id}/file возвращает содержимое файла."""
content = _png_bytes()
r = client.post("/api/tasks/PRJ-001/attachments",
files={"file": ("get.png", io.BytesIO(content), "image/png")})
att_id = r.json()["id"]
r = client.get(f"/api/attachments/{att_id}/file")
assert r.status_code == 200
assert r.content == content
def test_download_attachment_file_returns_correct_content_type(client):
"""KIN-090: AC3 — Content-Type соответствует mime_type вложения."""
r = client.post("/api/tasks/PRJ-001/attachments",
files={"file": ("ct.png", io.BytesIO(_png_bytes()), "image/png")})
att_id = r.json()["id"]
r = client.get(f"/api/attachments/{att_id}/file")
assert r.status_code == 200
assert "image/png" in r.headers["content-type"]
def test_download_attachment_not_found_returns_404(client):
"""KIN-090: 404 если вложение не существует."""
r = client.get("/api/attachments/99999/file")
assert r.status_code == 404
# ---------------------------------------------------------------------------
# Integration: upload → list → agent context (AC2)
# ---------------------------------------------------------------------------
def test_integration_upload_list_agent_context(client, tmp_path):
"""KIN-090: Интеграционный тест: upload → list → build_context включает вложения.
Проверяет AC1 (путь), AC3 (доступен для скачивания), AC4 (SQLite),
и AC2 (агенты получают вложения через build_context).
"""
# Step 1: Upload image
content = _png_bytes()
r = client.post("/api/tasks/PRJ-001/attachments",
files={"file": ("integration.png", io.BytesIO(content), "image/png")})
assert r.status_code == 201
att = r.json()
# Step 2: AC1 — file is at correct path inside project
saved_path = Path(att["path"])
assert saved_path.exists()
assert "PRJ-001" in str(saved_path)
assert ".kin/attachments" in str(saved_path)
# Step 3: List confirms persistence (AC4)
r = client.get("/api/tasks/PRJ-001/attachments")
assert r.status_code == 200
assert len(r.json()) == 1
# Step 4: Download works (AC3)
r = client.get(f"/api/attachments/{att['id']}/file")
assert r.status_code == 200
assert r.content == content
# Step 5: AC2 — agent context includes attachment path
from core.db import init_db
from core.context_builder import build_context
conn = init_db(api_module.DB_PATH)
ctx = build_context(conn, "PRJ-001", "debugger", "prj")
conn.close()
assert "attachments" in ctx
paths = [a["path"] for a in ctx["attachments"]]
assert att["path"] in paths

tests/test_api_chat.py (new file, +56 lines)

@@ -0,0 +1,56 @@
"""Tests for chat endpoints: GET/POST /api/projects/{project_id}/chat (KIN-UI-005)."""
import pytest
from unittest.mock import patch, MagicMock
from fastapi.testclient import TestClient
import web.api as api_module
@pytest.fixture
def client(tmp_path):
db_path = tmp_path / "test.db"
api_module.DB_PATH = db_path
from web.api import app
c = TestClient(app)
c.post("/api/projects", json={"id": "p1", "name": "P1", "path": "/p1"})
return c
def test_get_chat_history_empty_for_new_project(client):
r = client.get("/api/projects/p1/chat")
assert r.status_code == 200
assert r.json() == []
def test_post_chat_task_request_creates_task_stub(client):
with patch("core.chat_intent.classify_intent", return_value="task_request"), \
patch("web.api.subprocess.Popen") as mock_popen:
mock_popen.return_value = MagicMock()
r = client.post("/api/projects/p1/chat", json={"content": "Добавь кнопку выхода"})
assert r.status_code == 200
data = r.json()
assert data["user_message"]["role"] == "user"
assert data["assistant_message"]["message_type"] == "task_created"
assert "task_stub" in data["assistant_message"]
assert data["assistant_message"]["task_stub"]["status"] == "pending"
assert data["task"] is not None
assert mock_popen.called
def test_post_chat_status_query_returns_text_response(client):
with patch("core.chat_intent.classify_intent", return_value="status_query"):
r = client.post("/api/projects/p1/chat", json={"content": "что сейчас в работе?"})
assert r.status_code == 200
data = r.json()
assert data["user_message"]["role"] == "user"
assert data["assistant_message"]["role"] == "assistant"
assert data["task"] is None
assert "Нет активных задач" in data["assistant_message"]["content"]
def test_post_chat_empty_content_returns_400(client):
r = client.post("/api/projects/p1/chat", json={"content": " "})
assert r.status_code == 400

tests/test_api_phases.py (new file, +314 lines)

@@ -0,0 +1,314 @@
"""Tests for web/api.py — Phase endpoints (KIN-059).
Covers:
- POST /api/projects/new создание проекта с фазами
- GET /api/projects/{id}/phases список фаз с joined task
- POST /api/phases/{id}/approve approve фазы
- POST /api/phases/{id}/reject reject фазы
- POST /api/phases/{id}/revise revise фазы
- POST /api/projects/{id}/phases/start запуск агента для активной фазы
"""
from unittest.mock import MagicMock, patch
import pytest
from fastapi.testclient import TestClient
import web.api as api_module
@pytest.fixture
def client(tmp_path):
"""KIN-059: TestClient с изолированной временной БД."""
db_path = tmp_path / "test.db"
api_module.DB_PATH = db_path
from web.api import app
return TestClient(app)
@pytest.fixture
def client_with_phases(client):
"""KIN-059: клиент с уже созданным проектом + фазами."""
r = client.post("/api/projects/new", json={
"id": "proj1",
"name": "Test Project",
"path": "/tmp/proj1",
"description": "Описание тестового проекта",
"roles": ["business_analyst"],
})
assert r.status_code == 200
return client
# ---------------------------------------------------------------------------
# POST /api/projects/new
# ---------------------------------------------------------------------------
def test_post_projects_new_creates_project_and_phases(client):
"""KIN-059: POST /api/projects/new создаёт проект с фазами (researcher + architect)."""
r = client.post("/api/projects/new", json={
"id": "p1",
"name": "My Project",
"path": "/tmp/p1",
"description": "Описание",
"roles": ["tech_researcher"],
})
assert r.status_code == 200
data = r.json()
assert data["project"]["id"] == "p1"
# tech_researcher + architect = 2 фазы
assert len(data["phases"]) == 2
phase_roles = [ph["role"] for ph in data["phases"]]
assert "architect" in phase_roles
assert phase_roles[-1] == "architect"
def test_post_projects_new_no_roles_returns_400(client):
"""KIN-059: POST /api/projects/new без ролей возвращает 400."""
r = client.post("/api/projects/new", json={
"id": "p1",
"name": "P1",
"path": "/tmp/p1",
"description": "Desc",
"roles": [],
})
assert r.status_code == 400
def test_post_projects_new_only_architect_returns_400(client):
"""KIN-059: только architect в roles → 400 (architect не researcher)."""
r = client.post("/api/projects/new", json={
"id": "p1",
"name": "P1",
"path": "/tmp/p1",
"description": "Desc",
"roles": ["architect"],
})
assert r.status_code == 400
def test_post_projects_new_duplicate_id_returns_409(client):
"""KIN-059: повторное создание проекта с тем же id → 409."""
payload = {
"id": "dup",
"name": "Dup",
"path": "/tmp/dup",
"description": "Desc",
"roles": ["marketer"],
}
client.post("/api/projects/new", json=payload)
r = client.post("/api/projects/new", json=payload)
assert r.status_code == 409
def test_post_projects_new_first_phase_is_active(client):
"""KIN-059: первая фаза созданного проекта сразу имеет status=active."""
r = client.post("/api/projects/new", json={
"id": "p1",
"name": "P1",
"path": "/tmp/p1",
"description": "Desc",
"roles": ["market_researcher", "tech_researcher"],
})
assert r.status_code == 200
first_phase = r.json()["phases"][0]
assert first_phase["status"] == "active"
# ---------------------------------------------------------------------------
# GET /api/projects/{project_id}/phases
# ---------------------------------------------------------------------------
def test_get_project_phases_returns_phases_with_task(client_with_phases):
"""KIN-059: GET /api/projects/{id}/phases возвращает фазы с joined полем task."""
r = client_with_phases.get("/api/projects/proj1/phases")
assert r.status_code == 200
phases = r.json()
assert len(phases) >= 1
# Активная первая фаза должна иметь task
active = next((ph for ph in phases if ph["status"] == "active"), None)
assert active is not None
assert active["task"] is not None
def test_get_project_phases_project_not_found_returns_404(client):
"""KIN-059: GET /api/projects/missing/phases → 404."""
r = client.get("/api/projects/missing/phases")
assert r.status_code == 404
# ---------------------------------------------------------------------------
# POST /api/phases/{phase_id}/approve
# ---------------------------------------------------------------------------
def _get_first_active_phase_id(client, project_id: str) -> int:
"""Вспомогательная: получить id первой активной фазы."""
phases = client.get(f"/api/projects/{project_id}/phases").json()
active = next(ph for ph in phases if ph["status"] == "active")
return active["id"]
def test_approve_phase_returns_200_and_activates_next(client_with_phases):
"""KIN-059: POST /api/phases/{id}/approve → 200, следующая фаза активируется."""
phase_id = _get_first_active_phase_id(client_with_phases, "proj1")
r = client_with_phases.post(f"/api/phases/{phase_id}/approve", json={})
assert r.status_code == 200
data = r.json()
assert data["phase"]["status"] == "approved"
# Следующая фаза (architect) активирована
assert data["next_phase"] is not None
assert data["next_phase"]["status"] == "active"
def test_approve_phase_not_found_returns_404(client):
"""KIN-059: approve несуществующей фазы → 404."""
r = client.post("/api/phases/9999/approve", json={})
assert r.status_code == 404
def test_approve_phase_not_active_returns_400(client):
"""KIN-059: approve pending-фазы → 400 (фаза не active)."""
# Создаём проект с двумя researcher-ролями
client.post("/api/projects/new", json={
"id": "p2",
"name": "P2",
"path": "/tmp/p2",
"description": "Desc",
"roles": ["market_researcher", "tech_researcher"],
})
phases = client.get("/api/projects/p2/phases").json()
# Вторая фаза pending
pending = next(ph for ph in phases if ph["status"] == "pending")
r = client.post(f"/api/phases/{pending['id']}/approve", json={})
assert r.status_code == 400
# ---------------------------------------------------------------------------
# POST /api/phases/{phase_id}/reject
# ---------------------------------------------------------------------------
def test_reject_phase_returns_200(client_with_phases):
"""KIN-059: POST /api/phases/{id}/reject → 200, status=rejected."""
phase_id = _get_first_active_phase_id(client_with_phases, "proj1")
r = client_with_phases.post(f"/api/phases/{phase_id}/reject", json={"reason": "Не актуально"})
assert r.status_code == 200
assert r.json()["status"] == "rejected"
def test_reject_phase_not_found_returns_404(client):
"""KIN-059: reject несуществующей фазы → 404."""
r = client.post("/api/phases/9999/reject", json={"reason": "test"})
assert r.status_code == 404
def test_reject_phase_not_active_returns_400(client):
"""KIN-059: reject pending-фазы → 400."""
client.post("/api/projects/new", json={
"id": "p3",
"name": "P3",
"path": "/tmp/p3",
"description": "Desc",
"roles": ["legal_researcher", "ux_designer"],
})
phases = client.get("/api/projects/p3/phases").json()
pending = next(ph for ph in phases if ph["status"] == "pending")
r = client.post(f"/api/phases/{pending['id']}/reject", json={"reason": "test"})
assert r.status_code == 400
# ---------------------------------------------------------------------------
# POST /api/phases/{phase_id}/revise
# ---------------------------------------------------------------------------
def test_revise_phase_returns_200_and_creates_new_task(client_with_phases):
"""KIN-059: POST /api/phases/{id}/revise → 200, создаётся новая задача."""
phase_id = _get_first_active_phase_id(client_with_phases, "proj1")
r = client_with_phases.post(
f"/api/phases/{phase_id}/revise",
json={"comment": "Добавь детали по монетизации"},
)
assert r.status_code == 200
data = r.json()
assert data["phase"]["status"] == "revising"
assert data["new_task"] is not None
assert data["new_task"]["brief"]["revise_comment"] == "Добавь детали по монетизации"
def test_revise_phase_empty_comment_returns_400(client_with_phases):
"""KIN-059: revise с пустым комментарием → 400."""
phase_id = _get_first_active_phase_id(client_with_phases, "proj1")
r = client_with_phases.post(f"/api/phases/{phase_id}/revise", json={"comment": " "})
assert r.status_code == 400
def test_revise_phase_not_found_returns_404(client):
"""KIN-059: revise несуществующей фазы → 404."""
r = client.post("/api/phases/9999/revise", json={"comment": "test"})
assert r.status_code == 404
def test_revise_phase_not_active_returns_400(client):
"""KIN-059: revise pending-фазы → 400."""
client.post("/api/projects/new", json={
"id": "p4",
"name": "P4",
"path": "/tmp/p4",
"description": "Desc",
"roles": ["marketer", "ux_designer"],
})
phases = client.get("/api/projects/p4/phases").json()
pending = next(ph for ph in phases if ph["status"] == "pending")
r = client.post(f"/api/phases/{pending['id']}/revise", json={"comment": "test"})
assert r.status_code == 400
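The approve/reject/revise contract that these tests pin down can be summarized in a small state-machine sketch (hypothetical helper names; the real handlers live behind the HTTP endpoints):

```python
# Sketch of the phase lifecycle the tests above assert (assumed names):
# approve: active -> approved, next pending phase -> active
# reject/revise: only allowed on an active phase, otherwise 400.

def approve(phases: list, idx: int):
    """Approve phases[idx]; activate and return the next phase, if any."""
    if phases[idx]["status"] != "active":
        raise ValueError("400: phase is not active")
    phases[idx]["status"] = "approved"
    if idx + 1 < len(phases):
        phases[idx + 1]["status"] = "active"
        return phases[idx + 1]
    return None

def revise(phase: dict, comment: str) -> dict:
    """Send an active phase back for revision with the director's comment."""
    if not comment.strip():
        raise ValueError("400: empty revise comment")
    if phase["status"] != "active":
        raise ValueError("400: phase is not active")
    phase["status"] = "revising"
    # The new task carries the comment in its brief, as asserted above.
    return {"role": phase["role"], "brief": {"revise_comment": comment}}
```

A blank comment (`" "`) is rejected before any state change, which matches `test_revise_phase_empty_comment_returns_400`.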
# ---------------------------------------------------------------------------
# POST /api/projects/{project_id}/phases/start
# ---------------------------------------------------------------------------
def test_start_phase_returns_202_and_starts_agent(client_with_phases):
"""KIN-059: POST /api/projects/{id}/phases/start → 202, агент запускается в фоне."""
with patch("subprocess.Popen") as mock_popen:
mock_proc = MagicMock()
mock_proc.pid = 12345
mock_popen.return_value = mock_proc
r = client_with_phases.post("/api/projects/proj1/phases/start")
assert r.status_code == 202
data = r.json()
assert data["status"] == "started"
assert "phase_id" in data
assert "task_id" in data
mock_popen.assert_called_once()
def test_start_phase_task_set_to_in_progress(client_with_phases):
"""KIN-059: start устанавливает task.status=in_progress перед запуском агента."""
with patch("subprocess.Popen") as mock_popen:
mock_popen.return_value = MagicMock(pid=1)
r = client_with_phases.post("/api/projects/proj1/phases/start")
task_id = r.json()["task_id"]
task = client_with_phases.get(f"/api/tasks/{task_id}").json()
assert task["status"] == "in_progress"
def test_start_phase_no_active_phase_returns_404(client):
"""KIN-059: start без активной/revising фазы → 404."""
# Проект без фаз (обычный проект через /api/projects)
client.post("/api/projects", json={"id": "plain", "name": "Plain", "path": "/tmp/plain"})
r = client.post("/api/projects/plain/phases/start")
assert r.status_code == 404
def test_start_phase_project_not_found_returns_404(client):
"""KIN-059: start для несуществующего проекта → 404."""
r = client.post("/api/projects/missing/phases/start")
assert r.status_code == 404

tests/test_arch_002.py (new file, +121 lines)

@ -0,0 +1,121 @@
"""Regression tests for KIN-ARCH-002.
Проблема: функция create_project_with_phases имела нестабильную сигнатуру
параметр path с дефолтом на позиции 4, после чего шли обязательные параметры
(description, selected_roles), что могло приводить к SyntaxError при инвалидации
.pyc-кеша в Python 3.14+.
Фикс: параметры path переносится после обязательных ИЛИ изолируется через *
(keyword-only) текущий код использует * для description/selected_roles.
Тесты покрывают:
1. Вызов с path как позиционным аргументом (текущая конвенция в тестах)
2. Вызов с path=... как keyword-аргументом (безопасная конвенция)
3. Вызов без path=None (дефолт работает)
4. Нет SyntaxError при импорте core.phases (regression guard)
5. Стабильность числа тестов: полный suite запускается без collection errors
"""
import pytest
from core.db import init_db
from core import models
from core.phases import create_project_with_phases
@pytest.fixture
def conn():
c = init_db(db_path=":memory:")
yield c
c.close()
# ---------------------------------------------------------------------------
# KIN-ARCH-002 — regression: signature stability of create_project_with_phases
# ---------------------------------------------------------------------------
def test_arch_002_import_core_phases_no_syntax_error():
"""KIN-ARCH-002: импорт core.phases не вызывает SyntaxError."""
import core.phases # noqa: F401 — если упадёт SyntaxError, тест падает
def test_arch_002_path_as_positional_arg(conn):
"""KIN-ARCH-002: path передаётся как позиционный аргумент (4-я позиция).
Текущая конвенция во всех тестах и в web/api.py.
Регрессионная защита: изменение сигнатуры не должно сломать этот вызов.
"""
result = create_project_with_phases(
conn, "arch002a", "Project A", "/some/path",
description="Описание A", selected_roles=["business_analyst"],
)
assert result["project"]["id"] == "arch002a"
assert len(result["phases"]) == 2 # business_analyst + architect
def test_arch_002_path_as_keyword_arg(conn):
"""KIN-ARCH-002: path передаётся как keyword-аргумент.
Рекомендуемая конвенция по итогам debugger-расследования.
Гарантирует, что будущий рефакторинг сигнатуры не сломает код.
"""
result = create_project_with_phases(
conn, "arch002b", "Project B",
description="Описание B",
selected_roles=["tech_researcher"],
path="/keyword/path",
)
assert result["project"]["id"] == "arch002b"
assert result["project"]["path"] == "/keyword/path"
def test_arch_002_path_none_without_operations_raises(conn):
"""KIN-ARCH-002: path=None для non-operations проекта → IntegrityError из БД (CHECK constraint)."""
import sqlite3
with pytest.raises(sqlite3.IntegrityError, match="CHECK constraint"):
create_project_with_phases(
conn, "arch002fail", "Fail",
description="D",
selected_roles=["marketer"],
path=None,
)
def test_arch_002_phases_count_is_deterministic(conn):
"""KIN-ARCH-002: при каждом вызове создаётся ровно N+1 фаз (N researchers + architect)."""
for idx, (roles, expected_count) in enumerate([
(["business_analyst"], 2),
(["business_analyst", "tech_researcher"], 3),
(["business_analyst", "market_researcher", "legal_researcher"], 4),
]):
project_id = f"arch002_det_{idx}"
result = create_project_with_phases(
conn, project_id, f"Project {len(roles)}",
description="Det test",
selected_roles=roles,
path=f"/tmp/det/{idx}",
)
assert len(result["phases"]) == expected_count, (
f"roles={roles}: expected {expected_count} phases, "
f"got {len(result['phases'])}"
)
def test_arch_002_first_phase_active_regardless_of_call_convention(conn):
"""KIN-ARCH-002: первая фаза всегда active независимо от способа передачи path."""
# Positional convention
r1 = create_project_with_phases(
conn, "p_pos", "P pos", "/pos",
description="D", selected_roles=["business_analyst"],
)
assert r1["phases"][0]["status"] == "active"
assert r1["phases"][0]["task_id"] is not None
# Keyword convention
r2 = create_project_with_phases(
conn, "p_kw", "P kw",
description="D", selected_roles=["business_analyst"], path="/kw",
)
assert r2["phases"][0]["status"] == "active"
assert r2["phases"][0]["task_id"] is not None


@ -1,7 +1,8 @@
""" """
Tests for KIN-012 auto mode features: Tests for KIN-012/KIN-063 auto mode features:
- TestAutoApprove: pipeline auto-approves (status done) без ручного review - TestAutoApprove: pipeline auto-approves (status done) без ручного review
(KIN-063: auto_complete только если последний шаг tester или reviewer)
- TestAutoRerunOnPermissionDenied: runner делает retry при permission error, - TestAutoRerunOnPermissionDenied: runner делает retry при permission error,
останавливается после одного retry (лимит = 1) останавливается после одного retry (лимит = 1)
- TestAutoFollowup: generate_followups вызывается сразу, без ожидания - TestAutoFollowup: generate_followups вызывается сразу, без ожидания
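The KIN-063 gate these updated tests encode can be sketched as a predicate (a sketch of the rule, not the actual runner code):

```python
def should_auto_complete(execution_mode: str, steps: list) -> bool:
    """Auto-approve only in auto_complete mode and only when the
    pipeline ends with a verification role (tester or reviewer)."""
    return (execution_mode == "auto_complete"
            and bool(steps)
            and steps[-1]["role"] in {"tester", "reviewer"})
```

This is why the updated tests append a tester or reviewer step to every pipeline that expects status=done.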
@ -75,30 +76,30 @@ class TestAutoApprove:
@patch("agents.runner.run_hooks") @patch("agents.runner.run_hooks")
@patch("agents.runner.subprocess.run") @patch("agents.runner.subprocess.run")
def test_auto_mode_sets_status_done(self, mock_run, mock_hooks, mock_followup, conn): def test_auto_mode_sets_status_done(self, mock_run, mock_hooks, mock_followup, conn):
"""Auto-режим: статус задачи становится 'done', а не 'review'.""" """Auto-complete режим: статус становится 'done', если последний шаг — tester."""
mock_run.return_value = _mock_success()
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "find bug"}, {"role": "tester", "brief": "verify fix"}]
result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is True
task = models.get_task(conn, "VDOL-001")
assert task["status"] == "done", "Auto-complete must auto-approve: status=done"
@patch("core.followup.generate_followups") @patch("core.followup.generate_followups")
@patch("agents.runner.run_hooks") @patch("agents.runner.run_hooks")
@patch("agents.runner.subprocess.run") @patch("agents.runner.subprocess.run")
def test_auto_mode_fires_task_auto_approved_hook(self, mock_run, mock_hooks, mock_followup, conn): def test_auto_mode_fires_task_auto_approved_hook(self, mock_run, mock_hooks, mock_followup, conn):
"""В auto-режиме срабатывает хук task_auto_approved.""" """В auto_complete-режиме срабатывает хук task_auto_approved (если последний шаг — tester)."""
mock_run.return_value = _mock_success()
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "find bug"}, {"role": "tester", "brief": "verify"}]
run_pipeline(conn, "VDOL-001", steps)
events = _get_hook_events(mock_hooks)
@ -140,20 +141,20 @@ class TestAutoApprove:
@patch("agents.runner.run_hooks") @patch("agents.runner.run_hooks")
@patch("agents.runner.subprocess.run") @patch("agents.runner.subprocess.run")
def test_task_level_auto_overrides_project_review(self, mock_run, mock_hooks, mock_followup, conn): def test_task_level_auto_overrides_project_review(self, mock_run, mock_hooks, mock_followup, conn):
"""Если у задачи execution_mode=auto, pipeline auto-approve, даже если проект в review.""" """Если у задачи execution_mode=auto_complete, pipeline auto-approve, даже если проект в review."""
mock_run.return_value = _mock_success()
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
# The project is in review, but the task is auto_complete
models.update_task(conn, "VDOL-001", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "find"}, {"role": "reviewer", "brief": "approve"}]
result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is True
task = models.get_task(conn, "VDOL-001")
assert task["status"] == "done", "Task-level auto_complete must override project review"
@patch("core.followup.generate_followups") @patch("core.followup.generate_followups")
@patch("agents.runner.run_hooks") @patch("agents.runner.run_hooks")
@ -164,11 +165,11 @@ class TestAutoApprove:
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "find"}, {"role": "tester", "brief": "test"}]
result = run_pipeline(conn, "VDOL-001", steps)
assert result.get("mode") == "auto_complete"
# ---------------------------------------------------------------------------
@ -178,10 +179,13 @@ class TestAutoApprove:
class TestAutoRerunOnPermissionDenied:
"""The runner retries a step on permission issues and stops at the limit (1 retry)."""
@patch("agents.runner._get_changed_files", return_value=[])
@patch("agents.runner._run_autocommit")
@patch("agents.runner._run_learning_extraction")
@patch("core.followup.generate_followups") @patch("core.followup.generate_followups")
@patch("agents.runner.run_hooks") @patch("agents.runner.run_hooks")
@patch("agents.runner.subprocess.run") @patch("agents.runner.subprocess.run")
def test_auto_mode_retries_on_permission_error(self, mock_run, mock_hooks, mock_followup, conn): def test_auto_mode_retries_on_permission_error(self, mock_run, mock_hooks, mock_followup, mock_learn, mock_autocommit, mock_changed_files, conn):
"""Auto-режим: при permission denied runner делает 1 retry с allow_write=True.""" """Auto-режим: при permission denied runner делает 1 retry с allow_write=True."""
mock_run.side_effect = [ mock_run.side_effect = [
_mock_permission_denied(), # 1-й вызов: permission error _mock_permission_denied(), # 1-й вызов: permission error
@ -189,8 +193,9 @@ class TestAutoRerunOnPermissionDenied:
]
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
mock_learn.return_value = {"added": 0, "skipped": 0}
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "fix file"}]
result = run_pipeline(conn, "VDOL-001", steps)
@ -209,7 +214,7 @@ class TestAutoRerunOnPermissionDenied:
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "fix"}]
run_pipeline(conn, "VDOL-001", steps)
@ -229,7 +234,7 @@ class TestAutoRerunOnPermissionDenied:
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "fix"}]
run_pipeline(conn, "VDOL-001", steps)
@ -248,7 +253,7 @@ class TestAutoRerunOnPermissionDenied:
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "fix"}]
result = run_pipeline(conn, "VDOL-001", steps)
@ -257,10 +262,13 @@ class TestAutoRerunOnPermissionDenied:
task = models.get_task(conn, "VDOL-001")
assert task["status"] == "blocked"
@patch("agents.runner._get_changed_files", return_value=[])
@patch("agents.runner._run_autocommit")
@patch("agents.runner._run_learning_extraction")
@patch("core.followup.generate_followups") @patch("core.followup.generate_followups")
@patch("agents.runner.run_hooks") @patch("agents.runner.run_hooks")
@patch("agents.runner.subprocess.run") @patch("agents.runner.subprocess.run")
def test_subsequent_steps_use_allow_write_after_retry(self, mock_run, mock_hooks, mock_followup, conn): def test_subsequent_steps_use_allow_write_after_retry(self, mock_run, mock_hooks, mock_followup, mock_learn, mock_autocommit, mock_changed_files, conn):
"""После успешного retry все следующие шаги тоже используют allow_write.""" """После успешного retry все следующие шаги тоже используют allow_write."""
mock_run.side_effect = [ mock_run.side_effect = [
_mock_permission_denied(), # Шаг 1: permission error _mock_permission_denied(), # Шаг 1: permission error
@ -269,8 +277,9 @@ class TestAutoRerunOnPermissionDenied:
]
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
mock_learn.return_value = {"added": 0, "skipped": 0}
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [
{"role": "debugger", "brief": "fix"},
{"role": "tester", "brief": "test"},
@ -293,7 +302,7 @@ class TestAutoRerunOnPermissionDenied:
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "fix"}]
result = run_pipeline(conn, "VDOL-001", steps)
@ -330,13 +339,13 @@ class TestAutoFollowup:
@patch("agents.runner.run_hooks") @patch("agents.runner.run_hooks")
@patch("agents.runner.subprocess.run") @patch("agents.runner.subprocess.run")
def test_auto_followup_triggered_immediately(self, mock_run, mock_hooks, mock_followup, conn): def test_auto_followup_triggered_immediately(self, mock_run, mock_hooks, mock_followup, conn):
"""В auto-режиме generate_followups вызывается сразу после pipeline.""" """В auto_complete-режиме generate_followups вызывается сразу после pipeline (последний шаг — tester)."""
mock_run.return_value = _mock_success() mock_run.return_value = _mock_success()
mock_hooks.return_value = [] mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []} mock_followup.return_value = {"created": [], "pending_actions": []}
models.update_project(conn, "vdol", execution_mode="auto") models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "find"}] steps = [{"role": "debugger", "brief": "find"}, {"role": "tester", "brief": "test"}]
result = run_pipeline(conn, "VDOL-001", steps) result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is True assert result["success"] is True
@ -357,8 +366,8 @@ class TestAutoFollowup:
mock_followup.return_value = {"created": [], "pending_actions": pending}
mock_resolve.return_value = [{"resolved": "rerun", "result": {}}]
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "find"}, {"role": "tester", "brief": "test"}]
run_pipeline(conn, "VDOL-001", steps)
mock_resolve.assert_called_once_with(conn, "VDOL-001", pending)
@ -392,10 +401,10 @@ class TestAutoFollowup:
mock_hooks.return_value = []
mock_followup.return_value = {"created": [], "pending_actions": []}
models.update_project(conn, "vdol", execution_mode="auto_complete")
models.update_task(conn, "VDOL-001", brief={"source": "followup:VDOL-000"})
steps = [{"role": "debugger", "brief": "find"}, {"role": "tester", "brief": "test"}]
result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is True
@ -412,8 +421,8 @@ class TestAutoFollowup:
mock_hooks.return_value = []
mock_followup.side_effect = Exception("followup PM crashed")
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "find"}, {"role": "tester", "brief": "test"}]
result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is True  # Pipeline succeeded, followup failure absorbed
@ -431,8 +440,8 @@ class TestAutoFollowup:
mock_followup.return_value = {"created": [], "pending_actions": []}
mock_resolve.return_value = []
models.update_project(conn, "vdol", execution_mode="auto_complete")
steps = [{"role": "debugger", "brief": "find"}, {"role": "tester", "brief": "test"}]
run_pipeline(conn, "VDOL-001", steps)
mock_resolve.assert_not_called()


@ -114,6 +114,26 @@ def test_detect_modules_empty(tmp_path):
assert detect_modules(tmp_path) == []
def test_detect_modules_deduplication_by_name(tmp_path):
"""KIN-081: detect_modules дедуплицирует по имени (не по имени+путь).
Если два разных scan_dir дают одноимённые модули (например, frontend/src/components
и backend/src/components), результат содержит только первый.
Это соответствует UNIQUE constraint (project_id, name) в таблице modules.
"""
fe_comp = tmp_path / "frontend" / "src" / "components"
fe_comp.mkdir(parents=True)
(fe_comp / "App.vue").write_text("<template></template>")
be_comp = tmp_path / "backend" / "src" / "components"
be_comp.mkdir(parents=True)
(be_comp / "Service.ts").write_text("export class Service {}")
modules = detect_modules(tmp_path)
names = [m["name"] for m in modules]
assert names.count("components") == 1
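First-wins deduplication by name, as the test expects, might look like this (illustrative, not the actual detect_modules internals):

```python
def dedupe_by_name(modules: list) -> list:
    """Keep only the first module for each name, matching the
    UNIQUE (project_id, name) constraint on the modules table."""
    seen = set()
    out = []
    for m in modules:
        if m["name"] not in seen:
            seen.add(m["name"])
            out.append(m)
    return out
```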
def test_detect_modules_backend_pg(tmp_path):
"""Test detection in backend-pg/src/ pattern (like vdolipoperek)."""
src = tmp_path / "backend-pg" / "src" / "services"


@ -161,3 +161,506 @@ class TestLanguageInProject:
def test_context_carries_language(self, conn):
ctx = build_context(conn, "VDOL-001", "pm", "vdol")
assert ctx["project"]["language"] == "ru"
# ---------------------------------------------------------------------------
# KIN-045: Revise context — revise_comment + last agent output injection
# ---------------------------------------------------------------------------
class TestReviseContext:
"""build_context и format_prompt корректно инжектируют контекст ревизии."""
def test_build_context_includes_revise_comment_in_task(self, conn):
"""Если у задачи есть revise_comment, он попадает в ctx['task']."""
conn.execute(
"UPDATE tasks SET revise_comment=? WHERE id='VDOL-001'",
("Доисследуй edge case с пустым массивом",),
)
conn.commit()
ctx = build_context(conn, "VDOL-001", "backend_dev", "vdol")
assert ctx["task"]["revise_comment"] == "Доисследуй edge case с пустым массивом"
def test_build_context_fetches_last_agent_output_when_revise_comment_set(self, conn):
"""При revise_comment build_context достаёт last_agent_output из agent_logs."""
from core import models
models.log_agent_run(
conn, "vdol", "developer", "execute",
task_id="VDOL-001",
output_summary="Реализован endpoint POST /api/items",
success=True,
)
conn.execute(
"UPDATE tasks SET revise_comment=? WHERE id='VDOL-001'",
("Добавь валидацию входных данных",),
)
conn.commit()
ctx = build_context(conn, "VDOL-001", "backend_dev", "vdol")
assert ctx.get("last_agent_output") == "Реализован endpoint POST /api/items"
def test_build_context_no_last_agent_output_when_no_successful_logs(self, conn):
"""revise_comment есть, но нет успешных логов — last_agent_output отсутствует."""
from core import models
models.log_agent_run(
conn, "vdol", "developer", "execute",
task_id="VDOL-001",
output_summary="Permission denied",
success=False,
)
conn.execute(
"UPDATE tasks SET revise_comment=? WHERE id='VDOL-001'",
("Повтори без ошибок",),
)
conn.commit()
ctx = build_context(conn, "VDOL-001", "backend_dev", "vdol")
assert "last_agent_output" not in ctx
def test_build_context_no_revise_fields_when_no_revise_comment(self, conn):
"""Обычная задача без revise_comment не получает last_agent_output в контексте."""
from core import models
models.log_agent_run(
conn, "vdol", "developer", "execute",
task_id="VDOL-001",
output_summary="Всё готово",
success=True,
)
# revise_comment is intentionally not set
ctx = build_context(conn, "VDOL-001", "backend_dev", "vdol")
assert "last_agent_output" not in ctx
assert ctx["task"].get("revise_comment") is None
def test_format_prompt_includes_director_revision_request(self, conn):
"""format_prompt содержит секцию '## Director's revision request:' при revise_comment."""
conn.execute(
"UPDATE tasks SET revise_comment=? WHERE id='VDOL-001'",
("Обработай случай пустого списка",),
)
conn.commit()
ctx = build_context(conn, "VDOL-001", "backend_dev", "vdol")
prompt = format_prompt(ctx, "backend_dev", "You are a developer.")
assert "## Director's revision request:" in prompt
assert "Обработай случай пустого списка" in prompt
def test_format_prompt_includes_previous_output_before_revision(self, conn):
"""format_prompt содержит '## Your previous output (before revision):' при last_agent_output."""
from core import models
models.log_agent_run(
conn, "vdol", "developer", "execute",
task_id="VDOL-001",
output_summary="Сделал миграцию БД",
success=True,
)
conn.execute(
"UPDATE tasks SET revise_comment=? WHERE id='VDOL-001'",
("Ещё добавь индекс",),
)
conn.commit()
ctx = build_context(conn, "VDOL-001", "backend_dev", "vdol")
prompt = format_prompt(ctx, "backend_dev", "You are a developer.")
assert "## Your previous output (before revision):" in prompt
assert "Сделал миграцию БД" in prompt
def test_format_prompt_no_revision_sections_when_no_revise_comment(self, conn):
"""Без revise_comment в prompt нет секций ревизии."""
ctx = build_context(conn, "VDOL-001", "backend_dev", "vdol")
prompt = format_prompt(ctx, "backend_dev", "You are a developer.")
assert "## Director's revision request:" not in prompt
assert "## Your previous output (before revision):" not in prompt
# ---------------------------------------------------------------------------
# KIN-071: project_type and SSH context
# ---------------------------------------------------------------------------
class TestOperationsProject:
"""KIN-071: operations project_type propagates to context and prompt."""
@pytest.fixture
def ops_conn(self):
c = init_db(":memory:")
models.create_project(
c, "srv", "My Server", "",
project_type="operations",
ssh_host="10.0.0.1",
ssh_user="root",
ssh_key_path="~/.ssh/id_rsa",
ssh_proxy_jump="jumpt",
)
models.create_task(c, "SRV-001", "srv", "Scan server")
yield c
c.close()
def test_slim_project_includes_project_type(self, ops_conn):
"""KIN-071: _slim_project включает project_type."""
ctx = build_context(ops_conn, "SRV-001", "sysadmin", "srv")
assert ctx["project"]["project_type"] == "operations"
def test_slim_project_includes_ssh_fields_for_operations(self, ops_conn):
"""KIN-071: _slim_project включает ssh_* поля для operations-проектов."""
ctx = build_context(ops_conn, "SRV-001", "sysadmin", "srv")
proj = ctx["project"]
assert proj["ssh_host"] == "10.0.0.1"
assert proj["ssh_user"] == "root"
assert proj["ssh_key_path"] == "~/.ssh/id_rsa"
assert proj["ssh_proxy_jump"] == "jumpt"
def test_slim_project_no_ssh_fields_for_development(self):
"""KIN-071: development-проект не получает ssh_* в slim."""
c = init_db(":memory:")
models.create_project(c, "dev", "Dev", "/path")
models.create_task(c, "DEV-001", "dev", "A task")
ctx = build_context(c, "DEV-001", "backend_dev", "dev")
assert "ssh_host" not in ctx["project"]
c.close()
def test_sysadmin_context_gets_decisions_and_modules(self, ops_conn):
"""KIN-071: sysadmin роль получает все decisions и modules."""
models.add_module(ops_conn, "srv", "nginx", "service", "/etc/nginx")
models.add_decision(ops_conn, "srv", "gotcha", "Port 80 in use", "conflict")
ctx = build_context(ops_conn, "SRV-001", "sysadmin", "srv")
assert "decisions" in ctx
assert "modules" in ctx
assert len(ctx["modules"]) == 1
def test_format_prompt_includes_ssh_connection_section(self, ops_conn):
"""KIN-071: format_prompt добавляет '## SSH Connection' для operations."""
ctx = build_context(ops_conn, "SRV-001", "sysadmin", "srv")
prompt = format_prompt(ctx, "sysadmin", "You are sysadmin.")
assert "## SSH Connection" in prompt
assert "10.0.0.1" in prompt
assert "root" in prompt
assert "jumpt" in prompt
def test_format_prompt_no_ssh_section_for_development(self):
"""KIN-071: development-проект не получает SSH-секцию в prompt."""
c = init_db(":memory:")
models.create_project(c, "dev", "Dev", "/path")
models.create_task(c, "DEV-001", "dev", "A task")
ctx = build_context(c, "DEV-001", "backend_dev", "dev")
prompt = format_prompt(ctx, "backend_dev", "You are a dev.")
assert "## SSH Connection" not in prompt
c.close()
def test_format_prompt_includes_project_type(self, ops_conn):
"""KIN-071: format_prompt включает Project type в секцию проекта."""
ctx = build_context(ops_conn, "SRV-001", "sysadmin", "srv")
prompt = format_prompt(ctx, "sysadmin", "You are sysadmin.")
assert "Project type: operations" in prompt
# ---------------------------------------------------------------------------
# KIN-071: PM routing — operations project routes PM to infra_* pipelines
# ---------------------------------------------------------------------------
class TestPMRoutingOperations:
"""PM-контекст для operations-проекта должен содержать infra-маршруты,
не включающие architect/frontend_dev."""
@pytest.fixture
def ops_conn(self):
c = init_db(":memory:")
models.create_project(
c, "srv", "My Server", "",
project_type="operations",
ssh_host="10.0.0.1",
ssh_user="root",
)
models.create_task(c, "SRV-001", "srv", "Scan server")
yield c
c.close()
def test_pm_context_has_operations_project_type(self, ops_conn):
"""PM получает project_type=operations в контексте проекта."""
ctx = build_context(ops_conn, "SRV-001", "pm", "srv")
assert ctx["project"]["project_type"] == "operations"
def test_pm_context_has_infra_scan_route(self, ops_conn):
"""PM-контекст содержит маршрут infra_scan из specialists.yaml."""
ctx = build_context(ops_conn, "SRV-001", "pm", "srv")
assert "infra_scan" in ctx["routes"]
def test_pm_context_has_infra_debug_route(self, ops_conn):
"""PM-контекст содержит маршрут infra_debug из specialists.yaml."""
ctx = build_context(ops_conn, "SRV-001", "pm", "srv")
assert "infra_debug" in ctx["routes"]
def test_infra_scan_route_uses_sysadmin(self, ops_conn):
"""infra_scan маршрут включает sysadmin в шагах."""
ctx = build_context(ops_conn, "SRV-001", "pm", "srv")
steps = ctx["routes"]["infra_scan"]["steps"]
assert "sysadmin" in steps
def test_infra_scan_route_excludes_architect(self, ops_conn):
"""infra_scan маршрут не назначает architect."""
ctx = build_context(ops_conn, "SRV-001", "pm", "srv")
steps = ctx["routes"]["infra_scan"]["steps"]
assert "architect" not in steps
def test_infra_scan_route_excludes_frontend_dev(self, ops_conn):
"""infra_scan маршрут не назначает frontend_dev."""
ctx = build_context(ops_conn, "SRV-001", "pm", "srv")
steps = ctx["routes"]["infra_scan"]["steps"]
assert "frontend_dev" not in steps
def test_format_prompt_pm_operations_project_type_label(self, ops_conn):
"""format_prompt для PM с operations-проектом содержит 'Project type: operations'."""
ctx = build_context(ops_conn, "SRV-001", "pm", "srv")
prompt = format_prompt(ctx, "pm", "You are PM.")
assert "Project type: operations" in prompt
# ---------------------------------------------------------------------------
# KIN-090: Attachments — context builder includes attachment paths
# ---------------------------------------------------------------------------
class TestAttachmentsInContext:
"""KIN-090: AC2 — агенты получают пути к вложениям в контексте задачи."""
@pytest.fixture
def conn_with_attachments(self):
c = init_db(":memory:")
models.create_project(c, "prj", "Project", "/tmp/prj")
models.create_task(c, "PRJ-001", "prj", "Fix bug")
models.create_attachment(
c, "PRJ-001", "screenshot.png",
"/tmp/prj/.kin/attachments/PRJ-001/screenshot.png",
"image/png", 1024,
)
models.create_attachment(
c, "PRJ-001", "mockup.jpg",
"/tmp/prj/.kin/attachments/PRJ-001/mockup.jpg",
"image/jpeg", 2048,
)
yield c
c.close()
def test_build_context_includes_attachments(self, conn_with_attachments):
"""KIN-090: AC2 — build_context включает вложения в контекст для всех ролей."""
ctx = build_context(conn_with_attachments, "PRJ-001", "debugger", "prj")
assert "attachments" in ctx
assert len(ctx["attachments"]) == 2
def test_build_context_attachments_have_filename_and_path(self, conn_with_attachments):
"""KIN-090: вложения в контексте содержат filename и path."""
ctx = build_context(conn_with_attachments, "PRJ-001", "debugger", "prj")
filenames = {a["filename"] for a in ctx["attachments"]}
paths = {a["path"] for a in ctx["attachments"]}
assert "screenshot.png" in filenames
assert "mockup.jpg" in filenames
assert "/tmp/prj/.kin/attachments/PRJ-001/screenshot.png" in paths
def test_build_context_attachments_key_always_present(self, conn):
"""KIN-094 #213: ключ 'attachments' всегда присутствует в контексте (пустой список если нет вложений)."""
# conn fixture has no attachments
ctx = build_context(conn, "VDOL-001", "debugger", "vdol")
assert "attachments" in ctx
assert ctx["attachments"] == []
def test_all_roles_get_attachments(self, conn_with_attachments):
"""KIN-090: AC2 — все роли (debugger, pm, tester, reviewer) получают вложения."""
for role in ("debugger", "pm", "tester", "reviewer", "backend_dev", "frontend_dev"):
ctx = build_context(conn_with_attachments, "PRJ-001", role, "prj")
assert "attachments" in ctx, f"Role '{role}' did not receive attachments"
def test_format_prompt_includes_attachments_section(self, conn_with_attachments):
"""KIN-090: format_prompt включает секцию '## Attachments' с именами и путями."""
ctx = build_context(conn_with_attachments, "PRJ-001", "debugger", "prj")
prompt = format_prompt(ctx, "debugger", "You are a debugger.")
assert "## Attachments" in prompt
assert "screenshot.png" in prompt
assert "/tmp/prj/.kin/attachments/PRJ-001/screenshot.png" in prompt
def test_format_prompt_no_attachments_section_when_none(self, conn):
"""KIN-090: format_prompt не добавляет секцию вложений, если их нет."""
ctx = build_context(conn, "VDOL-001", "debugger", "vdol")
prompt = format_prompt(ctx, "debugger", "Debug this.")
assert "## Attachments" not in prompt
# ---------------------------------------------------------------------------
# KIN-094: Attachments — ctx["attachments"] always present + inline text content
# ---------------------------------------------------------------------------
class TestAttachmentsKIN094:
"""KIN-094: AC3 — PM и другие агенты всегда получают ключ attachments в контексте;
текстовые файлы <= 32 KB вставляются inline в промпт."""
@pytest.fixture
def conn_no_attachments(self):
c = init_db(":memory:")
models.create_project(c, "prj", "Prj", "/tmp/prj")
models.create_task(c, "PRJ-001", "prj", "Task")
yield c
c.close()
@pytest.fixture
def conn_text_attachment(self, tmp_path):
"""Проект с текстовым вложением <= 32 KB на диске."""
c = init_db(":memory:")
models.create_project(c, "prj", "Prj", str(tmp_path))
models.create_task(c, "PRJ-001", "prj", "Task")
txt_file = tmp_path / "spec.txt"
txt_file.write_text("Привет, это спека задачи", encoding="utf-8")
models.create_attachment(
c, "PRJ-001", "spec.txt", str(txt_file), "text/plain", txt_file.stat().st_size,
)
yield c
c.close()
@pytest.fixture
def conn_md_attachment(self, tmp_path):
"""Проект с .md вложением (text/markdown или определяется по расширению)."""
c = init_db(":memory:")
models.create_project(c, "prj", "Prj", str(tmp_path))
models.create_task(c, "PRJ-001", "prj", "Task")
md_file = tmp_path / "README.md"
md_file.write_text("# Title\n\nContent of readme", encoding="utf-8")
models.create_attachment(
c, "PRJ-001", "README.md", str(md_file), "text/markdown", md_file.stat().st_size,
)
yield c
c.close()
@pytest.fixture
def conn_json_attachment(self, tmp_path):
"""Проект с JSON-вложением (application/json)."""
c = init_db(":memory:")
models.create_project(c, "prj", "Prj", str(tmp_path))
models.create_task(c, "PRJ-001", "prj", "Task")
json_file = tmp_path / "config.json"
json_file.write_text('{"key": "value"}', encoding="utf-8")
models.create_attachment(
c, "PRJ-001", "config.json", str(json_file), "application/json", json_file.stat().st_size,
)
yield c
c.close()
@pytest.fixture
def conn_large_text_attachment(self, tmp_path):
"""Проект с текстовым вложением > 32 KB (не должно инлайниться)."""
c = init_db(":memory:")
models.create_project(c, "prj", "Prj", str(tmp_path))
models.create_task(c, "PRJ-001", "prj", "Task")
big_file = tmp_path / "big.txt"
big_file.write_text("x" * (32 * 1024 + 1), encoding="utf-8")
models.create_attachment(
c, "PRJ-001", "big.txt", str(big_file), "text/plain", big_file.stat().st_size,
)
yield c
c.close()
@pytest.fixture
def conn_image_attachment(self, tmp_path):
"""Проект с бинарным PNG-вложением (не должно инлайниться)."""
c = init_db(":memory:")
models.create_project(c, "prj", "Prj", str(tmp_path))
models.create_task(c, "PRJ-001", "prj", "Task")
png_file = tmp_path / "screen.png"
png_file.write_bytes(b"\x89PNG\r\n\x1a\n" + b"\x00" * 64)
models.create_attachment(
c, "PRJ-001", "screen.png", str(png_file), "image/png", png_file.stat().st_size,
)
yield c
c.close()
# ------------------------------------------------------------------
# ctx["attachments"] always present
# ------------------------------------------------------------------
def test_pm_context_attachments_empty_list_when_no_attachments(self, conn_no_attachments):
"""KIN-094: PM получает пустой список attachments, а не отсутствующий ключ."""
ctx = build_context(conn_no_attachments, "PRJ-001", "pm", "prj")
assert "attachments" in ctx
assert ctx["attachments"] == []
def test_all_roles_attachments_key_present_when_empty(self, conn_no_attachments):
"""KIN-094: все роли получают ключ attachments (пустой список) даже без вложений."""
for role in ("pm", "debugger", "tester", "reviewer", "backend_dev", "frontend_dev", "architect"):
ctx = build_context(conn_no_attachments, "PRJ-001", role, "prj")
assert "attachments" in ctx, f"Role '{role}' missing 'attachments' key"
assert isinstance(ctx["attachments"], list), f"Role '{role}': attachments is not a list"
# ------------------------------------------------------------------
# Inline content for small text files
# ------------------------------------------------------------------
def test_format_prompt_inlines_small_text_file_content(self, conn_text_attachment):
"""KIN-094: содержимое текстового файла <= 32 KB вставляется inline в промпт."""
ctx = build_context(conn_text_attachment, "PRJ-001", "pm", "prj")
prompt = format_prompt(ctx, "pm", "You are PM.")
assert "Привет, это спека задачи" in prompt
def test_format_prompt_inlines_text_file_in_code_block(self, conn_text_attachment):
"""KIN-094: inline-контент обёрнут в блок кода (``` ... ```)."""
ctx = build_context(conn_text_attachment, "PRJ-001", "pm", "prj")
prompt = format_prompt(ctx, "pm", "You are PM.")
assert "```" in prompt
def test_format_prompt_inlines_md_file_by_extension(self, conn_md_attachment):
"""KIN-094: .md файл определяется по расширению и вставляется inline."""
ctx = build_context(conn_md_attachment, "PRJ-001", "pm", "prj")
prompt = format_prompt(ctx, "pm", "You are PM.")
assert "# Title" in prompt
assert "Content of readme" in prompt
def test_format_prompt_inlines_json_file_by_mime(self, conn_json_attachment):
"""KIN-094: application/json файл вставляется inline по MIME-типу."""
ctx = build_context(conn_json_attachment, "PRJ-001", "pm", "prj")
prompt = format_prompt(ctx, "pm", "You are PM.")
assert '"key": "value"' in prompt
# ------------------------------------------------------------------
# NOT inlined: binary and large files
# ------------------------------------------------------------------
def test_format_prompt_does_not_inline_image_file(self, conn_image_attachment):
"""KIN-094: бинарный PNG файл НЕ вставляется inline."""
ctx = build_context(conn_image_attachment, "PRJ-001", "pm", "prj")
prompt = format_prompt(ctx, "pm", "You are PM.")
# File is listed in ## Attachments section but no ``` block with binary content
assert "screen.png" in prompt # listed
assert "image/png" in prompt
# Should not contain raw binary or ``` code block for the PNG
# We verify the file content (PNG header) is NOT inlined
assert "\x89PNG" not in prompt
def test_format_prompt_does_not_inline_large_text_file(self, conn_large_text_attachment):
"""KIN-094: текстовый файл > 32 KB НЕ вставляется inline."""
ctx = build_context(conn_large_text_attachment, "PRJ-001", "pm", "prj")
prompt = format_prompt(ctx, "pm", "You are PM.")
assert "big.txt" in prompt # listed
# Content should NOT be inlined (32KB+1 of 'x' chars)
assert "x" * 100 not in prompt
# ------------------------------------------------------------------
# Resilience: missing file on disk
# ------------------------------------------------------------------
def test_format_prompt_handles_missing_file_gracefully(self, tmp_path):
"""KIN-094: если файл отсутствует на диске, format_prompt не падает."""
c = init_db(":memory:")
models.create_project(c, "prj", "Prj", str(tmp_path))
models.create_task(c, "PRJ-001", "prj", "Task")
# Register attachment pointing to non-existent file
models.create_attachment(
c, "PRJ-001", "missing.txt",
str(tmp_path / "missing.txt"),
"text/plain", 100,
)
ctx = build_context(c, "PRJ-001", "pm", "prj")
# Should not raise — exception is caught silently
prompt = format_prompt(ctx, "pm", "You are PM.")
assert "missing.txt" in prompt # still listed
c.close()
# ------------------------------------------------------------------
# PM pipeline: attachments available in brief context
# ------------------------------------------------------------------
def test_pm_context_includes_attachment_paths_for_pipeline(self, conn_text_attachment):
"""KIN-094: PM-агент получает пути к вложениям в контексте для старта pipeline."""
ctx = build_context(conn_text_attachment, "PRJ-001", "pm", "prj")
assert len(ctx["attachments"]) == 1
att = ctx["attachments"][0]
assert att["filename"] == "spec.txt"
assert att["mime_type"] == "text/plain"
assert "path" in att

tests/test_db.py (new file, 285 lines)
@@ -0,0 +1,285 @@
"""Tests for core/db.py — schema and migration (KIN-071, KIN-073)."""
import sqlite3
import pytest
from core.db import init_db, _migrate
@pytest.fixture
def conn():
c = init_db(db_path=":memory:")
yield c
c.close()
def _cols(conn, table: str) -> set[str]:
"""Return set of column names for a table."""
return {row["name"] for row in conn.execute(f"PRAGMA table_info({table})").fetchall()}
# ---------------------------------------------------------------------------
# Schema: the new KIN-071 columns are present on a fresh initialization
# ---------------------------------------------------------------------------
class TestProjectsSchemaKin071:
"""PRAGMA table_info(projects) должен содержать новые KIN-071 колонки."""
def test_schema_has_project_type_column(self, conn):
assert "project_type" in _cols(conn, "projects")
def test_schema_has_ssh_host_column(self, conn):
assert "ssh_host" in _cols(conn, "projects")
def test_schema_has_ssh_user_column(self, conn):
assert "ssh_user" in _cols(conn, "projects")
def test_schema_has_ssh_key_path_column(self, conn):
assert "ssh_key_path" in _cols(conn, "projects")
def test_schema_has_ssh_proxy_jump_column(self, conn):
assert "ssh_proxy_jump" in _cols(conn, "projects")
def test_schema_has_description_column(self, conn):
assert "description" in _cols(conn, "projects")
def test_project_type_defaults_to_development(self, conn):
"""INSERT без project_type → значение по умолчанию 'development'."""
conn.execute(
"INSERT INTO projects (id, name, path) VALUES ('t1', 'T', '/t')"
)
conn.commit()
row = conn.execute(
"SELECT project_type FROM projects WHERE id='t1'"
).fetchone()
assert row["project_type"] == "development"
def test_ssh_fields_default_to_null(self, conn):
"""SSH-поля по умолчанию NULL."""
conn.execute(
"INSERT INTO projects (id, name, path) VALUES ('t2', 'T', '/t')"
)
conn.commit()
row = conn.execute(
"SELECT ssh_host, ssh_user, ssh_key_path, ssh_proxy_jump FROM projects WHERE id='t2'"
).fetchone()
assert row["ssh_host"] is None
assert row["ssh_user"] is None
assert row["ssh_key_path"] is None
assert row["ssh_proxy_jump"] is None
# ---------------------------------------------------------------------------
# Migration: _migrate adds the KIN-071 columns to an old schema (which lacks them)
# ---------------------------------------------------------------------------
def _old_schema_conn() -> sqlite3.Connection:
"""Создаёт соединение с минимальной 'старой' схемой без KIN-071 колонок."""
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.executescript("""
CREATE TABLE projects (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
path TEXT NOT NULL,
status TEXT DEFAULT 'active',
language TEXT DEFAULT 'ru',
execution_mode TEXT NOT NULL DEFAULT 'review'
);
CREATE TABLE tasks (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL,
title TEXT NOT NULL,
status TEXT DEFAULT 'pending',
execution_mode TEXT
);
""")
conn.commit()
return conn
def test_migrate_adds_project_type_to_old_schema():
"""_migrate добавляет project_type в старую схему без этой колонки."""
conn = _old_schema_conn()
_migrate(conn)
assert "project_type" in _cols(conn, "projects")
conn.close()
def test_migrate_adds_ssh_host_to_old_schema():
"""_migrate добавляет ssh_host в старую схему."""
conn = _old_schema_conn()
_migrate(conn)
assert "ssh_host" in _cols(conn, "projects")
conn.close()
def test_migrate_adds_all_ssh_columns_to_old_schema():
"""_migrate добавляет все SSH-колонки разом в старую схему."""
conn = _old_schema_conn()
_migrate(conn)
cols = _cols(conn, "projects")
assert {"ssh_host", "ssh_user", "ssh_key_path", "ssh_proxy_jump", "description"}.issubset(cols)
conn.close()
def test_migrate_is_idempotent():
"""Повторный вызов _migrate не ломает схему."""
conn = init_db(":memory:")
before = _cols(conn, "projects")
_migrate(conn)
after = _cols(conn, "projects")
assert before == after
conn.close()
# ---------------------------------------------------------------------------
# Migration KIN-UI-002: table recreation on a minimal schema does not fail
# ---------------------------------------------------------------------------
def test_migrate_recreates_table_without_operationalerror():
"""_migrate не бросает OperationalError при рекреации projects на минимальной схеме.
Регрессионный тест KIN-UI-002: INSERT SELECT в блоке KIN-ARCH-003 ранее
падал на отсутствующих колонках (tech_stack, priority, pm_prompt и др.).
"""
conn = _old_schema_conn()  # path NOT NULL — triggers the recreation
try:
_migrate(conn)
except Exception as exc:
pytest.fail(f"_migrate raised {type(exc).__name__}: {exc}")
conn.close()
def test_migrate_path_becomes_nullable_on_old_schema():
"""После миграции старой схемы (path NOT NULL) колонка path становится nullable."""
conn = _old_schema_conn()
_migrate(conn)
path_col = next(
r for r in conn.execute("PRAGMA table_info(projects)").fetchall()
if r[1] == "path"
)
assert path_col[3] == 0, "path must be nullable after the KIN-ARCH-003 migration"
conn.close()
def test_migrate_preserves_existing_rows_on_recreation():
"""Рекреация таблицы сохраняет существующие строки."""
conn = _old_schema_conn()
conn.execute(
"INSERT INTO projects (id, name, path, status) VALUES ('p1', 'MyProj', '/p', 'active')"
)
conn.commit()
_migrate(conn)
row = conn.execute("SELECT id, name, path, status FROM projects WHERE id='p1'").fetchone()
assert row is not None
assert row["name"] == "MyProj"
assert row["path"] == "/p"
assert row["status"] == "active"
conn.close()
def test_migrate_adds_missing_columns_before_recreation():
"""_migrate добавляет tech_stack, priority, pm_prompt, claude_md_path, forgejo_repo, created_at перед рекреацией."""
conn = _old_schema_conn()
_migrate(conn)
cols = _cols(conn, "projects")
required = {"tech_stack", "priority", "pm_prompt", "claude_md_path", "forgejo_repo", "created_at"}
assert required.issubset(cols), f"Missing columns: {required - cols}"
conn.close()
def test_migrate_operations_project_with_null_path():
"""После миграции можно вставить operations-проект с path=NULL."""
conn = _old_schema_conn()
_migrate(conn)
conn.execute(
"INSERT INTO projects (id, name, path, project_type) VALUES ('ops1', 'Ops', NULL, 'operations')"
)
conn.commit()
row = conn.execute("SELECT path, project_type FROM projects WHERE id='ops1'").fetchone()
assert row["path"] is None
assert row["project_type"] == "operations"
conn.close()
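The migration tests above exercise two properties: columns are added when missing, and a second run changes nothing. A minimal sketch of the idempotent "add column if missing" pattern they imply (an illustration only — the real _migrate in core/db.py is more involved and also recreates the projects table):

```python
import sqlite3

def add_column_if_missing(conn: sqlite3.Connection, table: str, ddl: str) -> None:
    """Add a column to `table` unless it already exists.

    `ddl` is the column definition, e.g. "project_type TEXT DEFAULT 'development'";
    its first token is taken as the column name.
    """
    column = ddl.split()[0]
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {ddl}")
```

Because the guard consults PRAGMA table_info before each ALTER TABLE, calling it twice is a no-op — the property test_migrate_is_idempotent asserts.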
# ---------------------------------------------------------------------------
# Schema KIN-073: acceptance_criteria in the tasks table
# ---------------------------------------------------------------------------
class TestTasksAcceptanceCriteriaSchema:
"""Колонка acceptance_criteria присутствует в таблице tasks."""
def test_schema_has_acceptance_criteria_column(self, conn):
assert "acceptance_criteria" in _cols(conn, "tasks")
def test_acceptance_criteria_defaults_to_null(self, conn):
"""Создание задачи без acceptance_criteria — поле NULL (nullable)."""
conn.execute(
"INSERT INTO projects (id, name, path) VALUES ('p1', 'P', '/p')"
)
conn.execute(
"INSERT INTO tasks (id, project_id, title) VALUES ('t1', 'p1', 'My Task')"
)
conn.commit()
row = conn.execute(
"SELECT acceptance_criteria FROM tasks WHERE id='t1'"
).fetchone()
assert row["acceptance_criteria"] is None
def test_create_task_with_acceptance_criteria_saves_field(self, conn):
"""Создание задачи с acceptance_criteria — значение сохраняется в БД."""
conn.execute(
"INSERT INTO projects (id, name, path) VALUES ('p2', 'P', '/p')"
)
criteria = "Поле должно сохраняться. GET возвращает значение."
conn.execute(
"INSERT INTO tasks (id, project_id, title, acceptance_criteria)"
" VALUES ('t2', 'p2', 'Task with criteria', ?)",
(criteria,),
)
conn.commit()
row = conn.execute(
"SELECT acceptance_criteria FROM tasks WHERE id='t2'"
).fetchone()
assert row["acceptance_criteria"] == criteria
def test_get_task_returns_acceptance_criteria(self, conn):
"""SELECT задачи возвращает acceptance_criteria (критерий приёмки 3)."""
conn.execute(
"INSERT INTO projects (id, name, path) VALUES ('p3', 'P', '/p')"
)
conn.execute(
"INSERT INTO tasks (id, project_id, title, acceptance_criteria)"
" VALUES ('t3', 'p3', 'T', 'AC value')",
)
conn.commit()
row = conn.execute("SELECT * FROM tasks WHERE id='t3'").fetchone()
assert row["acceptance_criteria"] == "AC value"
# ---------------------------------------------------------------------------
# Migration KIN-073: _migrate adds acceptance_criteria to an old schema
# ---------------------------------------------------------------------------
def test_migrate_adds_acceptance_criteria_to_old_schema():
"""_migrate добавляет acceptance_criteria в tasks если колонки нет."""
conn = _old_schema_conn()
_migrate(conn)
assert "acceptance_criteria" in _cols(conn, "tasks")
conn.close()
def test_migrate_acceptance_criteria_is_nullable_after_migration():
"""После миграции acceptance_criteria nullable — старые строки не ломаются."""
conn = _old_schema_conn()
conn.execute(
"INSERT INTO projects (id, name, path) VALUES ('pm', 'P', '/p')"
)
conn.execute(
"INSERT INTO tasks (id, project_id, title) VALUES ('tm', 'pm', 'Old Task')"
)
conn.commit()
_migrate(conn)
row = conn.execute("SELECT acceptance_criteria FROM tasks WHERE id='tm'").fetchone()
assert row["acceptance_criteria"] is None
conn.close()

@@ -219,6 +219,35 @@ class TestResolvePendingAction:
# _run_claude with allow_write=True
assert result["rerun_result"]["success"] is True
def test_manual_task_brief_has_task_type_manual_escalation(self, conn):
"""brief["task_type"] должен быть 'manual_escalation' — KIN-020."""
action = {
"type": "permission_fix",
"original_item": {"title": "Fix .dockerignore", "type": "hotfix",
"priority": 3, "brief": "Create .dockerignore"},
}
result = resolve_pending_action(conn, "VDOL-001", action, "manual_task")
assert result is not None
assert result["brief"]["task_type"] == "manual_escalation"
def test_manual_task_brief_includes_source(self, conn):
"""brief["source"] должен содержать ссылку на родительскую задачу — KIN-020."""
action = {
"type": "permission_fix",
"original_item": {"title": "Fix X"},
}
result = resolve_pending_action(conn, "VDOL-001", action, "manual_task")
assert result["brief"]["source"] == "followup:VDOL-001"
def test_manual_task_brief_includes_description(self, conn):
"""brief["description"] копируется из original_item.brief — KIN-020."""
action = {
"type": "permission_fix",
"original_item": {"title": "Fix Y", "brief": "Detailed context here"},
}
result = resolve_pending_action(conn, "VDOL-001", action, "manual_task")
assert result["brief"]["description"] == "Detailed context here"
def test_nonexistent_task(self, conn):
action = {"type": "permission_fix", "original_item": {}}
assert resolve_pending_action(conn, "NOPE", action, "skip") is None
@@ -261,9 +290,177 @@ class TestAutoResolvePendingActions:
tasks = models.list_tasks(conn, project_id="vdol")
assert len(tasks) == 2  # VDOL-001 + the new manual task
@patch("agents.runner._run_claude")
def test_escalated_manual_task_has_task_type_manual_escalation(self, mock_claude, conn):
"""При эскалации после провала rerun созданная задача имеет task_type='manual_escalation' — KIN-020."""
mock_claude.return_value = {"output": "", "returncode": 1}
action = {
"type": "permission_fix",
"description": "Fix X",
"original_item": {"title": "Fix X", "type": "frontend_dev", "brief": "Apply fix"},
"options": ["rerun", "manual_task", "skip"],
}
results = auto_resolve_pending_actions(conn, "VDOL-001", [action])
assert results[0]["resolved"] == "manual_task"
created_task = results[0]["result"]
assert created_task["brief"]["task_type"] == "manual_escalation"
@patch("agents.runner._run_claude") @patch("agents.runner._run_claude")
def test_empty_pending_actions(self, mock_claude, conn): def test_empty_pending_actions(self, mock_claude, conn):
"""Пустой список — пустой результат.""" """Пустой список — пустой результат."""
results = auto_resolve_pending_actions(conn, "VDOL-001", []) results = auto_resolve_pending_actions(conn, "VDOL-001", [])
assert results == [] assert results == []
mock_claude.assert_not_called() mock_claude.assert_not_called()
# ---------------------------------------------------------------------------
# KIN-068 — category is inherited when creating followup and manual tasks
# ---------------------------------------------------------------------------
class TestNextTaskIdWithCategory:
"""_next_task_id с category генерирует ID в формате PROJ-CAT-NNN."""
@pytest.mark.parametrize("category,expected_prefix", [
("SEC", "VDOL-SEC-"),
("UI", "VDOL-UI-"),
("API", "VDOL-API-"),
("INFRA", "VDOL-INFRA-"),
("BIZ", "VDOL-BIZ-"),
])
def test_with_category_produces_cat_format(self, conn, category, expected_prefix):
"""_next_task_id с category возвращает PROJ-CAT-NNN."""
result = _next_task_id(conn, "vdol", category=category)
assert result.startswith(expected_prefix)
suffix = result[len(expected_prefix):]
assert suffix.isdigit() and len(suffix) == 3
def test_with_none_category_produces_plain_format(self, conn):
"""_next_task_id без category возвращает PROJ-NNN (backward compat)."""
result = _next_task_id(conn, "vdol", category=None)
# VDOL-001 already exists → next is VDOL-002
assert result == "VDOL-002"
parts = result.split("-")
assert len(parts) == 2
assert parts[1].isdigit()
def test_first_cat_task_is_001(self, conn):
"""Первая задача категории всегда получает номер 001."""
result = _next_task_id(conn, "vdol", category="DB")
assert result == "VDOL-DB-001"
def test_cat_counter_is_per_category(self, conn):
"""Счётчик независим для каждой категории."""
models.create_task(conn, "VDOL-SEC-001", "vdol", "Security task", category="SEC")
assert _next_task_id(conn, "vdol", category="SEC") == "VDOL-SEC-002"
assert _next_task_id(conn, "vdol", category="UI") == "VDOL-UI-001"
class TestFollowupCategoryInheritance:
"""Регрессионный тест KIN-068: followup задачи наследуют category родителя."""
@pytest.mark.parametrize("category", ["SEC", "UI", "API", "INFRA", "BIZ", None])
@patch("agents.runner._run_claude")
def test_generate_followups_followup_inherits_category(
self, mock_claude, category, conn
):
"""Followup задача наследует category родительской задачи (включая None)."""
# Установить category на родительской задаче
models.update_task(conn, "VDOL-001", category=category)
mock_claude.return_value = {
"output": json.dumps([
{"title": "Followup task", "type": "feature", "priority": 3},
]),
"returncode": 0,
}
result = generate_followups(conn, "VDOL-001")
assert len(result["created"]) == 1
followup = result["created"][0]
# category must match the parent task
assert followup["category"] == category
# the ID must have the correct format
if category:
assert followup["id"].startswith(f"VDOL-{category}-"), (
f"Ожидался ID вида VDOL-{category}-NNN, получен {followup['id']!r}"
)
else:
# Without a category: the old VDOL-NNN format
parts = followup["id"].split("-")
assert len(parts) == 2, (
f"Expected an ID of the form VDOL-NNN (2 parts), got {followup['id']!r}"
)
assert parts[1].isdigit()
@pytest.mark.parametrize("category", ["SEC", "UI", "API", "INFRA", "BIZ", None])
def test_resolve_pending_action_manual_task_inherits_category(
self, category, conn
):
"""manual_task при resolve_pending_action наследует category родителя."""
models.update_task(conn, "VDOL-001", category=category)
action = {
"type": "permission_fix",
"original_item": {
"title": "Fix manually",
"type": "hotfix",
"priority": 4,
"brief": "Apply permissions fix",
},
}
result = resolve_pending_action(conn, "VDOL-001", action, "manual_task")
assert result is not None
assert result["category"] == category
if category:
assert result["id"].startswith(f"VDOL-{category}-"), (
f"Ожидался ID вида VDOL-{category}-NNN, получен {result['id']!r}"
)
else:
parts = result["id"].split("-")
assert len(parts) == 2
assert parts[1].isdigit()
@patch("agents.runner._run_claude")
def test_generate_followups_sec_category_id_format(self, mock_claude, conn):
"""Регрессионный тест KIN-068: followup задача с category=SEC получает ID VDOL-SEC-001."""
models.update_task(conn, "VDOL-001", category="SEC")
mock_claude.return_value = {
"output": json.dumps([{"title": "Fix SQL injection", "priority": 2}]),
"returncode": 0,
}
result = generate_followups(conn, "VDOL-001")
assert len(result["created"]) == 1
followup = result["created"][0]
assert followup["id"] == "VDOL-SEC-001"
assert followup["category"] == "SEC"
@patch("agents.runner._run_claude")
def test_generate_followups_multiple_followups_same_category(self, mock_claude, conn):
"""Несколько followup задач с одной category получают инкрементальные номера."""
models.update_task(conn, "VDOL-001", category="API")
mock_claude.return_value = {
"output": json.dumps([
{"title": "Add auth header", "priority": 2},
{"title": "Add rate limit", "priority": 3},
]),
"returncode": 0,
}
result = generate_followups(conn, "VDOL-001")
assert len(result["created"]) == 2
ids = [t["id"] for t in result["created"]]
assert ids[0] == "VDOL-API-001"
assert ids[1] == "VDOL-API-002"
for t in result["created"]:
assert t["category"] == "API"

@@ -1,6 +1,8 @@
"""Tests for core/hooks.py — post-pipeline hook execution."""
import os
import subprocess
import tempfile
import pytest
from unittest.mock import patch, MagicMock
@@ -538,27 +540,25 @@ class TestKIN052RebuildFrontendCommand:
"""The hook must persist in a file-backed DB and be available after the connection is recreated.
Simulates a restart: create the hook, close the connection, open a new one — the hook is still there.
We use a project other than 'kin' so that _seed_default_hooks does not migrate the hook.
"""
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
db_path = f.name
try:
# First connection — create the project and the hook
conn1 = init_db(db_path)
from core import models as _models
_models.create_project(conn1, "kin-test", "KinTest", "/projects/kin-test",
tech_stack=["vue3"])
cmd = "cd /projects/kin-test/web/frontend && npm run build"
hook = create_hook(conn1, "kin-test", "rebuild-frontend", "pipeline_completed", cmd,
trigger_module_path=None)
hook_id = hook["id"]
conn1.close()
# Second connection — a "restart"; the hook must still be there
conn2 = init_db(db_path)
hooks = get_hooks(conn2, "kin-test", event="pipeline_completed", enabled_only=True)
conn2.close()
assert len(hooks) == 1, "After recreating the connection the hook must remain in the DB"
@@ -568,3 +568,337 @@ class TestKIN052RebuildFrontendCommand:
assert hooks[0]["trigger_module_path"] is None
finally:
os.unlink(db_path)
# ---------------------------------------------------------------------------
# KIN-053: _seed_default_hooks — automatic hook on DB initialization
# ---------------------------------------------------------------------------
class TestKIN053SeedDefaultHooks:
"""Тесты для _seed_default_hooks (KIN-053).
При init_db автоматически создаётся rebuild-frontend хук для проекта 'kin',
если этот проект уже существует в БД. Функция идемпотентна.
"""
def test_seed_skipped_when_no_kin_project(self):
"""_seed_default_hooks не создаёт хук, если проекта 'kin' нет."""
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
db_path = f.name
try:
conn = init_db(db_path)
hooks = get_hooks(conn, "kin", enabled_only=False)
conn.close()
assert hooks == []
finally:
os.unlink(db_path)
def test_seed_creates_hook_when_kin_project_exists(self):
"""_seed_default_hooks создаёт rebuild-frontend хук при наличии проекта 'kin'.
Порядок: init_db create_project('kin') повторный init_db хук есть.
KIN-003: команда теперь scripts/rebuild-frontend.sh, не cd && npm run build.
"""
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
db_path = f.name
try:
conn1 = init_db(db_path)
models.create_project(conn1, "kin", "Kin", "/projects/kin")
conn1.close()
conn2 = init_db(db_path)
hooks = get_hooks(conn2, "kin", event="pipeline_completed", enabled_only=True)
conn2.close()
assert len(hooks) == 1
assert hooks[0]["name"] == "rebuild-frontend"
assert "rebuild-frontend.sh" in hooks[0]["command"]
finally:
os.unlink(db_path)
def test_seed_hook_has_correct_command(self):
"""Команда хука использует динамический путь из projects.path (KIN-BIZ-004).
KIN-003: хук мигрирован на скрипт scripts/rebuild-frontend.sh
с trigger_module_path='web/frontend/*' для точного git-фильтра.
KIN-BIZ-004: путь берётся из projects.path, не захардкожен.
"""
project_path = "/projects/kin"
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
db_path = f.name
try:
conn1 = init_db(db_path)
models.create_project(conn1, "kin", "Kin", project_path)
conn1.close()
conn2 = init_db(db_path)
hooks = get_hooks(conn2, "kin", event="pipeline_completed", enabled_only=False)
conn2.close()
assert hooks[0]["command"] == f"{project_path}/scripts/rebuild-frontend.sh"
assert hooks[0]["trigger_module_path"] == "web/frontend/*"
assert hooks[0]["working_dir"] == project_path
assert hooks[0]["timeout_seconds"] == 300
finally:
os.unlink(db_path)
def test_seed_idempotent_no_duplicate(self):
"""Повторные вызовы init_db не дублируют хук."""
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
db_path = f.name
try:
conn = init_db(db_path)
models.create_project(conn, "kin", "Kin", "/projects/kin")
conn.close()
for _ in range(3):
c = init_db(db_path)
c.close()
conn_final = init_db(db_path)
hooks = get_hooks(conn_final, "kin", event="pipeline_completed", enabled_only=False)
conn_final.close()
assert len(hooks) == 1, f"Ожидается 1 хук, получено {len(hooks)}"
finally:
os.unlink(db_path)
def test_seed_hook_does_not_affect_other_projects(self):
"""Seed не создаёт хуки для других проектов."""
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
db_path = f.name
try:
conn1 = init_db(db_path)
models.create_project(conn1, "kin", "Kin", "/projects/kin")
models.create_project(conn1, "other", "Other", "/projects/other")
conn1.close()
conn2 = init_db(db_path)
other_hooks = get_hooks(conn2, "other", enabled_only=False)
conn2.close()
assert other_hooks == []
finally:
os.unlink(db_path)
def test_seed_hook_migration_updates_existing_hook(self):
"""_seed_default_hooks мигрирует существующий хук используя динамический путь (KIN-BIZ-004).
Если rebuild-frontend уже существует со старой командой (cd && npm run build),
повторный init_db должен обновить его на scripts/rebuild-frontend.sh
с путём из projects.path.
"""
project_path = "/projects/kin"
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
db_path = f.name
try:
conn1 = init_db(db_path)
models.create_project(conn1, "kin", "Kin", project_path)
# Insert the old hook manually (simulates the state before KIN-003)
old_cmd = f"cd {project_path}/web/frontend && npm run build"
conn1.execute(
"""INSERT INTO hooks (project_id, name, event, trigger_module_path, command,
working_dir, timeout_seconds, enabled)
VALUES ('kin', 'rebuild-frontend', 'pipeline_completed',
NULL, ?, NULL, 120, 1)""",
(old_cmd,),
)
conn1.commit()
conn1.close()
# A repeated init_db runs _seed_default_hooks with the migration
conn2 = init_db(db_path)
hooks = get_hooks(conn2, "kin", event="pipeline_completed", enabled_only=False)
conn2.close()
assert len(hooks) == 1
assert hooks[0]["command"] == f"{project_path}/scripts/rebuild-frontend.sh"
assert hooks[0]["trigger_module_path"] == "web/frontend/*"
assert hooks[0]["working_dir"] == project_path
assert hooks[0]["timeout_seconds"] == 300
finally:
os.unlink(db_path)
def test_seed_hook_uses_dynamic_path_not_hardcoded(self):
"""Команда хука содержит путь из projects.path, а не захардкоженный /Users/grosfrumos/... (KIN-BIZ-004).
Создаём проект с нестандартным путём и проверяем,
что хук использует именно этот путь.
"""
custom_path = "/srv/custom/kin-deployment"
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
db_path = f.name
try:
conn1 = init_db(db_path)
models.create_project(conn1, "kin", "Kin", custom_path)
conn1.close()
conn2 = init_db(db_path)
hooks = get_hooks(conn2, "kin", event="pipeline_completed", enabled_only=False)
conn2.close()
assert len(hooks) == 1
assert hooks[0]["command"] == f"{custom_path}/scripts/rebuild-frontend.sh", (
"Команда должна использовать путь из projects.path, не захардкоженный"
)
assert hooks[0]["working_dir"] == custom_path, (
"working_dir должен совпадать с projects.path"
)
assert "/Users/grosfrumos" not in hooks[0]["command"], (
"Захардкоженный путь /Users/grosfrumos не должен присутствовать в команде"
)
finally:
os.unlink(db_path)
# ---------------------------------------------------------------------------
# KIN-003: changed_files — a precise git filter for trigger_module_path
# ---------------------------------------------------------------------------
class TestChangedFilesMatching:
"""Тесты для нового параметра changed_files в run_hooks() (KIN-003).
Когда changed_files передан trigger_module_path матчится по реальным
git-изменённым файлам, а не по task_modules из БД.
"""
def _make_proc(self, returncode=0, stdout="ok", stderr=""):
m = MagicMock()
m.returncode = returncode
m.stdout = stdout
m.stderr = stderr
return m
@pytest.fixture
def frontend_trigger_hook(self, conn):
"""Хук с trigger_module_path='web/frontend/*'."""
return create_hook(
conn, "vdol", "rebuild-frontend", "pipeline_completed",
"scripts/rebuild-frontend.sh",
trigger_module_path="web/frontend/*",
working_dir="/tmp",
)
@patch("core.hooks.subprocess.run")
def test_hook_fires_when_frontend_file_in_changed_files(
self, mock_run, conn, frontend_trigger_hook
):
"""Хук срабатывает, если среди changed_files есть файл в web/frontend/."""
mock_run.return_value = self._make_proc()
results = run_hooks(
conn, "vdol", "VDOL-001",
event="pipeline_completed",
task_modules=[],
changed_files=["web/frontend/App.vue", "core/models.py"],
)
assert len(results) == 1
assert results[0].name == "rebuild-frontend"
mock_run.assert_called_once()
@patch("core.hooks.subprocess.run")
def test_hook_skipped_when_no_frontend_file_in_changed_files(
self, mock_run, conn, frontend_trigger_hook
):
"""Хук НЕ срабатывает, если changed_files не содержит web/frontend/* файлов."""
mock_run.return_value = self._make_proc()
results = run_hooks(
conn, "vdol", "VDOL-001",
event="pipeline_completed",
task_modules=[],
changed_files=["core/models.py", "web/api.py", "agents/runner.py"],
)
assert len(results) == 0
mock_run.assert_not_called()
@patch("core.hooks.subprocess.run")
def test_hook_skipped_when_changed_files_is_empty_list(
self, mock_run, conn, frontend_trigger_hook
):
"""Пустой changed_files [] — хук с trigger_module_path не срабатывает."""
mock_run.return_value = self._make_proc()
results = run_hooks(
conn, "vdol", "VDOL-001",
event="pipeline_completed",
task_modules=[{"path": "web/frontend/App.vue", "name": "App"}],
changed_files=[], # git reports: nothing changed
)
assert len(results) == 0
mock_run.assert_not_called()
@patch("core.hooks.subprocess.run")
def test_changed_files_overrides_task_modules_match(
self, mock_run, conn, frontend_trigger_hook
):
"""Если changed_files передан, task_modules игнорируется для фильтрации.
task_modules содержит frontend-файл, но changed_files нет.
Хук не должен сработать: changed_files имеет приоритет.
"""
mock_run.return_value = self._make_proc()
results = run_hooks(
conn, "vdol", "VDOL-001",
event="pipeline_completed",
task_modules=[{"path": "web/frontend/App.vue", "name": "App"}],
changed_files=["core/models.py"], # нет frontend-файлов
)
assert len(results) == 0, (
"changed_files должен иметь приоритет над task_modules"
)
mock_run.assert_not_called()
@patch("core.hooks.subprocess.run")
def test_fallback_to_task_modules_when_changed_files_is_none(
self, mock_run, conn, frontend_trigger_hook
):
"""Если changed_files=None — используется старое поведение через task_modules."""
mock_run.return_value = self._make_proc()
results = run_hooks(
conn, "vdol", "VDOL-001",
event="pipeline_completed",
task_modules=[{"path": "web/frontend/App.vue", "name": "App"}],
changed_files=None, # not passed — fall back
)
assert len(results) == 1
assert results[0].name == "rebuild-frontend"
mock_run.assert_called_once()
@patch("core.hooks.subprocess.run")
def test_hook_without_trigger_fires_regardless_of_changed_files(
self, mock_run, conn
):
"""Хук без trigger_module_path всегда срабатывает, даже если changed_files=[].
Используется для хуков, которые должны запускаться после каждого pipeline.
"""
mock_run.return_value = self._make_proc()
create_hook(
conn, "vdol", "always-run", "pipeline_completed",
"echo always",
trigger_module_path=None,
working_dir="/tmp",
)
results = run_hooks(
conn, "vdol", "VDOL-001",
event="pipeline_completed",
task_modules=[],
changed_files=[], # empty — but a hook without a filter always runs
)
assert len(results) == 1
assert results[0].name == "always-run"
mock_run.assert_called_once()
@patch("core.hooks.subprocess.run")
def test_deep_frontend_path_matches_glob(
self, mock_run, conn, frontend_trigger_hook
):
"""Вложенные пути web/frontend/src/components/Foo.vue матчатся по 'web/frontend/*'."""
mock_run.return_value = self._make_proc()
results = run_hooks(
conn, "vdol", "VDOL-001",
event="pipeline_completed",
task_modules=[],
changed_files=["web/frontend/src/components/TaskCard.vue"],
)
assert len(results) == 1, (
"fnmatch должен рекурсивно матчить 'web/frontend/*' на вложенные пути"
)
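The filtering behavior these tests encode reduces to a single small predicate. A hedged sketch follows — the function name and dict shapes here are illustrative assumptions, not the actual `core/hooks.py` code. One non-obvious detail the last test relies on: fnmatch's `*` also matches `/`, so `'web/frontend/*'` covers arbitrarily nested paths.

```python
from fnmatch import fnmatch

def hook_should_fire(trigger_module_path, changed_files=None, task_modules=None):
    """Decide whether a hook matches, per the KIN-003 rules sketched above."""
    # A hook without a filter always fires
    if trigger_module_path is None:
        return True
    # changed_files takes priority when provided, even if it is an empty list
    if changed_files is not None:
        return any(fnmatch(f, trigger_module_path) for f in changed_files)
    # Fallback (changed_files=None): old behavior via task_modules from the DB
    return any(fnmatch(m["path"], trigger_module_path) for m in (task_modules or []))
```

Note that `changed_files=[]` and `changed_files=None` deliberately behave differently: an empty list is a positive statement from git that nothing changed, while `None` means the caller did not supply git data at all.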


@@ -0,0 +1,377 @@
"""Regression tests for KIN-089: 500 Internal Server Error when adding credentials.
Root cause: DB schema had label/login/credential columns; code expected name/username/auth_value.
Fix: Migration in core/db.py (_migrate) renames columns label→name, login→username, credential→auth_value.
Acceptance criteria:
1. Credentials can be added without error (status 201, not 500)
2. Credentials are stored in DB (encrypted)
3. Sysadmin task brief contains environment fields for inventory
"""
import sqlite3
import pytest
from unittest.mock import patch, MagicMock
from core.db import init_db, _migrate
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _cols(conn: sqlite3.Connection, table: str) -> set[str]:
return {r[1] for r in conn.execute(f"PRAGMA table_info({table})").fetchall()}
def _conn_with_old_env_schema() -> sqlite3.Connection:
"""Creates in-memory DB with OLD project_environments schema (label/login/credential)."""
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.executescript("""
CREATE TABLE projects (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
path TEXT,
status TEXT DEFAULT 'active',
language TEXT DEFAULT 'ru',
execution_mode TEXT NOT NULL DEFAULT 'review'
);
CREATE TABLE tasks (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL,
title TEXT NOT NULL,
status TEXT DEFAULT 'pending',
execution_mode TEXT
);
CREATE TABLE project_environments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
label TEXT NOT NULL,
host TEXT NOT NULL,
port INTEGER DEFAULT 22,
login TEXT NOT NULL,
auth_type TEXT NOT NULL DEFAULT 'password',
credential TEXT,
is_installed INTEGER NOT NULL DEFAULT 0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
UNIQUE(project_id, label)
);
INSERT INTO projects VALUES ('corelock', 'Corelock', '/corelock', 'active', 'ru', 'review');
INSERT INTO project_environments
(project_id, label, host, port, login, auth_type, credential, is_installed)
VALUES ('corelock', 'prod', '10.5.1.254', 22, 'pelmen', 'password', 'b64:c2VjcmV0', 0);
""")
conn.commit()
return conn
# ---------------------------------------------------------------------------
# Migration: label/login/credential → name/username/auth_value
# ---------------------------------------------------------------------------
class TestKin089Migration:
"""Regression: _migrate renames env columns from old schema to new schema."""
def test_migration_renames_label_to_name(self):
conn = _conn_with_old_env_schema()
_migrate(conn)
cols = _cols(conn, "project_environments")
assert "name" in cols, "After migration, 'name' column must exist"
assert "label" not in cols, "After migration, 'label' column must not exist"
conn.close()
def test_migration_renames_login_to_username(self):
conn = _conn_with_old_env_schema()
_migrate(conn)
cols = _cols(conn, "project_environments")
assert "username" in cols, "After migration, 'username' column must exist"
assert "login" not in cols, "After migration, 'login' column must not exist"
conn.close()
def test_migration_renames_credential_to_auth_value(self):
conn = _conn_with_old_env_schema()
_migrate(conn)
cols = _cols(conn, "project_environments")
assert "auth_value" in cols, "After migration, 'auth_value' column must exist"
assert "credential" not in cols, "After migration, 'credential' column must not exist"
conn.close()
def test_migration_preserves_existing_data(self):
"""After migration, existing env rows must be accessible with new column names."""
conn = _conn_with_old_env_schema()
_migrate(conn)
row = conn.execute(
"SELECT name, username, auth_value FROM project_environments WHERE project_id = 'corelock'"
).fetchone()
assert row is not None, "Existing row must survive migration"
assert row["name"] == "prod"
assert row["username"] == "pelmen"
assert row["auth_value"] == "b64:c2VjcmV0"
conn.close()
def test_migration_is_idempotent_on_new_schema(self):
"""Calling _migrate on a DB that already has new schema must not fail."""
conn = init_db(":memory:")
before = _cols(conn, "project_environments")
_migrate(conn)
after = _cols(conn, "project_environments")
assert before == after, "_migrate must not alter schema when new columns already exist"
conn.close()
def test_migration_preserves_unique_constraint(self):
"""After migration, UNIQUE(project_id, name) constraint must still work."""
conn = _conn_with_old_env_schema()
_migrate(conn)
with pytest.raises(sqlite3.IntegrityError):
conn.execute(
"INSERT INTO project_environments (project_id, name, host, username) "
"VALUES ('corelock', 'prod', '1.2.3.4', 'root')"
)
conn.close()
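The renames these tests check map naturally onto SQLite's `ALTER TABLE ... RENAME COLUMN` (available since SQLite 3.25), which also carries the UNIQUE constraint over to the new name. A minimal idempotent sketch, assuming that statement is used — the real `_migrate` in core/db.py may differ:

```python
import sqlite3

# Old column → new column, per the KIN-089 fix
ENV_RENAMES = [("label", "name"), ("login", "username"), ("credential", "auth_value")]

def migrate_env_columns(conn: sqlite3.Connection) -> None:
    cols = {r[1] for r in conn.execute("PRAGMA table_info(project_environments)")}
    for old, new in ENV_RENAMES:
        # Idempotent: rename only when the old column is still present
        if old in cols and new not in cols:
            conn.execute(
                f"ALTER TABLE project_environments RENAME COLUMN {old} TO {new}"
            )
    conn.commit()
```

Guarding on `PRAGMA table_info` is what makes the migration safe to run on every `init_db`: on an already-migrated schema the loop is a no-op.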
# ---------------------------------------------------------------------------
# Endpoint regression: POST /environments must return 201, not 500
# ---------------------------------------------------------------------------
@pytest.fixture
def client(tmp_path):
import web.api as api_module
api_module.DB_PATH = tmp_path / "test.db"
from web.api import app
from fastapi.testclient import TestClient
c = TestClient(app)
c.post("/api/projects", json={"id": "corelock", "name": "Corelock", "path": "/corelock"})
return c
def test_create_environment_returns_201_not_500(client):
"""Regression KIN-089: POST /environments must not return 500."""
r = client.post("/api/projects/corelock/environments", json={
"name": "prod",
"host": "10.5.1.254",
"username": "pelmen",
"port": 22,
"auth_type": "password",
"auth_value": "s3cr3t",
"is_installed": False,
})
assert r.status_code == 201, f"Expected 201, got {r.status_code}: {r.text}"
def test_create_environment_missing_kin_secret_key_returns_503(tmp_path):
"""When KIN_SECRET_KEY is not set, POST /environments must return 503, not 500.
503 = server misconfiguration (operator error), not 500 (code bug).
"""
import os
import web.api as api_module
api_module.DB_PATH = tmp_path / "test503.db"
from web.api import app
from fastapi.testclient import TestClient
env_without_key = {k: v for k, v in os.environ.items() if k != "KIN_SECRET_KEY"}
with patch.dict(os.environ, env_without_key, clear=True):
c = TestClient(app)
c.post("/api/projects", json={"id": "corelock", "name": "Corelock", "path": "/corelock"})
r = c.post("/api/projects/corelock/environments", json={
"name": "prod",
"host": "10.5.1.254",
"username": "pelmen",
"auth_value": "secret",
})
assert r.status_code == 503, (
f"Missing KIN_SECRET_KEY must return 503 (not 500 or other), got {r.status_code}: {r.text}"
)
# ---------------------------------------------------------------------------
# AC: Credentials stored in DB
# ---------------------------------------------------------------------------
def test_create_environment_auth_value_encrypted_in_db(client):
"""AC: auth_value is stored encrypted in DB, not plain text."""
import web.api as api_module
from core.db import init_db
from core import models as m
r = client.post("/api/projects/corelock/environments", json={
"name": "db-creds-test",
"host": "10.5.1.254",
"username": "pelmen",
"auth_value": "supersecret",
})
assert r.status_code == 201
env_id = r.json()["id"]
conn = init_db(api_module.DB_PATH)
row = conn.execute(
"SELECT auth_value FROM project_environments WHERE id = ?", (env_id,)
).fetchone()
conn.close()
assert row["auth_value"] is not None, "auth_value must be stored in DB"
assert row["auth_value"] != "supersecret", "auth_value must NOT be stored as plain text"
def test_create_environment_auth_value_hidden_in_response(client):
"""AC: auth_value is never returned in API response."""
r = client.post("/api/projects/corelock/environments", json={
"name": "hidden-creds",
"host": "10.5.1.254",
"username": "pelmen",
"auth_value": "supersecret",
})
assert r.status_code == 201
assert r.json().get("auth_value") is None, "auth_value must be None in response"
def test_create_environment_stored_credential_is_decryptable(client):
"""AC: Stored credential can be decrypted back to original value."""
import web.api as api_module
from core.db import init_db
from core import models as m
r = client.post("/api/projects/corelock/environments", json={
"name": "decrypt-test",
"host": "10.5.1.254",
"username": "pelmen",
"auth_value": "mypassword123",
})
assert r.status_code == 201
env_id = r.json()["id"]
conn = init_db(api_module.DB_PATH)
row = conn.execute(
"SELECT auth_value FROM project_environments WHERE id = ?", (env_id,)
).fetchone()
conn.close()
decrypted = m._decrypt_auth(row["auth_value"])
assert decrypted == "mypassword123", "Stored credential must decrypt to original value"
# ---------------------------------------------------------------------------
# AC: Sysadmin sees environment fields in context for inventory
# ---------------------------------------------------------------------------
def test_sysadmin_task_created_with_env_fields_in_brief(client):
"""AC: When is_installed=True, sysadmin task brief contains host and username."""
import web.api as api_module
from core.db import init_db
from core import models as m
with patch("subprocess.Popen") as mock_popen:
mock_popen.return_value = MagicMock(pid=12345)
r = client.post("/api/projects/corelock/environments", json={
"name": "prod-scan",
"host": "10.5.1.254",
"username": "pelmen",
"is_installed": True,
})
assert r.status_code == 201
assert "scan_task_id" in r.json(), "scan_task_id must be returned when is_installed=True"
task_id = r.json()["scan_task_id"]
conn = init_db(api_module.DB_PATH)
task = m.get_task(conn, task_id)
conn.close()
assert task is not None, "Sysadmin task must be created in DB"
assert task["assigned_role"] == "sysadmin"
assert task["category"] == "INFRA"
brief = task["brief"]
brief_str = str(brief)
assert "10.5.1.254" in brief_str, "Sysadmin brief must contain host for inventory"
assert "pelmen" in brief_str, "Sysadmin brief must contain username for inventory"
def test_sysadmin_task_brief_is_dict_not_string(client):
"""Sysadmin task brief must be a structured dict (not raw string) for agent parsing."""
import web.api as api_module
from core.db import init_db
from core import models as m
with patch("subprocess.Popen") as mock_popen:
mock_popen.return_value = MagicMock(pid=99999)
r = client.post("/api/projects/corelock/environments", json={
"name": "brief-type-test",
"host": "10.5.1.1",
"username": "root",
"is_installed": True,
})
task_id = r.json()["scan_task_id"]
conn = init_db(api_module.DB_PATH)
task = m.get_task(conn, task_id)
conn.close()
assert isinstance(task["brief"], dict), (
f"Sysadmin task brief must be a dict, got {type(task['brief'])}"
)
def test_post_migration_create_environment_works(tmp_path):
"""AC: After DB migration from old schema, create_environment works end-to-end."""
import web.api as api_module
from fastapi.testclient import TestClient
# Set up DB with old schema using a file-based DB (to test init_db migration path)
old_db_path = tmp_path / "old.db"
conn = sqlite3.connect(str(old_db_path))
conn.row_factory = sqlite3.Row
conn.executescript("""
CREATE TABLE projects (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
path TEXT,
status TEXT DEFAULT 'active',
language TEXT DEFAULT 'ru',
execution_mode TEXT NOT NULL DEFAULT 'review'
);
CREATE TABLE tasks (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL,
title TEXT NOT NULL,
status TEXT DEFAULT 'pending',
execution_mode TEXT
);
CREATE TABLE project_environments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id TEXT NOT NULL REFERENCES projects(id),
label TEXT NOT NULL,
host TEXT NOT NULL,
port INTEGER DEFAULT 22,
login TEXT NOT NULL,
auth_type TEXT NOT NULL DEFAULT 'password',
credential TEXT,
is_installed INTEGER NOT NULL DEFAULT 0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
UNIQUE(project_id, label)
);
INSERT INTO projects VALUES ('corelock', 'Corelock', '/corelock', 'active', 'ru', 'review');
""")
conn.commit()
conn.close()
# Switch API to use the old DB — init_db will run _migrate on it
api_module.DB_PATH = old_db_path
from web.api import app
c = TestClient(app)
# Trigger init_db migration by making a request
r = c.post("/api/projects/corelock/environments", json={
"name": "prod",
"host": "10.5.1.254",
"username": "pelmen",
"auth_value": "topsecret",
})
assert r.status_code == 201, (
f"After migration from old schema, create_environment must return 201, got {r.status_code}: {r.text}"
)
assert r.json()["name"] == "prod"
assert r.json()["username"] == "pelmen"


@@ -0,0 +1,551 @@
"""
Regression tests for KIN-091:
(1) Revise button feedback loop, revise_count, target_role, max limit
(2) Auto-test before review _run_project_tests, fix loop, block on exhaustion
(3) Spec-driven workflow route exists and has correct steps in specialists.yaml
(4) Git worktrees create/merge/cleanup/ensure_gitignore with mocked subprocess
(5) Auto-trigger pipeline task with label 'auto' triggers pipeline on creation
"""
import json
import subprocess
import pytest
from pathlib import Path
from unittest.mock import patch, MagicMock, call
import web.api as api_module
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def client(tmp_path):
db_path = tmp_path / "test.db"
api_module.DB_PATH = db_path
from web.api import app
from fastapi.testclient import TestClient
c = TestClient(app)
c.post("/api/projects", json={"id": "p1", "name": "P1", "path": "/tmp/p1"})
c.post("/api/tasks", json={"project_id": "p1", "title": "Fix bug"})
return c
@pytest.fixture
def conn():
from core.db import init_db
from core import models
c = init_db(":memory:")
models.create_project(c, "vdol", "ВДОЛЬ", "~/projects/vdolipoperek",
tech_stack=["vue3"])
models.create_task(c, "VDOL-001", "vdol", "Fix bug",
brief={"route_type": "debug"})
yield c
c.close()
# ---------------------------------------------------------------------------
# (1) Revise button — revise_count, target_role, max limit
# ---------------------------------------------------------------------------
class TestReviseEndpoint:
def test_revise_increments_revise_count(self, client):
"""revise_count начинается с 0 и увеличивается на 1 при каждом вызове."""
r = client.post("/api/tasks/P1-001/revise", json={"comment": "ещё раз"})
assert r.status_code == 200
assert r.json()["revise_count"] == 1
r = client.post("/api/tasks/P1-001/revise", json={"comment": "и ещё"})
assert r.status_code == 200
assert r.json()["revise_count"] == 2
def test_revise_stores_target_role(self, client):
"""target_role сохраняется в задаче в БД."""
from core.db import init_db
from core import models
r = client.post("/api/tasks/P1-001/revise", json={
"comment": "доработай бэкенд",
"target_role": "backend_dev",
})
assert r.status_code == 200
conn = init_db(api_module.DB_PATH)
row = conn.execute(
"SELECT revise_target_role FROM tasks WHERE id = 'P1-001'"
).fetchone()
conn.close()
assert row["revise_target_role"] == "backend_dev"
def test_revise_target_role_builds_short_steps(self, client):
"""Если передан target_role, pipeline_steps = [target_role, reviewer]."""
r = client.post("/api/tasks/P1-001/revise", json={
"comment": "фикс",
"target_role": "frontend_dev",
})
assert r.status_code == 200
steps = r.json()["pipeline_steps"]
roles = [s["role"] for s in steps]
assert roles == ["frontend_dev", "reviewer"]
def test_revise_max_count_exceeded_returns_400(self, client):
"""После 5 ревизий следующий вызов возвращает 400."""
from core.db import init_db
from core import models
conn = init_db(api_module.DB_PATH)
models.update_task(conn, "P1-001", revise_count=5)
conn.close()
r = client.post("/api/tasks/P1-001/revise", json={"comment": "6-й"})
assert r.status_code == 400
assert "Max revisions" in r.json()["detail"]
def test_revise_sets_status_in_progress(self, client):
"""После /revise задача переходит в статус in_progress."""
r = client.post("/api/tasks/P1-001/revise", json={"comment": "исправь"})
assert r.status_code == 200
assert r.json()["status"] == "in_progress"
def test_revise_only_visible_for_review_done_tasks(self, client):
"""Задача со статусом 'review' возвращает 200, а не 404."""
from core.db import init_db
from core import models
conn = init_db(api_module.DB_PATH)
models.update_task(conn, "P1-001", status="review")
conn.close()
r = client.post("/api/tasks/P1-001/revise", json={"comment": "review→revise"})
assert r.status_code == 200
def test_revise_done_task_allowed(self, client):
"""Задача со статусом 'done' тоже может быть ревизована."""
from core.db import init_db
from core import models
conn = init_db(api_module.DB_PATH)
models.update_task(conn, "P1-001", status="done")
conn.close()
r = client.post("/api/tasks/P1-001/revise", json={"comment": "done→revise"})
assert r.status_code == 200
assert r.json()["status"] == "in_progress"
# ---------------------------------------------------------------------------
# (2) Auto-test before review — _run_project_tests, fix loop, block
# ---------------------------------------------------------------------------
class TestRunProjectTests:
def test_returns_success_when_make_exits_0(self):
"""_run_project_tests возвращает success=True при returncode=0."""
from agents.runner import _run_project_tests
mock_result = MagicMock()
mock_result.returncode = 0
mock_result.stdout = "All tests passed."
mock_result.stderr = ""
with patch("agents.runner.subprocess.run", return_value=mock_result):
result = _run_project_tests("/fake/path")
assert result["success"] is True
assert "All tests passed." in result["output"]
def test_returns_failure_when_make_exits_nonzero(self):
"""_run_project_tests возвращает success=False при returncode!=0."""
from agents.runner import _run_project_tests
mock_result = MagicMock()
mock_result.returncode = 2
mock_result.stdout = ""
mock_result.stderr = "FAILED 3 tests"
with patch("agents.runner.subprocess.run", return_value=mock_result):
result = _run_project_tests("/fake/path")
assert result["success"] is False
assert "FAILED" in result["output"]
def test_handles_make_not_found(self):
"""_run_project_tests возвращает success=False если make не найден."""
from agents.runner import _run_project_tests
with patch("agents.runner.subprocess.run", side_effect=FileNotFoundError):
result = _run_project_tests("/fake/path")
assert result["success"] is False
assert result["returncode"] == 127
def test_handles_timeout(self):
"""_run_project_tests возвращает success=False при таймауте."""
from agents.runner import _run_project_tests
with patch("agents.runner.subprocess.run",
side_effect=subprocess.TimeoutExpired(cmd="make", timeout=120)):
result = _run_project_tests("/fake/path", timeout=120)
assert result["success"] is False
assert result["returncode"] == 124
def _mock_success(output="done"):
m = MagicMock()
m.stdout = json.dumps({"result": output})
m.stderr = ""
m.returncode = 0
return m
def _mock_failure(msg="error"):
m = MagicMock()
m.stdout = ""
m.stderr = msg
m.returncode = 1
return m
class TestAutoTestInPipeline:
"""Pipeline с auto_test_enabled: тесты запускаются автоматически после dev-шага."""
@patch("agents.runner._run_autocommit")
@patch("agents.runner._run_project_tests")
@patch("agents.runner.subprocess.run")
def test_auto_test_passes_pipeline_continues(
self, mock_run, mock_tests, mock_autocommit, conn
):
"""Если авто-тест проходит — pipeline завершается успешно."""
from agents.runner import run_pipeline
from core import models
mock_run.return_value = _mock_success()
mock_tests.return_value = {"success": True, "output": "OK", "returncode": 0}
models.update_project(conn, "vdol", auto_test_enabled=True)
steps = [{"role": "backend_dev", "brief": "implement"}]
result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is True
mock_tests.assert_called_once()
@patch("agents.runner._run_autocommit")
@patch("agents.runner._run_project_tests")
@patch("agents.runner.subprocess.run")
def test_auto_test_disabled_not_called(
self, mock_run, mock_tests, mock_autocommit, conn
):
"""Если auto_test_enabled=False — make test не вызывается."""
from agents.runner import run_pipeline
from core import models
mock_run.return_value = _mock_success()
# auto_test_enabled defaults to 0
steps = [{"role": "backend_dev", "brief": "implement"}]
run_pipeline(conn, "VDOL-001", steps)
mock_tests.assert_not_called()
@patch("agents.runner._run_autocommit")
@patch("agents.runner._run_project_tests")
@patch("agents.runner.subprocess.run")
def test_auto_test_fail_triggers_fix_loop(
self, mock_run, mock_tests, mock_autocommit, conn
):
"""Если авто-тест падает — запускается fixer агент и тесты перезапускаются."""
from agents.runner import run_pipeline
from core import models
import os
mock_run.return_value = _mock_success()
# First test call fails, second passes
mock_tests.side_effect = [
{"success": False, "output": "FAILED: test_foo", "returncode": 1},
{"success": True, "output": "OK", "returncode": 0},
]
models.update_project(conn, "vdol", auto_test_enabled=True)
with patch.dict(os.environ, {"KIN_AUTO_TEST_MAX_ATTEMPTS": "3"}):
steps = [{"role": "backend_dev", "brief": "implement"}]
result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is True
# _run_project_tests called twice: initial check + after fix
assert mock_tests.call_count == 2
# subprocess.run called at least twice: backend_dev + fixer backend_dev
assert mock_run.call_count >= 2
@patch("agents.runner._run_autocommit")
@patch("agents.runner._run_project_tests")
@patch("agents.runner.subprocess.run")
def test_auto_test_exhausted_blocks_task(
self, mock_run, mock_tests, mock_autocommit, conn
):
"""Если авто-тест падает max_attempts раз — задача блокируется."""
from agents.runner import run_pipeline
from core import models
import os
mock_run.return_value = _mock_success()
# The test always fails
mock_tests.return_value = {"success": False, "output": "FAILED", "returncode": 1}
models.update_project(conn, "vdol", auto_test_enabled=True)
with patch.dict(os.environ, {"KIN_AUTO_TEST_MAX_ATTEMPTS": "2"}):
steps = [{"role": "backend_dev", "brief": "implement"}]
result = run_pipeline(conn, "VDOL-001", steps)
assert result["success"] is False
task = models.get_task(conn, "VDOL-001")
assert task["status"] == "blocked"
assert "Auto-test" in (task.get("blocked_reason") or "")
@patch("agents.runner._run_autocommit")
@patch("agents.runner._run_project_tests")
@patch("agents.runner.subprocess.run")
def test_auto_test_not_triggered_for_non_dev_roles(
self, mock_run, mock_tests, mock_autocommit, conn
):
"""auto_test запускается только для backend_dev/frontend_dev, не для debugger."""
from agents.runner import run_pipeline
from core import models
mock_run.return_value = _mock_success()
models.update_project(conn, "vdol", auto_test_enabled=True)
steps = [{"role": "debugger", "brief": "find"}]
run_pipeline(conn, "VDOL-001", steps)
mock_tests.assert_not_called()
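The retry-and-fix behavior these tests pin down can be sketched roughly like this. This is a minimal stdlib sketch, not the real implementation: `run_tests` and `run_fixer` are injected stand-ins for `_run_project_tests` and the fixer-agent invocation, whose actual signatures live in `agents.runner`.

```python
import os

def auto_test_loop(run_tests, run_fixer, max_attempts=None):
    # Run the project tests; on failure, hand the output to a fixer agent and
    # retry, up to max_attempts total test runs. Returns (success, attempts).
    if max_attempts is None:
        max_attempts = int(os.environ.get("KIN_AUTO_TEST_MAX_ATTEMPTS", "3"))
    for attempt in range(1, max_attempts + 1):
        result = run_tests()
        if result["success"]:
            return True, attempt
        if attempt < max_attempts:
            run_fixer(result["output"])  # feed the failure output to the fixer
    return False, max_attempts
```

With this shape, `test_auto_test_fail_triggers_fix_loop` corresponds to a fail-then-pass sequence (two test runs, one fixer call), and `test_auto_test_exhausted_blocks_task` to a loop that returns False after max_attempts.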
# ---------------------------------------------------------------------------
# (3) Spec-driven workflow route
# ---------------------------------------------------------------------------
class TestSpecDrivenRoute:
def _load_specialists(self):
import yaml
spec_path = Path(__file__).parent.parent / "agents" / "specialists.yaml"
with open(spec_path) as f:
return yaml.safe_load(f)
def test_spec_driven_route_exists(self):
"""Маршрут spec_driven должен быть объявлен в specialists.yaml."""
data = self._load_specialists()
assert "spec_driven" in data.get("routes", {})
def test_spec_driven_route_steps_order(self):
"""spec_driven route: шаги [constitution, spec, architect, task_decomposer]."""
data = self._load_specialists()
steps = data["routes"]["spec_driven"]["steps"]
assert steps == ["constitution", "spec", "architect", "task_decomposer"]
def test_spec_driven_all_roles_exist(self):
"""Все роли в spec_driven route должны быть объявлены в specialists."""
data = self._load_specialists()
specialists = data.get("specialists", {})
for role in data["routes"]["spec_driven"]["steps"]:
assert role in specialists, f"Role '{role}' missing from specialists"
def test_constitution_role_has_output_schema(self):
"""constitution должен иметь output_schema (principles, constraints, goals)."""
data = self._load_specialists()
schema = data["specialists"]["constitution"].get("output_schema", {})
assert "principles" in schema
assert "constraints" in schema
assert "goals" in schema
def test_spec_role_has_output_schema(self):
"""spec должен иметь output_schema (overview, features, api_contracts)."""
data = self._load_specialists()
schema = data["specialists"]["spec"].get("output_schema", {})
assert "overview" in schema
assert "features" in schema
assert "api_contracts" in schema
# ---------------------------------------------------------------------------
# (4) Git worktrees — create / merge / cleanup / ensure_gitignore
# ---------------------------------------------------------------------------
class TestCreateWorktree:
def test_create_worktree_success(self, tmp_path):
"""create_worktree возвращает путь при успешном git worktree add."""
from core.worktree import create_worktree
mock_r = MagicMock()
mock_r.returncode = 0
mock_r.stderr = ""
with patch("core.worktree.subprocess.run", return_value=mock_r):
path = create_worktree(str(tmp_path), "TASK-001", "backend_dev")
assert path is not None
assert "TASK-001-backend_dev" in path
def test_create_worktree_git_failure_returns_none(self, tmp_path):
"""create_worktree возвращает None если git worktree add провалился."""
from core.worktree import create_worktree
mock_r = MagicMock()
mock_r.returncode = 128
mock_r.stderr = "fatal: branch already exists"
with patch("core.worktree.subprocess.run", return_value=mock_r):
path = create_worktree(str(tmp_path), "TASK-001", "backend_dev")
assert path is None
def test_create_worktree_exception_returns_none(self, tmp_path):
"""create_worktree возвращает None при неожиданном исключении (не поднимает)."""
from core.worktree import create_worktree
with patch("core.worktree.subprocess.run", side_effect=OSError("no git")):
path = create_worktree(str(tmp_path), "TASK-001", "backend_dev")
assert path is None
def test_create_worktree_branch_name_sanitized(self, tmp_path):
"""Слэши и пробелы в имени шага заменяются на _."""
from core.worktree import create_worktree
mock_r = MagicMock()
mock_r.returncode = 0
mock_r.stderr = ""
calls_made = []
def capture(*args, **kwargs):
calls_made.append(args[0])
return mock_r
with patch("core.worktree.subprocess.run", side_effect=capture):
create_worktree(str(tmp_path), "TASK-001", "step/with spaces")
assert calls_made
cmd = calls_made[0]
branch = cmd[cmd.index("-b") + 1]
assert "/" not in branch
assert " " not in branch
class TestMergeWorktree:
def test_merge_success_returns_merged_files(self, tmp_path):
"""merge_worktree возвращает success=True и список файлов при успешном merge."""
from core.worktree import merge_worktree
worktree = str(tmp_path / "TASK-001-backend_dev")
merge_ok = MagicMock(returncode=0, stdout="", stderr="")
diff_ok = MagicMock(returncode=0, stdout="src/api.py\nsrc/models.py\n", stderr="")
with patch("core.worktree.subprocess.run", side_effect=[merge_ok, diff_ok]):
result = merge_worktree(worktree, str(tmp_path))
assert result["success"] is True
assert "src/api.py" in result["merged_files"]
assert result["conflicts"] == []
def test_merge_conflict_returns_conflict_list(self, tmp_path):
"""merge_worktree возвращает success=False и список конфликтных файлов."""
from core.worktree import merge_worktree
worktree = str(tmp_path / "TASK-001-backend_dev")
merge_fail = MagicMock(returncode=1, stdout="", stderr="CONFLICT")
conflict_files = MagicMock(returncode=0, stdout="src/models.py\n", stderr="")
abort = MagicMock(returncode=0)
with patch("core.worktree.subprocess.run",
side_effect=[merge_fail, conflict_files, abort]):
result = merge_worktree(worktree, str(tmp_path))
assert result["success"] is False
assert "src/models.py" in result["conflicts"]
def test_merge_exception_returns_success_false(self, tmp_path):
"""merge_worktree никогда не поднимает исключение."""
from core.worktree import merge_worktree
with patch("core.worktree.subprocess.run", side_effect=OSError("git died")):
result = merge_worktree("/fake/wt", str(tmp_path))
assert result["success"] is False
assert "error" in result
class TestCleanupWorktree:
def test_cleanup_calls_worktree_remove_and_branch_delete(self, tmp_path):
"""cleanup_worktree вызывает git worktree remove и git branch -D."""
from core.worktree import cleanup_worktree
calls = []
def capture(*args, **kwargs):
calls.append(args[0])
return MagicMock(returncode=0)
with patch("core.worktree.subprocess.run", side_effect=capture):
cleanup_worktree("/fake/path/TASK-branch", str(tmp_path))
assert len(calls) == 2
# first call: worktree remove
assert "worktree" in calls[0]
assert "remove" in calls[0]
# second call: branch -D
assert "branch" in calls[1]
assert "-D" in calls[1]
def test_cleanup_never_raises(self, tmp_path):
"""cleanup_worktree не поднимает исключение при ошибке."""
from core.worktree import cleanup_worktree
with patch("core.worktree.subprocess.run", side_effect=OSError("crashed")):
cleanup_worktree("/fake/wt", str(tmp_path)) # должно пройти тихо
class TestEnsureGitignore:
def test_adds_entry_to_existing_gitignore(self, tmp_path):
"""ensure_gitignore добавляет .kin_worktrees/ в существующий .gitignore."""
from core.worktree import ensure_gitignore
gi = tmp_path / ".gitignore"
gi.write_text("*.pyc\n__pycache__/\n")
ensure_gitignore(str(tmp_path))
assert ".kin_worktrees/" in gi.read_text()
def test_creates_gitignore_if_missing(self, tmp_path):
"""ensure_gitignore создаёт .gitignore если его нет."""
from core.worktree import ensure_gitignore
ensure_gitignore(str(tmp_path))
gi = tmp_path / ".gitignore"
assert gi.exists()
assert ".kin_worktrees/" in gi.read_text()
def test_skips_if_entry_already_present(self, tmp_path):
"""ensure_gitignore не дублирует запись."""
from core.worktree import ensure_gitignore
gi = tmp_path / ".gitignore"
gi.write_text(".kin_worktrees/\n")
ensure_gitignore(str(tmp_path))
content = gi.read_text()
assert content.count(".kin_worktrees/") == 1
def test_never_raises_on_permission_error(self, tmp_path):
"""ensure_gitignore не поднимает исключение при ошибке записи."""
from core.worktree import ensure_gitignore
with patch("core.worktree.Path.open", side_effect=PermissionError):
ensure_gitignore(str(tmp_path))  # must pass silently
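All four behaviors above (append, create, deduplicate, never raise) fit in a few lines. A sketch under the assumption that the real helper works on the repository root path, as the tests call it:

```python
from pathlib import Path

WORKTREE_ENTRY = ".kin_worktrees/"

def ensure_gitignore(repo_path):
    # Idempotently make sure .gitignore contains the worktree directory.
    # Any I/O failure (missing perms etc.) is swallowed: this is best-effort.
    gitignore = Path(repo_path) / ".gitignore"
    try:
        existing = gitignore.read_text() if gitignore.exists() else ""
        if WORKTREE_ENTRY in existing.splitlines():
            return
        sep = "" if (not existing or existing.endswith("\n")) else "\n"
        with gitignore.open("a") as f:
            f.write(f"{sep}{WORKTREE_ENTRY}\n")
    except OSError:
        pass  # PermissionError is an OSError subclass: stay silent
```

Checking `splitlines()` rather than substring membership avoids false positives from entries that merely contain the prefix.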
# ---------------------------------------------------------------------------
# (5) Auto-trigger pipeline — label 'auto'
# ---------------------------------------------------------------------------
class TestAutoTrigger:
def test_task_with_auto_label_triggers_pipeline(self, client):
"""Создание задачи с label 'auto' запускает pipeline в фоне."""
with patch("web.api._launch_pipeline_subprocess") as mock_launch:
r = client.post("/api/tasks", json={
"project_id": "p1",
"title": "Auto task",
"labels": ["auto"],
})
assert r.status_code == 200
mock_launch.assert_called_once()
called_task_id = mock_launch.call_args[0][0]
assert called_task_id.startswith("P1-")
def test_task_without_auto_label_does_not_trigger(self, client):
"""Создание задачи без label 'auto' НЕ запускает pipeline."""
with patch("web.api._launch_pipeline_subprocess") as mock_launch:
r = client.post("/api/tasks", json={
"project_id": "p1",
"title": "Manual task",
"labels": ["feature"],
})
assert r.status_code == 200
mock_launch.assert_not_called()
def test_task_without_labels_does_not_trigger(self, client):
"""Создание задачи без labels вообще НЕ запускает pipeline."""
with patch("web.api._launch_pipeline_subprocess") as mock_launch:
r = client.post("/api/tasks", json={
"project_id": "p1",
"title": "Plain task",
})
assert r.status_code == 200
mock_launch.assert_not_called()
def test_task_with_auto_among_multiple_labels_triggers(self, client):
"""Задача с несколькими метками включая 'auto' запускает pipeline."""
with patch("web.api._launch_pipeline_subprocess") as mock_launch:
r = client.post("/api/tasks", json={
"project_id": "p1",
"title": "Multi-label auto task",
"labels": ["feature", "auto", "backend"],
})
assert r.status_code == 200
mock_launch.assert_called_once()
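The trigger condition behind all four cases reduces to a one-line predicate. A sketch with a hypothetical helper name; the real check presumably lives inline in the `web.api` task-creation handler:

```python
def should_auto_trigger(labels):
    # A pipeline launches only when the new task carries the literal label
    # 'auto'; any other label set, or no labels at all, stays manual.
    return "auto" in (labels or [])
```

The `or []` guard covers payloads where `labels` is omitted entirely (the third test above).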

tests/test_kin_biz_002.py Normal file
@@ -0,0 +1,200 @@
"""Regression tests for KIN-BIZ-002.
Problem: approve via /tasks/{id}/approve did not advance the phase state machine.
Fix: approve_task() gained a block that calls approve_phase() from core.phases
when the task belongs to an active phase.
The approve_phase() endpoint also gained task.status='done' synchronization.
Covered:
1. POST /tasks/{id}/approve for a phase task → phase.status=done, next phase active
2. DB changes persist after approve
3. POST /tasks/{id}/approve for a regular task breaks nothing, phase=None
4. POST /phases/{id}/approve → task.status is synchronized to done
"""
import pytest
from fastapi.testclient import TestClient
import web.api as api_module
@pytest.fixture
def client(tmp_path):
"""Изолированная временная БД для каждого теста."""
db_path = tmp_path / "test_biz002.db"
api_module.DB_PATH = db_path
from web.api import app
return TestClient(app)
def _create_project_with_phases(client, project_id: str = "proj_biz002") -> dict:
"""Вспомогательная: создаёт проект с двумя researcher-фазами + architect."""
r = client.post("/api/projects/new", json={
"id": project_id,
"name": "BIZ-002 Test Project",
"path": f"/tmp/{project_id}",
"description": "Тест регрессии KIN-BIZ-002",
"roles": ["business_analyst", "tech_researcher"],
})
assert r.status_code == 200, r.json()
return r.json()
def _get_active_phase(client, project_id: str) -> dict:
"""Вспомогательная: возвращает первую активную фазу."""
phases = client.get(f"/api/projects/{project_id}/phases").json()
active = next(ph for ph in phases if ph["status"] == "active")
return active
# ---------------------------------------------------------------------------
# KIN-BIZ-002 regression tests
# ---------------------------------------------------------------------------
def test_KIN_BIZ_002_approve_task_advances_phase_state_machine(client):
"""KIN-BIZ-002: POST /tasks/{id}/approve для phase-задачи продвигает state machine.
Ожидаем: phase.status=approved, next_phase активирован.
"""
_create_project_with_phases(client)
active_phase = _get_active_phase(client, "proj_biz002")
task_id = active_phase["task_id"]
r = client.post(f"/api/tasks/{task_id}/approve", json={})
assert r.status_code == 200
data = r.json()
assert data["status"] == "done"
# The phase key must be present and contain the result
assert "phase" in data
assert data["phase"] is not None
# The approved phase has status=approved
assert data["phase"]["phase"]["status"] == "approved"
# The next phase was activated
assert data["phase"]["next_phase"] is not None
assert data["phase"]["next_phase"]["status"] == "active"
def test_KIN_BIZ_002_approve_task_phase_status_persists_in_db(client):
"""KIN-BIZ-002: после approve через /tasks/{id}/approve статусы фаз корректны в БД.
Первая фаза approved, вторая фаза active.
"""
data = _create_project_with_phases(client)
# Three phases: business_analyst, tech_researcher, architect
assert len(data["phases"]) == 3
active_phase = _get_active_phase(client, "proj_biz002")
task_id = active_phase["task_id"]
client.post(f"/api/tasks/{task_id}/approve", json={})
# Re-read the phases from the DB
phases = client.get("/api/projects/proj_biz002/phases").json()
statuses = {ph["role"]: ph["status"] for ph in phases}
assert statuses["business_analyst"] == "approved"
assert statuses["tech_researcher"] == "active"
assert statuses["architect"] == "pending"
def test_KIN_BIZ_002_approve_task_task_status_is_done(client):
"""KIN-BIZ-002: сама задача должна иметь status=done после approve."""
_create_project_with_phases(client)
active_phase = _get_active_phase(client, "proj_biz002")
task_id = active_phase["task_id"]
client.post(f"/api/tasks/{task_id}/approve", json={})
task = client.get(f"/api/tasks/{task_id}").json()
assert task["status"] == "done"
def test_KIN_BIZ_002_approve_regular_task_does_not_affect_phases(client):
"""KIN-BIZ-002: approve обычной задачи (без фазы) не ломает ничего, phase=None."""
# Создаём обычный проект без фаз
client.post("/api/projects", json={
"id": "plain_proj",
"name": "Plain Project",
"path": "/tmp/plain_proj",
})
r_task = client.post("/api/tasks", json={
"project_id": "plain_proj",
"title": "Обычная задача без фазы",
})
assert r_task.status_code == 200
task_id = r_task.json()["id"]
r = client.post(f"/api/tasks/{task_id}/approve", json={})
assert r.status_code == 200
data = r.json()
assert data["status"] == "done"
# phase must be None: there is no linked phase
assert data["phase"] is None
def test_KIN_BIZ_002_approve_regular_task_sets_status_done(client):
"""KIN-BIZ-002: approve обычной задачи корректно устанавливает status=done."""
client.post("/api/projects", json={
"id": "plain2",
"name": "Plain2",
"path": "/tmp/plain2",
})
r_task = client.post("/api/tasks", json={
"project_id": "plain2",
"title": "Задача без фазы",
})
task_id = r_task.json()["id"]
client.post(f"/api/tasks/{task_id}/approve", json={})
task = client.get(f"/api/tasks/{task_id}").json()
assert task["status"] == "done"
def test_KIN_BIZ_002_approve_phase_endpoint_syncs_task_status_to_done(client):
"""KIN-BIZ-002: POST /phases/{id}/approve синхронизирует task.status=done.
Гарантируем консистентность обоих путей одобрения фазы.
"""
_create_project_with_phases(client)
active_phase = _get_active_phase(client, "proj_biz002")
phase_id = active_phase["id"]
task_id = active_phase["task_id"]
r = client.post(f"/api/phases/{phase_id}/approve", json={})
assert r.status_code == 200
# The task linked to the phase must have status=done
task = client.get(f"/api/tasks/{task_id}").json()
assert task["status"] == "done"
def test_KIN_BIZ_002_full_phase_chain_two_approves_completes_workflow(client):
"""KIN-BIZ-002: последовательный approve через /tasks/{id}/approve проходит весь chain.
business_analyst approved tech_researcher approved architect approved.
"""
_create_project_with_phases(client)
phases_init = client.get("/api/projects/proj_biz002/phases").json()
assert len(phases_init) == 3
# Approve each phase in turn via the task endpoint
for _ in range(3):
phases = client.get("/api/projects/proj_biz002/phases").json()
active = next((ph for ph in phases if ph["status"] == "active"), None)
if active is None:
break
task_id = active["task_id"]
r = client.post(f"/api/tasks/{task_id}/approve", json={})
assert r.status_code == 200
assert r.json()["status"] == "done"
# After all approves, every phase must be approved
final_phases = client.get("/api/projects/proj_biz002/phases").json()
for ph in final_phases:
assert ph["status"] == "approved", (
f"Ожидали approved для {ph['role']}, получили {ph['status']}"
)
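The state machine the chain test walks can be sketched as a pure function over an ordered phase list. A hypothetical simplification of `core.phases.approve_phase`: the real version reads and writes the DB, while this sketch mutates in-memory dicts with the same `status` values the tests assert on.

```python
def approve_phase(phases, phase_id):
    # Mark the matching phase approved and activate the next pending phase.
    # Returns {"phase": approved_phase, "next_phase": activated_or_None},
    # mirroring the response shape the tests above inspect.
    approved = next_phase = None
    for i, ph in enumerate(phases):
        if ph["id"] == phase_id:
            ph["status"] = "approved"
            approved = ph
            if i + 1 < len(phases) and phases[i + 1]["status"] == "pending":
                phases[i + 1]["status"] = "active"
                next_phase = phases[i + 1]
            break
    return {"phase": approved, "next_phase": next_phase}
```

Approving the last phase yields `next_phase=None`, which is how the chain test's loop knows the workflow is complete.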

@@ -0,0 +1,388 @@
"""Tests for KIN-BIZ-007: Fernet encryption of credentials in project_environments.
Acceptance criteria:
1. Roundtrip: _encrypt_auth → _decrypt_auth returns the original string.
2. Migration: b64:-prefixed record is auto-re-encrypted on read; decrypt returns plaintext.
3. Missing KIN_SECRET_KEY → scan endpoint returns 503 (not 500).
4. Runner path: get_environment() returns decrypted plaintext auth_value.
5. Old _obfuscate_auth / _deobfuscate_auth are not present anywhere.
Decision #214: patch on the consuming module, not the defining one.
Decision #215: use mock.assert_called_once().
"""
import base64
import os
import pytest
from unittest.mock import patch, MagicMock
from core.db import init_db
from core import models
@pytest.fixture
def conn():
"""Fresh in-memory DB for each test."""
c = init_db(db_path=":memory:")
yield c
c.close()
@pytest.fixture
def conn_with_project(conn):
"""In-memory DB with a test project."""
models.create_project(conn, "testproj", "Test Project", "/test")
return conn
@pytest.fixture
def scan_client(tmp_path):
"""TestClient with project + environment pre-created. Returns (client, env_id)."""
import web.api as api_module
api_module.DB_PATH = tmp_path / "scan_biz007.db"
from web.api import app
from fastapi.testclient import TestClient
c = TestClient(app)
c.post("/api/projects", json={"id": "scanproj", "name": "Scan Project", "path": "/scan"})
r = c.post("/api/projects/scanproj/environments", json={
"name": "prod", "host": "10.0.0.1", "username": "root",
})
env_id = r.json()["id"]
return c, env_id
@pytest.fixture
def env_client(tmp_path):
"""TestClient with just a project pre-created. Returns client."""
import web.api as api_module
api_module.DB_PATH = tmp_path / "env_biz007.db"
from web.api import app
from fastapi.testclient import TestClient
c = TestClient(app)
c.post("/api/projects", json={"id": "envproj", "name": "Env Project", "path": "/env"})
return c
# ---------------------------------------------------------------------------
# AC1: Roundtrip — _encrypt_auth → _decrypt_auth returns original string
# ---------------------------------------------------------------------------
def test_encrypt_decrypt_roundtrip_returns_original(conn):
"""AC1: encrypt → decrypt returns the exact original plaintext."""
original = "my_super_secret_password"
encrypted = models._encrypt_auth(original)
decrypted = models._decrypt_auth(encrypted)
assert decrypted == original
def test_encrypt_produces_different_value_than_plaintext(conn):
"""AC1: encrypted value is not the original (Fernet token, not plaintext)."""
original = "plain_secret"
encrypted = models._encrypt_auth(original)
assert encrypted != original
assert not encrypted.startswith("b64:")
def test_encrypt_two_calls_produce_different_tokens(conn):
"""AC1: Fernet uses random IV — two encryptions of same value differ but both decrypt correctly."""
value = "same_password"
enc1 = models._encrypt_auth(value)
enc2 = models._encrypt_auth(value)
# Encrypted forms must differ due to Fernet IV randomness
assert enc1 != enc2
# Both must decrypt to original
assert models._decrypt_auth(enc1) == value
assert models._decrypt_auth(enc2) == value
def test_encrypt_raises_runtime_error_when_no_key(monkeypatch):
"""AC1: _encrypt_auth raises RuntimeError when KIN_SECRET_KEY is absent."""
monkeypatch.delenv("KIN_SECRET_KEY", raising=False)
with pytest.raises(RuntimeError, match="KIN_SECRET_KEY"):
models._encrypt_auth("any_value")
def test_decrypt_fernet_token_without_key_returns_raw_not_plaintext(monkeypatch):
"""AC1: _decrypt_auth without key cannot recover plaintext — returns stored token, not original."""
original = "secret"
encrypted = models._encrypt_auth(original)
monkeypatch.delenv("KIN_SECRET_KEY", raising=False)
result = models._decrypt_auth(encrypted)
# Without the key we cannot get the plaintext back
assert result != original
# ---------------------------------------------------------------------------
# AC2: Migration — b64: record auto-re-encrypted on read
# ---------------------------------------------------------------------------
def test_decrypt_auth_handles_b64_prefix_without_db(conn):
"""AC2: _decrypt_auth decodes b64:-prefixed value (no DB needed for the decode itself)."""
plaintext = "legacy_password"
b64_stored = "b64:" + base64.b64encode(plaintext.encode()).decode()
decrypted = models._decrypt_auth(b64_stored)
assert decrypted == plaintext
def test_decrypt_auth_b64_rewrites_db_when_conn_provided(conn_with_project):
"""AC2: _decrypt_auth with conn+env_id re-encrypts b64: value in DB on read."""
conn = conn_with_project
plaintext = "legacy_pass_123"
b64_value = "b64:" + base64.b64encode(plaintext.encode()).decode()
cur = conn.execute(
"""INSERT INTO project_environments
(project_id, name, host, port, username, auth_type, auth_value, is_installed)
VALUES ('testproj', 'legacy', 'host.example.com', 22, 'root', 'password', ?, 0)""",
(b64_value,),
)
conn.commit()
env_id = cur.lastrowid
# Call decrypt with conn+env_id — must trigger re-encryption
decrypted = models._decrypt_auth(b64_value, conn=conn, env_id=env_id)
assert decrypted == plaintext
# DB must now have Fernet token, not b64:
stored_after = conn.execute(
"SELECT auth_value FROM project_environments WHERE id = ?", (env_id,)
).fetchone()["auth_value"]
assert not stored_after.startswith("b64:"), (
"After migration, b64: prefix must be replaced by a Fernet token"
)
# And the new token must decrypt correctly
assert models._decrypt_auth(stored_after) == plaintext
def test_get_environment_migrates_b64_and_returns_plaintext(conn_with_project):
"""AC2: get_environment() transparently migrates b64: values and returns plaintext auth_value."""
conn = conn_with_project
plaintext = "old_secret"
b64_value = "b64:" + base64.b64encode(plaintext.encode()).decode()
cur = conn.execute(
"""INSERT INTO project_environments
(project_id, name, host, port, username, auth_type, auth_value, is_installed)
VALUES ('testproj', 'legacy2', 'host2.example.com', 22, 'root', 'password', ?, 0)""",
(b64_value,),
)
conn.commit()
env_id = cur.lastrowid
env = models.get_environment(conn, env_id)
assert env["auth_value"] == plaintext, (
f"get_environment must return plaintext after b64 migration, got: {env['auth_value']!r}"
)
# DB must be updated: b64: replaced by Fernet token
stored_after = conn.execute(
"SELECT auth_value FROM project_environments WHERE id = ?", (env_id,)
).fetchone()["auth_value"]
assert not stored_after.startswith("b64:"), (
"DB must contain Fernet token after get_environment migrates b64: record"
)
def test_get_environment_second_read_after_migration_still_decrypts(conn_with_project):
"""AC2: After b64 migration, subsequent get_environment calls still return plaintext."""
conn = conn_with_project
plaintext = "migrated_secret"
b64_value = "b64:" + base64.b64encode(plaintext.encode()).decode()
cur = conn.execute(
"""INSERT INTO project_environments
(project_id, name, host, port, username, auth_type, auth_value, is_installed)
VALUES ('testproj', 'legacy3', 'host3.example.com', 22, 'root', 'password', ?, 0)""",
(b64_value,),
)
conn.commit()
env_id = cur.lastrowid
# First read: triggers migration
env1 = models.get_environment(conn, env_id)
assert env1["auth_value"] == plaintext
# Second read: now reads Fernet token (post-migration)
env2 = models.get_environment(conn, env_id)
assert env2["auth_value"] == plaintext
# ---------------------------------------------------------------------------
# AC3: Missing KIN_SECRET_KEY → scan endpoint returns 503 (not 500)
# ---------------------------------------------------------------------------
def test_scan_endpoint_returns_503_when_kin_secret_key_missing(scan_client, monkeypatch):
"""AC3: POST /environments/{id}/scan returns 503 when KIN_SECRET_KEY is not set."""
client, env_id = scan_client
monkeypatch.delenv("KIN_SECRET_KEY", raising=False)
r = client.post(f"/api/projects/scanproj/environments/{env_id}/scan")
assert r.status_code == 503, (
f"scan must return 503 when KIN_SECRET_KEY is missing, got {r.status_code}: {r.text}"
)
def test_scan_endpoint_returns_503_not_500(scan_client, monkeypatch):
"""AC3: HTTP 503 (misconfiguration) must be returned, not 500 (code bug)."""
client, env_id = scan_client
monkeypatch.delenv("KIN_SECRET_KEY", raising=False)
r = client.post(f"/api/projects/scanproj/environments/{env_id}/scan")
assert r.status_code != 500, "Missing KIN_SECRET_KEY must produce 503, not 500"
assert r.status_code == 503
def test_scan_endpoint_returns_202_when_key_present(scan_client):
"""AC3: scan endpoint returns 202 when KIN_SECRET_KEY is correctly set."""
client, env_id = scan_client
with patch("subprocess.Popen") as mock_popen:
mock_popen.return_value = MagicMock(pid=12345)
r = client.post(f"/api/projects/scanproj/environments/{env_id}/scan")
assert r.status_code == 202
# ---------------------------------------------------------------------------
# AC4: Runner path — get_environment() returns decrypted plaintext auth_value
# ---------------------------------------------------------------------------
def test_get_environment_returns_decrypted_auth_value(conn_with_project):
"""AC4: get_environment() returns plaintext, not the Fernet token stored in DB."""
conn = conn_with_project
plaintext = "runner_secret_42"
env = models.create_environment(
conn, "testproj", "runner-env", "10.0.0.10", "root",
auth_value=plaintext,
)
env_id = env["id"]
fetched = models.get_environment(conn, env_id)
assert fetched["auth_value"] == plaintext, (
f"get_environment must return plaintext auth_value, got: {fetched['auth_value']!r}"
)
def test_get_environment_auth_value_is_not_fernet_token(conn_with_project):
"""AC4: auth_value from get_environment is decrypted (not a Fernet base64 token)."""
conn = conn_with_project
plaintext = "real_password_xyz"
env = models.create_environment(
conn, "testproj", "fernet-check", "10.0.0.11", "user",
auth_value=plaintext,
)
# Verify DB stores encrypted (not plaintext)
raw_stored = conn.execute(
"SELECT auth_value FROM project_environments WHERE id = ?", (env["id"],)
).fetchone()["auth_value"]
assert raw_stored != plaintext, "DB must store encrypted value, not plaintext"
# get_environment must return decrypted plaintext
fetched = models.get_environment(conn, env["id"])
assert fetched["auth_value"] == plaintext
def test_get_environment_returns_none_auth_value_when_not_set(conn_with_project):
"""AC4: get_environment() returns auth_value=None when no credential was stored."""
conn = conn_with_project
env = models.create_environment(
conn, "testproj", "no-cred", "10.0.0.12", "user",
auth_value=None,
)
fetched = models.get_environment(conn, env["id"])
assert fetched["auth_value"] is None
def test_create_environment_hides_auth_value_in_return(conn_with_project):
"""AC4: create_environment() returns auth_value=None — plaintext only via get_environment."""
conn = conn_with_project
env = models.create_environment(
conn, "testproj", "hidden-cred", "10.0.0.13", "user",
auth_value="secret",
)
assert env["auth_value"] is None, (
"create_environment must return auth_value=None for API safety"
)
# ---------------------------------------------------------------------------
# AC5: Old _obfuscate_auth / _deobfuscate_auth are not present anywhere
# ---------------------------------------------------------------------------
def test_obfuscate_auth_not_in_core_models():
"""AC5: _obfuscate_auth must not exist in core.models (fully removed)."""
import core.models as m
assert not hasattr(m, "_obfuscate_auth"), (
"_obfuscate_auth must be removed from core.models — use _encrypt_auth instead"
)
def test_deobfuscate_auth_not_in_core_models():
"""AC5: _deobfuscate_auth must not exist in core.models (fully removed)."""
import core.models as m
assert not hasattr(m, "_deobfuscate_auth"), (
"_deobfuscate_auth must be removed from core.models — use _decrypt_auth instead"
)
def test_obfuscate_auth_not_imported_in_web_api():
"""AC5: _obfuscate_auth must not be imported or defined in web.api."""
import web.api as api_mod
assert not hasattr(api_mod, "_obfuscate_auth"), (
"_obfuscate_auth must not appear in web.api"
)
def test_deobfuscate_auth_not_imported_in_web_api():
"""AC5: _deobfuscate_auth must not be imported or defined in web.api."""
import web.api as api_mod
assert not hasattr(api_mod, "_deobfuscate_auth"), (
"_deobfuscate_auth must not appear in web.api"
)
# ---------------------------------------------------------------------------
# AC6 (KIN-095): ModuleNotFoundError for cryptography → 503, not 500
# ---------------------------------------------------------------------------
def test_create_environment_returns_503_when_cryptography_not_installed(env_client):
"""AC6: POST /environments returns 503 when cryptography package missing (not 500)."""
client = env_client
with patch("core.models._encrypt_auth", side_effect=ModuleNotFoundError("No module named 'cryptography'")):
r = client.post("/api/projects/envproj/environments", json={
"name": "creds-env", "host": "10.0.0.20", "username": "root",
"auth_type": "password", "auth_value": "secret",
})
assert r.status_code == 503, (
f"create_environment must return 503 when cryptography is missing, got {r.status_code}: {r.text}"
)
assert "cryptography" in r.json()["detail"].lower()
def test_create_environment_returns_503_not_500_for_missing_cryptography(env_client):
"""AC6: 500 must NOT be returned when cryptography package is absent."""
client = env_client
with patch("core.models._encrypt_auth", side_effect=ModuleNotFoundError("No module named 'cryptography'")):
r = client.post("/api/projects/envproj/environments", json={
"name": "creds-env2", "host": "10.0.0.21", "username": "root",
"auth_value": "secret2",
})
assert r.status_code != 500, "Missing cryptography must produce 503, not 500"
def test_patch_environment_returns_503_when_cryptography_not_installed(env_client):
"""AC6: PATCH /environments/{id} returns 503 when cryptography package missing."""
client = env_client
# Create env without auth_value so no encryption at create time
r = client.post("/api/projects/envproj/environments", json={
"name": "patch-env", "host": "10.0.0.22", "username": "root",
})
assert r.status_code == 201, f"Setup failed: {r.text}"
env_id = r.json()["id"]
with patch("core.models._encrypt_auth", side_effect=ModuleNotFoundError("No module named 'cryptography'")):
r = client.patch(f"/api/projects/envproj/environments/{env_id}", json={
"auth_value": "new_secret",
})
assert r.status_code == 503, (
f"patch_environment must return 503 when cryptography is missing, got {r.status_code}: {r.text}"
)
assert "cryptography" in r.json()["detail"].lower()
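The error-mapping pattern these AC6 tests pin down can be sketched without FastAPI. `encrypt_or_503` and `fake_encrypt` are hypothetical names invented for illustration, not the project's API; the point is only that a missing optional dependency surfaces as 503, never a bare 500:

```python
# Hypothetical sketch of the KIN-095 error mapping: a ModuleNotFoundError from
# the encryption helper becomes a 503 with a descriptive detail, not a 500.
def encrypt_or_503(encrypt, value):
    """Return (status, payload): 201 on success, 503 if cryptography is absent."""
    try:
        return 201, {"auth_value": encrypt(value)}
    except ModuleNotFoundError as exc:
        return 503, {"detail": f"cryptography package not installed: {exc}"}

def fake_encrypt(value):
    # Simulates core.models._encrypt_auth when the package is missing
    raise ModuleNotFoundError("No module named 'cryptography'")

status, body = encrypt_or_503(fake_encrypt, "secret")
```

In the real endpoint the 503 branch would presumably raise `HTTPException(status_code=503, detail=...)` instead of returning a tuple.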


@ -0,0 +1,156 @@
"""Regression tests for KIN-FIX-006: 'ssh_key' must be a valid auth_type.
Root cause: VALID_AUTH_TYPES did not include 'ssh_key', causing 422 on POST credentials.
Fix: VALID_AUTH_TYPES = {"password", "key", "ssh_key"} (web/api.py line 1028).
Acceptance criteria:
1. POST /projects/{id}/environments with auth_type='ssh_key' returns 201 (not 422)
2. auth_type='key' still returns 201
3. auth_type='password' still returns 201
4. auth_type='ftp' (invalid) returns 422
"""
import pytest
from unittest.mock import patch, MagicMock
# ---------------------------------------------------------------------------
# Fixture
# ---------------------------------------------------------------------------
@pytest.fixture
def client(tmp_path):
import web.api as api_module
from importlib import reload
# Reload web.api so module state is fresh, then point DB_PATH at the per-test
# database (reload resets module-level attributes, so assign after reloading).
reload(api_module)
api_module.DB_PATH = tmp_path / "test_fix006.db"
from web.api import app
from fastapi.testclient import TestClient
c = TestClient(app)
c.post("/api/projects", json={"id": "testproj", "name": "Test Project", "path": "/testproj"})
return c
# ---------------------------------------------------------------------------
# Tests: VALID_AUTH_TYPES validation
# ---------------------------------------------------------------------------
def test_create_environment_ssh_key_auth_type_returns_201(client):
"""Regression KIN-FIX-006: auth_type='ssh_key' must return 201, not 422."""
r = client.post("/api/projects/testproj/environments", json={
"name": "prod-ssh",
"host": "10.0.0.1",
"username": "deploy",
"auth_type": "ssh_key",
"auth_value": "-----BEGIN RSA PRIVATE KEY-----",
})
assert r.status_code == 201, (
f"auth_type='ssh_key' must be accepted (201), got {r.status_code}: {r.text}"
)
def test_create_environment_key_auth_type_still_valid(client):
"""auth_type='key' must still return 201 after the fix."""
r = client.post("/api/projects/testproj/environments", json={
"name": "prod-key",
"host": "10.0.0.2",
"username": "deploy",
"auth_type": "key",
"auth_value": "keydata",
})
assert r.status_code == 201, (
f"auth_type='key' must still be valid (201), got {r.status_code}: {r.text}"
)
def test_create_environment_password_auth_type_still_valid(client):
"""auth_type='password' must still return 201 after the fix."""
r = client.post("/api/projects/testproj/environments", json={
"name": "prod-pass",
"host": "10.0.0.3",
"username": "root",
"auth_type": "password",
"auth_value": "s3cr3t",
})
assert r.status_code == 201, (
f"auth_type='password' must still be valid (201), got {r.status_code}: {r.text}"
)
def test_create_environment_invalid_auth_type_returns_422(client):
"""Invalid auth_type (e.g. 'ftp') must return 422 Unprocessable Entity."""
r = client.post("/api/projects/testproj/environments", json={
"name": "prod-ftp",
"host": "10.0.0.4",
"username": "ftpuser",
"auth_type": "ftp",
"auth_value": "password123",
})
assert r.status_code == 422, (
f"auth_type='ftp' must be rejected (422), got {r.status_code}: {r.text}"
)
def test_create_environment_empty_auth_type_returns_422(client):
"""Empty string auth_type must return 422."""
r = client.post("/api/projects/testproj/environments", json={
"name": "prod-empty",
"host": "10.0.0.5",
"username": "root",
"auth_type": "",
})
assert r.status_code == 422, (
f"auth_type='' must be rejected (422), got {r.status_code}: {r.text}"
)
def test_create_environment_default_auth_type_is_password(client):
"""Default auth_type (omitted) must be 'password' and return 201."""
r = client.post("/api/projects/testproj/environments", json={
"name": "prod-default",
"host": "10.0.0.6",
"username": "root",
"auth_value": "pass",
# auth_type intentionally omitted — defaults to 'password'
})
assert r.status_code == 201, (
f"Default auth_type must be accepted (201), got {r.status_code}: {r.text}"
)
# ---------------------------------------------------------------------------
# Test: VALID_AUTH_TYPES content (unit-level)
# ---------------------------------------------------------------------------
def test_valid_auth_types_contains_ssh_key():
"""Unit: VALID_AUTH_TYPES set must include 'ssh_key'."""
from web.api import VALID_AUTH_TYPES
assert "ssh_key" in VALID_AUTH_TYPES, (
f"VALID_AUTH_TYPES must contain 'ssh_key', got: {VALID_AUTH_TYPES}"
)
def test_valid_auth_types_contains_key():
"""Unit: VALID_AUTH_TYPES set must include 'key'."""
from web.api import VALID_AUTH_TYPES
assert "key" in VALID_AUTH_TYPES, (
f"VALID_AUTH_TYPES must contain 'key', got: {VALID_AUTH_TYPES}"
)
def test_valid_auth_types_contains_password():
"""Unit: VALID_AUTH_TYPES set must include 'password'."""
from web.api import VALID_AUTH_TYPES
assert "password" in VALID_AUTH_TYPES, (
f"VALID_AUTH_TYPES must contain 'password', got: {VALID_AUTH_TYPES}"
)
def test_valid_auth_types_excludes_ftp():
"""Unit: VALID_AUTH_TYPES must NOT include 'ftp'."""
from web.api import VALID_AUTH_TYPES
assert "ftp" not in VALID_AUTH_TYPES, (
f"VALID_AUTH_TYPES must not contain 'ftp', got: {VALID_AUTH_TYPES}"
)
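The validation these tests pin down can be sketched as follows. Only `VALID_AUTH_TYPES` is named by the tests; `validate_auth_type` is a hypothetical helper standing in for whatever web/api.py actually does before returning 422:

```python
# Sketch of the KIN-FIX-006 validation: 'ssh_key' joins the allowed set,
# anything outside the set is rejected (the API maps the error to 422).
VALID_AUTH_TYPES = {"password", "key", "ssh_key"}

def validate_auth_type(auth_type: str = "password") -> str:
    """Return auth_type if valid; raise ValueError for invalid values."""
    if auth_type not in VALID_AUTH_TYPES:
        raise ValueError(f"invalid auth_type: {auth_type!r}")
    return auth_type
```

Note that the empty string falls through to the error branch, matching `test_create_environment_empty_auth_type_returns_422`, while the default argument covers the omitted-field case.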


@ -1,8 +1,10 @@
"""Tests for core/models.py — all functions, in-memory SQLite."""
import re
import pytest
from core.db import init_db
from core import models
from core.models import TASK_CATEGORIES
@pytest.fixture
@ -53,6 +55,123 @@ def test_update_project_tech_stack_json(conn):
assert updated["tech_stack"] == ["python", "fastapi"]
# -- project_type and SSH fields (KIN-071) --
def test_create_operations_project(conn):
"""KIN-071: operations project stores SSH fields. KIN-ARCH-005: path is not passed."""
p = models.create_project(
conn, "srv1", "My Server",
project_type="operations",
ssh_host="10.0.0.1",
ssh_user="root",
ssh_key_path="~/.ssh/id_rsa",
ssh_proxy_jump="jumpt",
)
assert p["project_type"] == "operations"
assert p["ssh_host"] == "10.0.0.1"
assert p["ssh_user"] == "root"
assert p["ssh_key_path"] == "~/.ssh/id_rsa"
assert p["ssh_proxy_jump"] == "jumpt"
assert p["path"] is None
def test_create_development_project_defaults(conn):
"""KIN-071: development is default project_type."""
p = models.create_project(conn, "devp", "Dev Project", "/path")
assert p["project_type"] == "development"
assert p["ssh_host"] is None
def test_update_project_ssh_fields(conn):
"""KIN-071: update_project can set SSH fields."""
models.create_project(conn, "srv2", "Server 2", project_type="operations")
updated = models.update_project(conn, "srv2", ssh_host="192.168.1.1", ssh_user="pelmen")
assert updated["ssh_host"] == "192.168.1.1"
assert updated["ssh_user"] == "pelmen"
assert updated["path"] is None
# ---------------------------------------------------------------------------
# KIN-ARCH-003 — path is nullable for operations projects
# Fixes the bug: empty-string ("") workaround for operations projects
# ---------------------------------------------------------------------------
def test_kin_arch_003_operations_project_without_path_stores_null(conn):
"""KIN-ARCH-003: an operations project without a path is stored with path=NULL, not an empty string.
Before the fix: the workaround was to pass path='' to bypass the NOT NULL constraint.
After the fix: path=None (NULL in the DB) is allowed for operations projects.
"""
p = models.create_project(
conn, "ops_null", "Ops Null Path",
project_type="operations",
ssh_host="10.0.0.1",
)
assert p["path"] is None, (
"KIN-ARCH-003 regression: path must be NULL, not an empty string"
)
def test_kin_arch_003_check_constraint_rejects_null_path_for_development(conn):
"""KIN-ARCH-003: CHECK constraint (path IS NOT NULL OR project_type='operations')
rejects path=NULL for development projects."""
import sqlite3 as _sqlite3
with pytest.raises(_sqlite3.IntegrityError):
models.create_project(
conn, "dev_no_path", "Dev No Path",
path=None, project_type="development",
)
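The constraint the two KIN-ARCH-003 tests exercise plausibly looks like the sketch below. Column names are assumed from the tests; the real SCHEMA in core/db.py may differ:

```python
import sqlite3

# Plausible shape of the KIN-ARCH-003 CHECK constraint: NULL path is legal
# only when project_type is 'operations'.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE projects (
    id TEXT PRIMARY KEY,
    path TEXT,
    project_type TEXT NOT NULL DEFAULT 'development',
    CHECK (path IS NOT NULL OR project_type = 'operations')
)
""")
# NULL path is accepted for an operations project...
conn.execute("INSERT INTO projects VALUES ('ops1', NULL, 'operations')")
# ...but rejected for a development project
try:
    conn.execute("INSERT INTO projects VALUES ('dev1', NULL, 'development')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

This mirrors the pair of tests above: NULL stored for operations, `IntegrityError` for development.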
# -- validate_completion_mode (KIN-063) --
def test_validate_completion_mode_valid_auto_complete():
"""validate_completion_mode accepts 'auto_complete'."""
assert models.validate_completion_mode("auto_complete") == "auto_complete"
def test_validate_completion_mode_valid_review():
"""validate_completion_mode accepts 'review'."""
assert models.validate_completion_mode("review") == "review"
def test_validate_completion_mode_invalid_fallback():
"""validate_completion_mode falls back to 'review' for invalid values."""
assert models.validate_completion_mode("auto") == "review"
assert models.validate_completion_mode("") == "review"
assert models.validate_completion_mode("unknown") == "review"
# -- get_effective_mode (KIN-063) --
def test_get_effective_mode_task_overrides_project(conn):
"""Task execution_mode takes precedence over project execution_mode."""
models.create_project(conn, "p1", "P1", "/p1", execution_mode="review")
models.create_task(conn, "P1-001", "p1", "Task", execution_mode="auto_complete")
mode = models.get_effective_mode(conn, "p1", "P1-001")
assert mode == "auto_complete"
def test_get_effective_mode_falls_back_to_project(conn):
"""If the task has no execution_mode, the project execution_mode applies."""
models.create_project(conn, "p1", "P1", "/p1", execution_mode="auto_complete")
models.create_task(conn, "P1-001", "p1", "Task") # execution_mode=None
mode = models.get_effective_mode(conn, "p1", "P1-001")
assert mode == "auto_complete"
def test_get_effective_mode_project_review_overrides_default(conn):
"""Project execution_mode='review' + task without an override → returns 'review'.
Scenario: the PM wanted auto_complete, but the project is configured for human review.
get_effective_mode must return the project-level 'review'.
"""
models.create_project(conn, "p1", "P1", "/p1", execution_mode="review")
models.create_task(conn, "P1-001", "p1", "Task") # no task-level override
mode = models.get_effective_mode(conn, "p1", "P1-001")
assert mode == "review"
# -- Tasks --
def test_create_and_get_task(conn):
@ -161,6 +280,87 @@ def test_add_and_get_modules(conn):
assert len(mods) == 1
def test_add_module_created_true_for_new_module(conn):
"""KIN-081: add_module returns _created=True for a new module (INSERT)."""
models.create_project(conn, "p1", "P1", "/p1")
m = models.add_module(conn, "p1", "api", "backend", "src/api/")
assert m["_created"] is True
assert m["name"] == "api"
def test_add_module_created_false_for_duplicate_name(conn):
"""KIN-081: add_module returns _created=False for a duplicate name (INSERT OR IGNORE).
UNIQUE constraint on (project_id, name). A second INSERT with the same name is ignored
and the existing row is returned with _created=False.
"""
models.create_project(conn, "p1", "P1", "/p1")
m1 = models.add_module(conn, "p1", "api", "backend", "src/api/")
assert m1["_created"] is True
# Same name, different path — should be ignored
m2 = models.add_module(conn, "p1", "api", "frontend", "src/api-v2/")
assert m2["_created"] is False
assert m2["name"] == "api"
# Only one module in DB
assert len(models.get_modules(conn, "p1")) == 1
def test_add_module_duplicate_returns_original_row(conn):
"""KIN-081: on a duplicate, add_module returns the original row (not the new data)."""
models.create_project(conn, "p1", "P1", "/p1")
m1 = models.add_module(conn, "p1", "api", "backend", "src/api/",
description="original desc")
m2 = models.add_module(conn, "p1", "api", "frontend", "src/api-v2/",
description="new desc")
# Should return original row, not updated one
assert m2["type"] == "backend"
assert m2["description"] == "original desc"
assert m2["id"] == m1["id"]
def test_add_module_same_name_different_projects_are_independent(conn):
"""KIN-081: two projects may have modules with the same name — UNIQUE per project_id."""
models.create_project(conn, "p1", "P1", "/p1")
models.create_project(conn, "p2", "P2", "/p2")
m1 = models.add_module(conn, "p1", "api", "backend", "src/api/")
m2 = models.add_module(conn, "p2", "api", "backend", "src/api/")
assert m1["_created"] is True
assert m2["_created"] is True
assert m1["id"] != m2["id"]
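The `INSERT OR IGNORE` semantics these KIN-081 tests rely on can be demonstrated in isolation. The table here is a stripped-down stand-in for the real modules schema, and checking `Cursor.rowcount` is one plausible way `_created` could be derived:

```python
import sqlite3

# INSERT OR IGNORE against a UNIQUE (project_id, name) constraint: the second
# insert with the same name changes 0 rows, and the original row survives.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE modules (
    id INTEGER PRIMARY KEY,
    project_id TEXT, name TEXT, type TEXT,
    UNIQUE (project_id, name)
)""")
cur = conn.execute(
    "INSERT OR IGNORE INTO modules (project_id, name, type) VALUES ('p1', 'api', 'backend')")
first_created = cur.rowcount == 1
cur = conn.execute(
    "INSERT OR IGNORE INTO modules (project_id, name, type) VALUES ('p1', 'api', 'frontend')")
second_created = cur.rowcount == 1  # conflict → 0 rows changed
row = conn.execute(
    "SELECT type FROM modules WHERE project_id = 'p1' AND name = 'api'").fetchone()
```

`row[0]` stays `'backend'`, matching `test_add_module_duplicate_returns_original_row`.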
# -- delete_project --
def test_delete_project_removes_project_record(conn):
"""KIN-081: delete_project removes the record from the projects table."""
models.create_project(conn, "p1", "P1", "/p1")
assert models.get_project(conn, "p1") is not None
models.delete_project(conn, "p1")
assert models.get_project(conn, "p1") is None
def test_delete_project_cascades_to_related_tables(conn):
"""KIN-081: delete_project removes related modules, decisions, tasks, agent_logs."""
models.create_project(conn, "p1", "P1", "/p1")
models.add_module(conn, "p1", "api", "backend", "src/api/")
models.add_decision(conn, "p1", "gotcha", "Bug X", "desc")
models.create_task(conn, "P1-001", "p1", "Task")
models.log_agent_run(conn, "p1", "developer", "implement", task_id="P1-001")
models.delete_project(conn, "p1")
assert conn.execute("SELECT COUNT(*) FROM modules WHERE project_id='p1'").fetchone()[0] == 0
assert conn.execute("SELECT COUNT(*) FROM decisions WHERE project_id='p1'").fetchone()[0] == 0
assert conn.execute("SELECT COUNT(*) FROM tasks WHERE project_id='p1'").fetchone()[0] == 0
assert conn.execute("SELECT COUNT(*) FROM agent_logs WHERE project_id='p1'").fetchone()[0] == 0
def test_delete_project_nonexistent_does_not_raise(conn):
"""KIN-081: delete_project on a nonexistent project does not raise."""
models.delete_project(conn, "nonexistent")
# -- Agent Logs --
def test_log_agent_run(conn):
@ -238,3 +438,299 @@ def test_cost_summary(conn):
def test_cost_summary_empty(conn):
models.create_project(conn, "p1", "P1", "/p1")
assert models.get_cost_summary(conn, days=7) == []
# -- add_decision_if_new --
def test_add_decision_if_new_adds_new_decision(conn):
models.create_project(conn, "p1", "P1", "/p1")
d = models.add_decision_if_new(conn, "p1", "gotcha", "Use WAL mode", "description")
assert d is not None
assert d["title"] == "Use WAL mode"
assert d["type"] == "gotcha"
def test_add_decision_if_new_skips_exact_duplicate(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.add_decision(conn, "p1", "gotcha", "Use WAL mode", "desc1")
result = models.add_decision_if_new(conn, "p1", "gotcha", "Use WAL mode", "desc2")
assert result is None
# Existing decision not duplicated
assert len(models.get_decisions(conn, "p1")) == 1
def test_add_decision_if_new_skips_case_insensitive_duplicate(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.add_decision(conn, "p1", "decision", "Use UUID for task IDs", "desc")
result = models.add_decision_if_new(conn, "p1", "decision", "use uuid for task ids", "other desc")
assert result is None
assert len(models.get_decisions(conn, "p1")) == 1
def test_add_decision_if_new_allows_same_title_different_type(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.add_decision(conn, "p1", "gotcha", "SQLite WAL", "desc")
result = models.add_decision_if_new(conn, "p1", "convention", "SQLite WAL", "other desc")
assert result is not None
assert len(models.get_decisions(conn, "p1")) == 2
def test_add_decision_if_new_skips_whitespace_duplicate(conn):
models.create_project(conn, "p1", "P1", "/p1")
models.add_decision(conn, "p1", "convention", "Run tests after each change", "desc")
result = models.add_decision_if_new(conn, "p1", "convention", " Run tests after each change ", "desc2")
assert result is None
assert len(models.get_decisions(conn, "p1")) == 1
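The duplicate rules these tests encode — case-insensitive, whitespace-trimmed, scoped per type — suggest a normalized lookup key. `dedup_key` is a hypothetical helper, not the project's actual implementation:

```python
# Sketch of the normalization add_decision_if_new plausibly applies before
# comparing against existing decisions (assumed helper name).
def dedup_key(decision_type: str, title: str) -> tuple:
    return (decision_type, title.strip().lower())

existing = {dedup_key("gotcha", "Use WAL mode")}
```

With this key, `"use wal mode"` and `"  Use WAL mode  "` collide with the existing gotcha, while the same title under type `"convention"` does not — exactly the four cases tested above.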
# -- next_task_id (KIN-OBS-009) --
def test_next_task_id_with_category_first(conn):
"""First task with category='SEC' → 'VDOL-SEC-001'."""
models.create_project(conn, "vdol", "VDOL", "/vdol")
task_id = models.next_task_id(conn, "vdol", category="SEC")
assert task_id == "VDOL-SEC-001"
def test_next_task_id_with_category_increments(conn):
"""Second task with category='SEC' → 'VDOL-SEC-002'."""
models.create_project(conn, "vdol", "VDOL", "/vdol")
models.create_task(conn, "VDOL-SEC-001", "vdol", "Task 1", category="SEC")
task_id = models.next_task_id(conn, "vdol", category="SEC")
assert task_id == "VDOL-SEC-002"
def test_next_task_id_category_counters_independent(conn):
"""Category counters are independent: SEC-002 does not affect UI-001."""
models.create_project(conn, "vdol", "VDOL", "/vdol")
models.create_task(conn, "VDOL-SEC-001", "vdol", "Sec Task 1", category="SEC")
models.create_task(conn, "VDOL-SEC-002", "vdol", "Sec Task 2", category="SEC")
task_id = models.next_task_id(conn, "vdol", category="UI")
assert task_id == "VDOL-UI-001"
def test_next_task_id_without_category_backward_compat(conn):
"""Task without a category → 'VDOL-001' (backward compat)."""
models.create_project(conn, "vdol", "VDOL", "/vdol")
task_id = models.next_task_id(conn, "vdol")
assert task_id == "VDOL-001"
def test_next_task_id_mixed_formats_no_collision(conn):
"""Mixed project: old- and new-format counters do not collide."""
models.create_project(conn, "kin", "KIN", "/kin")
models.create_task(conn, "KIN-001", "kin", "Old style task")
models.create_task(conn, "KIN-002", "kin", "Old style task 2")
# The new category format does not interfere with the old one
cat_id = models.next_task_id(conn, "kin", category="OBS")
assert cat_id == "KIN-OBS-001"
# The old format does not interfere with the new one
old_id = models.next_task_id(conn, "kin")
assert old_id == "KIN-003"
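The counter behavior the tests above fix in place can be sketched with a pure function. `next_task_id_sketch` is an invented name; the real `next_task_id` lives in core.models and may be implemented differently (e.g. via SQL), but it must produce the same answers:

```python
import re

# Sketch of the KIN-OBS-009 counter logic: each (prefix, category) pair has its
# own counter, and the category-free old format is a separate counter.
def next_task_id_sketch(existing_ids, prefix, category=None):
    if category:
        pat = re.compile(rf"^{prefix}-{category}-(\d+)$")
        fmt = f"{prefix}-{category}-{{:03d}}"
    else:
        pat = re.compile(rf"^{prefix}-(\d+)$")
        fmt = f"{prefix}-{{:03d}}"
    nums = [int(m.group(1)) for tid in existing_ids for m in [pat.match(tid)] if m]
    return fmt.format(max(nums, default=0) + 1)
```

Because the category-free pattern ends right after the digits, `KIN-OBS-001` never matches it, which is what keeps the mixed-format counters from colliding.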
# -- Obsidian sync regex (KIN-OBS-009, decision #75) --
_OBSIDIAN_TASK_PATTERN = re.compile(
r"^[-*]\s+\[([xX ])\]\s+([A-Z][A-Z0-9]*-(?:[A-Z][A-Z0-9]*-)?\d+)\s+(.+)$"
)
def test_obsidian_regex_matches_old_format():
"""Old format KIN-001 matches."""
m = _OBSIDIAN_TASK_PATTERN.match("- [x] KIN-001 Fix login bug")
assert m is not None
assert m.group(2) == "KIN-001"
def test_obsidian_regex_matches_new_format():
"""New format VDOL-SEC-001 matches."""
m = _OBSIDIAN_TASK_PATTERN.match("- [ ] VDOL-SEC-001 Security audit")
assert m is not None
assert m.group(2) == "VDOL-SEC-001"
def test_obsidian_regex_matches_obs_format():
"""Format KIN-OBS-009 matches (checks this feature's own task)."""
m = _OBSIDIAN_TASK_PATTERN.match("* [X] KIN-OBS-009 Task IDs by category")
assert m is not None
assert m.group(2) == "KIN-OBS-009"
def test_obsidian_regex_no_match_lowercase():
"""Lowercase IDs do not match."""
assert _OBSIDIAN_TASK_PATTERN.match("- [x] proj-001 lowercase id") is None
def test_obsidian_regex_no_match_numeric_prefix():
"""A numeric prefix does not match."""
assert _OBSIDIAN_TASK_PATTERN.match("- [x] 123-abc invalid format") is None
def test_obsidian_regex_done_state():
"""The done/pending state is extracted correctly."""
m_done = _OBSIDIAN_TASK_PATTERN.match("- [x] KIN-UI-003 Done task")
m_pending = _OBSIDIAN_TASK_PATTERN.match("- [ ] KIN-UI-004 Pending task")
assert m_done.group(1) == "x"
assert m_pending.group(1) == " "
# -- next_task_id for all 12 categories (KIN-OBS-009) --
@pytest.mark.parametrize("cat", TASK_CATEGORIES)
def test_next_task_id_all_categories_generate_correct_format(conn, cat):
"""next_task_id generates IDs of the form PROJ-CAT-001 for each of the 12 categories."""
models.create_project(conn, "vdol", "VDOL", "/vdol")
task_id = models.next_task_id(conn, "vdol", category=cat)
assert task_id == f"VDOL-{cat}-001"
# -- update_task category does not clobber brief (KIN-OBS-009, decision #74) --
def test_update_task_category_preserves_brief(conn):
"""update_task(category=...) does not overwrite an existing brief field."""
models.create_project(conn, "p1", "P1", "/p1")
models.create_task(conn, "P1-001", "p1", "Task", brief={"summary": "important context"})
updated = models.update_task(conn, "P1-001", category="SEC")
assert updated["category"] == "SEC"
assert updated["brief"] == {"summary": "important context"}
def test_update_task_category_preserves_status_and_priority(conn):
"""update_task(category=...) does not change the task's other fields."""
models.create_project(conn, "p1", "P1", "/p1")
models.create_task(conn, "P1-001", "p1", "Task", status="in_progress", priority=3)
updated = models.update_task(conn, "P1-001", category="UI")
assert updated["category"] == "UI"
assert updated["status"] == "in_progress"
assert updated["priority"] == 3
# -- KIN-ARCH-006: autocommit_enabled and obsidian_vault_path in SCHEMA --
def test_schema_project_has_autocommit_enabled_column(conn):
"""KIN-ARCH-006: the projects table has an autocommit_enabled column."""
cols = {r[1] for r in conn.execute("PRAGMA table_info(projects)").fetchall()}
assert "autocommit_enabled" in cols
def test_schema_project_has_obsidian_vault_path_column(conn):
"""KIN-ARCH-006: the projects table has an obsidian_vault_path column."""
cols = {r[1] for r in conn.execute("PRAGMA table_info(projects)").fetchall()}
assert "obsidian_vault_path" in cols
def test_autocommit_enabled_default_is_zero(conn):
"""KIN-ARCH-006: autocommit_enabled defaults to 0."""
models.create_project(conn, "p1", "P1", "/p1")
p = models.get_project(conn, "p1")
assert p["autocommit_enabled"] == 0
def test_obsidian_vault_path_default_is_none(conn):
"""KIN-ARCH-006: obsidian_vault_path defaults to NULL."""
models.create_project(conn, "p1", "P1", "/p1")
p = models.get_project(conn, "p1")
assert p["obsidian_vault_path"] is None
def test_autocommit_enabled_can_be_set_to_one(conn):
"""KIN-ARCH-006: autocommit_enabled can be set to 1 via update_project."""
models.create_project(conn, "p1", "P1", "/p1")
updated = models.update_project(conn, "p1", autocommit_enabled=1)
assert updated["autocommit_enabled"] == 1
def test_obsidian_vault_path_can_be_set(conn):
"""KIN-ARCH-006: obsidian_vault_path can be set via update_project."""
models.create_project(conn, "p1", "P1", "/p1")
updated = models.update_project(conn, "p1", obsidian_vault_path="/vault/my-notes")
assert updated["obsidian_vault_path"] == "/vault/my-notes"
# ---------------------------------------------------------------------------
# KIN-090: Task Attachments
# ---------------------------------------------------------------------------
@pytest.fixture
def task_conn(conn):
"""conn with seeded project and task for attachment tests."""
models.create_project(conn, "prj", "Project", "/tmp/prj")
models.create_task(conn, "PRJ-001", "prj", "Fix bug")
return conn
def test_create_attachment_returns_dict(task_conn):
"""KIN-090: create_attachment returns a dict with all fields."""
att = models.create_attachment(
task_conn, "PRJ-001", "screenshot.png",
"/tmp/prj/.kin/attachments/PRJ-001/screenshot.png",
"image/png", 1024,
)
assert att["id"] is not None
assert att["task_id"] == "PRJ-001"
assert att["filename"] == "screenshot.png"
assert att["path"] == "/tmp/prj/.kin/attachments/PRJ-001/screenshot.png"
assert att["mime_type"] == "image/png"
assert att["size"] == 1024
assert att["created_at"] is not None
def test_create_attachment_persists_in_sqlite(task_conn):
"""KIN-090: AC4 — attachment data is persisted in SQLite."""
att = models.create_attachment(
task_conn, "PRJ-001", "bug.png",
"/tmp/prj/.kin/attachments/PRJ-001/bug.png",
"image/png", 512,
)
fetched = models.get_attachment(task_conn, att["id"])
assert fetched is not None
assert fetched["filename"] == "bug.png"
assert fetched["size"] == 512
def test_list_attachments_empty_for_new_task(task_conn):
"""KIN-090: list_attachments returns [] for a task with no attachments."""
result = models.list_attachments(task_conn, "PRJ-001")
assert result == []
def test_list_attachments_returns_all_for_task(task_conn):
"""KIN-090: list_attachments returns all attachments of a task."""
models.create_attachment(task_conn, "PRJ-001", "a.png",
"/tmp/prj/.kin/attachments/PRJ-001/a.png", "image/png", 100)
models.create_attachment(task_conn, "PRJ-001", "b.jpg",
"/tmp/prj/.kin/attachments/PRJ-001/b.jpg", "image/jpeg", 200)
result = models.list_attachments(task_conn, "PRJ-001")
assert len(result) == 2
filenames = {a["filename"] for a in result}
assert filenames == {"a.png", "b.jpg"}
def test_list_attachments_isolated_by_task(task_conn):
"""KIN-090: list_attachments does not return other tasks' attachments."""
models.create_task(task_conn, "PRJ-002", "prj", "Other task")
models.create_attachment(task_conn, "PRJ-001", "a.png",
"/tmp/.kin/PRJ-001/a.png", "image/png", 100)
models.create_attachment(task_conn, "PRJ-002", "b.png",
"/tmp/.kin/PRJ-002/b.png", "image/png", 100)
assert len(models.list_attachments(task_conn, "PRJ-001")) == 1
assert len(models.list_attachments(task_conn, "PRJ-002")) == 1
def test_get_attachment_not_found_returns_none(task_conn):
"""KIN-090: get_attachment returns None when the attachment is not found."""
assert models.get_attachment(task_conn, 99999) is None
def test_delete_attachment_returns_true(task_conn):
"""KIN-090: delete_attachment returns True on successful deletion."""
att = models.create_attachment(task_conn, "PRJ-001", "del.png",
"/tmp/del.png", "image/png", 50)
assert models.delete_attachment(task_conn, att["id"]) is True
assert models.get_attachment(task_conn, att["id"]) is None
def test_delete_attachment_not_found_returns_false(task_conn):
"""KIN-090: delete_attachment returns False when the record is not found."""
assert models.delete_attachment(task_conn, 99999) is False
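A plausible `task_attachments` schema for KIN-090 can be inferred from the fields the tests assert on (`id`, `task_id`, `filename`, `path`, `mime_type`, `size`, `created_at`). The real SCHEMA in core/db.py may differ; this is only a sketch:

```python
import sqlite3

# Assumed KIN-090 attachments table, reconstructed from the asserted fields.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE task_attachments (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    task_id TEXT NOT NULL,
    filename TEXT NOT NULL,
    path TEXT NOT NULL,
    mime_type TEXT,
    size INTEGER,
    created_at TEXT NOT NULL DEFAULT (datetime('now'))
)""")
cur = conn.execute(
    "INSERT INTO task_attachments (task_id, filename, path, mime_type, size) "
    "VALUES (?, ?, ?, ?, ?)",
    ("PRJ-001", "screenshot.png", "/tmp/a.png", "image/png", 1024))
att_id = cur.lastrowid
row = conn.execute(
    "SELECT filename, size, created_at FROM task_attachments WHERE id = ?",
    (att_id,)).fetchone()
```

With `created_at` defaulted in SQL, `create_attachment` only needs to pass the five caller-supplied fields, which matches the positional signature used in the tests.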

tests/test_obsidian_sync.py Normal file

@ -0,0 +1,307 @@
"""Tests for core/obsidian_sync.py — KIN-013."""
import sqlite3
import tempfile
from pathlib import Path
import pytest
from core.db import init_db
from core.obsidian_sync import (
export_decisions_to_md,
parse_task_checkboxes,
sync_obsidian,
)
from core import models
# ---------------------------------------------------------------------------
# 0. Migration — obsidian_vault_path column must exist after init_db
# ---------------------------------------------------------------------------
def test_migration_obsidian_vault_path_column_exists():
"""init_db creates or migrates the obsidian_vault_path column in the projects table."""
conn = init_db(db_path=":memory:")
cols = {r[1] for r in conn.execute("PRAGMA table_info(projects)").fetchall()}
conn.close()
assert "obsidian_vault_path" in cols
@pytest.fixture
def tmp_vault(tmp_path):
"""Returns a temporary vault root directory."""
return tmp_path / "vault"
@pytest.fixture
def db(tmp_path):
"""Returns an in-memory SQLite connection with schema + test data."""
db_path = tmp_path / "test.db"
conn = init_db(db_path)
models.create_project(conn, "proj1", "Test Project", "/tmp/proj1")
yield conn
conn.close()
# ---------------------------------------------------------------------------
# 1. export creates files with correct frontmatter
# ---------------------------------------------------------------------------
def test_export_decisions_creates_md_files(tmp_vault):
decisions = [
{
"id": 42,
"project_id": "proj1",
"type": "gotcha",
"category": "testing",
"title": "Proxy через SSH не работает без ssh-agent",
"description": "При подключении через ProxyJump ssh-agent должен быть запущен.",
"tags": ["testing", "mock", "subprocess"],
"created_at": "2026-03-10T12:00:00",
}
]
tmp_vault.mkdir(parents=True)
created = export_decisions_to_md("proj1", decisions, tmp_vault)
assert len(created) == 1
md_file = created[0]
assert md_file.exists()
content = md_file.read_text(encoding="utf-8")
assert "kin_decision_id: 42" in content
assert "project: proj1" in content
assert "type: gotcha" in content
assert "category: testing" in content
assert "2026-03-10" in content
assert "# Proxy через SSH не работает без ssh-agent" in content
assert "При подключении через ProxyJump" in content
# ---------------------------------------------------------------------------
# 2. export is idempotent (overwrite, not duplicate)
# ---------------------------------------------------------------------------
def test_export_idempotent(tmp_vault):
decisions = [
{
"id": 1,
"project_id": "p",
"type": "decision",
"category": None,
"title": "Use SQLite",
"description": "SQLite is the source of truth.",
"tags": [],
"created_at": "2026-01-01",
}
]
tmp_vault.mkdir(parents=True)
export_decisions_to_md("p", decisions, tmp_vault)
export_decisions_to_md("p", decisions, tmp_vault)
out_dir = tmp_vault / "p" / "decisions"
files = list(out_dir.glob("*.md"))
assert len(files) == 1
# ---------------------------------------------------------------------------
# 3. parse_task_checkboxes — done checkbox
# ---------------------------------------------------------------------------
def test_parse_task_checkboxes_done(tmp_vault):
tasks_dir = tmp_vault / "proj1" / "tasks"
tasks_dir.mkdir(parents=True)
(tasks_dir / "kanban.md").write_text(
"- [x] KIN-001 Implement login\n- [ ] KIN-002 Add tests\n",
encoding="utf-8",
)
results = parse_task_checkboxes(tmp_vault, "proj1")
done_items = [r for r in results if r["task_id"] == "KIN-001"]
assert len(done_items) == 1
assert done_items[0]["done"] is True
assert done_items[0]["title"] == "Implement login"
# ---------------------------------------------------------------------------
# 4. parse_task_checkboxes — pending checkbox
# ---------------------------------------------------------------------------
def test_parse_task_checkboxes_pending(tmp_vault):
tasks_dir = tmp_vault / "proj1" / "tasks"
tasks_dir.mkdir(parents=True)
(tasks_dir / "kanban.md").write_text(
"- [ ] KIN-002 Add tests\n",
encoding="utf-8",
)
results = parse_task_checkboxes(tmp_vault, "proj1")
pending = [r for r in results if r["task_id"] == "KIN-002"]
assert len(pending) == 1
assert pending[0]["done"] is False
# ---------------------------------------------------------------------------
# 5. parse_task_checkboxes — lines without task ID are skipped
# ---------------------------------------------------------------------------
def test_parse_task_checkboxes_no_id(tmp_vault):
tasks_dir = tmp_vault / "proj1" / "tasks"
tasks_dir.mkdir(parents=True)
(tasks_dir / "notes.md").write_text(
"- [x] Some task without ID\n"
"- [ ] Another line without identifier\n"
"- [x] KIN-003 With ID\n",
encoding="utf-8",
)
results = parse_task_checkboxes(tmp_vault, "proj1")
assert all(r["task_id"].startswith("KIN-") for r in results)
assert len(results) == 1
assert results[0]["task_id"] == "KIN-003"
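What `parse_task_checkboxes` plausibly does can be sketched from the shared regex (decision #75). `parse_checkboxes_sketch` is an invented name; the real implementation in core/obsidian_sync.py may differ in file selection and ordering:

```python
import re
import tempfile
from pathlib import Path

# The checkbox line pattern: (1) done marker, (2) task ID, (3) title.
_PAT = re.compile(r"^[-*]\s+\[([xX ])\]\s+([A-Z][A-Z0-9]*-(?:[A-Z][A-Z0-9]*-)?\d+)\s+(.+)$")

def parse_checkboxes_sketch(vault: Path, project_id: str):
    """Scan <vault>/<project_id>/tasks/*.md for checkbox lines with task IDs."""
    results = []
    for md in sorted((vault / project_id / "tasks").glob("*.md")):
        for line in md.read_text(encoding="utf-8").splitlines():
            m = _PAT.match(line)
            if m:  # lines without a valid task ID are silently skipped
                results.append({"task_id": m.group(2),
                                "done": m.group(1).lower() == "x",
                                "title": m.group(3)})
    return results

with tempfile.TemporaryDirectory() as tmp:
    tasks = Path(tmp) / "proj1" / "tasks"
    tasks.mkdir(parents=True)
    (tasks / "kanban.md").write_text("- [x] KIN-001 Implement login\n- [ ] no id here\n")
    parsed = parse_checkboxes_sketch(Path(tmp), "proj1")
```

The non-matching line is dropped rather than reported, mirroring test 5 above.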
# ---------------------------------------------------------------------------
# 6. sync_obsidian updates task status when done=True
# ---------------------------------------------------------------------------
def test_sync_updates_task_status(db, tmp_vault):
tmp_vault.mkdir(parents=True)
models.update_project(db, "proj1", obsidian_vault_path=str(tmp_vault))
task = models.create_task(db, "PROJ1-001", "proj1", "Do something", status="in_progress")
assert task["status"] == "in_progress"
# Write checkbox file
tasks_dir = tmp_vault / "proj1" / "tasks"
tasks_dir.mkdir(parents=True)
(tasks_dir / "sprint.md").write_text(
"- [x] PROJ1-001 Do something\n",
encoding="utf-8",
)
result = sync_obsidian(db, "proj1")
assert result["tasks_updated"] == 1
assert not result["errors"]
updated = models.get_task(db, "PROJ1-001")
assert updated["status"] == "done"
# ---------------------------------------------------------------------------
# 7. sync_obsidian raises ValueError when vault_path not set
# ---------------------------------------------------------------------------
def test_sync_no_vault_path(db):
# project exists but obsidian_vault_path is NULL
with pytest.raises(ValueError, match="obsidian_vault_path not set"):
sync_obsidian(db, "proj1")
# ---------------------------------------------------------------------------
# 8. export — frontmatter wrapped in --- delimiters
# ---------------------------------------------------------------------------
def test_export_frontmatter_has_yaml_delimiters(tmp_vault):
"""The exported file starts with '---' and contains a closing '---'."""
decisions = [
{
"id": 99,
"project_id": "p",
"type": "decision",
"category": None,
"title": "YAML Delimiter Test",
"description": "Verifying frontmatter delimiters.",
"tags": [],
"created_at": "2026-01-01",
}
]
tmp_vault.mkdir(parents=True)
created = export_decisions_to_md("p", decisions, tmp_vault)
content = created[0].read_text(encoding="utf-8")
assert content.startswith("---\n"), "Frontmatter must start with '---\\n'"
# the first '---' opens the frontmatter, the second one closes it
parts = content.split("---\n")
assert len(parts) >= 3, "There must be at least two '---' delimiters"
# ---------------------------------------------------------------------------
# 9. sync_obsidian — nonexistent vault_path → the directory is created automatically
# KIN-070: regression test for automatic vault directory creation
# ---------------------------------------------------------------------------
def test_kin070_sync_creates_missing_vault_directory(db, tmp_path):
"""KIN-070: if vault_path does not exist, sync creates the directory automatically.
Verifies that:
- the directory is created without errors
- sync_obsidian does not fail
- the returned result contains errors=[]
"""
nonexistent = tmp_path / "ghost_vault"
models.update_project(db, "proj1", obsidian_vault_path=str(nonexistent))
result = sync_obsidian(db, "proj1")
assert result["errors"] == []
assert nonexistent.is_dir()  # directory was created automatically
assert result["exported_decisions"] == 0  # no decisions in the DB
assert result["tasks_updated"] == 0
# ---------------------------------------------------------------------------
# 10. sync_obsidian + decisions: nonexistent vault + decisions in DB → export succeeds
# KIN-070: verifies that decisions are exported when the vault is created automatically
# ---------------------------------------------------------------------------
def test_kin070_sync_creates_vault_and_exports_decisions(db, tmp_path):
"""KIN-070: sync exports decisions and creates vault_path automatically.
Verifies that:
- the vault directory is created automatically
- decisions are exported to .md files (exported_decisions > 0)
- errors == [] (no errors)
"""
nonexistent = tmp_path / "missing_vault"
models.update_project(db, "proj1", obsidian_vault_path=str(nonexistent))
# Create a decision in the DB
models.add_decision(
db,
project_id="proj1",
type="decision",
title="Use SQLite for sync state",
description="SQLite will be the single source of truth.",
tags=["database", "sync"],
)
result = sync_obsidian(db, "proj1")
# Verify the export succeeded
assert result["errors"] == []
assert nonexistent.is_dir()  # directory created
assert result["exported_decisions"] == 1  # one decision exported
assert result["tasks_updated"] == 0
# Verify the .md file was created in the right directory
decisions_dir = nonexistent / "proj1" / "decisions"
assert decisions_dir.is_dir()
md_files = list(decisions_dir.glob("*.md"))
assert len(md_files) == 1
# ---------------------------------------------------------------------------
# 11. sync_obsidian — empty vault → 0 exports, 0 updates, no errors
# ---------------------------------------------------------------------------
def test_sync_empty_vault_no_errors(db, tmp_vault):
"""Empty vault (no decisions, no task files) → exported=0, updated=0, errors=[]."""
tmp_vault.mkdir(parents=True)
models.update_project(db, "proj1", obsidian_vault_path=str(tmp_vault))
result = sync_obsidian(db, "proj1")
assert result["exported_decisions"] == 0
assert result["tasks_updated"] == 0
assert result["errors"] == []
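The KIN-070 behavior exercised above boils down to one idempotent mkdir before any export. A minimal sketch, assuming a helper named `ensure_vault_dir` (the name is illustrative; in the real code this happens inside `sync_obsidian`):

```python
from pathlib import Path

def ensure_vault_dir(vault_path: str) -> Path:
    # KIN-070: create the vault directory (and parents) if it is missing,
    # so a nonexistent obsidian_vault_path never makes sync fail.
    vault = Path(vault_path)
    vault.mkdir(parents=True, exist_ok=True)  # no-op if it already exists
    return vault
```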

tests/test_phases.py (new file)

@ -0,0 +1,369 @@
"""Tests for core/phases.py — Research Phase Pipeline (KIN-059).
Covers:
- validate_roles: filtering, deduplication, stripping architect
- build_phase_order: canonical order + auto-appended architect
- create_project_with_phases: creation + first phase active
- approve_phase: status transitions, next-phase activation, sequential enforcement
- reject_phase: rejected status, guard against non-active phases
- revise_phase: revise → running loop, revision counter, comment persistence
"""
import pytest
from core.db import init_db
from core import models
from core.phases import (
RESEARCH_ROLES,
approve_phase,
build_phase_order,
create_project_with_phases,
reject_phase,
revise_phase,
validate_roles,
)
@pytest.fixture
def conn():
"""KIN-059: isolated in-memory DB for each test."""
c = init_db(db_path=":memory:")
yield c
c.close()
# ---------------------------------------------------------------------------
# validate_roles
# ---------------------------------------------------------------------------
def test_validate_roles_filters_unknown_roles():
"""KIN-059: unknown roles are filtered out of the list."""
result = validate_roles(["business_analyst", "wizard", "ghost"])
assert result == ["business_analyst"]
def test_validate_roles_strips_architect():
"""KIN-059: architect is stripped from the input roles — it is added automatically later."""
result = validate_roles(["architect", "tech_researcher"])
assert "architect" not in result
assert "tech_researcher" in result
def test_validate_roles_deduplicates():
"""KIN-059: duplicate roles are removed, a single copy remains."""
result = validate_roles(["business_analyst", "business_analyst", "tech_researcher"])
assert result.count("business_analyst") == 1
def test_validate_roles_empty_input_returns_empty():
"""KIN-059: empty role list → empty result."""
assert validate_roles([]) == []
def test_validate_roles_only_architect_returns_empty():
"""KIN-059: only architect in the input → empty result (architect is not a researcher)."""
assert validate_roles(["architect"]) == []
def test_validate_roles_strips_and_lowercases():
"""KIN-059: roles are normalized: trim + lowercase."""
result = validate_roles([" Tech_Researcher ", "MARKETER"])
assert "tech_researcher" in result
assert "marketer" in result
# ---------------------------------------------------------------------------
# build_phase_order
# ---------------------------------------------------------------------------
@pytest.mark.parametrize("roles,expected", [
(
["business_analyst"],
["business_analyst", "architect"],
),
(
["tech_researcher"],
["tech_researcher", "architect"],
),
(
["marketer", "business_analyst"],
["business_analyst", "marketer", "architect"],
),
(
["ux_designer", "market_researcher", "tech_researcher"],
["market_researcher", "tech_researcher", "ux_designer", "architect"],
),
])
def test_build_phase_order_canonical_order_and_appends_architect(roles, expected):
"""KIN-059: roles are sorted into the canonical order, architect is appended last."""
assert build_phase_order(roles) == expected
def test_build_phase_order_no_architect_if_no_researcher():
"""KIN-059: architect is not added if there is no researcher at all."""
result = build_phase_order([])
assert result == []
assert "architect" not in result
def test_build_phase_order_architect_always_last():
"""KIN-059: architect is always last regardless of the role set."""
result = build_phase_order(["marketer", "legal_researcher", "business_analyst"])
assert result[-1] == "architect"
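The ordering contract pinned by the parametrized cases above fits in a few lines. Assumed here: `RESEARCH_ROLES` lists the researcher roles in canonical order and does not contain `architect` — consistent with these tests, but the exact list and the real implementations in core/phases.py may differ:

```python
# Canonical researcher order (an assumption); architect is appended separately.
RESEARCH_ROLES = [
    "business_analyst", "market_researcher", "legal_researcher",
    "tech_researcher", "ux_designer", "marketer",
]

def validate_roles(roles):
    """Normalize (trim + lowercase), drop unknown roles and architect, dedupe."""
    seen = []
    for raw in roles:
        role = raw.strip().lower()
        if role in RESEARCH_ROLES and role not in seen:
            seen.append(role)
    return seen

def build_phase_order(roles):
    """Sort into canonical order; append architect only if any researcher is present."""
    ordered = [r for r in RESEARCH_ROLES if r in roles]
    return ordered + ["architect"] if ordered else []
```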
# ---------------------------------------------------------------------------
# create_project_with_phases
# ---------------------------------------------------------------------------
def test_create_project_with_phases_creates_project_and_phases(conn):
"""KIN-059: creating a project with researcher roles creates both the project and the phase records."""
result = create_project_with_phases(
conn, "proj1", "Project 1", "/path",
description="Test project", selected_roles=["business_analyst"],
)
assert result["project"]["id"] == "proj1"
# business_analyst + architect = 2 phases
assert len(result["phases"]) == 2
def test_create_project_with_phases_first_phase_is_active(conn):
"""KIN-059: the first phase immediately becomes status=active and gets a task_id."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["tech_researcher"],
)
first = result["phases"][0]
assert first["status"] == "active"
assert first["task_id"] is not None
def test_create_project_with_phases_other_phases_remain_pending(conn):
"""KIN-059: all phases except the first stay pending — they are not activated without approve."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["market_researcher", "tech_researcher"],
)
# market_researcher, tech_researcher, architect → 3 phases
for phase in result["phases"][1:]:
assert phase["status"] == "pending"
def test_create_project_with_phases_raises_if_no_roles(conn):
"""KIN-059: ValueError when trying to create a project without researcher roles."""
with pytest.raises(ValueError, match="[Aa]t least one research role"):
create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=[],
)
def test_create_project_with_phases_architect_auto_added_last(conn):
"""KIN-059: architect is added last automatically, without being requested explicitly."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["business_analyst"],
)
roles = [ph["role"] for ph in result["phases"]]
assert "architect" in roles
assert roles[-1] == "architect"
@pytest.mark.parametrize("roles", [
["business_analyst"],
["market_researcher", "tech_researcher"],
["legal_researcher", "ux_designer", "marketer"],
["business_analyst", "market_researcher", "legal_researcher",
"tech_researcher", "ux_designer", "marketer"],
])
def test_create_project_with_phases_architect_added_for_any_combination(conn, roles):
"""KIN-059: architect is added for any combination of researcher roles."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=roles,
)
phase_roles = [ph["role"] for ph in result["phases"]]
assert "architect" in phase_roles
assert phase_roles[-1] == "architect"
# ---------------------------------------------------------------------------
# approve_phase
# ---------------------------------------------------------------------------
def test_approve_phase_sets_status_approved(conn):
"""KIN-059: approve_phase sets status=approved on the current phase."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["business_analyst"],
)
phase_id = result["phases"][0]["id"]
out = approve_phase(conn, phase_id)
assert out["phase"]["status"] == "approved"
def test_approve_phase_activates_next_phase(conn):
"""KIN-059: the next phase is activated only after the previous one is approved."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["business_analyst"],
)
first_phase_id = result["phases"][0]["id"]
out = approve_phase(conn, first_phase_id)
next_phase = out["next_phase"]
assert next_phase is not None
assert next_phase["status"] == "active"
assert next_phase["role"] == "architect"
def test_approve_phase_last_returns_no_next(conn):
"""KIN-059: approving the last phase returns next_phase=None (workflow finished)."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["business_analyst"],
)
# Approve business_analyst → architect becomes active
first_id = result["phases"][0]["id"]
mid = approve_phase(conn, first_id)
architect_id = mid["next_phase"]["id"]
# Approve architect → no next
final = approve_phase(conn, architect_id)
assert final["next_phase"] is None
def test_approve_phase_not_active_raises(conn):
"""KIN-059: approving a phase with status != active raises ValueError."""
models.create_project(conn, "proj1", "P1", "/path", description="Desc")
phase = models.create_phase(conn, "proj1", "business_analyst", 0)
# The phase is in pending status, not active
with pytest.raises(ValueError, match="not active"):
approve_phase(conn, phase["id"])
def test_pending_phase_not_started_without_approve(conn):
"""KIN-059: the next phase does not start without approving the previous one (no auto-activation)."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["market_researcher", "tech_researcher"],
)
# The second phase (tech_researcher) must stay pending
second_phase = result["phases"][1]
assert second_phase["status"] == "pending"
assert second_phase["task_id"] is None
# ---------------------------------------------------------------------------
# reject_phase
# ---------------------------------------------------------------------------
def test_reject_phase_sets_status_rejected(conn):
"""KIN-059: reject_phase sets status=rejected on the phase."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["tech_researcher"],
)
phase_id = result["phases"][0]["id"]
out = reject_phase(conn, phase_id, reason="Not relevant")
assert out["status"] == "rejected"
def test_reject_phase_not_active_raises(conn):
"""KIN-059: reject_phase on a pending phase raises ValueError."""
models.create_project(conn, "proj1", "P1", "/path", description="Desc")
phase = models.create_phase(conn, "proj1", "tech_researcher", 0)
with pytest.raises(ValueError, match="not active"):
reject_phase(conn, phase["id"], reason="test")
# ---------------------------------------------------------------------------
# revise_phase
# ---------------------------------------------------------------------------
def test_revise_phase_sets_status_revising(conn):
"""KIN-059: revise_phase sets the phase status to revising."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["ux_designer"],
)
phase_id = result["phases"][0]["id"]
out = revise_phase(conn, phase_id, comment="Need more detail")
assert out["phase"]["status"] == "revising"
def test_revise_phase_creates_new_task_with_comment(conn):
"""KIN-059: revise_phase creates a new task with revise_comment in the brief."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["marketer"],
)
phase_id = result["phases"][0]["id"]
comment = "Add a competitor analysis"
out = revise_phase(conn, phase_id, comment=comment)
new_task = out["new_task"]
assert new_task is not None
assert new_task["brief"]["revise_comment"] == comment
def test_revise_phase_increments_revise_count(conn):
"""KIN-059: revise_phase increments revise_count on every call."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["marketer"],
)
phase_id = result["phases"][0]["id"]
out1 = revise_phase(conn, phase_id, comment="First revision")
assert out1["phase"]["revise_count"] == 1
out2 = revise_phase(conn, phase_id, comment="Second revision")
assert out2["phase"]["revise_count"] == 2
def test_revise_phase_saves_comment_on_phase(conn):
"""KIN-059: revise_phase stores the comment in the phase's revise_comment field."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["business_analyst"],
)
phase_id = result["phases"][0]["id"]
comment = "Clarify the target audience"
out = revise_phase(conn, phase_id, comment=comment)
assert out["phase"]["revise_comment"] == comment
def test_revise_phase_pending_raises(conn):
"""KIN-059: revise_phase on a pending phase raises ValueError."""
models.create_project(conn, "proj1", "P1", "/path", description="Desc")
phase = models.create_phase(conn, "proj1", "marketer", 0)
with pytest.raises(ValueError, match="cannot be revised"):
revise_phase(conn, phase["id"], comment="test")
def test_revise_phase_revising_status_allows_another_revise(conn):
"""KIN-059: a phase in revising status allows another revise call (the loop)."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["business_analyst"],
)
phase_id = result["phases"][0]["id"]
revise_phase(conn, phase_id, comment="First revision")
# The phase is now revising — a second revise must go through
out = revise_phase(conn, phase_id, comment="Second revision")
assert out["phase"]["revise_count"] == 2
def test_revise_phase_updates_task_id_to_new_task(conn):
"""KIN-059: after revise, phase.task_id points to the new task."""
result = create_project_with_phases(
conn, "proj1", "P1", "/path",
description="Desc", selected_roles=["market_researcher"],
)
phase = result["phases"][0]
original_task_id = phase["task_id"]
out = revise_phase(conn, phase["id"], comment="Rework it")
new_task_id = out["phase"]["task_id"]
assert new_task_id != original_task_id
assert new_task_id == out["new_task"]["id"]
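The approve/reject/revise transitions the tests above enforce form a small state machine. A condensed sketch of just the status logic — statuses taken from the tests, everything else (task creation, next-phase activation) omitted, and the function name `transition` is hypothetical:

```python
# Which phase statuses each action accepts, per the tests above.
ALLOWED = {
    "approve": {"active"},
    "reject": {"active"},
    "revise": {"active", "revising"},  # revising allows another revise (the loop)
}

def transition(phase, action):
    """Apply one action to a phase dict; raise ValueError on an illegal status."""
    if phase["status"] not in ALLOWED[action]:
        raise ValueError(f"phase {phase.get('id')} is not active")
    if action == "approve":
        phase["status"] = "approved"
    elif action == "reject":
        phase["status"] = "rejected"
    else:  # revise: loop the phase back and count the revision
        phase["status"] = "revising"
        phase["revise_count"] = phase.get("revise_count", 0) + 1
    return phase
```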

(diff of one file suppressed because it is too large)

tests/test_telegram.py (new file)

@ -0,0 +1,304 @@
"""
Tests for core/telegram.py — send_telegram_escalation (KIN-BIZ-001).
Covers:
- Correct Telegram API call parameters (token, chat_id, task_id, agent_role, reason)
- Graceful failure when Telegram API is unavailable (no exceptions raised)
- telegram_sent flag written to DB after successful send (mark_telegram_sent)
"""
import json
import urllib.error
from unittest.mock import MagicMock, patch
import pytest
from core import models
from core.db import init_db
from core.telegram import send_telegram_escalation
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def db_conn():
"""Fresh in-memory DB for each test."""
conn = init_db(db_path=":memory:")
yield conn
conn.close()
@pytest.fixture
def tg_env(monkeypatch):
"""Inject Telegram credentials via env vars (bypass secrets file).
Also stubs _load_kin_config so the secrets file doesn't override env vars.
"""
monkeypatch.setenv("KIN_TG_BOT_TOKEN", "test-token-abc123")
monkeypatch.setenv("KIN_TG_CHAT_ID", "99887766")
monkeypatch.setattr("core.telegram._load_kin_config", lambda: {})
@pytest.fixture
def mock_urlopen_ok():
"""Mock urllib.request.urlopen to return HTTP 200."""
mock_resp = MagicMock()
mock_resp.status = 200
mock_resp.__enter__ = lambda s: s
mock_resp.__exit__ = MagicMock(return_value=False)
with patch("urllib.request.urlopen", return_value=mock_resp) as m:
yield m
# ---------------------------------------------------------------------------
# Unit tests: send_telegram_escalation — correct API call parameters
# ---------------------------------------------------------------------------
def test_send_telegram_escalation_url_contains_bot_token(tg_env, mock_urlopen_ok):
"""The request goes to a URL containing the correct bot token."""
send_telegram_escalation(
task_id="KIN-001",
project_name="Test Project",
agent_role="backend_dev",
reason="Cannot access DB",
pipeline_step="2",
)
req = mock_urlopen_ok.call_args[0][0]
assert "test-token-abc123" in req.full_url
assert "sendMessage" in req.full_url
def test_send_telegram_escalation_sends_to_correct_chat_id(tg_env, mock_urlopen_ok):
"""The POST request body contains the correct chat_id."""
send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Blocked",
pipeline_step="1",
)
req = mock_urlopen_ok.call_args[0][0]
body = json.loads(req.data.decode())
assert body["chat_id"] == "99887766"
def test_send_telegram_escalation_includes_task_id_in_message(tg_env, mock_urlopen_ok):
"""task_id is present in the message text."""
send_telegram_escalation(
task_id="KIN-TEST-007",
project_name="My Project",
agent_role="frontend_dev",
reason="No API access",
pipeline_step="3",
)
req = mock_urlopen_ok.call_args[0][0]
body = json.loads(req.data.decode())
assert "KIN-TEST-007" in body["text"]
def test_send_telegram_escalation_includes_agent_role_in_message(tg_env, mock_urlopen_ok):
"""agent_role is present in the message text."""
send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="sysadmin",
reason="SSH timeout",
pipeline_step="1",
)
req = mock_urlopen_ok.call_args[0][0]
body = json.loads(req.data.decode())
assert "sysadmin" in body["text"]
def test_send_telegram_escalation_includes_reason_in_message(tg_env, mock_urlopen_ok):
"""reason is present in the message text."""
send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Access denied to external API",
pipeline_step="2",
)
req = mock_urlopen_ok.call_args[0][0]
body = json.loads(req.data.decode())
assert "Access denied to external API" in body["text"]
def test_send_telegram_escalation_uses_post_method(tg_env, mock_urlopen_ok):
"""The request is sent with the POST method."""
send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Reason",
pipeline_step="1",
)
req = mock_urlopen_ok.call_args[0][0]
assert req.method == "POST"
def test_send_telegram_escalation_returns_true_on_success(tg_env, mock_urlopen_ok):
"""The function returns True on a successful HTTP 200 response."""
result = send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Reason",
pipeline_step="1",
)
assert result is True
def test_send_telegram_escalation_includes_pipeline_step_in_message(tg_env, mock_urlopen_ok):
"""pipeline_step is included in the message text."""
send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="debugger",
reason="Reason",
pipeline_step="5",
)
req = mock_urlopen_ok.call_args[0][0]
body = json.loads(req.data.decode())
assert "5" in body["text"]
# ---------------------------------------------------------------------------
# Graceful failure tests — Telegram API unavailable
# ---------------------------------------------------------------------------
def test_send_telegram_escalation_returns_false_on_url_error(tg_env):
"""The function returns False (does not raise) on urllib.error.URLError."""
with patch("urllib.request.urlopen", side_effect=urllib.error.URLError("Connection refused")):
result = send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Reason",
pipeline_step="1",
)
assert result is False
def test_send_telegram_escalation_returns_false_on_unexpected_exception(tg_env):
"""The function returns False (does not raise) on an unexpected error."""
with patch("urllib.request.urlopen", side_effect=RuntimeError("Unexpected!")):
result = send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Reason",
pipeline_step="1",
)
assert result is False
def test_send_telegram_escalation_never_raises_exception(tg_env):
"""The function never raises — the pipeline must not crash."""
with patch("urllib.request.urlopen", side_effect=Exception("Anything at all")):
try:
result = send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Reason",
pipeline_step="1",
)
except Exception as exc:
pytest.fail(f"send_telegram_escalation raised: {exc!r}")
assert result is False
def test_send_telegram_escalation_returns_false_on_http_non_200(tg_env):
"""The function returns False on an HTTP response != 200."""
mock_resp = MagicMock()
mock_resp.status = 403
mock_resp.__enter__ = lambda s: s
mock_resp.__exit__ = MagicMock(return_value=False)
with patch("urllib.request.urlopen", return_value=mock_resp):
result = send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Reason",
pipeline_step="1",
)
assert result is False
# ---------------------------------------------------------------------------
# Missing credentials tests
# ---------------------------------------------------------------------------
def test_send_telegram_escalation_returns_false_when_no_bot_token(monkeypatch):
"""Without a bot token the function returns False instead of crashing."""
monkeypatch.delenv("KIN_TG_BOT_TOKEN", raising=False)
monkeypatch.setenv("KIN_TG_CHAT_ID", "12345")
with patch("core.telegram._load_kin_config", return_value={}):
result = send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Reason",
pipeline_step="1",
)
assert result is False
def test_send_telegram_escalation_returns_false_when_no_chat_id(monkeypatch):
"""Without KIN_TG_CHAT_ID the function returns False instead of crashing."""
monkeypatch.setenv("KIN_TG_BOT_TOKEN", "some-token")
monkeypatch.delenv("KIN_TG_CHAT_ID", raising=False)
with patch("core.telegram._load_kin_config", return_value={"tg_bot": "some-token"}):
result = send_telegram_escalation(
task_id="KIN-001",
project_name="Test",
agent_role="pm",
reason="Reason",
pipeline_step="1",
)
assert result is False
# ---------------------------------------------------------------------------
# DB tests: mark_telegram_sent
# ---------------------------------------------------------------------------
def test_mark_telegram_sent_sets_flag_in_db(db_conn):
"""mark_telegram_sent() sets telegram_sent=1 in the DB."""
models.create_project(db_conn, "proj1", "Project 1", "/proj1")
models.create_task(db_conn, "PROJ1-001", "proj1", "Task 1")
task = models.get_task(db_conn, "PROJ1-001")
assert not bool(task.get("telegram_sent"))
models.mark_telegram_sent(db_conn, "PROJ1-001")
task = models.get_task(db_conn, "PROJ1-001")
assert bool(task["telegram_sent"]) is True
def test_mark_telegram_sent_does_not_affect_other_tasks(db_conn):
"""mark_telegram_sent() updates only the specified task."""
models.create_project(db_conn, "proj1", "Project 1", "/proj1")
models.create_task(db_conn, "PROJ1-001", "proj1", "Task 1")
models.create_task(db_conn, "PROJ1-002", "proj1", "Task 2")
models.mark_telegram_sent(db_conn, "PROJ1-001")
task2 = models.get_task(db_conn, "PROJ1-002")
assert not bool(task2.get("telegram_sent"))
def test_mark_telegram_sent_idempotent(db_conn):
"""A repeated mark_telegram_sent() call causes no errors."""
models.create_project(db_conn, "proj1", "Project 1", "/proj1")
models.create_task(db_conn, "PROJ1-001", "proj1", "Task 1")
models.mark_telegram_sent(db_conn, "PROJ1-001")
models.mark_telegram_sent(db_conn, "PROJ1-001") # second call
task = models.get_task(db_conn, "PROJ1-001")
assert bool(task["telegram_sent"]) is True
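The contract these tests fix — a POST to the Bot API `sendMessage` endpoint, `True` only on HTTP 200, `False` (never an exception) on any failure or missing credentials — can be sketched as follows. A sketch under assumptions: only the env-var credential path is shown (the real code also consults `_load_kin_config`), and the message format is illustrative:

```python
import json
import os
import urllib.request

def send_telegram_escalation(task_id, project_name, agent_role, reason, pipeline_step):
    """Return True on HTTP 200, False otherwise; swallow every exception."""
    token = os.environ.get("KIN_TG_BOT_TOKEN")
    chat_id = os.environ.get("KIN_TG_CHAT_ID")
    if not token or not chat_id:
        return False  # missing credentials: fail quietly, never raise
    text = (
        f"Escalation: {task_id} ({project_name})\n"
        f"Agent: {agent_role} | Step: {pipeline_step}\n"
        f"Reason: {reason}"
    )
    req = urllib.request.Request(
        f"https://api.telegram.org/bot{token}/sendMessage",
        data=json.dumps({"chat_id": chat_id, "text": text}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except Exception:
        return False  # network down, HTTP error, anything: the pipeline must not crash
```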

web/api.py (+1170 lines; diff suppressed because it is too large)
App.vue

@ -1,4 +1,5 @@
 <script setup lang="ts">
+import EscalationBanner from './components/EscalationBanner.vue'
 </script>
 <template>
@ -7,9 +8,13 @@
 <router-link to="/" class="text-lg font-bold text-gray-100 hover:text-white no-underline">
 Kin
 </router-link>
-<span class="text-xs text-gray-600">multi-agent orchestrator</span>
+<nav class="flex items-center gap-4">
+<EscalationBanner />
+<router-link to="/settings" class="text-xs text-gray-400 hover:text-gray-200 no-underline">Settings</router-link>
+<span class="text-xs text-gray-600">multi-agent orchestrator</span>
+</nav>
 </header>
-<main class="max-w-6xl mx-auto px-6 py-6">
+<main class="px-6 py-6">
 <router-view />
 </main>
 </div>


@ -0,0 +1,55 @@
/**
* KIN-075: full-screen-width test
* Verifies that App.vue does not constrain the content width (no max-w-* on <main>)
*/
import { describe, it, expect, vi, beforeEach } from 'vitest'
import { mount, flushPromises } from '@vue/test-utils'
import { createRouter, createMemoryHistory } from 'vue-router'
import App from '../App.vue'
vi.mock('../components/EscalationBanner.vue', () => ({
default: { template: '<div />' },
}))
function makeRouter() {
return createRouter({
history: createMemoryHistory(),
routes: [
{ path: '/', component: { template: '<div>home</div>' } },
{ path: '/settings', component: { template: '<div>settings</div>' } },
],
})
}
beforeEach(() => {
vi.clearAllMocks()
})
describe('KIN-075: App.vue — full screen width', () => {
it('<main> has no max-w-* class — content width is unconstrained', async () => {
const router = makeRouter()
await router.push('/')
const wrapper = mount(App, { global: { plugins: [router] } })
await flushPromises()
const main = wrapper.find('main')
expect(main.exists(), '<main> must exist in App.vue').toBe(true)
expect(
main.classes().some(c => c.startsWith('max-w-')),
'<main> must not have a width-constraining max-w-* class',
).toBe(false)
})
it('<main> does not contain the max-w-6xl class (KIN-075 regression)', async () => {
const router = makeRouter()
await router.push('/')
const wrapper = mount(App, { global: { plugins: [router] } })
await flushPromises()
const main = wrapper.find('main')
expect(main.classes()).not.toContain('max-w-6xl')
})
})


@ -0,0 +1,310 @@
/**
* KIN-UI-008: ChatView — error logging in the polling loop
*
* Verifies:
* 1. console.warn is called on every polling error
* 2. After 3 consecutive errors, error.value is set
* 3. After 3 errors, polling stops
* 4. The counter is reset on a successful response (error.value is reset too)
* 5. With fewer than 3 errors, error.value is not set
* 6. load() resets consecutiveErrors before loading starts
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'
import { mount, flushPromises } from '@vue/test-utils'
import { createRouter, createMemoryHistory } from 'vue-router'
import ChatView from '../views/ChatView.vue'
vi.mock('../api', async (importOriginal) => {
const actual = await importOriginal<typeof import('../api')>()
return {
...actual,
api: {
chatHistory: vi.fn(),
project: vi.fn(),
sendChatMessage: vi.fn(),
},
}
})
import { api } from '../api'
const Stub = { template: '<div />' }
function makeRouter() {
return createRouter({
history: createMemoryHistory(),
routes: [
{ path: '/', component: Stub },
{ path: '/project/:id', component: Stub },
{ path: '/chat/:projectId', component: ChatView, props: true },
],
})
}
const MOCK_MESSAGES_IDLE = [
{
id: 1,
project_id: 'KIN',
role: 'user',
content: 'Hello',
message_type: 'text',
task_stub: null,
created_at: '2024-01-01T00:00:00',
},
]
const MOCK_MESSAGES_WITH_RUNNING_TASK = [
{
id: 1,
project_id: 'KIN',
role: 'assistant',
content: 'Working...',
message_type: 'task_created',
task_stub: { id: 'KIN-001', status: 'in_progress' },
created_at: '2024-01-01T00:00:00',
},
]
const MOCK_PROJECT = {
id: 'KIN',
name: 'Kin',
path: '/projects/kin',
status: 'active',
}
beforeEach(() => {
vi.useFakeTimers()
vi.clearAllMocks()
vi.mocked(api.project).mockResolvedValue(MOCK_PROJECT as any)
})
afterEach(() => {
vi.useRealTimers()
vi.restoreAllMocks()
})
async function mountChatView(projectId = 'KIN') {
const router = makeRouter()
await router.push(`/chat/${projectId}`)
const wrapper = mount(ChatView, {
props: { projectId },
global: { plugins: [router] },
})
return wrapper
}
describe('KIN-UI-008: ChatView — polling error handling', () => {
describe('console.warn on polling errors', () => {
it('console.warn is called on the first polling error', async () => {
// First call (load) succeeds, the second one (polling) fails
vi.mocked(api.chatHistory)
.mockResolvedValueOnce(MOCK_MESSAGES_WITH_RUNNING_TASK as any)
.mockRejectedValue(new Error('Network error'))
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {})
const wrapper = await mountChatView()
await flushPromises()
// Trigger the first polling tick
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
// Filter only polling warnings (ignore the Vue Router warn)
const pollingCalls = warnSpy.mock.calls.filter(
args => typeof args[0] === 'string' && args[0].includes('[polling]'),
)
expect(pollingCalls).toHaveLength(1)
expect(pollingCalls[0][0]).toContain('[polling] ошибка #1:')
expect(pollingCalls[0][1]).toBeInstanceOf(Error)
wrapper.unmount()
})
it('console.warn carries an incrementing error number across multiple failures', async () => {
vi.mocked(api.chatHistory)
.mockResolvedValueOnce(MOCK_MESSAGES_WITH_RUNNING_TASK as any)
.mockRejectedValue(new Error('Server down'))
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {})
const wrapper = await mountChatView()
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
// Filter only polling warnings
const pollingCalls = warnSpy.mock.calls.filter(
args => typeof args[0] === 'string' && args[0].includes('[polling]'),
)
expect(pollingCalls).toHaveLength(2)
expect(pollingCalls[0][0]).toContain('#1:')
expect(pollingCalls[1][0]).toContain('#2:')
wrapper.unmount()
})
})
describe('error.value after 3 consecutive errors', () => {
it('error.value is set after 3 consecutive errors', async () => {
vi.mocked(api.chatHistory)
.mockResolvedValueOnce(MOCK_MESSAGES_WITH_RUNNING_TASK as any)
.mockRejectedValue(new Error('Server down'))
vi.spyOn(console, 'warn').mockImplementation(() => {})
const wrapper = await mountChatView()
await flushPromises()
// Errors 1, 2, 3
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
expect(wrapper.text()).toContain('Сервер недоступен')
wrapper.unmount()
})
it('error.value is NOT set after a single error', async () => {
vi.mocked(api.chatHistory)
.mockResolvedValueOnce(MOCK_MESSAGES_WITH_RUNNING_TASK as any)
.mockRejectedValueOnce(new Error('Transient error'))
.mockResolvedValue(MOCK_MESSAGES_IDLE as any)
vi.spyOn(console, 'warn').mockImplementation(() => {})
const wrapper = await mountChatView()
await flushPromises()
// Only 1 error, then success
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
expect(wrapper.text()).not.toContain('Сервер недоступен')
wrapper.unmount()
})
it('error.value is NOT set after 2 consecutive errors', async () => {
vi.mocked(api.chatHistory)
.mockResolvedValueOnce(MOCK_MESSAGES_WITH_RUNNING_TASK as any)
.mockRejectedValueOnce(new Error('err 1'))
.mockRejectedValueOnce(new Error('err 2'))
.mockResolvedValue(MOCK_MESSAGES_IDLE as any)
vi.spyOn(console, 'warn').mockImplementation(() => {})
const wrapper = await mountChatView()
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
expect(wrapper.text()).not.toContain('Сервер недоступен')
wrapper.unmount()
})
})
describe('counter reset on a successful response', () => {
it('after a successful response (reload via projectId change) error.value is cleared', async () => {
// First project: 3 errors → error.value is set
vi.mocked(api.chatHistory)
.mockResolvedValueOnce(MOCK_MESSAGES_WITH_RUNNING_TASK as any)
.mockRejectedValueOnce(new Error('err 1'))
.mockRejectedValueOnce(new Error('err 2'))
.mockRejectedValueOnce(new Error('err 3'))
vi.spyOn(console, 'warn').mockImplementation(() => {})
const wrapper = await mountChatView('KIN')
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
expect(wrapper.text()).toContain('Сервер недоступен')
// Switch to another project — triggers load(), which resets error.value
vi.mocked(api.chatHistory).mockResolvedValue(MOCK_MESSAGES_IDLE as any)
await wrapper.setProps({ projectId: 'KIN2' })
await flushPromises()
expect(wrapper.text()).not.toContain('Сервер недоступен')
wrapper.unmount()
})
it('after 2 errors and 1 success — the next 3 errors set error.value again', async () => {
vi.mocked(api.chatHistory)
// load()
.mockResolvedValueOnce(MOCK_MESSAGES_WITH_RUNNING_TASK as any)
// 2 errors
.mockRejectedValueOnce(new Error('err'))
.mockRejectedValueOnce(new Error('err'))
// success — counter reset
.mockResolvedValueOnce(MOCK_MESSAGES_WITH_RUNNING_TASK as any)
// 3 more errors — the error must appear again
.mockRejectedValueOnce(new Error('err'))
.mockRejectedValueOnce(new Error('err'))
.mockRejectedValue(new Error('err'))
vi.spyOn(console, 'warn').mockImplementation(() => {})
const wrapper = await mountChatView()
await flushPromises()
// 2 errors (counter = 2, no error shown)
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
expect(wrapper.text()).not.toContain('Сервер недоступен')
// Success (counter = 0)
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
// 3 more errors — the counter starts from 0 and reaches 3
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
expect(wrapper.text()).toContain('Сервер недоступен')
wrapper.unmount()
})
})
describe('polling stops after 3 errors', () => {
it('polling stops after 3 errors — further ticks do not call the api', async () => {
vi.mocked(api.chatHistory)
.mockResolvedValueOnce(MOCK_MESSAGES_WITH_RUNNING_TASK as any)
.mockRejectedValue(new Error('Server down'))
vi.spyOn(console, 'warn').mockImplementation(() => {})
const wrapper = await mountChatView()
await flushPromises()
// 3 ticks — the limit is reached
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
await vi.advanceTimersByTimeAsync(3000)
await flushPromises()
const callCountAfterStop = vi.mocked(api.chatHistory).mock.calls.length
// 3 more ticks — polling must be stopped, no new calls
await vi.advanceTimersByTimeAsync(9000)
await flushPromises()
expect(vi.mocked(api.chatHistory).mock.calls.length).toBe(callCountAfterStop)
wrapper.unmount()
})
})
})


@@ -0,0 +1,277 @@
/**
* KIN-083: Tests for the Claude CLI auth healthcheck frontend banners
*
* Verifies:
* 1. TaskDetail.vue: shows a banner on a claude_auth_required error from runTask
* 2. TaskDetail.vue: the banner can be dismissed with a button
* 3. TaskDetail.vue: happy path — no banner appears when runTask succeeds
* 4. ProjectView.vue: shows a banner on a claude_auth_required error from startPhase
* 5. ProjectView.vue: happy path — no banner appears when startPhase succeeds
*/
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'
import { mount, flushPromises } from '@vue/test-utils'
import { createRouter, createMemoryHistory } from 'vue-router'
import TaskDetail from '../views/TaskDetail.vue'
import ProjectView from '../views/ProjectView.vue'
// importOriginal keeps the real ApiError — needed for the instanceof check in the component
vi.mock('../api', async (importOriginal) => {
const actual = await importOriginal<typeof import('../api')>()
return {
...actual,
api: {
project: vi.fn(),
taskFull: vi.fn(),
runTask: vi.fn(),
startPhase: vi.fn(),
getPhases: vi.fn(),
patchTask: vi.fn(),
patchProject: vi.fn(),
auditProject: vi.fn(),
createTask: vi.fn(),
deployProject: vi.fn(),
notifications: vi.fn(),
},
}
})
import { api, ApiError } from '../api'
const Stub = { template: '<div />' }
const localStorageMock = (() => {
let store: Record<string, string> = {}
return {
getItem: (k: string) => store[k] ?? null,
setItem: (k: string, v: string) => { store[k] = v },
removeItem: (k: string) => { delete store[k] },
clear: () => { store = {} },
}
})()
Object.defineProperty(globalThis, 'localStorage', { value: localStorageMock, configurable: true })
function makeRouter() {
return createRouter({
history: createMemoryHistory(),
routes: [
{ path: '/', component: Stub },
{ path: '/project/:id', component: ProjectView, props: true },
{ path: '/task/:id', component: TaskDetail, props: true },
],
})
}
const MOCK_TASK = {
id: 'KIN-001',
project_id: 'KIN',
title: 'Тестовая задача',
status: 'pending',
priority: 5,
assigned_role: null,
parent_task_id: null,
brief: null,
spec: null,
execution_mode: null,
blocked_reason: null,
dangerously_skipped: null,
category: null,
acceptance_criteria: null,
created_at: '2024-01-01',
updated_at: '2024-01-01',
pipeline_steps: [],
related_decisions: [],
pending_actions: [],
}
const MOCK_PROJECT = {
id: 'KIN',
name: 'Kin',
path: '/projects/kin',
status: 'active',
priority: 5,
tech_stack: ['python', 'vue'],
execution_mode: 'review',
autocommit_enabled: 0,
obsidian_vault_path: null,
deploy_command: null,
created_at: '2024-01-01',
total_tasks: 1,
done_tasks: 0,
active_tasks: 1,
blocked_tasks: 0,
review_tasks: 0,
project_type: 'development',
ssh_host: null,
ssh_user: null,
ssh_key_path: null,
ssh_proxy_jump: null,
description: null,
tasks: [],
decisions: [],
modules: [],
}
const MOCK_ACTIVE_PHASE = {
id: 1,
project_id: 'KIN',
role: 'pm',
phase_order: 1,
status: 'active',
task_id: 'KIN-R-001',
revise_count: 0,
revise_comment: null,
created_at: '2024-01-01',
updated_at: '2024-01-01',
task: {
id: 'KIN-R-001',
status: 'pending',
title: 'Research',
priority: 5,
assigned_role: 'pm',
parent_task_id: null,
brief: null,
spec: null,
execution_mode: null,
blocked_reason: null,
dangerously_skipped: null,
category: null,
acceptance_criteria: null,
project_id: 'KIN',
created_at: '2024-01-01',
updated_at: '2024-01-01',
},
}
beforeEach(() => {
localStorageMock.clear()
vi.clearAllMocks()
vi.mocked(api.project).mockResolvedValue(MOCK_PROJECT as any)
vi.mocked(api.taskFull).mockResolvedValue(MOCK_TASK as any)
vi.mocked(api.runTask).mockResolvedValue({ status: 'started' } as any)
vi.mocked(api.startPhase).mockResolvedValue({ status: 'started', phase_id: 1, task_id: 'KIN-R-001' })
vi.mocked(api.getPhases).mockResolvedValue([])
vi.mocked(api.notifications).mockResolvedValue([])
})
afterEach(() => {
vi.restoreAllMocks()
})
// ─────────────────────────────────────────────────────────────
// TaskDetail: banner on claude_auth_required
// ─────────────────────────────────────────────────────────────
describe('KIN-083: TaskDetail — claude auth banner', () => {
async function mountTaskDetail() {
const router = makeRouter()
await router.push('/task/KIN-001')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-001' },
global: { plugins: [router] },
})
await flushPromises()
return wrapper
}
it('shows the "Claude CLI requires login" banner on a claude_auth_required error from runTask', async () => {
vi.mocked(api.runTask).mockRejectedValue(
new ApiError('claude_auth_required', 'Claude CLI requires login. Run: claude login'),
)
const wrapper = await mountTaskDetail()
const runBtn = wrapper.findAll('button').find(b => b.text().includes('Run Pipeline'))
expect(runBtn?.exists(), 'The Run Pipeline button should be visible for a pending task').toBe(true)
await runBtn!.trigger('click')
await flushPromises()
expect(wrapper.text(), 'The banner should contain the auth error text').toContain('Claude CLI requires login')
})
it('the banner is dismissed with the ✕ button', async () => {
vi.mocked(api.runTask).mockRejectedValue(
new ApiError('claude_auth_required', 'Claude CLI requires login. Run: claude login'),
)
const wrapper = await mountTaskDetail()
const runBtn = wrapper.findAll('button').find(b => b.text().includes('Run Pipeline'))
await runBtn!.trigger('click')
await flushPromises()
expect(wrapper.text()).toContain('Claude CLI requires login')
const closeBtn = wrapper.findAll('button').find(b => b.text().trim() === '✕')
expect(closeBtn?.exists(), 'The ✕ button should be visible').toBe(true)
await closeBtn!.trigger('click')
await flushPromises()
expect(wrapper.text(), 'After dismissal the banner should no longer be visible').not.toContain('Claude CLI requires login')
})
it('does not show the banner when runTask succeeds (happy path)', async () => {
const wrapper = await mountTaskDetail()
const runBtn = wrapper.findAll('button').find(b => b.text().includes('Run Pipeline'))
if (runBtn?.exists()) {
await runBtn.trigger('click')
await flushPromises()
}
expect(wrapper.text(), 'The banner should not appear on a successful run').not.toContain('Claude CLI requires login')
})
})
// ─────────────────────────────────────────────────────────────
// ProjectView: banner on claude_auth_required
// ─────────────────────────────────────────────────────────────
describe('KIN-083: ProjectView — claude auth banner', () => {
async function mountOnPhases() {
vi.mocked(api.getPhases).mockResolvedValue([MOCK_ACTIVE_PHASE] as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const phasesTab = wrapper.findAll('button').find(b => b.text().includes('Phases'))
await phasesTab!.trigger('click')
await flushPromises()
return wrapper
}
it('shows the "Claude CLI requires login" banner on a claude_auth_required error from startPhase', async () => {
vi.mocked(api.startPhase).mockRejectedValue(
new ApiError('claude_auth_required', 'Claude CLI requires login. Run: claude login'),
)
const wrapper = await mountOnPhases()
const startBtn = wrapper.findAll('button').find(b => b.text().includes('Start Research'))
expect(startBtn?.exists(), 'The Start Research button should be visible').toBe(true)
await startBtn!.trigger('click')
await flushPromises()
expect(wrapper.text(), 'The banner should contain the auth error text').toContain('Claude CLI requires login')
})
it('does not show the banner when startPhase succeeds (happy path)', async () => {
const wrapper = await mountOnPhases()
const startBtn = wrapper.findAll('button').find(b => b.text().includes('Start Research'))
if (startBtn?.exists()) {
await startBtn.trigger('click')
await flushPromises()
}
expect(wrapper.text(), 'The banner should not appear when the phase starts successfully').not.toContain('Claude CLI requires login')
})
})
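The importOriginal spread at the top of this file is what makes the banner tests work: only the api.* calls are stubbed, while the real ApiError class is re-exported, so instanceof checks inside the components still pass. A minimal illustration of the pattern — the ApiError shape and the bannerTextFor helper below are simplified assumptions, not the actual ../api or component code:

```typescript
// Simplified stand-in for the real ApiError exported from ../api.
class ApiError extends Error {
  constructor(public code: string, message: string) {
    super(message)
  }
}

// Assumed component-side handling: only a claude_auth_required
// ApiError should surface the login banner; anything else does not.
function bannerTextFor(e: unknown): string | null {
  if (e instanceof ApiError && e.code === 'claude_auth_required') {
    return e.message
  }
  return null
}
```

If the mock factory replaced the whole module instead of spreading `...actual`, the components would import a different ApiError class and the instanceof check would silently fail, hiding the banner even on auth errors.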


@@ -0,0 +1,449 @@
/**
* KIN-FIX-002: Unify localStorage execution_mode values on 'auto_complete'
*
* Acceptance Criteria:
* 1. Lines 46 and 53 in TaskDetail.vue use 'auto_complete' in their localStorage operations
* 2. All execution_mode occurrences use 'auto_complete', not 'auto'
* 3. A grep across the frontend finds no standalone 'auto' as an execution_mode value
* 4. The existing filter-persistence.test.ts tests pass
*/
import { describe, it, expect, vi, beforeEach } from 'vitest'
import { mount, flushPromises } from '@vue/test-utils'
import { createRouter, createMemoryHistory } from 'vue-router'
import ProjectView from '../views/ProjectView.vue'
import TaskDetail from '../views/TaskDetail.vue'
// Mock api
vi.mock('../api', () => ({
api: {
project: vi.fn(),
taskFull: vi.fn(),
patchTask: vi.fn(),
patchProject: vi.fn(),
},
}))
import { api } from '../api'
const Stub = { template: '<div />' }
const MOCK_PROJECT = {
id: 'KIN',
name: 'Kin',
path: '/projects/kin',
status: 'active',
priority: 5,
tech_stack: ['python', 'vue'],
created_at: '2024-01-01',
total_tasks: 1,
done_tasks: 0,
active_tasks: 1,
blocked_tasks: 0,
review_tasks: 0,
tasks: [],
decisions: [],
modules: [],
}
const localStorageMock = (() => {
let store: Record<string, string> = {}
return {
getItem: (k: string) => store[k] ?? null,
setItem: (k: string, v: string) => { store[k] = v },
removeItem: (k: string) => { delete store[k] },
clear: () => { store = {} },
}
})()
Object.defineProperty(globalThis, 'localStorage', { value: localStorageMock, configurable: true })
function makeRouter() {
return createRouter({
history: createMemoryHistory(),
routes: [
{ path: '/', component: Stub },
{ path: '/project/:id', component: ProjectView, props: true },
{ path: '/task/:id', component: TaskDetail, props: true },
],
})
}
beforeEach(() => {
localStorageMock.clear()
vi.mocked(api.project).mockResolvedValue(MOCK_PROJECT as any)
})
describe('KIN-FIX-002: execution_mode unification on "auto_complete"', () => {
describe('TaskDetail.vue — localStorage operations (lines 46, 53)', () => {
it('toggleMode in TaskDetail stores "auto_complete" in localStorage', async () => {
const task = {
id: 'KIN-001',
project_id: 'KIN',
title: 'Test Task',
status: 'pending',
priority: 5,
assigned_role: null,
parent_task_id: null,
brief: null,
spec: null,
execution_mode: null,
created_at: '2024-01-01',
updated_at: '2024-01-01',
pipeline_steps: [],
related_decisions: [],
}
vi.mocked(api.taskFull).mockResolvedValue(task as any)
vi.mocked(api.patchTask).mockResolvedValue({ execution_mode: 'auto_complete' } as any)
const router = makeRouter()
await router.push('/task/KIN-001')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-001' },
global: { plugins: [router] },
})
await flushPromises()
// Find and click the mode toggle button (Auto/Review)
const toggleBtn = wrapper.findAll('button').find(b =>
b.text().includes('Auto') || b.text().includes('Review')
)
if (toggleBtn) {
await toggleBtn.trigger('click')
await flushPromises()
// Verify that localStorage contains 'auto_complete', not 'auto'
const stored = localStorageMock.getItem('kin-mode-KIN')
expect(stored, 'localStorage should contain "auto_complete"').toBe('auto_complete')
}
})
it('loadMode in TaskDetail uses "auto_complete" when reading from localStorage', async () => {
// Seed the value in localStorage first
localStorageMock.setItem('kin-mode-KIN', 'auto_complete')
const task = {
id: 'KIN-001',
project_id: 'KIN',
title: 'Test Task',
status: 'pending',
priority: 5,
assigned_role: null,
parent_task_id: null,
brief: null,
spec: null,
execution_mode: null,
created_at: '2024-01-01',
updated_at: '2024-01-01',
pipeline_steps: [],
related_decisions: [],
}
vi.mocked(api.taskFull).mockResolvedValue(task as any)
const router = makeRouter()
await router.push('/task/KIN-001')
mount(TaskDetail, {
props: { id: 'KIN-001' },
global: { plugins: [router] },
})
await flushPromises()
// Verify the value read from localStorage is 'auto_complete'
const stored = localStorageMock.getItem('kin-mode-KIN')
expect(stored).toBe('auto_complete')
})
})
describe('ProjectView.vue — localStorage operations (lines 171, 173, 179, 181, 182)', () => {
it('toggleMode in ProjectView stores "auto_complete" in localStorage', async () => {
vi.mocked(api.patchProject).mockResolvedValue({ execution_mode: 'auto_complete' } as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
// Find and click the mode toggle button
const toggleBtn = wrapper.findAll('button').find(b =>
b.text().includes('Auto') || b.text().includes('Review')
)
if (toggleBtn) {
await toggleBtn.trigger('click')
await flushPromises()
// Verify that localStorage contains 'auto_complete', not 'auto'
const stored = localStorageMock.getItem('kin-mode-KIN')
expect(stored, 'localStorage should contain "auto_complete" in ProjectView').toBe('auto_complete')
}
})
it('loadMode in ProjectView uses "auto_complete" when reading from localStorage', async () => {
// Seed the value in localStorage
localStorageMock.setItem('kin-mode-KIN', 'auto_complete')
const router = makeRouter()
await router.push('/project/KIN')
mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
// Verify the value is read back from localStorage correctly
const stored = localStorageMock.getItem('kin-mode-KIN')
expect(stored).toBe('auto_complete')
})
})
describe('Unification: all values use "auto_complete"', () => {
it('execution_mode never uses standalone "auto"', async () => {
// Verify that saving the mode uses ONLY 'auto_complete' or 'review'
const task = {
id: 'KIN-001',
project_id: 'KIN',
title: 'Test Task',
status: 'pending',
priority: 5,
assigned_role: null,
parent_task_id: null,
brief: null,
spec: null,
execution_mode: 'auto_complete',
created_at: '2024-01-01',
updated_at: '2024-01-01',
pipeline_steps: [],
related_decisions: [],
}
vi.mocked(api.taskFull).mockResolvedValue(task as any)
vi.mocked(api.patchTask).mockResolvedValue({ execution_mode: 'review' } as any)
const router = makeRouter()
await router.push('/task/KIN-001')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-001' },
global: { plugins: [router] },
})
await flushPromises()
// Switch to review mode
const toggleBtn = wrapper.findAll('button').find(b =>
b.text().includes('Auto') || b.text().includes('Review')
)
if (toggleBtn) {
await toggleBtn.trigger('click')
await flushPromises()
const stored = localStorageMock.getItem('kin-mode-KIN')
// Verify that only 'auto_complete' or 'review' is used
expect(
stored === 'auto_complete' || stored === 'review',
`localStorage should contain 'auto_complete' or 'review', got: "${stored}"`
).toBe(true)
// Verify that 'auto' is NOT used
expect(stored).not.toBe('auto')
}
})
it('code comparisons use "auto_complete", not "auto"', async () => {
// Set 'auto_complete' and check the component resolves the mode correctly
localStorageMock.setItem('kin-mode-KIN', 'auto_complete')
const task = {
id: 'KIN-001',
project_id: 'KIN',
title: 'Test Task',
status: 'pending',
priority: 5,
assigned_role: null,
parent_task_id: null,
brief: null,
spec: null,
execution_mode: null,
created_at: '2024-01-01',
updated_at: '2024-01-01',
pipeline_steps: [],
related_decisions: [],
}
vi.mocked(api.taskFull).mockResolvedValue(task as any)
const router = makeRouter()
await router.push('/task/KIN-001')
mount(TaskDetail, {
props: { id: 'KIN-001' },
global: { plugins: [router] },
})
await flushPromises()
// After loading, the component should read 'auto_complete' from localStorage
// and apply the mode correctly (visible via the presence or absence of the Approve/Reject buttons)
const stored = localStorageMock.getItem('kin-mode-KIN')
expect(stored).toBe('auto_complete')
})
})
})
describe('KIN-077: Review/Auto button — regression (400 Bad Request fix)', () => {
describe('ProjectView — patchProject is called with a valid enum value', () => {
it('switching review→auto sends "auto_complete", not "auto"', async () => {
const projectReview = { ...MOCK_PROJECT, execution_mode: 'review' }
vi.mocked(api.project).mockResolvedValue(projectReview as any)
vi.mocked(api.patchProject).mockResolvedValue({ execution_mode: 'auto_complete' } as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const toggleBtn = wrapper.findAll('button').find(b =>
b.text().includes('Review') || b.text().includes('Auto')
)
expect(toggleBtn, 'the toggle button should be found').toBeDefined()
await toggleBtn!.trigger('click')
await flushPromises()
// Key assertion: patchProject is called with 'auto_complete', not 'auto' (the cause of the 400)
expect(vi.mocked(api.patchProject)).toHaveBeenCalledWith('KIN', {
execution_mode: 'auto_complete',
})
const callArg = vi.mocked(api.patchProject).mock.calls[0][1] as { execution_mode: string }
expect(callArg.execution_mode).not.toBe('auto')
})
it('switching auto→review sends "review"', async () => {
const projectAuto = { ...MOCK_PROJECT, execution_mode: 'auto_complete' }
vi.mocked(api.project).mockResolvedValue(projectAuto as any)
vi.mocked(api.patchProject).mockResolvedValue({ execution_mode: 'review' } as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const toggleBtn = wrapper.findAll('button').find(b =>
b.text().includes('Auto') || b.text().includes('Review')
)
expect(toggleBtn).toBeDefined()
await toggleBtn!.trigger('click')
await flushPromises()
expect(vi.mocked(api.patchProject)).toHaveBeenCalledWith('KIN', {
execution_mode: 'review',
})
})
})
describe('ProjectView — the button displays the current mode', () => {
it('shows "Review" on the button when the project is in "review" mode', async () => {
const projectReview = { ...MOCK_PROJECT, execution_mode: 'review' }
vi.mocked(api.project).mockResolvedValue(projectReview as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const toggleBtn = wrapper.findAll('button').find(b =>
b.text().includes('Review') || b.text().includes('Auto')
)
expect(toggleBtn).toBeDefined()
expect(toggleBtn!.text()).toContain('Review')
})
it('shows "Auto" on the button when the project is in "auto_complete" mode', async () => {
const projectAuto = { ...MOCK_PROJECT, execution_mode: 'auto_complete' }
vi.mocked(api.project).mockResolvedValue(projectAuto as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const toggleBtn = wrapper.findAll('button').find(b =>
b.text().includes('Auto') || b.text().includes('Review')
)
expect(toggleBtn).toBeDefined()
expect(toggleBtn!.text()).toContain('Auto')
})
it('after a review→auto click the button text changes to "Auto"', async () => {
const projectReview = { ...MOCK_PROJECT, execution_mode: 'review' }
vi.mocked(api.project).mockResolvedValue(projectReview as any)
vi.mocked(api.patchProject).mockResolvedValue({ execution_mode: 'auto_complete' } as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const findToggleBtn = () =>
wrapper.findAll('button').find(b => b.text().includes('Auto') || b.text().includes('Review'))
expect(findToggleBtn()!.text()).toContain('Review')
await findToggleBtn()!.trigger('click')
await flushPromises()
expect(findToggleBtn()!.text()).toContain('Auto')
})
it('a double click returns the button to "Review"', async () => {
const projectReview = { ...MOCK_PROJECT, execution_mode: 'review' }
vi.mocked(api.project).mockResolvedValue(projectReview as any)
vi.mocked(api.patchProject)
.mockResolvedValueOnce({ execution_mode: 'auto_complete' } as any)
.mockResolvedValueOnce({ execution_mode: 'review' } as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const findToggleBtn = () =>
wrapper.findAll('button').find(b => b.text().includes('Auto') || b.text().includes('Review'))
await findToggleBtn()!.trigger('click')
await flushPromises()
expect(findToggleBtn()!.text()).toContain('Auto')
await findToggleBtn()!.trigger('click')
await flushPromises()
expect(findToggleBtn()!.text()).toContain('Review')
})
})
})
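The regression cases above all reduce to one invariant: the toggle flips between exactly two backend enum values, 'review' and 'auto_complete', and never produces the legacy 'auto' that caused the 400. A sketch of that invariant under stated assumptions — toggleExecutionMode, persistMode, and the Map-backed store are illustrative, not the actual component code (the `kin-mode-<projectId>` key mirrors the one used by these tests):

```typescript
type ExecutionMode = 'review' | 'auto_complete'

// Flip between the two valid enum values; 'auto' must never be produced.
function toggleExecutionMode(current: ExecutionMode): ExecutionMode {
  return current === 'review' ? 'auto_complete' : 'review'
}

// Persist per project, mirroring the kin-mode-<projectId> localStorage key.
function persistMode(store: Map<string, string>, projectId: string, mode: ExecutionMode): void {
  store.set(`kin-mode-${projectId}`, mode)
}
```

With the value space narrowed to a union type, a stray 'auto' becomes a compile-time error rather than a runtime 400 from the backend.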


@@ -27,6 +27,8 @@ vi.mock('../api', () => ({
 auditProject: vi.fn(),
 createTask: vi.fn(),
 patchTask: vi.fn(),
+patchProject: vi.fn(),
+deployProject: vi.fn(),
 },
 }))
@@ -372,7 +374,7 @@ describe('KIN-011: TaskDetail — возврат с сохранением URL',
 // ─────────────────────────────────────────────────────────────
 describe('KIN-047: TaskDetail — Approve/Reject в статусе review', () => {
-function makeTaskWith(status: string, executionMode: 'auto' | 'review' | null = null) {
+function makeTaskWith(status: string, executionMode: 'auto_complete' | 'review' | null = null) {
 return {
 id: 'KIN-047',
 project_id: 'KIN',
@@ -410,7 +412,7 @@ describe('KIN-047: TaskDetail — Approve/Reject в статусе review', () =
 })
 it('Approve и Reject скрыты при autoMode в статусе review', async () => {
-vi.mocked(api.taskFull).mockResolvedValue(makeTaskWith('review', 'auto') as any)
+vi.mocked(api.taskFull).mockResolvedValue(makeTaskWith('review', 'auto_complete') as any)
 const router = makeRouter()
 await router.push('/task/KIN-047')
@@ -428,7 +430,7 @@ describe('KIN-047: TaskDetail — Approve/Reject в статусе review', () =
 })
 it('Тоггл Auto/Review виден в статусе review при autoMode (позволяет выйти из автопилота)', async () => {
-vi.mocked(api.taskFull).mockResolvedValue(makeTaskWith('review', 'auto') as any)
+vi.mocked(api.taskFull).mockResolvedValue(makeTaskWith('review', 'auto_complete') as any)
 const router = makeRouter()
 await router.push('/task/KIN-047')
@@ -444,7 +446,7 @@ describe('KIN-047: TaskDetail — Approve/Reject в статусе review', () =
 })
 it('После клика тоггла в review+autoMode появляются Approve и Reject', async () => {
-const task = makeTaskWith('review', 'auto')
+const task = makeTaskWith('review', 'auto_complete')
 vi.mocked(api.taskFull).mockResolvedValue(task as any)
 vi.mocked(api.patchTask).mockResolvedValue({ execution_mode: 'review' } as any)
@@ -473,8 +475,8 @@ describe('KIN-047: TaskDetail — Approve/Reject в статусе review', () =
 it('KIN-051: Approve и Reject видны при статусе review и execution_mode=null (фикс баги)', async () => {
 // Воспроизводит баг: задача в review без явного execution_mode зависала
-// без кнопок, потому что localStorage мог содержать 'auto'
-localStorageMock.setItem('kin-mode-KIN', 'auto') // имитируем "плохой" localStorage
+// без кнопок, потому что localStorage мог содержать 'auto_complete'
+localStorageMock.setItem('kin-mode-KIN', 'auto_complete') // имитируем "плохой" localStorage
 vi.mocked(api.taskFull).mockResolvedValue(makeTaskWith('review', null) as any)
 const router = makeRouter()
 await router.push('/task/KIN-047')
@@ -509,3 +511,401 @@
 }
 })
 })
// ─────────────────────────────────────────────────────────────
// KIN-065: Autocommit toggle in ProjectView
// ─────────────────────────────────────────────────────────────
describe('KIN-065: ProjectView — Autocommit toggle', () => {
it('the Autocommit button is present in the DOM', async () => {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const btn = wrapper.findAll('button').find(b => b.text().includes('Autocommit'))
expect(btn?.exists()).toBe(true)
})
it('the button has title "Autocommit: off" when autocommit_enabled=0', async () => {
vi.mocked(api.project).mockResolvedValue({ ...MOCK_PROJECT, autocommit_enabled: 0 } as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const btn = wrapper.findAll('button').find(b => b.text().includes('Autocommit'))
expect(btn?.attributes('title')).toBe('Autocommit: off')
})
it('Button has title "Autocommit: on..." when autocommit_enabled=1', async () => {
vi.mocked(api.project).mockResolvedValue({ ...MOCK_PROJECT, autocommit_enabled: 1 } as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const btn = wrapper.findAll('button').find(b => b.text().includes('Autocommit'))
expect(btn?.attributes('title')).toContain('Autocommit: on')
})
it('Clicking the button calls patchProject with autocommit_enabled=true (enable)', async () => {
vi.mocked(api.project).mockResolvedValue({ ...MOCK_PROJECT, autocommit_enabled: 0 } as any)
vi.mocked(api.patchProject).mockResolvedValue({ ...MOCK_PROJECT, autocommit_enabled: 1 } as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const btn = wrapper.findAll('button').find(b => b.text().includes('Autocommit'))
await btn!.trigger('click')
await flushPromises()
expect(api.patchProject).toHaveBeenCalledWith('KIN', { autocommit_enabled: true })
})
it('Clicking the enabled button calls patchProject with autocommit_enabled=false (disable)', async () => {
vi.mocked(api.project).mockResolvedValue({ ...MOCK_PROJECT, autocommit_enabled: 1 } as any)
vi.mocked(api.patchProject).mockResolvedValue({ ...MOCK_PROJECT, autocommit_enabled: 0 } as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const btn = wrapper.findAll('button').find(b => b.text().includes('Autocommit'))
await btn!.trigger('click')
await flushPromises()
expect(api.patchProject).toHaveBeenCalledWith('KIN', { autocommit_enabled: false })
})
it('When patchProject fails, an error message is displayed (template shows the error instead of the project)', async () => {
// On failure the component renders <div v-else-if="error"> instead of the project section.
// From the user's point of view this is the observable rollback: buttons hidden, error visible.
vi.mocked(api.project).mockResolvedValue({ ...MOCK_PROJECT, autocommit_enabled: 0 } as any)
vi.mocked(api.patchProject).mockRejectedValue(new Error('Network error'))
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const btn = wrapper.findAll('button').find(b => b.text().includes('Autocommit'))
await btn!.trigger('click')
await flushPromises()
// The catch block set error.value → the component shows the error message
expect(wrapper.text()).toContain('Network error')
})
})
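The toggle contract exercised above can be sketched as follows. This is a hypothetical sketch, not the actual ProjectView code; `toggleAutocommit` and `PatchFn` are illustrative names, with `patch` standing in for `api.patchProject`:

```typescript
// Hypothetical sketch of the autocommit toggle behavior pinned down by KIN-065:
// send the inverted flag; on failure surface the error instead of the project.
type Project = { id: string; autocommit_enabled: 0 | 1 }
type PatchFn = (id: string, body: { autocommit_enabled: boolean }) => Promise<Project>

async function toggleAutocommit(
  project: Project,
  patch: PatchFn,
): Promise<{ project: Project | null; error: string | null }> {
  try {
    const updated = await patch(project.id, { autocommit_enabled: !project.autocommit_enabled })
    return { project: updated, error: null }
  } catch (e) {
    // Mirrors the <div v-else-if="error"> branch: buttons hidden, message shown
    return { project: null, error: e instanceof Error ? e.message : String(e) }
  }
}
```

The error path returns a message rather than throwing, which is what lets the template swap the project section for the error block.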
// ─────────────────────────────────────────────────────────────
// KIN-015: TaskDetail — Edit button and edit form
// ─────────────────────────────────────────────────────────────
describe('KIN-015: TaskDetail — Edit button and edit form', () => {
function makePendingTask(overrides: Record<string, unknown> = {}) {
return {
...MOCK_TASK_FULL,
id: 'KIN-015',
project_id: 'KIN',
title: 'Pending Task',
status: 'pending',
priority: 5,
brief: { text: 'Task description', route_type: 'feature' },
execution_mode: null,
...overrides,
}
}
beforeEach(() => {
vi.mocked(api.patchTask).mockReset()
})
it('Edit button is visible for a task with pending status', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makePendingTask() as any)
const router = makeRouter()
await router.push('/task/KIN-015')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-015' },
global: { plugins: [router] },
})
await flushPromises()
const editBtn = wrapper.findAll('button').find(b => b.text().includes('Edit'))
expect(editBtn?.exists(), 'Edit button should be visible for pending').toBe(true)
})
it('Edit button is hidden for in_progress status', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makePendingTask({ status: 'in_progress' }) as any)
const router = makeRouter()
await router.push('/task/KIN-015')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-015' },
global: { plugins: [router] },
})
await flushPromises()
const hasEditBtn = wrapper.findAll('button').some(b => b.text().includes('Edit'))
expect(hasEditBtn, 'Edit button should not be visible for in_progress').toBe(false)
})
it('Edit button is hidden for done status', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makePendingTask({ status: 'done' }) as any)
const router = makeRouter()
await router.push('/task/KIN-015')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-015' },
global: { plugins: [router] },
})
await flushPromises()
const hasEditBtn = wrapper.findAll('button').some(b => b.text().includes('Edit'))
expect(hasEditBtn, 'Edit button should not be visible for done').toBe(false)
})
it('Clicking Edit opens the form pre-filled with the task title', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makePendingTask() as any)
const router = makeRouter()
await router.push('/task/KIN-015')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-015' },
global: { plugins: [router] },
})
await flushPromises()
const editBtn = wrapper.findAll('button').find(b => b.text().includes('Edit'))
await editBtn!.trigger('click')
await flushPromises()
// Modal is open — the title field (input without a type) contains the current title
const titleInput = wrapper.find('input:not([type])')
expect(titleInput.exists(), 'Title field should be visible in the modal').toBe(true)
expect((titleInput.element as HTMLInputElement).value).toBe('Pending Task')
})
it('saveEdit calls patchTask with only the changed fields (title only)', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makePendingTask() as any)
vi.mocked(api.patchTask).mockResolvedValue(makePendingTask({ title: 'New title' }) as any)
const router = makeRouter()
await router.push('/task/KIN-015')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-015' },
global: { plugins: [router] },
})
await flushPromises()
const editBtn = wrapper.findAll('button').find(b => b.text().includes('Edit'))
await editBtn!.trigger('click')
await flushPromises()
// Change only the title and submit the form
const titleInput = wrapper.find('input:not([type])')
await titleInput.setValue('New title')
await wrapper.find('form').trigger('submit')
await flushPromises()
expect(api.patchTask).toHaveBeenCalledWith('KIN-015', { title: 'New title' })
})
it('saveEdit does not call patchTask when nothing changed', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makePendingTask() as any)
const router = makeRouter()
await router.push('/task/KIN-015')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-015' },
global: { plugins: [router] },
})
await flushPromises()
// Open the modal without changes and submit the form
const editBtn = wrapper.findAll('button').find(b => b.text().includes('Edit'))
await editBtn!.trigger('click')
await flushPromises()
await wrapper.find('form').trigger('submit')
await flushPromises()
expect(api.patchTask, 'patchTask should not be called on an empty diff').not.toHaveBeenCalled()
})
it('Modal closes after a successful save', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makePendingTask() as any)
vi.mocked(api.patchTask).mockResolvedValue(makePendingTask({ title: 'Updated title' }) as any)
const router = makeRouter()
await router.push('/task/KIN-015')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-015' },
global: { plugins: [router] },
})
await flushPromises()
const editBtn = wrapper.findAll('button').find(b => b.text().includes('Edit'))
await editBtn!.trigger('click')
await flushPromises()
const titleInput = wrapper.find('input:not([type])')
await titleInput.setValue('Updated title')
await wrapper.find('form').trigger('submit')
await flushPromises()
// Modal is closed — the form with the title input is no longer in the DOM
expect(wrapper.find('input:not([type])').exists(), 'Form should close after saving').toBe(false)
})
})
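The "only changed fields" behavior above can be sketched as a small diff builder. This is hypothetical; the actual saveEdit implementation in TaskDetail may differ, and `changedFields` is an illustrative name:

```typescript
// Hypothetical diff builder: keep only the fields whose value actually changed.
// A caller would skip api.patchTask entirely when the diff comes back empty.
function changedFields<T extends Record<string, unknown>>(original: T, edited: T): Partial<T> {
  const diff: Partial<T> = {}
  for (const key of Object.keys(edited) as (keyof T)[]) {
    if (edited[key] !== original[key]) diff[key] = edited[key]
  }
  return diff
}
```

With an edit touching only the title, the diff contains just `{ title: ... }`, matching the `toHaveBeenCalledWith('KIN-015', { title: ... })` assertion, and an unchanged form yields an empty diff, matching the `not.toHaveBeenCalled()` assertion.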
// ─────────────────────────────────────────────────────────────
// KIN-049: TaskDetail — Deploy button
// ─────────────────────────────────────────────────────────────
describe('KIN-049: TaskDetail — Deploy button', () => {
function makeDeployTask(status: string, deployCommand: string | null) {
return {
id: 'KIN-049',
project_id: 'KIN',
title: 'Deploy Task',
status,
priority: 3,
assigned_role: null,
parent_task_id: null,
brief: null,
spec: null,
execution_mode: null,
project_deploy_command: deployCommand,
created_at: '2024-01-01',
updated_at: '2024-01-01',
pipeline_steps: [],
related_decisions: [],
}
}
it('Deploy button is visible when status=done and project_deploy_command is set', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makeDeployTask('done', 'git push origin main') as any)
const router = makeRouter()
await router.push('/task/KIN-049')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-049' },
global: { plugins: [router] },
})
await flushPromises()
const deployBtn = wrapper.findAll('button').find(b => b.text().includes('Deploy'))
expect(deployBtn?.exists(), 'Deploy button should be visible with done + deploy_command').toBe(true)
})
it('Deploy button is hidden when status=done but project_deploy_command is missing', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makeDeployTask('done', null) as any)
const router = makeRouter()
await router.push('/task/KIN-049')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-049' },
global: { plugins: [router] },
})
await flushPromises()
const hasDeployBtn = wrapper.findAll('button').some(b => b.text().includes('Deploy'))
expect(hasDeployBtn, 'Deploy should not be visible without deploy_command').toBe(false)
})
it('Deploy button is hidden when status=pending (even with deploy_command)', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makeDeployTask('pending', 'git push') as any)
const router = makeRouter()
await router.push('/task/KIN-049')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-049' },
global: { plugins: [router] },
})
await flushPromises()
const hasDeployBtn = wrapper.findAll('button').some(b => b.text().includes('Deploy'))
expect(hasDeployBtn, 'Deploy should not be visible for pending status').toBe(false)
})
it('Deploy button is hidden when status=in_progress', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makeDeployTask('in_progress', 'git push') as any)
const router = makeRouter()
await router.push('/task/KIN-049')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-049' },
global: { plugins: [router] },
})
await flushPromises()
const hasDeployBtn = wrapper.findAll('button').some(b => b.text().includes('Deploy'))
expect(hasDeployBtn, 'Deploy should not be visible for in_progress status').toBe(false)
})
it('Deploy button is hidden when status=review', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makeDeployTask('review', 'git push') as any)
const router = makeRouter()
await router.push('/task/KIN-049')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-049' },
global: { plugins: [router] },
})
await flushPromises()
const hasDeployBtn = wrapper.findAll('button').some(b => b.text().includes('Deploy'))
expect(hasDeployBtn, 'Deploy should not be visible for review status').toBe(false)
})
it('Clicking Deploy calls api.deployProject with the task project_id', async () => {
vi.mocked(api.taskFull).mockResolvedValue(makeDeployTask('done', 'echo ok') as any)
vi.mocked(api.deployProject).mockResolvedValue({
success: true, exit_code: 0, stdout: 'ok\n', stderr: '', duration_seconds: 0.1,
} as any)
const router = makeRouter()
await router.push('/task/KIN-049')
const wrapper = mount(TaskDetail, {
props: { id: 'KIN-049' },
global: { plugins: [router] },
})
await flushPromises()
const deployBtn = wrapper.findAll('button').find(b => b.text().includes('Deploy'))
await deployBtn!.trigger('click')
await flushPromises()
expect(api.deployProject).toHaveBeenCalledWith('KIN')
})
})


@@ -0,0 +1,798 @@
/**
 * KIN-UI-001: Tests for the kanban view in ProjectView
 *
 * Verifies:
 * 1. The 'Kanban' tab is present in the navigation (5 tabs total)
 * 2. Switching to kanban shows all 5 columns
 * 3. Tasks are distributed across columns according to status
 * 4. Drag-and-drop calls api.patchTask with {status: newStatus}
 * 5. Polling starts when in_progress tasks exist on the kanban tab
 * 6. clearInterval is called when switching away from the tab and in onUnmounted
 * 7. Existing tabs keep working without regressions
 */
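Points 5-6 assume an interval-based poller inside ProjectView; a minimal sketch of that contract, with illustrative names rather than the component's actual code:

```typescript
// Illustrative poller assumed by points 5-6: started on the kanban tab when
// in_progress tasks exist, stopped on tab switch and in onUnmounted.
function createPoller(tick: () => void, intervalMs = 5000) {
  let timer: ReturnType<typeof setInterval> | null = null
  return {
    start() {
      if (timer === null) timer = setInterval(tick, intervalMs) // idempotent start
    },
    stop() {
      if (timer !== null) {
        clearInterval(timer) // what the tab-switch and onUnmounted tests verify
        timer = null
      }
    },
    get active() {
      return timer !== null
    },
  }
}
```

Making start idempotent and stop safe to call twice keeps tab switches and unmount hooks from leaking or double-registering intervals.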
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'
import { mount, flushPromises } from '@vue/test-utils'
import { createRouter, createMemoryHistory } from 'vue-router'
import ProjectView from '../views/ProjectView.vue'
// vi.mock is hoisted to the top of the file, so the factory is defined here
vi.mock('../api', () => ({
api: {
project: vi.fn(),
taskFull: vi.fn(),
runTask: vi.fn(),
auditProject: vi.fn(),
createTask: vi.fn(),
patchTask: vi.fn(),
patchProject: vi.fn(),
deployProject: vi.fn(),
getPhases: vi.fn(),
},
}))
import { api } from '../api'
const Stub = { template: '<div />' }
function makeTask(id: string, status: string, category: string | null = null) {
return {
id,
project_id: 'KIN',
title: `Task ${id}`,
status,
priority: 5,
assigned_role: null,
parent_task_id: null,
brief: null,
spec: null,
execution_mode: null,
blocked_reason: null,
dangerously_skipped: null,
category,
acceptance_criteria: null,
created_at: '2024-01-01',
updated_at: '2024-01-01',
}
}
// Project with tasks in all 5 kanban statuses
const MOCK_PROJECT = {
id: 'KIN',
name: 'Kin',
path: '/projects/kin',
status: 'active',
priority: 5,
tech_stack: ['python', 'vue'],
execution_mode: 'review',
autocommit_enabled: 0,
obsidian_vault_path: null,
deploy_command: null,
created_at: '2024-01-01',
total_tasks: 5,
done_tasks: 1,
active_tasks: 1,
blocked_tasks: 1,
review_tasks: 1,
project_type: 'development',
ssh_host: null,
ssh_user: null,
ssh_key_path: null,
ssh_proxy_jump: null,
description: null,
tasks: [
makeTask('KIN-001', 'pending'),
makeTask('KIN-002', 'in_progress', 'UI'),
makeTask('KIN-003', 'review'),
makeTask('KIN-004', 'blocked'),
makeTask('KIN-005', 'done'),
],
decisions: [],
modules: [],
}
// localStorage mock
const localStorageMock = (() => {
let store: Record<string, string> = {}
return {
getItem: (k: string) => store[k] ?? null,
setItem: (k: string, v: string) => { store[k] = v },
removeItem: (k: string) => { delete store[k] },
clear: () => { store = {} },
}
})()
Object.defineProperty(globalThis, 'localStorage', { value: localStorageMock, configurable: true })
function makeRouter() {
return createRouter({
history: createMemoryHistory(),
routes: [
{ path: '/', component: Stub },
{ path: '/project/:id', component: ProjectView, props: true },
],
})
}
beforeEach(() => {
localStorageMock.clear()
vi.clearAllMocks()
vi.mocked(api.project).mockResolvedValue(JSON.parse(JSON.stringify(MOCK_PROJECT)) as any)
vi.mocked(api.getPhases).mockResolvedValue([])
vi.mocked(api.patchTask).mockResolvedValue(makeTask('KIN-001', 'in_progress') as any)
})
afterEach(() => {
vi.restoreAllMocks()
vi.useRealTimers()
})
// ─────────────────────────────────────────────────────────────
// Helper: mounts ProjectView and switches to the Kanban tab
// ─────────────────────────────────────────────────────────────
async function mountOnKanban() {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const kanbanTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Kanban')
)!
await kanbanTab.trigger('click')
await flushPromises()
return wrapper
}
// ─────────────────────────────────────────────────────────────
// 1. Kanban tab in the navigation
// ─────────────────────────────────────────────────────────────
describe('KIN-UI-001: kanban — tab in the navigation', () => {
it('The "Kanban" tab is present in the tab row', async () => {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const tabButtons = wrapper.findAll('button').filter(b => b.classes().includes('border-b-2'))
const kanbanTab = tabButtons.find(b => b.text().includes('Kanban'))
expect(kanbanTab?.exists(), 'Kanban tab should be in the navigation').toBe(true)
})
it('All 5 tabs are present: Tasks, Phases, Decisions, Modules, Kanban', async () => {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const tabTexts = wrapper
.findAll('button')
.filter(b => b.classes().includes('border-b-2'))
.map(b => b.text().toLowerCase())
for (const expected of ['tasks', 'phases', 'decisions', 'modules', 'kanban']) {
expect(tabTexts.some(t => t.includes(expected)), `The "${expected}" tab should be present`).toBe(true)
}
})
it('Kanban tab shows a task counter', async () => {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const kanbanTab = wrapper
.findAll('button')
.find(b => b.classes().includes('border-b-2') && b.text().includes('Kanban'))!
// MOCK_PROJECT.tasks.length === 5
expect(kanbanTab.text()).toContain('5')
})
})
// ─────────────────────────────────────────────────────────────
// 2-3. Switching and the 5 columns
// ─────────────────────────────────────────────────────────────
describe('KIN-UI-001: kanban — 5 columns', () => {
it('After switching to kanban, the headers of all 5 columns are shown', async () => {
const wrapper = await mountOnKanban()
const text = wrapper.text()
for (const label of ['Pending', 'In Progress', 'Review', 'Blocked', 'Done']) {
expect(text, `The "${label}" column should be visible`).toContain(label)
}
})
it('Each of the 5 tasks appears in exactly one column', async () => {
const wrapper = await mountOnKanban()
for (const task of MOCK_PROJECT.tasks) {
const links = wrapper.findAll(`a[href="/task/${task.id}"]`)
expect(links, `Task ${task.id} should appear exactly once`).toHaveLength(1)
}
})
it('Task KIN-001 (pending) is in the Pending column', async () => {
const wrapper = await mountOnKanban()
// The first column is Pending
const dropZones = wrapper.findAll('[class*="min-h-24"]')
expect(dropZones[0].find('a[href="/task/KIN-001"]').exists()).toBe(true)
})
it('Task KIN-002 (in_progress) is in the In Progress column', async () => {
const wrapper = await mountOnKanban()
const dropZones = wrapper.findAll('[class*="min-h-24"]')
expect(dropZones[1].find('a[href="/task/KIN-002"]').exists()).toBe(true)
})
it('Task KIN-003 (review) is in the Review column', async () => {
const wrapper = await mountOnKanban()
const dropZones = wrapper.findAll('[class*="min-h-24"]')
expect(dropZones[2].find('a[href="/task/KIN-003"]').exists()).toBe(true)
})
it('Task KIN-004 (blocked) is in the Blocked column', async () => {
const wrapper = await mountOnKanban()
const dropZones = wrapper.findAll('[class*="min-h-24"]')
expect(dropZones[3].find('a[href="/task/KIN-004"]').exists()).toBe(true)
})
it('Task KIN-005 (done) is in the Done column', async () => {
const wrapper = await mountOnKanban()
const dropZones = wrapper.findAll('[class*="min-h-24"]')
expect(dropZones[4].find('a[href="/task/KIN-005"]').exists()).toBe(true)
})
it('Tasks with unrecognized statuses (decomposed, cancelled) do not appear in kanban columns', async () => {
const projectWithExtra = {
...MOCK_PROJECT,
tasks: [
...MOCK_PROJECT.tasks,
makeTask('KIN-010', 'decomposed'),
makeTask('KIN-011', 'cancelled'),
],
}
vi.mocked(api.project).mockResolvedValue(projectWithExtra as any)
const wrapper = await mountOnKanban()
const dropZones = wrapper.findAll('[class*="min-h-24"]')
// 5 drop zones (5 columns); decomposed and cancelled must not be in any of them
for (const zone of dropZones) {
expect(zone.find('a[href="/task/KIN-010"]').exists()).toBe(false)
expect(zone.find('a[href="/task/KIN-011"]').exists()).toBe(false)
}
})
})
// ─────────────────────────────────────────────────────────────
// 4. Status change via drag-and-drop
// ─────────────────────────────────────────────────────────────
describe('KIN-UI-001: kanban — status change via drag-and-drop', () => {
it('Drag-and-drop calls api.patchTask with {status: new_status}', async () => {
vi.mocked(api.patchTask).mockResolvedValue(makeTask('KIN-001', 'in_progress') as any)
const wrapper = await mountOnKanban()
// Find the KIN-001 card in the pending column and start dragging
const taskCard = wrapper.find('a[href="/task/KIN-001"]')
expect(taskCard.exists(), 'KIN-001 card should be in the DOM').toBe(true)
await taskCard.trigger('dragstart')
// Drop into the in_progress column (index 1)
const dropZones = wrapper.findAll('[class*="min-h-24"]')
await dropZones[1].trigger('drop')
await flushPromises()
expect(vi.mocked(api.patchTask)).toHaveBeenCalledWith('KIN-001', { status: 'in_progress' })
})
it('Dropping into the same column does not call patchTask', async () => {
const wrapper = await mountOnKanban()
// KIN-001 is already in pending (index 0); drop it back into pending
const taskCard = wrapper.find('a[href="/task/KIN-001"]')
await taskCard.trigger('dragstart')
const dropZones = wrapper.findAll('[class*="min-h-24"]')
await dropZones[0].trigger('drop') // same status = pending
await flushPromises()
expect(vi.mocked(api.patchTask)).not.toHaveBeenCalled()
})
it('After a successful drop the task moves to the new column (optimistic update)', async () => {
const updatedTask = makeTask('KIN-001', 'review')
vi.mocked(api.patchTask).mockResolvedValue(updatedTask as any)
const wrapper = await mountOnKanban()
const taskCard = wrapper.find('a[href="/task/KIN-001"]')
await taskCard.trigger('dragstart')
const dropZones = wrapper.findAll('[class*="min-h-24"]')
await dropZones[2].trigger('drop') // review = index 2
await flushPromises()
// KIN-001 should now be in the review column (index 2)
expect(dropZones[2].find('a[href="/task/KIN-001"]').exists()).toBe(true)
})
})
// ─────────────────────────────────────────────────────────────
// 5-6. Polling and clearInterval
// ─────────────────────────────────────────────────────────────
describe('KIN-UI-001: kanban — polling', () => {
it('5. Polling starts on the kanban tab when in_progress tasks exist (api.project is called again after 5s)', async () => {
vi.useFakeTimers()
vi.mocked(api.project).mockResolvedValue(MOCK_PROJECT as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const callsOnMount = vi.mocked(api.project).mock.calls.length
// Switch to kanban — KIN-002 is in_progress → starts setInterval
const kanbanTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Kanban')
)!
await kanbanTab.trigger('click')
await flushPromises()
// Advance time by 5s → the polling interval fires
await vi.advanceTimersByTimeAsync(5000)
await flushPromises()
expect(vi.mocked(api.project).mock.calls.length, 'api.project should be called again').toBeGreaterThan(callsOnMount)
})
it('Polling does not start on the kanban tab when there are no in_progress tasks', async () => {
vi.useFakeTimers()
const projectNoPending = {
...MOCK_PROJECT,
tasks: MOCK_PROJECT.tasks.filter(t => t.status !== 'in_progress'),
}
vi.mocked(api.project).mockResolvedValue(projectNoPending as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const callsOnMount = vi.mocked(api.project).mock.calls.length
const kanbanTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Kanban')
)!
await kanbanTab.trigger('click')
await flushPromises()
await vi.advanceTimersByTimeAsync(5000)
await flushPromises()
expect(vi.mocked(api.project).mock.calls.length, 'api.project should not be called without in_progress tasks').toBe(callsOnMount)
})
it('6. Polling stops when switching from the kanban tab to another tab', async () => {
vi.useFakeTimers()
vi.mocked(api.project).mockResolvedValue(MOCK_PROJECT as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
// Start polling
const kanbanTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Kanban')
)!
await kanbanTab.trigger('click')
await flushPromises()
// First polling tick
await vi.advanceTimersByTimeAsync(5000)
await flushPromises()
const callsWhilePolling = vi.mocked(api.project).mock.calls.length
// Switch to Tasks → clearInterval should be called
const tasksTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Tasks')
)!
await tasksTab.trigger('click')
await flushPromises()
// Another 5s — polling is stopped, there should be no new calls
await vi.advanceTimersByTimeAsync(5000)
await flushPromises()
expect(vi.mocked(api.project).mock.calls.length, 'Polling should stop after switching tabs').toBe(callsWhilePolling)
})
it('6b. clearInterval is called in onUnmounted — polling does not continue after unmount', async () => {
vi.useFakeTimers()
vi.mocked(api.project).mockResolvedValue(MOCK_PROJECT as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
// Start polling
const kanbanTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Kanban')
)!
await kanbanTab.trigger('click')
await flushPromises()
await vi.advanceTimersByTimeAsync(5000)
await flushPromises()
const callsBeforeUnmount = vi.mocked(api.project).mock.calls.length
// Unmount the component — it should call clearInterval
wrapper.unmount()
// Another 5s — polling should be stopped
await vi.advanceTimersByTimeAsync(5000)
expect(vi.mocked(api.project).mock.calls.length, 'Polling should stop after unmount').toBe(callsBeforeUnmount)
})
})
// ─────────────────────────────────────────────────────────────
// 7. Regressions: other tabs still work
// ─────────────────────────────────────────────────────────────
describe('KIN-UI-001: regressions — other tabs are not broken', () => {
it('The default Tasks tab shows the task list', async () => {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
// There should be links to tasks
const taskLinks = wrapper.findAll('a[href^="/task/"]')
expect(taskLinks.length).toBeGreaterThan(0)
})
it('Switching tasks→kanban→tasks does not lose the task list', async () => {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const taskLinksInitial = wrapper.findAll('a[href^="/task/"]').length
// Switch to kanban
const kanbanTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Kanban')
)!
await kanbanTab.trigger('click')
await flushPromises()
// Switch back to tasks
const tasksTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Tasks')
)!
await tasksTab.trigger('click')
await flushPromises()
const taskLinksAfter = wrapper.findAll('a[href^="/task/"]').length
expect(taskLinksAfter).toBe(taskLinksInitial)
})
it('Decisions tab switches and renders without errors', async () => {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const decisionsTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().toLowerCase().includes('decisions')
)!
await decisionsTab.trigger('click')
await flushPromises()
expect(wrapper.text()).toContain('No decisions')
})
it('Modules tab switches and renders without errors', async () => {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const modulesTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().toLowerCase().includes('modules')
)!
await modulesTab.trigger('click')
await flushPromises()
expect(wrapper.text()).toContain('No modules')
})
})
// ─────────────────────────────────────────────────────────────
// KIN-075: control buttons in the kanban view
// ─────────────────────────────────────────────────────────────
describe('KIN-075: kanban — control buttons', () => {
it('Mode toggle button (Auto/Review) is present in the kanban view', async () => {
const wrapper = await mountOnKanban()
// MOCK_PROJECT.execution_mode = 'review' → autoMode=false → title = 'Review mode: agents read-only'
const modeBtn = wrapper.find('button[title*="mode:"]')
expect(modeBtn.exists()).toBe(true)
})
it('The "Автокомит" (autocommit) button is present in the kanban view', async () => {
const wrapper = await mountOnKanban()
// MOCK_PROJECT.autocommit_enabled = 0 → autocommit=false → title = 'Autocommit: off'
const btn = wrapper.find('button[title*="Autocommit:"]')
expect(btn.exists()).toBe(true)
expect(btn.text()).toMatch(/Автокомит/)
})
it('The "Аудит" (audit) button is present in the kanban view', async () => {
const wrapper = await mountOnKanban()
const btn = wrapper.find('button[title="Check which pending tasks are already done"]')
expect(btn.exists()).toBe(true)
expect(btn.text()).toContain('Аудит')
})
it('The "+ Тас" button is present in the kanban view', async () => {
const wrapper = await mountOnKanban()
const tasBtn = wrapper.findAll('button').find(b => b.text() === '+ Тас')
expect(tasBtn?.exists()).toBe(true)
})
it('Clicking the mode button calls api.patchProject with execution_mode', async () => {
vi.mocked(api.patchProject).mockResolvedValue(undefined as any)
const wrapper = await mountOnKanban()
const modeBtn = wrapper.find('button[title*="mode:"]')
await modeBtn.trigger('click')
await flushPromises()
expect(vi.mocked(api.patchProject)).toHaveBeenCalledWith(
'KIN',
expect.objectContaining({ execution_mode: expect.any(String) }),
)
})
it('Clicking the "Автокомит" button calls api.patchProject with autocommit_enabled', async () => {
vi.mocked(api.patchProject).mockResolvedValue(undefined as any)
const wrapper = await mountOnKanban()
const btn = wrapper.find('button[title*="Autocommit:"]')
await btn.trigger('click')
await flushPromises()
expect(vi.mocked(api.patchProject)).toHaveBeenCalledWith(
'KIN',
expect.objectContaining({ autocommit_enabled: expect.anything() }),
)
})
it('Clicking the "Аудит" button calls api.auditProject', async () => {
vi.mocked(api.auditProject).mockResolvedValue({
success: true, already_done: [], still_pending: [], unclear: [],
} as any)
const wrapper = await mountOnKanban()
const auditBtn = wrapper.find('button[title="Check which pending tasks are already done"]')
await auditBtn.trigger('click')
await flushPromises()
expect(vi.mocked(api.auditProject)).toHaveBeenCalledWith('KIN')
})
it('Clicking "+ Тас" opens the Add Task modal', async () => {
const wrapper = await mountOnKanban()
const tasBtn = wrapper.findAll('button').find(b => b.text() === '+ Тас')!
await tasBtn.trigger('click')
await flushPromises()
expect(wrapper.text()).toContain('Add Task')
})
})
// ─────────────────────────────────────────────────────────────
// KIN-078: full-width kanban board (no max-w constraints)
// ─────────────────────────────────────────────────────────────
describe('KIN-078: kanban — flex layout without width constraints', () => {
it('Flex-контейнер колонок имеет класс w-full', async () => {
const wrapper = await mountOnKanban()
const flexContainer = wrapper.find('.flex.gap-3.w-full')
expect(flexContainer.exists(), 'flex gap-3 w-full контейнер должен существовать').toBe(true)
expect(flexContainer.classes()).toContain('w-full')
})
it('Flex-контейнер колонок не содержит inline style min-width: max-content', async () => {
const wrapper = await mountOnKanban()
const flexContainer = wrapper.find('.flex.gap-3.w-full')
const style = flexContainer.element.getAttribute('style')
expect(style ?? '').not.toContain('min-width')
})
it('Каждая из 5 колонок имеет flex-1 (растягивается), а не фиксированный w-64', async () => {
const wrapper = await mountOnKanban()
// KANBAN_COLUMNS has 5 columns; all of them must have flex-1
const allFlex1 = wrapper.findAll('div').filter(d => d.classes().includes('flex-1') && d.classes().includes('flex-col'))
expect(allFlex1.length, '5 колонок с flex-1 flex-col должны быть').toBe(5)
for (const col of allFlex1) {
expect(col.classes(), 'Колонка не должна иметь фиксированный w-64').not.toContain('w-64')
}
})
it('Каждая из 5 колонок имеет min-w-[12rem] (минимальная ширина)', async () => {
const wrapper = await mountOnKanban()
const columns = wrapper.findAll('div').filter(d =>
d.classes().includes('flex-1') && d.classes().includes('flex-col')
)
expect(columns.length).toBe(5)
for (const col of columns) {
expect(col.classes(), 'Колонка должна иметь min-w-[12rem]').toContain('min-w-[12rem]')
}
})
})
// ─────────────────────────────────────────────────────────────
// KIN-078: runTask → start polling on the kanban tab
// ─────────────────────────────────────────────────────────────
describe('KIN-078: runTask → polling на kanban-вкладке', () => {
it('runTask запускает polling если пользователь переключился на kanban пока задача выполнялась', async () => {
vi.useFakeTimers()
// Initially there are no in_progress tasks, so switching to kanban does not start polling
const projectNoPending = {
...MOCK_PROJECT,
tasks: MOCK_PROJECT.tasks.filter(t => t.status !== 'in_progress'),
}
vi.mocked(api.project).mockResolvedValue(projectNoPending as any)
// Defer api.runTask to simulate a delay
let resolveRun!: () => void
vi.mocked(api.runTask).mockReturnValue(new Promise<void>(res => { resolveRun = () => res() }) as any)
vi.spyOn(window, 'confirm').mockReturnValue(true)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
// Click ▶ for KIN-001 (pending) on the Tasks tab; runTask hangs on await api.runTask
const runBtn = wrapper.find('button[title="Run pipeline"]')
expect(runBtn.exists(), '▶ кнопка должна быть на Tasks вкладке').toBe(true)
await runBtn.trigger('click')
// While runTask is pending, switch to kanban (no in_progress tasks, so polling does not start)
const kanbanTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Kanban')
)!
await kanbanTab.trigger('click')
await flushPromises()
// Verify that polling has not started (no in_progress tasks)
await vi.advanceTimersByTimeAsync(5000)
await flushPromises()
const callsBefore = vi.mocked(api.project).mock.calls.length
// Set up the next load() to return a project with in_progress tasks
vi.mocked(api.project).mockResolvedValue(MOCK_PROJECT as any)
// Resolve api.runTask; runTask continues: load() → checkAndPollKanban()
resolveRun()
await flushPromises()
const callsAfterLoad = vi.mocked(api.project).mock.calls.length
expect(callsAfterLoad, 'load() должен вызвать api.project').toBeGreaterThan(callsBefore)
// Advance time by 5s; a polling tick should fire
await vi.advanceTimersByTimeAsync(5000)
await flushPromises()
expect(
vi.mocked(api.project).mock.calls.length,
'Polling должен запуститься после runTask когда activeTab === kanban',
).toBeGreaterThan(callsAfterLoad)
})
it('runTask не запускает polling если activeTab !== kanban в момент завершения', async () => {
vi.useFakeTimers()
vi.mocked(api.runTask).mockResolvedValue(undefined as any)
vi.spyOn(window, 'confirm').mockReturnValue(true)
// No in_progress tasks, so neither the watcher nor runTask starts polling
const projectNoPending = {
...MOCK_PROJECT,
tasks: MOCK_PROJECT.tasks.filter(t => t.status !== 'in_progress'),
}
vi.mocked(api.project).mockResolvedValue(projectNoPending as any)
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
// Stay on the Tasks tab (activeTab === 'tasks') and click ▶
const runBtn = wrapper.find('button[title="Run pipeline"]')
await runBtn.trigger('click')
await flushPromises()
const callsAfterRun = vi.mocked(api.project).mock.calls.length
// Advance time; polling must not start (we are on tasks, no in_progress)
await vi.advanceTimersByTimeAsync(5000)
await flushPromises()
expect(
vi.mocked(api.project).mock.calls.length,
'Polling не должен запуститься когда activeTab !== kanban',
).toBe(callsAfterRun)
})
})
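The runTask polling tests above capture a promise's resolver (`resolveRun`) so the test controls exactly when `api.runTask` settles. A minimal standalone sketch of that deferred-resolve pattern (the `deferred` helper name is ours, not from the codebase):

```typescript
// Sketch: a manually-settled promise for sequencing async test steps,
// mirroring the resolveRun pattern used in the runTask polling test.
function deferred<T = void>() {
  let resolve!: (v: T) => void
  const promise = new Promise<T>(res => { resolve = res })
  return { promise, resolve }
}

const run = deferred<void>()
let settled = false
run.promise.then(() => { settled = true })

console.log(settled)   // still false: nothing has called run.resolve() yet
run.resolve()          // settle it, as resolveRun() does in the test
```

Because the resolver is hoisted out of the executor, the test can perform intermediate assertions (tab switches, timer advances) while the mocked call is still pending, then release it deterministically.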


@@ -0,0 +1,288 @@
/**
 * KIN-076: tests for the task search field in ProjectView
 *
 * Verifies:
 * 1. Search by a word from a title → the matching task is visible, the rest are hidden
 * 2. Search for a nonexistent word → the task list is empty
 * 3. Clearing the search field → all tasks are visible again
 * 4. Search is case-insensitive
 * 5. Search matches brief content
 * 6. The ✕ button clears the search
 * 7. Changing props.id resets the search query
 * 8. The search field is present on the Kanban tab
 * 9. Search on Kanban filters tasks within columns
 */
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'
import { mount, flushPromises } from '@vue/test-utils'
import { createRouter, createMemoryHistory } from 'vue-router'
import ProjectView from '../views/ProjectView.vue'
vi.mock('../api', () => ({
api: {
project: vi.fn(),
taskFull: vi.fn(),
runTask: vi.fn(),
auditProject: vi.fn(),
createTask: vi.fn(),
patchTask: vi.fn(),
patchProject: vi.fn(),
deployProject: vi.fn(),
getPhases: vi.fn(),
},
}))
import { api } from '../api'
const Stub = { template: '<div />' }
function makeTask(id: string, title: string, brief: unknown = null) {
return {
id,
project_id: 'KIN',
title,
status: 'pending',
priority: 5,
assigned_role: null,
parent_task_id: null,
brief,
spec: null,
execution_mode: null,
blocked_reason: null,
dangerously_skipped: null,
category: null,
acceptance_criteria: null,
created_at: '2024-01-01',
updated_at: '2024-01-01',
}
}
const MOCK_PROJECT = {
id: 'KIN',
name: 'Kin',
path: '/projects/kin',
status: 'active',
priority: 5,
tech_stack: ['python', 'vue'],
execution_mode: 'review',
autocommit_enabled: 0,
obsidian_vault_path: null,
deploy_command: null,
created_at: '2024-01-01',
total_tasks: 3,
done_tasks: 0,
active_tasks: 1,
blocked_tasks: 0,
review_tasks: 0,
project_type: 'development',
ssh_host: null,
ssh_user: null,
ssh_key_path: null,
ssh_proxy_jump: null,
description: null,
tasks: [
makeTask('KIN-001', 'Реализовать поле поиска'),
makeTask('KIN-002', 'Добавить аутентификацию'),
makeTask('KIN-003', 'Исправить баг в отчётах', { text: 'текст с ключевым словом oauth' }),
],
decisions: [],
modules: [],
}
// localStorage mock
const localStorageMock = (() => {
let store: Record<string, string> = {}
return {
getItem: (k: string) => store[k] ?? null,
setItem: (k: string, v: string) => { store[k] = v },
removeItem: (k: string) => { delete store[k] },
clear: () => { store = {} },
}
})()
Object.defineProperty(globalThis, 'localStorage', { value: localStorageMock, configurable: true })
function makeRouter() {
return createRouter({
history: createMemoryHistory(),
routes: [
{ path: '/', component: Stub },
{ path: '/project/:id', component: ProjectView, props: true },
],
})
}
// Convention #162: beforeEach with vi.clearAllMocks() + deep-cloned mock objects
beforeEach(() => {
localStorageMock.clear()
vi.clearAllMocks()
vi.mocked(api.project).mockResolvedValue(JSON.parse(JSON.stringify(MOCK_PROJECT)) as any)
vi.mocked(api.getPhases).mockResolvedValue([])
})
afterEach(() => {
vi.restoreAllMocks()
})
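Convention #162's `JSON.parse(JSON.stringify(...))` in `beforeEach` matters because tests mutate nested fixture objects; a standalone sketch of why a deep clone isolates each test while a spread copy would not (the `FIXTURE` object here is illustrative, not the real `MOCK_PROJECT`):

```typescript
// Sketch: why Convention #162 deep-clones the shared fixture per test.
const FIXTURE = { tasks: [{ id: 'KIN-001', status: 'pending' }] }

const clone = JSON.parse(JSON.stringify(FIXTURE)) as typeof FIXTURE
clone.tasks[0].status = 'done'            // mutates the clone only
console.log(FIXTURE.tasks[0].status)      // → pending (fixture untouched)

const shallow = { ...FIXTURE }            // spread copies only the top level
shallow.tasks[0].status = 'done'
console.log(FIXTURE.tasks[0].status)      // → done (nested array is shared)
```

Without the deep clone, a test that flips a task's status would leak that mutation into every later test reading the same fixture.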
async function mountOnTasks() {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
return wrapper
}
// ─────────────────────────────────────────────────────────────
// Tasks tab: acceptance criteria (mandatory KIN-076 tests)
// ─────────────────────────────────────────────────────────────
describe('KIN-076: поиск задач — Tasks вкладка (acceptance criteria)', () => {
it('1. Поиск по слову из title — задача видна, остальные скрыты', async () => {
const wrapper = await mountOnTasks()
const searchInput = wrapper.find('input[placeholder="Поиск по задачам..."]')
expect(searchInput.exists(), 'Поле поиска должно быть в DOM').toBe(true)
await searchInput.setValue('поиска')
await flushPromises()
const hrefs = wrapper.findAll('a[href^="/task/"]').map(l => l.attributes('href'))
expect(hrefs, 'KIN-001 должен быть виден').toContain('/task/KIN-001')
expect(hrefs, 'KIN-002 не должен быть виден').not.toContain('/task/KIN-002')
expect(hrefs, 'KIN-003 не должен быть виден').not.toContain('/task/KIN-003')
})
it('2. Поиск по несуществующему слову — список задач пустой', async () => {
const wrapper = await mountOnTasks()
const searchInput = wrapper.find('input[placeholder="Поиск по задачам..."]')
await searchInput.setValue('несуществующеесловоxyz123')
await flushPromises()
const links = wrapper.findAll('a[href^="/task/"]')
expect(links, 'При несуществующем слове список задач должен быть пустым').toHaveLength(0)
})
it('3. Очистка поля поиска — все задачи снова видны', async () => {
const wrapper = await mountOnTasks()
const searchInput = wrapper.find('input[placeholder="Поиск по задачам..."]')
// Filter the tasks via search
await searchInput.setValue('аутентификацию')
await flushPromises()
expect(wrapper.findAll('a[href^="/task/"]'), 'Фильтрация должна работать').toHaveLength(1)
// Clear the field to bring back all tasks
await searchInput.setValue('')
await flushPromises()
expect(wrapper.findAll('a[href^="/task/"]'), 'После очистки должны быть все 3 задачи').toHaveLength(3)
})
})
// ─────────────────────────────────────────────────────────────
// Tasks tab: additional scenarios
// ─────────────────────────────────────────────────────────────
describe('KIN-076: поиск задач — дополнительные сценарии', () => {
it('4. Поиск регистронезависим — заглавные буквы находят строчные', async () => {
const wrapper = await mountOnTasks()
const searchInput = wrapper.find('input[placeholder="Поиск по задачам..."]')
await searchInput.setValue('ПОИСКА')
await flushPromises()
const hrefs = wrapper.findAll('a[href^="/task/"]').map(l => l.attributes('href'))
expect(hrefs, 'KIN-001 должен находиться независимо от регистра').toContain('/task/KIN-001')
})
it('5. Поиск по содержимому brief — задача с совпадением в brief видна', async () => {
const wrapper = await mountOnTasks()
const searchInput = wrapper.find('input[placeholder="Поиск по задачам..."]')
await searchInput.setValue('oauth')
await flushPromises()
const hrefs = wrapper.findAll('a[href^="/task/"]').map(l => l.attributes('href'))
expect(hrefs, 'KIN-003 должна быть найдена по brief').toContain('/task/KIN-003')
expect(hrefs, 'KIN-001 не совпадает — не должен быть виден').not.toContain('/task/KIN-001')
expect(hrefs, 'KIN-002 не совпадает — не должен быть виден').not.toContain('/task/KIN-002')
})
it('6. Кнопка ✕ очищает поиск и показывает все задачи', async () => {
const wrapper = await mountOnTasks()
const searchInput = wrapper.find('input[placeholder="Поиск по задачам..."]')
await searchInput.setValue('поиска')
await flushPromises()
// The ✕ button appears when the search query is non-empty
const clearBtn = wrapper.findAll('button').find(b => b.text().trim() === '✕')
expect(clearBtn?.exists(), 'Кнопка ✕ должна быть видна при непустом поиске').toBe(true)
await clearBtn!.trigger('click')
await flushPromises()
expect(wrapper.findAll('a[href^="/task/"]'), 'После ✕ — все задачи снова видны').toHaveLength(3)
})
it('7. При смене props.id поисковый запрос сбрасывается', async () => {
const router = makeRouter()
await router.push('/project/KIN')
const wrapper = mount(ProjectView, {
props: { id: 'KIN' },
global: { plugins: [router] },
})
await flushPromises()
const searchInput = wrapper.find('input[placeholder="Поиск по задачам..."]')
await searchInput.setValue('поиска')
await flushPromises()
expect((searchInput.element as HTMLInputElement).value).toBe('поиска')
// Change the project id; the watch resets taskSearch
await wrapper.setProps({ id: 'OTHER' })
await flushPromises()
expect(
(searchInput.element as HTMLInputElement).value,
'Поиск должен сброситься при смене проекта',
).toBe('')
})
})
// ─────────────────────────────────────────────────────────────
// Kanban tab: search
// ─────────────────────────────────────────────────────────────
describe('KIN-076: поиск задач — Kanban вкладка', () => {
async function mountOnKanban() {
const wrapper = await mountOnTasks()
const kanbanTab = wrapper.findAll('button').find(b =>
b.classes().includes('border-b-2') && b.text().includes('Kanban')
)!
await kanbanTab.trigger('click')
await flushPromises()
return wrapper
}
it('8. Поле поиска присутствует на Kanban вкладке', async () => {
const wrapper = await mountOnKanban()
const searchInput = wrapper.find('input[placeholder="Поиск..."]')
expect(searchInput.exists(), 'Поле поиска должно быть на канбан-вкладке').toBe(true)
})
it('9. Поиск на Kanban фильтрует задачи в колонках', async () => {
const wrapper = await mountOnKanban()
const searchInput = wrapper.find('input[placeholder="Поиск..."]')
await searchInput.setValue('поиска')
await flushPromises()
expect(wrapper.find('a[href="/task/KIN-001"]').exists(), 'KIN-001 должен быть виден').toBe(true)
expect(wrapper.find('a[href="/task/KIN-002"]').exists(), 'KIN-002 не должен быть виден').toBe(false)
expect(wrapper.find('a[href="/task/KIN-003"]').exists(), 'KIN-003 не должен быть виден').toBe(false)
})
})


@@ -1,8 +1,28 @@
const BASE = '/api'
export class ApiError extends Error {
code: string
constructor(code: string, message: string) {
super(message)
this.name = 'ApiError'
this.code = code
}
}
async function throwApiError(res: Response): Promise<never> {
let code = ''
let msg = `${res.status} ${res.statusText}`
try {
const data = await res.json()
if (data.error) code = data.error
if (data.message) msg = data.message
} catch {}
throw new ApiError(code, msg)
}
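The `ApiError`/`throwApiError` pair added above lets call sites branch on the backend's machine-readable error code instead of parsing status text; a minimal sketch of such a handler (the `task_not_found` code and `describeError` helper are hypothetical illustrations, not part of the codebase):

```typescript
// Sketch: distinguishing a structured ApiError from other failures at a call site.
// ApiError mirrors the class added in api.ts; 'task_not_found' is a made-up code.
class ApiError extends Error {
  code: string
  constructor(code: string, message: string) {
    super(message)
    this.name = 'ApiError'
    this.code = code
  }
}

function describeError(e: unknown): string {
  if (e instanceof ApiError) {
    // Prefer the backend error code; fall back to the HTTP status text in message.
    return e.code ? `api error: ${e.code}` : `http error: ${e.message}`
  }
  return e instanceof Error ? `unexpected: ${e.message}` : 'unknown error'
}

console.log(describeError(new ApiError('task_not_found', 'Task KIN-404 not found')))
console.log(describeError(new TypeError('fetch failed')))
```

Note that `throwApiError` still falls back to `${res.status} ${res.statusText}` when the body is not JSON, so the `e.code` branch must tolerate an empty code.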
async function get<T>(path: string): Promise<T> {
const res = await fetch(`${BASE}${path}`)
if (!res.ok) await throwApiError(res)
return res.json()
}
@@ -12,7 +32,7 @@ async function patch<T>(path: string, body: unknown): Promise<T> {
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(body),
})
if (!res.ok) await throwApiError(res)
return res.json()
}
@@ -22,13 +42,20 @@ async function post<T>(path: string, body: unknown): Promise<T> {
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(body),
})
if (!res.ok) await throwApiError(res)
return res.json()
}
async function del<T>(path: string): Promise<T> {
const res = await fetch(`${BASE}${path}`, { method: 'DELETE' })
if (!res.ok) await throwApiError(res)
if (res.status === 204) return undefined as T
return res.json()
}
async function postForm<T>(path: string, body: FormData): Promise<T> {
const res = await fetch(`${BASE}${path}`, { method: 'POST', body })
if (!res.ok) await throwApiError(res)
return res.json()
}
@@ -40,12 +67,28 @@ export interface Project {
priority: number
tech_stack: string[] | null
execution_mode: string | null
autocommit_enabled: number | null
obsidian_vault_path: string | null
deploy_command: string | null
created_at: string
total_tasks: number
done_tasks: number
active_tasks: number
blocked_tasks: number
review_tasks: number
project_type: string | null
ssh_host: string | null
ssh_user: string | null
ssh_key_path: string | null
ssh_proxy_jump: string | null
description: string | null
}
export interface ObsidianSyncResult {
exported_decisions: number
tasks_updated: number
errors: string[]
vault_path: string
}
export interface ProjectDetail extends Project {
@@ -66,6 +109,10 @@ export interface Task {
spec: Record<string, unknown> | null
execution_mode: string | null
blocked_reason: string | null
dangerously_skipped: number | null
category: string | null
acceptance_criteria: string | null
feedback?: string | null
created_at: string
updated_at: string
}
@@ -106,9 +153,18 @@ export interface PipelineStep {
created_at: string
}
export interface DeployResult {
success: boolean
exit_code: number
stdout: string
stderr: string
duration_seconds: number
}
export interface TaskFull extends Task {
pipeline_steps: PipelineStep[]
related_decisions: Decision[]
project_deploy_command: string | null
}
export interface PendingAction {
@@ -127,6 +183,41 @@ export interface CostEntry {
total_duration_seconds: number
}
export interface Phase {
id: number
project_id: string
role: string
phase_order: number
status: string
task_id: string | null
revise_count: number
revise_comment: string | null
created_at: string
updated_at: string
task?: Task | null
}
export interface NewProjectPayload {
id: string
name: string
path: string
description: string
roles: string[]
tech_stack?: string[]
priority?: number
language?: string
project_type?: string
ssh_host?: string
ssh_user?: string
ssh_key_path?: string
ssh_proxy_jump?: string
}
export interface NewProjectResult {
project: Project
phases: Phase[]
}
export interface AuditItem {
id: string
reason: string
@@ -142,6 +233,59 @@ export interface AuditResult {
error?: string
}
export interface ProjectEnvironment {
id: number
project_id: string
name: string
host: string
port: number
username: string
auth_type: string
is_installed: number
created_at: string
updated_at: string
}
export interface EscalationNotification {
task_id: string
project_id: string
agent_role: string
reason: string
pipeline_step: string | null
blocked_at: string
telegram_sent: boolean
}
export interface ChatMessage {
id: number
project_id: string
role: 'user' | 'assistant'
content: string
message_type: string
task_id: string | null
created_at: string
task_stub?: {
id: string
title: string
status: string
} | null
}
export interface ChatSendResult {
user_message: ChatMessage
assistant_message: ChatMessage
task?: Task | null
}
export interface Attachment {
id: number
task_id: string
filename: string
mime_type: string
size: number
created_at: string
}
export const api = {
projects: () => get<Project[]>('/projects'),
project: (id: string) => get<ProjectDetail>(`/projects/${id}`),
@@ -149,9 +293,9 @@ export const api = {
taskFull: (id: string) => get<TaskFull>(`/tasks/${id}/full`),
taskPipeline: (id: string) => get<PipelineStep[]>(`/tasks/${id}/pipeline`),
cost: (days = 7) => get<CostEntry[]>(`/cost?days=${days}`),
createProject: (data: { id: string; name: string; path?: string; tech_stack?: string[]; priority?: number; project_type?: string; ssh_host?: string; ssh_user?: string; ssh_key_path?: string; ssh_proxy_jump?: string }) =>
post<Project>('/projects', data),
createTask: (data: { project_id: string; title: string; priority?: number; route_type?: string; category?: string; acceptance_criteria?: string }) =>
post<Task>('/tasks', data),
approveTask: (id: string, data?: { decision_title?: string; decision_description?: string; decision_type?: string; create_followups?: boolean }) =>
post<{ status: string; followup_tasks: Task[]; needs_decision: boolean; pending_actions: PendingAction[] }>(`/tasks/${id}/approve`, data || {}),
@@ -159,6 +303,8 @@ export const api = {
post<{ choice: string; result: unknown }>(`/tasks/${id}/resolve`, { action, choice }),
rejectTask: (id: string, reason: string) =>
post<{ status: string }>(`/tasks/${id}/reject`, { reason }),
reviseTask: (id: string, comment: string) =>
post<{ status: string; comment: string }>(`/tasks/${id}/revise`, { comment }),
runTask: (id: string) =>
post<{ status: string }>(`/tasks/${id}/run`, {}),
bootstrap: (data: { path: string; id: string; name: string }) =>
@@ -167,10 +313,56 @@ export const api = {
post<AuditResult>(`/projects/${projectId}/audit`, {}),
auditApply: (projectId: string, taskIds: string[]) =>
post<{ updated: string[]; count: number }>(`/projects/${projectId}/audit/apply`, { task_ids: taskIds }),
patchTask: (id: string, data: { status?: string; execution_mode?: string; priority?: number; route_type?: string; title?: string; brief_text?: string; acceptance_criteria?: string }) =>
patch<Task>(`/tasks/${id}`, data),
patchProject: (id: string, data: { execution_mode?: string; autocommit_enabled?: boolean; obsidian_vault_path?: string; deploy_command?: string; project_type?: string; ssh_host?: string; ssh_user?: string; ssh_key_path?: string; ssh_proxy_jump?: string }) =>
patch<Project>(`/projects/${id}`, data),
deployProject: (projectId: string) =>
post<DeployResult>(`/projects/${projectId}/deploy`, {}),
syncObsidian: (projectId: string) =>
post<ObsidianSyncResult>(`/projects/${projectId}/sync/obsidian`, {}),
deleteProject: (id: string) =>
del<void>(`/projects/${id}`),
deleteDecision: (projectId: string, decisionId: number) =>
del<{ deleted: number }>(`/projects/${projectId}/decisions/${decisionId}`),
createDecision: (data: { project_id: string; type: string; title: string; description: string; category?: string; tags?: string[] }) =>
post<Decision>('/decisions', data),
newProject: (data: NewProjectPayload) =>
post<NewProjectResult>('/projects/new', data),
getPhases: (projectId: string) =>
get<Phase[]>(`/projects/${projectId}/phases`),
notifications: (projectId?: string) =>
get<EscalationNotification[]>(`/notifications${projectId ? `?project_id=${projectId}` : ''}`),
approvePhase: (phaseId: number, comment?: string) =>
post<{ phase: Phase; next_phase: Phase | null }>(`/phases/${phaseId}/approve`, { comment }),
rejectPhase: (phaseId: number, reason: string) =>
post<Phase>(`/phases/${phaseId}/reject`, { reason }),
revisePhase: (phaseId: number, comment: string) =>
post<{ phase: Phase; new_task: Task }>(`/phases/${phaseId}/revise`, { comment }),
startPhase: (projectId: string) =>
post<{ status: string; phase_id: number; task_id: string }>(`/projects/${projectId}/phases/start`, {}),
environments: (projectId: string) =>
get<ProjectEnvironment[]>(`/projects/${projectId}/environments`),
createEnvironment: (projectId: string, data: { name: string; host: string; port?: number; username: string; auth_type?: string; auth_value?: string; is_installed?: boolean }) =>
post<ProjectEnvironment & { scan_task_id?: string }>(`/projects/${projectId}/environments`, data),
updateEnvironment: (projectId: string, envId: number, data: { name?: string; host?: string; port?: number; username?: string; auth_type?: string; auth_value?: string; is_installed?: boolean }) =>
patch<ProjectEnvironment & { scan_task_id?: string }>(`/projects/${projectId}/environments/${envId}`, data),
deleteEnvironment: (projectId: string, envId: number) =>
del<void>(`/projects/${projectId}/environments/${envId}`),
scanEnvironment: (projectId: string, envId: number) =>
post<{ status: string; task_id: string }>(`/projects/${projectId}/environments/${envId}/scan`, {}),
chatHistory: (projectId: string, limit = 50) =>
get<ChatMessage[]>(`/projects/${projectId}/chat?limit=${limit}`),
sendChatMessage: (projectId: string, content: string) =>
post<ChatSendResult>(`/projects/${projectId}/chat`, { content }),
uploadAttachment: (taskId: string, file: File) => {
const fd = new FormData()
fd.append('file', file)
return postForm<Attachment>(`/tasks/${taskId}/attachments`, fd)
},
getAttachments: (taskId: string) =>
get<Attachment[]>(`/tasks/${taskId}/attachments`),
deleteAttachment: (taskId: string, id: number) =>
del<void>(`/tasks/${taskId}/attachments/${id}`),
attachmentUrl: (id: number) => `${BASE}/attachments/${id}/file`,
}


@@ -0,0 +1,55 @@
<script setup lang="ts">
import { ref } from 'vue'
import { api, type Attachment } from '../api'
const props = defineProps<{ attachments: Attachment[]; taskId: string }>()
const emit = defineEmits<{ deleted: [] }>()
const deletingId = ref<number | null>(null)
async function remove(id: number) {
deletingId.value = id
try {
await api.deleteAttachment(props.taskId, id)
emit('deleted')
} catch {
// silently ignore; the parent will reload
} finally {
deletingId.value = null
}
}
function formatSize(bytes: number): string {
if (bytes < 1024) return `${bytes}B`
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)}KB`
return `${(bytes / 1024 / 1024).toFixed(1)}MB`
}
</script>
<template>
<div v-if="attachments.length" class="flex flex-wrap gap-3 mb-3">
<div
v-for="att in attachments"
:key="att.id"
class="relative group border border-gray-700 rounded-lg overflow-hidden bg-gray-900 w-28"
>
<a :href="api.attachmentUrl(att.id)" target="_blank" rel="noopener">
<img
:src="api.attachmentUrl(att.id)"
:alt="att.filename"
class="w-28 h-20 object-cover block"
/>
</a>
<div class="px-1.5 py-1">
<p class="text-[10px] text-gray-400 truncate" :title="att.filename">{{ att.filename }}</p>
<p class="text-[10px] text-gray-600">{{ formatSize(att.size) }}</p>
</div>
<button
@click="remove(att.id)"
:disabled="deletingId === att.id"
class="absolute top-1 right-1 w-5 h-5 rounded-full bg-red-900/80 text-red-400 text-xs leading-none opacity-0 group-hover:opacity-100 transition-opacity disabled:opacity-50 flex items-center justify-center"
title="Удалить"
>✕</button>
</div>
</div>
</template>
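The component's `formatSize` switches units at powers of 1024 with one decimal place; a standalone restatement with its boundary values (the function body is copied out of the component purely for illustration):

```typescript
// Standalone copy of the component's formatSize, to show its unit boundaries.
function formatSize(bytes: number): string {
  if (bytes < 1024) return `${bytes}B`                          // up to 1023B
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)}KB`
  return `${(bytes / 1024 / 1024).toFixed(1)}MB`                // 1 MiB and above
}

console.log(formatSize(1023))      // → 1023B
console.log(formatSize(1024))      // → 1.0KB
console.log(formatSize(1048576))   // → 1.0MB
```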


@@ -0,0 +1,62 @@
<script setup lang="ts">
import { ref } from 'vue'
import { api } from '../api'
const props = defineProps<{ taskId: string }>()
const emit = defineEmits<{ uploaded: [] }>()
const dragging = ref(false)
const uploading = ref(false)
const error = ref('')
const fileInput = ref<HTMLInputElement | null>(null)
async function upload(file: File) {
if (!file.type.startsWith('image/')) {
error.value = 'Поддерживаются только изображения'
return
}
uploading.value = true
error.value = ''
try {
await api.uploadAttachment(props.taskId, file)
emit('uploaded')
} catch (e: any) {
error.value = e.message
} finally {
uploading.value = false
}
}
function onFileChange(event: Event) {
const input = event.target as HTMLInputElement
if (input.files?.[0]) upload(input.files[0])
input.value = ''
}
function onDrop(event: DragEvent) {
dragging.value = false
const file = event.dataTransfer?.files[0]
if (file) upload(file)
}
</script>
<template>
<div
class="border-2 border-dashed rounded-lg p-3 text-center transition-colors cursor-pointer select-none"
:class="dragging ? 'border-blue-500 bg-blue-950/20' : 'border-gray-700 hover:border-gray-500'"
@dragover.prevent="dragging = true"
@dragleave="dragging = false"
@drop.prevent="onDrop"
@click="fileInput?.click()"
>
<input ref="fileInput" type="file" accept="image/*" class="hidden" @change="onFileChange" />
<div v-if="uploading" class="flex items-center justify-center gap-2 text-xs text-blue-400">
<span class="inline-block w-3 h-3 border-2 border-blue-400 border-t-transparent rounded-full animate-spin"></span>
Загрузка...
</div>
<div v-else class="text-xs text-gray-500">
Перетащите изображение или <span class="text-blue-400">нажмите для выбора</span>
</div>
<p v-if="error" class="text-red-400 text-xs mt-1">{{ error }}</p>
</div>
</template>


@@ -9,6 +9,11 @@ const colors: Record<string, string> = {
gray: 'bg-gray-800/50 text-gray-400 border-gray-700',
purple: 'bg-purple-900/50 text-purple-400 border-purple-800',
orange: 'bg-orange-900/50 text-orange-400 border-orange-800',
indigo: 'bg-indigo-900/50 text-indigo-400 border-indigo-800',
cyan: 'bg-cyan-900/50 text-cyan-400 border-cyan-800',
pink: 'bg-pink-900/50 text-pink-400 border-pink-800',
rose: 'bg-rose-900/50 text-rose-400 border-rose-800',
teal: 'bg-teal-900/50 text-teal-400 border-teal-800',
}
</script>


@@ -0,0 +1,127 @@
<script setup lang="ts">
import { ref, computed, onMounted, onUnmounted } from 'vue'
import { api, type EscalationNotification } from '../api'
const STORAGE_KEY = 'kin_dismissed_escalations'
const notifications = ref<EscalationNotification[]>([])
const showPanel = ref(false)
let pollTimer: ReturnType<typeof setInterval> | null = null
function loadDismissed(): Set<string> {
try {
const raw = localStorage.getItem(STORAGE_KEY)
return new Set(raw ? JSON.parse(raw) : [])
} catch {
return new Set()
}
}
function saveDismissed(ids: Set<string>) {
localStorage.setItem(STORAGE_KEY, JSON.stringify([...ids]))
}
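The `loadDismissed`/`saveDismissed` pair above round-trips a `Set` through a JSON array, since `Set` has no direct JSON representation; a standalone sketch of that round-trip (a `Map` stands in for `localStorage` so the sketch runs outside the browser):

```typescript
// Sketch of the dismiss persistence: Set <-> JSON array round-trip.
// A Map stands in for localStorage so this runs outside the browser.
const store = new Map<string, string>()
const STORAGE_KEY = 'kin_dismissed_escalations'

function saveIds(ids: Set<string>) {
  store.set(STORAGE_KEY, JSON.stringify([...ids]))  // spread Set into an array
}
function loadIds(): Set<string> {
  const raw = store.get(STORAGE_KEY)
  return new Set(raw ? JSON.parse(raw) : [])        // missing key → empty Set
}

saveIds(new Set(['KIN-001', 'KIN-002']))
const restored = loadIds()
console.log(restored.has('KIN-001'), restored.size)  // → true 2
```

The try/catch around `JSON.parse` in the real `loadDismissed` additionally guards against corrupted storage, falling back to an empty set.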
const dismissed = ref<Set<string>>(loadDismissed())
const visible = computed(() =>
notifications.value.filter(n => !dismissed.value.has(n.task_id))
)
async function load() {
try {
notifications.value = await api.notifications()
} catch {
// silent: don't break the layout when the endpoint is unavailable
}
}
function dismiss(taskId: string) {
dismissed.value = new Set([...dismissed.value, taskId])
saveDismissed(dismissed.value)
if (visible.value.length === 0) showPanel.value = false
}
function dismissAll() {
const newSet = new Set([...dismissed.value, ...visible.value.map(n => n.task_id)])
dismissed.value = newSet
saveDismissed(newSet)
showPanel.value = false
}
function formatTime(iso: string): string {
try {
return new Date(iso).toLocaleString('ru-RU', { day: '2-digit', month: '2-digit', hour: '2-digit', minute: '2-digit' })
} catch {
return iso
}
}
onMounted(async () => {
await load()
pollTimer = setInterval(load, 10000)
})
onUnmounted(() => {
if (pollTimer) clearInterval(pollTimer)
})
</script>
<template>
<div class="relative">
<!-- Badge button is shown only while there are active escalations -->
<button
v-if="visible.length > 0"
@click="showPanel = !showPanel"
class="relative flex items-center gap-1.5 px-2.5 py-1 text-xs bg-red-900/50 text-red-400 border border-red-800 rounded hover:bg-red-900 transition-colors"
>
<span class="inline-block w-1.5 h-1.5 bg-red-500 rounded-full animate-pulse"></span>
Эскалации
<span class="ml-0.5 font-bold">{{ visible.length }}</span>
</button>
<!-- Notifications panel -->
<div
v-if="showPanel && visible.length > 0"
class="absolute right-0 top-full mt-2 w-96 bg-gray-900 border border-red-900/60 rounded-lg shadow-2xl z-50"
>
<div class="flex items-center justify-between px-4 py-2.5 border-b border-gray-800">
<span class="text-xs font-semibold text-red-400">Эскалации — требуется решение</span>
<div class="flex items-center gap-2">
<button
@click="dismissAll"
class="text-xs text-gray-500 hover:text-gray-300"
>Принять все</button>
<button @click="showPanel = false" class="text-gray-500 hover:text-gray-300 text-lg leading-none">&times;</button>
</div>
</div>
<div class="max-h-80 overflow-y-auto divide-y divide-gray-800">
<div
v-for="n in visible"
:key="n.task_id"
class="px-4 py-3"
>
<div class="flex items-start justify-between gap-2">
<div class="flex-1 min-w-0">
<div class="flex items-center gap-1.5 mb-1">
<span class="text-xs font-mono text-red-400 shrink-0">{{ n.task_id }}</span>
<span class="text-xs text-gray-500">·</span>
<span class="text-xs text-orange-400 shrink-0">{{ n.agent_role }}</span>
<span v-if="n.pipeline_step" class="text-xs text-gray-600 truncate">@ {{ n.pipeline_step }}</span>
</div>
<p class="text-xs text-gray-300 leading-snug break-words">{{ n.reason }}</p>
<p class="text-xs text-gray-600 mt-1">{{ formatTime(n.blocked_at) }}</p>
</div>
<button
@click="dismiss(n.task_id)"
class="shrink-0 px-2 py-1 text-xs bg-gray-800 text-gray-400 border border-gray-700 rounded hover:bg-gray-700 hover:text-gray-200"
>Принято</button>
</div>
</div>
</div>
</div>
<!-- Overlay that closes the panel on outside click -->
<div v-if="showPanel" class="fixed inset-0 z-40" @click="showPanel = false"></div>
</div>
</template>
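The dismiss bookkeeping in the component above is small enough to exercise in isolation. A framework-free sketch of the same logic; the `KVStore` shim stands in for `window.localStorage` and is illustrative, not part of the component:

```typescript
// Sketch of the dismissed-escalations bookkeeping from the component above.
// `KVStore` stands in for window.localStorage so the logic runs outside a browser.
type KVStore = { get(key: string): string | null; set(key: string, value: string): void }

const STORAGE_KEY = 'kin_dismissed_escalations'

function loadDismissed(store: KVStore): Set<string> {
  try {
    const raw = store.get(STORAGE_KEY)
    return new Set<string>(raw ? JSON.parse(raw) : [])
  } catch {
    return new Set<string>() // corrupted JSON degrades to "nothing dismissed"
  }
}

function dismiss(store: KVStore, dismissed: Set<string>, taskId: string): Set<string> {
  // A fresh Set is returned, mirroring the component, where replacing the
  // ref value (rather than mutating it) is what triggers Vue's reactivity.
  const next = new Set([...dismissed, taskId])
  store.set(STORAGE_KEY, JSON.stringify([...next]))
  return next
}
```

Returning a new `Set` on every dismiss is the design choice worth noting: mutating the existing `Set` in place would not be picked up by the `computed` filter.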

View file

@ -5,6 +5,8 @@ import App from './App.vue'
import Dashboard from './views/Dashboard.vue'
import ProjectView from './views/ProjectView.vue'
import TaskDetail from './views/TaskDetail.vue'
import SettingsView from './views/SettingsView.vue'
import ChatView from './views/ChatView.vue'
const router = createRouter({
history: createWebHistory(),
@ -12,6 +14,8 @@ const router = createRouter({
{ path: '/', component: Dashboard },
{ path: '/project/:id', component: ProjectView, props: true },
{ path: '/task/:id', component: TaskDetail, props: true },
{ path: '/settings', component: SettingsView },
{ path: '/chat/:projectId', component: ChatView, props: true },
],
})

View file

@ -0,0 +1,226 @@
<script setup lang="ts">
import { ref, watch, nextTick, onUnmounted } from 'vue'
import { useRouter } from 'vue-router'
import { api, ApiError, type ChatMessage } from '../api'
import Badge from '../components/Badge.vue'
const props = defineProps<{ projectId: string }>()
const router = useRouter()
const messages = ref<ChatMessage[]>([])
const input = ref('')
const sending = ref(false)
const loading = ref(true)
const error = ref('')
const consecutiveErrors = ref(0)
const projectName = ref('')
const messagesEl = ref<HTMLElement | null>(null)
let pollTimer: ReturnType<typeof setInterval> | null = null
function stopPoll() {
if (pollTimer) {
clearInterval(pollTimer)
pollTimer = null
}
}
function hasRunningTasks(msgs: ChatMessage[]) {
return msgs.some(
m => m.task_stub?.status === 'in_progress' || m.task_stub?.status === 'pending'
)
}
function checkAndPoll() {
stopPoll()
if (!hasRunningTasks(messages.value)) return
pollTimer = setInterval(async () => {
try {
const updated = await api.chatHistory(props.projectId)
messages.value = updated
consecutiveErrors.value = 0
error.value = ''
if (!hasRunningTasks(updated)) stopPoll()
} catch (e: any) {
consecutiveErrors.value++
console.warn(`[polling] error #${consecutiveErrors.value}:`, e)
if (consecutiveErrors.value >= 3) {
error.value = 'Сервер недоступен. Проверьте подключение.'
stopPoll()
}
}
}, 3000)
}
async function load() {
stopPoll()
loading.value = true
error.value = ''
consecutiveErrors.value = 0
try {
const [msgs, project] = await Promise.all([
api.chatHistory(props.projectId),
api.project(props.projectId),
])
messages.value = msgs
projectName.value = project.name
} catch (e: any) {
if (e instanceof ApiError && e.message.includes('not found')) {
router.push('/')
return
}
error.value = e.message
} finally {
loading.value = false
}
await nextTick()
scrollToBottom()
checkAndPoll()
}
watch(() => props.projectId, () => {
messages.value = []
input.value = ''
error.value = ''
projectName.value = ''
loading.value = true
load()
}, { immediate: true })
onUnmounted(stopPoll)
async function send() {
const text = input.value.trim()
if (!text || sending.value) return
sending.value = true
error.value = ''
try {
const result = await api.sendChatMessage(props.projectId, text)
input.value = ''
messages.value.push(result.user_message, result.assistant_message)
await nextTick()
scrollToBottom()
checkAndPoll()
} catch (e: any) {
error.value = e.message
} finally {
sending.value = false
}
}
function onKeydown(e: KeyboardEvent) {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault()
send()
}
}
function scrollToBottom() {
if (messagesEl.value) {
messagesEl.value.scrollTop = messagesEl.value.scrollHeight
}
}
function taskStatusColor(status: string): string {
const map: Record<string, string> = {
done: 'green',
in_progress: 'blue',
review: 'yellow',
blocked: 'red',
pending: 'gray',
cancelled: 'gray',
}
return map[status] ?? 'gray'
}
function formatTime(dt: string) {
return new Date(dt).toLocaleTimeString('ru-RU', { hour: '2-digit', minute: '2-digit' })
}
</script>
<template>
<div class="flex flex-col h-[calc(100vh-112px)]">
<!-- Header -->
<div class="flex items-center gap-3 pb-4 border-b border-gray-800">
<router-link
:to="`/project/${projectId}`"
class="text-gray-400 hover:text-gray-200 text-sm no-underline"
>← Проект</router-link>
<span class="text-gray-600">|</span>
<h1 class="text-base font-semibold text-gray-100">
{{ projectName || projectId }}
</h1>
<span class="text-xs text-gray-500 ml-1">· чат</span>
</div>
<!-- Error -->
<div v-if="error" class="mt-3 text-sm text-red-400 bg-red-900/20 border border-red-800 rounded px-3 py-2">
{{ error }}
</div>
<!-- Loading -->
<div v-if="loading" class="flex-1 flex items-center justify-center">
<span class="text-gray-500 text-sm">Загрузка...</span>
</div>
<!-- Messages -->
<div
v-else
ref="messagesEl"
class="flex-1 overflow-y-auto py-4 flex flex-col gap-3 min-h-0"
>
<div v-if="messages.length === 0" class="text-center text-gray-500 text-sm mt-8">
Опишите задачу или спросите о статусе проекта
</div>
<div
v-for="msg in messages"
:key="msg.id"
class="flex"
:class="msg.role === 'user' ? 'justify-end' : 'justify-start'"
>
<div
class="max-w-[70%] rounded-2xl px-4 py-2.5"
:class="msg.role === 'user'
? 'bg-indigo-900/60 border border-indigo-700/50 text-gray-100 rounded-br-sm'
: 'bg-gray-800/70 border border-gray-700/50 text-gray-200 rounded-bl-sm'"
>
<p class="text-sm whitespace-pre-wrap break-words">{{ msg.content }}</p>
<!-- Task stub for task_created messages -->
<div
v-if="msg.message_type === 'task_created' && msg.task_stub"
class="mt-2 pt-2 border-t border-gray-700/40 flex items-center gap-2"
>
<router-link
:to="`/task/${msg.task_stub.id}`"
class="text-xs text-indigo-400 hover:text-indigo-300 no-underline font-mono"
>{{ msg.task_stub.id }}</router-link>
<Badge :color="taskStatusColor(msg.task_stub.status)" :text="msg.task_stub.status" />
</div>
<p class="text-xs mt-1.5 text-gray-500">{{ formatTime(msg.created_at) }}</p>
</div>
</div>
</div>
<!-- Input -->
<div class="pt-3 border-t border-gray-800 flex gap-2 items-end">
<textarea
v-model="input"
:disabled="sending || loading"
placeholder="Опишите задачу или вопрос... (Enter — отправить, Shift+Enter — перенос)"
rows="2"
class="flex-1 bg-gray-800/60 border border-gray-700 rounded-xl px-4 py-2.5 text-sm text-gray-100 placeholder-gray-500 resize-none focus:outline-none focus:border-indigo-600 disabled:opacity-50"
@keydown="onKeydown"
/>
<button
:disabled="sending || loading || !input.trim()"
class="px-4 py-2.5 bg-indigo-600 hover:bg-indigo-500 disabled:opacity-40 disabled:cursor-not-allowed text-white text-sm rounded-xl font-medium transition-colors"
@click="send"
>
{{ sending ? '...' : 'Отправить' }}
</button>
</div>
</div>
</template>
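The polling rules in `ChatView` (poll only while some message's task stub is `pending` or `in_progress`, and give up after three consecutive fetch errors) boil down to two predicates. A sketch under those rules; `shouldStopPolling` is a hypothetical extraction, not a function in the diff:

```typescript
// Distilled poll-gating logic from ChatView above.
type TaskStub = { status: string }
type ChatMessage = { task_stub?: TaskStub }

function hasRunningTasks(msgs: ChatMessage[]): boolean {
  return msgs.some(
    m => m.task_stub?.status === 'in_progress' || m.task_stub?.status === 'pending',
  )
}

// Hypothetical extraction: the component stops polling either when nothing is
// running anymore or after the third consecutive error.
function shouldStopPolling(consecutiveErrors: number, msgs: ChatMessage[]): boolean {
  return consecutiveErrors >= 3 || !hasRunningTasks(msgs)
}
```

Note the asymmetry: a single successful poll resets the error counter to zero, so only an unbroken run of three failures trips the "server unavailable" state.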

View file

@ -11,7 +11,10 @@ const error = ref('')
// Add project modal
const showAdd = ref(false)
const form = ref({
id: '', name: '', path: '', tech_stack: '', priority: 5,
project_type: 'development', ssh_host: '', ssh_user: 'root', ssh_key_path: '', ssh_proxy_jump: '',
})
const formError = ref('')
// Bootstrap modal
@ -20,6 +23,23 @@ const bsForm = ref({ id: '', name: '', path: '' })
const bsError = ref('')
const bsResult = ref('')
// New Project with Research modal
const RESEARCH_ROLES = [
{ key: 'business_analyst', label: 'Business Analyst', hint: 'бизнес-модель, аудитория, монетизация' },
{ key: 'market_researcher', label: 'Market Researcher', hint: 'конкуренты, ниша, сильные/слабые стороны' },
{ key: 'legal_researcher', label: 'Legal Researcher', hint: 'юрисдикция, лицензии, KYC/AML, GDPR' },
{ key: 'tech_researcher', label: 'Tech Researcher', hint: 'API, ограничения, стоимость, альтернативы' },
{ key: 'ux_designer', label: 'UX Designer', hint: 'анализ UX конкурентов, user journey, wireframes' },
{ key: 'marketer', label: 'Marketer', hint: 'стратегия продвижения, SEO, conversion-паттерны' },
]
const showNewProject = ref(false)
const npForm = ref({
id: '', name: '', path: '', description: '', tech_stack: '', priority: 5, language: 'ru',
})
const npRoles = ref<string[]>(['business_analyst', 'market_researcher', 'tech_researcher'])
const npError = ref('')
const npSaving = ref(false)
async function load() {
try {
loading.value = true
@ -66,11 +86,35 @@ function statusColor(s: string) {
async function addProject() {
formError.value = ''
if (form.value.project_type === 'operations' && !form.value.ssh_host) {
formError.value = 'SSH host is required for operations projects'
return
}
if (form.value.project_type !== 'operations' && !form.value.path) {
formError.value = 'Path is required'
return
}
try {
const ts = form.value.tech_stack ? form.value.tech_stack.split(',').map(s => s.trim()).filter(Boolean) : undefined
const payload: Parameters<typeof api.createProject>[0] = {
id: form.value.id,
name: form.value.name,
tech_stack: ts,
priority: form.value.priority,
project_type: form.value.project_type,
}
if (form.value.project_type !== 'operations') {
payload.path = form.value.path
} else {
payload.path = ''
if (form.value.ssh_host) payload.ssh_host = form.value.ssh_host
if (form.value.ssh_user) payload.ssh_user = form.value.ssh_user
if (form.value.ssh_key_path) payload.ssh_key_path = form.value.ssh_key_path
if (form.value.ssh_proxy_jump) payload.ssh_proxy_jump = form.value.ssh_proxy_jump
}
await api.createProject(payload)
showAdd.value = false
form.value = { id: '', name: '', path: '', tech_stack: '', priority: 5, project_type: 'development', ssh_host: '', ssh_user: 'root', ssh_key_path: '', ssh_proxy_jump: '' }
await load()
} catch (e: any) {
formError.value = e.message
@ -88,6 +132,57 @@ async function runBootstrap() {
bsError.value = e.message
}
}
function toggleNpRole(key: string) {
const idx = npRoles.value.indexOf(key)
if (idx >= 0) npRoles.value.splice(idx, 1)
else npRoles.value.push(key)
}
// Delete project
const confirmDeleteId = ref<string | null>(null)
const deleteError = ref('')
async function deleteProject(id: string) {
deleteError.value = ''
try {
await api.deleteProject(id)
projects.value = projects.value.filter(p => p.id !== id)
confirmDeleteId.value = null
} catch (e: any) {
deleteError.value = e.message
}
}
async function createNewProject() {
npError.value = ''
if (!npRoles.value.length) {
npError.value = 'Выберите хотя бы одну роль'
return
}
npSaving.value = true
try {
const ts = npForm.value.tech_stack ? npForm.value.tech_stack.split(',').map(s => s.trim()).filter(Boolean) : undefined
await api.newProject({
id: npForm.value.id,
name: npForm.value.name,
path: npForm.value.path,
description: npForm.value.description,
roles: npRoles.value,
tech_stack: ts,
priority: npForm.value.priority,
language: npForm.value.language,
})
showNewProject.value = false
npForm.value = { id: '', name: '', path: '', description: '', tech_stack: '', priority: 5, language: 'ru' }
npRoles.value = ['business_analyst', 'market_researcher', 'tech_researcher']
await load()
} catch (e: any) {
npError.value = e.message
} finally {
npSaving.value = false
}
}
</script>
<template>
@ -102,9 +197,13 @@ async function runBootstrap() {
class="px-3 py-1.5 text-xs bg-purple-900/50 text-purple-400 border border-purple-800 rounded hover:bg-purple-900">
Bootstrap
</button>
<button @click="showNewProject = true"
class="px-3 py-1.5 text-xs bg-green-900/50 text-green-400 border border-green-800 rounded hover:bg-green-900">
+ New Project
</button>
<button @click="showAdd = true"
class="px-3 py-1.5 text-xs bg-gray-800 text-gray-300 border border-gray-700 rounded hover:bg-gray-700">
+ Blank
</button>
</div>
</div>
@ -113,36 +212,66 @@ async function runBootstrap() {
<p v-else-if="error" class="text-red-400 text-sm">{{ error }}</p>
<div v-else class="grid gap-3">
<div v-for="p in projects" :key="p.id">
<!-- Inline delete confirmation -->
<div v-if="confirmDeleteId === p.id"
class="border border-red-800 rounded-lg p-4 bg-red-950/20">
<p class="text-sm text-gray-200 mb-3">Удалить проект «{{ p.name }}»? Это действие необратимо.</p>
<div class="flex gap-2">
<button @click="deleteProject(p.id)"
title="Подтвердить удаление"
class="px-3 py-1.5 text-xs bg-red-900/50 text-red-400 border border-red-800 rounded hover:bg-red-900">
Да, удалить
</button>
<button @click="confirmDeleteId = null"
title="Отмена удаления"
class="px-3 py-1.5 text-xs bg-gray-800 text-gray-400 border border-gray-700 rounded hover:bg-gray-700">
Отмена
</button>
</div>
<p v-if="deleteError" class="text-red-400 text-xs mt-2">{{ deleteError }}</p>
</div>
<!-- Normal project card -->
<router-link v-else
:to="`/project/${p.id}`"
class="block border border-gray-800 rounded-lg p-4 hover:border-gray-600 transition-colors no-underline"
>
<div class="flex items-center justify-between mb-2">
<div class="flex items-center gap-2">
<span class="text-sm font-semibold text-gray-200">{{ p.id }}</span>
<Badge :text="p.status" :color="statusColor(p.status)" />
<Badge v-if="p.project_type && p.project_type !== 'development'"
:text="p.project_type"
:color="p.project_type === 'operations' ? 'orange' : 'green'" />
<span class="text-sm text-gray-400">{{ p.name }}</span>
</div>
<div class="flex items-center gap-3 text-xs text-gray-500">
<span v-if="costMap[p.id]">${{ costMap[p.id]?.toFixed(2) }}/wk</span>
<span>pri {{ p.priority }}</span>
<button @click.prevent.stop="confirmDeleteId = p.id"
title="Удалить проект"
class="text-gray-600 hover:text-red-400 transition-colors">
<svg class="w-3.5 h-3.5" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
<path stroke-linecap="round" stroke-linejoin="round" d="M19 7l-.867 12.142A2 2 0 0116.138 21H7.862a2 2 0 01-1.995-1.858L5 7m5 4v6m4-6v6m1-10V4a1 1 0 00-1-1h-4a1 1 0 00-1 1v3M4 7h16" />
</svg>
</button>
</div>
</div>
<div class="flex gap-4 text-xs">
<span class="text-gray-500">{{ p.total_tasks }} tasks</span>
<span v-if="p.active_tasks" class="text-blue-400">
<span class="inline-block w-1.5 h-1.5 bg-blue-500 rounded-full animate-pulse mr-0.5"></span>
{{ p.active_tasks }} active
</span>
<span v-if="p.review_tasks" class="text-yellow-400">{{ p.review_tasks }} awaiting review</span>
<span v-if="p.blocked_tasks" class="text-red-400">{{ p.blocked_tasks }} blocked</span>
<span v-if="p.done_tasks" class="text-green-500">{{ p.done_tasks }} done</span>
<span v-if="p.total_tasks - p.done_tasks - p.active_tasks - p.blocked_tasks - (p.review_tasks || 0) > 0" class="text-gray-500">
{{ p.total_tasks - p.done_tasks - p.active_tasks - p.blocked_tasks - (p.review_tasks || 0) }} pending
</span>
</div>
</router-link>
</div>
</div>
<!-- Add Project Modal -->
@ -152,8 +281,47 @@ async function runBootstrap() {
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<input v-model="form.name" placeholder="Name" required
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<!-- Project type selector -->
<div>
<p class="text-xs text-gray-500 mb-1.5">Тип проекта:</p>
<div class="flex gap-2">
<button v-for="t in ['development', 'operations', 'research']" :key="t"
type="button"
@click="form.project_type = t"
class="flex-1 py-1.5 text-xs border rounded transition-colors"
:class="form.project_type === t
? t === 'development' ? 'bg-blue-900/40 text-blue-300 border-blue-700'
: t === 'operations' ? 'bg-orange-900/40 text-orange-300 border-orange-700'
: 'bg-green-900/40 text-green-300 border-green-700'
: 'bg-gray-900 text-gray-500 border-gray-800 hover:text-gray-300 hover:border-gray-600'"
>{{ t }}</button>
</div>
</div>
<!-- Path (development / research) -->
<input v-if="form.project_type !== 'operations'"
v-model="form.path" placeholder="Path (e.g. ~/projects/myproj)"
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<!-- SSH fields (operations) -->
<template v-if="form.project_type === 'operations'">
<input v-model="form.ssh_host" placeholder="SSH host (e.g. 192.168.1.1)" required
class="w-full bg-gray-800 border border-orange-800/60 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<div class="grid grid-cols-2 gap-2">
<input v-model="form.ssh_user" placeholder="SSH user (e.g. root)"
class="bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<input v-model="form.ssh_key_path" placeholder="Key path (e.g. ~/.ssh/id_rsa)"
class="bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
</div>
<div>
<input v-model="form.ssh_proxy_jump" placeholder="ProxyJump (optional, e.g. jumpt)"
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<p class="mt-1 flex items-center gap-1 text-xs text-gray-500">
<svg class="w-3 h-3 flex-shrink-0 text-gray-500" fill="currentColor" viewBox="0 0 20 20">
<path fill-rule="evenodd" d="M18 10a8 8 0 11-16 0 8 8 0 0116 0zm-7-4a1 1 0 11-2 0 1 1 0 012 0zM9 9a1 1 0 000 2v3a1 1 0 001 1h1a1 1 0 100-2v-3a1 1 0 00-1-1H9z" clip-rule="evenodd" />
</svg>
Алиас из ~/.ssh/config на сервере Kin
</p>
</div>
</template>
<input v-model="form.tech_stack" placeholder="Tech stack (comma-separated)"
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<input v-model.number="form.priority" type="number" min="1" max="10" placeholder="Priority (1-10)" <input v-model.number="form.priority" type="number" min="1" max="10" placeholder="Priority (1-10)"
@ -166,6 +334,52 @@ async function runBootstrap() {
</form>
</Modal>
<!-- New Project with Research Modal -->
<Modal v-if="showNewProject" title="New Project — Start Research" @close="showNewProject = false">
<form @submit.prevent="createNewProject" class="space-y-3">
<div class="grid grid-cols-2 gap-2">
<input v-model="npForm.id" placeholder="ID (e.g. myapp)" required
class="bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<input v-model="npForm.name" placeholder="Name" required
class="bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
</div>
<input v-model="npForm.path" placeholder="Path (e.g. ~/projects/myapp)"
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<textarea v-model="npForm.description" placeholder="Описание проекта (свободный текст для агентов)" required rows="4"
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600 resize-none"></textarea>
<input v-model="npForm.tech_stack" placeholder="Tech stack (comma-separated, optional)"
class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
<div>
<p class="text-xs text-gray-500 mb-2">Этапы research (Architect добавляется автоматически последним):</p>
<div class="space-y-1.5">
<label v-for="r in RESEARCH_ROLES" :key="r.key"
class="flex items-start gap-2 cursor-pointer group">
<input type="checkbox"
:checked="npRoles.includes(r.key)"
@change="toggleNpRole(r.key)"
class="mt-0.5 accent-green-500 cursor-pointer" />
<div>
<span class="text-sm text-gray-300 group-hover:text-gray-100">{{ r.label }}</span>
<span class="text-xs text-gray-600 ml-1">— {{ r.hint }}</span>
</div>
</label>
<label class="flex items-start gap-2 opacity-50">
<input type="checkbox" checked disabled class="mt-0.5" />
<div>
<span class="text-sm text-gray-400">Architect</span>
<span class="text-xs text-gray-600 ml-1">— blueprint на основе одобренных исследований</span>
</div>
</label>
</div>
</div>
<p v-if="npError" class="text-red-400 text-xs">{{ npError }}</p>
<button type="submit" :disabled="npSaving"
class="w-full py-2 bg-green-900/50 text-green-400 border border-green-800 rounded text-sm hover:bg-green-900 disabled:opacity-50">
{{ npSaving ? 'Starting...' : 'Start Research' }}
</button>
</form>
</Modal>
<!-- Bootstrap Modal -->
<Modal v-if="showBootstrap" title="Bootstrap Project" @close="showBootstrap = false">
<form @submit.prevent="runBootstrap" class="space-y-3">
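The project card in the Dashboard diff derives its "pending" counter instead of storing it: whatever is not done, active, blocked, or in review counts as pending, and the badge is hidden unless the difference is strictly positive. A standalone sketch of that arithmetic; the type name is illustrative:

```typescript
// Derived "pending" count used by the project card above.
type ProjectCounts = {
  total_tasks: number
  done_tasks: number
  active_tasks: number
  blocked_tasks: number
  review_tasks?: number // optional, exactly as in the template expression
}

function pendingTasks(p: ProjectCounts): number {
  return p.total_tasks - p.done_tasks - p.active_tasks - p.blocked_tasks - (p.review_tasks ?? 0)
}

// The template renders the badge only for a strictly positive result,
// which also hides any negative value caused by out-of-sync counters.
function showPendingBadge(p: ProjectCounts): boolean {
  return pendingTasks(p) > 0
}
```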

File diff suppressed because it is too large

View file

@ -0,0 +1,147 @@
<script setup lang="ts">
import { ref, onMounted } from 'vue'
import { api, type Project, type ObsidianSyncResult } from '../api'
const projects = ref<Project[]>([])
const vaultPaths = ref<Record<string, string>>({})
const deployCommands = ref<Record<string, string>>({})
const saving = ref<Record<string, boolean>>({})
const savingDeploy = ref<Record<string, boolean>>({})
const syncing = ref<Record<string, boolean>>({})
const saveStatus = ref<Record<string, string>>({})
const saveDeployStatus = ref<Record<string, string>>({})
const syncResults = ref<Record<string, ObsidianSyncResult | null>>({})
const error = ref<string | null>(null)
onMounted(async () => {
try {
projects.value = await api.projects()
for (const p of projects.value) {
vaultPaths.value[p.id] = p.obsidian_vault_path ?? ''
deployCommands.value[p.id] = p.deploy_command ?? ''
}
} catch (e) {
error.value = String(e)
}
})
async function saveVaultPath(projectId: string) {
saving.value[projectId] = true
saveStatus.value[projectId] = ''
try {
await api.patchProject(projectId, { obsidian_vault_path: vaultPaths.value[projectId] })
saveStatus.value[projectId] = 'Saved'
} catch (e) {
saveStatus.value[projectId] = `Error: ${e}`
} finally {
saving.value[projectId] = false
}
}
async function saveDeployCommand(projectId: string) {
savingDeploy.value[projectId] = true
saveDeployStatus.value[projectId] = ''
try {
await api.patchProject(projectId, { deploy_command: deployCommands.value[projectId] })
saveDeployStatus.value[projectId] = 'Saved'
} catch (e) {
saveDeployStatus.value[projectId] = `Error: ${e}`
} finally {
savingDeploy.value[projectId] = false
}
}
async function runSync(projectId: string) {
syncing.value[projectId] = true
syncResults.value[projectId] = null
saveStatus.value[projectId] = ''
try {
await api.patchProject(projectId, { obsidian_vault_path: vaultPaths.value[projectId] })
syncResults.value[projectId] = await api.syncObsidian(projectId)
} catch (e) {
saveStatus.value[projectId] = `Sync error: ${e}`
} finally {
syncing.value[projectId] = false
}
}
</script>
<template>
<div>
<h1 class="text-xl font-semibold text-gray-100 mb-6">Settings</h1>
<div v-if="error" class="text-red-400 mb-4">{{ error }}</div>
<div v-for="project in projects" :key="project.id" class="mb-6 p-4 border border-gray-700 rounded-lg">
<div class="flex items-center gap-3 mb-3">
<span class="font-medium text-gray-100">{{ project.name }}</span>
<span class="text-xs text-gray-500 font-mono">{{ project.id }}</span>
</div>
<div class="mb-3">
<label class="block text-xs text-gray-400 mb-1">Obsidian Vault Path</label>
<input
v-model="vaultPaths[project.id]"
type="text"
placeholder="/path/to/obsidian/vault"
class="w-full bg-gray-900 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 font-mono focus:outline-none focus:border-gray-500"
/>
</div>
<div class="mb-3">
<label class="block text-xs text-gray-400 mb-1">Deploy Command</label>
<input
v-model="deployCommands[project.id]"
type="text"
placeholder="git push origin main"
class="w-full bg-gray-900 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 font-mono focus:outline-none focus:border-gray-500"
/>
<p class="text-xs text-gray-600 mt-1">Команда выполняется через shell в директории проекта. Настраивается только администратором.</p>
</div>
<div class="flex items-center gap-3 flex-wrap mb-3">
<button
@click="saveDeployCommand(project.id)"
:disabled="savingDeploy[project.id]"
class="px-3 py-1.5 text-sm bg-gray-700 hover:bg-gray-600 text-gray-200 rounded disabled:opacity-50"
>
{{ savingDeploy[project.id] ? 'Saving…' : 'Save Deploy' }}
</button>
<span v-if="saveDeployStatus[project.id]" class="text-xs" :class="saveDeployStatus[project.id].startsWith('Error') ? 'text-red-400' : 'text-green-400'">
{{ saveDeployStatus[project.id] }}
</span>
</div>
<div class="flex items-center gap-3 flex-wrap">
<button
@click="saveVaultPath(project.id)"
:disabled="saving[project.id]"
class="px-3 py-1.5 text-sm bg-gray-700 hover:bg-gray-600 text-gray-200 rounded disabled:opacity-50"
>
{{ saving[project.id] ? 'Saving…' : 'Save Vault' }}
</button>
<button
@click="runSync(project.id)"
:disabled="syncing[project.id] || !vaultPaths[project.id]"
class="px-3 py-1.5 text-sm bg-indigo-700 hover:bg-indigo-600 text-white rounded disabled:opacity-50"
>
{{ syncing[project.id] ? 'Syncing…' : 'Sync Obsidian' }}
</button>
<span v-if="saveStatus[project.id]" class="text-xs" :class="saveStatus[project.id].startsWith('Error') ? 'text-red-400' : 'text-green-400'">
{{ saveStatus[project.id] }}
</span>
</div>
<div v-if="syncResults[project.id]" class="mt-3 p-3 bg-gray-900 rounded text-xs text-gray-300">
<div>Exported: <span class="text-green-400 font-medium">{{ syncResults[project.id]!.exported_decisions }}</span> decisions</div>
<div>Updated: <span class="text-green-400 font-medium">{{ syncResults[project.id]!.tasks_updated }}</span> tasks</div>
<div v-if="syncResults[project.id]!.errors.length > 0" class="mt-1">
<div v-for="err in syncResults[project.id]!.errors" :key="err" class="text-red-400">{{ err }}</div>
</div>
</div>
</div>
</div>
</template>

View file

@ -1,9 +1,11 @@
<script setup lang="ts">
import { ref, onMounted, onUnmounted, computed } from 'vue'
import { useRoute, useRouter } from 'vue-router'
import { api, ApiError, type TaskFull, type PipelineStep, type PendingAction, type DeployResult, type Attachment } from '../api'
import Badge from '../components/Badge.vue'
import Modal from '../components/Modal.vue'
import AttachmentUploader from '../components/AttachmentUploader.vue'
import AttachmentList from '../components/AttachmentList.vue'
const props = defineProps<{ id: string }>() const props = defineProps<{ id: string }>()
const route = useRoute() const route = useRoute()
@@ -12,8 +14,10 @@ const router = useRouter()
 const task = ref<TaskFull | null>(null)
 const loading = ref(true)
 const error = ref('')
+const claudeLoginError = ref(false)
 const selectedStep = ref<PipelineStep | null>(null)
 const polling = ref(false)
+const pipelineStarting = ref(false)
 let pollTimer: ReturnType<typeof setInterval> | null = null

 // Approve modal
@@ -28,27 +32,31 @@ const resolvingAction = ref(false)
 const showReject = ref(false)
 const rejectReason = ref('')

+// Revise modal
+const showRevise = ref(false)
+const reviseComment = ref('')
+
 // Auto/Review mode (per-task, persisted in DB; falls back to localStorage per project)
 const autoMode = ref(false)

 function loadMode(t: typeof task.value) {
   if (!t) return
   if (t.execution_mode) {
-    autoMode.value = t.execution_mode === 'auto'
+    autoMode.value = t.execution_mode === 'auto_complete'
   } else if (t.status === 'review') {
     // Task is in review: always show Approve/Reject regardless of localStorage
     autoMode.value = false
   } else {
-    autoMode.value = localStorage.getItem(`kin-mode-${t.project_id}`) === 'auto'
+    autoMode.value = localStorage.getItem(`kin-mode-${t.project_id}`) === 'auto_complete'
   }
 }

 async function toggleMode() {
   if (!task.value) return
   autoMode.value = !autoMode.value
-  localStorage.setItem(`kin-mode-${task.value.project_id}`, autoMode.value ? 'auto' : 'review')
+  localStorage.setItem(`kin-mode-${task.value.project_id}`, autoMode.value ? 'auto_complete' : 'review')
   try {
-    const updated = await api.patchTask(props.id, { execution_mode: autoMode.value ? 'auto' : 'review' })
+    const updated = await api.patchTask(props.id, { execution_mode: autoMode.value ? 'auto_complete' : 'review' })
     task.value = { ...task.value, ...updated }
   } catch (e: any) {
     error.value = e.message
@@ -86,7 +94,7 @@ function stopPolling() {
   if (pollTimer) { clearInterval(pollTimer); pollTimer = null }
 }

-onMounted(load)
+onMounted(() => { load(); loadAttachments() })
 onUnmounted(stopPolling)

 function statusColor(s: string) {
@@ -189,18 +197,57 @@ async function reject() {
   }
 }

-async function runPipeline() {
+async function revise() {
+  if (!task.value || !reviseComment.value) return
   try {
-    await api.runTask(props.id)
-    startPolling()
+    await api.reviseTask(props.id, reviseComment.value)
+    showRevise.value = false
+    reviseComment.value = ''
     await load()
   } catch (e: any) {
     error.value = e.message
   }
 }

+async function runPipeline() {
+  claudeLoginError.value = false
+  pipelineStarting.value = true
+  try {
+    await api.runTask(props.id)
+    startPolling()
+    await load()
+  } catch (e: any) {
+    if (e instanceof ApiError && e.code === 'claude_auth_required') {
+      claudeLoginError.value = true
+    } else if (e instanceof ApiError && e.code === 'task_already_running') {
+      error.value = 'Pipeline уже запущен'
+    } else {
+      error.value = e.message
+    }
+  } finally {
+    pipelineStarting.value = false
+  }
+}
+
 const hasSteps = computed(() => (task.value?.pipeline_steps?.length ?? 0) > 0)
 const isRunning = computed(() => task.value?.status === 'in_progress')
+const isManualEscalation = computed(() => task.value?.brief?.task_type === 'manual_escalation')
+
+const resolvingManually = ref(false)
+async function resolveManually() {
+  if (!task.value) return
+  if (!confirm('Пометить задачу как решённую вручную?')) return
+  resolvingManually.value = true
+  try {
+    const updated = await api.patchTask(props.id, { status: 'done' })
+    task.value = { ...task.value, ...updated }
+  } catch (e: any) {
+    error.value = e.message
+  } finally {
+    resolvingManually.value = false
+  }
+}

 function goBack() {
   if (window.history.length > 1) {
@@ -228,6 +275,80 @@ async function changeStatus(newStatus: string) {
     statusChanging.value = false
   }
 }

+// Deploy
+const deploying = ref(false)
+const deployResult = ref<DeployResult | null>(null)
+async function runDeploy() {
+  if (!task.value) return
+  deploying.value = true
+  deployResult.value = null
+  try {
+    deployResult.value = await api.deployProject(task.value.project_id)
+  } catch (e: any) {
+    error.value = e.message
+  } finally {
+    deploying.value = false
+  }
+}
+
+// Attachments
+const attachments = ref<Attachment[]>([])
+async function loadAttachments() {
+  try {
+    attachments.value = await api.getAttachments(props.id)
+  } catch {}
+}
+
+// Edit modal (pending tasks only)
+const showEdit = ref(false)
+const editForm = ref({ title: '', briefText: '', priority: 5, acceptanceCriteria: '' })
+const editLoading = ref(false)
+const editError = ref('')
+
+function getBriefText(brief: Record<string, unknown> | null): string {
+  if (!brief) return ''
+  if (typeof brief === 'string') return brief as string
+  if ('text' in brief) return String(brief.text)
+  return JSON.stringify(brief)
+}
+
+function openEdit() {
+  if (!task.value) return
+  editForm.value = {
+    title: task.value.title,
+    briefText: getBriefText(task.value.brief),
+    priority: task.value.priority,
+    acceptanceCriteria: task.value.acceptance_criteria ?? '',
+  }
+  editError.value = ''
+  showEdit.value = true
+}
+
+async function saveEdit() {
+  if (!task.value) return
+  editLoading.value = true
+  editError.value = ''
+  try {
+    const data: Parameters<typeof api.patchTask>[1] = {}
+    if (editForm.value.title !== task.value.title) data.title = editForm.value.title
+    if (editForm.value.priority !== task.value.priority) data.priority = editForm.value.priority
+    const origBriefText = getBriefText(task.value.brief)
+    if (editForm.value.briefText !== origBriefText) data.brief_text = editForm.value.briefText
+    const origAC = task.value.acceptance_criteria ?? ''
+    if (editForm.value.acceptanceCriteria !== origAC) data.acceptance_criteria = editForm.value.acceptanceCriteria
+    if (Object.keys(data).length === 0) { showEdit.value = false; return }
+    const updated = await api.patchTask(props.id, data)
+    task.value = { ...task.value, ...updated }
+    showEdit.value = false
+  } catch (e: any) {
+    editError.value = e.message
+  } finally {
+    editLoading.value = false
+  }
+}
 </script>

 <template>
@@ -245,7 +366,7 @@ async function changeStatus(newStatus: string) {
         <h1 class="text-xl font-bold text-gray-100">{{ task.id }}</h1>
         <span class="text-gray-400">{{ task.title }}</span>
         <Badge :text="task.status" :color="statusColor(task.status)" />
-        <span v-if="task.execution_mode === 'auto'"
+        <span v-if="task.execution_mode === 'auto_complete'"
           class="text-[10px] px-1.5 py-0.5 bg-yellow-900/40 text-yellow-400 border border-yellow-800 rounded"
           title="Auto mode: agents can write files">&#x1F513; auto</span>
         <select
@@ -264,9 +385,38 @@ async function changeStatus(newStatus: string) {
         <span v-if="isRunning" class="inline-block w-2 h-2 bg-blue-500 rounded-full animate-pulse"></span>
         <span class="text-xs text-gray-600">pri {{ task.priority }}</span>
       </div>
-      <div v-if="task.brief" class="text-xs text-gray-500 mb-1">
+      <!-- Manual escalation context banner -->
+      <div v-if="isManualEscalation" class="mb-3 px-3 py-2 border border-orange-800/60 bg-orange-950/20 rounded">
+        <div class="flex items-center gap-2 mb-1">
+          <span class="text-xs font-semibold text-orange-400">&#9888; Требует ручного решения</span>
+          <span v-if="task.parent_task_id" class="text-xs text-gray-600">
+            эскалация из
+            <router-link :to="`/task/${task.parent_task_id}`" class="text-orange-600 hover:text-orange-400">
+              {{ task.parent_task_id }}
+            </router-link>
+          </span>
+        </div>
+        <p class="text-xs text-orange-300">{{ task.title }}</p>
+        <p v-if="task.brief?.description" class="text-xs text-gray-400 mt-1">{{ task.brief.description }}</p>
+        <p class="text-xs text-gray-600 mt-1">Автопилот не смог выполнить это автоматически. Примите меры вручную и нажмите «Решить вручную».</p>
+      </div>
+
+      <!-- Dangerous skip warning banner -->
+      <div v-if="task.dangerously_skipped" class="mb-3 px-3 py-2 border border-red-700 bg-red-950/40 rounded flex items-start gap-2">
+        <span class="text-red-400 text-base shrink-0">&#9888;</span>
+        <div>
+          <span class="text-xs font-semibold text-red-400">--dangerously-skip-permissions использовался в этой задаче</span>
+          <p class="text-xs text-red-300/70 mt-0.5">Агент выполнял команды с обходом проверок разрешений. Проверьте pipeline-шаги и сделанные изменения.</p>
+        </div>
+      </div>
+
+      <div v-if="task.brief && !isManualEscalation" class="text-xs text-gray-500 mb-1">
         Brief: {{ JSON.stringify(task.brief) }}
       </div>
+      <div v-if="task.acceptance_criteria" class="mb-2 px-3 py-2 border border-gray-700 bg-gray-900/40 rounded">
+        <div class="text-xs font-semibold text-gray-400 mb-1">Критерии приёмки</div>
+        <p class="text-xs text-gray-300 whitespace-pre-wrap">{{ task.acceptance_criteria }}</p>
+      </div>
       <div v-if="task.status === 'blocked' && task.blocked_reason" class="text-xs text-red-400 mb-1 bg-red-950/30 border border-red-800/40 rounded px-2 py-1">
         Blocked: {{ task.blocked_reason }}
       </div>
@@ -335,6 +485,13 @@ async function changeStatus(newStatus: string) {
         </div>
       </div>

+      <!-- Attachments -->
+      <div class="mb-6">
+        <h2 class="text-sm font-semibold text-gray-300 mb-2">Вложения</h2>
+        <AttachmentList :attachments="attachments" :task-id="props.id" @deleted="loadAttachments" />
+        <AttachmentUploader :task-id="props.id" @uploaded="loadAttachments" />
+      </div>
+
      <!-- Actions Bar -->
      <div class="sticky bottom-0 bg-gray-950 border-t border-gray-800 py-3 flex gap-3 -mx-6 px-6 mt-8">
        <div v-if="autoMode && (isRunning || task.status === 'review')"
@@ -347,6 +504,11 @@ async function changeStatus(newStatus: string) {
           class="px-4 py-2 text-sm bg-green-900/50 text-green-400 border border-green-800 rounded hover:bg-green-900">
           &#10003; Approve
         </button>
+        <button v-if="task.status === 'review' && !autoMode"
+          @click="showRevise = true"
+          class="px-4 py-2 text-sm bg-orange-900/50 text-orange-400 border border-orange-800 rounded hover:bg-orange-900">
+          &#x1F504; Revise
+        </button>
         <button v-if="(task.status === 'review' || task.status === 'in_progress') && !autoMode"
           @click="showReject = true"
           class="px-4 py-2 text-sm bg-red-900/50 text-red-400 border border-red-800 rounded hover:bg-red-900">
@@ -361,13 +523,59 @@ async function changeStatus(newStatus: string) {
           :title="autoMode ? 'Auto mode: agents can write files' : 'Review mode: agents read-only'">
           {{ autoMode ? '&#x1F513; Auto' : '&#x1F512; Review' }}
         </button>
+        <button v-if="task.status === 'pending'"
+          @click="openEdit"
+          class="px-3 py-2 text-sm bg-gray-800/50 text-gray-400 border border-gray-700 rounded hover:bg-gray-800">
+          &#9998; Edit
+        </button>
         <button v-if="task.status === 'pending' || task.status === 'blocked'"
           @click="runPipeline"
-          :disabled="polling"
+          :disabled="polling || pipelineStarting"
           class="px-4 py-2 text-sm bg-blue-900/50 text-blue-400 border border-blue-800 rounded hover:bg-blue-900 disabled:opacity-50">
-          <span v-if="polling" class="inline-block w-3 h-3 border-2 border-blue-400 border-t-transparent rounded-full animate-spin mr-1"></span>
-          {{ polling ? 'Pipeline running...' : '&#9654; Run Pipeline' }}
+          <span v-if="polling || pipelineStarting" class="inline-block w-3 h-3 border-2 border-blue-400 border-t-transparent rounded-full animate-spin mr-1"></span>
+          {{ (polling || pipelineStarting) ? 'Pipeline running...' : '&#9654; Run Pipeline' }}
         </button>
+        <button v-if="isManualEscalation && task.status !== 'done' && task.status !== 'cancelled'"
+          @click="resolveManually"
+          :disabled="resolvingManually"
+          class="px-4 py-2 text-sm bg-orange-900/50 text-orange-400 border border-orange-800 rounded hover:bg-orange-900 disabled:opacity-50">
+          <span v-if="resolvingManually" class="inline-block w-3 h-3 border-2 border-orange-400 border-t-transparent rounded-full animate-spin mr-1"></span>
+          {{ resolvingManually ? 'Сохраняем...' : '&#10003; Решить вручную' }}
+        </button>
+        <button v-if="task.status === 'done' && task.project_deploy_command"
+          @click.stop="runDeploy"
+          :disabled="deploying"
+          class="px-4 py-2 text-sm bg-teal-900/50 text-teal-400 border border-teal-800 rounded hover:bg-teal-900 disabled:opacity-50">
+          <span v-if="deploying" class="inline-block w-3 h-3 border-2 border-teal-400 border-t-transparent rounded-full animate-spin mr-1"></span>
+          {{ deploying ? 'Deploying...' : '&#x1F680; Deploy' }}
+        </button>
+      </div>
+
+      <!-- Claude login error banner -->
+      <div v-if="claudeLoginError" class="mt-3 px-4 py-3 border border-yellow-700 bg-yellow-950/30 rounded">
+        <div class="flex items-start justify-between gap-2">
+          <div>
+            <p class="text-sm font-semibold text-yellow-300">&#9888; Claude CLI requires login</p>
+            <p class="text-xs text-yellow-200/80 mt-1">Откройте терминал и выполните:</p>
+            <code class="text-xs text-yellow-400 font-mono bg-black/30 px-2 py-0.5 rounded mt-1 inline-block">claude login</code>
+            <p class="text-xs text-gray-500 mt-1">После входа повторите запуск pipeline.</p>
+          </div>
+          <button @click="claudeLoginError = false" class="text-gray-600 hover:text-gray-400 bg-transparent border-none cursor-pointer text-xs shrink-0">&times;</button>
+        </div>
+      </div>
+
+      <!-- Deploy result inline block -->
+      <div v-if="deployResult" class="mx-0 mt-2 p-3 rounded border text-xs font-mono"
+        :class="deployResult.success ? 'border-teal-800 bg-teal-950/30 text-teal-300' : 'border-red-800 bg-red-950/30 text-red-300'">
+        <div class="flex items-center gap-2 mb-1">
+          <span :class="deployResult.success ? 'text-teal-400' : 'text-red-400'" class="font-semibold">
+            {{ deployResult.success ? '✓ Deploy succeeded' : '✗ Deploy failed' }}
+          </span>
+          <span class="text-gray-500">exit {{ deployResult.exit_code }} · {{ deployResult.duration_seconds }}s</span>
+          <button @click.stop="deployResult = null" class="ml-auto text-gray-600 hover:text-gray-400 bg-transparent border-none cursor-pointer text-xs">&times;</button>
+        </div>
+        <pre v-if="deployResult.stdout" class="whitespace-pre-wrap text-gray-300 max-h-40 overflow-y-auto">{{ deployResult.stdout }}</pre>
+        <pre v-if="deployResult.stderr" class="whitespace-pre-wrap text-red-400/80 max-h-40 overflow-y-auto mt-1">{{ deployResult.stderr }}</pre>
      </div>

      <!-- Approve Modal -->
@@ -438,5 +646,50 @@ async function changeStatus(newStatus: string) {
          </button>
        </form>
      </Modal>

+      <!-- Revise Modal -->
+      <Modal v-if="showRevise" title="&#x1F504; Revise Task" @close="showRevise = false">
+        <form @submit.prevent="revise" class="space-y-3">
+          <p class="text-xs text-gray-500">Опишите, что доработать или уточнить агенту. Задача вернётся в работу с вашим комментарием.</p>
+          <textarea v-model="reviseComment" placeholder="Что доработать / уточнить..." rows="4" required
+            class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600 resize-y"></textarea>
+          <button type="submit"
+            class="w-full py-2 bg-orange-900/50 text-orange-400 border border-orange-800 rounded text-sm hover:bg-orange-900">
+            &#x1F504; Отправить на доработку
+          </button>
+        </form>
+      </Modal>
+
+      <!-- Edit Modal (pending tasks only) -->
+      <Modal v-if="showEdit" title="Edit Task" @close="showEdit = false">
+        <form @submit.prevent="saveEdit" class="space-y-3">
+          <div>
+            <label class="block text-xs text-gray-500 mb-1">Title</label>
+            <input v-model="editForm.title" required
+              class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600" />
+          </div>
+          <div>
+            <label class="block text-xs text-gray-500 mb-1">Brief</label>
+            <textarea v-model="editForm.briefText" rows="4" placeholder="Task description..."
+              class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600 resize-y"></textarea>
+          </div>
+          <div>
+            <label class="block text-xs text-gray-500 mb-1">Priority (1-10)</label>
+            <input v-model.number="editForm.priority" type="number" min="1" max="10" required
+              class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200" />
+          </div>
+          <div>
+            <label class="block text-xs text-gray-500 mb-1">Критерии приёмки</label>
+            <textarea v-model="editForm.acceptanceCriteria" rows="3"
+              placeholder="Что должно быть на выходе? Какой результат считается успешным?"
+              class="w-full bg-gray-800 border border-gray-700 rounded px-3 py-2 text-sm text-gray-200 placeholder-gray-600 resize-y"></textarea>
+          </div>
+          <p v-if="editError" class="text-red-400 text-xs">{{ editError }}</p>
+          <button type="submit" :disabled="editLoading"
+            class="w-full py-2 bg-blue-900/50 text-blue-400 border border-blue-800 rounded text-sm hover:bg-blue-900 disabled:opacity-50">
+            {{ editLoading ? 'Saving...' : 'Save' }}
+          </button>
+        </form>
+      </Modal>
    </div>
 </template>