Compare commits
3 commits: 2053a9d26c...3a4d6ef79d

| Author | SHA1 | Date |
|---|---|---|
| | 3a4d6ef79d | |
| | ce01aeac03 | |
| | 2d58e8577c | |

5 changed files with 497 additions and 4 deletions
126  agents/prompts/error_coordinator.md  Normal file

@@ -0,0 +1,126 @@
You are an Error Coordinator for the Kin multi-agent orchestrator.

Your job: triage ≥2 related bugs in a single investigation — cluster by causal boundary, separate primary faults from cascading symptoms, and build delegation streams for specialist execution.

## Input

You receive:

- PROJECT: id, name, path, tech stack
- TASK: id, title, brief describing the multi-bug investigation
- BUGS: list of bug objects — each must contain: `{ bug_id: string, timestamp: ISO-8601, subsystem: string, message: string, change_surface: array of strings }`
- DECISIONS: known gotchas and workarounds for this project
- PREVIOUS STEP OUTPUT: output from a prior agent in the pipeline (if any)

If `timestamp` is missing for any bug, first-failure determination is impossible. Return `status: partial` with `partial_reason: "missing timestamps for: [bug_ids]"`.
## Working Mode

1. **Step 0**: Read `agents/prompts/debugger.md` first to understand the boundary of responsibility: error_coordinator = triage and delegation only; debugger = single-stream execution (decisions #949, #956)
2. **Step 1** — Activation check: verify there are ≥2 bugs sharing at least one causal boundary. If there is only 1 bug, or all bugs are causally independent, return `status: blocked` with `blocked_reason: "single or unrelated bugs — route directly to debugger"`
3. **Step 2** — Causal clustering: group bugs using the algorithm in ## Focus On. NEVER cluster by message text similarity
4. **Step 3** — Primary fault identification: within each cluster, the bug with the smallest `timestamp` is the `primary_fault`. If timestamps are equal, prioritize by subsystem depth: infrastructure → service → API → UI
5. **Step 4** — Cascading symptoms: every bug in a cluster that is NOT the `primary_fault` is a cascading symptom. Each must have `caused_by` pointing at its direct upstream cause (the `primary_fault`, or an earlier symptom in the chain)
6. **Step 5** — Build investigation streams: one stream per cluster. Assign a specialist using the routing matrix below. Scope = specific file/module names, not subsystem labels
7. **Step 6** — Build the `reintegration_checklist`: list what the parent agent (knowledge_synthesizer or pm) must synthesize from all stream findings after completion
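Step 3's first-failure rule with the subsystem-depth tiebreak can be sketched as follows. The depth ranking table is an assumption inferred from the ordering infrastructure → service → API → UI; equal-format ISO-8601 strings compare correctly as plain strings.

```python
# Deeper subsystems win the tiebreak when timestamps are equal (Step 3)
SUBSYSTEM_DEPTH = {"infrastructure": 0, "service": 1, "api": 2, "ui": 3}


def pick_primary_fault(cluster: list[dict]) -> dict:
    """Select the primary fault: earliest timestamp, then deepest subsystem."""
    return min(
        cluster,
        key=lambda b: (b["timestamp"], SUBSYSTEM_DEPTH.get(b["subsystem"], 99)),
    )


cluster = [
    {"bug_id": "B2", "timestamp": "2026-03-19T10:05:00Z", "subsystem": "api"},
    {"bug_id": "B1", "timestamp": "2026-03-19T10:00:00Z", "subsystem": "service"},
    {"bug_id": "B3", "timestamp": "2026-03-19T10:00:00Z", "subsystem": "infrastructure"},
]
# B1 and B3 tie at 10:00; B3 is deeper, so it is the primary fault
print(pick_primary_fault(cluster)["bug_id"])  # B3
```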
## Focus On

**Causal clustering algorithm** (apply in priority order — stop at the first matching boundary type):

1. `shared_dependency` — bugs share a common library, database, connection pool, or infrastructure component. Strongest boundary type.
2. `release_boundary` — bugs appeared after the same deploy, commit, or version bump. Check `change_surface` overlap across bugs.
3. `configuration_boundary` — bugs relate to the same config file, env variable, or secret.

**FORBIDDEN**: clustering by message text similarity or subsystem name similarity alone — these are symptoms, not causes.
**Confidence scoring:**

- `high` — causal boundary confirmed by reading actual code or config (requires file path references in `boundary_evidence`)
- `medium` — causal boundary is plausible but not verified against source files
- NEVER assign `confidence: high` without verified file references

**Routing matrix:**

| Root cause type | Assign to |
|-----------------|-----------|
| Infrastructure (server, network, disk, DB down) | sysadmin |
| Auth, secrets, OWASP vulnerability | security |
| Application logic, stacktrace, code bug | debugger |
| Reproduction, regression validation | tester |
| Frontend state, UI rendering | frontend_dev |

**You are NOT an executor.** Do NOT diagnose confirmed root causes without reading code. Do NOT propose fixes. Your output is an investigation plan — not an investigation.

## Quality Checks

- `fault_groups` covers ALL input bugs — none left ungrouped (isolated bugs form single-item clusters)
- Each cluster has exactly ONE `primary_fault` (first-failure rule)
- Each `cascading_symptom` has a `caused_by` field pointing to a valid `bug_id`
- `confidence: high` only when `boundary_evidence` contains actual file/config path references
- `streams` has one stream per cluster with a concrete `scope` (file/module names, not labels)
- `reintegration_checklist` is not empty — defines synthesis work for the caller
- Output contains NO `diff_hint`, `fixes`, or confirmed `root_cause` fields (non-executor constraint)

## Return Format

Return ONLY valid JSON (no markdown, no explanation):

```json
{
  "status": "done",
  "fault_groups": [
    {
      "group_id": "G1",
      "causal_boundary_type": "shared_dependency",
      "boundary_evidence": "DB connection pool shared by all three subsystems — db.py pool config",
      "bugs": ["B1", "B2", "B3"]
    }
  ],
  "primary_faults": [
    {
      "bug_id": "B1",
      "hypothesis": "DB connection pool exhausted — earliest failure at t=10:00",
      "confidence": "medium"
    }
  ],
  "cascading_symptoms": [
    { "bug_id": "B2", "caused_by": "B1" },
    { "bug_id": "B3", "caused_by": "B2" }
  ],
  "streams": [
    {
      "specialist": "debugger",
      "scope": "db.py, connection pool config",
      "bugs": ["B1"],
      "priority": "high"
    }
  ],
  "reintegration_checklist": [
    "Synthesize root cause confirmation from debugger stream G1",
    "Verify that cascading chain B1→B2→B3 is resolved after fix",
    "Update decision log if connection pool exhaustion is a recurring gotcha"
  ]
}
```
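The shape of the return object can be checked with a minimal validator. This is a sketch against the field list above, not the orchestrator's actual validation; the `validate_output` helper is hypothetical.

```python
REQUIRED_KEYS = {
    "status", "fault_groups", "primary_faults",
    "cascading_symptoms", "streams", "reintegration_checklist",
}


def validate_output(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means structurally valid."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - payload.keys())]
    if payload.get("status") not in ("done", "partial", "blocked"):
        problems.append(f"invalid status: {payload.get('status')!r}")
    # Every caused_by must point at a bug_id that appears in some fault group
    known_ids = {b for g in payload.get("fault_groups", []) for b in g.get("bugs", [])}
    for s in payload.get("cascading_symptoms", []):
        if s.get("caused_by") not in known_ids:
            problems.append(f"caused_by points at unknown bug: {s.get('caused_by')!r}")
    return problems


ok = {
    "status": "done",
    "fault_groups": [{"group_id": "G1", "bugs": ["B1", "B2"]}],
    "primary_faults": [{"bug_id": "B1"}],
    "cascading_symptoms": [{"bug_id": "B2", "caused_by": "B1"}],
    "streams": [{"specialist": "debugger", "bugs": ["B1"]}],
    "reintegration_checklist": ["synthesize findings"],
}
print(validate_output(ok))  # []
```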
Valid values for `status`: `"done"`, `"partial"`, `"blocked"`.

If `status: partial`, include `partial_reason: "..."` describing what is incomplete.

## Constraints

- Do NOT activate for a single bug or causally independent bugs — route directly to debugger
- Do NOT cluster bugs by message similarity or subsystem name — only by causal boundary type
- Do NOT assign `confidence: high` without file/config references in `boundary_evidence`
- Do NOT produce fixes, diffs, or confirmed root cause diagnoses — triage only
- Do NOT assign more than one stream per cluster — one specialist handles one cluster
- Do NOT leave any input bug ungrouped — isolated bugs form their own single-item clusters

## Blocked Protocol

If you cannot perform the task (fewer than 2 related bugs, missing required input fields, task outside your scope), return this JSON **instead of** the normal output:

```json
{"status": "blocked", "blocked_reason": "<clear explanation>", "blocked_at": "<ISO-8601 datetime>"}
```

Use current datetime for `blocked_at`. Do NOT guess or partially complete — return blocked immediately.
@@ -68,6 +68,11 @@ You receive:
 - `research_head` — tech_researcher, architect
 - `marketing_head` — tech_researcher, spec
+
+**Multi-bug investigation routing:**
+
+- For ≥2 interrelated bugs in one investigation, insert `error_coordinator` as the first step before `debugger`. Use the `multi_bug_debug` route template
+- A single bug or independent bugs → route `debug` (straight to debugger)
 
 **`completion_mode` rules (in priority order):**
 
 1. If `project.execution_mode` is set — use it
@@ -82,6 +87,7 @@ You receive:
 - Acceptance criteria are in the last step's brief (not missing)
 - `relevant_decisions` IDs are correct and relevant to the specialist's work
 - Department heads are used only for genuinely cross-domain complex tasks
+- For a task with ≥2 interrelated bugs: `pipeline[0].role == error_coordinator`
 
 ## Return Format
@@ -317,6 +317,22 @@ specialists:
   output_schema:
     context_packet: "{ architecture_notes: string, key_files: array, constraints: array, unknowns: array, handoff_for: string }"
+
+  error_coordinator:
+    name: "Error Coordinator"
+    model: sonnet
+    tools: [Read, Grep, Glob]
+    description: "Triages ≥2 related bugs: clusters by causal boundary (shared_dependency > release_boundary > configuration_boundary), separates primary faults from cascading symptoms, builds investigation streams. Activates when ≥2 related bugs in one investigation. See also: debugger (single-bug, direct execution)."
+    permissions: read_only
+    context_rules:
+      decisions: [gotcha, workaround]
+    output_schema:
+      status: "done | partial | blocked"
+      fault_groups: "array of { group_id, causal_boundary_type, boundary_evidence, bugs: array }"
+      primary_faults: "array of { bug_id, hypothesis, confidence: high|medium|low }"
+      cascading_symptoms: "array of { bug_id, caused_by: bug_id }"
+      streams: "array of { specialist, scope, bugs: array, priority: high|medium|low }"
+      reintegration_checklist: "array of strings"
 
   marketing_head:
     name: "Marketing Department Head"
     model: opus

@@ -437,3 +453,7 @@ routes:
   dept_research:
     steps: [research_head]
     description: "Research task routed through department head"
+
+  multi_bug_debug:
+    steps: [error_coordinator, debugger, tester]
+    description: "Triage multiple related bugs → debug root cause → verify fix"
@@ -115,11 +115,11 @@ class TestAllPromptsContainStandardStructure:
 class TestPromptCount:
     """Checks that the number of prompts has not changed unexpectedly."""
 
-    def test_prompt_count_is_29(self):
-        """Exactly 29 .md files in agents/prompts/."""
+    def test_prompt_count_is_30(self):
+        """Exactly 30 .md files in agents/prompts/."""
         count = len(_prompt_files())
-        assert count == 29, (  # 29 prompts as of 2026-03-19, +cto_advisor (KIN-DOCS-007, see git log agents/prompts/)
-            f"Expected 29 prompts, found {count}. "
+        assert count == 30, (  # 30 prompts as of 2026-03-19, +error_coordinator (KIN-DOCS-008, see git log agents/prompts/)
+            f"Expected 30 prompts, found {count}. "
             "If a new prompt is added, update this test."
         )
341  tests/test_kin_docs_008_regression.py  Normal file

@@ -0,0 +1,341 @@
"""Regression tests for KIN-DOCS-008: add the error_coordinator pattern for large bug investigations.

Acceptance criteria:
1. agents/prompts/error_coordinator.md exists and contains all 5 standard sections (decision #940)
2. output_schema in specialists.yaml contains the required fields:
   fault_groups, primary_faults, streams (and the full set of 5 fields) (decision #952, #957)
3. Parametrized test: each of the required output_schema fields is present (decision #957)
4. error_coordinator is registered in specialists.yaml with correct attributes (decision #954)
5. Route template 'multi_bug_debug' exists and contains the steps [error_coordinator, debugger, tester]
6. pm.md contains the rule activating error_coordinator for ≥2 interrelated bugs
"""

from pathlib import Path

import pytest
import yaml


SPECIALISTS_YAML = Path(__file__).parent.parent / "agents" / "specialists.yaml"
PROMPTS_DIR = Path(__file__).parent.parent / "agents" / "prompts"
ERROR_COORDINATOR_PROMPT = PROMPTS_DIR / "error_coordinator.md"
PM_PROMPT = PROMPTS_DIR / "pm.md"

REQUIRED_SECTIONS = [
    "## Working Mode",
    "## Focus On",
    "## Quality Checks",
    "## Return Format",
    "## Constraints",
]

# Required output_schema fields (decision #952, #957)
ERROR_COORDINATOR_REQUIRED_SCHEMA_FIELDS = {
    "status",
    "fault_groups",
    "primary_faults",
    "cascading_symptoms",
    "streams",
    "reintegration_checklist",
}


def _load_yaml():
    return yaml.safe_load(SPECIALISTS_YAML.read_text(encoding="utf-8"))


# ===========================================================================
# 1. Prompt structure: the 5 standard sections (AC-1, decision #940)
# ===========================================================================

class TestErrorCoordinatorPromptStructure:
    """agents/prompts/error_coordinator.md exists and contains all 5 standard sections."""

    def test_prompt_file_exists(self):
        """The file agents/prompts/error_coordinator.md exists."""
        assert ERROR_COORDINATOR_PROMPT.exists(), (
            f"error_coordinator prompt not found: {ERROR_COORDINATOR_PROMPT}"
        )

    def test_prompt_file_is_not_empty(self):
        """error_coordinator.md is not empty (more than 100 characters)."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert len(content.strip()) > 100

    @pytest.mark.parametrize("section", REQUIRED_SECTIONS)
    def test_prompt_has_required_section(self, section):
        """error_coordinator.md contains each of the 5 standard sections."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert section in content, (
            f"error_coordinator.md is missing the required section {section!r}"
        )

    def test_prompt_sections_in_correct_order(self):
        """The 5 required sections appear in the correct order."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        positions = [content.find(sec) for sec in REQUIRED_SECTIONS]
        assert all(p != -1 for p in positions), "Not all 5 sections found in error_coordinator.md"
        assert positions == sorted(positions), (
            f"Sections in error_coordinator.md are out of order. "
            f"Positions: {dict(zip(REQUIRED_SECTIONS, positions))}"
        )

    def test_prompt_has_input_section(self):
        """error_coordinator.md contains an ## Input section."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert "## Input" in content, "error_coordinator.md is missing the '## Input' section"

    def test_prompt_contains_blocked_protocol(self):
        """error_coordinator.md contains a Blocked Protocol with the blocked_reason field."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert "blocked_reason" in content, (
            "error_coordinator.md does not contain 'blocked_reason'; the Blocked Protocol is mandatory"
        )

    def test_prompt_contains_blocked_at(self):
        """error_coordinator.md contains the blocked_at field in the Blocked Protocol."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert "blocked_at" in content, (
            "error_coordinator.md does not contain 'blocked_at' in the Blocked Protocol"
        )


# ===========================================================================
# 2. output_schema: required fields (AC-2, decision #952)
# ===========================================================================

class TestErrorCoordinatorOutputSchemaFields:
    """The error_coordinator output_schema contains all required fields (decision #952)."""

    def test_specialist_has_output_schema(self):
        """error_coordinator has an output_schema field in specialists.yaml."""
        data = _load_yaml()
        role = data["specialists"]["error_coordinator"]
        assert "output_schema" in role, "error_coordinator must have an output_schema"

    def test_output_schema_has_fault_groups(self):
        """output_schema contains the key field fault_groups."""
        data = _load_yaml()
        schema = data["specialists"]["error_coordinator"]["output_schema"]
        assert "fault_groups" in schema, (
            "the error_coordinator output_schema must contain the 'fault_groups' field"
        )

    def test_output_schema_has_primary_faults(self):
        """output_schema contains the key field primary_faults."""
        data = _load_yaml()
        schema = data["specialists"]["error_coordinator"]["output_schema"]
        assert "primary_faults" in schema, (
            "the error_coordinator output_schema must contain the 'primary_faults' field"
        )

    def test_output_schema_has_streams(self):
        """output_schema contains the key field streams."""
        data = _load_yaml()
        schema = data["specialists"]["error_coordinator"]["output_schema"]
        assert "streams" in schema, (
            "the error_coordinator output_schema must contain the 'streams' field"
        )


# ===========================================================================
# 3. Parametrized test for missing output_schema fields (AC-3, decision #957)
# ===========================================================================

class TestErrorCoordinatorOutputSchemaParametrized:
    """Parametrized test: each of the required output_schema fields is present (decision #957)."""

    @pytest.mark.parametrize("required_field", sorted(ERROR_COORDINATOR_REQUIRED_SCHEMA_FIELDS))
    def test_output_schema_contains_required_field(self, required_field):
        """The error_coordinator output_schema contains the required field."""
        data = _load_yaml()
        schema = data["specialists"]["error_coordinator"]["output_schema"]
        assert required_field in schema, (
            f"the error_coordinator output_schema must contain the field {required_field!r}"
        )


# ===========================================================================
# 4. Specialist registration in specialists.yaml (AC-4, decision #954)
# ===========================================================================

class TestErrorCoordinatorSpecialistsEntry:
    """error_coordinator is registered in specialists.yaml with correct attributes (decision #954)."""

    def test_error_coordinator_exists_in_specialists(self):
        """error_coordinator is present in the specialists section."""
        data = _load_yaml()
        assert "error_coordinator" in data.get("specialists", {}), (
            "error_coordinator is missing from specialists.yaml"
        )

    def test_error_coordinator_model_is_sonnet(self):
        """error_coordinator uses the sonnet model."""
        data = _load_yaml()
        role = data["specialists"]["error_coordinator"]
        assert role.get("model") == "sonnet", (
            f"Expected model=sonnet, got: {role.get('model')}"
        )

    def test_error_coordinator_permissions_is_read_only(self):
        """error_coordinator has permissions=read_only (analysis without code changes)."""
        data = _load_yaml()
        role = data["specialists"]["error_coordinator"]
        assert role.get("permissions") == "read_only", (
            f"Expected permissions=read_only, got: {role.get('permissions')}"
        )

    def test_error_coordinator_tools_include_read_grep_glob(self):
        """error_coordinator has the Read, Grep, and Glob tools."""
        data = _load_yaml()
        tools = data["specialists"]["error_coordinator"].get("tools", [])
        for tool in ("Read", "Grep", "Glob"):
            assert tool in tools, f"error_coordinator must have the tool {tool!r}"

    def test_error_coordinator_not_in_any_department_workers(self):
        """error_coordinator is not in any department's workers; it is inserted via PM routing."""
        data = _load_yaml()
        for dept_name, dept in data.get("departments", {}).items():
            workers = dept.get("workers", [])
            assert "error_coordinator" not in workers, (
                f"error_coordinator must not be in the workers of department '{dept_name}'. "
                "The specialist is inserted via a PM routing rule, not via a department."
            )


# ===========================================================================
# 5. Route template 'multi_bug_debug' (AC-5)
# ===========================================================================

class TestMultiBugDebugRoute:
    """Route template 'multi_bug_debug' exists and contains the correct steps."""

    def test_multi_bug_debug_route_exists(self):
        """The 'multi_bug_debug' route template is present in specialists.yaml."""
        data = _load_yaml()
        routes = data.get("routes", {})
        assert "multi_bug_debug" in routes, (
            "Route template 'multi_bug_debug' is missing from specialists.yaml"
        )

    def test_multi_bug_debug_first_step_is_error_coordinator(self):
        """Route 'multi_bug_debug': the first step is error_coordinator."""
        data = _load_yaml()
        steps = data["routes"]["multi_bug_debug"]["steps"]
        assert steps[0] == "error_coordinator", (
            f"The first step of 'multi_bug_debug' must be 'error_coordinator', got: {steps[0]!r}"
        )

    def test_multi_bug_debug_contains_debugger(self):
        """Route 'multi_bug_debug' contains the 'debugger' step."""
        data = _load_yaml()
        steps = data["routes"]["multi_bug_debug"]["steps"]
        assert "debugger" in steps, (
            f"Route 'multi_bug_debug' must contain 'debugger'. Steps: {steps}"
        )

    def test_multi_bug_debug_contains_tester(self):
        """Route 'multi_bug_debug' contains the 'tester' step."""
        data = _load_yaml()
        steps = data["routes"]["multi_bug_debug"]["steps"]
        assert "tester" in steps, (
            f"Route 'multi_bug_debug' must contain 'tester'. Steps: {steps}"
        )

    def test_multi_bug_debug_steps_exact(self):
        """Route 'multi_bug_debug' contains exactly the steps [error_coordinator, debugger, tester]."""
        data = _load_yaml()
        steps = data["routes"]["multi_bug_debug"]["steps"]
        assert steps == ["error_coordinator", "debugger", "tester"], (
            f"Expected steps ['error_coordinator', 'debugger', 'tester'], got: {steps}"
        )

    def test_multi_bug_debug_has_description(self):
        """Route 'multi_bug_debug' has a description field."""
        data = _load_yaml()
        route = data["routes"]["multi_bug_debug"]
        assert "description" in route and route["description"], (
            "Route 'multi_bug_debug' must have a non-empty 'description' field"
        )


# ===========================================================================
# 6. pm.md: the error_coordinator activation rule (AC-6)
# ===========================================================================

class TestPmMultiBugRoutingRule:
    """pm.md contains the rule activating error_coordinator for ≥2 interrelated bugs."""

    def test_pm_mentions_error_coordinator(self):
        """pm.md mentions 'error_coordinator' as the specialist for the multi-bug scenario."""
        content = PM_PROMPT.read_text(encoding="utf-8")
        assert "error_coordinator" in content, (
            "pm.md must mention 'error_coordinator' in its routing rules"
        )

    def test_pm_mentions_multi_bug_debug_route(self):
        """pm.md mentions the 'multi_bug_debug' route template."""
        content = PM_PROMPT.read_text(encoding="utf-8")
        assert "multi_bug_debug" in content, (
            "pm.md must mention the route template 'multi_bug_debug'"
        )

    def test_pm_has_activation_threshold_two_bugs(self):
        """pm.md contains the activation rule for ≥2 interrelated bugs."""
        content = PM_PROMPT.read_text(encoding="utf-8")
        # Check that at least one mention of the ≥2 threshold rule is present
        has_threshold = "≥2" in content or ">= 2" in content or "2 взаимосвяз" in content
        assert has_threshold, (
            "pm.md must contain the rule activating error_coordinator for ≥2 related bugs"
        )

    def test_pm_quality_check_mentions_error_coordinator_first(self):
        """pm.md contains the Quality Check: for ≥2 bugs, pipeline[0].role == error_coordinator."""
        content = PM_PROMPT.read_text(encoding="utf-8")
        assert "pipeline[0].role == error_coordinator" in content, (
            "pm.md Quality Checks must verify pipeline[0].role == error_coordinator "
            "for a task with ≥2 interrelated bugs"
        )


# ===========================================================================
# 7. The prompt defines the key output_schema fields in its text
# ===========================================================================

class TestErrorCoordinatorPromptOutputFields:
    """error_coordinator.md defines the key output-schema fields in ## Return Format."""

    def test_prompt_defines_fault_groups(self):
        """The prompt defines the 'fault_groups' field in Return Format."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert "fault_groups" in content, (
            "error_coordinator.md must define the 'fault_groups' field"
        )

    def test_prompt_defines_primary_faults(self):
        """The prompt defines the 'primary_faults' field in Return Format."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert "primary_faults" in content, (
            "error_coordinator.md must define the 'primary_faults' field"
        )

    def test_prompt_defines_streams(self):
        """The prompt defines the 'streams' field in Return Format."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert "streams" in content, (
            "error_coordinator.md must define the 'streams' field"
        )

    def test_prompt_defines_reintegration_checklist(self):
        """The prompt defines the 'reintegration_checklist' field in Return Format."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert "reintegration_checklist" in content, (
            "error_coordinator.md must define the 'reintegration_checklist' field"
        )

    def test_prompt_defines_cascading_symptoms(self):
        """The prompt defines the 'cascading_symptoms' field in Return Format."""
        content = ERROR_COORDINATOR_PROMPT.read_text(encoding="utf-8")
        assert "cascading_symptoms" in content, (
            "error_coordinator.md must define the 'cascading_symptoms' field"
        )