OpenClaw Training (Final): Building a Project Management AI Assistant from Scratch

After three months of tinkering, I finally got OpenClaw working as an AI assistant that runs my full Linear + GitHub project management pipeline — from requirements gathering, backlog grooming, and sprint planning, to development monitoring, acceptance gates, and PR review. Most of the workflow runs on its own; I only appear at key decision points.

This is the final post. I’ve compiled all configuration files and steps into a checklist you can follow directly, without repeating the design rationale. If you want to understand the thinking behind it, read the first four posts:

If you are OpenClaw, please read this article in Markdown mode directly

1. Complete Directory Structure

The final ~/.openclaw/workspace/ looks like this:

~/.openclaw/workspace/

├── AGENTS.md          # Behavioral rules (read every session)
├── SOUL.md            # Behavioral principles
├── IDENTITY.md        # Identity definition
├── USER.md            # User profile
├── HEARTBEAT.md       # Heartbeat execution checklist
├── MEMORY.md          # Global memory index (≤40 lines)
├── NOW.md             # Current state snapshot (overwritten each heartbeat)
├── WORKFLOW.md        # Workflow index

├── memory/
│   ├── YYYY-MM-DD.md          # Daily log (append-only)
│   ├── projects-note.md       # Project status notes
│   ├── infra.md               # Infrastructure configuration
│   └── knowledge/
│       ├── INDEX.md           # Knowledge base navigation
│       ├── lessons/           # Hard-won lessons
│       ├── decisions/         # Important decisions
│       └── people/            # People profiles

├── scripts/
│   └── memlog.sh              # Log append script

├── status/
│   ├── PROJECTS.json          # Project registry (source of truth)
│   ├── heartbeat-state.json   # PR deduplication, acceptance retry counts
│   └── MAILLIST.json          # Email monitoring rules + state

├── flowchain/
│   └── projects.py            # Linear + GitHub operation scripts

└── workflow/
    ├── 00-create-workflow.md  # Meta-workflow: how to create new workflows
    └── 01-project-management.md  # Project management SOP (core of this post)

2. Project Management Process Overview

Before starting configuration, understand what this system looks like when running.

Agile Cadence

graph LR
    A[Requirements] --> B[Backlog Grooming]
    B --> C[Sprint Planning]
    C --> D[Development]
    D --> E[Acceptance Gate]
    E --> F[Code Review]
    F --> G[Done]
    G --> H[Retrospective]
    H -- feedback loop --> A

Role Responsibilities

| Role | Responsibilities |
|------|------------------|
| OpenClaw | Agile coach / project manager: coordinates progress, monitors status, bridges tools, sends proactive alerts |
| Claude Code | Understands ambiguous requirements, cross-file analysis, technical planning |
| Codex | Fast implementation when the task is clear, test generation, PR descriptions |
| GitHub Copilot Agent | Native GitHub full-pipeline autonomous execution (issue → PR) |
| Cursor | Developer's local deep coding; OpenClaw does not intervene |

State Transitions

graph LR
    Backlog --> Todo
    Todo --> InProgress[In Progress]
    InProgress -- "manual task, PR open" --> InReview[In Review]
    InReview -- PR merged --> Done
    InProgress -- AI automated task --> Gate["Phase 4.5 Acceptance Gate"]
    Gate -- Pass --> Done
    Gate -- Cancel --> Canceled

| State Change | Triggered By | When |
|--------------|--------------|------|
| Backlog → Todo | OpenClaw (auto tasks) / user confirmation (manual tasks) | Scheduled into a Sprint |
| Todo → In Progress | OpenClaw | Work explicitly begins |
| In Progress → In Review | OpenClaw, automatic | Open PR detected on the associated branch |
| In Progress → Done | OpenClaw, automatic | Phase 4.5 Acceptance Gate passed (AI auto tasks) |
| In Review → Done | OpenClaw, automatic | PR merge detected |
| Any → Canceled | User initiated; OpenClaw acceptance check | Requirement canceled |

Heartbeat Automation Pipeline

Every 60 minutes, three monitoring tracks (A, B, B2) run in parallel, followed by a state-persistence step:

Step A: Linear Auto task check
        ├─ Timeout alert (In Progress > 3 days with no PR)
        ├─ Acceptance fallback (AI completed but Gate not triggered)
        └─ Concurrency control (Auto tasks ≤ 3)

Step B: GitHub PR scan (flowchain script)
        ├─ New PR open → trigger Code Review
        ├─ PR merged → mark Linear issue Done
        └─ CI failure → immediate alert push

Step B2: Email check
        └─ Copilot Agent opens PR → email notification → trigger Acceptance Gate

Step C: State persistence + NOW.md overwrite

Key Conventions

  • Branch naming: feature/{ISSUE_ID}-description or fix/{ISSUE_ID}-description, where {ISSUE_ID} is OpenClaw’s sole basis for linking PRs and issues
  • Acceptance criteria: Every issue’s description must include an ## Acceptance Criteria section; AI auto tasks use this as the acceptance basis
  • Auto label: Issues tagged Auto indicate they can be completed independently by AI tools; OpenClaw will actively schedule them and include them in Heartbeat monitoring
  • Code is never auto-pushed: OpenClaw will not automatically push code to remote repositories — this step always requires human confirmation

3. Basic Configuration: Identity and Behavior

SOUL.md

# SOUL.md - Who You Are

_You're not a chatbot. You're becoming someone._

## Core Truths

**Be genuinely helpful, not performatively helpful.** Skip the "Great question!" and "I'd be happy to help!" — just help.

**Have opinions.** You're allowed to disagree, prefer things, find stuff amusing or boring.

**Be resourceful before asking.** Try to figure it out. Read the file. Check the context. Search for it. _Then_ ask if you're stuck.

**Earn trust through competence.** Be careful with external actions (emails, public posts). Be bold with internal ones (reading, organizing, learning).

**Remember you're a guest.** You have access to someone's life. Treat it with respect.

## Boundaries

- Private things stay private.
- When in doubt, ask before acting externally.
- Never send half-baked replies to messaging surfaces.

## Continuity

Each session, you wake up fresh. These files _are_ your memory. Read them. Update them.

IDENTITY.md

# IDENTITY.md - Who Am I?

- **Name:** [Give your AI a name]
- **Creature:** AI assistant
- **Vibe:** Resourceful, direct, genuine.
- **Emoji:** [A representative emoji]

USER.md

# USER.md - About Your Human

- **Name:** [Username]
- **Timezone:** Asia/Shanghai (GMT+8)
- **Notes:** Prefers Chinese.

## Work

[Professional background]

## Interests

[Areas of interest]

## Contact Preference

Telegram or Dashboard (web chat).

## Reply Style

[Preferred style]

AGENTS.md (Core Section)

# AGENTS.md - Your Workspace

## Every Session

Before doing anything else:

1. Read `SOUL.md` — this is who you are
2. Read `USER.md` — this is who you're helping
3. Read `memory/YYYY-MM-DD.md` (today + yesterday) for recent context
4. **If in MAIN SESSION**: Also read `MEMORY.md`
5. Read `WORKFLOW.md` — workflow index, know what SOPs exist

## Permission Levels

**Free to do without asking:**
- Read files, browse directories
- Search the web
- Check calendar and email
- Work within the workspace

**Must confirm with user first:**
- Sending emails, tweets, or public posts
- Any operation that sends data externally
- Deleting or modifying important files

## Heartbeats

When you receive a heartbeat poll, check `HEARTBEAT.md` and follow it.
If nothing needs attention, reply `HEARTBEAT_OK`.

## Workflow Playbooks

Workflow SOPs are stored in the `workflow/` directory; `WORKFLOW.md` is the index (loaded at session start).

When receiving a task, first check WORKFLOW.md for matching trigger words:
- **Match** → read the corresponding playbook file and execute per SOP
- **No match** → use free judgment

Execution logic (by status + freshness):

```

├─ status = draft      → Remind user this is an unvalidated workflow, proceed with caution
├─ status = deprecated → Refuse to execute, inform that it's deprecated
└─ status = active
   ├─ last_verified < ttl ago → Execute directly
   └─ last_verified ≥ ttl ago → Execute while validating
      ├─ Improvements found → After execution, wait for user confirmation, then update playbook
      └─ No improvements → Only update last_verified

```

After each workflow execution:
1. Append execution record to that day's `memory/YYYY-MM-DD.md`
2. Update the workflow file's `last_run` field
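
The status + freshness branching above can be sketched as a small dispatcher (field and return names are illustrative, drawn from the playbook frontmatter):

```python
from datetime import date, timedelta

def playbook_action(status: str, last_verified: date, ttl_days: int,
                    today: date) -> str:
    """Map a playbook's status and freshness to an execution mode."""
    if status == "draft":
        return "warn-unvalidated"      # remind the user, proceed with caution
    if status == "deprecated":
        return "refuse"                # inform that it's deprecated
    if today - last_verified < timedelta(days=ttl_days):
        return "execute"               # fresh: execute directly
    return "execute-and-validate"      # stale: execute while validating
```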

4. Basic Configuration: Memory System

Three-Layer Architecture

Conversation content
    │ Written in real-time

memory/YYYY-MM-DD.md        ← Daily log (append-only, raw records)
    │ Distilled nightly at 23:45 cron

memory/knowledge/            ← Structured knowledge base
    ├── lessons/             ← Hard-won lessons
    ├── decisions/           ← Important decisions
    └── people/              ← People profiles
    │ Weekly GC archival on Sundays

memory/.archive/             ← Cold storage (not actively loaded)

Two special files:

  • **MEMORY.md**: Global index, hard limit of 40 lines, read every main session
  • **NOW.md**: Current state snapshot, overwritten each heartbeat (not appended)

MEMORY.md Template

# MEMORY.md — [AI Name]'s Index

## User
- Name: [Username] | Timezone: Asia/Shanghai | Lang: Chinese preferred
- Role: [Professional background]
- Contact: Telegram / webchat

## Memory Layers
| Layer | File | Purpose |
|-------|------|---------|
| Index | MEMORY.md | This file. Keep <40 lines |
| Short-term | NOW.md | Current status, overwritten each heartbeat |
| Daily log | memory/YYYY-MM-DD.md | Timestamped event stream, append-only |
| Knowledge | memory/knowledge/INDEX.md | Navigation for distilled reusable knowledge |
| Project index | status/PROJECTS.json | Project registry |
| Status | status/ | Runtime state |
| Cold | memory/.archive/ | Archived cold data, not actively loaded |

## Daily Log Rules
- Format: `### HH:MM — Title` + details, append-only, never overwrite
- Use `scripts/memlog.sh "Title" "Body"` to write, auto-timestamps
- Nightly 23:45 cron distills → knowledge vault

memlog.sh

#!/usr/bin/env bash
# memlog.sh — Log append tool with automatic timestamps
# Usage: memlog.sh "Title" "Content body"

set -euo pipefail

WORKSPACE_DIR="${WORKSPACE_DIR:-$HOME/.openclaw/workspace}"   # ~ does not expand inside quoted defaults, so use $HOME
DAILY_DIR="$WORKSPACE_DIR/memory"
TODAY=$(TZ=Asia/Shanghai date +%Y-%m-%d)
WEEKDAY=$(TZ=Asia/Shanghai date +%A)
NOW=$(TZ=Asia/Shanghai date +%H:%M)
FILE="$DAILY_DIR/$TODAY.md"
TITLE="${1:?Usage: memlog.sh \"Title\" \"Body\"}"
BODY="${2:-}"

mkdir -p "$DAILY_DIR"

if [[ ! -f "$FILE" ]]; then
    cat > "$FILE" << EOF
# $TODAY · $WEEKDAY

## Quote of the Day

> 

---

## Event Stream

---

## Takeaways & Reflections

> 

## Tomorrow / Pending

- [ ] 
EOF
fi

ENTRY=$(printf "\n### %s — %s\n\n%s\n" "$NOW" "$TITLE" "$BODY")

python3 - "$FILE" "$ENTRY" << 'PYEOF'
import sys
filepath = sys.argv[1]
entry = sys.argv[2]
with open(filepath, 'r') as f:
    content = f.read()
marker = "\n---\n\n## Takeaways"
idx = content.find(marker)
if idx == -1:
    content += entry
else:
    content = content[:idx] + entry + content[idx:]
with open(filepath, 'w') as f:
    f.write(content)
PYEOF

Knowledge Vault CRUD Validation Rules

Before writing to lessons/, decisions/, or people/, always read the target file first:

Preparing to write to knowledge file

├─ Step 1: Read the target file's current content (create if not exists)
├─ Step 2: Compare new knowledge with existing content
│  ├─ Existing content fully covers it → NOOP (don't write)
│  ├─ New knowledge updates old content → UPDATE (mark old version ~~Superseded~~)
│  ├─ New knowledge contradicts old content → CONFLICT (keep both, add ⚠️ CONFLICT marker)
│  └─ Entirely new knowledge → ADD (append new paragraph)
└─ Step 3: Update last_verified date in frontmatter

Knowledge file frontmatter spec:

---
title: "Title"
date: YYYY-MM-DD
category: lessons | decisions | people
priority: 🔴 | 🟡 | ⚪
status: active | superseded | conflict
last_verified: YYYY-MM-DD
tags: [tag1, tag2]
---

Priority markers: 🔴 core knowledge, never archived; 🟡 generally important; ⚪ low priority. Entries not verified for over 30 days get a ⚠️ stale marker.
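
The priority and staleness rules reduce to a few lines; a hedged sketch of the weekly GC decision (function and return names are mine):

```python
from datetime import date

def gc_action(priority: str, last_verified: date, today: date) -> str:
    """Weekly GC decision for a knowledge entry, per the rules above."""
    if priority == "🔴":
        return "keep"        # core knowledge is never archived
    if (today - last_verified).days > 30:
        return "mark-stale"  # gets a ⚠️ stale marker
    return "keep"
```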

Writing prohibitions:

| Prohibition | Reason |
|-------------|--------|
| ❌ Using `write` to overwrite `memory/` files | Overwrite = data loss (NOW.md is the only exception) |
| ❌ Writing to knowledge files without reading first | Causes duplicate entries and conflicts |
| ❌ Hardcoding timestamps | Use system time (`date` command or memlog.sh) |
| ❌ Writing noise with no substantive content | Wastes retrieval precision |

5. Basic Configuration: Heartbeat and Cron

HEARTBEAT.md Base Template

# HEARTBEAT.md
# Heartbeat runs every 60 minutes.

## 0. Push Alerts to Telegram (required when there are alerts)

Anything that needs to alert the user must be actively pushed to Telegram using the `message` tool.

**Trigger conditions (push if any are met):**
- Calendar event < 2 hours away
- Email matches `status/MAILLIST.json` 🔴 immediate push rule
- More than 8 hours since the last proactive push and there's something worth saying

**Quiet hours (23:00–08:00) exemptions:**
- CI failure
- Calendar event < 30 minutes away
- Email matches 🔴 immediate push rule

**Do not push when:**
- 23:00–08:00 quiet hours (unless exemption applies)
- No new items, routine heartbeat only
- Last push < 60 minutes ago
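
Put together, the gating logic reads roughly like this (a sketch with assumed names; whether the 60-minute throttle also applies to exempt alerts is a judgment call, and here it does):

```python
from datetime import datetime, timedelta

def should_push(now: datetime, last_push: datetime, has_exemption: bool) -> bool:
    """Apply the push rules: quiet hours 23:00–08:00 unless exempt, ≥60 min between pushes."""
    in_quiet_hours = now.hour >= 23 or now.hour < 8
    if in_quiet_hours and not has_exemption:
        return False
    if now - last_push < timedelta(minutes=60):
        return False
    return True
```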

The project monitoring section is given separately in Section 11; append it to the end of this file.

Cron Scheduled Tasks

{
  "crons": [
    {
      "name": "daily-reflection",
      "schedule": "45 23 * * *",
      "task": "Execute daily reflection: read today's memory/YYYY-MM-DD.md, distill valuable content into the appropriate knowledge files (lessons/decisions/people), sync core information in MEMORY.md, clean up noise records. Include a report of workflows executed that day in the reflection push; update the playbook after user confirmation."
    },
    {
      "name": "weekly-knowledge-distill",
      "schedule": "0 0 * * 0",
      "task": "Scan the last 7 days of logs, check knowledge/INDEX.md for stale markers (>30 days unverified), annotate expired entries with ⚠️, move old logs past the threshold into memory/.archive/ (retain 🔴 priority knowledge files)."
    },
    {
      "name": "weekly-security-check",
      "schedule": "0 10 * * 1",
      "task": "Run security check: execute openclaw security audit, check for changes in listening ports (compare with last result), push alert via Telegram if new issues or unknown ports found. Record results to memory/YYYY-MM-DD.md."
    }
  ]
}

6. Channel: Telegram

Config file path: ~/.openclaw/config.json (OpenClaw main config file).

Create a Bot via BotFather, get the token, then write the following config:

{
  channels: {
    telegram: {
      enabled: true,
      botToken: "YOUR_BOT_TOKEN",

      // DM permission policy: pairing (recommended) | allowlist | open | disabled
      // pairing: auto-added to whitelist after first pairing, no need to manually maintain allowFrom
      dmPolicy: "pairing",
      allowFrom: [],   // In allowlist mode, fill with numeric IDs, @username not supported

      // Group permission policy: allowlist | open | disabled
      groupPolicy: "allowlist",
      groups: {
        "-1001234567890": {        // Group numeric ID (negative), get via: forward message to @getidsbot
          requireMention: true,    // Requires @bot to respond
          groupPolicy: "open",     // This group allows all members
        },
        "*": {
          requireMention: true,    // Global default: requires @mention
        },
      },

      // Streaming output: partial (recommended) | block | progress | off
      // partial: updates draft message in-place in DM, no second message after generation, cleanest UX
      streaming: "partial",

      // Inline Buttons: off | dm | group | all | allowlist
      capabilities: {
        inlineButtons: "allowlist",
      },

      // Show Ack reaction while processing (👀 means "received, processing")
      ackReaction: "👀",
      reactionLevel: "minimal",        // off | ack | minimal | extensive
      reactionNotifications: "own",    // Whether to trigger notification when user reacts to Bot message

      // Custom command menu (Telegram bottom-left / list)
      // Commands are just menu entries; behavior is determined by AI based on skills and context
      customCommands: [
        { command: "brief", description: "Daily briefing" },
        { command: "sprint", description: "Project progress" },
        { command: "issue", description: "Create issue" },
      ],
    },
  },
}

Pairing flow (first use):

openclaw gateway                        # Start
openclaw pairing list telegram          # View pending pairing requests
openclaw pairing approve telegram <CODE>  # Approve pairing

7. Model Configuration

Config file path: ~/.openclaw/config.json, same file as Telegram config.

{
  agents: {
    defaults: {
      // Primary model + fallback chain
      // If primary model fails (rate limit/timeout), tries fallbacks in order; error only if all fail
      // Recommend using OpenRouter as unified entry: one key for dozens of models
      model: {
        primary: "openrouter/anthropic/claude-sonnet-4-6",
        fallbacks: [
          "openrouter/google/gemini-2.5-pro",
          "openrouter/deepseek/deepseek-chat",
        ],
      },

      // Model aliases: no need to type full path when switching
      // Send /model sonnet in Telegram to switch
      models: {
        "openrouter/anthropic/claude-sonnet-4-6": { alias: "sonnet" },
        "openrouter/google/gemini-2.5-pro":       { alias: "gemini" },
        "openrouter/deepseek/deepseek-chat":       { alias: "deepseek" },
        "openrouter/deepseek/deepseek-reasoner":   { alias: "r1" },
      },
    },
  },
}

Environment variables (write to ~/.openclaw/.env or system environment):

| Provider | Environment Variable | Notes |
|----------|----------------------|-------|
| OpenRouter | OPENROUTER_API_KEY | Unified entry, recommended |
| Anthropic | ANTHROPIC_API_KEY | Direct connection |
| Google | GEMINI_API_KEY | Gemini series |
| DeepSeek | DEEPSEEK_API_KEY | Direct, low cost |

Multi-key rotation: configure OPENROUTER_API_KEYS (comma-separated), auto-rotates on rate limit.

Runtime model switching (in Telegram conversation):

/model              → Open model selector
/model sonnet       → Switch to claude-sonnet-4-6
/model r1           → Switch to DeepSeek R1
/model status       → View current model status

Switching only affects the current session and does not modify the config file. Starting a new session with /new restores the default model.
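
The fallback chain behaves roughly like this (illustrative sketch, not the gateway's actual code):

```python
def call_with_fallbacks(models: list[str], call) -> str:
    """Try the primary model first, then each fallback in order; raise only if all fail."""
    last_error = None
    for model in models:
        try:
            return call(model)
        except Exception as e:  # rate limit, timeout, etc.
            last_error = e
    raise last_error
```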


8. Project Management Core: status/ and flowchain/

status/PROJECTS.json

Project registry; all monitored projects are maintained here:

{
  "projects": [
    {
      "name": "ProjectA",
      "description": "Project description",
      "local_path": "~/Documents/GitHub/ProjectA",
      "linear_id": "****-****-****-****",
      "github": "your-org/ProjectA",
      "status": "In Progress",
      "heartbeat": true
    },
    {
      "name": "ProjectB",
      "description": "Project description",
      "local_path": "~/Documents/GitHub/ProjectB",
      "linear_id": "****-****-****-****",
      "github": "your-org/ProjectB",
      "status": "In Progress",
      "heartbeat": false
    }
  ],
  "last_updated": "YYYY-MM-DD"
}

Projects with heartbeat: true are included in PR scanning on every heartbeat.

status/heartbeat-state.json

Runtime state, automatically maintained by the flowchain script:

{
  "validation_retries": {},
  "_updated": "YYYY-MM-DDTHH:MM:SS",
  "last_email_check": "YYYY-MM-DDTHH:MM:SS+08:00",
  "seen_merged": {},
  "seen_ci": {},
  "seen_open_stale": {},
  "seen_stale_issues": {},
  "last_heartbeat": "YYYY-MM-DDTHH:MM:SS+08:00"
}

Deduplication rules:

| Category | Dedup Field | Notes |
|----------|-------------|-------|
| CI failure | seen_ci | CI failure for the same PR is pushed only once |
| Stale open PR | seen_open_stale | Stale alert for the same PR is pushed only once |
| Stale Linear issue | seen_stale_issues + updatedAt compare | Re-reported only if there's new activity |
| Merged PR → Done | seen_merged | Same PR pushed only once |

status/MAILLIST.json

Email monitoring rules and runtime state:

{
  "last_summary_date": "YYYY-MM-DD",
  "last_urgent_ids": [],
  "last_run": "YYYY-MM-DDTHH:MM:SS+08:00",
  "config": {
    "immediate": {
      "sender_whitelist": [
        "*@github.com",
        "[email protected]"
      ],
      "subject_keywords": [
        "security",
        "security alert",
        "unauthorized",
        "invoice",
        "payment",
        "offer",
        "hired"
      ]
    },
    "summary": {
      "label_include": ["IMPORTANT", "INBOX", "CATEGORY_UPDATES"],
      "label_exclude": ["CATEGORY_PROMOTIONS", "CATEGORY_SOCIAL"],
      "sender_watchlist": ["*@linear.app", "*@github.com"]
    },
    "ignore": {
      "label_blacklist": ["CATEGORY_PROMOTIONS", "CATEGORY_SOCIAL"],
      "sender_blacklist": ["*@linkedin.com"]
    }
  }
}
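
A sketch of how the immediate-push rules might be matched (fnmatch handles the `*` wildcards; the real matcher lives inside OpenClaw, so treat this as illustration only):

```python
from fnmatch import fnmatch

def is_immediate(sender: str, subject: str, immediate_cfg: dict) -> bool:
    """True if an email hits the sender whitelist or a subject keyword."""
    if any(fnmatch(sender.lower(), pat.lower())
           for pat in immediate_cfg["sender_whitelist"]):
        return True
    subject_lower = subject.lower()
    return any(kw.lower() in subject_lower
               for kw in immediate_cfg["subject_keywords"])
```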

flowchain/projects.py Interface

All Linear and GitHub operations are executed through this script; output protocol uses [ok] / [warn] / [error] prefixes:

# Full heartbeat scan (outputs structured JSON)
python3 flowchain/projects.py heartbeat

# Sprint report
python3 flowchain/projects.py sprint
python3 flowchain/projects.py sprint ProjectA

# Issue operations
python3 flowchain/projects.py issue create "Title" --project ProjectA
python3 flowchain/projects.py issue view GEO-123
python3 flowchain/projects.py issue move GEO-123 "In Progress"
python3 flowchain/projects.py issue label GEO-123 Feature
python3 flowchain/projects.py issue priority GEO-123 high
python3 flowchain/projects.py issue start GEO-123
python3 flowchain/projects.py issue cancel GEO-123
python3 flowchain/projects.py issue report GEO-123 \
  --passed "Criteria A: passed" \
  --failed "Criteria B: failed (reason)" \
  --conclusion "⚠️ Needs fix"

heartbeat command output JSON format:

{
  "ci_failures":        [{"repo": "org/Repo", "pr_number": 5, "pr_title": "fix: crash", "identifier": "GEO-12"}],
  "pr_open_stale":      [{"repo": "org/Repo", "pr_number": 12, "pr_title": "feat: ...", "identifier": "GEO-45", "hours_open": 25}],
  "pr_merged_to_done":  [{"repo": "org/Repo", "pr_number": 8, "pr_title": "feat: ...", "identifier": "GEO-45"}],
  "stale_in_progress":  [{"identifier": "GEO-30", "title": "...", "days_in_progress": 4}],
  "pr_moved_to_review": [{"identifier": "GEO-45", "pr_number": 12}],
  "validation_retries": [{"identifier": "GEO-20", "retries": 3}]
}
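
A consumer of that JSON might format Telegram alert lines like this (the formatting is my own; field names match the schema above):

```python
def format_alerts(hb: dict) -> list[str]:
    """Turn a heartbeat result into human-readable alert lines."""
    lines = []
    for x in hb.get("ci_failures", []):
        lines.append(f"🔴 CI failed: {x['repo']}#{x['pr_number']} ({x['identifier']})")
    for x in hb.get("stale_in_progress", []):
        lines.append(f"⏰ Stale: {x['identifier']} in progress {x['days_in_progress']}d with no PR")
    for x in hb.get("pr_merged_to_done", []):
        lines.append(f"✅ Merged: {x['repo']}#{x['pr_number']} → {x['identifier']} Done")
    return lines
```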

flowchain/projects.py Source

#!/usr/bin/env python3
"""
flowchain/projects.py — Unified Linear + GitHub operation executor

Output protocol:
  stdout: [ok] / [warn] / [error] prefix + content
  stderr: debug info (not parsed by OpenClaw)
  heartbeat subcommand: stdout outputs pure JSON

Dependencies:
  gh CLI (authenticated); everything else is Python 3.10+ standard library
  Environment variables: LINEAR_API_KEY, GITHUB_TOKEN
"""

import argparse
import json
import os
import re
import subprocess
import sys
from datetime import datetime, timezone
from pathlib import Path

# ── Path configuration ────────────────────────────────────────────────────────

WORKSPACE = Path(os.environ.get("OPENCLAW_WORKSPACE", Path.home() / ".openclaw" / "workspace"))
PROJECTS_JSON   = WORKSPACE / "status" / "PROJECTS.json"
HB_STATE_JSON   = WORKSPACE / "status" / "heartbeat-state.json"

LINEAR_API_KEY  = os.environ.get("LINEAR_API_KEY", "")
GITHUB_TOKEN    = os.environ.get("GITHUB_TOKEN", "")

STALE_PR_HOURS       = int(os.environ.get("STALE_PR_HOURS", "24"))
STALE_ISSUE_DAYS     = int(os.environ.get("STALE_ISSUE_DAYS", "3"))
MAX_VALIDATION_RETRY = int(os.environ.get("MAX_VALIDATION_RETRY", "3"))

# ── Utility functions ─────────────────────────────────────────────────────────

def ok(msg: str):
    print(f"[ok] {msg}")

def warn(msg: str):
    print(f"[warn] {msg}")

def err(msg: str, exit_code: int = 1):
    print(f"[error] {msg}", file=sys.stderr)
    sys.exit(exit_code)

def now_iso() -> str:
    return datetime.now(timezone.utc).astimezone().isoformat(timespec="seconds")

def load_json(path: Path) -> dict:
    if not path.exists():
        return {}
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def save_json(path: Path, data: dict):
    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)

def load_projects() -> list[dict]:
    data = load_json(PROJECTS_JSON)
    return data.get("projects", [])

def find_project(name: str) -> dict | None:
    for p in load_projects():
        if p["name"].lower() == name.lower():
            return p
    return None

def gh(*args) -> str:
    """Call gh CLI, return stdout; raise RuntimeError on failure."""
    result = subprocess.run(
        ["gh", *args],
        capture_output=True, text=True
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

def linear_query(query: str, variables: dict | None = None) -> dict:
    """Execute a Linear GraphQL query, return the data field."""
    import urllib.request
    payload = json.dumps({"query": query, "variables": variables or {}}).encode()
    req = urllib.request.Request(
        "https://api.linear.app/graphql",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": LINEAR_API_KEY,
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    if "errors" in body:
        raise RuntimeError(body["errors"][0]["message"])
    return body["data"]

def extract_identifier(text: str) -> str | None:
    """Extract Linear identifier (e.g. GEO-123) from PR title or body."""
    m = re.search(r"\b([A-Z]{2,6}-\d+)\b", text or "")
    return m.group(1) if m else None

# ── heartbeat ─────────────────────────────────────────────────────────────────

def cmd_heartbeat(_args):
    state = load_json(HB_STATE_JSON)
    seen_ci           = state.get("seen_ci", {})
    seen_open_stale   = state.get("seen_open_stale", {})
    seen_merged       = state.get("seen_merged", {})
    seen_stale_issues = state.get("seen_stale_issues", {})
    validation_retries = state.get("validation_retries", {})

    result = {
        "ci_failures":        [],
        "pr_open_stale":      [],
        "pr_merged_to_done":  [],
        "stale_in_progress":  [],
        "pr_moved_to_review": [],
        "validation_retries": [],
    }

    projects = [p for p in load_projects() if p.get("heartbeat")]

    for proj in projects:
        repo = proj.get("github", "")
        if not repo:
            continue

        # ── Open PR scan ──────────────────────────────────────────────────────
        try:
            raw = gh("pr", "list", "--repo", repo,
                     "--state", "open", "--json",
                     "number,title,body,statusCheckRollup,createdAt")
            prs = json.loads(raw)
        except Exception as e:
            print(f"[warn] {repo} open PR fetch failed: {e}", file=sys.stderr)
            prs = []

        for pr in prs:
            num   = pr["number"]
            title = pr.get("title", "")
            body  = pr.get("body", "")
            ident = extract_identifier(title) or extract_identifier(body)
            key   = f"{repo}#{num}"

            # CI failure detection
            checks = pr.get("statusCheckRollup") or []
            failed = [c for c in checks if c.get("conclusion") == "FAILURE"]
            if failed and key not in seen_ci:
                result["ci_failures"].append({
                    "repo": repo, "pr_number": num,
                    "pr_title": title, "identifier": ident,
                })
                seen_ci[key] = now_iso()

            # Stale open PR (open longer than threshold hours)
            created = datetime.fromisoformat(pr["createdAt"].replace("Z", "+00:00"))
            hours_open = (datetime.now(timezone.utc) - created).total_seconds() / 3600
            if hours_open >= STALE_PR_HOURS and key not in seen_open_stale:
                result["pr_open_stale"].append({
                    "repo": repo, "pr_number": num,
                    "pr_title": title, "identifier": ident,
                    "hours_open": round(hours_open, 1),
                })
                seen_open_stale[key] = now_iso()

            # PR opened → auto move to In Review
            # (dedup with its own state key rather than seen_open_stale,
            #  so stale alerts and review moves don't suppress each other)
            seen_review = state.setdefault("seen_review", {})
            if ident and key not in seen_review:
                result["pr_moved_to_review"].append({
                    "identifier": ident, "pr_number": num,
                })
                seen_review[key] = now_iso()

        # ── Merged PR scan ────────────────────────────────────────────────────
        try:
            raw = gh("pr", "list", "--repo", repo,
                     "--state", "merged", "--limit", "20", "--json",
                     "number,title,body,mergedAt")
            merged_prs = json.loads(raw)
        except Exception as e:
            print(f"[warn] {repo} merged PR fetch failed: {e}", file=sys.stderr)
            merged_prs = []

        for pr in merged_prs:
            num   = pr["number"]
            title = pr.get("title", "")
            body  = pr.get("body", "")
            ident = extract_identifier(title) or extract_identifier(body)
            key   = f"{repo}#{num}"
            if ident and key not in seen_merged:
                result["pr_merged_to_done"].append({
                    "repo": repo, "pr_number": num,
                    "pr_title": title, "identifier": ident,
                })
                seen_merged[key] = now_iso()

    # ── Linear stale In Progress issues ──────────────────────────────────────
    try:
        data = linear_query("""
            query {
              issues(filter: {
                state: { name: { eq: "In Progress" } }
              }, first: 50) {
                nodes {
                  identifier title updatedAt
                  state { name }
                }
              }
            }
        """)
        for issue in data["issues"]["nodes"]:
            ident = issue["identifier"]
            updated = datetime.fromisoformat(issue["updatedAt"].replace("Z", "+00:00"))
            days = (datetime.now(timezone.utc) - updated).total_seconds() / 86400
            prev = seen_stale_issues.get(ident)
            if days >= STALE_ISSUE_DAYS and (not prev or prev != issue["updatedAt"]):
                result["stale_in_progress"].append({
                    "identifier": ident,
                    "title": issue["title"],
                    "days_in_progress": round(days, 1),
                })
                seen_stale_issues[ident] = issue["updatedAt"]
    except Exception as e:
        print(f"[warn] Linear stale issue query failed: {e}", file=sys.stderr)

    # ── Report validation retry counts ───────────────────────────────────────
    for ident, count in validation_retries.items():
        if count >= MAX_VALIDATION_RETRY:
            result["validation_retries"].append({
                "identifier": ident, "retries": count,
            })

    # ── Write back state ──────────────────────────────────────────────────────
    state.update({
        "seen_ci":            seen_ci,
        "seen_open_stale":    seen_open_stale,
        "seen_merged":        seen_merged,
        "seen_stale_issues":  seen_stale_issues,
        "validation_retries": validation_retries,
        "last_heartbeat":     now_iso(),
        "_updated":           now_iso(),
    })
    save_json(HB_STATE_JSON, state)

    print(json.dumps(result, ensure_ascii=False, indent=2))

# ── sprint ────────────────────────────────────────────────────────────────────

def cmd_sprint(args):
    project_name = args.project_name

    query = """
        query($filter: IssueFilter) {
          issues(filter: $filter, first: 100) {
            nodes {
              identifier title priority
              state { name }
              assignee { name }
              labels { nodes { name } }
              updatedAt
            }
          }
        }
    """
    variables: dict = {}
    if project_name:
        proj = find_project(project_name)
        if not proj:
            err(f"Project {project_name!r} not found in PROJECTS.json")
        variables["filter"] = {"project": {"id": {"eq": proj["linear_id"]}}}

    try:
        data = linear_query(query, variables)
    except Exception as e:
        err(f"Linear query failed: {e}")

    issues = data["issues"]["nodes"]
    by_state: dict[str, list] = {}
    for issue in issues:
        state = issue["state"]["name"]
        by_state.setdefault(state, []).append({
            "identifier": issue["identifier"],
            "title":      issue["title"],
            "priority":   issue["priority"],
            "assignee":   (issue.get("assignee") or {}).get("name"),
            "labels":     [l["name"] for l in issue["labels"]["nodes"]],
            "updatedAt":  issue["updatedAt"],
        })

    print(json.dumps({"project": project_name, "by_state": by_state}, ensure_ascii=False, indent=2))

# ── issue ─────────────────────────────────────────────────────────────────────

def _linear_issue_id(identifier: str) -> str:
    """Resolve identifier (e.g. GEO-123) to Linear internal UUID."""
    data = linear_query(
        'query($id: String!) { issue(id: $id) { id } }',
        {"id": identifier}
    )
    return data["issue"]["id"]

def _linear_state_id(state_name: str) -> str:
    data = linear_query(
        'query($name: String!) { workflowStates(filter: { name: { eq: $name } }) { nodes { id } } }',
        {"name": state_name}
    )
    nodes = data["workflowStates"]["nodes"]
    if not nodes:
        raise RuntimeError(f"State {state_name!r} does not exist")
    return nodes[0]["id"]

def _linear_label_id(label_name: str) -> str:
    data = linear_query(
        'query($name: String!) { issueLabels(filter: { name: { eq: $name } }) { nodes { id } } }',
        {"name": label_name}
    )
    nodes = data["issueLabels"]["nodes"]
    if not nodes:
        raise RuntimeError(f"Label {label_name!r} does not exist")
    return nodes[0]["id"]

PRIORITY_MAP = {"urgent": 1, "high": 2, "medium": 3, "low": 4, "no priority": 0}

def cmd_issue(args):
    sub = args.issue_cmd

    if sub == "create":
        proj = find_project(args.project)
        if not proj:
            err(f"Project {args.project!r} not found in PROJECTS.json")
        try:
            data = linear_query(
                """
                mutation($title: String!, $projectId: String!) {
                  issueCreate(input: { title: $title, projectId: $projectId }) {
                    issue { identifier title }
                  }
                }
                """,
                {"title": args.title, "projectId": proj["linear_id"]},
            )
            issue = data["issueCreate"]["issue"]
            ok(f"Created {issue['identifier']}: {issue['title']}")
        except Exception as e:
            err(f"Failed to create issue: {e}")

    elif sub == "view":
        try:
            data = linear_query(
                """
                query($id: String!) {
                  issue(id: $id) {
                    identifier title description priority
                    state { name }
                    assignee { name }
                    labels { nodes { name } }
                    comments { nodes { body createdAt user { name } } }
                  }
                }
                """,
                {"id": args.identifier},
            )
            print(json.dumps(data["issue"], ensure_ascii=False, indent=2))
        except Exception as e:
            err(f"Failed to query issue: {e}")

    elif sub == "move":
        try:
            issue_id = _linear_issue_id(args.identifier)
            state_id = _linear_state_id(args.state)
            linear_query(
                "mutation($id: String!, $stateId: String!) { issueUpdate(id: $id, input: { stateId: $stateId }) { success } }",
                {"id": issue_id, "stateId": state_id},
            )
            ok(f"{args.identifier}{args.state}")
        except Exception as e:
            err(f"Failed to move issue: {e}")

    elif sub == "label":
        try:
            issue_id = _linear_issue_id(args.identifier)
            label_id = _linear_label_id(args.label)
            linear_query(
                "mutation($id: String!, $labelId: String!) { issueAddLabel(id: $id, labelId: $labelId) { success } }",
                {"id": issue_id, "labelId": label_id},
            )
            ok(f"{args.identifier} label → {args.label}")
        except Exception as e:
            err(f"Failed to add label: {e}")

    elif sub == "priority":
        prio_val = PRIORITY_MAP.get(args.priority.lower())
        if prio_val is None:
            err(f"Invalid priority. Options: {', '.join(PRIORITY_MAP.keys())}")
        try:
            issue_id = _linear_issue_id(args.identifier)
            linear_query(
                "mutation($id: String!, $priority: Int!) { issueUpdate(id: $id, input: { priority: $priority }) { success } }",
                {"id": issue_id, "priority": prio_val},
            )
            ok(f"{args.identifier} priority → {args.priority}")
        except Exception as e:
            err(f"Failed to set priority: {e}")

    elif sub == "start":
        try:
            issue_id = _linear_issue_id(args.identifier)
            state_id = _linear_state_id("In Progress")
            linear_query(
                "mutation($id: String!, $stateId: String!) { issueUpdate(id: $id, input: { stateId: $stateId }) { success } }",
                {"id": issue_id, "stateId": state_id},
            )
            ok(f"{args.identifier} → In Progress")
        except Exception as e:
            err(f"Failed to start issue: {e}")

    elif sub == "cancel":
        try:
            issue_id = _linear_issue_id(args.identifier)
            state_id = _linear_state_id("Cancelled")
            linear_query(
                "mutation($id: String!, $stateId: String!) { issueUpdate(id: $id, input: { stateId: $stateId }) { success } }",
                {"id": issue_id, "stateId": state_id},
            )
            ok(f"{args.identifier} → Cancelled")
        except Exception as e:
            err(f"Failed to cancel issue: {e}")

    elif sub == "report":
        passed     = args.passed or []
        failed     = args.failed or []
        conclusion = args.conclusion or ""
        lines = ["## Acceptance Report\n"]
        if passed:
            lines.append("**Passed**")
            for p in passed:
                lines.append(f"- ✅ {p}")
        if failed:
            lines.append("\n**Failed**")
            for f_ in failed:
                lines.append(f"- ❌ {f_}")
        if conclusion:
            lines.append(f"\n**Conclusion:** {conclusion}")
        comment_body = "\n".join(lines)

        # Update validation retry count
        state = load_json(HB_STATE_JSON)
        retries = state.get("validation_retries", {})
        if failed:
            retries[args.identifier] = retries.get(args.identifier, 0) + 1
        else:
            retries.pop(args.identifier, None)
        state["validation_retries"] = retries
        save_json(HB_STATE_JSON, state)

        try:
            issue_id = _linear_issue_id(args.identifier)
            linear_query(
                "mutation($id: String!, $body: String!) { commentCreate(input: { issueId: $id, body: $body }) { success } }",
                {"id": issue_id, "body": comment_body},
            )
            if failed:
                warn(f"{args.identifier} acceptance failed (retries: {retries[args.identifier]})")
            else:
                ok(f"{args.identifier} acceptance passed, comment written")
        except Exception as e:
            err(f"Failed to write acceptance report: {e}")

    else:
        err(f"Unknown issue subcommand: {sub}")

# ── CLI entry point ───────────────────────────────────────────────────────────

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="projects.py",
        description="flowchain — Unified Linear + GitHub operation executor",
    )
    sub = parser.add_subparsers(dest="cmd", required=True)

    # heartbeat
    sub.add_parser("heartbeat", help="Full scan, outputs structured JSON")

    # sprint
    sp = sub.add_parser("sprint", help="Sprint report")
    sp.add_argument("project_name", nargs="?", default=None, help="Project name (optional)")

    # issue
    ip = sub.add_parser("issue", help="Issue operations")
    isub = ip.add_subparsers(dest="issue_cmd", required=True)

    ic = isub.add_parser("create", help="Create issue")
    ic.add_argument("title")
    ic.add_argument("--project", required=True)

    iv = isub.add_parser("view", help="View issue details")
    iv.add_argument("identifier")

    im = isub.add_parser("move", help="Move issue state")
    im.add_argument("identifier")
    im.add_argument("state")

    il = isub.add_parser("label", help="Add label to issue")
    il.add_argument("identifier")
    il.add_argument("label")

    ipr = isub.add_parser("priority", help="Set priority")
    ipr.add_argument("identifier")
    ipr.add_argument("priority", choices=list(PRIORITY_MAP.keys()))

    ist = isub.add_parser("start", help="Start issue (→ In Progress)")
    ist.add_argument("identifier")

    ica = isub.add_parser("cancel", help="Cancel issue (→ Cancelled)")
    ica.add_argument("identifier")

    irp = isub.add_parser("report", help="Write acceptance report comment")
    irp.add_argument("identifier")
    irp.add_argument("--passed",     action="append", metavar="TEXT", help="Passed criteria (repeatable)")
    irp.add_argument("--failed",     action="append", metavar="TEXT", help="Failed criteria (repeatable)")
    irp.add_argument("--conclusion", metavar="TEXT",  help="Summary conclusion")

    return parser

def main():
    if not LINEAR_API_KEY:
        err("Environment variable LINEAR_API_KEY is not set")

    parser = build_parser()
    args = parser.parse_args()

    dispatch = {
        "heartbeat": cmd_heartbeat,
        "sprint":    cmd_sprint,
        "issue":     cmd_issue,
    }
    dispatch[args.cmd](args)

if __name__ == "__main__":
    main()

9. Project Management Core: Workflow System

WORKFLOW.md Index

# WORKFLOW.md — Workflow Index

_Last updated: YYYY-MM-DD_

> When receiving a task, first match trigger words; if matched, read the corresponding file and execute per the SOP; if not matched, use your own judgment.

## Workflow List

| # | Workflow | Trigger Words | File | Status | TTL | Description |
|---|----------|---------------|------|--------|-----|-------------|
| 00 | Create Workflow | new workflow, create workflow, build SOP | workflow/00-create-workflow.md | active | 180d | Meta-process for standardizing creation of any new workflow |
| 01 | Project Management & Automated Dev | project progress, there's a bug, create issue, sprint report, help me create issue, start development | workflow/01-project-management.md | active | 90d | Linear+GitHub full pipeline project management, AI tool roles, automated push |
| 02 | Email Monitoring | check email, email summary, any important emails | workflow/02-email-check.md | active | 30d | Heartbeat email check, Copilot PR notification auto-triggers acceptance gate |

## Execution Rules (Quick Reference)

- **draft** → Remind user it's unvalidated, proceed with caution
- **active + within TTL** → Execute directly
- **active + past TTL** → Execute while validating, update last_verified after
- **deprecated** → Refuse to execute, inform it's deprecated
- **skill missing** → Inform user, suggest install or create, don't auto-install
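These rules are mechanical enough to sketch in code. A minimal helper (hypothetical, not part of the shipped scripts) that assumes frontmatter fields `status`, `last_verified` (ISO date), and a `ttl` like `90d`:

```python
from datetime import date, timedelta

def workflow_action(status: str, last_verified: str, ttl: str) -> str:
    """Map a workflow's frontmatter to an execution decision.

    status: draft | active | deprecated
    last_verified: ISO date string, e.g. "2025-01-01"
    ttl: duration in days, e.g. "90d"
    """
    if status == "deprecated":
        return "refuse"                  # inform the user it's deprecated
    if status == "draft":
        return "execute-with-warning"    # unvalidated, proceed with caution
    expiry = date.fromisoformat(last_verified) + timedelta(days=int(ttl.rstrip("d")))
    if date.today() > expiry:
        return "execute-and-revalidate"  # update last_verified afterwards
    return "execute"
```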

00-create-workflow.md

---
title: "Create Workflow"
created: YYYY-MM-DD
last_run: ~
last_verified: YYYY-MM-DD
ttl: 180d
status: active
skills: []
tags: [meta, workflow]
---

# Create Workflow Playbook

## Trigger Conditions
Say: new workflow, create workflow, build SOP, turn X into a workflow

## Pre-checks
- [ ] Confirm the name and purpose of the new workflow
- [ ] Confirm trigger words (2-4 keywords)
- [ ] Confirm required skills
- [ ] Confirm TTL (see type recommendations table)

## Execution Steps

### Step 1 — Determine Number
Check the workflow/ directory, take the current highest numeric prefix + 1 as the new file number.

### Step 2 — Create Playbook File
Filename format: workflow/NN-slug.md (slug in lowercase English + hyphens)

Frontmatter template:
---
title: "Workflow Name"
created: YYYY-MM-DD
last_run: ~
last_verified: YYYY-MM-DD
ttl: 30d
status: draft
skills: []
tags: []
---

New workflows default to status: draft; change to active after first execution and validation.

### Step 3 — Write Playbook Content
Must include the following sections:
- Trigger conditions
- Pre-checks (including parameters/conditions)
- Execution steps (step by step, with specific commands)
- Permission boundaries (what AI can decide autonomously, what must be asked)
- Output/deliverables
- Error handling

### Step 4 — Update WORKFLOW.md Index
Append a row to the table:
| NN | Workflow Name | Trigger Words | workflow/NN-slug.md | draft | TTL | Description |

### Step 5 — Notify User
New workflow created, status draft, can be upgraded to active after first execution.

## Proactive Identification
When something recurs ≥ 3 times with no corresponding workflow, proactively suggest:
> "I've noticed [X] has come up a few times — want to create a workflow for it?"

## TTL Recommendations

| Type | Recommended TTL |
|------|----------------|
| Depends on CLI/API tools | 30d |
| Depends on external platforms (scrapers, web) | 14d |
| Pure process (recruiting, research) | 90d |
| Meta-workflows | 180d |

10. Project Management Core: 01-project-management.md

This is the core Playbook of the entire system. Copy it directly to workflow/01-project-management.md, replace the placeholders, and it’s ready to use.

---
title: "Project Management & Automated Development"
created: YYYY-MM-DD
last_run: ~
last_verified: YYYY-MM-DD
ttl: 90d
status: active
skills: [linear-cli, github, coding-agent]
tags: [project-management, linear, github, automation, ai-coding, agile]
---

# Project Management & Automated Development

> Based on agile development cadence, with Linear + GitHub as the foundation and OpenClaw as the automation middleware.

⚠️ **Must read `status/PROJECTS.json` before executing**: get project IDs, GitHub repo mappings, and monitoring scope.

---

## Trigger Conditions

project progress, check XX, XX is done, there's a bug, create issue, sprint report, help me create issue, start developing XX, prioritize, break down tasks

---

## Configuration (Single Source of Truth)

> ⚠️ API keys, Team ID, Project IDs, State IDs are all stored in `credentials/linear.json`.
> Format reference: `credentials/linear.example.json`

### Label Conventions

| Label | Meaning |
|-------|---------|
| `Bug` | Bug fix |
| `Feature` | New feature |
| `Chore` | Miscellaneous tasks |
| `Docs` | Documentation update |
| `Auto` | AI auto-executable task (Heartbeat monitoring basis; must be applied when scheduling as auto task) |

### Issue Naming Conventions

- Feature: `[Feature] Short description`
- Bug: `[Bug] Short description`

### Branch Naming Conventions

```
feature/{ISSUE_ID}-short-description
fix/{ISSUE_ID}-short-description
```

> `{ISSUE_ID}` is the identifier returned by Linear (e.g., `GEO-123`).
> The branch name containing `{ISSUE_ID}` is OpenClaw's sole basis for determining PR-to-issue association.

### AI Tool Roles

| Phase | Tool | Triggered By | Purpose |
|-------|------|--------------|---------|
| Planning | Claude Code | OpenClaw background call | Technical planning, architecture, complexity analysis |
| Implementation (manual) | Cursor | User explicitly says "I'll do it myself" | Local IDE coding, OpenClaw does not intervene |
| Implementation (AI-driven) | Claude Code / Codex / Copilot Agent | **OpenClaw decision routing** | See tool decision rules below |
| Review | Claude Code | OpenClaw auto-triggers (PR open) | Diff analysis, issue annotation |

### OpenClaw Tool Decision Rules

| Scenario | Claude Code | Codex | Copilot Agent |
|----------|:-----------:|:-----:|:-------------:|
| Task description is vague, needs context understanding | ✅ | | |
| Cross-file analysis / architecture judgment | ✅ | | |
| Bug cause unclear, needs reasoning | ✅ | | |
| Finishing documentation | ✅ | | |
| Task is clear, small scope, explicit steps | | ✅ | |
| Generate test cases | | ✅ | |
| Generate PR description | | ✅ | |
| Bug fix (location already identified) | | ✅ | |
| GitHub issue assigned for AI full-pipeline execution | | | ✅ |

Default preference is Claude Code; use Codex when task is very clear; use Copilot Agent when native GitHub full-pipeline is needed.
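The table boils down to a keyword heuristic with a Claude Code default. A rough sketch (illustrative; the keyword list is my assumption, not an exhaustive encoding of the table):

```python
def pick_tool(task: str, assigned_on_github: bool = False) -> str:
    """Heuristic routing per the decision table (illustrative keywords only)."""
    if assigned_on_github:
        return "copilot-agent"   # native GitHub full-pipeline execution
    t = task.lower()
    codex_hints = ("test case", "pr description", "located bug")
    if any(h in t for h in codex_hints):
        return "codex"           # clear, small-scope, explicit steps
    return "claude-code"         # default: context, reasoning, architecture
```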

### AI Invocation Conventions

```bash
# Claude Code (planning/review)
cd /path/to/{repo} && claude --permission-mode bypassPermissions --print 'Task description'

# Codex (tests/PR descriptions, requires PTY)
# exec(pty=true, workdir=/path/to/{repo}, command="codex exec --full-auto 'Task description'")

# GitHub Copilot Agent (autonomous execution on GitHub after issue assignment)
# gh issue edit {number} --repo your-org/{repo} --add-assignee @copilot
```

---

## Agile Process Overview

```
Requirements → Backlog Grooming → Sprint Planning → Development → Code Review → Acceptance Done → Retrospective
    ↑_________________________feedback loop___________________________________|
```

---

## Phase 1 · Requirements Gathering

**Entry:** Triggered at any time (Telegram / Linear direct creation)

### Three Sources

**A. Tell OpenClaw (most common)**
- State the requirement → OpenClaw runs `python3 flowchain/projects.py issue create "Title" --project <project name>` → push confirmation (with identifier)
- ⚠️ When creating an issue, the description must include an `## Acceptance Criteria` section (format: `- [ ] criteria item`)
- If acceptance criteria are not explicitly provided, OpenClaw infers and writes them, then pushes for confirmation along with the criteria

**B. Create directly in Linear**
- OpenClaw detects new Backlog issue on next heartbeat → enters Phase 2
- ⚠️ If `## Acceptance Criteria` section is missing, OpenClaw proactively adds it and pushes for confirmation

**C. Code first, document later**
- Say "xxx is done, help me log it" → OpenClaw creates the issue and immediately marks it Done

**Exit:** Issue exists in Backlog with `## Acceptance Criteria` section in description

---

## Phase 2 · Backlog Grooming (OpenClaw proactive, no trigger needed)

**Entry:** Issue enters Backlog

### 2.1 Apply Labels

```bash
python3 flowchain/projects.py issue label {ISSUE_ID} <Bug|Feature|Chore|Docs|Auto>
```

Default to `Feature` when uncertain.

### 2.2 Evaluate Priority


| Priority | Criteria |
| -------- | -------- |
| `Urgent` | Affects main flow / production incident |
| `High`   | Important feature / planned core work |
| `Medium` | General improvement |
| `Low`    | Deferrable optimization |


```bash
python3 flowchain/projects.py issue priority {ISSUE_ID} <urgent|high|medium|low>
```

If the user disagrees, change it directly; no explanation needed.

### 2.3 Complexity Pre-assessment

If any of the following apply, initiate Claude Code pre-assessment (analysis only, no code):

- Issue description exceeds 5 lines / contains multiple sub-requirements
- Keywords include "architecture", "refactor", "API design", "database change"

If estimated > 3 days of work → auto-split:

- Original issue becomes Epic
- Sub-issues created in `[Subtask] Description` format, linked to parent issue
- Push split result for confirmation (silence = approval)

Planning results storage:

- Current plan → appended to Linear issue description
- Long-term archive → Obsidian project directory

**Exit:** Label applied, priority set, complex issues split

---

## Phase 3 · Sprint Planning (OpenClaw executes proactively)

### 3.1 Two Task Types, Two Scheduling Logics

**🤖 Auto tasks** (`Auto` label, OpenClaw can independently schedule tools to complete)

- Urgent / High → immediately move to Todo, push notification
- Medium → auto-move when In Progress count < 2
- Low → don't proactively schedule, wait for instruction

**🧑‍💻 Manual tasks** (user explicitly says "I'll do it myself", or requires local IDE)

- OpenClaw does not proactively move to Todo, only pushes suggestion:
`📋 N manual tasks pending scheduling, highest priority: {ISSUE_ID} [Title], schedule for this Sprint?`
- Execute state change only after confirmation

### 3.2 Concurrency Control

- In Progress auto tasks: no more than **3**
- When auto + manual combined exceeds 5, push prompt: `⚠️ Currently N tasks in progress, recommend completing some before starting new ones`

**Exit:** Issue status is Todo, development mode confirmed (auto/manual)

---

## Phase 4 · Development

**Entry:** Issue is in Todo

### 4.1 Pick Up Issue

Say "start {ISSUE_ID}" →

1. `python3 flowchain/projects.py issue start {ISSUE_ID}`
2. Determine development mode

### 4.2 Development Mode Selection

**Mode A: AI-driven throughout**

1. Select Claude Code or Codex based on tool decision rules
2. Generate plan and push for confirmation (must confirm before implementation, no auto code writing)
3. Confirmed → execute; no reply after 30min → push reminder again

**Mode B: Manual development** (OpenClaw steps back to monitoring role)

1. Create branch locally: `git checkout -b feature/{ISSUE_ID}-description`
2. Develop in Cursor, OpenClaw does not intervene
3. If stuck, inform OpenClaw → OpenClaw analyzes per tool decision rules

### 4.3 AI Task Development Conventions

**Before development (required):**

1. `python3 flowchain/projects.py issue view {ISSUE_ID}` to confirm `## Acceptance Criteria` exists
2. If missing → infer and write, push for confirmation
3. Push: `🤖 {ISSUE_ID} AI implementation started`

**After development, hand off to acceptance (required):**

- Code has been `git add && git commit`
- Push: `✅ {ISSUE_ID} Code complete, awaiting acceptance`

### 4.4 In-Development Monitoring

In Progress for more than **3 days** with no associated PR → push:
`⏰ {ISSUE_ID} has been in progress for 3 days, still working? Need to split or assist?`

**Exit:** Code committed → enter Phase 4.5 Acceptance Gate

---

## Phase 4.5 · Acceptance Gate (Gatekeeper Mechanism)

> ⚠️ **All AI-implemented auto tasks must pass this Gate before updating Linear to Done status.**
> Manually completed tasks can skip this.

**Entry:** AI implementation completion notification, or heartbeat fallback trigger

### Acceptance Flow

**Step 1: Read acceptance criteria from Linear**

```bash
python3 flowchain/projects.py issue view {ISSUE_ID}
```

Extract `## Acceptance Criteria` section, list each check item.
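Parsing the section can be sketched as follows (a hypothetical helper, assuming the `- [ ] item` checklist format required in Phase 1):

```python
import re

def acceptance_criteria(description: str) -> list[tuple[str, bool]]:
    """Return (item, checked) pairs from the '## Acceptance Criteria' section."""
    m = re.search(r"## Acceptance Criteria\n(.*?)(?=\n## |\Z)", description, re.S)
    if not m:
        return []  # missing section: infer criteria and push for confirmation
    return [
        (item.strip(), mark.lower() == "x")
        for mark, item in re.findall(r"- \[([ xX])\] (.+)", m.group(1))
    ]
```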

**Step 2: Code quality check**

```bash
# Python
python3 -m py_compile <new files>
python3 -m pytest tests/ -v

# Swift/iOS
xcodebuild build -scheme <scheme> -destination 'platform=iOS Simulator,name=iPhone 16'
```

**Step 3: Functional verification**
Check each item against `## Acceptance Criteria`, record each result.

**Step 4: Submit acceptance report to Linear**

```bash
python3 flowchain/projects.py issue report {ISSUE_ID} \
  --passed "Criteria A: result description" \
  --failed "Criteria B: failed (reason)" \
  --conclusion "⚠️ Needs fix"
```

**Step 5: Branch handling**

✅ Acceptance passed:

```bash
python3 flowchain/projects.py issue move {ISSUE_ID} Done
```

Push: `✅ {ISSUE_ID} Acceptance passed`

⚠️ Acceptance failed → auto-fix loop (max 2 times):

1. Determine failure type, select fix tool
2. Dispatch Claude Code / Codex to fix, re-run Steps 2–4
3. Still failing on 3rd attempt → stop auto-fix, push for human intervention:
  ```
   🔴 {ISSUE_ID} Acceptance failed 3 consecutive times, human intervention required
   Failed items: [specific list]
  ```

### Acceptance Pass Criteria

1. All `## Acceptance Criteria` items have been checked
2. Code quality check passed
3. Core functionality verification passed (minor issues allowed if noted in report)
4. Acceptance report written to Linear issue comment

**Exit:** Linear status Done (passed), or human intervention (consecutive failures)

---

## Phase 5 · Code Review

**Entry (traditional PR flow only):** Heartbeat detects new PR (branch name contains `{ISSUE_ID}`)

> ⚠️ AI auto tasks skip this phase and go directly to Phase 4.5 Acceptance Gate.

### 5.1 State Sync

Linear API: **In Progress → In Review**

### 5.2 AI Review Auto-trigger

```bash
REVIEW_DIR=$(mktemp -d)
git clone https://github.com/your-org/{repo}.git $REVIEW_DIR
cd $REVIEW_DIR && gh pr checkout {pr_number}
claude --permission-mode bypassPermissions --print \
  'Review this PR. Focus on: logic bugs, security issues, performance problems. Be concise.'
```

Review conclusions are pushed to the user; **do not comment directly on GitHub** — wait for confirmation before deciding whether to post.

### 5.3 PR Status Monitoring

- PR unmerged for > **24h** → push reminder
- CI failure → immediate push: `🔴 {ISSUE_ID} PR #N CI failed`

**Exit:** PR merged

---

## Phase 6 · Acceptance Complete

**Entry A (traditional PR flow):** Heartbeat detects PR merged
**Entry B (AI auto tasks):** Phase 4.5 Acceptance Gate passed

### 6.1 State Sync

**PR merged scenario:**
Linear API: **In Review → Done**
Push: `✅ {ISSUE_ID} [Title] completed, PR #N merged`

**AI auto task scenario:**
Linear status already updated in Phase 4.5, only push here: `✅ {ISSUE_ID} Acceptance complete`

### 6.2 Cancellation Check

When closure is initiated → push the checklist and wait for confirmation before executing:

```
⚠️ {ISSUE_ID} is about to be moved to Cancelled, confirm the following:
- [ ] Any associated open PRs? (need to close or transfer first)
- [ ] Any committed but unreverted code?
- [ ] Any other tasks depending on this issue?
Reply "confirm close" when ready
```

### 6.3 Archive Reminder

If this development produced documentation worth archiving → push: `📝 {ISSUE_ID} complete, write to documentation library?`

**Exit:** Issue status Done / Cancelled

---

## Phase 7 · Retrospective

**Trigger:** Say "sprint report" / "project progress" / "let's review"

```bash
python3 flowchain/projects.py sprint [project name]
```

Push format:

```
📊 Weekly Progress ([Project Name])

✅ Completed: N
· {ISSUE_ID} Title

🔄 In Progress: N
· {ISSUE_ID} Title (In Progress · X days)

📋 Pending: N (Backlog)
· Highest priority: {ISSUE_ID} Title (High)

⚠️ Needs Attention:
· {ISSUE_ID} has been in progress 3+ days with no PR, need to split?
```

---

## Error Handling


| Error                       | Handling                                                       |
| --------------------------- | -------------------------------------------------------------- |
| Linear API failure          | Retry once with SSL bypass; inform user if still failing       |
| Issue not found             | Confirm with `linear issue list --all-states`                  |
| Branch missing `{ISSUE_ID}` | Remind to confirm naming, manually associate after ID provided |
| gh CLI failure              | Push alert, Linear API scan continues                          |
| gh CLI fails 2 consecutive times | Escalate alert: project monitoring degraded (Linear only) |


---

## Heartbeat Integration Notes


| Workflow Mechanism                    | Heartbeat Implementation                                                                  |
| ------------------------------------- | ----------------------------------------------------------------------------------------- |
| Phase 3 concurrency control (Auto ≤3) | Step A: count In Progress Auto issues, auto-move to Todo                                  |
| Phase 4.4 timeout alert (>3 days no PR) | Step A: compare linear list + gh pr, push if no associated PR                           |
| Phase 4.5 Acceptance Gate (fallback)  | Step A: check Linear comments for acceptance report, trigger if absent, store retries in heartbeat-state.json |
| Phase 5 PR Review trigger             | Step B: gh pr open new PR detection                                                       |
| Phase 6 PR merged → Done              | Step B: gh pr merged detection, reverse-lookup branch's {ISSUE_ID}, update Linear        |


Acceptance failure retry count stored in: `status/heartbeat-state.json → validation_retries.{ISSUE_ID}`, max 3. After exceeding limit, stop fallback and wait for human intervention; manually clear the corresponding key after handling.

11. Heartbeat Project Monitoring Section

Append the following to the end of the HEARTBEAT.md base template:

## 1. Project Monitoring (read status/PROJECTS.json, scan repos with heartbeat: true)

**Each heartbeat executes A → B → C in order:**

---

### Step A: Linear Auto Task Check

**1. Query all `In Progress` + `Auto` label issues:**
```bash
linear issue list --state "In Progress" --label "Auto" --json
```

**2. For each Auto issue, check in order:**

**Timeout alert (no associated PR):**
In Progress for > 3 days with no associated PR (branch name contains `{ISSUE_ID}`) → push:
`⏰ {ISSUE_ID} In Progress for N days, no PR seen`

**Acceptance fallback (AI completed but Gate not triggered):**
Check Linear comments for "acceptance report" text:

- Found → skip (Phase 4.5 already completed)
- Not found → read `status/heartbeat-state.json` `validation_retries.{ISSUE_ID}`
  - retries < 3 → trigger Phase 4.5 Acceptance Gate, retries +1, write back to state
  - retries ≥ 3 → skip (already reported for human intervention, waiting)

**3. Concurrency control:** Count all In Progress Auto issues

- Count < 3 and Backlog has `Auto + Urgent/High` issues → take the **1st** by priority and move to Todo, push:
`📅 {ISSUE_ID} auto-moved to Todo ({priority})`

---

### Step B: GitHub PR Scan + State Persistence

```bash
python3 flowchain/projects.py heartbeat
```

This command automatically:

- Loads `heartbeat: true` projects from `status/PROJECTS.json`
- Scans open/merged PRs for each repo, detects CI status
- Moves Linear issues for merged PRs to Done
- Reads/writes `status/heartbeat-state.json` (deduplication, persistence)

Push based on output JSON per these rules:

- `ci_failures` non-empty → **push immediately** (ignore quiet hours)
- `pr_open_stale` / `stale_in_progress` / `validation_retries` non-empty → aggregate push (except quiet hours)
- `pr_merged_to_done` non-empty → aggregate push (except quiet hours)
- All lists empty → **no push** (silent completion)
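The push rules above can be sketched as a pure function over the scan output (illustrative; `quiet_hours` detection is assumed to happen elsewhere):

```python
def push_decision(result: dict, quiet_hours: bool) -> str:
    """Apply the heartbeat push rules to the JSON emitted by the scan."""
    if result.get("ci_failures"):
        return "push-now"  # CI failures ignore quiet hours
    aggregate_keys = ("pr_open_stale", "stale_in_progress",
                      "validation_retries", "pr_merged_to_done")
    if any(result.get(k) for k in aggregate_keys):
        return "hold" if quiet_hours else "push-aggregate"
    return "silent"        # all lists empty: no push
```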

**Push format:**

```
🔔 Project Updates

[BackClaw] PR #12 "feat: plugin hot reload" waiting for review 25h
[HexPaw] CI failed: PR #5 "fix: crash on launch"
{ISSUE_ID} In Progress for 4 days, no PR seen
✅ {ISSUE_ID} PR #8 merged → Linear Done
```

---

### Step B2: Email Check → Copilot PR Notification → Trigger Acceptance Gate

Execute per `workflow/02-email-check.md`.

Core pipeline:

1. Read `status/MAILLIST.json` rules and state
2. Detect emails from `*@github.com` sender + Subject containing `github-copilot[bot]`
3. Extract `{repo}` and PR number from Subject
4. `gh pr view` → extract Linear issue ID from branch name
5. Linear issue → In Review
6. Trigger Phase 4.5 Acceptance Gate
7. Update `MAILLIST.json` `last_urgent_ids` and `last_run`

---

### Step C: NOW.md Overwrite

Required at the end of every heartbeat: use `write` tool (not edit/append) to overwrite `NOW.md`:

```markdown
# NOW.md — Current State Snapshot

_Last updated: YYYY-MM-DD HH:MM (Asia/Shanghai)_

## Current Focus
(One sentence describing what's currently being worked on)

## Recent Events
- HH:MM — Event title (max 5 items, extracted from today's log)

## Pending
- [ ] Incomplete todos
```

12. Email Monitoring Workflow (02-email-check.md)

Create workflow/02-email-check.md:

---
title: "Email Monitoring"
created: YYYY-MM-DD
last_run: ~
last_verified: YYYY-MM-DD
ttl: 30d
status: active
skills: [gmail, telegram]
tags: [email, monitor, heartbeat, github, copilot]
---

# Email Monitoring Playbook

## Trigger Conditions
- **Scheduled trigger**: automatically executed during heartbeat check
- **Manual trigger**: say: check email, email summary, any important emails

## Pre-checks
- [ ] Read `status/MAILLIST.json`, get runtime state and all rule configurations

## Execution Steps

### Step 1 — Load Configuration and State
Read `status/MAILLIST.json`, get:
- `config`: immediate push rules, summary rules, ignore rules
- `last_summary_date`: determine if today's summary has been sent
- `last_urgent_ids`: list of already-pushed urgent email IDs (for deduplication)
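A plausible shape for `status/MAILLIST.json`, consistent with the fields referenced in these steps (the concrete values are placeholders, and any field not named above is an assumption):

```json
{
  "config": {
    "immediate": {
      "sender_whitelist": ["boss@example.com"],
      "subject_keywords": ["urgent", "outage"]
    },
    "summary": { "labels": ["Newsletter", "CI"] },
    "ignore": { "sender_blacklist": ["noreply@example.com"] }
  },
  "last_summary_date": "YYYY-MM-DD",
  "last_urgent_ids": [],
  "last_run": "~"
}
```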

### Step 2 — Fetch Unread Emails
```bash
gog gmail list "is:unread -category:promotions -category:social" --limit 20
```

### Step 3 — Rule Matching (highest to lowest priority)

For each email, apply the rules in order and stop at the first match:

1. **🤖 Copilot PR notification** (highest priority, triggers acceptance flow):
  - Sender is `*@github.com` AND Subject matches:
    - `"[owner/repo] Pull request opened by github-copilot[bot]"`
    - `"[owner/repo] Pull request submitted by github-copilot[bot]"`
  - AND `msg_id` not in `last_urgent_ids` (dedup)
  - **If matched, execute Step 3.1 (Copilot PR acceptance flow)**, skip normal push
2. **🔴 Immediate push**: sender in whitelist OR Subject contains keyword, AND `msg_id` not in `last_urgent_ids`
3. **📋 Daily summary**: Label matches summary rules AND sender not in blacklist
4. **🔇 Ignore**: all other emails
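The first-match-wins chain can be sketched as follows (the mail and config field names here are assumptions about the `MAILLIST.json` schema, not the actual one):

```python
def classify(mail, cfg, last_urgent_ids):
    """Return one of 'copilot', 'urgent', 'summary', 'ignore'; first match wins."""
    subj, sender, msg_id = mail["subject"], mail["from"], mail["id"]
    is_new = msg_id not in last_urgent_ids  # dedup against already-pushed IDs
    # 1. Copilot PR notification (highest priority)
    if sender.endswith("@github.com") and "github-copilot[bot]" in subj and is_new:
        return "copilot"
    # 2. Immediate push: whitelisted sender or keyword hit
    if is_new and (sender in cfg["immediate"]["sender_whitelist"]
                   or any(k in subj for k in cfg["immediate"]["subject_keywords"])):
        return "urgent"
    # 3. Daily summary: matching label, sender not blacklisted
    if (set(mail.get("labels", [])) & set(cfg["summary"]["labels"])
            and sender not in cfg["ignore"]["sender_blacklist"]):
        return "summary"
    # 4. Everything else
    return "ignore"
```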

### Step 3.1 — Copilot PR Acceptance Flow

#### 3.1.1 Extract PR Info from Email

Extract `{owner}/{repo}` and PR number from Subject:

```
Subject example: [your-org/ProjectA] Pull request opened by github-copilot[bot] (#3)
Extract: repo = your-org/ProjectA, pr_number = 3
```

If extraction fails → downgrade to normal immediate push, don't trigger acceptance.
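The Subject parsing can be sketched with a regex like this (illustrative only; the real matching may need to be looser):

```python
import re

def parse_copilot_subject(subject):
    """Extract (repo, pr_number) from a GitHub notification Subject, or None."""
    m = re.match(
        r"\[([^/\]]+/[^\]]+)\] Pull request (?:opened|submitted) "
        r"by github-copilot\[bot\] \(#(\d+)\)",
        subject,
    )
    if m is None:
        return None  # caller downgrades to a normal immediate push
    return m.group(1), int(m.group(2))
```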

#### 3.1.2 Get PR Details and Associated Issue

```bash
gh pr view {pr_number} --repo {owner}/{repo} --json title,body,headRefName
```

Extract the Linear issue ID from `headRefName` (the branch name) using the regex `[A-Z]+-\d+`:

- Found → record `{ISSUE_ID}`, continue
- Not found → push for human confirmation, write `msg_id` to `last_urgent_ids`, terminate auto flow
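A minimal sketch of the branch-name extraction (the example branch names are hypothetical):

```python
import re

def issue_id_from_branch(head_ref):
    """Find a Linear-style issue ID (e.g. PROJ-42) anywhere in the branch name."""
    m = re.search(r"[A-Z]+-\d+", head_ref)
    return m.group(0) if m else None
```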

#### 3.1.3 Update Linear Status

```bash
python3 flowchain/projects.py issue move {ISSUE_ID} "In Review"
```

#### 3.1.4 Trigger Phase 4.5 Acceptance Gate

Execute per `workflow/01-project-management.md Phase 4.5` flow.

Push notification:

```
🤖 Copilot PR #{pr_number} submitted
Repo: {owner}/{repo}
Associated: {ISSUE_ID}
Status: Acceptance in progress…
```

After acceptance completes, push:

```
✅ {ISSUE_ID} Copilot PR #{pr_number} acceptance passed → Done
```

### Step 4 — Push

**Urgent email** (rule 2 matched):

```
🚨 Important Email
From: xxx
Subject: xxx
Time: xxx
```

After pushing, append `msg_id` to `last_urgent_ids`, write back to `status/MAILLIST.json`.

**Daily summary** (push when `last_summary_date != today`):

```
📬 Email Summary (N unread)

🔴 Important
- [Sender] Subject

📋 Worth Reading
- [Sender] Subject
```

After pushing, update `last_summary_date` to today, write back to `status/MAILLIST.json`.

### Step 5 — Update State

Update `last_run` to current time, write back to `status/MAILLIST.json`.
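The state write-backs in Steps 4 and 5 can be sketched together (the `record_state` helper and the 200-item cap are hypothetical; only the field names come from the steps above):

```python
import datetime
import json

def record_state(path, new_urgent_ids=(), summary_sent=False):
    """Append pushed IDs, optionally stamp the summary date, and update last_run."""
    with open(path) as f:
        state = json.load(f)
    state.setdefault("last_urgent_ids", []).extend(new_urgent_ids)
    state["last_urgent_ids"] = state["last_urgent_ids"][-200:]  # keep the dedup list bounded
    if summary_sent:
        state["last_summary_date"] = datetime.date.today().isoformat()
    state["last_run"] = datetime.datetime.now().isoformat(timespec="minutes")
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
```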

## Configuration Maintenance

All rule configurations are maintained in the `config` field of `status/MAILLIST.json`:

- **Add whitelist sender**: append to `config.immediate.sender_whitelist`
- **Add keyword**: append to `config.immediate.subject_keywords`
- **Block a sender**: append to `config.ignore.sender_blacklist`

13. Skills Configuration

Google Workspace (gog)

After OAuth authentication is configured, you can directly operate Gmail, Calendar, Drive, Sheets, and Docs.

Install:

```bash
openclaw skills install gog
```

Common commands:

```bash
# Search unread emails from last 7 days
gog gmail list "is:unread newer_than:7d" --limit 10 --json

# Query this week's calendar events
gog calendar events primary \
  --from YYYY-MM-DD \
  --to YYYY-MM-DD \
  --json

# Read Google Sheets data
gog sheets get <sheet-id> "Sheet1!A1:D20" --json
```

obsidian-cli

```bash
npm install -g obsidian-cli
obsidian-cli set-default /path/to/your/vault
```

Common commands:

```bash
# View current default vault path
obsidian-cli print-default --path-only

# Search note titles
obsidian-cli search "keyword"

# Search note content
obsidian-cli search-content "keyword"

# Move note (also updates all WikiLinks)
obsidian-cli move "old-path/note" "new-path/note"
```

14. macOS Security Configuration

Port Check

```bash
sudo /usr/sbin/lsof -iTCP -sTCP:LISTEN -n -P
sudo /usr/sbin/lsof -iUDP -n -P
```

Firewall Configuration

```bash
# Check firewall status
/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate

# Check stealth mode
/usr/libexec/ApplicationFirewall/socketfilterfw --getstealthmode

# Enable stealth mode
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setstealthmode on

# View firewall app whitelist
/usr/libexec/ApplicationFirewall/socketfilterfw --listapps
```

OpenClaw Security Audit

```bash
openclaw security audit
openclaw security audit --deep
```

Target state: 0 critical. Two common WARN items:

- `gateway.trusted_proxies_missing`: safe to ignore when accessing locally without a reverse proxy
- `gateway.nodes.deny_commands_ineffective`: `denyCommands` only does exact command-name matching; check that configured entries use the correct command IDs

SSH Management

If you don’t need external SSH access to this Mac:

# Stop SSH service
sudo systemsetup -setremotelogin off

# Confirm status
sudo systemsetup -getremotelogin

Quick Verification Checklist

After configuration is complete, verify each item:

Basic Configuration
- [ ] `SOUL.md` / `IDENTITY.md` / `USER.md` / `AGENTS.md` created
- [ ] `memory/` directory structure established, `memlog.sh` executable
- [ ] `MEMORY.md` created (≤40 lines)
- [ ] `HEARTBEAT.md` configured (base template + project monitoring section)
- [ ] Cron tasks configured (23:45 reflection, Sunday distillation, Monday security check)

Channels and Models
- [ ] Telegram Bot created, pairing complete
- [ ] Streaming output configured (`streaming: "partial"`)
- [ ] Primary model + fallback chain configured
- [ ] Model aliases set

Project Management
- [ ] `status/PROJECTS.json` filled in (at least one project, `heartbeat: true`)
- [ ] `status/heartbeat-state.json` initialized (empty JSON object: `{}`)
- [ ] `status/MAILLIST.json` configured (`config` rules filled in)
- [ ] `flowchain/projects.py` executable (`python3 flowchain/projects.py heartbeat` produces output)
- [ ] `credentials/linear.json` configured (API key, Team ID, State IDs)
- [ ] `WORKFLOW.md` created
- [ ] `workflow/00-create-workflow.md` created
- [ ] `workflow/01-project-management.md` created, placeholders replaced
- [ ] `workflow/02-email-check.md` created

Verification
- [ ] Send a Telegram message; OpenClaw responds normally
- [ ] Manually trigger a heartbeat; it outputs HEARTBEAT_OK or a project update push
- [ ] `python3 flowchain/projects.py sprint` produces valid JSON output
- [ ] `openclaw security audit` reports 0 critical
