Why this metadata matters
Once multiple coding agents are working in one repository, "the code changed" stops being enough information. Teams also need to know which agent spent the effort, how expensive a cycle was, which model produced the result, and where one agent's ownership boundary ended.
That is why token usage and commit trailers belong inside the workflow layer. They are not side notes. They help the repo stay understandable to both humans and agents.
The coordination problem behind the metadata
In a single uninterrupted chat, usage and trailer fields can look optional. In a real multi-agent repo, they answer operational questions: was a cycle unexpectedly expensive, did the agent really close the work it claimed to close, and which model and reasoning mode produced the landed change?
That is why we treat them as workflow surfaces, not just logging output.
The tools we use
The main repo-native tools in this part of the workflow are scripts/agent_work_cycle.py, scripts/codex_token_usage_summary.py, scripts/agent_safe_commit.py, and scripts/codex_thread_metadata.py.

agent_work_cycle.py records the begin and end boundaries of a user-command cycle and can surface a usage summary like "+53K this cycle est." during closeout. The token summary helper turns raw session evidence into a reusable usage surface.

codex_thread_metadata.py exposes model and reasoning metadata, and agent_safe_commit.py uses it when it creates a guarded commit with trailers such as Agent-Id, Model, Reasoning-Effort, and Token-Spent when usage evidence is available.
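The "+53K this cycle est." line is just a rounded token delta rendered for humans. As a rough sketch of that rendering step, assuming a simple round-to-thousands rule (the real scripts/codex_token_usage_summary.py may compute and format this differently):

```python
def format_cycle_usage(tokens_spent: int) -> str:
    """Render a per-cycle token delta as a closeout summary line.

    Hypothetical helper, not the actual Mycel implementation:
    rounds to the nearest thousand and appends the "est." marker.
    """
    thousands = round(tokens_spent / 1000)
    return f"+{thousands}K this cycle est."

print(format_cycle_usage(53_200))  # +53K this cycle est.
```

The point of the rounding is readability: reviewers scanning a closeout line need the order of magnitude, not an exact count.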
One real example from a landed Mycel commit looks like this:
Agent-Id: agt_78833582
Model: gpt-5.4
Reasoning-Effort: medium
Token-Spent: 37K
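Because the trailer block is plain "Key: value" lines, a later agent can recover it with a few lines of parsing. This sketch assumes the simple colon-separated format shown above; real tooling might lean on git's own trailer machinery (git interpret-trailers) instead:

```python
def parse_trailers(block: str) -> dict[str, str]:
    """Parse 'Key: value' trailer lines into a dict.

    Illustrative only: skips lines that do not match the
    'Key: value' shape rather than raising.
    """
    trailers = {}
    for line in block.strip().splitlines():
        key, _, value = line.partition(": ")
        if key and value:
            trailers[key.strip()] = value.strip()
    return trailers

example = (
    "Agent-Id: agt_78833582\n"
    "Model: gpt-5.4\n"
    "Reasoning-Effort: medium\n"
    "Token-Spent: 37K"
)
print(parse_trailers(example)["Agent-Id"])  # agt_78833582
```

This is what makes the trailer block machine-usable as well as human-readable: the same four lines serve both audiences without any extra storage.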
Why token usage belongs in the workflow
Token usage is not only a cost metric. It is coordination feedback. If one cycle burns far more tokens than expected, that can signal an unclear boundary, repeated failed attempts, or a context load that should have been split earlier.
That information helps human reviewers understand how work happened. It also helps later agents inherit better expectations about slice size and handoff discipline.
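One way to turn that feedback into an automatic signal is a simple budget check at cycle closeout. The threshold and multiplier below are assumptions for illustration, not values from the Mycel scripts:

```python
EXPECTED_CYCLE_TOKENS = 40_000  # assumed per-cycle budget; tune per repo


def flag_expensive_cycle(tokens_spent: int, factor: float = 2.0) -> bool:
    """Flag a cycle whose spend exceeds the expected budget by `factor`.

    Hypothetical check: a True result suggests an unclear boundary,
    repeated failed attempts, or a context load that should have
    been split earlier.
    """
    return tokens_spent > EXPECTED_CYCLE_TOKENS * factor

print(flag_expensive_cycle(53_000))   # False
print(flag_expensive_cycle(120_000))  # True
```

A flagged cycle does not prove something went wrong; it tells a reviewer where to look first.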
Why commit trailers matter
Commit trailers do similar work for landed history. Without them, a diff may show what changed while hiding who owned the slice, which model produced it, and how much token spend the cycle likely consumed.
With trailers, the commit boundary becomes more than a patch. It becomes a compact continuation record that later humans and agents can audit without guessing.
Human-readable and agent-usable at the same time
This is the same principle we use for checklists and handoffs. The metadata should stay useful to both audiences: humans can read timestamp lines and commit trailers directly, and agents can parse the same fields to preserve identity and continuation context.
If the data is only machine-friendly, humans stop trusting it because they cannot inspect it easily. If it is only prose for humans, agents cannot reuse it safely in automation.
The tradeoff
Adding token summaries and commit trailers creates more explicit process. It makes the workflow more opinionated, and it preserves metadata that lightweight demos would often skip.
But the upside is clarity: cost becomes visible, ownership becomes durable, model context survives beyond one chat, and landed history becomes easier to audit.
Takeaway
Token usage and commit trailers are not side channels. They are part of the coordination layer: work-cycle timestamps make usage visible, helper scripts summarize token spend, and safe commits preserve agent and model identity in landed history.
Important workflow context should not disappear just because a chat ends. That is the broader Mycel pattern these tools try to enforce.