MONITORING AND RECOVERY
Keeping the venue trustworthy is not only a question of the code itself; it is also about whether the venue can detect degraded conditions early, communicate them honestly, and recover in an orderly way.
| Operational signal | Why it matters |
|---|---|
| Feed freshness and market posture | Prevent stale or degraded markets from presenting as normal |
| Root publication and liveness | Detect runtime or settlement lag before it reaches withdrawals |
| Gateway and auth health | Confirm that sensitive routes still enforce the correct permission boundary |
| Treasury and reconciliation state | Surface fee, reserve, or accounting drift before it becomes opaque |
# What the venue watches continuously
Dexter tracks feed freshness, market posture, service health, funding history, publication liveness, and treasury reconciliation state so the venue can say when it is healthy and when it is not.
This is part of the trust model, not just part of observability.
A degraded venue that still presents itself as normal is already failing.
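As a rough illustration, the sketch below aggregates a few of those signals into a single posture. Every name and threshold here (`HealthSnapshot`, `evaluatePosture`, the five-second feed-age limit) is an assumption for the example, not Dexter's actual internals.

```typescript
// Hypothetical sketch: collapsing venue health signals into a single posture.
// Names and thresholds are illustrative assumptions, not Dexter's internals.

type Posture = "normal" | "degraded" | "halted";

interface HealthSnapshot {
  feedAgeMs: number;            // time since the last accepted price update
  rootPublicationAgeMs: number; // time since the last published settlement root
  authHealthy: boolean;         // gateway / auth boundary checks passing
  treasuryDriftBps: number;     // reconciliation drift, in basis points
}

// Assumed limits for illustration only.
const MAX_FEED_AGE_MS = 5_000;
const MAX_ROOT_AGE_MS = 60_000;
const MAX_DRIFT_BPS = 5;

function evaluatePosture(s: HealthSnapshot): Posture {
  if (!s.authHealthy) return "halted";
  const stale =
    s.feedAgeMs > MAX_FEED_AGE_MS || s.rootPublicationAgeMs > MAX_ROOT_AGE_MS;
  const drifting = Math.abs(s.treasuryDriftBps) > MAX_DRIFT_BPS;
  return stale || drifting ? "degraded" : "normal";
}

// Example: a stale feed alone is enough to tighten posture.
console.log(evaluatePosture({
  feedAgeMs: 12_000,
  rootPublicationAgeMs: 8_000,
  authHealthy: true,
  treasuryDriftBps: 1,
})); // -> "degraded"
```

The point of the sketch is that the venue never reports "normal" by default; it has to earn that label from fresh signals.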
Runtime and product changes also move through regression, smoke, and security checks because operational drift can create risk even when contracts are untouched.
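A minimal sketch of such a promotion gate, assuming hypothetical check names; the only claim is that a change ships when every required check passes, not that this mirrors Dexter's pipeline.

```typescript
// Illustrative release gate: a runtime or product change is promoted only when
// regression, smoke, and security checks all pass. Names are hypothetical.

type CheckResult = { name: string; passed: boolean };

const REQUIRED_CHECKS = ["regression", "smoke", "security"];

function canPromote(results: CheckResult[]): boolean {
  return REQUIRED_CHECKS.every((name) =>
    results.some((r) => r.name === name && r.passed),
  );
}

// A failing security check blocks the change even though contracts are untouched.
console.log(canPromote([
  { name: "regression", passed: true },
  { name: "smoke", passed: true },
  { name: "security", passed: false },
])); // -> false
```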
The protocol is not secured by code alone.
It is also secured by the discipline with which code is changed, published, and monitored in production.
health signal turns unhealthy
-> market, runtime, or service posture tightens
-> degraded state becomes visible across venue surfaces
-> recovery waits for fresh data, valid permissions, and healthy publication
-> only then do protected paths return to normal
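One way to read that flow is as a small state machine. The sketch below is illustrative only; state names and transition conditions are assumptions, not the venue's actual implementation.

```typescript
// Illustrative state machine for the degradation-and-recovery flow above.
// State names and conditions are assumptions, not Dexter's implementation.

type VenueState = "normal" | "tightened" | "degraded-visible" | "recovering";

interface Conditions {
  signalHealthy: boolean;      // the originally unhealthy signal has recovered
  freshData: boolean;          // prices and feeds are current again
  validPermissions: boolean;   // auth and permission checks pass
  publicationHealthy: boolean; // root publication and liveness look normal
}

function nextState(current: VenueState, c: Conditions): VenueState {
  switch (current) {
    case "normal":
      // An unhealthy signal tightens market, runtime, or service posture.
      return c.signalHealthy ? "normal" : "tightened";
    case "tightened":
      // The degraded state is surfaced across venue surfaces.
      return "degraded-visible";
    case "degraded-visible":
      // Recovery does not start until the triggering signal is healthy again.
      return c.signalHealthy ? "recovering" : "degraded-visible";
    case "recovering":
      // Protected paths return to normal only when every gate is green.
      return c.freshData && c.validPermissions && c.publicationHealthy
        ? "normal"
        : "recovering";
    default:
      return current;
  }
}

// Example: a venue mid-recovery with stale data stays in "recovering".
console.log(nextState("recovering", {
  signalHealthy: true,
  freshData: false,
  validPermissions: true,
  publicationHealthy: true,
})); // -> "recovering"
```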
# How recovery is supposed to work
When the venue is degraded, the goal is controlled recovery rather than silent restart.
Fresh prices, clean market posture, valid permissions, and verified service health should all be visible again before protected surfaces are treated as fully normal.
That makes recovery slower than pretending the issue disappeared.
It also makes recovery safer and easier to review after the fact.
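A hedged sketch of what that gate could look like: instead of a bare yes or no, the check returns the named reasons protected surfaces stay restricted, which is also what makes the episode reviewable afterwards. Field and function names are hypothetical.

```typescript
// Illustrative recovery gate: it returns the named reasons protected surfaces
// stay restricted rather than a bare boolean. Field names are hypothetical.

interface RecoveryChecks {
  pricesFresh: boolean;
  marketPostureClean: boolean;
  permissionsValid: boolean;
  servicesHealthy: boolean;
}

function recoveryBlockers(c: RecoveryChecks): string[] {
  const blockers: string[] = [];
  if (!c.pricesFresh) blockers.push("prices are stale");
  if (!c.marketPostureClean) blockers.push("market posture is not clean");
  if (!c.permissionsValid) blockers.push("permissions are invalid or expired");
  if (!c.servicesHealthy) blockers.push("service health is not verified");
  return blockers;
}

const blockers = recoveryBlockers({
  pricesFresh: true,
  marketPostureClean: true,
  permissionsValid: false,
  servicesHealthy: true,
});

// Only an empty list reopens protected paths; otherwise the list itself is the
// record of why recovery waited.
console.log(blockers.length === 0 ? "reopen protected paths" : blockers);
```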