We’ve released a small, fully open-source demo that adds runtime stability monitoring and observability to AI agents. It’s called RSC Open Demo: a minimal framework that lets an agent see its own coherence and drift in real time.
The goal was simple:
Give AI systems a feedback loop for their own stability, without touching model weights or training code.
It’s not another “agent framework”. It’s a runtime layer — light, auditable, and designed for local or production use.
How it works
The agent (or any process) emits basic metrics → RSC logs them as JSONL (append-only, rolling checksums).
Each cycle computes a simple stability state: lock / mini-lock / out-of-lock. (A minimal sketch of these two steps follows this list.)
A built-in Prometheus exporter exposes KPIs like rsc_lock_rate, rsc_mean_Gamma, and rsc_out_of_lock_rate. (An export sketch follows below.)
A minimal FastAPI Web UI visualizes Δφ, Γ, and P in real time.
Comes with a DemoCore placeholder (no proprietary math).
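To make the first two steps concrete, here is a minimal Python sketch of the logging and classification pattern. Everything in it is illustrative: the field names, thresholds, log path, and classify_state are hypothetical stand-ins, since the real state logic sits behind the DemoCore placeholder.

```python
import hashlib
import json
import os
import time

LOG_PATH = "logs/rsc_demo.jsonl"  # hypothetical path; the demo writes under ./logs

def classify_state(delta_phi: float, gamma: float) -> str:
    """Toy thresholds, purely illustrative (the real logic lives in the core)."""
    if abs(delta_phi) < 0.05 and gamma > 0.9:
        return "lock"
    if abs(delta_phi) < 0.15:
        return "mini-lock"
    return "out-of-lock"

def append_record(record: dict, prev_checksum: str) -> str:
    """Append one record as a JSONL line, chaining a rolling checksum
    so that any later edit to earlier lines is detectable."""
    payload = json.dumps(record, sort_keys=True)
    checksum = hashlib.sha256((prev_checksum + payload).encode()).hexdigest()
    os.makedirs(os.path.dirname(LOG_PATH), exist_ok=True)
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({**record, "checksum": checksum}) + "\n")
    return checksum

prev = ""  # rolling checksum, seeded empty
sample = {"ts": time.time(), "delta_phi": 0.03, "gamma": 0.95, "p": 0.81}
sample["state"] = classify_state(sample["delta_phi"], sample["gamma"])
prev = append_record(sample, prev)
```

Verifying a log is the same walk in reverse: recompute each line’s checksum from the previous one and compare.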
Stack: Python 3.11, FastAPI, Prometheus, pandas, matplotlib.
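On the export side, the standard prometheus_client library is enough. The sketch below only shows the wiring; the gauge values and help strings are placeholders for whatever the runtime loop actually computes, and only the metric names come from the list above.

```python
import time
from prometheus_client import Gauge, start_http_server

# Gauges named after the demo's KPIs; help strings are illustrative.
lock_rate = Gauge("rsc_lock_rate", "Share of recent cycles in lock")
mean_gamma = Gauge("rsc_mean_Gamma", "Mean Gamma over the reporting window")
out_of_lock = Gauge("rsc_out_of_lock_rate", "Share of recent cycles out of lock")

start_http_server(9108)  # same port as the demo's metrics endpoint

while True:
    # Placeholder values; the demo derives these from its JSONL log.
    lock_rate.set(0.97)
    mean_gamma.set(0.91)
    out_of_lock.set(0.01)
    time.sleep(5)
```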
Quick start

```bash
git clone https://github.com/Freeky7819/rsc-open-demo.git
cd rsc-open-demo
docker compose up -d
# Web UI:  http://localhost:8008/
# Metrics: http://localhost:9108/metrics
```
Or run directly in Python:
```bash
cd app
python run_demo.py
python rsc_kpi_report.py --source ./logs --outdir ./reports
```
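Because the logs are plain JSONL, you can also poke at them directly with pandas. The file name and the state column below are assumptions, so check what run_demo.py actually writes into ./logs:

```python
import pandas as pd

# Illustrative path and column; adjust to the actual log file under ./logs.
df = pd.read_json("logs/rsc_demo.jsonl", lines=True)
print(df["state"].value_counts(normalize=True))  # rough lock / mini-lock / out-of-lock shares
```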
Why this exists
Most AI systems can generate text, code, or decisions — but they don’t know when they’re drifting, unstable, or incoherent. This project explores a small, transparent way to monitor that runtime behavior, using ordinary telemetry tools. No LLM internals, no special hardware — just open engineering.
Repo & License
GitHub: https://github.com/Freeky7819/rsc-open-demo
License: Apache-2.0
What it is / What it isn’t
Is: a minimal, transparent runtime layer for observing agent stability.
Isn’t: a model, optimizer, or closed “AI brain”.
Made by
Damjan
Thanks for reading — happy to answer any questions about integration, metrics, or the reasoning behind the “lock / mini-lock / out-of-lock” model.
The goal wasn’t to build a framework, but to see if runtime self-monitoring for AI agents could be practical without model access.
Feedback from people working on observability, agent orchestration, or trust layers would be especially valuable.
– Damjan