Technical Overview
The Tribunal Consensus Engine
The Tribunal is a high-performance orchestration framework for autonomous decision-making through multi-agent consensus. It decouples data preparation, specialist inference, observation, consensus, and verification so a single model never becomes the only source of truth.
[Diagram: read-only inference stream feeding a sanitized consensus trace]

Architecture Philosophy
The Tribunal is built on three core principles:
- Multi-Agent Orchestration — Specialized agents with decoupled roles reach consensus through structured voting
- Hardware-Accelerated Inference — Optimized for enterprise-grade GPUs (AMD ROCm, NVIDIA CUDA)
- Self-Improving Loop — Every verified decision becomes training data for the next generation
1. Multi-Agent Orchestration Layer
At the core of the system is a hierarchical supervisor-agent architecture. Rather than relying on a single monolithic model, the framework employs specialized roles:
- The Prophet: Analyzes multi-modal data streams (sentiment, funding rates, derivatives) to generate high-level hypotheses
- The Knights (Specialized Analysts): Individual agents that process specific subsets of the market state. They provide independent “votes” based on specialized training
- The King (Consensus Leader): Aggregates Knight signals and Prophet hypotheses. Applies final validation to ensure alignment with global risk parameters
- The Scribe: Verifies patterns against historical data. The Prophet proposes; Python verifies. Canon grows only when code agrees
In the current Phase 2 design, the Prophet is a silent observer: it records observations and correlation data but does not advise the King or cast a vote. This keeps the live decision path clean while building the corpus needed for later fine-tuning of the King.
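The Scribe's gate (“Canon grows only when code agrees”) might look like the following sketch. The function name `scribe_verify`, the sample floor, and the win-rate threshold are all hypothetical; the real verification logic is not specified here.

```python
def scribe_verify(pattern_hits: list[bool],
                  min_samples: int = 20,
                  min_win_rate: float = 0.55) -> bool:
    """Admit a Prophet-proposed pattern into the Canon only when the
    historical replay agrees: enough samples, and a win rate above
    the threshold. Both cutoffs are illustrative assumptions."""
    if len(pattern_hits) < min_samples:
        # Too little history to verify; the Canon does not grow
        return False
    return sum(pattern_hits) / len(pattern_hits) >= min_win_rate


# 15 wins out of 25 historical occurrences -> 0.60 win rate, admitted
print(scribe_verify([True] * 15 + [False] * 10))  # True
```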
The Consensus Flow
State Packet -> Knights (independent votes) -> King (consensus gate) -> Action / No Action
                                            \-> Prophet (silent observation)

Each role is optimized for its specific function. Knights are fast, single-shot responders. Kings deliberate with full context. The system separates inference modes by role: speed for signals, depth for decisions.
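The consensus gate can be sketched as below. `KnightVote`, `King.quorum`, and the confidence-weighted tally are illustrative assumptions, not the framework's actual API; the point is that Knights vote independently and the King acts only on quorum.

```python
from dataclasses import dataclass
from enum import Enum


class Signal(Enum):
    LONG = 1
    FLAT = 0
    SHORT = -1


@dataclass
class KnightVote:
    knight_id: str
    signal: Signal
    confidence: float  # 0.0-1.0, the Knight's single-shot output


@dataclass
class King:
    quorum: float = 0.66  # fraction of weighted agreement required to act

    def decide(self, votes: list[KnightVote]) -> Signal:
        if not votes:
            return Signal.FLAT
        # Tally confidence-weighted support per signal
        tally: dict[Signal, float] = {}
        for v in votes:
            tally[v.signal] = tally.get(v.signal, 0.0) + v.confidence
        best = max(tally, key=tally.get)
        # Consensus gate: act only when the leading signal reaches quorum
        if tally[best] / sum(tally.values()) >= self.quorum:
            return best
        return Signal.FLAT  # No Action when consensus fails


votes = [
    KnightVote("funding", Signal.LONG, 0.9),
    KnightVote("sentiment", Signal.LONG, 0.8),
    KnightVote("derivatives", Signal.SHORT, 0.4),
]
print(King().decide(votes).name)  # LONG (1.7 / 2.1 ≈ 0.81 ≥ 0.66)
```

A split vote falls below quorum and resolves to No Action, which is the conservative default the flow above implies.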
2. Inference Optimization
The Tribunal is built around practical inference constraints:
- Local Extraction: Optimized for workstation GPUs and repeatable GGUF execution
- Cloud Fine-tuning: Burst training on high-VRAM accelerators with artifact recovery and cost watchdogs
- Dual-Track Models: Small dense extractors for volume, larger deliberative models for supervisory passes
This split balances inference latency against decision quality: high-volume extraction stays fast and local, while supervisory passes get the depth they need.
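A minimal routing sketch for the dual-track layout follows. The model filenames, latency budgets, and the `route` helper are hypothetical placeholders, not the project's actual configuration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelTrack:
    model_file: str      # GGUF artifact for repeatable local execution
    role: str            # which tier of the hierarchy uses this track
    max_latency_ms: int  # illustrative latency budget


TRACKS = {
    # Small dense extractor: handles signal volume (Knight-style work)
    "extract": ModelTrack("small-dense-extractor.gguf", "knight", 50),
    # Larger deliberative model: supervisory passes (King-style work)
    "deliberate": ModelTrack("large-deliberative.gguf", "king", 2000),
}


def route(task: str) -> ModelTrack:
    # Supervisory passes get the deliberative track; everything else
    # goes to the fast extraction track
    return TRACKS["deliberate" if task == "supervise" else "extract"]


print(route("supervise").model_file)  # large-deliberative.gguf
```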
3. The Self-Improving Feedback Loop (The Canon)
The system implements a “Dejavu” dataset architecture — every decision is recorded, indexed, and fed back into the training pipeline:
- Signal Extraction: Replays historical state with the current model
- Outcome Labeling: Market provides the ground truth (price movement)
- Fine-tuning: Successful patterns are reinforced; failures are relabeled
- Iteration: Next cycle runs with improved weights
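The outcome-labeling step can be sketched as below, assuming each recorded decision carries its entry and exit prices. The `RecordedDecision` fields and the noise threshold are illustrative, not the pipeline's real schema.

```python
from dataclasses import dataclass


@dataclass
class RecordedDecision:
    state_id: str
    predicted: str       # "LONG" | "SHORT" | "FLAT"
    entry_price: float
    exit_price: float


def label(d: RecordedDecision, threshold: float = 0.001) -> str:
    """Market ground truth: the sign of the realized move, with a small
    noise band (threshold is an assumed 0.1%) treated as FLAT."""
    move = (d.exit_price - d.entry_price) / d.entry_price
    truth = "LONG" if move > threshold else "SHORT" if move < -threshold else "FLAT"
    # Correct predictions reinforce; incorrect ones are relabeled to the truth
    return "reinforce" if d.predicted == truth else f"relabel:{truth}"


print(label(RecordedDecision("s1", "LONG", 100.0, 101.5)))  # reinforce
```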
The Canon is the institutional memory — verified strategic knowledge that survives model generations. When the model is replaced, the Canon remains.
4. Infrastructure & Resilience
Designed for RHEL-based and containerized environments:
- Watchdog Systems: Automated monitors detect process hangs or stalls and trigger recovery
- Consensus-Driven Execution: Actions only execute when quorum is reached
- Portfolio Isolation: Each agent maintains independent risk parameters
- No Lookahead: Backtesting uses only data available at decision time
The system is built for 24/7 autonomous operation with minimal human intervention.