
5.3. Keeper Network (Automation)

The Keeper Network is the decentralized automation backbone of the Fhenix-FairMarket protocol. It bridges on-chain state transitions with off-chain cryptographic processing (FHEOS) and economic verification (EigenLayer AVS). By converting auction finalization into a public, incentivized function and deploying a resilient, batch-optimized dispatcher, the network guarantees liveness without centralization, deterministic gas consumption under load, and zero manual intervention for settlement routing.

This component operates as a stateless, event-driven microservice architecture, designed for horizontal scaling, fault tolerance, and strict cryptographic alignment with the smart contract layer.

Core Design Principles

| Principle | Technical Implementation |
| --- | --- |
| Decentralized Liveness | triggerFinalize() is public with a 0.2% bounty. Any external actor or bot can execute it, preventing protocol freeze if internal Keepers fail. |
| Batch-Optimized Dispatch | Capped at 10 auctions/block. Prevents API rate limits, manages FHEOS throughput, and eliminates out-of-gas risk during settlement spikes. |
| Resilient Queue Management | Redis-backed local queues with exponential backoff retry logic and a hard 120s timeout for FHEOS processing. |
| Race Condition Neutralization | Distributed locks (Redis SET NX EX), nonce tracking, and blockhash salting ensure only one Keeper successfully triggers closure per auction. |
| Economic Verification Gate | avsSubmitter aggregates operator signatures and validates the threshold (e.g., 3/5) before broadcasting to the chain. Fraud proofs are checked locally pre-submission. |
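The 0.2% bounty in the liveness row is most safely computed in basis points with integer math. The following sketch is illustrative only (finalizationBounty, BOUNTY_BPS, and the wei-denominated input are assumed names, not taken from the contract):

```typescript
// Illustrative sketch: the 0.2% finalization bounty expressed in basis points.
// Names and the wei-denominated input are assumptions for illustration.
const BOUNTY_BPS = 20n; // 0.2% = 20 basis points
const BPS_DENOMINATOR = 10_000n;

function finalizationBounty(settlementValueWei: bigint): bigint {
  // bigint integer math avoids floating-point rounding on wei amounts
  return (settlementValueWei * BOUNTY_BPS) / BPS_DENOMINATOR;
}
```

For example, a 1 ETH settlement (10^18 wei) would pay a 0.002 ETH bounty under this formula.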

Technical Implementation

1. Auction Monitor (auctionMonitor.ts)

Listens for chain events and schedules closure triggers with fallback polling and retry mechanisms.

// packages/keeper/src/services/auctionMonitor.ts
import { ethers } from 'ethers';
import { Redis } from 'ioredis';

export class AuctionMonitor {
  private provider: ethers.WebSocketProvider;
  private auctionContract: ethers.Contract;
  private redis: Redis;

  constructor(auctionAddress: string, auctionAbi: ethers.InterfaceAbi) {
    // AuctionCreated is a contract event, so we need a Contract instance
    // bound to a WebSocket provider for real-time subscriptions.
    this.provider = new ethers.WebSocketProvider(process.env.FHENIX_RPC!);
    this.auctionContract = new ethers.Contract(auctionAddress, auctionAbi, this.provider);
    this.redis = new Redis(process.env.REDIS_URL!);
  }

  async start() {
    // 1. Real-time WebSocket listener
    this.auctionContract.on('AuctionCreated', async (auctionId, endTime) => {
      await this.redis.set(`auction:${auctionId}:endTime`, endTime.toString());
      await this.redis.set(`auction:${auctionId}:status`, 'ACTIVE');
    });

    // 2. Fallback polling (every 30s) for missed events or node desync
    setInterval(async () => {
      const pendingKeys = await this.redis.keys('auction:*:endTime');
      for (const key of pendingKeys) {
        const auctionId = key.split(':')[1];
        const endTime = await this.redis.get(key);
        const status = await this.redis.get(`auction:${auctionId}:status`);
        // Queue 60s ahead of endTime; the status guard prevents re-queueing every poll
        if (status === 'ACTIVE' && Date.now() / 1000 >= Number(endTime) - 60) {
          await this.triggerClosure(auctionId);
        }
      }
    }, 30_000);
  }

  private async triggerClosure(auctionId: string) {
    // Mark as queued, then hand off to the Dispatcher for retry/backoff execution
    await this.redis.set(`auction:${auctionId}:status`, 'QUEUED');
    await this.redis.lpush('closure_queue', auctionId);
  }
}

2. CoFHE Dispatcher (cofheDispatcher.ts)

Routes ciphertext hashes to FHEOS servers in optimized batches with strict timeout and error handling.

// packages/keeper/src/services/cofheDispatcher.ts
import Redis from 'ioredis';
import axios from 'axios';
import { AVSSubmitter } from './avsSubmitter';

const MAX_BATCH_SIZE = 10;
const FHEOS_TIMEOUT = 120_000; // 120 seconds

export class CoFHEDispatcher {
  constructor(
    private redis: Redis,
    private avsSubmitter: AVSSubmitter,
  ) {}

  async processQueue() {
    while (true) {
      // Peek at up to MAX_BATCH_SIZE auction IDs without removing them yet
      const batch = await this.redis.lrange('closure_queue', 0, MAX_BATCH_SIZE - 1);
      if (batch.length === 0) break;

      try {
        const response = await axios.post(
          process.env.FHEOS_ENDPOINT!,
          { auctionIds: batch },
          {
            headers: { Authorization: `Bearer ${process.env.FHEOS_API_KEY}` },
            timeout: FHEOS_TIMEOUT,
          },
        );
        // Pass result to AVS Submitter, then drop the processed batch from the queue
        await this.avsSubmitter.handleResult(response.data);
        await this.redis.ltrim('closure_queue', batch.length, -1);
      } catch (err) {
        console.error('FHEOS dispatch failed, retrying with exponential backoff...', err);
        await this.retryWithBackoff(batch);
      }
    }
  }

  private async retryWithBackoff(batch: string[]) {
    // Exponential backoff retry of the failed batch; implementation elided
  }
}
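The dispatcher above calls retryWithBackoff without defining it. A minimal sketch of the schedule it implies might look like the following; the base delay (1s), cap (60s), and attempt limit are assumed values, not protocol constants:

```typescript
// Sketch of an exponential backoff schedule; all constants are illustrative.
const BASE_DELAY_MS = 1_000;
const MAX_DELAY_MS = 60_000;

function backoffDelayMs(attempt: number): number {
  // attempt 0 → 1s, 1 → 2s, 2 → 4s, ... capped at 60s
  return Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
}

async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Wait before the next attempt, doubling the delay each time
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
  throw lastErr;
}
```

Capping the delay keeps a persistently failing batch from stalling the queue indefinitely while still respecting the hard 120s FHEOS timeout per request.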

3. AVS Submitter (avsSubmitter.ts)

Collects operator signatures, verifies cryptographic threshold, and submits the resolution to the contract.

// packages/keeper/src/services/avsSubmitter.ts
import { ethers } from 'ethers';

const CONFIG = { AVS_THRESHOLD: 3 }; // e.g., 3-of-5 operator consensus

export class AVSSubmitter {
  constructor(private contract: ethers.Contract) {}

  async aggregateAndSubmit(auctionId: string, result: any) {
    // 1. Request partial signatures from registered EigenLayer AVS operators
    const signatures = await this.requestOperatorSignatures(auctionId, result);

    // 2. Verify threshold consensus (e.g., 3/5)
    if (signatures.length < CONFIG.AVS_THRESHOLD) {
      throw new Error('Threshold consensus not met');
    }

    // 3. Aggregate & locally validate fraud proof
    const avsProof = this.aggregateSignatures(signatures);
    this.validateFraudProofLocally(avsProof, result);

    // 4. Submit to on-chain contract
    await this.contract.submitResolution(auctionId, result.winnerCiphertext, avsProof);
  }
}
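The threshold gate in step 2 reduces to a pure check that can be unit-tested in isolation. This sketch assumes signatures arrive as (operator, signature) pairs, an illustrative shape not taken from the AVS spec, and deduplicates by operator so one operator cannot satisfy the threshold alone:

```typescript
// Illustrative threshold check; the OperatorSignature shape is an assumption.
interface OperatorSignature {
  operator: string; // operator address
  signature: string; // partial signature over the result
}

function meetsThreshold(
  signatures: OperatorSignature[],
  threshold: number,
): boolean {
  // Count distinct operators, not raw signatures, to block double-counting
  const uniqueOperators = new Set(signatures.map((s) => s.operator));
  return uniqueOperators.size >= threshold;
}
```

Counting distinct operators rather than raw array length closes a subtle hole: a single operator replaying three signatures must not pass a 3-of-5 gate.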

Race Condition & Reliability Mechanisms

Multiple Keepers monitoring the same endpoint can lead to duplicate triggerFinalize() calls, wasting gas and causing transaction reverts. The protocol enforces a multi-layered prevention strategy:

  1. Redis Distributed Locks: Atomic SET key value NX EX operations with a short TTL (30s) ensure only the first Keeper processes a specific auctionId.
  2. Nonce & Blockhash Salting: Each transaction includes a unique nonce and the previous blockhash, so replayed or accidentally resubmitted transactions are rejected as invalid.
  3. Dynamic Priority Fees: Keepers adjust maxPriorityFeePerGas based on network congestion, increasing the probability of first inclusion without overpaying during calm periods.
  4. Idempotent Contract Guards: The smart contract's state machine (ACTIVE → RESOLVING) natively reverts duplicate calls, acting as a final on-chain safety net.
// packages/keeper/src/services/raceGuard.ts
import Redis from 'ioredis';

export class RaceGuard {
  constructor(private redis: Redis) {}

  async acquireLock(auctionId: string, ttl: number = 30): Promise<boolean> {
    // SET ... EX <ttl> NX is atomic: only the first caller receives 'OK'
    const result = await this.redis.set(`lock:auction:${auctionId}`, 'locked', 'EX', ttl, 'NX');
    return result === 'OK';
  }
}
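The dynamic priority-fee policy (item 3 above) can be sketched as a bounded linear function of a congestion estimate. All constants and the congestion input here are illustrative assumptions, not protocol values:

```typescript
// Illustrative priority-fee policy; tip bounds and the congestion
// estimate in [0, 1] are assumptions for the sketch.
const MIN_TIP_GWEI = 1;
const MAX_TIP_GWEI = 50;

function priorityFeeGwei(congestion: number): number {
  // Clamp the estimate so bad telemetry can never produce a negative
  // or runaway tip, then interpolate between the floor and ceiling.
  const clamped = Math.min(Math.max(congestion, 0), 1);
  return Math.round(MIN_TIP_GWEI + clamped * (MAX_TIP_GWEI - MIN_TIP_GWEI));
}
```

The floor keeps transactions includable on an idle network; the ceiling bounds worst-case spend during congestion spikes, matching the "without overpaying during calm periods" goal above.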

Infrastructure & CI/CD Pipeline

Local Simulation Stack (docker-compose.yml)

services:
  redis:
    image: redis:7-alpine
    ports: ["6379:6379"]
    volumes: ["redis-data:/data"]
  auction-monitor:
    build: ./keeper
    command: node dist/services/auctionMonitor.js
    depends_on: [redis]
    env_file: .env
  cofhe-dispatcher:
    build: ./keeper
    command: node dist/services/cofheDispatcher.js
    depends_on: [redis, auction-monitor]
  avs-submitter:
    build: ./keeper
    command: node dist/services/avsSubmitter.js
    depends_on: [redis, cofhe-dispatcher]
volumes:
  redis-data:

Automated Pipelines (.github/workflows/)

| Pipeline | Trigger | Checks |
| --- | --- | --- |
| ci-keeper.yml | PR to main | TypeScript compile, unit tests, ESLint, Docker build verification |
| deploy-testnet.yml | Push to release/* | Sequential deploy, smoke test, Keeper service health check |

Keeper Architecture Data Flow

AuctionCreated event → Auction Monitor (Redis scheduling) → closure_queue → CoFHE Dispatcher (batched FHEOS requests) → AVS Submitter (signature aggregation, local fraud-proof check) → submitResolution() on-chain.

Security & Operational Guarantees

  1. Zero Trust Finalization: Keepers do not determine winners. They only trigger the off-chain pipeline and submit cryptographically verified AVS proofs. The contract independently validates the proof before accepting any result.
  2. Capital-Backed Liveness: The 0.2% bounty ensures external economic actors are incentivized to execute triggerFinalize() if internal infrastructure experiences downtime, guaranteeing perpetual protocol activity.
  3. OOG & Load Protection: Strict batch limits (10/block) and timeout thresholds (120s) prevent cascading failures during network congestion. Failed batches persist in Redis until processed or safely voided by the Dynamic Dead Man's Switch.
  4. Deterministic State Transitions: All keeper actions are idempotent. Duplicate calls revert cleanly on-chain without consuming state storage or breaking the auction lifecycle.

Audit Gate Compliance (P0)

Progression from Phase 3 to Phase 4 is strictly blocked until all Keeper P0 items pass:

  • [ ] Keepers auto-invoke triggerFinalize() at endTime without human intervention or missed auctions.
  • [ ] Batch processing never exceeds 10 auctions/block and maintains stable gas consumption under simulated load.
  • [ ] docker compose up successfully launches all services locally, simulating a full execution cycle.
  • [ ] Race condition prevention works flawlessly: ≤ 1% duplicate execution rate in stress tests.
  • [ ] avsSubmitter correctly rejects results below the operator threshold (e.g., 3/5) and validates fraud proofs locally.
  • [ ] CI/CD pipelines (ci-keeper.yml, ci-contracts.yml) pass 100% with zero High/Critical security findings.
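The ≤ 1% duplicate-execution gate above can be measured from stress-test logs with a small helper. The attempt-log shape (auctionId, succeeded) is an assumption made for illustration:

```typescript
// Illustrative stress-test metric: fraction of auctions where more than one
// triggerFinalize() succeeded. The log record shape is an assumption.
interface TriggerAttempt {
  auctionId: string;
  succeeded: boolean; // true if the transaction did not revert
}

function duplicateExecutionRate(attempts: TriggerAttempt[]): number {
  // Count successful triggers per auction
  const successes = new Map<string, number>();
  for (const a of attempts) {
    if (!a.succeeded) continue;
    successes.set(a.auctionId, (successes.get(a.auctionId) ?? 0) + 1);
  }
  if (successes.size === 0) return 0;
  // An auction is "duplicated" if more than one trigger landed on-chain
  const duplicated = Array.from(successes.values()).filter((n) => n > 1).length;
  return duplicated / successes.size;
}
```

Reverted duplicates are expected (the idempotent contract guard absorbs them); only duplicates that actually land on-chain count against the 1% budget.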

Timeline & Dependencies

| Metric | Value |
| --- | --- |
| Estimated Duration | 10–12 days (Phase 4) |
| Team Size | 2–3 Engineers (DevOps + TypeScript + Solidity) |
| Dependencies | Phase 3 (Keeper interfaces, AVS mocks, CoFHE flow) |
| Enables | Phase 5 (Frontend real-time tracking), Phase 6 (Testnet deployment & monitoring) |

Next Steps