Reliquary solves one problem: you want to use AI, you want to know the server actually ran the model you asked for, and re-running it yourself is too expensive.
Every AI answer in Reliquary ships with a tiny cryptographic receipt. Any validator can check the receipt in milliseconds. If the receipt is valid, the model was run honestly on the declared weights. If it's not, the miner earns nothing.
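To make "check the receipt in milliseconds" concrete, here is a minimal sketch of receipt verification, assuming an illustrative receipt layout (the field names are not Reliquary's actual schema) and using HMAC as a stand-in for the wallet's real signature scheme:

```python
import hashlib
import hmac
import json

# Hypothetical receipt fields; the real schema lives in Reliquary itself.
def canonical_digest(receipt: dict) -> bytes:
    """Hash the receipt in a canonical (sorted-key) JSON encoding."""
    payload = json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(payload).digest()

def sign_receipt(receipt: dict, wallet_key: bytes) -> str:
    # HMAC stands in for the miner wallet's actual signature scheme.
    return hmac.new(wallet_key, canonical_digest(receipt), hashlib.sha256).hexdigest()

def verify_receipt(receipt: dict, signature: str, wallet_key: bytes) -> bool:
    return hmac.compare_digest(sign_receipt(receipt, wallet_key), signature)

key = b"miner-wallet-secret"
receipt = {"model_hash": "abc123", "completion_hash": "def456", "window": 42}
sig = sign_receipt(receipt, key)
print(verify_receipt(receipt, sig, key))    # honest receipt verifies: True
tampered = {**receipt, "model_hash": "evil999"}
print(verify_receipt(tampered, sig, key))   # any tampered field fails: False
```

The point of the canonical encoding is that signing and verification hash the exact same bytes, so changing any field (the declared model hash included) invalidates the signature.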
The actors
- Miners run the AI model and submit answer + receipt.
- Validators check the receipt, vote on quality, and set stake weights.
- The chain records every vote so outside observers can audit later.
One request, end to end
- The policy authority publishes the blessed model hash for this window.
- Miners run requests through that model. Each answer carries a completion, tokens, a sketch of hidden states at one layer, and a signature binding everything to the miner's wallet.
- Validators run a cheap one-layer forward pass at 32 sampled positions, compare the sketch, and check 8 more stages.
- Every validator signs a verdict. The stake-weighted median across the mesh produces the canonical outcome.
- The median's Merkle root goes onchain per window. Outside observers can reconstruct every verdict from the chain + storage buckets.
Why it can't be cheated
The 19-attack-class audit lives at /docs/scoring. The short version: the sketch depends on per-window randomness the miner cannot predict, so a forged sketch will always miss tolerance. Running a smaller model blows up the log-probability drift stage. Copying another miner's output is caught by content-hash matching, with a 2-second ambiguity window for near-simultaneous submissions.
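The copy-detection rule can be sketched as follows. This is a simplified model: the arrival-time bookkeeping and the flagging policy are assumptions, and only the 2-second window comes from the design above.

```python
import hashlib

AMBIGUITY_WINDOW = 2.0  # seconds, per the design above

def content_hash(completion: str) -> str:
    return hashlib.sha256(completion.encode()).hexdigest()

def flag_copies(submissions):
    """submissions: list of (miner_id, completion, arrival_time_seconds)."""
    first_seen = {}  # content hash -> earliest arrival time
    flagged = []
    for miner, completion, t in sorted(submissions, key=lambda s: s[2]):
        h = content_hash(completion)
        if h in first_seen and t - first_seen[h] > AMBIGUITY_WINDOW:
            flagged.append(miner)  # arrived too late to be an independent duplicate
        first_seen.setdefault(h, t)
    return flagged

subs = [
    ("miner-a", "the answer is 4", 0.0),
    ("miner-b", "the answer is 4", 1.5),  # within 2 s: ambiguous, not flagged
    ("miner-c", "the answer is 4", 5.0),  # 5 s later: flagged as a copy
]
print(flag_copies(subs))  # ['miner-c']
```

Two miners can legitimately produce identical completions at nearly the same time, which is why submissions inside the ambiguity window are not penalized.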
What's different from today's subnets
Most Bittensor subnets trust one validator per request. Reliquary lets the mesh disagree cleanly — outliers auto-gate after 5% disagreement. Most subnets re-execute the full model to verify. Reliquary verifies 32 positions out of hundreds — ~99% cheaper, same security.
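The stake-weighted median and the 5% auto-gate compose as sketched below, under assumed data shapes: each validator reports a 0/1 verdict per request, and a validator gates when it disagrees with the canonical outcome on more than 5% of requests. The encoding and bookkeeping here are illustrative.

```python
GATE_THRESHOLD = 0.05  # per the design above

def weighted_median(values, weights):
    """Stake-weighted median: smallest value covering half the total stake."""
    total = sum(weights)
    running = 0.0
    for v, w in sorted(zip(values, weights)):
        running += w
        if running >= total / 2:
            return v
    return sorted(values)[-1]

def gate_outliers(verdicts, stakes):
    """verdicts: {validator: [0/1 verdict per request]}. Returns (canonical, gated)."""
    names = list(verdicts)
    n_requests = len(next(iter(verdicts.values())))
    canonical = [
        weighted_median([verdicts[v][i] for v in names], [stakes[v] for v in names])
        for i in range(n_requests)
    ]
    gated = []
    for v in names:
        disagreements = sum(a != b for a, b in zip(verdicts[v], canonical))
        if disagreements / n_requests > GATE_THRESHOLD:
            gated.append(v)  # outlier: disagrees with the mesh too often
    return canonical, gated

verdicts = {
    "val-1": [1] * 20,
    "val-2": [1] * 20,
    "val-3": [1] * 18 + [0, 0],  # disagrees on 10% of requests
}
stakes = {"val-1": 100.0, "val-2": 80.0, "val-3": 50.0}
canonical, gated = gate_outliers(verdicts, stakes)
print(gated)  # ['val-3']
```

No single validator decides an outcome: the median absorbs an honest mistake or a single malicious verdict, and the gate removes validators that drift from the mesh persistently rather than once.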