A reputation-staking protocol that restores the cost of being wrong on the internet.
Social platforms are engineered for engagement, not accuracy. Engagement is maximized by content that triggers fear and anger. This isn't a bug—it's the business model. The algorithm doesn't ask "is this true?" It asks "will this keep them scrolling?"
The result: platforms actively reward content that is harmful to society.
Research shows affective polarization—distrust and dislike of "the other side"—is rising sharply. We've stopped debating policy. We can't even agree on facts. Groups that once shared a common reality now view each other as suspicious adversaries.
For 200,000 years, speech had consequences. Spread lies in a village, and your reputation suffered. The community remembered. Social media broke that contract. Now you can post misinformation with zero cost—and the algorithm will reward you for it.
AI systems are being trained on this polluted information environment. Models inherit our confusion, our polarization, our inability to distinguish signal from noise. We're encoding dysfunction into the infrastructure of the future.
"If you bring value to the group, you will be rewarded. If you destroy value, your star will fall."
This is how human communities have functioned for all of recorded history. Reputation systems emerged because they work—they enable cooperation at scale by making trustworthiness visible and defection costly.
Trust Engine is a mathematical formalization of this ancient wisdom.
Robin Dunbar (Oxford) argues that language itself evolved primarily for sharing reputational information—"vocal grooming" that enabled cooperation in groups too large for physical bonding. Two-thirds of human conversation is about who did what to whom. This isn't idle gossip. It's the infrastructure of cooperation.
Cross-cultural research confirms this is universal. From hunter-gatherers in the Central African Republic to knowledge workers in the US—reputation determines resource allocation. Gossip enforces norms. Ostracism punishes defectors. These mechanisms emerged independently across every human society because they solve a fundamental coordination problem.
Social media broke these mechanisms by removing the cost of bad-faith participation. Trust Engine restores them.
The core insight from evolutionary game theory: signals must be costly to be honest. When being wrong costs you something, people get more careful—and more honest.
Users stake accumulated reputation when vouching for content. Correct assessments (validated over time) earn reputation rewards. Incorrect stakes cost reputation—publicly and permanently.
Costless drive-by opinions carry no weight. High-stake assessments from users with track records of accuracy carry more. The signal-to-noise ratio improves because noise becomes expensive.
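To make the loop concrete, here is a minimal sketch in Python of how stake-and-resolve could work. The `Assessment` class, `REWARD_RATE`, and `SLASH_RATE` are illustrative assumptions, not the protocol's actual interface or parameters.

```python
from dataclasses import dataclass

# Illustrative parameters only; the real rates are set by the protocol.
REWARD_RATE = 0.10  # fraction of the stake earned when an assessment is validated
SLASH_RATE = 1.00   # fraction of the stake lost when an assessment is invalidated

@dataclass
class Assessment:
    user: str
    claim_id: str
    credible: bool   # the direction of the assessment
    stake: float     # reputation (RS) put at risk

def resolve(assessment: Assessment, validated_as_credible: bool, balances: dict[str, float]) -> None:
    """Settle an assessment once the claim has been validated over time."""
    if assessment.credible == validated_as_credible:
        # A correct stake earns a reputation reward.
        balances[assessment.user] += assessment.stake * REWARD_RATE
    else:
        # An incorrect stake is slashed: the loss is public and permanent.
        balances[assessment.user] -= assessment.stake * SLASH_RATE

# Usage: Alice stakes 50 RS that a claim is credible; the claim is later validated.
balances = {"alice": 200.0}
resolve(Assessment("alice", "claim-42", credible=True, stake=50.0), True, balances)
print(balances["alice"])  # 205.0
```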
Unlike binary upvote/downvote, Trust Engine separates direction (credible vs. not credible) from conviction (willing to stake on it):
| Direction | I Stake Reputation | No Stake |
|---|---|---|
| Credible | High-conviction positive signal | Low-weight opinion |
| Not Credible | High-conviction negative signal | Low-weight opinion |
This reveals not just what people think, but how confident they are—and whether they're willing to back it up.
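As an illustration of direction versus conviction, the toy weighting below treats each assessment as a direction plus a stake, scaled by the assessor's track record. The formula is a hypothetical example, not the protocol's actual scoring rule.

```python
def signal_weight(stake: float, track_record: float) -> float:
    """Hypothetical weighting: a costless opinion (stake == 0) carries minimal weight;
    a large stake from a historically accurate user carries far more."""
    base_opinion_weight = 0.1
    return base_opinion_weight + stake * track_record

# Direction and conviction are independent axes:
assessments = [
    ("credible", 0.0, 0.70),       # costless drive-by opinion
    ("credible", 40.0, 0.92),      # high-conviction positive signal
    ("not credible", 25.0, 0.85),  # high-conviction negative signal
]
for direction, stake, track_record in assessments:
    print(direction, round(signal_weight(stake, track_record), 2))
```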
Reputation Score (RS) is your active credibility stake. It's earned through accurate assessments, used for voting (staked when you vouch for content), and non-transferable. This is your skin in the game.
Epistemic Coin (EPIC) is your crystallized reward. RS automatically and gradually converts to EPIC over time (rate determined by algorithm). Once converted, it cannot be wagered—it's out of the game. But it's tradeable or holdable in your coin-purse. This is your payout.
RS doesn't convert all at once. The algorithm gradually matures your earned RS into EPIC over a defined period. This creates important dynamics:
You must keep earning new RS to maintain voting power. You can't stockpile RS indefinitely and dominate.
Credibility is "use it or lose it"—but you don't lose value. Inactive RS converts to EPIC, rewarding you while freeing up influence for active participants.
Sustained accurate participation accumulates EPIC over time. The longer you're right, the more you're rewarded.
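A minimal sketch of the maturation mechanic, assuming a simple fixed per-period conversion rate (the actual rate and schedule are determined by the protocol's algorithm):

```python
def mature(rs: float, epic: float, rate: float = 0.02) -> tuple[float, float]:
    """Convert a fraction of RS into EPIC for one period.
    RS carries voting power; EPIC is the crystallized reward and cannot be staked."""
    converted = rs * rate
    return rs - converted, epic + converted

rs, epic = 100.0, 0.0
for _ in range(52):  # e.g. one year of weekly maturation
    rs, epic = mature(rs, epic)
print(round(rs, 1), round(epic, 1))  # roughly 35.0 RS left, 65.0 EPIC accumulated
```

Under these assumed numbers, a user who stops participating sees their voting power drain toward zero within a couple of years, while the value they earned is preserved as EPIC.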
Platform revenue (AI data licensing, platform fees, enterprise API) funds EPIC token buybacks:
Accurate assessments → Earn RS
↓
RS matures → EPIC (gradual, automatic)
↓
Platform value grows → Revenue
↓
Revenue → EPIC buybacks (treasury)
↓
EPIC appreciates
↓
Greater incentive to earn RS
↓
More quality participation → [cycle repeats]
The elegance: You can't just buy EPIC and stake it—EPIC doesn't vote. You must earn RS through accuracy to participate. But EPIC holders benefit from the collective accuracy of the network driving platform value.
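A toy illustration of that rule, with hypothetical names: staking draws only on the RS balance, so an EPIC-only holder has no direct influence.

```python
class InsufficientReputation(Exception):
    pass

def place_stake(account: dict, amount: float) -> None:
    """Voting power comes only from RS; the EPIC balance is ignored entirely."""
    if account["rs"] < amount:
        raise InsufficientReputation("EPIC cannot be staked; earn RS through accurate assessments.")
    account["rs"] -= amount

whale = {"rs": 0.0, "epic": 1_000_000.0}  # bought EPIC on the market, never assessed anything
try:
    place_stake(whale, 10.0)
except InsufficientReputation as exc:
    print(exc)
```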
Not all participation is equal. Signal quality, and the reward for it, scale with commitment.
The name Outpost is deliberate. These are forward positions—scouting claims from the broader information landscape and bringing them into the system for evaluation. The creator stakes their credibility on the claim's veracity. This is where the Trust Engine begins its work.
Content progresses through defensive tiers based on the total reputation staked and on the degree of controversy, a mathematical measure of how contested the position is.
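One possible formalization, sketched below: controversy is measured from how evenly stake is split across the two sides, and the tier is a function of total stake discounted by that controversy. The thresholds, the controversy formula, and any tier name other than Outpost and Castle are assumptions for illustration.

```python
def controversy(stake_for: float, stake_against: float) -> float:
    """0.0 when all stake is on one side, 1.0 when stake is evenly split."""
    total = stake_for + stake_against
    if total == 0:
        return 0.0
    return 1.0 - abs(stake_for - stake_against) / total

def tier(stake_for: float, stake_against: float) -> str:
    """Hypothetical tier thresholds; contested claims advance more slowly."""
    total = stake_for + stake_against
    effective = total * (1.0 - 0.5 * controversy(stake_for, stake_against))
    if effective >= 10_000:
        return "Castle"
    if effective >= 1_000:
        return "Fortified"  # stand-in for whatever intermediate tiers exist
    return "Outpost"

print(tier(300, 50))       # Outpost: new and lightly staked
print(tier(12_000, 500))   # Castle: heavily backed, little opposition
print(tier(9_000, 8_500))  # large stake but highly contested, so it stays lower
```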
The metaphor encodes meaning. A claim at Outpost is new, untested, easily challenged. A claim at Castle has weathered sustained engagement—many people have staked their credibility on it. You can see at a glance how entrenched an idea is.
This creates powerful dynamics:
You immediately understand how difficult it would be to overturn a claim. A Castle-level belief isn't just popular—it's fortified by accumulated stakes from people willing to risk their reputation on it.
In idea warfare, opposition makes victory meaningful. Successfully challenging an entrenched position—and being proven right—earns outsized returns. Difficulty creates reward.
The highest-value contribution isn't building consensus—it's overturning false consensus. Proving a Castle wrong is worth as much as building it was. The system rewards those who correct deeply entrenched errors.
Being correct about something easy is worth little. Being correct about something contested and important—where significant reputation is on the line—is worth a lot. The system rewards courage, not just accuracy.
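A hedged sketch of how that payoff asymmetry might look: the reward multiplier below grows with the stake fortifying the opposing position and with how contested the claim is. The formula is illustrative only, not the protocol's payout rule.

```python
import math

def challenge_reward(stake: float, opposing_stake: float, controversy: float) -> float:
    """Reward for a correct assessment, scaled by difficulty: overturning an
    entrenched position and taking a contested stand both multiply the base payout."""
    base = stake * 0.10
    entrenchment = math.log10(1.0 + opposing_stake)  # how fortified the other side is
    return base * (1.0 + entrenchment) * (1.0 + controversy)

print(round(challenge_reward(50, 0, 0.1), 1))       # easy, uncontested claim: 5.5
print(round(challenge_reward(50, 50_000, 0.9), 1))  # proving a Castle wrong: 54.1
```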
The protocol has been mathematically modeled against known attack vectors.
Every major AI lab is desperate for quality training signals. Synthetic data is hitting a wall. Trust-weighted content scores are a potential game-changer for data curation.
Users actively seek credibility tools. The major platforms have lost the benefit of the doubt. There's demand for new infrastructure.
Crypto infrastructure has battle-tested the primitives we need. Staking, slashing, and reputation tokens are well-understood patterns.
The EU and the US are pushing for content accountability. Platforms will need credibility signals whether they want them or not.
This isn't speculative design. The mechanisms we're formalizing have been validated by:
Anthropology: Cross-cultural studies confirm reputation-based resource allocation works identically across US office workers, Indian professionals, and Central African hunter-gatherers (WSU, 2023).
Evolutionary Game Theory: Costly signaling theory (Zahavi, Grafen) explains why expensive signals are honest signals. Third-party punishment serves as a costly signal of trustworthiness (Jordan et al., Harvard).
Indirect Reciprocity: Richard Alexander's framework—"I help you, someone else helps me, mediated by reputation"—is unique to humans and develops in early childhood. It's evolutionarily stable under specific social norms.
Ancient communities figured out reputation systems through trial and error. Modern researchers independently derived the same principles mathematically. The convergence is what makes this compelling—not new theory, but validated infrastructure.
We're rebuilding the reputational infrastructure that social media accidentally broke. Interested in discussing the protocol, the math, or potential applications?