P2P Proof-of-Inference Network
Building towards fully decentralized AI inference with cryptographic verification over libp2p
⚠️ The Cost of Centralization: October 2025 AWS Outage
On October 20, 2025, a single DNS error in Amazon Web Services' US-EAST-1 data center in Virginia brought down a significant portion of the internet. This wasn't a cyberattack or natural disaster—just a routine technical update gone wrong.
The result? Millions of people couldn't access essential services for over 12 hours. This is exactly the problem decentralized infrastructure solves.
Impact by the Numbers
- Duration: Over 12 hours. Services were down from 07:11 GMT on October 20, 2025.
- Scale: 1,000+ businesses affected, ranging from startups to Fortune 500 companies.
- Cost: Hundreds of billions of dollars in estimated total economic impact across all affected services.
What Stopped Working
When AWS went down, it wasn't just tech companies that suffered. Critical infrastructure across every sector failed:
- Healthcare: Hospital communications systems
- Education: Canvas (college instructional platform)
- Banking: Chime mobile banking, Coinbase
- Airlines: Delta and United booking systems
- Social media: Snapchat, Facebook, Reddit
- Gaming: Fortnite, Pokémon GO, Roblox
- E-commerce: Shopify, Starbucks mobile orders
- Smart home: Ring doorbells, Alexa voice assistant
- AI services: Perplexity AI
- Payments: Venmo
The Root Cause
According to Al Jazeera, the problem started after "a technical update to the API of DynamoDB, a key cloud database service" caused a DNS error. One mistake in one data center took down over 1,000 businesses globally.
What This Means for You
As CNN reported: "It took a day without Amazon Web Services for Americans to realize how reliant the internet is on a single company."
- Hospitals couldn't access crucial communications systems
- Teachers couldn't access their lesson plans for the day
- People couldn't access their bank accounts or make payments
- Small businesses using Shopify couldn't process orders
This is why we're building P2P decentralization.
The vision: With a fully P2P network (Infraxa's Phase 5 goal), there would be no single point of failure. If one provider goes down, the network automatically routes to others. No 12-hour outages. No hundreds of billions in losses. Just resilient, always-available infrastructure.
Current state: Infraxa currently operates as a distributed system with multiple servers. This P2P network is our roadmap to eliminate even those remaining central coordination points.
How we get there: By combining battle-tested P2P protocols (libp2p), cryptographic verification (Merkle proofs, ZK-SNARKs), and economic incentives (staking), we can create an AI infrastructure that's as resilient as the internet itself. Read on to see how it works.
🤔 The Trust Problem in Decentralized AI
Decentralization solves the outage problem, but creates a new one: How do you trust a random stranger's GPU?
Say you want to run Qwen3-Next-80B but don't have the GPU memory. You could use AWS, but we just saw how that ends. So you turn to a decentralized network where anyone can be a provider. But here's the catch: How do you know they're actually running Qwen3-Next-80B and not Qwen3-0.6B? How do you know they didn't just make up the response?
This is the proof-of-inference problem: In a permissionless network, providers can claim to run expensive models while actually running cheap ones. They can substitute models, forge outputs, or skip computation entirely. Without verification, decentralized AI is just "trust random strangers"—which defeats the whole point.
✅ Our Solution: Cryptographic Verification
Don't trust, verify: Instead of trusting providers, we verify them with cryptographic proofs. Providers must prove they ran the correct model and computation—mathematically, not with promises.
Our P2P proof-of-inference network combines libp2p (battle-tested P2P protocol from IPFS) with multiple verification schemes: Merkle proofs for audit-based verification, fraud proofs for optimistic verification, and ZK-SNARKs for zero-knowledge verification. Each has different trade-offs between speed, cost, and security.
The result: A decentralized AI network where you don't need to trust anyone. Math guarantees correctness. This is Phase 5 of Infraxa's roadmap—moving from distributed to truly decentralized and verifiable.
Core Components
Provider Nodes
Run AI models (like Qwen3, Llama, etc.) and serve inference requests over P2P
Why it matters: Democratizes AI infrastructure—anyone with a GPU can join and earn rewards, not just big tech companies
Router Nodes
Discover providers via DHT/mDNS, route requests, and verify cryptographic proofs
Why it matters: Acts as your gateway to the network, ensuring you get correct results without trusting any single provider
libp2p Transport Layer
Battle-tested P2P protocol (used by IPFS, Filecoin) with Noise encryption and peer discovery
Why it matters: Proven technology that makes true decentralization possible—no DNS, no central servers, just peer-to-peer connections
Multi-Tier Verification
Merkle proofs (audit-based), Fraud Proofs (optimistic), and ZK-SNARKs (zero-knowledge)
Why it matters: Different security models for different needs—from fast optimistic verification to mathematically perfect zero-knowledge proofs
Infraxa's Path to Full Decentralization
Phase 5 Goal
Current state: Infraxa operates as a distributed system (Phases 1-4) with multiple servers, but still has central coordination points.
Phase 5 goal: This P2P proof-of-inference network is our roadmap to true decentralization, where:
- Anyone can contribute compute power and earn rewards
- No single entity can censor or manipulate AI outputs
- Cryptographic proofs guarantee correctness without trust
- The network scales naturally as more providers join
⚙️ How It Works
```
┌─────────────────────────────────────────────────────────┐
│                     LIBP2P NETWORK                       │
│                                                          │
│   ┌──────────────┐            ┌──────────────┐           │
│   │  Provider A  │◄──────────►│  Provider B  │           │
│   │  (Qwen3-4B)  │            │ (Qwen3-1.7B) │           │
│   └──────────────┘            └──────────────┘           │
│          ▲                           ▲                   │
│          │       P2P Discovery       │                   │
│          │        (mDNS/DHT)         │                   │
│          ▼                           ▼                   │
│   ┌──────────────────────────────────────┐               │
│   │             Router Node              │               │
│   │   - Discovers providers              │               │
│   │   - Requests inference               │               │
│   │   - Verifies proofs                  │               │
│   │   - Selects best provider            │               │
│   └──────────────────────────────────────┘               │
│                                                          │
└─────────────────────────────────────────────────────────┘
```
The Inference Flow
Discovery
Router uses mDNS (local network) or Kademlia DHT (internet) to find providers advertising specific models. Providers publish their capabilities: model type, quantization, available compute.
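To make the capability-based lookup concrete, here is a minimal, self-contained sketch of the idea: the router derives a content-addressable key from a model capability and looks providers up under that key. The key scheme, the record fields, and the in-memory dictionary standing in for the Kademlia DHT are all illustrative assumptions, not the project's actual wire format or the py-libp2p API.

```python
import hashlib
import json
import time

def capability_key(model: str, quantization: str) -> str:
    """Derive a content-addressable lookup key from a capability string.

    Hypothetical scheme: the real network may namespace or multihash these
    keys differently; the point is lookup-by-capability, not lookup-by-IP.
    """
    return hashlib.sha256(f"model/{model}/{quantization}".encode()).hexdigest()

# In-memory stand-in for the Kademlia DHT: provider records keyed by capability.
dht: dict[str, list[dict]] = {}

def advertise(peer_id: str, model: str, quantization: str, free_vram_gb: float) -> None:
    """Provider side: publish a capability record under the derived key."""
    record = {
        "peer_id": peer_id,
        "model": model,
        "quantization": quantization,
        "free_vram_gb": free_vram_gb,
        "published_at": time.time(),
    }
    dht.setdefault(capability_key(model, quantization), []).append(record)

def find_providers(model: str, quantization: str) -> list[dict]:
    """Router side: look up providers by capability, not by address."""
    return dht.get(capability_key(model, quantization), [])

if __name__ == "__main__":
    advertise("peer-A", "Qwen3-4B", "4bit", free_vram_gb=10.5)
    advertise("peer-B", "Qwen3-4B", "4bit", free_vram_gb=6.0)
    print(json.dumps(find_providers("Qwen3-4B", "4bit"), indent=2))
```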
Request
Router sends inference request with prompt and verification mode (Merkle/Fraud/ZK). Request includes deterministic parameters (temperature, seed) so results can be reproduced.
```json
{
  "model": "Qwen3-4B-4bit",
  "prompt": "What is 2+2?",
  "verification_mode": "merkle",
  "temperature": 0.7,
  "seed": 42
}
```
Computation
Provider runs the model and builds a Merkle tree of computation steps (each token generation). This creates a cryptographic commitment to the entire inference process.
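As a rough illustration of that commitment, here is a minimal sketch that hashes each token-generation step into a leaf and folds the leaves into a single Merkle root. The leaf schema and hashing prefixes are assumptions made for the example, not the project's actual encoding.

```python
import hashlib
import json

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(step: dict) -> bytes:
    # Canonical JSON encoding of one token-generation step (assumed schema).
    return h(b"leaf:" + json.dumps(step, sort_keys=True).encode())

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root commitment."""
    if not leaves:
        return h(b"empty")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [h(b"node:" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# One leaf per generated token: position, token id, and a digest of the step's state.
steps = [
    {"index": i, "token_id": tid, "state_digest": hashlib.sha256(str(tid).encode()).hexdigest()}
    for i, tid in enumerate([17, 42, 7, 99])
]
root = merkle_root([leaf_hash(s) for s in steps])
print("computation commitment (Merkle root):", root.hex())
```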
Verification
Router verifies the proof cryptographically. For Merkle proofs, it randomly challenges specific steps. Provider must reveal those steps with authentication paths proving they're in the Merkle tree.
If verification fails, provider is flagged and reputation drops. If it succeeds, payment is released.
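The challenge side can be sketched the same way: the provider returns the challenged leaf plus its authentication path (sibling hashes), and the router recomputes the root it already holds. This toy example reuses the hashing scheme from the sketch above; the real protocol's encoding may differ.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_path(leaf: bytes, index: int, path: list[bytes], expected_root: bytes) -> bool:
    """Recompute the root from a challenged leaf plus its sibling hashes."""
    node = leaf
    for sibling in path:
        if index % 2 == 0:                 # current node is a left child
            node = h(b"node:" + node + sibling)
        else:                              # current node is a right child
            node = h(b"node:" + sibling + node)
        index //= 2
    return node == expected_root

# Toy 4-leaf tree built with the same scheme as the commitment sketch above.
leaves = [h(b"leaf:" + bytes([i])) for i in range(4)]
n0 = h(b"node:" + leaves[0] + leaves[1])
n1 = h(b"node:" + leaves[2] + leaves[3])
root = h(b"node:" + n0 + n1)

# Router challenges step 2; provider answers with the leaf and its authentication path.
challenged_index = 2
auth_path = [leaves[3], n0]  # sibling at the leaf level, then the sibling one level up
assert verify_path(leaves[2], challenged_index, auth_path, root)
print("challenge for step", challenged_index, "verified against the committed root")
```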
Key Technologies
- libp2p: Battle-tested P2P protocol (powers IPFS, Filecoin). Handles peer discovery, routing, and encrypted communication. Peer IDs derived from Ed25519 keys provide cryptographic identity.
- Kademlia DHT: Distributed hash table for global peer discovery. Content-addressable routing means you find providers by capability (e.g., "Qwen3-4B"), not IP address. Self-healing and resistant to churn.
- Noise Protocol: Modern cryptographic framework for encrypted transport. Used by WhatsApp, WireGuard, and the Lightning Network. Provides forward secrecy and mutual authentication.
- Merkle Trees: Cryptographic data structure for efficient verification. Each inference step is a leaf, so the router can verify any step without downloading the entire computation.
🔐 Security & Verification
In a permissionless network, providers can cheat. Our defense: cryptographic proofs that make cheating either impossible or immediately detectable.
Currently Protected Against
Active: these attacks are already defended against by the current implementation.
Future Security Work
Planned: these protections are scheduled for future phases.
Three Verification Approaches
We support multiple proof systems because different use cases need different trade-offs between speed, cost, and security:
Merkle Proofs (Audit-Based)
How it works: The provider builds a Merkle tree of all computation steps. The router randomly challenges specific steps, and the provider must reveal those steps with authentication paths. Fast and efficient, but requires interaction.
Best for: High-throughput scenarios where occasional audits are acceptable
Fraud Proofs (Optimistic)
How it works: Providers are assumed honest by default. If someone suspects fraud, they can challenge and force the provider to prove correctness, similar to optimistic rollups in blockchains. Economic incentives (staking) make fraud expensive.
Best for: Cost-sensitive applications where most providers are honest
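A hypothetical sketch of that optimistic flow is below. Since the economic layer (staking, slashing) is still future work on the roadmap, every name and parameter here is illustrative: results are accepted by default, anyone can dispute within a challenge window, and a provider whose recomputed commitment does not match loses part of its stake.

```python
from dataclasses import dataclass, field
import time

CHALLENGE_WINDOW_S = 600      # illustrative dispute window
SLASH_FRACTION = 0.5          # illustrative penalty

@dataclass
class InferenceClaim:
    provider: str
    request_id: str
    output_commitment: str    # e.g. the Merkle root of the claimed computation
    stake: float
    submitted_at: float = field(default_factory=time.time)
    status: str = "accepted_optimistically"

def challenge(claim: InferenceClaim, recomputed_commitment: str) -> InferenceClaim:
    """Dispute a claim: re-run the deterministic inference and compare commitments."""
    if time.time() - claim.submitted_at > CHALLENGE_WINDOW_S:
        raise ValueError("challenge window closed; claim is final")
    if recomputed_commitment == claim.output_commitment:
        claim.status = "upheld"                  # provider was honest
    else:
        claim.stake *= (1 - SLASH_FRACTION)      # slash the cheating provider
        claim.status = "slashed"
    return claim

claim = InferenceClaim("peer-A", "req-001", output_commitment="ab12", stake=100.0)
print(challenge(claim, recomputed_commitment="ab12").status)   # -> upheld
```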
ZK-SNARKs (Zero-Knowledge)
How it works: The provider generates a cryptographic proof that the inference was computed correctly, without revealing model weights or intermediate states. Frameworks like EZKL convert neural network forward passes into arithmetic circuits. Verification is fast (milliseconds), but proof generation is expensive.
Best for: Privacy-critical applications, or when you need mathematical certainty without trust
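EZKL's Python API changes between releases, so rather than reproduce it here, the sketch below shows only the prover/verifier interface a router might program against: the provider turns the forward pass into a circuit and generates a proof, and the router verifies it before releasing payment. None of these names are the EZKL API; they are placeholders for the circuit → witness → proof → verify pipeline.

```python
from typing import Protocol

class ZKInferenceProver(Protocol):
    """Hypothetical provider-side interface; with EZKL the circuit is compiled
    from an exported model and proofs are produced by the ezkl toolchain."""
    def prove(self, model_id: str, inputs: dict, outputs: dict) -> bytes: ...

class ZKInferenceVerifier(Protocol):
    """Hypothetical router-side interface for checking a proof."""
    def verify(self, model_id: str, inputs: dict, outputs: dict, proof: bytes) -> bool: ...

def settle_request(verifier: ZKInferenceVerifier, model_id: str,
                   inputs: dict, outputs: dict, proof: bytes) -> str:
    """Router-side settlement: pay only if the zero-knowledge proof checks out.
    Verification is cheap (milliseconds); proof generation is the expensive part."""
    return "release_payment" if verifier.verify(model_id, inputs, outputs, proof) else "flag_provider"

class AlwaysValid:
    """Stub verifier used only to make the example runnable."""
    def verify(self, model_id, inputs, outputs, proof):
        return True

print(settle_request(AlwaysValid(), "Qwen3-4B-4bit", {"prompt": "What is 2+2?"}, {"text": "4"}, b""))
```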
Cryptographic Building Blocks
These are the core cryptographic technologies that make the security guarantees possible:
- Transport Security (Noise Protocol): End-to-end encryption for all peer connections (libp2p default)
- Peer Identity (Ed25519): Cryptographic peer IDs derived from public keys
- Message Integrity (HMAC/ECDSA): All messages signed to prevent tampering
- Proof Validity (Merkle / Fraud Proofs / ZK-SNARKs): Merkle trees for audits, optimistic fraud proofs, and ZK-SNARKs (via frameworks like EZKL) for zero-knowledge ML verification
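As a small example of the identity and message-integrity pieces, here is a sketch using the `cryptography` package's Ed25519 primitives. Deriving the peer ID as a plain SHA-256 of the raw public key is a simplification (libp2p peer IDs are multihash-encoded), and the response encoding is assumed.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

# Provider identity: an Ed25519 keypair; the peer ID is derived from the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
raw_pub = public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)
peer_id = hashlib.sha256(raw_pub).hexdigest()[:16]   # simplified stand-in for a libp2p peer ID

# Sign a response so the router can detect tampering in transit.
response = {"request_id": "req-001", "output": "4", "merkle_root": "ab12"}
payload = json.dumps(response, sort_keys=True).encode()
signature = private_key.sign(payload)

# Router side: verify the signature against the provider's public key.
try:
    public_key.verify(signature, payload)
    print(f"response from {peer_id} is authentic")
except InvalidSignature:
    print("tampered or forged response; flag provider")
```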
📊 Practical Considerations
P2P adds latency compared to centralized systems. Here's the honest breakdown:
| Metric | Centralized (AWS) | P2P (Local Network) | P2P (Internet) |
|---|---|---|---|
| Connection Setup | ~10ms | ~50ms | ~200ms |
| Inference Time | Same | Same | Same |
| Proof Generation | N/A | ~10ms | ~10ms |
| Verification | N/A | ~5ms | ~5ms |
| Total Overhead | ~10ms | ~65ms | ~215ms |
Note: Actual inference time (model computation) is the same across all approaches. The overhead is just network + verification.
Good Use Cases
- Censorship resistance: No single entity can block your access
- Privacy-critical applications: Don't want to send data to big tech
- Cost-sensitive workloads: Community providers can be cheaper
- Local AI clusters: Multiple GPUs in your lab/company
- Research & experimentation: Testing decentralized AI concepts
Not Ideal For
- Ultra-low latency needs: If every millisecond counts, centralized is faster
- Simple prototyping: P2P adds complexity you might not need yet
- Regulated industries: May need centralized audit trails
- When you trust your provider: If you're running your own servers, P2P is overkill
The Honest Trade-offs
You gain: Censorship resistance, no single point of failure, cryptographic verification, permissionless participation, and independence from big tech.
You pay with: Higher latency (~200ms extra for internet), more complexity (P2P networking, proof verification), and less mature tooling compared to AWS.
The verdict: If you need the guarantees of decentralization and verifiability, the trade-offs are worth it. If you just need fast AI inference and trust your provider, stick with centralized systems.
🚀 Quick Start Guide
Get your P2P inference network running in minutes. Follow these steps or jump to the full demo.
```bash
# Install libp2p for Python
pip install libp2p aioquic

# Or use the requirements file
pip install -r requirements.txt
```
🛠️ Implementation
We're not reinventing the wheel—we're combining battle-tested P2P protocols with cutting-edge cryptographic verification.
- libp2p (py-libp2p): The same P2P networking stack powering IPFS and Filecoin. Handles peer discovery, routing, and encrypted communication. Mature, well-tested, and used in production by thousands of nodes.
- Noise Protocol: Modern cryptographic framework for encrypted transport. Used by WhatsApp, WireGuard, and Lightning Network. Provides forward secrecy and mutual authentication.
- Kademlia DHT: Distributed hash table for peer discovery. Content-addressable routing means you find providers by capability, not IP address. Self-healing and resistant to churn.
- Inspired by LocalAI: LocalAI already implements P2P distributed inference with libp2p. We extend this with cryptographic verification and multi-tier proof systems.
- EZKL for ZK-SNARKs: Framework for generating zero-knowledge proofs of ML inference. Converts neural network computations into arithmetic circuits that can be proven cryptographically.
- DCIN Research: Building on academic work in Decentralized Collaborative Intelligence Networks for heterogeneous device support and model sharding strategies.
```
variants/p2p_network/
├── provider_node.py         # Provider node implementation
├── router_node.py           # Router node implementation
├── p2p_protocol.py          # Protocol definitions
├── peer_discovery.py        # Discovery mechanisms
├── message_handler.py       # Message routing
├── demo_p2p_network.py      # Full demo
├── test_p2p_integration.py  # Integration tests
├── test_p2p_attacks.py      # Security tests
└── README.md                # Documentation
```
Known limitations of the current implementation:
1. Python libp2p: Less mature than the Go/Rust implementations
2. NAT Traversal: May need relay nodes for home networks
3. Bootstrap Nodes: The DHT needs initial peers to join
4. Economic Layer: No staking/payment yet (future work)
🌟 Development Roadmap
We're building this in phases, starting with the core network and gradually adding more sophisticated features. Each phase builds on the previous one to create a production-ready decentralized AI infrastructure.
Phase 1: Core Network
Complete: basic peer-to-peer infrastructure with cryptographic verification
- Basic P2P networking
- Provider discovery
- Inference request/response
- Merkle proof verification
Phase 2: Advanced Proofs
In Progress: multiple verification schemes and reputation tracking
- Fraud proof integration
- ZK-SNARK integration
- Reputation system
- Provider selection algorithm
Phase 3: Economic Layer
Planned: incentive mechanisms to reward honest providers and punish cheaters
- Economic incentives (staking)
- Payment channels
- Slashing for fraud
- Multi-provider consensus
Phase 4: Production Ready
Future: enterprise features and cross-chain integration for real-world deployment
- Cross-chain integration
- TEE attestation
- Privacy-preserving routing
- Production deployment
Want to Learn More?
Dive deeper into the underlying technologies: libp2p.io (P2P networking), py-libp2p (Python implementation), and Noise Protocol (encryption).
Built with ❤️ for decentralized AI
Part of the Infraxa roadmap - Phase 5 Goal: Full Decentralization