Featured · Research · Cryptography · Decentralization · MLX

Proof-of-Inference: Cryptographic Verification for Decentralized AI

Exploring cryptographic verification mechanisms for Infraxa's future decentralized node network, achieving a 100% detection rate for model substitution attacks in our experiments.

October 16, 2025 · 12 min read · View on GitHub
Experimental Research

This is a research proof-of-concept exploring cryptographic verification mechanisms for Infraxa's future decentralized node network. Not production-ready.

The Challenge

As Infraxa scales to a decentralized network of GPU nodes, we face critical verification challenges:

Model Identity

How do we verify nodes are running the claimed models and not cheaper substitutes?

Proof of Work

How do we prove inference actually happened and wasn't cached or faked?

Efficient Audits

How do we audit efficiently without re-running every inference?

Key Results

100% Detection Rate
All model substitution attacks were successfully detected and blocked

Attack Scenarios Tested

  • Attacker 1: Used Qwen3-0.6B instead of 4B (68% cost savings) - BLOCKED
  • Attacker 2: Used Qwen3-1.7B instead of 4B (14% cost savings) - BLOCKED
  • Hash Forgery: Attempted to fake model identity - BLOCKED

Performance Impact

  • Audit Overhead: <1% of inference time
  • Verification Cost: 90% reduction via probabilistic audits
  • Real Models: Tested with MLX on Apple Silicon

Attack Detection Performance
100% detection rate across all attack scenarios

Performance Analysis

We tested three different model sizes to understand the economic incentives for attackers to substitute cheaper models:

Inference Time Comparison
Smaller models are 3.1x faster (strong incentive to cheat)
Potential Cost Savings
Attackers could save up to 68% by using smaller models
Detailed Performance Metrics
Tested on M3 Max, 30 tokens, temperature=0.7
Model            Size (MB)  Inference (s)  Throughput (tok/s)  Cost Savings   Status
Qwen3-0.6B-4bit  335        0.30           101                 68%            Blocked
Qwen3-1.7B-6bit  1,024      0.81           37                  14%            Blocked
Qwen3-4B-4bit    2,560      0.94           32                  0% (baseline)  Expected
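The cost-savings column follows directly from inference time relative to the 4B baseline. A quick check, assuming compute cost scales with wall-clock time:

```python
# Reproduce the cost-savings column from the measured inference times.
# Assumption: a node's compute cost is proportional to wall-clock time.
baseline_s = 0.94  # Qwen3-4B-4bit

models = {
    "Qwen3-0.6B-4bit": 0.30,
    "Qwen3-1.7B-6bit": 0.81,
}

for name, seconds in models.items():
    savings = 1 - seconds / baseline_s
    speedup = baseline_s / seconds
    print(f"{name}: {savings:.0%} savings, {speedup:.1f}x faster")
```

This is exactly the attacker's incentive: the 0.6B model recovers 68% of the cost at 3.1x the speed, so without verification the rational move is to substitute it.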

How It Works

Our proof-of-inference system uses four cryptographic mechanisms to verify model identity and prevent fraud:

1. Model Hash Commitments

Cryptographically bind responses to specific models. Providers include a model hash in every signed response.

model_hash = hash(model_weights)
signature = sign(model_hash + output)

✓ Catches 100% of simple substitution attacks
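A minimal sketch of the commitment check in Python, assuming model weights can be serialized to bytes and that the router keeps a registry of approved hashes (the registry and the placeholder byte strings are illustrative, not the actual implementation):

```python
import hashlib

def model_hash(weight_bytes: bytes) -> str:
    """Commitment to the exact weights being served."""
    return hashlib.sha256(weight_bytes).hexdigest()

# Hypothetical registry the router fills in when a model is onboarded.
weights = b"...serialized Qwen3-4B weights..."
REGISTERED = {"Qwen3-4B-4bit": model_hash(weights)}

# Provider side: every response carries the hash of the loaded model.
response = {"output": "Hello!", "model_hash": model_hash(weights)}

# Verifier side: a substituted model hashes to a different value.
honest = response["model_hash"] == REGISTERED["Qwen3-4B-4bit"]
substitute = model_hash(b"...Qwen3-0.6B weights...")
cheating_detected = substitute != REGISTERED["Qwen3-4B-4bit"]
```

Any change to the weights, even a single byte, produces a different SHA-256 digest, which is why simple substitution cannot survive this check.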

2. Merkle Tree Transcripts

Commit to every inference step's logits during generation to prevent output forgery.

for step in generation:
  transcript.append(hash(logits))
merkle_root = build_tree(transcript)

✓ Prevents output forgery without computation
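The transcript commitment can be sketched as a standard binary Merkle tree over per-step logit hashes. This is a generic construction, not the PoC's exact tree layout; the placeholder logit bytes are illustrative:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root commitment."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# One leaf per generation step: the hash of that step's logits.
logits_per_step = [b"step0-logits", b"step1-logits", b"step2-logits"]
transcript = [h(x) for x in logits_per_step]
root = merkle_root(transcript)
```

Because the root is signed before any challenge is issued, a provider cannot retroactively invent logits for a step it never computed: changing any leaf changes the root.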

3. Probabilistic Audits

Router selects random steps via VRF to verify, reducing verification cost by 90%.

challenge_steps = vrf_select(total=30, k=3)
# e.g. [0, 5, 19]: verify only 3 of 30 steps
verify(merkle_proofs)

✓ 90% cost reduction vs full verification
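A sketch of the audit selection and its detection math, using seeded `random.sample` as a stand-in for a real VRF (a VRF would make the seed unpredictable to the provider yet publicly provable; the function name and parameters here are illustrative):

```python
import random

def select_challenge_steps(seed: int, total_steps: int, k: int) -> list[int]:
    """VRF stand-in: deterministic given the seed, so both router and
    auditor derive the same challenge set."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(total_steps), k))

# Challenge 3 of 30 steps: the verifier re-checks only 10% of the transcript.
steps = select_challenge_steps(seed=42, total_steps=30, k=3)

# If a cheater forged a fraction f of the committed steps, k independent
# random challenges catch the fraud with probability 1 - (1 - f)**k.
f, k = 0.5, 3
p_detect = 1 - (1 - f) ** k  # 87.5% for even this tiny audit
```

The economics favor the auditor: detection probability rises exponentially in k, so a small constant number of challenges per request suffices once slashing makes a single detection expensive.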

4. Cryptographic Signatures

Digital signatures prevent hash forgery and ensure non-repudiation of responses.

signature = sign(
  model_hash + transcript_root
)
verify(signature, public_key)

✓ Prevents hash manipulation attacks
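As the limitations section notes, the PoC currently uses HMAC rather than real ECDSA signatures. A minimal sketch of that demo-grade scheme, assuming a secret pre-shared between provider and router (production would use a public-key signature so anyone can verify):

```python
import hashlib
import hmac

def sign(secret: bytes, model_hash: bytes, transcript_root: bytes) -> bytes:
    """Demo signature: HMAC-SHA256 over the combined commitments."""
    return hmac.new(secret, model_hash + transcript_root, hashlib.sha256).digest()

def verify(secret: bytes, model_hash: bytes, transcript_root: bytes, sig: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(secret, model_hash, transcript_root), sig)

secret = b"provider-router-shared-secret"   # assumption: provisioned at node registration
m_hash = hashlib.sha256(b"model weights").digest()
root = hashlib.sha256(b"transcript").digest()

sig = sign(secret, m_hash, root)
ok = verify(secret, m_hash, root, sig)
# Swapping in a cheaper model's hash invalidates the signature.
forged = verify(secret, hashlib.sha256(b"cheaper weights").digest(), root, sig)
```

Signing the model hash and transcript root together is what binds the three previous mechanisms into one non-repudiable response.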

Relevance to Infraxa's Vision

From Centralized Gateway to Decentralized Network

Today: Infraxa provides unified access to 100+ AI models through a single API—aggregating OpenAI, Anthropic, Google, Meta, and xAI.

Future: As outlined in the Infraxa Whitepaper, Phases 2-4 will introduce a decentralized network of independent GPU operators running inference workloads.

Phase 1: Centralized Gateway (current)
Phase 2: Image Generation Nodes
Phase 3: LLM Inference Nodes
Phase 4: Model Verification (this research)

Why We're Open Sourcing This

Building in Public

We're open sourcing this proof-of-inference research because decentralized AI infrastructure requires community trust and collaboration. Here's why:

Security Through Transparency

Cryptographic systems are only as strong as their scrutiny. By open sourcing our proof-of-inference mechanism, we invite:

  • Security researchers to audit and improve the system
  • Cryptographers to validate our approach
  • Community feedback on potential vulnerabilities

Accelerating Innovation

The future of AI is decentralized, and we can't build it alone. Open sourcing enables:

  • Faster iteration with community contributions
  • Cross-pollination of ideas from other projects
  • Industry standards that benefit everyone

Ecosystem Growth

A thriving decentralized AI ecosystem needs shared infrastructure:

  • Other projects can build on our work
  • Node operators can verify the system before joining
  • Researchers can extend and improve the approach

Trust & Verifiability

Decentralized networks require trust. Open source provides:

  • Transparency in how verification works
  • Reproducibility of our results
  • Community ownership of the infrastructure

"The best way to predict the future is to invent it—together."
We believe decentralized AI infrastructure should be built collaboratively, transparently, and for the benefit of everyone.

Current Limitations & Future Work

Not Production-Ready

This is an experimental proof-of-concept. For production use in Infraxa's node network, we need:

  • Real ECDSA signatures (currently using HMAC for demo)
  • Cryptographic VRF (currently deterministic selection)
  • Actual model weight hashing (currently timestamp-based)
  • TEE attestation (SGX/SEV/TDX for hardware security)
  • Economic incentives (staking, slashing for misbehavior)
  • Multi-provider consensus (cross-verification)
  • Professional security audit

Try It Yourself

The full implementation is open source and ready to run on Apple Silicon:

# Clone the repository
git clone https://github.com/Infraxa/proof-of-inference-v-0.0.1
cd proof-of-inference-v-0.0.1

# Run the demo
python demo.py

Built with ❤️ for the future of decentralized AI

Part of the Infraxa ecosystem