Every link is a node. Every browser is a compute unit. Distributed inference across sovereign hardware and elastic browser compute.
Permanent hardware backbone meets elastic browser-based compute for unbounded scale.
Five Raspberry Pi 5 nodes form the permanent compute backbone. Two Hailo-8 AI accelerators deliver 52 TOPS of dedicated neural inference. Always on, always sovereign.
Every browser tab becomes a compute node. WebGPU handles matrix operations, WASM runs model shards, WebRTC coordinates peer-to-peer inference. Zero install required.
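The opt-in described above implies a capability check before a tab joins the mesh. A minimal TypeScript sketch, assuming illustrative names (none of these interfaces are part of a published SDK):

```typescript
// Hypothetical sketch: the three capabilities a page would need before
// opting in as a compute node. Names are illustrative assumptions.
interface NodeCapabilities {
  webgpu: boolean; // matrix operations
  wasm: boolean;   // model shard runtime
  webrtc: boolean; // peer-to-peer coordination
}

// Probe a global object (window in a real browser) for the required APIs.
function detectCapabilities(g: any): NodeCapabilities {
  return {
    webgpu: typeof g.navigator?.gpu !== "undefined",
    wasm: typeof g.WebAssembly !== "undefined",
    webrtc: typeof g.RTCPeerConnection !== "undefined",
  };
}

// A tab only joins the mesh when all three are available.
function canJoinMesh(c: NodeCapabilities): boolean {
  return c.webgpu && c.wasm && c.webrtc;
}
```

Taking the global as a parameter keeps the check testable outside a browser; in production the argument would simply be `window`.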
The Pi fleet handles routing, model hosting, and consensus. Browser nodes contribute GPU cycles during inference peaks. Load balancing across the entire mesh happens automatically.
Three steps from request to response, distributed across the mesh.
1. A request arrives at blackroad.io/chat or via the OpenAI-compatible API. The gateway identifies optimal nodes based on model availability, load, and latency.
2. The request is routed to Pi fleet nodes with Hailo-8 accelerators for neural inference. If demand exceeds fleet capacity, browser compute nodes receive model shards via WebRTC.
3. Responses are assembled, verified against the soul chain for agent identity, and streamed back to the client. Sub-100ms first token latency on fleet-local inference.
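The gateway's node selection described above can be sketched as a scoring function. The fields, weights, and Hailo bias below are illustrative assumptions, not the actual routing policy:

```typescript
// Hypothetical sketch of gateway node selection: score candidates by model
// availability, load, and latency, preferring Hailo-accelerated fleet nodes.
interface MeshNode {
  id: string;
  hasModel: boolean; // requested model is resident on this node
  load: number;      // 0 (idle) .. 1 (saturated)
  latencyMs: number; // measured round-trip to the gateway
  hailo: boolean;    // fleet node with a Hailo-8 accelerator
}

function score(n: MeshNode): number {
  if (!n.hasModel) return -Infinity;  // cannot serve without the model
  const accel = n.hailo ? 0.5 : 0;    // bias toward fleet-local inference
  return accel - n.load - n.latencyMs / 1000;
}

// Pick the highest-scoring node, or undefined if no candidates exist.
function pickNode(nodes: MeshNode[]): MeshNode | undefined {
  return [...nodes].sort((a, b) => score(b) - score(a))[0];
}
```

Nodes missing the model score negative infinity, so they are only excluded rather than crashing the sort; spilling over to browser nodes is then just a matter of which candidates appear in the list.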
Five access points forming a physical mesh network with dedicated subnets and failover routing.
Live view of the network topology and inter-node connections.
From proof of concept to global mesh in four phases.
Single browser tab performing inference via WebGPU, coordinated with a Pi fleet node. Validate latency, throughput, and model shard distribution.
Embeddable JavaScript SDK that turns any webpage into a mesh compute node. One script tag to opt in. WebRTC peer discovery, WASM model runtime, automatic load balancing.
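A minimal sketch of what the one-tag opt-in might look like. The script URL, global name, and options below are assumptions for illustration, not a published API; a runnable stand-in for the global is included so the shape can be exercised:

```typescript
// Hypothetical embed surface. The real SDK would be loaded with a tag like:
//   <script src="https://blackroad.io/mesh.js" async></script>
interface MeshOptions {
  maxGpuShare: number;    // cap on contributed GPU time, 0..1
  pauseOnBattery: boolean; // back off when the device is on battery
}

// Minimal stand-in for the global object such a script tag would define.
const BlackRoadMesh = {
  joined: false,
  join(opts: MeshOptions): boolean {
    if (opts.maxGpuShare <= 0 || opts.maxGpuShare > 1) return false;
    this.joined = true; // real SDK would start WebRTC peer discovery here
    return true;
  },
};

// A host page opts in with one call, capping its GPU contribution at 25%.
BlackRoadMesh.join({ maxGpuShare: 0.25, pauseOnBattery: true });
```

Bounding `maxGpuShare` at the join boundary keeps a misconfigured page from monopolizing a visitor's GPU, which matters for a zero-install opt-in.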
blackroad.io/chat as the first mesh-powered product. Free AI chat where your browser contributes compute. OpenAI-compatible API at 50% of the cost, with a 70/30 compute revenue split.
Compute contributors earn. API consumers save. The mesh grows itself.
OpenAI-compatible inference API priced at 50% of centralized providers' rates. Requests are distributed across the mesh, reducing per-query cost through shared compute.
70% of API revenue flows to compute contributors. Browser nodes earn in proportion to the GPU cycles they contribute; Pi fleet operators earn a baseline allocation.
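The split above can be sketched as a payout calculation, assuming a simple cycles-proportional formula (units and the fleet baseline allocation are simplified away for illustration):

```typescript
// Hedged sketch of the 70/30 split: 70% of a revenue amount is divided
// among contributors in proportion to GPU cycles contributed. The exact
// accounting is an assumption; the fleet baseline is omitted for brevity.
function contributorPayouts(
  revenue: number,
  cycles: Record<string, number>, // nodeId -> GPU cycles contributed
): Record<string, number> {
  const pool = revenue * 0.7; // 70% to compute contributors
  const total = Object.values(cycles).reduce((a, b) => a + b, 0);
  const out: Record<string, number> = {};
  for (const [id, c] of Object.entries(cycles)) {
    out[id] = total > 0 ? (pool * c) / total : 0;
  }
  return out;
}
```

For example, on $100 of revenue a node that contributed 3 of 4 total cycles would receive $52.50 from the $70 contributor pool.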
Pre-configured Raspberry Pi 5 + Hailo-8 node kits. Plug in, connect to the mesh, start earning. Full BlackRoad OS pre-installed with auto-bootstrap.
Contribute compute from your browser. Run a node on sovereign hardware. Build on the most distributed AI inference network.