A multi-project workspace for W3C WebNN (Web Neural Network) implementation, specification tooling, and GPU acceleration.
```bash
# Clone all repositories and set up the workspace
make setup

# Build all projects
make build

# Run tests
make test
```

This workspace implements a complete WebNN toolchain, from model conversion to multi-backend execution.
```mermaid
graph TB
subgraph "Input & Authoring"
ONNX["ONNX Models"]
DSL[".webnn DSL Files"]
PY["Python Code"]
end
subgraph "Python Bindings Layer"
PYWN["pywebnn<br/>Python WebNN API"]
end
subgraph "Conversion & Validation Layer"
WG["webnn-graph<br/>DSL Parser & Converter"]
WOU["webnn-onnx-utils<br/>Shared Utilities"]
WG -.uses.-> WOU
end
subgraph "Core WebNN Implementation"
RN["rustnn<br/>WebNN Runtime"]
VAL["Graph Validator"]
CONV["Format Converters"]
RN --> VAL
RN --> CONV
RN -.uses.-> WOU
end
subgraph "Execution Backends"
TRT["trtx-rs<br/>TensorRT FFI"]
ORT["ONNX Runtime"]
CML["CoreML"]
end
subgraph "Utilities"
SB["search-bikeshed<br/>Spec Search Tool"]
end
ONNX --> WG
DSL --> WG
PY --> PYWN
PYWN --> RN
WG --> RN
RN --> TRT
RN --> ORT
RN --> CML
style RN fill:#4a90e2
style PYWN fill:#9b59b6
style WG fill:#7b68ee
style WOU fill:#50c878
style TRT fill:#76b900
style SB fill:#ffa500
```
| Component | Purpose | Key Features |
|---|---|---|
| `rustnn` | Core WebNN runtime | • W3C spec-compliant implementation<br/>• Multi-backend execution<br/>• Rust library crate<br/>• 88/105 operations (84% coverage) |
| `pywebnn` | Python bindings | • Full W3C WebNN Python API<br/>• PyO3 bindings to rustnn<br/>• NumPy integration<br/>• PyPI package distribution |
| `webnn-graph` | DSL & visualization | • `.webnn` text format parser<br/>• ONNX → WebNN conversion<br/>• Graph validation & visualization<br/>• JavaScript code generation |
| `webnn-onnx-utils` | Shared utilities | • Type/operation mapping<br/>• Attribute parsing<br/>• Shape inference<br/>• Used by rustnn & webnn-graph |
| `trtx-rs` | GPU acceleration | • Safe TensorRT-RTX bindings<br/>• RAII-based API<br/>• Mock mode for dev without GPU<br/>• AOT compilation + runtime inference |
| `search-bikeshed` | Spec tooling | • Index W3C WebNN specs<br/>• Full-text search (SQLite FTS5)<br/>• Offline spec browsing |
```mermaid
sequenceDiagram
participant User
participant WebNNGraph as "webnn-graph"
participant Utils as "webnn-onnx-utils"
participant RustNN as "rustnn"
participant Backend as "Execution Backend"
alt ONNX Model Input
User->>WebNNGraph: ONNX model file
WebNNGraph->>Utils: Map operations & types
Utils-->>WebNNGraph: WebNN equivalents
WebNNGraph->>User: .webnn DSL / JSON
end
alt Python WebNN API
User->>RustNN: Python API calls
Note over RustNN: MLContext.create()
Note over RustNN: MLGraphBuilder.build()
end
User->>RustNN: graph.build()
RustNN->>RustNN: Validate graph structure
RustNN->>Utils: Type & shape inference
Utils-->>RustNN: Validated types
RustNN-->>User: Immutable MLGraph
User->>RustNN: graph.compute(inputs)
RustNN->>RustNN: Select backend (context hints)
alt TensorRT Backend
RustNN->>Utils: Convert to ONNX protobuf
RustNN->>Backend: Execute via trtx-rs
else ONNX Runtime Backend
RustNN->>Utils: Convert to ONNX protobuf
RustNN->>Backend: Execute via ORT
else CoreML Backend
RustNN->>Backend: Convert to CoreML protobuf
RustNN->>Backend: Execute via CoreML
end
Backend-->>RustNN: Tensor outputs
RustNN-->>User: Results
```
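The sequence above boils down to a handful of API calls. Here is a minimal sketch of that lifecycle, assuming a pywebnn surface that mirrors the W3C WebNN API (the `MLGraphBuilder` constructor and the operand-descriptor format are illustrative assumptions, not verified against the actual bindings):

```python
import numpy as np
import pywebnn as ml

# Create a context; the backend is selected here, from context hints.
context = ml.MLContext()

# Build a trivial graph: one input feeding a relu.
# (Builder construction and descriptor keys are assumed, mirroring the W3C spec.)
builder = ml.MLGraphBuilder(context)
x = builder.input('x', {'dataType': 'float32', 'shape': [1, 4]})
y = builder.relu(x)

# build() runs validation plus shape/type inference and returns an immutable graph.
graph = builder.build({'y': y})

# compute() dispatches to the selected backend and returns tensor outputs.
outputs = graph.compute({'x': np.array([[-1.0, 0.0, 2.0, 3.0]], dtype=np.float32)})
print(outputs['y'])  # expected: [[0. 0. 2. 3.]]
```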
rustnn follows the W3C WebNN specification with a multi-layered architecture:

```
┌─────────────────────────────────────────────────────────────┐
│ API Layer │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ CLI Tool │ │ Rust Crate │ │Python Bindings│ │
│ │ (main.rs) │ │ (lib.rs) │ │ (PyO3) │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└────────────────────────┬────────────────────────────────────┘
│
┌────────────────────────┼────────────────────────────────────┐
│ Core Graph Processing │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ MLGraphBuilder (builder.rs) │ │
│ │ • input(), constant(), conv2d(), relu(), etc. │ │
│ │ • Builds backend-agnostic graph │ │
│ └─────────────────────┬───────────────────────────────┘ │
│ │ │
│ ┌─────────────────────▼───────────────────────────────┐ │
│ │ GraphValidator (validator.rs) │ │
│ │ • Shape inference (shape_inference.rs) │ │
│ │ • Type checking │ │
│ │ • Dependency analysis │ │
│ └─────────────────────┬───────────────────────────────┘ │
│ │ │
│ ┌─────────────────────▼───────────────────────────────┐ │
│ │ GraphInfo (graph.rs) - Immutable │ │
│ │ • Operations + Operands │ │
│ │ • Input/Output descriptors │ │
│ │ • Constant data (weights) │ │
│ └─────────────────────┬───────────────────────────────┘ │
└────────────────────────┼────────────────────────────────────┘
│
┌────────────────────────┼────────────────────────────────────┐
│ Backend Selection (Runtime) │
│ ┌─────────────────────▼───────────────────────────────┐ │
│ │ MLContext::select_backend() │ │
│ │ • accelerated: bool │ │
│ │ • power_preference: "low-power" | "high-perf" │ │
│ │ • Platform capabilities check │ │
│ └─────────────────────┬───────────────────────────────┘ │
└────────────────────────┼────────────────────────────────────┘
│
┌────────────────┼────────────────┐
│ │ │
┌───────▼──────┐ ┌──────▼──────┐ ┌─────▼──────┐
│ TensorRT │ │ ONNX Runtime│ │ CoreML │
│ Backend │ │ Backend │ │ Backend │
├──────────────┤ ├─────────────┤ ├────────────┤
│• ONNX Conv │ │• ONNX Conv │ │• CoreML │
│• trtx-rs FFI │ │• ORT C API │ │ Conv │
│• GPU Exec │ │• CPU/GPU │ │• macOS GPU │
│ │ │ Exec │ │ /Neural │
│ │ │ │ │ Engine │
└──────────────┘ └─────────────┘ └────────────┘
```
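To make the GraphValidator's shape-inference step concrete: each operation carries a shape rule, and for conv2d it is the standard convolution formula. The helper below is a hypothetical Python rendering of that rule; the real implementation lives in `shape_inference.rs` and its exact API differs:

```python
# Hypothetical sketch of the conv2d shape rule applied during validation.
# Standard formula: out = floor((in + 2*pad - kernel) / stride) + 1
def conv2d_output_shape(input_nchw, filter_oihw, padding=0, stride=1):
    n, c_in, h, w = input_nchw
    c_out, c_filter, kh, kw = filter_oihw
    # A channel mismatch here is the kind of error caught at build() time.
    assert c_in == c_filter, "input/filter channel mismatch"
    h_out = (h + 2 * padding - kh) // stride + 1
    w_out = (w + 2 * padding - kw) // stride + 1
    return [n, c_out, h_out, w_out]

# 3x3 conv, stride 2, pad 1 over a 1x3x224x224 input with 16 output channels:
print(conv2d_output_shape([1, 3, 224, 224], [16, 3, 3, 3], padding=1, stride=2))
# -> [1, 16, 112, 112]
```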
rustnn implements the W3C WebNN Device Selection Explainer:
```python
# User provides hints, platform selects optimal backend
context = ml.createContext({
    'deviceType': 'gpu',                   # accelerated=true
    'powerPreference': 'high-performance'
})
```

Selection Logic (`rustnn/src/python/context.rs:473`):
| Hints | Platform | Selected Backend | Device |
|---|---|---|---|
| `accelerated=false` | Any | ONNX Runtime | CPU only |
| `accelerated=true`, `power=low-power` | Any | ONNX Runtime / CoreML | NPU > GPU > CPU |
| `accelerated=true`, `power=high-performance` | Linux/Windows | TensorRT | NVIDIA GPU |
| `accelerated=true`, `power=high-performance` | macOS | CoreML | GPU / Neural Engine |
| `accelerated=true`, `power=default` | Any | TensorRT / CoreML / ORT | Best available |
Key Principles:
- Backend selection at context creation (not compile-time)
- Same graph can execute on multiple backends (sketched below)
- Platform autonomously selects actual device
- Feature flags control availability, not selection
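The first two principles can be sketched as follows, reusing the `createContext()` call shown above (the option values and per-context flow are illustrative assumptions):

```python
# Illustrative only: one graph definition, two contexts with different hints.
cpu_ctx = ml.createContext({'deviceType': 'cpu'})                    # -> ONNX Runtime, CPU
gpu_ctx = ml.createContext({'deviceType': 'gpu',
                            'powerPreference': 'high-performance'})  # -> TensorRT or CoreML

# The backend is fixed per context, not per graph: building the same graph
# definition under each context lets the platform place it on different devices,
# while feature flags only determine which backends are compiled in.
```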
webnn-onnx-utils provides the single source of truth for ONNX/WebNN interoperability:
```
webnn-onnx-utils/
├── data_types.rs         Type mapping (WebNN ↔ ONNX)
├── operation_names.rs    90+ operation mappings
├── attributes.rs         ONNX attribute parsing/building
├── shape_inference.rs    Operation shape rules
├── tensor_data.rs        Tensor serialization
└── identifiers.rs        Naming conventions
```
Used by:

- `rustnn/src/converters/onnx.rs` (WebNN → ONNX export)
- `webnn-graph/src/onnx/convert.rs` (ONNX → WebNN import)
This ensures consistent behavior across:
- ONNX Import (webnn-graph): ONNX model → WebNN graph
- ONNX Export (rustnn): WebNN graph → ONNX execution
- Type System: Unified data type handling (sketched below)
- Operation Semantics: Consistent operation mappings
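As an illustration of what `data_types.rs` centralizes, the WebNN ↔ ONNX type mapping is essentially a fixed lookup table. The sketch below expresses the idea in Python using the standard `onnx.TensorProto` element-type codes; the actual Rust table may cover more types and differs in form:

```python
# Sketch of the WebNN -> ONNX dtype table that data_types.rs centralizes.
# Right-hand values are the standard onnx.TensorProto element-type codes.
WEBNN_TO_ONNX_DTYPE = {
    'float32': 1,   # TensorProto.FLOAT
    'uint8':   2,   # TensorProto.UINT8
    'int8':    3,   # TensorProto.INT8
    'int32':   6,   # TensorProto.INT32
    'int64':   7,   # TensorProto.INT64
    'float16': 10,  # TensorProto.FLOAT16
    'uint32':  12,  # TensorProto.UINT32
    'uint64':  13,  # TensorProto.UINT64
}
```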
A typical end-to-end workflow:

```bash
# 1. Convert ONNX model to WebNN
cd webnn-graph
cargo run -- convert model.onnx --output model.webnn

# 2. Validate and visualize
cargo run -- validate model.webnn
cargo run -- visualize model.webnn --html

# 3. Execute with Python API (using rustnn)
python3 << EOF
import pywebnn as ml
import numpy as np

# Create context (selects backend)
context = ml.MLContext()

# Load graph
with open('model.webnn') as f:
    graph = context.load(f.read())

# Execute
inputs = {'input': np.random.randn(1, 3, 224, 224).astype(np.float32)}
outputs = graph.compute(inputs)
print(outputs)
EOF

# 4. Or use TensorRT explicitly (Linux/Windows with NVIDIA GPU)
cd rustnn
cargo run --features tensorrt -- execute model.webnn input.npy
```

This workspace includes the following projects:
- rustnn - Core WebNN implementation (Rust library)
- pywebnn - Python bindings for rustnn (PyO3)
- trtx-rs - TensorRT integration for Rust
- webnn-graph - WebNN graph DSL and visualizer
- webnn-onnx-utils - ONNX utilities for WebNN
- search-bikeshed - WebNN specification search tool
Run `make help` to see all available commands:

- `make setup` - Clone all repositories and set up the workspace
- `make clone` - Clone all project repositories
- `make build` - Build all workspace members
- `make build-release` - Build in release mode with optimizations
- `make test` - Run all tests
- `make check` - Run cargo check
- `make fmt` - Format all code
- `make clippy` - Run clippy linter
- `make clean` - Clean build artifacts
- `make update` - Pull latest changes from all repositories
- `make status` - Show git status for all projects
A typical development workflow:

- Initial setup: `make setup`
- Make changes in any of the sub-projects
- Build: `make build`
- Test: `make test`
- Format: `make fmt`
- Check: `make clippy`
Each project is a git repository, so you can work on them independently:

```bash
cd rustnn
git checkout -b my-feature
# make changes
git commit -m "Add feature"
git push origin my-feature
```

Working through the umbrella workspace still provides:

- Unified dependency management: Shared dependencies across projects (see the manifest sketch after this list)
- Cross-project refactoring: Changes can span multiple crates
- Single build command: Build all projects together
- Consistent tooling: Shared formatting and linting rules
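For example, unified dependency management typically comes from a top-level Cargo workspace manifest along these lines (member names are taken from the project list above; the actual file in this repo may differ):

```toml
# Hypothetical top-level Cargo.toml tying the Rust crates together.
[workspace]
members = ["rustnn", "webnn-graph", "webnn-onnx-utils", "trtx-rs", "search-bikeshed"]
resolver = "2"

[workspace.dependencies]
# Shared versions are declared once and inherited by members
# via `serde = { workspace = true }` in each crate's Cargo.toml.
serde = { version = "1", features = ["derive"] }
```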