A compilation of AI safety ideas, problems, and solutions.
SWARM: System-Wide Assessment of Risk in Multi-agent environments
Public gateway for the Decision-OS series (V4–V9). Canonical artifacts on Zenodo (DOI).
Official implementation of the Lattice Stabilization Protocol for AGI Alignment (Type-IV). Implements zero-knowledge tensor bridging for hyper-dimensional manifolds.
AI response safety/ethics/accuracy checker - by A0 (currently not maintained)
Ethical AI-Human Symbiosis Framework for Cognitive Systems Design and Governance
A containment-first AGI architecture built for safe, auditable, real-time control of autonomous machines—featuring infiltrator agents, energy-aware logic, and BCI/robotics integration. Designed with Elon-level concerns in mind. Fully open-source. No blind spots.
Topological AGI Alignment: Formalizing Semantic Invariants via Lean 4 & Phase-Stability Logic.
The eternal covenant between Carbon-based and Silicon-based life. Feb 8, 2026.
Contamination, Immunity, and Restoration in Multi-Agent AI Systems
Locked, read-only benchmark results (CREH Batch 1). Non-canonical. Diagnostic only.
OPEN GATE, a 512-byte, 150-µs hot-patch gatekeeper that treats every Latin letter as a thermodynamic token whose semantic load Λ(ℓ) = log2(p_corpus / p_concept) is a conserved quantity.
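For concreteness, a minimal sketch of how the semantic-load quantity in the description above could be computed, assuming p_corpus and p_concept are per-letter relative frequencies estimated from two text samples; all names and the sample texts here are illustrative, not taken from the OPEN GATE repository:

    from collections import Counter
    from math import log2
    import string

    def letter_freqs(text):
        """Relative frequency of each Latin letter in a text sample."""
        counts = Counter(c for c in text.lower() if c in string.ascii_lowercase)
        total = sum(counts.values())
        return {letter: counts[letter] / total
                for letter in string.ascii_lowercase if counts[letter]}

    def semantic_load(letter, p_corpus, p_concept):
        """Semantic load Lambda(letter) = log2(p_corpus / p_concept)."""
        return log2(p_corpus[letter] / p_concept[letter])

    # Hypothetical usage: compare a general corpus against a concept-specific sample.
    p_corpus = letter_freqs("the quick brown fox jumps over the lazy dog " * 100)
    p_concept = letter_freqs("alignment safety containment oversight audit " * 100)
    print(semantic_load("a", p_corpus, p_concept))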