Back To The Future of Software: How to Survive The AI Apocalypse with Tests, Prompts and Specs

DEVNEXUS Video Coming Soon

A presentation at DEVNEXUS in March 2026 in Atlanta, GA, USA by Baruch Sadogursky and Leonid Igolnik

Abstract

AI writes 42% of committed code, yet 96% of developers don’t trust it. Every abstraction leap in software history — compilers, VMs, cloud, serverless — was deterministic. AI isn’t. We’ve been giving monkeys GPUs and hoping for Shakespeare.

The industry responded with context engineering, skills, and spec-driven development — $322M+ in funding, 145k+ GitHub stars. But all of it optimizes the edges of a chasm: better inputs on one side, better outputs on the other, and stochastic monkeys in the middle.

The Intent Integrity Chain bridges that gap: human intent in, human validates the spec, machine creates deterministic tests (locked), monkey writes code, machine validates against locked tests. No monkey ever validates its own work. We wrapped a non-deterministic process in a deterministic chain — the same trust mechanism that made every previous abstraction leap work. We’ll show the chain in action with iikit (intent-integrity-kit), a fork of GitHub Spec Kit that adds the missing piece: test verification. Live demo included.