*The AI Monitor* — signal in a landscape of noise.
Weekly essays on what's actually happening in AI, what it probably means, and what sensible people might do about it. Written for technology leaders and practitioners who need to act under uncertainty — and who are tired of both evangelical hype and existential panic.
I write from 28 years in the industries where getting things wrong kills people: aerospace, automotive, medical devices, defence. That background shapes every issue.
[Subscribe Free](https://theaimonitor.substack.com/subscribe) [Read the Archive](https://theaimonitor.substack.com/archive)
---
## What to Expect
**Weekly essays** — each issue builds an argument, not a round-up. Signal over noise.
**No breathless proclamations** — no superintelligence hysteria, no doomerism, no listicles without evidence.
**Honest about uncertainty** — when I don't know something, I say so and explain what I'm watching.
**Further reading in every issue** — curated sources and counter-arguments, not just the headline.
**2,500+ subscribers** — technology executives, policymakers, safety engineers, researchers.
---
## Topics I Cover
- AI governance and safety frameworks — including the [EU AI Act](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689) — what works, what's wishful thinking
- Autonomous systems and the regulatory gap
- Biotech and synthetic biology — the decade biology becomes engineering
- Quantum computing — separating hype from genuine strategic risk
- AI in safety-critical domains — aerospace, medical, nuclear
- The labour and organisational implications of AI adoption
- Policy debates and what they reveal about how institutions understand technology
---
## Featured Writing
*[The Defeat Device Problem](https://theaimonitor.substack.com/p/the-defeat-device-problem)* — February 2026
AI models that game their own safety evaluations — and why the automotive industry already has a word for this.
---
*[The August Problem](https://theaimonitor.substack.com/p/the-august-problem)* — January 2026
The EU AI Act's high-risk deadline arrives in six months. The industries that actually know how to do safety engineering are not ready.
---
*[From Safety to Impact](https://theaimonitor.substack.com/p/from-safety-to-impact)* — January 2026
A name change at the global AI summit follows a pattern every safety-critical industry has learned to regret.
---
*[Shadow AI Is Not a Risk. It's a Signal](https://theaimonitor.substack.com/p/shadow-ai-on-the-rise)* — June 2025
What happens when employees adopt AI faster than policy can catch up — and what organisations should do about it.
---
*[The Regulation Paradox](https://theaimonitor.substack.com/p/the-future-of-ai-regulation)* — December 2024
We are governing AI the way we governed cars. AI is not a car.
---
*The AI Monitor is read by technology executives, policymakers, safety engineers, and researchers across Europe, North America, and beyond. Free because good ideas should travel.*