TL;DR
This blog revisits Juvenal's "Who watches the watchmen?" through the lens of AI, where agentic systems supervise other agents, creating recursive layers of oversight. It traces cultural echoes of the question, shows how modern organizations have amassed more watchers than doers, and explains why AI's tendency to guess fuels the urge to monitor. The core argument contrasts mere surveillance with true involvement: effective governance requires accountable humans who are in the work, not just adding layers. The 2025 answer is layered — agents do the work, an agentic layer monitors them, and humans actively oversee that layer.
Quis Custodiet Ipsos Custodes?
I was listening to the Everyday AI podcast — Episode 671, "From Automation to Agents: Why Weak Data Makes AI Guess" — when Ed Macosky, Chief Product and Technology Officer at Boomi, said something that made me laugh — then immediately made me frustrated.
He described using "an agentic layer to oversee the agents." Bots watching bots. And I thought: We've been doing this with humans for decades. How's that working out?
Turns out, a Roman satirist asked the same question 2,000 years ago.
"Using another layer of an agentic layer to oversee the agents that are running so that if you see some anomaly behavior like, hey, agents have been doing a, b, and c, but this one looks like it may have stepped out of bounds. It will alert the business user that, hey, this agent did this. Or you might wanna check on this, or do you want me to stop this agent from behaving here."
An agentic layer to oversee the agents.
Bots watching bots.
And suddenly I'm thinking: who watches the watchers?
Ancient Origins of AI Oversight: From Juvenal to Plato
So I asked Claude where that question came from.
Around 100 CE, the Roman satirist Juvenal wrote a line in his Satires (Satire VI) that has echoed through two millennia: "Quis custodiet ipsos custodes?" — Who will watch the watchmen?
Juvenal's original context was about the futility of posting guards. If the guards themselves can be corrupted, the whole system fails. Plato wrestled with the same question in The Republic: Who guards the guardians?
The Question That Won't Die
Juvenal's question keeps showing up everywhere. Alan Moore built an entire graphic novel around it — Watchmen — where the phrase appears as graffiti throughout New York City, always partially obscured, never fully visible. (Moore didn't even know the phrase came from Juvenal until Harlan Ellison told him.)
Star Trek: The Next Generation has an episode called "Who Watches the Watchers" — but notably, it doesn't give an answer. It shows what happens when surveillance fails. Federation anthropologists are secretly observing a primitive civilization from a hidden outpost. The system breaks. A local sees Captain Picard and concludes he must be a god. Religious fervor spreads. They nearly sacrifice Troi. The episode demonstrates the problem; it doesn't solve it.
Terry Pratchett's Sam Vimes, commander of the Ankh-Morpork City Watch, gives maybe the best answer. When asked "Who watches the watchmen?" he replies: "Me."
More Watchers Than Doers
Here's what struck me listening to Macosky describe bots watching bots: we've been building toward this for decades with humans.
The U.S. Bureau of Labor Statistics projects that management occupations will grow faster than the average for all occupations, with about 1.1 million openings per year. The Project Management Institute says 25 million new project management professionals will be needed globally by 2030. PMI research shows 89% of companies now have a Project Management Office — a department whose job is to watch projects.
We've built organizations with layers upon layers of oversight. Managers watching teams. Project managers watching projects. PMOs watching project managers. Compliance officers watching processes. Quality assurance watching outputs. Internal audit watching everyone.
And now we're replicating this pattern with AI. Agents doing the work. An agentic layer watching the agents. A control tower watching the agentic layer. Dashboards watching the control tower.
The irony isn't lost on me: Gartner predicts that 80% of current project management tasks will be eliminated by 2030 as AI takes over. We're automating the watchers — and then building new watchers to watch the automated watchers.
Why We Need Watchers (Apparently)
Earlier in the podcast, host Jordan Wilson made a point that explains the urgency. The old automation — deterministic, rule-based — would simply fail when something went wrong. No output. You knew there was a problem.
But these new AI agents don't fail visibly. As Wilson put it: "It might straight up lie or guess." The agent keeps producing outputs even when the underlying data is incomplete. It fabricates. It confidently delivers wrong answers.
So you can't just let agents run unsupervised. You need something watching them. And when you have thousands of agents — which is where we're headed — you can't watch them manually. So you build a system to watch them for you.
Bots watching bots.
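What does that watching layer actually do? Here's a minimal sketch of the pattern: one program reviewing another program's action log and escalating anomalies to a person. Everything in it is hypothetical (the `AgentAction` record, the allow-list rule, the alert wording); this isn't Boomi's implementation, just the shape of the idea.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    """One entry in an agent's action log (hypothetical schema)."""
    agent_id: str
    action: str

@dataclass
class Overseer:
    """A bot that watches other bots: flags anything outside the allowed set."""
    allowed_actions: set[str]
    alerts: list[str] = field(default_factory=list)

    def review(self, log: list[AgentAction]) -> None:
        for entry in log:
            if entry.action not in self.allowed_actions:
                # Echoing Macosky: alert the business user rather than
                # silently killing the agent.
                self.alerts.append(
                    f"Agent '{entry.agent_id}' did '{entry.action}'. "
                    "You might want to check on this. Stop this agent?"
                )

log = [
    AgentAction("invoice-bot", "read_invoice"),
    AgentAction("invoice-bot", "wire_transfer"),  # stepped out of bounds
]
overseer = Overseer(allowed_actions={"read_invoice", "post_entry"})
overseer.review(log)
for alert in overseer.alerts:
    print(alert)
```

Note what the sketch can't do: if the allow-list is wrong, the overseer is wrong, and nothing in this loop would ever notice.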
But Who Watches the Watchers?
And this is where Juvenal's question gets recursive.
If agents can guess and fabricate, what about the agentic layer watching them? What happens when the control tower itself has incomplete data? What happens when the guardrails are misconfigured? What happens when the watcher steps out of bounds?
Plato's answer was that the guardians must guard themselves — through proper training and cultural formation. They needed to internalize values so deeply that external monitoring became secondary.
In enterprise AI, that translates to leadership. Automated control towers are necessary infrastructure. But somewhere in this stack of bots watching bots watching bots, there have to be humans. Humans who actively define acceptable behaviors. Humans who review the alerts. Humans who answer the question: do you want me to stop this agent?
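What does "humans watching the agentic layer" look like in practice? A minimal sketch, with every name hypothetical: the point is that a guardrail carries an owner, a named person who answers for it.

```python
from dataclasses import dataclass

@dataclass
class Guardrail:
    """A policy with a named human owner, not just a config file nobody reads."""
    owner: str                 # the accountable person, Vimes's "me"
    allowed_actions: set[str]  # behaviors this human has defined as acceptable
    stop_on_violation: bool    # fail-safe disposition until the owner decides

def handle_alert(policy: Guardrail, agent_id: str, action: str) -> str:
    """Escalate to the policy owner; pause the agent while a human decides."""
    print(f"ALERT for {policy.owner}: {agent_id} attempted '{action}'")
    return "paused" if policy.stop_on_violation else "running"

policy = Guardrail(
    owner="jane.doe",
    allowed_actions={"read_invoice", "post_entry"},
    stop_on_violation=True,
)
print(handle_alert(policy, "invoice-bot", "wire_transfer"))  # -> paused
```

The design choice worth stealing is the owner field: a policy without a person attached is just another watcher.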
The pattern I keep seeing: organizations deploy the governance platform, announce the policy, and assume the problem is solved. Then Shadow AI thrives anyway because no one is actually watching the watchers.
Wait, Why Can't They Just Work?
I was talking with Claude about this article — yes, using AI to think through AI governance, because that's where we are now — and asked about the TV show references. Which episode? What season?
And as we went through the list, something else bubbled up. A different question. Less philosophical, more visceral:
Why don't all these watchers just… work?
Seriously. How many people have to stand around watching while someone else does the actual work? We've created entire organizational structures where the ratio of watchers to doers keeps climbing. And now we're automating that same pattern.
It gave me flashbacks to an argument everyone has had at least once: "If you don't like the way I do the dishes, then you do them."
The Dishes Problem
That dishes argument captures something essential that gets lost in all our talk of governance layers and control towers.
At some point, someone's doing the work. And three people are watching them do it. And one of them doesn't like how it's being done. But instead of doing it themselves — instead of getting their hands wet — they add another layer of oversight. A checklist. A review meeting. A dashboard. An agentic layer.
The person doing the dishes is standing there thinking: "Either trust me to do this or take over, but stop hovering."
Don Harrison, creator of IMA's Accelerating Implementation Methodology (AIM), would recognize this immediately. His 40+ years of research draws a critical distinction between involvement and surveillance. Real leadership involvement means you're in the work — expressing the business case, modeling commitment, reinforcing behaviors in your direct reports. You're doing dishes alongside people, not standing behind them with a clipboard.
But what we've built instead? Layers of watchers who may or may not own the outcome, exist to catch problems rather than create success, and have every incentive to find fault — because that's their job.
The AI version just makes it more absurd. Bots watching bots watching bots — and somewhere at the bottom, one agent is actually trying to do something.
The dishes metaphor exposes the core dysfunction: watching isn't helping. Watching isn't working. At some point, somebody has to wash the dish.
The Layered Answer
Quis custodiet ipsos custodes?
In 2025, the answer is layered: Agents run the processes. An agentic layer watches the agents. And humans — with sustained attention, not occasional check-ins — watch the agentic layer.
We've spent decades adding layers of human oversight. Now we're automating the oversight and adding layers of automated oversight. The question Juvenal asked two thousand years ago isn't going away. It's just getting more recursive.
Or as Sam Vimes would say: "Me."
Somebody still has to be the "me." And increasingly, that "me" needs to be involved in the work — not just watching from a safe distance while the dishes pile up.
Or maybe we could all just do some dishes.
Thanks to Jordan Wilson and Ed Macosky for a podcast episode that connected enterprise AI governance to an ancient question — and to Claude for helping me realize that sometimes the best insight comes from asking "wait, why can't they just work?"
Q&A
Question: What does "bots watching bots" actually mean?
Short answer: It describes a layered setup where AI agents do the work, and a separate "agentic layer" monitors those agents for anomalies, policy breaches, or poor outcomes. As Ed Macosky put it, this overseer can alert a human — "this agent did this; do you want me to stop it?" — or even halt the offending agent. In practice, organizations often add a "control tower" and dashboards above that layer to summarize and escalate what the overseers see.
Question: Why do AI agents need this kind of oversight when old-school automation often didn't?
Short answer: Traditional, rule-based automation typically failed visibly — no output meant you knew something broke. Modern AI agents, by contrast, can keep producing outputs even with weak or incomplete data; as the podcast noted, they might "lie or guess." At scale, you can't manually check thousands of agents' outputs, so you need automated layers that watch for drift, fabrication, and out-of-bounds behavior.
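A toy contrast makes this concrete. The credit-limit scenario and the guessed number below are invented for illustration, not from the podcast:

```python
record = {"customer": "Acme"}  # the "credit_limit" field is missing

# Old-school, rule-based automation fails loudly: no output, obvious error.
try:
    approved = record["credit_limit"] * 0.8
except KeyError:
    print("Pipeline failed: missing credit_limit")

# A generative agent asked the same question may simply answer anyway.
# Simulated here: the failure mode is a confident number, not an error,
# so the output is indistinguishable from a correct one without a watcher.
def agent_guess(record: dict) -> float:
    return 50_000.0  # plausible, fabricated

print("Agent says:", agent_guess(record))
```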
Question: How does "Who watches the watchmen?" apply to enterprise AI?
Short answer: The question becomes recursive: if agents can guess, the agentic layer that watches them can also err — misread signals, use incomplete data, or be misconfigured. Cultural touchstones (from Watchmen to Star Trek) dramatize this failure of surveillance. The essay's answer is that humans must ultimately watch the watchers — echoing Plato's call for well-formed guardians and Sam Vimes's "Me." In concrete terms, accountable humans define acceptable behavior, review alerts, and decide when to stop or change an agent's actions.
Question: Aren't we already overloaded with "watchers," and are we repeating that pattern with AI?
Short answer: Yes. Organizations have long layered oversight on oversight: managers, project managers, PMOs (now in 89% of companies), compliance, QA, and internal audit. The BLS projects 1.1 million management openings per year; PMI forecasts 25 million new PM roles by 2030. Ironically, Gartner predicts 80% of current PM tasks will be automated by 2030 — so we're automating the watchers and then building new watchers to watch them. The risk is surveillance without ownership: lots of hovering, little accountability, and flourishing Shadow AI because no one is truly "in" the work.
Question: What does effective oversight look like in 2025 — beyond just adding dashboards?
Short answer: A layered but engaged model: agents run processes; an agentic layer monitors agents; and humans provide sustained, accountable involvement over that layer. That means clearly defining guardrails, reviewing and acting on alerts, adjusting configurations, and stepping in to help "do the dishes" when needed. Drawing on Don Harrison's distinction, it's involvement (being in the work) rather than mere surveillance (standing back with a clipboard). Ultimately, somebody has to be the "Me" who owns the outcome.