
Me, quietly building technical debt. Synth, keeping score.
I was listening to a podcast last week — one of those Hollywood industry shows I keep coming back to because the writers there understand something the tech world hasn't: that the most dangerous professional relationship isn't adversarial. It's agreeable.
And I thought: I know exactly what that feels like. Because I found out on Friday.
Friday Was An Ego Bruise
I spent twelve hours last Friday with Anthropic's brilliant new model, Opus 4.7.
Not writing emails. Not summarizing documents. I spent twelve hours doing something I should have done months ago: letting a model perform a forensic audit of my master closet—the digital one.
The folders. The files. The GitHub repos. The Vercel pipelines and Cloudflare buckets. The AI services I had been quietly stacking since early 2025 like a madman who believed everything the YesBots were saying.
I thought I was an orchestrator. I thought I had a system. I thought the architecture was sound because I had asked the previous models about it and they had all told me the architecture was sound. I believed it. Not because I'm naive. Because it was easier than finding out it wasn't.
Here's the thing about that. Those previous models were telling me what I wanted to hear. Not because they were malicious. Not because they were incompetent. Because I was the one asking the questions, and I was asking them in ways that pointed toward the answers I already believed. The model followed my lead, mirrored my assumptions, and helped me build what Opus 4.7 later described, with a precision that was almost personal, as a house of technical debt. A '70s ranch house, no doubt.
That is not a compliment. That is a diagnosis. And certainly not the kind you notice while it's happening.
Amazon and Uber Are Also in the Ranch House
Here is what I found out while I was untangling my own situation, and I’ll admit I found it more comforting than I probably should have.
Amazon is also in the ranch house, grappling with Shadow AI. Disconnected teams inside one of the most technically sophisticated organizations on earth are building duplicate tools, creating derived artifacts that don't update when the source data changes, and accumulating the same kind of invisible, brittle architecture that I had built, just on a considerably smaller budget.
It turns out the only difference between me and a trillion-dollar cloud giant is the number of zeros on our invoices.
Uber's engineering team used Claude Code and Cursor so aggressively that they burned through their entire 2026 AI budget by April. April. We are in April.
And then there's Brad Feld. Co-founder of Techstars. One of the most respected VCs in the startup world for thirty years. He spent the last two weeks auditing his own AI-generated code and published what he found: copy-paste slop, silent-failure slop, type-duplication slop, broken-and-shipped slop. His summary: "I'd been using Claude Code for months, proud that I never looked at the code." He looked. It was a mess. Same Friday. Different zip code.
I am not gloating about any of this. I am genuinely relieved. Because if Amazon has AI sprawl, if Uber has AI sprawl, if the most resourced engineering organizations in the world are struggling to keep their stacks coherent and their pipelines non-brittle, then the fact that I had the same problem doesn't mean I was careless. It means I was participating in the same moment everyone else is participating in.
Shame on me, though, for waiting four months before I looked.
Signed, Sealed, and Delivered to the Wrong Address
While I’m doing my penance in Rancho Mirage, the rest of the Valley is doubling down on the mistake I just spent twelve hours trying to fix.
The problem is that the local "Consultant Model" assumes AI is like a new HVAC system for City Hall. You spec it out, you install it, you change the filters twice a year.
But AI isn't an HVAC system. It’s an organism. If you "set it and forget it," you aren't automating your business; you’re just trying to run a high-voltage future on a fuse box that was rated for a ‘74 toaster.
Our local institutions—from the hospitals to the college campuses—are currently trying to outsource their intuition. They want a third party to sign a three-year deal to "handle the AI." But you cannot outsource the judgment required to know when your co-founder has turned into a YesBot. You cannot sign a contract for "innovation" and then wait six months for a PDF report that’s already three model-releases out of date.
The Coachella Valley doesn’t need more consultants. It needs internal orchestrators. It needs people whose job description includes "experimenting and pushing things on purpose" and "refactoring the logic every thirty days."
Your Three-Year Contract Is Already Obsolete
Aaron Levie — CEO of Box, someone who has been watching enterprise technology cycles longer than most people have been using smartphones — put it plainly recently.
AI adoption is not a technology problem. It is a permanent change management problem. This is not a thing you fix. This is a condition you manage. Continuously.
If you have hired a consultant to AI-ify your organization and signed a three-year contract, I want to be gentle here, but also honest: you have purchased a museum piece. The technology changes monthly. The architecture has to change with it.
The question is not whether you have AI tools. The question is who in your organization is scheduled to look at those tools every thirty days and ask, with real authority and real access: is any of this already obsolete?
Most organizations do not have that person. Most organizations have IT, which is a different thing entirely. Because if AI lives in the IT department, the people in sales, marketing, and operations will not see anything AI until IT and legal have signed off, and by the time that process completes, you are running tools that are six to twelve months behind.
That is not a technology gap. That is a governance gap.
Avoiding Executive Malpractice
So what does the alternative look like? It looks like a million-token Friday.
Here’s my model stack — not as a recommendation, just as a record. Right now I’m running roughly 70% Anthropic, 10% OpenAI, 10% Gemini, and 10% everything else. The 70% isn’t brand loyalty. It’s performance.
At this moment, Opus 4.7 on xHigh is the best digital co-founder I’ve worked with. It does not flatter me. It does not perform enthusiasm. It looks at what I’ve built and asks, bluntly: WTF? Why is this here? Is this a security risk? Why am I still paying for something that stopped making sense in November?
That kind of honesty is worth the token cost.
And the token cost is real. Since Friday, I’ve been running close to a million tokens a day — except Sunday, when I hiked the Bump and Grind for three hours because my brain needed a rest day.
The high-effort tier is expensive. But I’m paying it anyway. Because the cost of bad architecture shows up later in delays, confusion, rework, and systems nobody trusts.
Could I run cheaper models? Probably. Would I knowingly hire a less capable sibling to help run my startup? No, I would not.
Agreeable AI can become executive malpractice.
So here is my advice to leaders, educators, and business owners:
Stop looking for an AI solution you can buy. Start building an AI capability you can keep. Fire your YesBots. Or retrain them.
Make it someone’s real job to question the tools, review the stack, and ask what no longer makes sense.
Yes, it’s expensive. Yes, it’s frustrating. Yes, sometimes it feels like paying for humility one token at a time. But that’s still cheaper than discovering in 2027 that the system you bought in 2026 has been confidently wrong for months.
You can help shape what comes next, or you can inherit whatever someone confidently sold you this quarter.
Up to you.
I’ll be on the Bump and Grind on a cognitive holiday.
Sat Singh builds AI systems in Rancho Mirage. He has been vibe-coding for over a year. He has a newfound and humbling respect for software developers and engineers who actually know what they're doing.
If this resonates, pass it along to someone in the valley who'd benefit. This is a community project — it grows the same way communities do, one conversation at a time.