
Just me, three Mac minis, and a humanoid. Totally normal Tuesday in the Coachella Valley.
I've been covering AI in the Coachella Valley since 2024 — workshops, a daily radio show, and this newsletter. I talk about artificial intelligence every single day: what it's doing, where it's going, and why it matters to all of us living, working, and playing in this desert.
And I have to be honest with you: I did not see this one coming.
Last week, the Pentagon blacklisted Anthropic — the company behind Claude, the AI model I use to run this show, build the intelligence platform behind it, and frankly think through most of the hard problems in my work. Not because Anthropic did anything wrong. Not because their technology failed. But because Anthropic refused to remove two guardrails from their military contract.
Guardrail one: no mass surveillance of American citizens.
Guardrail two: no fully autonomous weapons that can kill without a human making the decision.
That's it. Those were the red lines. And when Anthropic CEO Dario Amodei refused to remove them — even after the Pentagon threatened consequences — the President himself called the company "Radical Left AI" and "left-wing nut jobs."
So. If opposing mass surveillance of American citizens makes you a left-wing nut job, I want to go on record: I might be one too. Broadcasting from Rancho Mirage, building AI tools, drinking too much coffee. Apparently a full radical.
Let's Rewind: Here's the Backstory
Anthropic wasn't some protest company. They were the Pentagon's preferred AI partner. In late 2024, Claude became the first frontier AI model ever deployed inside U.S. classified military systems. By mid-2025, the Pentagon had signed a contract worth up to $200 million. Both parties agreed to the terms — including those two restrictions — before signing.
Then tensions came to a head over the military's use of Claude in the operation to capture former Venezuelan president Nicolás Maduro, conducted through Anthropic's partnership with Palantir. According to a senior administration official cited by Axios, an Anthropic executive contacted Palantir to ask whether Claude had been used in the raid — a raid in which people were killed. The Pentagon interpreted that as a provocation. Anthropic flatly denied the characterization, saying the company had not discussed the use of Claude for specific operations. Both sides told a different story. What happened next was not in dispute.
The Pentagon made one final offer. They framed it as a compromise — new contract language that appeared to address Anthropic's concerns. Anthropic read the fine print and found legalese that would allow the safeguards to be disregarded at will. It wasn't a compromise. It was a trap with better packaging.
Amodei published his response publicly: Anthropic "cannot in good conscience accede to their request."
On February 27, the moment the deadline expired, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk to national security. Here's the kicker: this designation had historically been reserved for foreign adversaries — Chinese tech firms, Russian state actors. It had never been used against an American company. And legal analysts immediately spotted the absurdity. The government was simultaneously arguing that Claude is so vital it can't tolerate any restrictions — yet so dangerous it must be purged from the entire defense supply chain. Amodei said it plainly: one position labels Anthropic a security risk, the other labels Claude as essential to national security.
Pick a lane, bro.
The Pentagon's reward for years of loyalty — and for turning down significant revenue by blocking Chinese Communist Party-linked firms from using Claude — was a blacklist and a presidential tweet. Cool deal.
Now Here's Where It Got Interesting
The public responded in a way that nobody on either side anticipated.
Claude had been sitting at around #41 on the App Store just before the Super Bowl, where Anthropic ran a pointed campaign mocking OpenAI's decision to introduce ads into ChatGPT. "Ads are coming to AI. But not to Claude." The ads pushed Claude into the top 10. Then the blacklist happened — and Claude shot to #1, overtaking ChatGPT entirely. Anthropic reported free users up more than 60% since January, daily signups at record highs, and server outages from unprecedented demand. Chalk art appeared outside Anthropic's San Francisco offices: "We are proud of you."
Meanwhile, OpenAI — whose CEO Sam Altman rushed a competing Pentagon deal to the table within hours of the blacklisting, accepting terms similar to what Anthropic had just been punished for — faced its own backlash. Altman admitted the timing looked bad, that OpenAI "shouldn't have rushed" the deal, and by Monday had already amended the contract to add explicit surveillance protections.
The corporate equivalent of "running into a burning building to look like a hero, then immediately calling the fire department because the building was, in fact, on fire" (credit: Milk Road). To be fair, Altman seems like the kind of person who would at least feel bad about it.
And it wasn't just users.
An open letter titled "We Will Not Be Divided" — signed by nearly 800 Google employees and close to 100 OpenAI employees — accused the Pentagon of trying to split the tech industry with fear. "They're trying to divide each company with fear that the other will give in," the letter stated. "That strategy only works if none of us know where the others stand." By Monday, nearly 900 people had signed. Separately, more than 100 Google AI engineers wrote an internal letter to Jeff Dean, head of Google DeepMind, urging the company to draw the same red lines as Anthropic. Dean appeared to agree. He wrote publicly that "mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression."
Why This Matters Beyond the Drama
Anthropic's position in this fight wasn't merely contractual. It was foundational.
These weren't terms they inserted into a deal to protect their brand. They're the reason the company exists. Dario Amodei and his team left OpenAI in 2021 specifically because they believed the race to deploy AI was outpacing the work to make it safe. Everything Anthropic has built since — the research, the products, the business model, the Super Bowl ads, the refusal to take Chinese Communist Party money — flows from that original conviction.
So when the Pentagon demanded they remove those two guardrails, they weren't asking Anthropic to amend a contract. They were asking them to become a different company. Amodei said no. The market, at least for one remarkable week, said thank you.
And here's where it gets local.
The same values debate that just played out between a billion-dollar AI lab and the most powerful military on earth is coming to every small business owner, school administrator, city council member, and nonprofit director in the Coachella Valley. Not someday. Now. Not as an abstraction. As a real decision about which tools to trust, which platforms to build on, and what you're willing to hand over to something you can't fully see or control.
I've run over 30 AI workshops across this valley. I've watched real people — restaurant owners, educators, healthcare workers, chamber directors — reckon with what these tools mean for their work and their lives. Most of them are still in the "let's see what this can do" phase. Very few have gotten to the "and here's where we draw the line" phase.
Anthropic drew their line seven years ago. Last week, the Pentagon tested it. And the rest of us — here in the desert and everywhere else — are going to have to figure out where ours is.
The story is still unfolding, so stay tuned.
Until next time, have a bright and sunshine-y sort of day.
Sat
Sat Singh hosts SunshineFM daily from Rancho Mirage. He uses Claude. He loves Claude. He doesn't want mass surveillance of fellow citizens. Possibly a left-wing nut job.