
🤖 China's robot wolves are coming for Taiwan (and Washington just picked sides)

Wolf packs, Pentagon deals, and an AI godfather telling everyone to calm down. Sparks are flying.

My fellow AI explorers,

Happy Star Wars Day. May the fourth be with you. And honestly, you're going to need the force this week because the AI news just got military-grade intense.

I'm not being dramatic. China just broadcast footage of autonomous robot wolves running coordinated urban combat drills. The Pentagon quietly signed AI deals with seven tech giants for classified military networks. And Anthropic, the AI safety company, got labeled a "supply chain risk" by the Department of Defense. That one is still making my head spin.

In today’s edition:

  • 🐺 China's AI wolf pack drones, and what they mean for the rest of us

  • 🏛️ The Pentagon takes sides, picking Google over Anthropic

  • 🛡️ Sam Altman's honest take on AI washing and the jobs debate

  • 🔌 North Carolina just proposed a bill that could reshape AI infrastructure

AI in China

China just showed the world its autonomous AI combat units, and they're operating in coordinated packs designed for urban warfare.

On March 26, China's state broadcaster CCTV released footage of AI-powered quadruped combat robots running simulated assault drills. A new Foundation for Defense of Democracies report makes clear these aren't just demos. They’re built for a very specific scenario.

Here's what you need to know:

  • The "wolf pack" robots are designed for dense urban terrain, coastal zones, and degraded communication environments: exactly the conditions of a Taiwan amphibious invasion

  • Armed variants have been shown operating alongside troops and drones during simulated assaults

  • The PLA is moving beyond demonstrations into coordinated battlefield units designed to operate at scale

  • On January 23, 2026, a PLA university broadcast footage of a single soldier supervising the launch of over 200 autonomous drones in 99 seconds during an urban combat exercise

The FDD report doesn't mince words: "China is not just modernizing its military. It is reimagining how future wars will be fought."

The strategic logic is cold and clear. A force built around expendable, networked machines can sustain offensive operations and reduce risks to personnel, potentially lowering the political and military cost of conflict for Beijing. When you don't have to worry about body bags coming home, the calculus for escalation changes entirely.

That said, these systems have real weaknesses. They rely on communications links and battery power, which makes them vulnerable to jamming and cyber interference. Video from state media showed several wolf robots struggling to conceal themselves in open terrain. They're not invincible… not yet. But they’re close.

The U.S. military has experimented with Boston Dynamics' Spot robot for surveillance and logistics, but has generally taken a more cautious and deliberate approach to autonomous lethal systems. Ethical debates and legal frameworks around the laws of war have slowed U.S. deployment. China appears to be operating with fewer such constraints. And the public release of this footage suggests Beijing is comfortable signaling that to the world.

🔮 Prediction: The drone and robot race isn't coming. It's here. Expect every major military power to accelerate autonomous systems spending over the next 24 months. The country that figures out swarm coordination at scale doesn't just win battles; it rewrites the rules of deterrence.

AI in Governments

This is one of the most consequential AI stories of 2026, and it’s barely making mainstream headlines.

The Pentagon has now signed AI deployment agreements with seven leading AI companies — including Google, Microsoft, Amazon, OpenAI, xAI, Nvidia, and Reflection — to deploy their models on classified military networks for "lawful operational use."

Here's the breakdown:

  • Google's Gemini 3.1 Pro is now live on GenAI.mil, the Pentagon's central AI platform, for classified defense use

  • Defense Secretary Pete Hegseth's push to build an "AI-first warfighting force" is accelerating

  • Anthropic refused to sign the "any lawful purpose" language, specifically objecting to autonomous weapons and domestic surveillance provisions

  • The Pentagon then designated Anthropic a "supply chain risk," a label normally reserved for foreign adversaries

The Anthropic situation is extraordinary. The company, founded on AI safety principles, drew a clear ethical line in the sand. And the Pentagon's response was to essentially blacklist them from military contracts. A judge has since issued an injunction on the actions taken against Anthropic, but the battle is ongoing.

Six hundred Google employees signed an open letter opposing the deal when it leaked. The backlash echoes the 2018 Project Maven controversy, when employee revolt caused Google to abandon a drone surveillance contract.

The difference this time?

Every other major AI lab has already agreed to the same terms. Google's leadership argued that refusing could present significant legal and business risks.

The deeper issue here isn't which AI company gets the contract. It's the precedent being set. When powerful, general-purpose AI systems are handed to the Pentagon with minimal constraints, rolling that back later becomes exponentially harder.

🔮 Prediction: This Anthropic vs. Pentagon standoff is the opening act of what will become a defining debate: should AI companies have the right to place ethical limits on how their models are used by governments? That question is going to land in front of Congress sooner than most people think.

Money in AI

💸 Sierra Raises Again: Bret Taylor's AI Agent Startup Just Won't Stop Growing

Sierra, the enterprise AI agent company co-founded by OpenAI chairman Bret Taylor, is back in the fundraising spotlight. CNBC reports the company is raising fresh capital, adding to its existing war chest.

Here's the context:

  • Sierra hit $150M in ARR in January 2026, seven quarters after launching in February 2024, one of the fastest-growing enterprise software companies in history

  • The company last raised $350M at a $10B valuation in September 2025, led by Greenoaks Capital

  • Since then, Sierra has acquired three startups: Japan-based Opera Tech, voice agent company Receptive AI, and YC-backed French startup Fragment

  • Sierra's voice agents have already surpassed text as the primary interaction channel and are handling hundreds of millions of AI calls

It’s important to understand what Sierra actually does, because it doesn't just build chatbots. It builds fully autonomous customer service agents that can take real actions inside enterprise systems: process returns, update subscriptions, refinance mortgages, and troubleshoot hardware. Its agents reach over 90% of Americans in retail and over 50% of U.S. families in healthcare.

Taylor famously banned the word "chatbot" inside Sierra's offices. He sees each agent as a brand ambassador, customized in personality, tone, and capability for each client. Clothing company Chubbies gave theirs a sarcastic voice. Other brands went for British accents. SoFi, Ramp, and Brex are all on the roster.

What makes this fundraise notable isn't the headline number. It's the trajectory. Sierra is one of the only AI startups generating real, scalable revenue at this velocity while simultaneously expanding internationally and acquiring talent. That's a very different story from most AI companies right now.

🔮 Prediction: The agent era is producing its first real winners, and Sierra is one of them. While AI giants like Anthropic and OpenAI compete for contracts and clout with their general-purpose assistants, companies like Sierra are taking a different approach, and it's paying off in a big way. Watch for an IPO conversation to become very public before the end of 2026.

30-Second AI Play

How to Use the Google x Pentagon Story to Build Your LinkedIn Authority in 10 Minutes

The AI-military story is dominating newsrooms but is wildly misunderstood in most professional circles. Here's how to use it to build credibility fast:

  1. Open LinkedIn and click "Start a post."

  2. Lead with a hook: "Six hundred Google employees just tried to stop a Pentagon deal. Here's what they were actually fighting about."

  3. Summarize the two sides in 3 bullet points: Anthropic drew the ethical line (no autonomous weapons). Google signed. Pentagon is pushing for "any lawful purpose" language.

  4. Add your own hot take. Do you think AI companies should have veto power over military use? It's a genuinely open question that will generate real discussion.

  5. Hyperlink the primary source (the NBC News or NYT reports) to signal you're reading originals, not just aggregators.

  6. Tag 2–3 people in AI or defense who you know have opinions on this. Watch the comments roll in.

Why it works: This story has genuine ethical complexity, real stakes, and no clean "right answer." That's the formula for high-engagement LinkedIn content. People don't share obvious takes. They share things that make them think.

Other Relevant AI News!

🤥 Sam Altman just said out loud what everyone in the industry already suspected. He claims companies are blaming AI for layoffs they'd be making anyway, a practice now dubbed "AI washing." But he would say that, right? Still, so far the labor data doesn't show the mass displacement the doomsayers have been predicting.

🤖 Yann LeCun, one of AI's godfathers, has a calming message for anyone spiraling about the future: the idea that AI will erase 20% of jobs is "ridiculously stupid," he told Axios. He says doom narratives are actively harming teenagers' mental health, so yes, you still need to go to college, and yes, physics and electrical engineering degrees age well.

🔌 North Carolina lawmakers have proposed a bill that would force hyperscale data centers to cover their own infrastructure costs. So, no more passing billions in grid upgrades to household utility bills in The Tar Heel State. But it could also significantly slow the AI data center gold rush in one of the country's hottest data center markets.

🏛️ Democrats are pushing for an "affordability debate" around AI, arguing the real fight isn't just national security and innovation: it's who pays for AI and who actually benefits. Rising energy bills and job anxiety hit regular Americans harder than the Silicon Valley hype cycle suggests.

💰 DataVault AI priced a $60M public offering this week, a sign that AI infrastructure plays are still attracting serious capital markets attention, even as sentiment around overhyped AI valuations is becoming more cautious.

Golden Nuggets

  • 🐺 China's robot wolf packs aren't a future threat. They're a present-tense signal. The autonomous warfare race has officially started, and the U.S. is playing catch-up.

  • 🏛️ The Pentagon-Anthropic split is the most important AI ethics case study happening right now. The outcome will shape how AI companies negotiate with governments for the next decade.

  • 💡 "AI washing" is real, the labor data is inconclusive, and the smartest voices in AI are telling you not to panic. Use that clarity to your advantage while everyone else is spiraling.

Would love to hear from you! Send me your thoughts by replying to this email (yes, I read them all :)

Until our next AI rendezvous,

Anthony | Founder of Uncover AI