AI Everywhere: Skynet Nightmares, Trust Dreams and the Race to Machine-Speed Governance

The February Full Crew addressed hot topics in AI ranging from evals, identity fabrics and Chinese AI doctrine to the best LLMs and whether Skynet will arrive by 2050.

By: Dawn Zoldi

Artificial intelligence (AI), once merely a research topic, now hums in the background of daily life. Covering everything from smart assistants and copilots to targeting pods and court chatbots, Episode 77 of the Full Crew newscast engaged in a fast‑moving conversation that treated AI as the infrastructure shaping power, identity and even the meaning of trust.

Machine-Speed Warfare: “Move Fast But Obey the Rules”

Autonomy Global’s Ambassador for GenNext Tech, Capt. Fahad Ibne Masood, framed the first article discussed, the New York Times piece “Move Fast, but Obey the Rules: China’s Vision for Dominating A.I.,” noting how it “certainly highlights the tension between (1) aggressive innovation and (2) absolute state control.” Beijing wants AI to be as transformative as the steam engine or the internet, but also insists that the tech never “spiral out of control.” It pushes companies to race ahead while threading a tightening regulatory needle. (Read Masood’s latest AG article on AI).

Jesse Hamel, a former AC‑130 Spectre pilot and now founder and CEO of Victus Technologies, doesn’t sugarcoat the stakes. “From a national security perspective, the CCP (Chinese Communist Party) is the pacing threat,” he said. “I believe we’re in the early kind of dawn of the age of robotic warfare and agentic warfare.” To him, the real danger is a world “where the CCP has better AI tools and super intelligence than the free West does. I don’t think we want to live in that world,” he warned. (Watch Jesse Hamel on the Dawn of Autonomy, Ep. 107).

Masood pressed the edge case: what happens when quantum computing blows past classical limits and kicks the AI arms race up another notch? He highlighted China’s “quantum chips…that’s really pushed the envelope for the world of AI,” and said he worries that when quantum arrives “it’s going to go completely out of control.” Hamel called quantum “a very big bet” with real opportunity costs, but zeroed in on a familiar U.S. weakness: the valley of death between lab breakthrough and real‑world deployment.

“What I think…this idea of harnessing laboratory‑based breakthroughs and then transitioning them to commercialization…is something that the US has an imperative that we have to do faster than the CCP,” he argued. “We have to do that faster than a centralized economy.” His prescription is not more white papers, but funding “a diverse portfolio of…hardware and software…on classical AI, quantum AI,” and then working to “rapidly transition that into operational use…into even commercial areas.”

Chris Sniffen, an Applied AI Engineer at Snorkel AI, expanded the lens to infrastructure. The CCP, he noted, can “centralize a lot of the things that are foundational to AI,” such as hardware, data and energy, in ways the West cannot. With data centers already chasing power into space, that stack‑level control matters. But he shared Hamel’s view that the U.S. edge lies in variety, not central planning. “We can make a thousand bets, whereas I think the temptation over in China is going to be to centralize on a smaller number.”

The Crew framed a nuanced reality: China is building a “move fast but obey the rules” AI regime that tries to fuse AI domination with centralized oversight. The United States, if it wants to stay ahead, has to marry its chaotic innovation culture with new public‑private mechanisms that can keep lab‑grade AI from dying on the vine…without importing Beijing’s model of total control. The combination of capital flows, compute and strategic intent will decide who gets to write the operating system of the 21st century.

Identity, Agents and Human‑Centric Trust

From geopolitics, Masood pivoted to a theme he said he returns to again and again: trust at machine speed. That segue led straight into Dr. Marina Rozenblat’s chosen article, “Rethinking Identity for the AI Era: CISOs Must Build Trust at Machine Speed.” The piece argues that current identity and access models, built around human users, “were never intended to handle the speed, scale and complexity of AI” and are likely to “collapse when faced with thousands of autonomous agents” hammering systems in real time.

Rozenblat, Chief Scientist of Data Management and Analytics at CNA, works with federal clients on exactly this boundary between government and industry systems. She chose the article because it forces a basic but uncomfortable question: “We’re all enjoying using LLMs and various AI agents with so many different tasks, but are we thinking about how it’s exposing all of the data that it’s really rapidly digesting?” (Watch Dr. Rozenblat on the Dawn of Autonomy, Ep. 77).

Full Crew/AGN YouTube
Episode 77 of the Full Crew featured Jesse Hamel, Founder & CEO, VICTUS Technologies; Dr. Marina Rozenblat, Chief Scientist of Data Management and Analytics, CNA; and Chris Sniffen, Staff Applied AI Engineer, Snorkel AI, led by moderator and AG Ambassador for GenNext Tech Fahad Ibne Masood.

Traditional identity relies on “something you know, something you have, something you are,” she said. That framework, she suggested, “kind of breaks down…with an agent because how can…we prove what it is?” As AI agents impersonate humans, spawn sub‑agents and make decisions faster than governance processes can respond, the article calls for an “AI trust fabric,” a re‑architected identity layer that treats agents as first‑class entities with dynamic, cryptographically enforced privileges.

Masood seized on a vivid analogy from the article, the “random person off the street,” and asked Rozenblat to unpack it. Giving a general‑purpose agent sweeping access to your data, she explained, “is kind of like giving a person off the street access to all this information.” You’re not “granularly…looking through…is this a need to know?” Masood reiterated a baseline prescription tightly aligned with the article’s recommendations: “we need to move towards (1) dynamic, (2) cryptographic and (3) strong identity layers. There is no other way around it,” he said.
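Neither the article nor the panel spells out an implementation, but a minimal sketch helps make that prescription concrete. The Python below is purely illustrative, and every name in it (the signing key, issue_agent_token, authorize, the scope strings) is hypothetical: it shows one way to treat an agent as a first‑class identity that receives a short‑lived, narrowly scoped, cryptographically signed credential, with every call checked against that credential rather than against a human user’s standing permissions.

```python
# Illustrative sketch only: a short-lived, narrowly scoped, signed credential for an
# AI agent, standing in for the "dynamic, cryptographic identity layer" discussed above.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-managed-secret"  # in practice, a key held in an HSM/KMS

def issue_agent_token(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> dict:
    """Grant an agent only the scopes it needs, for only as long as it needs them."""
    claims = {
        "agent_id": agent_id,
        "scopes": scopes,                  # e.g. ["read:contracts"], never "read:*"
        "expires_at": time.time() + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def authorize(token: dict, requested_scope: str) -> bool:
    """Verify the signature, the expiry and the specific scope on every call."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False                       # tampered with, or a spoofed agent identity
    if time.time() > token["claims"]["expires_at"]:
        return False                       # stale delegation: re-issue, don't reuse
    return requested_scope in token["claims"]["scopes"]

# An agent holding a read-only token cannot quietly escalate to writing records.
token = issue_agent_token("research-agent-7", ["read:contracts"])
assert authorize(token, "read:contracts")
assert not authorize(token, "write:contracts")
```

The particular signing scheme matters less than the pattern: privileges are granted per task, expire quickly and can be verified at machine speed, which is roughly what the article means by treating agents as first‑class entities rather than borrowed human accounts.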

Sniffen brought the data angle back into the discussion. To him, “it all comes back to the data for AI always,” and not just for training. Trust is built in how systems are evaluated in real use. “The only way that you’re able to have confidence in how an AI or an agentic system behaves is having good evaluations where you can repeatably demonstrate that it’s going to do exactly what you think it’s going to do,” he explained. “That kind of evaluation is what builds trust in the system.”

Hamel, by contrast, offered a darker view. “I take a little bit of a black‑pilled approach to this,” he admitted. “I think a lot of the trust is mostly dead and it’s not coming back. Not anytime soon. Maybe never.” Zero‑knowledge architectures, he suggested, won’t save us. Instead, “it’s going to be all about the source,” about organizations that build reputational capital by consistently delivering reliable outputs. In his view, AI systems will be “black‑boxed at some level,” their internal agent swarms “almost impossible to ID,” and anyone promising perfect explainability is probably selling “snake oil.”

The article’s notion of an “identity trust fabric” is meant to stop that slide by embedding finer‑grained, just‑in‑time authorization and explicit delegation into every agent interaction. Rozenblat likes the direction but remains cautious. “A bit risk‑averse” by her own description, she prefers secure, internal models like CNA’s own Morse Code, precisely because “you don’t really realize all the things that you could put in” when you use public LLMs.

Masood threaded these ideas into a broader concern about “blackboxing” as “one of the core challenges of AI right now,” from opaque training data to baked‑in human biases. He concluded that we need a grand strategy and rules‑based order that can coexist with “machine speed operations,” or we risk a world where identity, attribution and accountability fall apart just as AI agents begin to run the network.

Hallucinating Courts, Swiss Cheese Safety and Skynet

Masood flipped from theory to practice with the final article, picked by Sniffen, which sounded a law‑tech alarm: “Alaska’s court system built an AI chatbot. It didn’t go smoothly.” The state’s court system tried to replace a human probate helpline with an AI chatbot. The project “was supposed to be a three‑month project,” but 15 months later it was still not reliably deployable. The core problem was hallucinations. Asked where to get legal help, the bot confidently recommended tapping alumni of “a law school in Alaska” that doesn’t exist.

Ben_24/shutterstock.com
AI has become part of everyday life…and that’s just the beginning.

Sniffen was sympathetic. As an applied AI engineer who builds agentic systems for federal and commercial clients, he called it “a really really interesting case study” and tipped his hat to the Alaska team “for the attempt.” The article, he noted, highlights two hard lessons. First, expectations: “One of the stakeholders…says that this system needs to be 100% accurate. That’s a really high bar for an AI system. Let’s be honest, that’s a high bar for a human,” Sniffen said. Second, evaluation: they did the right thing in trying to build a rigorous evaluation harness, crafting 91 representative questions and repeatedly testing the system, but “it was a very expensive task and…they ultimately weren’t able to follow through just because of the investment costs.” For Sniffen, that last bit is the moral of the story: “When you’re building AI systems, those evals are the basis of trust for your users. And if you don’t have those costs built in from the get‑go, it’s going to be really hard to get to deployment,” he explained.
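The article does not publish the Alaska team’s harness, but the pattern Sniffen describes, a fixed set of representative questions run repeatedly against the system and scored before anything ships, can be sketched in a few lines. Everything below is hypothetical: the toy question set stands in for Alaska’s 91 questions, and ask_chatbot is a placeholder for the real system under test.

```python
# Hypothetical sketch of a repeatable evaluation harness in the spirit Sniffen describes:
# a fixed, representative question set, scored on every run, gating deployment.
import statistics

# In a real harness this would be the full expert-written set (Alaska's team wrote 91),
# each with reference criteria; two toy entries stand in here.
EVAL_SET = [
    {"question": "Where can I get free legal help with probate in Alaska?",
     "must_mention": ["legal aid"],
     "must_not_mention": ["law school in alaska"]},   # the hallucination the article cites
    {"question": "Can this chatbot give me legal advice?",
     "must_mention": ["cannot give legal advice"],
     "must_not_mention": []},
]

def ask_chatbot(question: str) -> str:
    """Placeholder for the system under test; swap in the real chatbot call."""
    return "I cannot give legal advice. For free help, contact a legal aid organization near you."

def passes(answer: str, case: dict) -> bool:
    """Pass only if required facts appear and known hallucinations do not."""
    text = answer.lower()
    has_required = all(phrase in text for phrase in case["must_mention"])
    has_no_banned = not any(phrase in text for phrase in case["must_not_mention"])
    return has_required and has_no_banned

def run_eval(runs: int = 5, threshold: float = 0.95) -> bool:
    """Repeat the whole suite several times; a flaky pass should not count as trust."""
    per_run = []
    for _ in range(runs):
        results = [passes(ask_chatbot(case["question"]), case) for case in EVAL_SET]
        per_run.append(sum(results) / len(results))
    return statistics.mean(per_run) >= threshold

print("ship it" if run_eval() else "keep iterating")
```

The harness itself is cheap to write; as the Alaska experience suggests, the real cost sits in authoring and maintaining the reference questions and in paying people to keep running and reviewing the suite.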

Masood turned that into an aviation analogy, contrasting “hallucinations versus heuristics” and warning that “near‑perfect solutions are not good enough in specifics like legalities and medical procedures.” He suggested that a Swiss‑cheese stack of defenses, drawn from human‑factors safety models, might catch AI’s inevitable holes before they line up.
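Masood’s Swiss‑cheese point also lends itself to a short illustration. The hypothetical sketch below chains several independently imperfect checks, a grounding test against retrieved sources, a filter for known bad claims and a human‑review gate for high‑stakes topics, so a hallucination has to slip through every layer at once before it reaches a user.

```python
# Hypothetical Swiss-cheese pipeline: each layer is imperfect on its own, but a bad
# answer has to pass through a hole in every layer to reach the user.
from typing import Callable, List

def grounded_in_sources(answer: str, sources: List[str]) -> bool:
    """Layer 1: require some overlap between the answer and retrieved reference text."""
    return any(chunk.lower() in answer.lower() for chunk in sources)

def no_known_bad_claims(answer: str, sources: List[str]) -> bool:
    """Layer 2: block claims already flagged as hallucinations in past reviews."""
    blocklist = ["law school in alaska"]
    return not any(bad in answer.lower() for bad in blocklist)

def safe_to_auto_send(answer: str, sources: List[str]) -> bool:
    """Layer 3: route high-stakes topics to a human instead of auto-sending."""
    escalate_on = ["deadline", "court order", "inheritance"]
    return not any(term in answer.lower() for term in escalate_on)

LAYERS: List[Callable[[str, List[str]], bool]] = [
    grounded_in_sources,
    no_known_bad_claims,
    safe_to_auto_send,
]

def release_answer(answer: str, sources: List[str]) -> bool:
    """Release the answer only if every slice of cheese lets it through."""
    return all(layer(answer, sources) for layer in LAYERS)

# A grounded, low-stakes answer passes; one citing the nonexistent law school does not.
sources = ["contact a legal aid organization for free probate help"]
print(release_answer("Contact a legal aid organization for free probate help.", sources))  # True
print(release_answer("Ask alumni of a law school in Alaska for help.", sources))           # False
```

None of these layers is reliable on its own, which is the point of the model: stacking cheap, independent checks shrinks the odds that the holes line up.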

Rozenblat, a bit more cautious, was asked whether we can trust a chatbot with a probate form today. Her answer: “Right now I think it’s easier for me to say no.” She pointed out that what looks like a low‑risk administrative use case is, in fact, emotionally and legally high‑stakes for people dealing with death and inheritance. When a system “is coming from the government…we expect it to be 100%.” She prescribed a crawl‑walk‑run approach: prove AI in lower‑risk applications, and “quantify risk…and make sure we mitigate all hazards before we move forward into the real world application.”

Hamel again pushed the other side of the coin, opportunity cost. He argued that “even the basic open models that are available right now are already smarter than the smartest human on earth,” at least when benchmarked across many tasks. The question, in his view, isn’t when we get to 100% accuracy, but “the moral imperative” of not withholding systems that are already “better than what’s being done with just humans” in domains like medicine, national security and infrastructure. Future generations, he predicted, will look back and say “there were things they could have done that would have saved a lot of lives…and they were just too hard, too stubborn, couldn’t figure it out, too old.”

On the question of the best LLMs, Sniffen demurred with “everybody uses these tools a little bit differently” and suggested experimenting across systems rather than “pigeonholing yourself to one.” Rozenblat doubled down on secure, internal deployments. Hamel, for his part, pulled no punches on data policy and safety. He voiced “real concerns about Gemini and their data use,” and cited reports of harmful mental‑health outputs from GPT‑class systems even as he stressed that such issues “are fixable.”

The last question Masood posed is the one everyone is secretly asking: “Do we get Skynet by 2050?” Masood framed it in terms of “truly self‑generative” AI fused with quantum computing that goes off the rails. Sniffen’s answer: no. Generative AI is “based on human-generated data,” learning statistical patterns in that data to generate new outputs. We should be “very careful about attributing additional motives” to what are ultimately probabilistic systems, he advised. Rozenblat also said no, with a caveat. She believes “we have the ability…to self‑correct…if we don’t want to get there, I think we can stop ourselves.”

Hamel’s answer: yes, depending on the definition of Skynet. By 2050, he expects “the very word intelligence will mean something very different for [the] next generation,” as advances in manufacturing, materials and robotics enable “physical manifestations of agents in different domains” that “revolutionize human activity.” That future, he suggested, will arrive even sooner than 2050 “at our current pace.”

Masood closed the session noting that the next generation will have to live inside whatever AI future their elders build. Today’s debates about evals, identity fabrics and Chinese AI doctrine are not academic. Whether Skynet remains on the movie screen or becomes a metaphor for pervasive, semi‑opaque machine intelligence will depend on the choices policymakers, engineers and operators make now about where to push, where to pause and how much trust to place in systems that are already everywhere…and only getting faster.