Splashed across headlines, embedded in marketing pitches and invoked in almost every sector from healthcare to national security, artificial intelligence (AI) has become a ubiquitous term. Yet, what does it even mean…and is it truly “changing the game” everywhere? In a recent Full Crew episode, three leading experts—Dr. Jacob Tyo (Raft), Michael Shrader (Carahsoft) and Halleh Seyson (CNA)—discussed what AI is, where it’s already making big impacts and some of the risks inherent in its deployment.
Defining The Defining Technology
To ground the discussion, the group defined AI’s key types. Traditional machine learning (ML) models are trained for specific tasks—think image recognition or predictive analytics. Large Language Models (LLMs), like GPT, can generate human-like text and code across a wide range of domains.
The latest buzz, however, surrounds agentic AI: systems that autonomously pursue goals, make decisions and adapt to new data or environments. These often integrate multiple models or agents to solve complex problems.
As Halleh Seyson, VP of Enterprise Systems and Data Analysis at CNA, explained, “Agentic AI is goal-oriented… It does everything to achieve that goal. That’s where governance and parameters set by humans become extremely important.”
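To make that idea concrete, here is a minimal sketch of the agentic pattern in Python. Everything in it is illustrative rather than any vendor’s API: a goal, a plan-act loop standing in for LLM calls, and human-set guardrails (an action allowlist and a cost budget) that bound what the agent may do, the “parameters set by humans” Seyson describes.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str    # e.g., "search", "summarize", "send_email"
    cost: float

# Human-set guardrails: the governance and parameters Seyson describes.
GUARDRAILS = {"allowed": {"search", "summarize"}, "budget": 10.0}

def plan_next_step(history):
    """Stand-in for an LLM call proposing the next action toward the goal."""
    return Action("search", 1.0) if len(history) < 3 else Action("summarize", 2.0)

def run_agent(goal: str, max_steps: int = 20):
    history, spent = [], 0.0
    for _ in range(max_steps):
        action = plan_next_step(history)
        # Governance check before every step: out-of-bounds plans escalate.
        if action.kind not in GUARDRAILS["allowed"] or spent + action.cost > GUARDRAILS["budget"]:
            return f"escalated to a human after {len(history)} steps"
        spent += action.cost
        history.append(action)
        if action.kind == "summarize":           # toy success condition
            return f"goal '{goal}' reached in {len(history)} steps (cost {spent})"
    return "step budget exhausted; escalated to a human"

print(run_agent("summarize this week's new CVEs"))
```

The point of the sketch is where the humans sit: the goal, the allowlist, the budget and the escalation path are all set outside the loop, while the agent decides everything inside it.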
Scientific Breakthroughs and the Power of Agentic AI
According to Dr. Jacob Tyo, Director of AI/ML at Raft, AI has leaped, in terms of utility, from “better search” and “workflow automation” to fundamental scientific discovery. “The future is one defined by this type of AI being applied such that they can contribute to our fundamental understanding. This is important for the military space too; think military planning,” said Dr. Tyo.
Take, for example, Google DeepMind’s AlphaEvolve, an AI coding agent that orchestrates teams of LLMs to collaboratively tackle hard scientific problems, using iterative coding and robust evaluation frameworks to validate solutions. Its evolutionary framework means it doesn’t just generate code—it tests, refines and selects the best solutions, allowing it to potentially push beyond what is already known.
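The evolutionary loop described above (generate candidates, evaluate them, keep the best, mutate from there) can be sketched in a few lines. This toy version evolves numeric vectors against a fixed scoring function; AlphaEvolve itself has LLMs propose code edits and validates them with far more rigorous evaluation harnesses, so treat this only as the shape of the idea.

```python
import random

def evaluate(candidate):
    """Automated evaluator (toy): score peaks at (3, 3, 3). AlphaEvolve
    instead runs generated code through problem-specific test harnesses."""
    return -sum((x - 3.0) ** 2 for x in candidate)

def mutate(candidate):
    """Stand-in for an LLM proposing an edit to a parent solution."""
    return [x + random.gauss(0, 0.5) for x in candidate]

def evolve(generations=200, pop_size=20, dim=3):
    population = [[random.uniform(-10, 10) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[: pop_size // 4]        # selection: keep the best
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=evaluate)

print([round(x, 2) for x in evolve()])  # converges near [3.0, 3.0, 3.0]
```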
And those very frameworks recently made headway on a long-open scientific problem: the kissing number problem in 11 dimensions. In mathematics, the kissing number in n dimensions is the maximum number of non-overlapping unit spheres that can simultaneously touch another unit sphere. In 3 dimensions, the answer is 12. In higher dimensions, the problem becomes vastly more complex, and exact answers are known only for a few cases. For 11 dimensions, the exact kissing number remains unknown, but AlphaEvolve pushed the frontier: it constructed a configuration of 593 spheres, establishing that the kissing number is at least 593 and improving on the previously known lower bound.
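Part of what makes such a result trustworthy is that a candidate answer is mechanically checkable: for unit spheres, every touching sphere’s center must sit at distance 2 from the central sphere’s center, and any two centers must be at least distance 2 apart. A minimal Python checker, verifying the classic 12-point configuration in 3 dimensions, illustrates the kind of validation a 593-point, 11-dimensional configuration would also pass:

```python
import math
from itertools import combinations

def is_kissing_configuration(points, tol=1e-9):
    """Check a candidate kissing configuration for unit spheres: every
    center lies at distance 2 from the origin (touching the central
    sphere) and at least distance 2 from every other center (no overlap)."""
    origin = (0.0,) * len(points[0])
    if any(abs(math.dist(p, origin) - 2.0) > tol for p in points):
        return False
    return all(math.dist(p, q) >= 2.0 - tol for p, q in combinations(points, 2))

# The 12 contact points of the face-centered cubic packing in 3 dimensions,
# scaled so each center sits at distance 2 from the origin.
s = math.sqrt(2.0)
fcc = [(a * s, b * s, 0.0) for a in (1, -1) for b in (1, -1)] \
    + [(a * s, 0.0, b * s) for a in (1, -1) for b in (1, -1)] \
    + [(0.0, a * s, b * s) for a in (1, -1) for b in (1, -1)]

print(len(fcc), is_kissing_configuration(fcc))  # 12 True
```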
Why does this matter? For several reasons. In terms of mathematical advancement, this result pushes forward knowledge in discrete geometry and sphere packing, a field with deep connections to coding theory, cryptography and theoretical physics. It also demonstrates that AI can contribute to open problems in pure mathematics, not just applied or computational domains, by discovering novel and non-intuitive solutions. Finally, this result set a new benchmark for both mathematicians and AI researchers by showing that machine intelligence can augment or even surpass traditional mathematical discovery in certain complex domains.
Beware The Entity?
As we learned, AI can be especially powerful for scientific discovery, where the search space is vast and traditional methods are slow or infeasible. But when it can make headway on a mathematical problem that has occupied human mathematicians for centuries, should we be worried about it taking over the world? (Think: the Entity from Mission: Impossible – Dead Reckoning.)
The panelists emphasized that human-AI teaming remains essential. “You still need the humans to make sure it’s working right,” observed Michael Shrader, VP of Innovative & Intelligence Solutions at Carahsoft.
Tyo added, “If AI is doing something not interpretable to us, there’s a huge question about its value. A criterion for these big advancements is that we can understand them as humans.”
Cybersecurity, Transparency and the Risks of Agentic AI
This enduring need for “human-in-the-loop” oversight is especially acute in high-stakes domains like cybersecurity operations, where agentic AI presents both promise and peril.
In cybersecurity, agentic AI often operates as an “autonomous agent,” detecting and responding to threats in real time. However, the complexity and opacity of these systems can make it difficult to audit their actions, trace decision logic or ensure they haven’t been compromised themselves—a recursive risk.
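One common mitigation is to force the agent to leave an audit trail: record every proposed action, its trigger and its stated rationale before execution, so decision logic can be traced after the fact. A minimal sketch follows; the field names and the hash-chaining scheme are illustrative, not a standard.

```python
import hashlib
import json
import time

def audit_log(event: dict, path: str = "agent_audit.jsonl"):
    """Append-only, timestamped record of each agent decision. Chaining
    each entry to a hash of the previous one makes tampering detectable."""
    with open(path, "a+") as f:
        f.seek(0)
        lines = f.read().splitlines()
        event["prev_hash"] = (hashlib.sha256(lines[-1].encode()).hexdigest()
                              if lines else "genesis")
        event["ts"] = time.time()
        f.write(json.dumps(event, sort_keys=True) + "\n")

# Logged *before* execution, so blocked or failed actions still leave a trace.
audit_log({
    "action": "quarantine_host",
    "target": "10.0.0.17",
    "triggering_alert": "EDR-4821",
    "model_rationale": "beaconing pattern matched a known C2 signature",
})
```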
Seyson, whose organization helps government agencies modernize mission systems safely and securely by evaluating and implementing emerging technologies like AI, pointed out, “Agentic AI can be targets for cyberattack, too.”
While agentic AI offers clear benefits in cybersecurity, automating labor-intensive tasks, accelerating response times and handling data volumes beyond human capacity, Seyson further cautioned, “It’s hard to build trust in systems that operate with limited visibility. When they function like a black box… if they make a mistake, how do we correct course?”
“Historically, models were built for a specific purpose, so engineers knew a lot about the problem,” Tyo elaborated. “With agentic AI, we have LLMs trained for general use—they can do whatever you ask them to. That’s where problems come in: we’re way outside the realm of what anyone deeply evaluated this thing to do.”
Shrader added a business perspective. “The cybersecurity industry is ripe for disruption leveraging AI… but government adoption has lagged due to perceived lack of transparency and trust. There needs to be a middle ground between starting with trust and addressing issues as they arise, versus assuming issues are already there and never getting out of square one,” he said.
AI presents three core challenges, in cybersecurity and beyond:
- Transparency: Users and operators need to understand how AI systems make decisions.
- Accountability: If something goes wrong, who is responsible—the developer, the operator, or the organization?
- Governance: Clear standards are needed for testing, deploying, and overseeing AI systems.
Until these issues are addressed, Seyson argued, “We can use them (AI agents) as advisors, not full decision-makers.” Even so, she reminded us, “Humans can be black boxes, too. There’s a certain amount of questioning that has to be done, and you have to trust the answer. The same with machines.”
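Seyson’s “advisors, not full decision-makers” distinction maps onto a simple control pattern: the agent recommends, a human approves and only approved actions execute. A minimal sketch, with hypothetical names and a console prompt standing in for what would in practice be a ticketing or chat-ops workflow:

```python
def execute(rec):
    """Stub: in practice, calls the security tooling / SOAR platform."""
    print(f"Executing {rec['action']} on {rec['target']}...")

def log_rejection(rec):
    """Rejected advice is still recorded, so the agent's track record is visible."""
    print(f"Rejected: {rec['action']} (human overrode the agent)")

def human_approval_gate(rec) -> bool:
    """The agent advises; a person decides."""
    print(f"Agent recommends: {rec['action']} on {rec['target']}")
    print(f"Stated rationale: {rec['rationale']}")
    return input("Approve? [y/N] ").strip().lower() == "y"

recommendation = {"action": "block_ip", "target": "203.0.113.9",
                  "rationale": "repeated failed logins from a known-bad ASN"}
if human_approval_gate(recommendation):
    execute(recommendation)
else:
    log_rejection(recommendation)
```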
Army Transformation and Acquisition Reform
The Army is modernizing through AI, reforming outdated acquisition processes and leveraging public-private partnerships to drive innovation and operational efficiency. A recent Secretary of Defense memo to the Secretary of the Army reinforces this.
Issued in the wake of a major executive order on AI, it directs the Army to accelerate technology modernization, divest obsolete systems and reform acquisition to better integrate AI-driven command and control (C2), unmanned systems and rapid prototyping. AI-driven C2 systems ingest vast streams of sensor data, synthesize information and aid military leaders in making faster, more informed decisions.
Shrader explained, “This memo…calls out AI-driven command and control, increased reliance on drones and unmanned systems and a clear intent to divest resources away from obsolete systems. The old acquisition cycles just can’t keep up with the pace of innovation, especially with AI.”
Tyo, having spent over a decade in the Department of Defense (DoD) doing R&D, reflected on the challenge of transitioning from prototype to operational deployment. He noted, “The transition percentage is super low, but not because the technology isn’t good. The small percent of advancement that technology gives is just not worth the cost, time, and effort to implement it. There’s a threshold for adoption of tech in the DoD. It can’t just be an improvement,” Tyo explained. “It has to be a really massive improvement to make it across the transition threshold.”
This newest policy shift to “capability buckets,” however, should allow for more flexible integration of commercial AI and emerging tech. It aims to break down barriers to entry for innovative startups and nontraditional contractors.
Encouraged by the memo, Seyson noted, “For years, procurement processes didn’t fully catch up with the Agile methodology in industry. It’s very encouraging to see alignment of government procurement processes and technological progress. That’s such a key step.”
AI seems poised to be at the tip of the spear for military funding, given the policy emphasis on public-private partnerships, scalable AI and new acquisition models that prioritize rapid prototyping, interoperability and mission-driven requirements over rigid, decades-long funding cycles.
AI’s Promise and Peril
We’ve established that AI truly is everywhere. From solving unsolved scientific problems to transforming battlefield C2 and optimizing cybersecurity operations, AI’s potential is vast. But so are the challenges of transparency, accountability and integration. At the end of the day, its meaning and impact will depend on context, governance and human oversight. The path forward is not about AI replacing humans, but about forging effective human-AI teams, reforming outdated systems and ensuring that innovation serves both mission and society.
To learn more, read the articles selected by our Crew and discussed in this episode:
“AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms” by the AlphaEvolve team (14 May 2025)
“Use of Agentic AI in Cybersecurity Needs More Transparency” by Rahul Neel Mani for GovInfoSecurity
Secretary of Defense Memo, SUBJECT: Army Transformation and Acquisition Reform (April 2025)
By: Dawn Zoldi