By: Dawn Zoldi
AI isn’t transforming advanced air mobility because aircraft suddenly became “smart.” It is transforming AAM because a small set of highly specialized teams are turning messy, real‑world data into disciplined, regulator‑ready decisions at scale. End State Solutions, led by former Harrier pilot Charlton Evans, operates “at the intersection of advanced autonomy, airspace integration, and real‑world execution,” helping manufacturers and operators translate AI‑driven insights, safety data, and operational intelligence into repeatable, regulator‑ready large UAS and BVLOS operations in some of the harshest environments on earth. On the other side of the ecosystem, Dr. Yemaya Bordain brings a complementary vantage point of systems engineering and design assurance from the Advanced Air Mobility Institute (AAMI), an international nonprofit research and advocacy organization focused on the responsible adoption of aviation AI and autonomy and on frameworks that make AAM both safe and socially legitimate.
On a recent Dawn of Autonomy episode, which closed out the show’s “AI and Data” month, Evans and Bordain unpacked how AI‑enabled data, learning assurance, and rigorous operational frameworks are turning autonomy from a concept into a scalable, certifiable capability…and why data quality, operational rigor, and regulatory fluency are now inseparable in modern AAM.
From Hype to Hardware: What AI Really Does on the Aircraft

One of the most persistent misconceptions Bordain encounters is that “AI is some kind of wrapper that you will bolt on to the aircraft and it gives the aircraft a brain and it takes control of every part of the aircraft.” In reality, most aviation AI today lives deep inside subsystems. It replaces sprawling forests of if‑then‑else code with trained models that perform very specific tasks, such as detection and classification.
Those models are locked before they ever reach the flight line. They do not keep learning in flight. They execute deterministically on hard real‑time hardware and software, producing the same outputs for the same inputs every time, as required in safety-critical avionics. Around them, the overall system still follows familiar safety processes and standards, including ARP4754A, ARP4761, DO-254, and DO-178C, with redundancy, voting logic, monitors, and guardrails to manage non‑deterministic components and edge‑case behavior.
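The architecture described above can be sketched in miniature. The snippet below is an illustrative toy, not actual avionics code: a frozen, fully deterministic “model” whose outputs pass through a range monitor and redundant majority voting before a detection is declared. All names and thresholds are invented for the example.

```python
def frozen_model(channel_input):
    """Stand-in for a locked, deterministic trained model:
    the same input always yields the same classification score."""
    # A fixed lookup table keeps the example fully deterministic.
    table = {"clear_sky": 0.02, "aircraft": 0.97, "bird": 0.41}
    return table.get(channel_input, 0.0)

def monitor(score):
    """Guardrail: reject any output outside the certified [0, 1] envelope."""
    return 0.0 <= score <= 1.0

def voted_detection(channel_inputs, threshold=0.5):
    """Run redundant channels, drop any that fail the monitor,
    and declare a detection only on majority agreement."""
    votes = []
    for inp in channel_inputs:
        score = frozen_model(inp)
        if monitor(score):
            votes.append(score >= threshold)
    return votes.count(True) > len(votes) / 2

# Three redundant sensor channels observe the same scene.
print(voted_detection(["aircraft", "aircraft", "bird"]))  # True: 2 of 3 agree
```

The point of the sketch is the shape, not the numbers: the trained component is one small, frozen box, and determinism plus the surrounding monitors and voting logic are what make its behavior arguable under the standards listed above.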
This is where End State Solutions’ niche becomes clear. The firm “certifies autonomy in aerospace” by producing the safety cases, CONOPS, certification plans, hazard analyses and test artifacts that translate those AI‑enabled components and architectures into language the FAA and other regulators can actually approve. Drawing on experience that spans ScanEagle type certification, Section 44807 exemptions, complex operations BVLOS waivers and Part 135 approvals, Evans and his team embed directly into flight test programs to oversee data collection and documentation, ensuring that what AI‑enabled systems do in the real world lines up with what the paperwork promises.
Evans pointed out that in many ways “the most non‑deterministic thing on airplanes today traditionally is the pilot.” Uncertainty is not new in aviation.
Situational Intelligence: AI as a Safety Multiplier

Bridging the gap between that regulatory reality and cutting‑edge technology is where Bordain has spent much of her career. Her work at Daedalean (acquired in a successful exit) focused on implementing AI “on the aircraft, particularly in safety‑involved and safety-critical systems,” including a visual awareness system branded PilotEye. Its detect-and-avoid capability relies on mounted cameras that provide a 270‑degree field of view, constantly scanning the sky, with neural networks detecting aircraft and other traffic and cross‑checking those detections against ADS‑B.
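The cross-check idea can be illustrated with a short sketch. This is a hypothetical simplification, not Daedalean’s actual PilotEye implementation: camera-based tracks are matched to ADS‑B reports by bearing alone, and every field name and tolerance is invented for the example. Vision-only tracks are the interesting output, since they may represent non-cooperative traffic.

```python
def cross_check(visual_tracks, adsb_reports, bearing_tol_deg=3.0):
    """Label each visual track as ADS-B-corroborated or vision-only."""
    results = []
    for track in visual_tracks:
        # A track is corroborated if any ADS-B report lies within tolerance.
        corroborated = any(
            abs(track["bearing_deg"] - rpt["bearing_deg"]) <= bearing_tol_deg
            for rpt in adsb_reports
        )
        results.append({**track, "adsb_match": corroborated})
    return results

tracks = [{"id": "v1", "bearing_deg": 42.0}, {"id": "v2", "bearing_deg": 188.5}]
reports = [{"icao": "A1B2C3", "bearing_deg": 41.2}]
for t in cross_check(tracks, reports):
    print(t["id"], "corroborated" if t["adsb_match"] else "vision-only")
```

A real system fuses far more than bearing (range, altitude, track history), but the two-source corroboration pattern, vision against ADS‑B, is the core safety idea.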
“AI is about as good as you can possibly get for a machine to operate either as good as a human or better than a human at tasks that are up til now reserved for humans,” she explained, coining the term “situational intelligence,” the ability not just to sense, but to anticipate incursion risk and future states.
PilotEye’s capabilities went beyond basic traffic calls. Bordain described how the system could pick up a runway as a “tiny little dot” in a video feed, lock onto it, and then provide precise landing guidance long before human eyes could reliably pick it out. In helicopter scenarios, the system could identify emergency landing spots in real time, then dynamically update that recommendation if a hazard appeared. This is critical in platforms like a Robinson R22, where pilots have less than two seconds to initiate autorotation after power loss.
Evans connected this directly to lived experience. After a partial engine failure in his own light aircraft, he was forced into an immediate turn‑back and landing. For those moments, he said, he remains “hopeful that AI tools like that will emerge to give pilots better information faster than we can even come up with.”
That kind of safety enhancement is exactly the sort of capability AAMI seeks to see deployed responsibly, especially in public‑safety contexts like drones as first responders and air medical services, where the AAMI argues that “AAM technology should be available immediately to public safety agencies in order to expedite response time and thereby save more lives.”
Learning Assurance: Making AI Data Regulator‑Ready

As the conversation shifted from in‑cockpit effects to the mechanics of certification, the emphasis moved decisively from code to data. “Our requirements are no longer hard‑coded. Instead, our requirements are based on data. They go through data. Our testing goes through data,” Bordain noted.
To make that shift tractable for regulators, Daedalean introduced a now‑influential construct called “learning assurance.” The concept breaks down into several disciplined steps:
- Defining training and test datasets that truly represent the real world the system will see in operation, with strict separation between them;
- Deliberately curating edge cases and “garbage” inputs—not to hide them, but to label them, so the model learns what is out‑of‑scope or unsafe; and
- Iterating between lab development and field data collection, continuously refining models while maintaining traceability back to requirements and safety assessments.
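The first two steps above boil down to a dataset discipline that can be sketched in a few lines. This is a minimal illustration of the idea, assuming an invented operational envelope and field names, not any certified learning-assurance toolchain: keep train and test sets strictly disjoint, and explicitly relabel out-of-envelope samples rather than discarding them.

```python
import random

def split_dataset(samples, test_fraction=0.2, seed=42):
    """Partition samples into disjoint train and test sets."""
    rng = random.Random(seed)        # fixed seed keeps the split reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_fraction)
    return shuffled[cut:], shuffled[:cut]   # train, test

def label_out_of_scope(sample, envelope):
    """Tag samples outside the declared operational envelope instead of
    dropping them, so the model learns what 'out of scope' looks like."""
    in_scope = sample["visibility_km"] >= envelope["min_visibility_km"]
    return {**sample, "label": sample["label"] if in_scope else "out_of_scope"}

envelope = {"min_visibility_km": 5.0}
raw = [
    {"visibility_km": 9.0, "label": "aircraft"},
    {"visibility_km": 2.0, "label": "aircraft"},   # below envelope: relabeled
]
curated = [label_out_of_scope(s, envelope) for s in raw]
train, test = split_dataset(curated)
assert not any(s in train for s in test)           # strict separation holds
print([s["label"] for s in curated])               # ['aircraft', 'out_of_scope']
```

Traceability, the third step, is the part a sketch cannot show: every curated sample must remain linkable back to the requirement and safety assessment it supports.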
The regulatory goal is familiar even if the methods are new: a model that generalizes to real‑world conditions and can be shown, with evidence, to perform its intended function across the operational envelope and be free of unintended function. As Evans put it, by the time that model is metaphorically “put in a box” and installed on an aircraft, it must be “a certifiable, predictable, known, transparent, traceable product.”
Here again, End State Solutions acts as a kind of “learning‑assurance integrator,” helping clients design flight test campaigns, durability and reliability programs and operational evaluations that generate the right kind of data for both model validation and FAA submissions. In recent projects, including Matternet’s historic type certificate and AATI’s complex BVLOS pipeline patrols, End State’s role has been to make sure that AI‑enabled systems and advanced CONOPS are backed by safety evidence that stands up to regulator scrutiny.
That AI component then slots back into a traditional safety case driven by Functional Hazard Assessments, system safety assessments and design assurance levels.
Offboard AI: Using Data to Build Better Systems Faster
Not every impactful application of AI in AAM is onboard. In fact, Bordain suggested that the fastest wins, and the lowest regulatory burden, come from using AI as a productivity engine across the aerospace and aviation lifecycle.
She framed the landscape as a matrix: building aircraft versus operating aircraft, and onboard versus offboard functions. In the offboard quadrants, AI can mine historical certification artifacts such as safety assessments, CONOPS and test reports, to accelerate new FHA, SSA and operational risk assessments; improve manufacturing yield and quality by flagging defects or anomalous vibration patterns before they become safety issues; and enhance training, maintenance and MRO through predictive analytics and adaptive learning tools.
Evans echoed that view from the services side. End State Solutions is already exploring AI to “assist in supporting compliance and accelerating safe, scalable operations,” from structuring waiver applications to checking test reports for completeness. As he told Autonomy Global in a related piece, “If we can use AI to navigate that much more efficiently, that’s a benefit to us and our customers… Part 108 is not going to make things easier [from a compliance perspective], standards are just going to be clearly mapped,” which means the volume and complexity of data will only grow.
For firms like End State Solutions, which “certifies autonomy in aerospace” by building safety cases, CONOPS, certification plans and hazard analyses for large UAS and BVLOS operations, these AI‑enabled efficiencies can radically compress timelines without changing the underlying regulatory expectations. For an organization like AAMI, offboard AI is equally relevant, not to replace regulators or operators, but to power research, scenario modeling and community engagement that help policymakers understand how different AAM deployment choices will play out on the ground.
Evans’ team already operates “at the intersection of advanced autonomy, airspace integration and real‑world execution,” helping utilities, energy companies and OEMs translate AI‑driven insights into regulator‑ready BVLOS programs.
AI, Accountability and the Human in the Loop
For both Evans and Bordain, AI supports decision‑making. It does not relieve humans of accountability. “It’s very difficult to see a world where we should drop humans out of the loop,” Bordain said. “Humans should always have some oversight… there has to be a human that says no this is wrong, yes this is right.”
The sustainable path is augmentation, not replacement. In air traffic management, for example, AI‑driven decision‑support systems now run alongside human controllers, elevating situational awareness while leaving final authority in human hands, a pattern increasingly mirrored in UAS traffic management and emerging AAM CONOPS.
In practice, that means designing cockpit and ground control interfaces that prioritize clear, actionable cues over raw data dumps; actively managing false positives and nuisance alerts so that AI tools do not become safety hazards by drawing attention to phantom threats; and building organizational governance around AI use (data curation, model validation, QA and human review) so that “garbage in, garbage out” becomes “garbage in, clearly marked as garbage on the way out.”
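One common way to manage nuisance alerts is temporal persistence: require a detection to survive several consecutive frames before it ever reaches the crew, so a one-frame phantom is filtered out. The sketch below illustrates that generic debouncing pattern with invented thresholds; it is not drawn from any specific PilotEye or End State system.

```python
def debounce_alerts(frame_detections, persistence=3):
    """Return the frame indices at which an alert would first be raised.

    frame_detections: per-frame booleans from an upstream detector.
    persistence: consecutive detections required before alerting.
    """
    alerts, streak = [], 0
    for i, detected in enumerate(frame_detections):
        streak = streak + 1 if detected else 0
        if streak == persistence:      # fire exactly once per sustained track
            alerts.append(i)
    return alerts

# A one-frame phantom (index 0) is suppressed; a sustained track
# (indices 3-6) raises a single alert at index 5.
frames = [True, False, False, True, True, True, True]
print(debounce_alerts(frames))  # [5]
```

The design trade-off is latency versus nuisance rate: a higher persistence threshold suppresses more phantoms but delays genuine warnings, which is exactly the kind of parameter a safety case must justify.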
Evans noted that even in the future when AI tools help draft safety documents or code, “we’re not going to just drop humans out of the loop and let the AI come up with an answer.” Internal systems still need governance, backstops and quality checks to prevent unintentional missteps in certification and operations.
Scaling BVLOS and AAM: When Data, Systems and People Align
The promise of AI‑enabled AAM is not merely about automation, it’s about scalability. End State Solutions’ work on type certification, exemptions and BVLOS waivers for platforms like ScanEagle and for operators ranging from BNSF to Chevron has already demonstrated that disciplined frameworks can unlock large‑scale operations in harsh, real‑world environments. At the same time, the Advanced Air Mobility Institute uses its international nonprofit research and advocacy role to keep public trust, policy excellence and equitable access at the center of that transition.
Bordain is explicit that AAM systems cannot only “help the rich or those that are privileged,” but must also serve rural and underserved communities, for example, through faster medical logistics.
To get there, Evans and Bordain argue, three elements must be inseparable: data quality, operational rigor and regulatory fluency. When those three align, BVLOS and AAM programs scale not because they are flashy, but because they are repeatable, auditable and trusted.
Next Steps: Where to Go Deeper on AI, Data and AAM
For organizations serious about building AI‑enabled autonomy that regulators and communities will trust, Evans and Bordain both drove home that autonomy only scales when the data, the systems and the people are all pulling in the same disciplined direction.
End State Solutions works directly with manufacturers, utilities and operators to architect regulator‑ready large UAS and BVLOS operations in demanding environments, from data collection strategies to safety case development, 44807 exemptions, 91.113(b) waivers and Part 135 approvals. The AAMI, where Bordain serves as Board Executive and VP of Technology, functions as an independent think tank, connecting industry, regulators and researchers to shape responsible AI and autonomy frameworks and to embed public trust, access and resilience into AAM roadmaps worldwide.
As AAM moves from slideware to skyware, the programs that will last aren’t the ones with the flashiest autonomy. They’re the ones where AI, data and people are already working together like they’re cleared for takeoff, as they are at End State Solutions and AAMI.
To explore these themes in more depth, replay the full Dawn of Autonomy episode featuring Evans and Bordain beat-by-beat.