By: Capt. Fahad ibne Masood, MRAeS, Autonomy Global Ambassador – GenNext Tech
City traffic is something everyone wishes they could avoid. With the introduction of electric vertical takeoff and landing (eVTOL) vehicles, that wish is being granted. In 2026, aviation companies such as Joby Aviation will begin commercial air taxi operations. Joby recently opened a new facility in Dayton, Ohio, where it plans to produce four eVTOL aircraft per month by 2027, and it also provides pilot training with advanced simulators. In addition, AIR has begun commercial manufacturing, has already delivered its first production-standard eVTOL, and plans to deliver 19 aircraft for cargo and passenger operations in 2026. These developments promise quieter, greener urban hops and will transform global urban mobility. When it comes to autonomous eVTOLs, however, a more fundamental issue arises: how can such a machine be trusted to fly people? That question gets to the heart of artificial intelligence (AI) in aviation, and it requires an examination of how such systems make decisions.
Constructing the Problem: From Obscure Decisions to Real Threats

Everyone uses AI on a daily basis, such as in map and navigation applications. However, the AI used in aviation works on life-critical tasks using black-box models (neural networks). These models analyze and interpret large volumes of data (such as sensor, weather and traffic feeds), yet they make decisions without explaining why. It is like a driver saying "Turn left" without giving a reason. Was it to avoid an accident or to avoid traffic? This opacity increases danger in complicated systems.
Studies in 2025 of AI incident reports, and subsequent investigations into autonomous systems' navigation failures at airports, have shown that the longer a system is left in "black-box" mode, the more complicated and dangerous it can become. Studies also show that people are more willing to accept a system when they can understand how it works, and more resistant when it is opaque. A lack of transparency erodes confidence in the system. From a business perspective, this limits the industry to an incremental value far below the projected worth of the urban air mobility (UAM) market, estimated in the tens of billions of dollars by 2030. This is why policymakers are responding quickly to the need to minimize bias and brittleness in the systems used for these higher-stakes applications. For now, as eVTOLs begin to operate as part of the UAM system, we can only assume that the more these systems are left in autonomous modes, the more people's trust will tend to erode.
Showcasing the Solution: Neuro-Symbolic AI as the Next Building Block
Neuro-Symbolic AI takes a hybrid approach and goes a step further. It evolves the black-box paradigm of "Go this way" into "Go this way to maintain the FAA-mandated 500-foot separation from regional air mobility (RAM) corridors while optimizing for the shortest route in the prevailing winds." It does this by blending familiar neural learning (pattern recognition from data) with symbolic reasoning (rule-based logic). In the simplest terms, it makes AI reasoning explainable. Think of it as showing your work in math.
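To make the hybrid concrete, here is a minimal Python sketch of the idea: a learned scorer ranks candidate routes, while a symbolic rule layer enforces a simplified corridor-separation rule and records a human-readable reason for every acceptance or rejection. All names, numbers and the rule itself are illustrative assumptions, not any vendor's API or a regulator's actual requirement.

    # Minimal, illustrative neuro-symbolic route check for an eVTOL.
    # All identifiers, thresholds and rules are hypothetical simplifications.
    from __future__ import annotations
    from dataclasses import dataclass

    SEPARATION_FT = 500  # simplified stand-in for a mandated corridor separation

    @dataclass
    class Candidate:
        route_id: str
        corridor_separation_ft: float  # closest approach to a RAM corridor
        est_minutes: float             # flight-time estimate in prevailing winds

    def neural_score(c: Candidate) -> float:
        """Stand-in for a learned scorer (shorter flight time -> higher score)."""
        return 1.0 / (1.0 + c.est_minutes)

    def symbolic_check(c: Candidate) -> tuple[bool, str]:
        """Rule-based layer: every decision carries a human-readable reason."""
        if c.corridor_separation_ft < SEPARATION_FT:
            return False, (f"{c.route_id} rejected: {c.corridor_separation_ft:.0f} ft "
                           f"from RAM corridor < required {SEPARATION_FT} ft")
        return True, (f"{c.route_id} accepted: maintains {c.corridor_separation_ft:.0f} ft "
                      f"separation; est. {c.est_minutes:.0f} min in prevailing winds")

    def choose_route(candidates: list[Candidate]) -> tuple[Candidate | None, list[str]]:
        """Pick the best-scoring candidate that satisfies every symbolic rule."""
        log, admissible = [], []
        for c in candidates:
            ok, reason = symbolic_check(c)
            log.append(reason)
            if ok:
                admissible.append(c)
        best = max(admissible, key=neural_score, default=None)
        return best, log

    if __name__ == "__main__":
        best, log = choose_route([
            Candidate("direct", corridor_separation_ft=320, est_minutes=9),
            Candidate("river_detour", corridor_separation_ft=780, est_minutes=12),
        ])
        print("\n".join(log))
        print("chosen:", best.route_id if best else "none")

In this toy setup the faster "direct" route is rejected with a stated reason, and the slower but compliant route is chosen: the neural part optimizes, the symbolic part constrains and explains, which is the "showing your work" behavior described above.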
This AI will be the first to avoid hallucinations, via logic embedding, and to provide dependable autonomy for embodied systems like eVTOLs. Advanced air mobility (AAM) industry forecasters predict that neuro-symbolic AI will be embedded in AAM for demand forecasting, aircraft design and real-time UAM/RAM management by 2026. In fact, this year is anticipated to be the "Year of Neuro-Symbolic AI."
Thus, within the AAM industry, early adoption of neuro-symbolic AI will strategically position companies for competitive advantage. Because it enables probabilistic mission design, it fosters operational scaling while addressing autonomy gaps in oversaturated, unregulated airspace. From an auditability standpoint, neuro-symbolic AI mitigates the AAM rogue-AI problem, keeping systems aligned with ethical AI guiding principles (see: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai and https://www.unesco.org/en/artificial-intelligence/recommendation-ethics). Neuro-symbolic AI particularly excels at fulfilling tenets such as Transparency and Explainability, Accountability, Human Agency and Oversight, and Technical Robustness and Safety.
Advancing to Policy and Certification: Regulatory Building Blocks

Policies constitute the next layer, beginning with general safety regulations and moving to frameworks that govern the specific use of AI. The European Union Aviation Safety Agency (EASA) is at the forefront of this with its AI Roadmap 2.0, which it plans to review and update in 2025–2026.
The AI Roadmap 2.0 emphasizes AI that is human-centered, trustworthy, explainable, ethical and robust. EASA accepted input on its first AI proposal in November 2025. A second Notice of Proposed Amendment (NPA) is anticipated in 2026, focusing on reinforcement learning and generative AI. The 2026 European Plan for Aviation Safety (EPAS) also prioritizes the incorporation of AI.
In the U.S., the FAA's roadmap mirrors this, advocating alternatives to black-box AI with accountability mechanisms akin to flight recorders. These policies strategically incentivize neuro-symbolic adoption by requiring auditable systems for certification, which is essential as eVTOLs like Joby's S4 eye 2026 passenger flights.
Horizon 2026: Key Developments and Partnerships
With these policies in place, 2026 appears set to be the first year of neuro-symbolic AI in eVTOLs, with major conferences and partnerships predicted to reshape the future of aerial autonomy.
Possible 'Hullabaloos' of Evolving Industry Conferences: At SATEC 2026, industry leaders will likely focus on the transformative role of neuro-symbolic AI and the potential of 'physical' AI in advanced real-time operational systems that are effectively auditable and free of operational hallucinations. At CES 2026, Joby Aviation CEO JoeBen Bevirt showcased a similar vision, including the neuro-symbolic integration of AI with hardware for unobstructed eVTOL navigation.
Strategic Partnerships: Joby's integration with Uber for on-demand eVTOL service, and with Delta for air traffic control towers, demonstrates the potential of neuro-symbolic AI. In 2026, Delta will participate in the FAA's eVTOL Integration Pilot Program (eIPP), in which the first eVTOLs will operate in metropolitan areas while employing explainable AI. These programs will address the complexity of AI-integrated mobility and eVTOLs while giving sophisticated users the ability to artfully navigate gaps in mobility law.
Culminating in Strategy and Trust: A Policy-Driven Future
Neuro-symbolic AI, and the policies that implement it, together accelerate the potential for AAM industry expansion. Considerable investment interest continues into 2026, which experts expect to be the eVTOL breakout year. McKinsey's data-driven trends for 2025 suggest that neuro-symbolic AI will address concerns about false perfection and build trust with stakeholders. While challenges such as scalability remain, neuro-symbolic systems, supported by smart policies and sustained investment, will increase trust and allow autonomous eVTOLs to perform as reliable partners in the skies of the future.