
Beyond the Black Box: Why Explainability Is Becoming Aviation’s New Safety Standard

By Clinton Ikechukwu

The day is June 20, 2025. I am standing on a stage at the International Paris Air Show at Le Bourget, alongside my team, pitching and demonstrating how Artificial Intelligence (AI) could reshape the aeronautics industry.

In front of me sits a sophisticated audience: a mix of industry experts, aviation leaders and decision-makers.

Midway through my storytelling-style pitch, I pause and ask the audience to look underneath their seats. There, they find a black box, and one by one, hands rise holding the 10 cm 3D-printed object. I ask whether they can tell me what might be inside the black box, without, of course, opening it. The moment is intentional, and silence follows.

A brief, puzzled rustle filters through the room, and my attempt at materialising a metaphor becomes, in that instant, the soul of the presentation: if we cannot explain what is inside these black boxes, how can we possibly ask a pilot to trust something they cannot understand? In aviation, black boxes are designed to be investigated after something goes wrong. They retain information; they do not explain themselves in real time. AI is being asked to do something entirely different.

Artificial Intelligence is increasingly labelled a black box. Across the aviation sector, AI systems are being introduced into operations where decisions must be trusted immediately. Many of these systems perform remarkably well, yet they do so through models whose internal reasoning remains difficult to interpret. If an algorithm suggests an operational adjustment or flags a potential anomaly, its output may well be correct, but the reasoning behind it is often unclear.

This opacity introduces a tension that the aviation industry cannot tolerate.

The conundrum is straightforward. Modern AI systems are growing in complexity, moving away from easily interpretable models towards deep neural networks where accuracy shines but transparency fades. Millions, or even billions, of parameters interact in ways that are mathematically sound yet practically inaccessible to human scrutiny. In aviation, however, safety is non-negotiable.

Explainable Artificial Intelligence (XAI) has emerged in response to this challenge. The ambition is not to treat transparency as an afterthought but to make AI systems understandable to the humans who design and operate them. Most importantly, explainability should be context-dependent.

What I mean is that what a developer needs to understand may differ from what an engineer or a pilot should understand. This is why research draws a line between interpretability and explainability.

Interpretability enables engineers to understand and interrogate a model's internal behaviour, while explainability focuses on giving end users, such as pilots, an account of a decision they can act on.
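To make the distinction concrete, here is a minimal sketch in Python, not drawn from any real aviation system: the sensor names, data and model choices are purely illustrative assumptions. It contrasts an interpretable model, whose learned weights an engineer can read directly, with a black-box model whose individual predictions need a post-hoc explanation tool (here, the SHAP library) to be accounted for.

```python
# Illustrative sketch only: hypothetical sensor features and synthetic data,
# contrasting an interpretable model with a black-box model explained post hoc.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
import shap  # post-hoc explanation library

rng = np.random.default_rng(42)
features = ["vibration", "egt_margin", "oil_pressure"]   # hypothetical sensors
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)          # synthetic "anomaly" label

# Interpretability: an engineer can read the learned weights directly.
interpretable = LogisticRegression().fit(X, y)
print("weights:", dict(zip(features, interpretable.coef_[0].round(2))))

# Explainability: the black box may be accurate but opaque, so a post-hoc tool
# attributes each individual prediction back to the input features.
black_box = GradientBoostingClassifier().fit(X, y)
attribution = shap.TreeExplainer(black_box).shap_values(X[:1])
print("explanation:", dict(zip(features, np.round(attribution[0], 2))))
```

A certifiable aviation system would, of course, face far stricter requirements than this toy contrast suggests; the point is only that the interpretable model explains itself, while the black box needs help to do so.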

The black boxes under the seats at Le Bourget were not a warning against AI but a reminder of how aviation has historically dealt with uncertainty: by demanding visibility before failure, not explanations after it.

As AI continues to be adopted into aviation operations, the question is no longer whether it can perform, but whether it can explain itself well enough to be trusted and certified.

Photo: Clinton Ikechukwu

This article was first published on delreport.com
