
Europe Is Making AI Evaluation the Law: Africa Must Make It a Capability

By Clinton Ikechukwu

The European Union is taking a more assertive approach to artificial intelligence, particularly to how AI systems are evaluated. Previously, the debate hinged on human values and technological innovation.

Under the EU’s Artificial Intelligence Act, evaluation and safety inspections are becoming enforceable, especially for high-risk systems. This means that organisations seeking to deploy certain AI systems in the European market must have specific checks and documentation in place, answering questions such as: what was tested, how was it tested, who verified it, and what happens when the system changes?

For years, the global AI conversation has focused on the West. But in 2024, the African Union adopted a Continental AI Strategy, signalling that the continent is pursuing its own path. The 66-page strategy is focused on Africa’s development, with a web of coordination across member states. From drafting national strategies and launching AI programmes to exploring AI’s potential within local communities, the direction is deliberately Africa-centric and strategic.

However, given the historical and economic context, and the need for global trust, Africa might be tempted to tilt towards the hollow performance of “best practices,” as often happens. This places African leaders at a crossroads. While this critical point is not unique to Africa, African leaders must confront a hard question: import foreign rules, or choose true sovereignty? Although global standards are expected to become enforceable, the path of least resistance, simply copying those standards, should not be Africa’s first reaction. Leaders must deliberately build capacity with a realistic roadmap. Investing in research and development, in AI institutes, labs, and training centres for independent auditors, has to take priority.

These institutions are essential: over the coming decade, they will play a critical role in deciding whether an AI system is actually safe for African citizens.
Why is this important? Think of a scenario where an AI system built in Brussels meets every requirement there but falls short in Nigeria, for instance because it was never tested in the languages or public services of the people it is meant to serve. This is not a slight against Brussels, by any means, nor a romanticisation of Lagos. The point is to emphasise local context and, in parallel, to discourage African leaders from reducing compliance to copying and pasting the European Union’s AI safety checklist. The truth is, being perfect on paper does not translate to being failure-proof.

Africa will undoubtedly feel this pressure, and institutions will have to choose between compliance and losing business. Compromising is an easy way to keep the door open for business and partnerships. Many will call this diplomacy, but therein lies the trap: the illusion of safety without the actual practice of safety.

Once Africa starts outsourcing audits, local authorities are left simply reviewing well-prepared, polished documents, without the ability to verify actual software behaviour, and institutional dependence deepens. Africa then becomes a passenger in its own digital future, without a hand on the steering wheel. And while African institutions might look compliant on paper, they will struggle to earn the trust of the public and of investors.

The way forward is, first, to draw a clear line: evaluation should not be reduced to paperwork; it is infrastructure. Borrowing from Europe is not a sin. The problem is borrowing that focuses on paperwork without building real capability; there is a clear difference between learning and dependence. Africa must develop real use cases that ask how AI will be deployed in African schools, hospitals, banks, borders, and elections. Four core aspects must be considered: tests, auditors, test centres, and procurement rules.

Auditors must be independently trained to move beyond mere box-ticking: to interrogate a model’s behaviour, to challenge suppliers’ claims, and to affirm, without compromise, whether a system is safe. Whether an AI system is imported or not, and even if it performs well in a demonstration, it must be vetted by African institutions before it is deemed fit for the African public. The tests themselves should be grounded in local context, with specific requirements.
Test centres could be public labs, accredited universities, or independent facilities, provided they can deliver credible audits. The final aspect is how procurement is done, because procurement shapes the market.

Imagine that governments mandate evaluation before purchase and monitoring after deployment. The market will follow, because public institutions are among the largest buyers of digital systems; a government can set minimum evaluation requirements even before comprehensive legislation takes effect.

It is safe to say that in the coming decades, Africa’s strength in AI won’t be judged just by how much AI it uses (the number of apps, tools, systems, “AI projects” it launches). It will be judged by whether Africa has the power and capability to decide, for itself, which AI systems are safe and appropriate to use in African contexts, and which ones should be rejected, restricted, or redesigned.

Now, as these conversations grow louder, African leaders should see an opportunity to build a policy model that matches Africa’s realities by developing evaluation commons. What are evaluation commons? In this context, they are a shared African resource for evaluating AI that countries can use together. The positive takeaway is that the AU’s Continental AI Strategy already provides the political platform for the coordination this requires. What is needed next is an operational design.

Not every country needs to build everything alone, or even host a world-class test centre, but every African nation must be able to trust the evaluation evidence. Regional resource sharing makes this possible and will strengthen trust across the continent, while test centres paired with training programmes will accelerate capacity.

If Africa gets this right, evaluations will become powerful institutions that will protect citizens, influence the market, and enable African leaders to take the steering wheel of their own digital future.
The EU’s AI Act shows that Europe is moving to make evaluation the law. Ensuring that evaluation becomes a public capability is now an urgent strategic task for Africa.

The question is not whether Africa can “control” AI on paper. It is whether Africa can verify AI in its own languages, inside real public services, and under the threat realities it faces, and whether, when a system fails, anyone can act and be held accountable.

There are two possible outcomes over the next decade: copying paperwork, or building real evaluation systems. African leadership holds the steering wheel, and there is no excuse for not using it.

About the author

Clinton Ikechukwu is an ML Engineer and a Manufacturing and Materials Engineer working at the intersection of Industry 4.0, computer vision, and data-driven decision support. He is a global winner of an innovation prize, as well as a creative writer and thought leader who uses storytelling to shape how we think about arts, engineering, and AI policy.

Notes and sources

• African Union, Continental Artificial Intelligence Strategy, adopted July 2024.
• EU AI Act, prohibited practices (Article 5).
• EU AI Act, provider obligations and documentation for high-risk systems (Articles 16 and 18).
• EU AI Act, conformity assessment for high-risk systems (Article 43).
