
By Clinton Ikechukwu
I have a vivid memory from my childhood: a Saturday morning, Celine Dion’s music faint in the air, my elder brother and I hunched over a table strewn with paper, writing fiction, drawing cartoon characters, and recreating storylines from Marvin’s comic book in our own world.
As I write this, I smile at the refreshingly pleasurable recollection, because I’m reminded that I have always been drawn to language, to how words assemble, with a peculiar interest in what I call “the wrestle of metaphors.”
I remember those Saturday afternoons in Port Harcourt when we would be crammed into a hall, the clock ticking before us, battling on-the-spot essay contests. No prior research.
No planning. Just a blank page. We only learned the topic when the clock started. It was intriguing, tiring, and sometimes chaotic.
But subconsciously, something important was happening during those formative years. I would only realise later that I was learning to think, and to arrange my thoughts persuasively, a skill I have come to appreciate ever since.
That process of starting from nothing, creating messy drafts, ripping them apart in agony, and starting over to reach a structured argument against the clock was how my childhood brain muscles got trained. As Tom McAllister puts it in a New York Times guest essay, “All the learning in a writing course occurs in those moments of struggle.”
When a child learns to read early and to write their own thoughts, it shapes their cognitive abilities. No wonder childhood researchers continue to promote early reading and writing: they lay the foundation for a child’s critical reasoning and, later, higher-order thinking. The US National Association for the Education of Young Children (NAEYC) notes that when children read, they are “organising information to create meaning… recognising patterns…”, and these are the very pathways that lead to deep cognitive development.
The scholar Hélène Edberg expresses this clearly in her book Creative Writing for Critical Thinking, where she explains that creative writing can support “critical meta-reflection.” She wasn’t talking about mere self-expression but about the development of reflective, thoughtful selves through writing.
But the world has changed, and much has changed with me. Today, a chatbot can write a well-paced, thoroughly structured article. My professional life is caught between two worlds: building machine learning models and creative writing. And at university, when laziness sits on my tongue after a frantic day of classes, despite my undying love for writing, I turn to the easy path of chatbots.
ChatGPT helps me manage my activities and respond to otherwise long, tedious obligations. Yet every time I do, a small part of me wonders whether I am trading away the very things that shaped my life. I use these LLMs not just because I am tired but because I also build, test, and play with them. They support my coding sessions and deepen my understanding; they have become an integral part of my learning ritual. Without my intending it, ChatGPT became my assistant. It works tirelessly, and its most admirable trait, for me, is how readily it learns.
“You are oversimplifying the answers and ignoring other, less influential countries,” I told it one heated evening.
“Yes, you are right. I acknowledge my mistake,” it responded. This never-all-knowing attitude, and the way its responses improved as I prompted it better, are the attributes that keep it endearing to humans.
Yet every time I scroll through The New York Times or Wired, I am confronted by headlines along the lines of: “Is AI Making Kids Intellectually Lazy?” “Kids Are Offloading Their Critical Thinking to AI Chatbots — Here’s Why Experts Are Worried.”
I admit that an uneasiness often follows these headlines, particularly when I think of my niece, who just turned three.
At universities today, institutions are deploying AI detection systems to determine whether a student’s work was AI-assisted. But these are hardly reliable, because even the best detection systems admit uncertainty.
“Our AI detector flags text that may be AI-generated. Use your best judgment when reviewing results. Never rely on AI detection alone to make decisions that could impact someone’s career or academic standing,” says one detector’s disclaimer; it’s amusing that a detector could be so unsure. Another reads: “This result isn’t the percentage of AI text; it’s the model’s confidence in the classification of the entire text as AI.”
So, what does all this mean? We must accept the inescapable reality of AI. It is alive and breathing among us and irretrievably so. And we must seek responsible ways of accommodating it.
My niece belongs to a different generation from mine, and I am genuinely worried that by the time she writes her first composition, as we call it in Nigerian primary schools, a bigger problem will have set in, beyond merely seeking the help of a chatbot. She and her generation might skip a foundational phase of their cognitive development, missing the brain exercises I encountered in Port Harcourt and only fully appreciated in adulthood.
Psychologists say that children risk never learning to think critically if they are exposed to chatbots too early, and that AI undermines a child’s attention span. It’s true: when conversing with ChatGPT, I often catch myself in a wilful refusal to read in detail. I just want to scan, skim haphazardly, and confirm that the solution looks worthy. Why? Probably because it is a machine, and a machine should be error-free. My attention span collapses in those moments.
Many will argue that, even though I use these systems, I can still think and still do independent analytical research; the tools haven’t stopped me from thinking critically. The answer is straightforward: I am an adult whose cognitive abilities are fully developed. I have gone through the foundational stages, and I am still evolving. My niece and her generation haven’t.
Every time I encountered an assignment as a child, it was seldom a few minutes’ work. Sometimes, I would be frustrated; other times, I would have to call my elder siblings for assistance. And we would go through it until there was a breakthrough. Now, I imagine my niece doing an assignment, and there is a temptation — with just a single click, life presents clean answers. Of course, the next time, the same pattern will repeat, and the unintended consequence is cognitive laziness. No independent thinking. Almost zero effort.
So how can parents manage this AI era without sacrificing growth? I like to think it should start by limiting or removing access to chatbots until a certain age. Children must be made to understand the role of AI in their lives and how it could affect them. Just as we were scolded for using the “F-word” because it was taboo, chatbots should be off-limits to children below a certain age.
But banning the technology forever is impractical, so once critical thinking has matured, children should be taught AI literacy: How does ChatGPT work? What are its limitations? What is a hallucination, and why do chatbots hallucinate? Most importantly, why must they fact-check, cross-reference, and confirm sources independently?
My niece needs to understand that tools like ChatGPT or Gemini—if they’re still in vogue by the time she comes of age—can be excellent learning companions. They’re useful for brainstorming, exploring ideas, or critiquing her work. But they should never be the final creators. She must remain the author, the thinker, and the one who decides what is correct.
Artificial intelligence can never substitute for human interaction, so social and emotional skills must be developed in parallel. The overall objective is to model critical thinking, and that is achieved only through sustained habit. A child should question AI, discuss perspectives, and hold on to the natural curiosity of childhood.
As one who enjoys both the art of writing and the building of ML models, I sit at the intersection of two worlds. I am both a creative writer and a creative developer, and I understand the nuances of the products I create. I recognise that a chatbot can return blandly competent text, and that it can invent false solutions to please me. Because I understand this system, I do not want to raise children who fear AI. I want to raise children who can outthink it. And that means protecting the struggle that taught many of us to think in the first place.
Clinton Ikechukwu
Clinton Ikechukwu is an ML Engineer and Manufacturing/Materials Engineer at the frontier of Industry 4.0, computer vision, and data-driven systems. A Global Innovation Prize winner, EU Erasmus Mundus meta 4.0 Scholar, and storyteller, he writes at the intersection of technology and human experience, questioning what we build with technology, and what we risk losing to it.
www.delreport.com





