Artificial Intelligence
Exposing gaps in governance and regulation
A person interacts with a digital interface displaying a map of Africa and the letters "AI," highlighting the integration of technology and artificial intelligence.
© Shutterstock

The rapid spread of artificial intelligence is exposing weaknesses in governance and regulatory frameworks that should protect users’ rights and prevent the misuse of personal data. Experts warn that AI requires firm, carefully tailored regulation, distinct from traditional information technology rules.
Professor Mpho Primus, an expert in Intelligent Information Systems and Artificial Intelligence, argues that AI governance must go beyond risk checklists and compliance tick boxes. Instead, it should integrate technical understanding, contextual awareness, and a commitment to digital dignity.
“I see a growing gap between the sophistication of our systems and the protections meant to guide them,” she said. “One of the biggest challenges is that many digital and AI tools still treat people as data points, stripped of the linguistic, cultural, and relational context that gives their lives meaning. This is not just a design flaw; it is a structural issue that affects how systems perform in the real world.”
She pointed to African language technologies as a clear example. Models trained largely on data from Europe, North America, or Asia often fail to capture tone, morphology, and pragmatic nuance in African languages. The result, she said, is misclassification, exclusion from services, and the reinforcement of inequalities that technology was meant to address.
Primus added that her own work, which spans algorithm development, dataset creation, and governance frameworks rooted in local contexts, focuses on closing this gap by ensuring AI systems recognise the complexity of African languages and the communities that speak them.
Benjamin Rossman, Professor in the School of Computer Science and Mathematics at the University of the Witwatersrand, said concerns around AI stem from the technology becoming increasingly powerful and general-purpose over time.
“This has several implications,” he said. “From a data perspective, it means we are far more vulnerable when personal information is leaked. For instance, just a few seconds of audio can be enough for some models to clone your voice and impersonate you fraudulently.”
He added that AI systems influence users in subtle but significant ways. “From social media content shaping moods and opinions to large language models reportedly persuading people to do things they otherwise would not, the impact is growing.”
According to Rossman, regulation is made even more difficult by the complexity of interactions between users and AI systems. “New use cases emerge every day, and many of the inner workings of these systems are kept secret by technology companies. This makes effective governance extremely challenging,” he said.
Professor Daniel Mashao, an expert in Artificial Intelligence, Human Language Technologies, and Society at the University of Johannesburg, said users also have a role to play by remaining vigilant and actively defining what is acceptable.
“We should experiment, as Australia has done, with measures such as accounts for minors,” he said. “If other countries adopt these steps, those that do not may find themselves at a disadvantage. For now, this needs to be a step-by-step process. We must put ethical frameworks in place and assess what is gained or lost.”
Australia recently announced a ban preventing children under the age of 16 from accessing social media platforms including TikTok, X, Facebook, Instagram, YouTube, Snapchat, and Threads. Under the new rules, minors will be unable to create accounts, and existing accounts will be deactivated. Other countries, including Denmark and Malaysia, are considering similar restrictions for teenagers.
Primus stressed that data has become the backbone of modern life, shaping how people access services, how they are profiled, and how they participate in society. “This makes strong digital rights, transparent AI systems, and context-sensitive data governance non-negotiable,” she said.
“If we want AI to serve people rather than erode their agency or identity, we must design and regulate it with a deep understanding of how people actually communicate, live, and interact with institutions.”