AI and Civil Society
Stakeholders Call for Robust AI Regulations in Africa Amid Concerns Over Ethical Deployment
As civil society organisations in Africa join their peers in the global north in adopting and deploying Artificial Intelligence to promote inclusive engagement and broaden individuals' participation in the processes that affect their lives, stakeholders are calling for robust frameworks to ensure the technology is deployed fairly and responsibly.
Africa is slowly moving towards the use of AI in various sectors of society, with 'small pockets' having so far adopted the technology. There are concerns that the absence of clear and robust frameworks to regulate the deployment of AI could lead to negative consequences, including governments being unable to guarantee data security, the right to privacy for individuals, as well as breaches of cyber and national security. Stakeholders have said that the impact of deploying AI is far-reaching and have lamented the lack of transparency, accountability, and equity in the development and deployment of the technology.
Civil society organisations in Africa are using AI to map human rights abuses and to crunch large datasets, and the technology has made them more efficient in addressing the human rights and developmental needs of communities on the continent. "These technologies can impact everyday lives, including key moments like influencing voting patterns when it comes to national elections," Hayes Mabweazara, a respected analyst and scholar in the journalism and media studies department at Glasgow University, told the Friedrich Naumann Foundation for Freedom (FNF). "These technologies tend to be hidden. They are substrate technologies, which means they cut across different sectors, including civil society, and they are very likely to be manipulated in very negative ways."
Mabweazara said that while there is a need for robust regulatory frameworks, a dearth of knowledge in Africa about how the technologies work makes regulation difficult. "There is a lack of transparency," Mabweazara said. "The big technology companies that are generating and coming up with these technologies are hardly ever transparent. They don't disclose how these technologies are made. There should be regulation that requires that all AI technologies be tested before they are deployed so there is a reduction of harm. It's very hard to regulate what you really don't understand, mostly in the global south where AI is right at the bottom of priorities in terms of issues that bother us as countries. There are still a lot of unknowns and gaps. In the African context, what's required is a transnational approach to regulating these technologies so that those who are lagging behind can benefit from those who have made strides in coming up with regulatory frameworks."
Sophia Tekwane, a political activist based in Sweden, told FNF that civil society organisations in Africa could benefit immensely from the potential solutions offered by AI. "However, there should be guidelines and regulatory frameworks to ensure accountability, transparency, ethical, and responsible use of AI for the good of individuals and societies in Africa."
So far, three countries in Africa—Mauritius, Egypt, and Kenya—are at advanced stages of drafting policy documents on the use of AI, while Morocco, South Africa, and Tunisia have begun the process of developing policies to regulate AI use. Studies have shown that, left unchecked, AI deployment can undermine fundamental rights and freedoms enshrined in various international, regional, and national statutes, including the rights to privacy and personal security.
An ad hoc expert committee assembled by UNESCO drafted the Recommendation on the Ethics of Artificial Intelligence, which member states adopted in November 2021. The recommendation outlines ten principles, including transparency, fairness, safety, security, and non-discrimination, and calls on member states to establish strong enforcement mechanisms to remedy harm caused by any AI system.
Rashweat Mukundu, Africa Adviser for the Denmark-based International Media Support, told FNF: "What we need in Africa are policy interventions that do not limit rights and uses of AI but rather mechanisms that mitigate the negative effects of AI. For example, the security threats that AI poses to human rights defenders as well as ethical issues that AI poses, such as issues around misinformation and disinformation and the creation of false information that may result in public disorder. What's needed is dialogue among civil society organisations, governments, policymakers, and ordinary citizens in terms of what these new technologies mean for African societies and how we can raise our leverage to engage with big technology companies. We need as Africans to raise our capacity to engage with these big technology companies so that we mitigate the negative effects of AI.
"We also need robust policies to mitigate against the harm that can come as a result of the use of AI by repressive regimes. AI causes a security risk to individuals working in civil society and pushing human rights agendas. Voices can be created using AI, and individuals can be said to have uttered things that are of a criminal nature, and police and governments can act on that as truth. There is a huge security risk that comes with AI in relation to how it can be manipulated by governments, especially in Africa."
Director of the Zimbabwe Chapter of the Media Institute of Southern Africa, Tabani Moyo, told FNF that although civil society organisations in Africa were still experimenting with AI, there were concerns over algorithmic biases and the privacy of personal data. "The AI system will inherit the biases of its producer, and many AI systems are developed in the West. There is a notion that the West has its biases, and producers of AI technologies will knowingly or unknowingly perpetuate those biases. We need datasets developed by Africans and policies that ensure AI deployment adheres to principles of fairness, transparency, inclusivity, and accountability."