The ethical considerations of AI in Africa
ICT analyst Francis Hook delves into the adoption and ethical use of artificial intelligence in Africa and issues around regulatory readiness, human rights, privacy, data protection and social inclusivity for users on the continent.
Discussions around the adoption and ethical use of artificial intelligence (AI) continue to intensify around the world – with AI vendors occupying a good part of the discourse by shining a light on the merits and use cases of various AI solutions.
Meanwhile, governments, policy makers and civil society organizations are seeking to divine where the pitfalls might lie for different use cases, and whether such concerns are valid enough to merit interventions such as policies, regulations or guidelines that serve as guardrails to protect individual rights.
The lenses through which AI ethics is examined comprise several principles on which most discussions are currently premised: accountability, fairness, inclusivity, privacy, safety and transparency. The ambition of this article is not to delve into those areas, since there is no lack of such analyses, but rather to touch on how some of them demand special attention in Africa and to consider which peripheral issues may have a bearing on the ongoing discourse around AI ethics on the continent.
In Africa, these issues call for robust discussion for a variety of reasons, including concerns relating to regulatory readiness, human rights, the digitization of other systems and social inclusivity. There are also issues that seemingly lie on the periphery but are peculiar to Africa and other developing regions.
Regulatory environment
In the regulatory space, data protection legislation, which already exists in many countries, is important but only partially addresses ethical concerns around privacy and data protection. Some of this regulation lacks the breadth and finesse required to address AI's ethical concerns holistically.
Aside from data protection, various countries are at different stages of developing laws, regulations or guidelines to inform the ethical use and deployment of AI systems.
In this environment, there are different schools of thought. Some, representing governments, would rather formulate laws and regulations to govern AI (and related technologies in which AI can be leveraged); others, emanating from civil society and industry associations, feel that regulation may stifle innovation and propose instead that mutually agreed guidelines be put in place and adapted as needs emerge.
Building digital panopticons
While AI used in surveillance systems can genuinely enhance public safety and national security, its development and deployment do not always have citizens' rights and social inclusion as key features. In some instances – under the guise of addressing national security concerns – governments may be more amenable to AI solutions in which human rights are secondary to a country's desire to develop digital panopticons.
In some countries, governments are considering introducing digital identity systems designed to bring about greater inclusion and streamline government services. Within this realm, however, AI could be leveraged to misuse data from digital identification systems, deepening existing concerns about privacy, human rights and widening socio-economic rifts.
Accountable businesses
Much can be said about how social media and increased consumer connectivity lend themselves to the exploitation of consumer information by players in sectors such as telecommunications, finance and retail. However, it may be more instructive to shine a light on payment systems, including mobile money, which is pervasive across Africa.
As it stands, there are growing concerns about how players in these sectors store, secure, handle, process and share the huge amounts of data they possess.
Because mobile and electronic payments are subject to different regulations, the considerable amount of data that accrues in these systems is now readily available for use by AI systems – whether for sale to third parties for analysis or for internal use in customer segmentation, which can itself result in digital exclusion.
While some of these businesses are culpable when it comes to mobile and electronic payments, government agencies and departments that have embraced these technologies and hold other, non-payment information on citizens should come under equal scrutiny.
It may be argued that payment systems and other information systems (including government systems) predate the AI we see today, and that the owners of such systems should be allowed to operate them without further scrutiny or varied regulation. However, even personal data collected before AI's advent should still be protected.
Africa is an AI consumer
It is pertinent to recognize that Africa, for the most part, is a consumer of AI systems created in developed countries, where regulatory frameworks or policies had a bearing on their development and where local African needs were not taken into account. As such, the context used to examine ethical issues usually belongs to other (more developed) countries and may not fit Africa's needs.
Thankfully, the rest of the world is still in the process of discussing the ethical issues of a technology that not only continues to evolve but also presents new realities each day, so there remains leeway for stakeholders to identify the issues that are pertinent to Africa and articulate the continent's standpoint.
Hemispheres of influence
In some cases, the motivation informing the development of AI applications or use cases varies vastly depending on whether one is looking to the East or to the West. It also depends on whether AI applications have individuals' interests at heart or prioritize state objectives such as surveillance and national security.
Thus, when it comes to discussing ethics and regulation of AI, there are spheres of influence that invariably come along with the technology from wherever it originates, and this influence partly informs – or perhaps even clouds – the current discourse around ethics and regulation.
Currently, most of the discourse around ethics and regulation is based on Western contexts. In some instances, however, governments seeking to leverage AI for censorship, surveillance and other uses may be more enamored of countries that produce AI systems allowing them greater control over citizens. Such systems may not necessarily pay heed to issues like human rights and privacy, focusing instead on how AI can help governments consolidate power, deal with dissent and pursue other such aims.
Meanwhile, countries whose AI systems do not overtly focus on individuals' rights and privacy may also underscore such capabilities as a way of cozying up to governments amenable to AI solutions unencumbered by laws preventing use cases that may not always be ethical, legal or beneficial to citizens.
As different countries seek to deepen relations with Africa – including promoting their multi-billion-dollar digital businesses – there needs, at some point, to be pause for thought about what comes along with such deals and relationships, especially in an environment largely lacking a clear direction on AI ethics.
AI outlook for Africa
To safeguard against unethical use of AI and to ward off influence that may seek to make Africa beholden to foreign standards and ethics, it is imperative that Africa's policy makers, academia, industry players and civil society act in concert to build the foundation on which current and future AI regulation and ethics will be based. Developing quasi-legal instruments, including resolutions and guidelines, should suffice while AI continues to evolve and its impact continues to manifest itself. It is also important for such instruments to be reviewed frequently.
There is therefore a need for monitoring and evaluation frameworks, since these would go a long way toward ensuring that guidelines on AI remain suitable for changing requirements and toward fostering an environment in which regulation is agile and flexible enough to address emerging concerns.
There also need to be considerable investments in infrastructure (including data centers, connectivity and software) as well as in skills development, to start building a stable foundation on which to implement Africa's own AI systems – based on its own standards and using its own data.
This obviously needs to be a collaborative effort in which the efforts, resources and contributions of different stakeholders coalesce.
Africa's participation in international forums, unless suitably tempered by local contexts, may simply add reverberations to existing echo chambers. This can end up sustaining the very biases the continent and the wider global south are trying to eliminate, since most AI systems are developed and trained in the global north.
Overall, there are inherent risks of amplifying inequalities, further perpetuating stereotypes and stifling marginalized groups – based on gender, religion, race, tribe and economic status, among others.
Inasmuch as there is varied discourse, simply adding African voices to the ongoing chorus may not be enough. Instead, scripting a narrative that fits Africa's needs may be more opportune. This would not only earn the continent a place at the table where broader discussions are being held but also contribute to the agenda for those discussions.
Africa can no longer stand at the sidelines raising concerns about ethics in AI without implementing its own frameworks to ensure ethical use of AI.
*Top image source: Image by DC Studio on Freepik.
— Francis Hook, Africa ICT Analyst, special to Connecting Africa