Determining ethical boundaries for artificial intelligence is a growing concern for governments around the globe. In a digital landscape that continuously evolves, in part thanks to AI itself, the theoretical guardrails proposed by Isaac Asimov are no longer enough. It isn’t sufficient to say “do no harm” when identifying that harm is mired in shades of gray. Yet despite their fantastical nature, Asimov’s Three Laws of Robotics still have merit for the future of artificial intelligence: an ethical framework surrounding the use and development of AI technology is vital.
In 2021, UNESCO member states adopted a set of 10 core principles that lay out a human-rights-centered approach to the ethics of AI. In brief, the principles are:

- Proportionality and Do No Harm
- Safety and Security
- Right to Privacy and Data Protection
- Multi-Stakeholder and Adaptive Governance & Collaboration
- Responsibility and Accountability
- Transparency and Explainability
- Human Oversight and Determination
- Sustainability
- Awareness and Literacy
- Fairness and Non-Discrimination
These core principles, if enacted and adhered to by all, can work. Yet there are a few areas of concern. First, when examining the ethos of AI, one must decide whose ethos to use. Not all countries and peoples share the same ethos, laws, morals, and customs. What one finds appropriate, another may balk at, and that disagreement can derail any effort to curtail AI abuses.
Second, when international cooperation is sought, will some countries be excluded? Powerful countries tend to neglect others when making decisions that have global effects. This concern bleeds into the other principles: who will hold bad actors accountable when the bad actor is a powerful nation? How will AI be made accessible to all when, in many places, basic needs still go unmet?
AI needs guardrails to protect against abuses. Many people have already experienced AI-enabled scam calls, deepfake videos, election interference, IP infringement, and entrenched biases. Ethics in AI isn’t a static concept; it is dynamic, requiring continuous dialogue and adaptation. Ten years ago, deepfake videos weren’t an issue. Today, they are. Five years ago, voice cloning was merely a trope in action movies; now, it is a real threat. AI has evolved so quickly that the world is racing to catch up.
Governing bodies need comprehensive policies that cover the entire AI lifecycle and address both AI actors and the underlying technological processes. These policies should be formulated and implemented with human rights at the center throughout that lifecycle. Governments worldwide must also have the expertise and tools needed to acquire and use AI technologies ethically.
AI innovation is important for society, but innovation shouldn’t come at the cost of humanity itself. Countries and regions are making significant strides in AI regulation, with the EU’s proposed AI Act and the US’s Blueprint for an AI Bill of Rights serving as examples. AI abuses such as skin color bias in facial recognition, “hallucinations,” and training models on artists’ and writers’ work without consent need to be addressed swiftly. Right now, unregulated AI shows that the potential for harm can outweigh the potential for good.
About the Author
Dr. Ellen Waithira Karuga is an expert in software engineering and the founder of Vista Prime Solutions Ltd. With a doctoral degree in Management and Leadership, an MBA in Strategic Management, and a B.Sc. in Computing and Information Systems, she focuses on Artificial Intelligence (AI), security policy, database management, and the future of ERP systems. Ellen excels in strategy, change management, and team leadership. Most recently, she was a keynote speaker and panelist at the 4th edition of the Nation Digital Summit.