Human security and the ethics of Artificial Intelligence

Dr Bertrand Ramcharan

By Dr Bertrand Ramcharan, Seventh Chancellor of the University of Guyana and sometime Fellow of Harvard University

There is an intense debate underway about the risks that artificial intelligence (AI) might pose to human security and human rights and, indeed, about whether AI systems might one day surpass and control humans. Up to now, the major corporations involved in the development and application of AI have been in the driver’s seat and have sought to retain control over whether, and how, AI might be regulated. They ostensibly favour self-regulation.

The UN Secretary-General has established an Advisory Group on AI and the UN General Assembly, earlier this year, adopted an initial resolution setting out broad guidelines on the matter. The UN Secretary-General has even suggested that the UN should establish a regulatory agency on AI, and this idea is supported by researchers and academics but resisted by the leading AI corporations.

The security and ethical implications of AI are potentially serious, and scholars have been seeking to provide insights on how we might assess and deal with the risks involved. Peter Kirchschläger, Professor of Ethics at the University of Lucerne in Switzerland and a leading scholar in the field, has argued that if we wish that “AI would no longer serve just the special interests of a few multi-national technology corporations, but should enable all people to live in dignity, and for the planet to have a sustainable future, we should act accordingly and tame the beast”. He believes that we urgently need an independent regulatory agency at the United Nations, similar to the International Atomic Energy Agency (IAEA).

British academics Nigel Shadbolt and Roger Hampson, in a book just published, address the much-debated issue of ethics and artificial intelligence and provide helpful lines of discussion for consideration by governments, leaders, and the general public.

Artificial intelligence, they explain, is a branch of computer science that deals with the creation of systems capable of performing tasks that would ordinarily require human intelligence – systems that can reason, learn, and act autonomously.

AI research, they continue, has been highly successful in developing effective techniques for solving a wide range of problems, from game playing to medical diagnosis. All artificial intelligences, so far, subsist in digital computing machines, or in the digital computing parts of other machines, everything from washing machines to streetlights to robots. They may soon be instantiated in other materials, notably ones derived from biology.

Shadbolt and Hampson recall that many different approaches and methods that are important to computer science today were developed in the AI field. Two particularly influential groups of techniques concern rule-based systems and machine learning systems. Rule-based systems rely on a set of explicit rules that define how the system should behave. An early type of AI, they are still used in many applications such as medicine or finance, where they make decisions or advise on a particular course of action.
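A rule-based system of the kind the authors describe can be sketched in a few lines of Python. The medical-style rules and thresholds below are purely illustrative assumptions, not drawn from the book or from any real clinical system; the point is only that the system's behaviour is fixed entirely by explicit, hand-written rules.

```python
# Minimal sketch of a rule-based system: the behaviour is determined
# by explicit if/then rules written by a human, not learned from data.
# All thresholds and advice strings here are illustrative assumptions.

def triage_advice(temperature_c: float, heart_rate_bpm: int) -> str:
    """Apply explicit rules in order and return the first that fires."""
    if temperature_c >= 39.0 and heart_rate_bpm >= 120:
        return "urgent: refer to emergency care"
    if temperature_c >= 38.0:
        return "moderate: schedule a same-day appointment"
    return "routine: no immediate action required"
```

Because every rule is visible in the source, such a system is easy to inspect and explain, which is one reason rule-based approaches persist in domains such as medicine and finance.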

Machine learning systems, by contrast, learn patterns from data, and these patterns drive their behaviour. They are not explicitly programmed to solve a task; rather, they discover patterns in data that may lead to effective decisions. The most recent developments in machine learning have seen the emergence of generative AI, whose goal is to create new data samples that resemble the data it was trained on. The data created can be music, images, or text.
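The contrast with a rule-based system can be made concrete with a toy learner. In this minimal sketch (the labelled examples are invented for illustration), no decision rule is written by hand; instead the classification of a new measurement is derived from labelled training data, here by a simple nearest-neighbour comparison.

```python
# Minimal sketch of a machine learning system: no explicit decision
# rules are written; the decision is derived from labelled examples.
# The toy data below are illustrative assumptions only.

def nearest_neighbour(train, query):
    """Classify `query` with the label of the closest training point."""
    closest = min(train, key=lambda pair: abs(pair[0] - query))
    return closest[1]

# Labelled (measurement, label) pairs the system "learns" from.
examples = [(36.5, "normal"), (36.9, "normal"),
            (38.6, "fever"), (39.4, "fever")]
```

Change the training data and the system's behaviour changes with it, without any rule being rewritten, which is exactly the property that distinguishes learning systems from rule-based ones.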

Shadbolt and Hampson point out that emergent breakthrough technologies present societies with many challenges. They recall that attempts have always been made to control each new scientific potentiality, via conventions and treaties, codes of conduct, regulations and laws. They write: “We now see this with artificial intelligence. Artificial intelligence is everywhere in nearly every modern thing, not least smartphones. Recent developments in machine learning have given rise to generative AI, which is able to spawn very extensive content: like ChatGPT, which can answer a huge range of general questions and many very specific professional ones”.

Addressing the ethical dimensions of AI, Shadbolt and Hampson recall that, historically, what it means to be ethical has varied from society to society, and also within each society. It has also varied from decade to decade, if not faster: “There is not now, and never will be, one single set of answers to the ethics of artificial intelligence”.

Shadbolt and Hampson assess that, despite widespread existential anxiety about artificial intelligence, humans remain firmly in charge, and they believe humans will continue to be.

They counsel that “We will have to take steps to ensure that this is so, and to properly manage the transformative effects on our economy and society”.

They advance the thesis that the human and the machine should be held to the same ethical standards. The difference, they suggest, is that we can and should be stricter in our enforcement with machines than with humans. They write: “We should want them to behave like the best of us. We should judge them morally as if they were humans. We should not allow them to fall short in ways that we would not tolerate in people. We should not expect them to be ethically superior to us.”

Shadbolt and Hampson discuss the risks that AI poses to human rights. While AI is a powerful tool that can be used to improve human lives, it can also threaten human rights. “Badly at risk is the right to be heard” – in the context, for example, of judicial sentencing guidelines operated by AI. There is, they assert, a right to a human decision in the context of artificial intelligence.

They cite Oxford Professor John Tasioulas, who has written that the right to be heard by a human is a moral right that arises from the value of human dignity and autonomy. It is essential for a fair and just society: “The right to be heard is the right to be able to participate in the decision-making process that affects your life. And the right to have your opinion considered.”

The threat, in Tasioulas’s view, arises from AI’s ability to make decisions without human input, or consent, and to make decisions that are discriminatory or violate our rights. We need to protect the right to be heard by a human if we want to live in a fair and just society. AI should be used in a way that respects our rights and that does not violate our fundamental freedoms.

Tasioulas believes that there are a number of ways to protect the right to be heard from encroachment by AI. He suggests that we should:

Require that AI systems be transparent and that they should explain their decisions;

Require that AI systems be accountable to humans; and

Ensure that AI systems are used in a way that respects human rights.

Shadbolt and Hampson write: “We have argued that significant decisions about individuals should be made transparently and accountably, within a framework that allows for appeal to a hearing by a fellow human. Machines should be subject to the same moral and legal frameworks that we currently apply to medical research, cloning, and biological warfare. Datasets should be open to the public, competing institutions, corporations and groups, except where the data relates to identifiable people. Social machines that combine the intelligences of many individuals into powerful collective human intelligences should be nurtured and directed toward humane social and political ends. Discussion of the dangers and opportunities presented by the world of intelligent machines should be as central to our cultural life as are our arguments about other global challenges.”

They conclude the book with the following seven ‘proverbs’:

A thing should say what it is and be what it says.

Artificial intelligence should show respect for human beings.

Artificial intelligences are only ethical if they embody the best human values.

Artificial intelligence should be transparent and accountable to humans.

Humans have a right to be judged by humans if they so wish.

Decisions that affect a lot of humans should involve a lot of humans.

The future of humanity must not be decided in private by vested interests.

It is plain that we are faced here with formidable new challenges when it comes to the security and ethical implications of AI. So far there has been little consensus on applicable rules and principles: nationally, regionally, or internationally. Young countries such as the members of CARICOM face pressing challenges in meeting the subsistence needs of their peoples. AI might be able to contribute to the quest for development, but we should be mindful of the attendant risks. A debate is getting underway internationally. It would be helpful to the governments and peoples of CARICOM for such a debate also to commence within CARICOM.