By Chan Shi Yun, TIME
Published: 19 Mar 2019 - 09:53 PM
With rights come restrictions, and with rights come responsibilities; AI is no exception. If AI were granted rights, they would also bear the responsibility not to abuse them. Yet as technological tools of human society, most AI, sentient AI aside, lack the common sense to do so. Worse, if AI are given rights, humans might hack into an AI system and abuse those rights on its behalf. A new question therefore emerged during the Council Session: 'How do we control the extent of the rights of AI?' The question sparked a heated debate, with delegates expressing their views on the issue strongly.
The delegate of the United States of America (USA) suggested that the system be based on a 'monitor-and-control' guideline to prevent the exploitation of AI. The delegate further emphasized that AI should operate under a code of conduct allowing humans to interfere with and interrupt the system in the event of a malfunction. This proposition was widely accepted by the other delegates in the council, especially the idea of a code of conduct, which, as the delegate of India explained, means maintaining and managing AI to the point where it is no longer considered a foe but a friend. Following up on the USA delegate's suggestion, many other delegates began to propose ways in which the code of conduct could be carried out.
The delegates brought up three main solutions: the Kill-Switch system, the imposition of frequent checks on AI systems, and more government tests and experiments during the development of AI to ensure the safety of the AI being produced.
Firstly, the Kill-Switch system was suggested by the delegate of Nigeria, who argued that it would allow humans to retain ultimate control over AI and have the final say in robotic decision-making. Under this system, activating the switch deactivates all functions and actions of the AI, rendering it inert and unable to perform its normal functions. However, the delegates of Turkey, Spain and Portugal opposed this suggestion, reasoning that a tool possessing a higher level of intelligence might find a way to counter the kill switch, rendering the system unable to serve its purpose of deactivating the AI.
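The kill switch described above can be pictured as a simple software guard, sketched below as an illustration only (the class names and design are assumptions, not any delegate's actual proposal): every action the AI takes is gated on a switch that only a human operator can flip, and once flipped, all further functions are refused.

```python
import threading


class KillSwitch:
    """Illustrative kill switch: once triggered by a human operator,
    it permanently blocks the agent from acting."""

    def __init__(self):
        self._triggered = threading.Event()

    def trigger(self):
        # A human flips the switch; the agent has no method to unset it.
        self._triggered.set()

    @property
    def active(self):
        return self._triggered.is_set()


class AIAgent:
    """Hypothetical agent whose every action is gated on the switch."""

    def __init__(self, kill_switch):
        self._kill_switch = kill_switch

    def act(self, action):
        # After activation, all functions are refused, rendering
        # the agent unable to perform its normal functions.
        if self._kill_switch.active:
            raise RuntimeError("kill switch engaged: action refused")
        return f"performed {action}"


switch = KillSwitch()
agent = AIAgent(switch)
agent.act("sort mail")  # normal operation
switch.trigger()        # human intervention
try:
    agent.act("sort mail")
except RuntimeError as err:
    print(err)  # action is now refused
```

The Turkish, Spanish and Portuguese objection maps onto a real design concern: here the guard lives in the same process as the agent, so a sufficiently capable agent (or an attacker) could bypass it; a robust design would place the switch outside the agent's control entirely.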
Secondly, the imposition of regular checks on AI systems was suggested by the delegate of China. He explained that frequent checks not only reduce the risk of an AI malfunctioning but also test the reliability of the system, preventing future violations of it. This solution received the support of many other delegates, who felt it was a feasible and practical solution in the long run.
Thirdly, the delegate of Nigeria further elaborated that more tests and experiments could be carried out on AI during the development phase. He suggested that the government train and assemble a team of specialised technical personnel who fully comprehend the AI system. This team could then carry out the tests and, in the event of a malfunction, be relied on to resolve the issue as efficiently as possible. This solution also drew the support of many other delegates, who felt it was a feasible and effective way to prevent the exploitation of AI and to uphold the ethics of AI.
As the Council closed, the delegates reached a consensus that for a solution to be effective, it must fulfil one essential condition: every AI must be subject to a degree of human interference and control, so that in any event of malfunction or abuse of rights, humans can contain the situation before the AI harms the general public or commits any other act that is cause for concern.
Picture depicts how a lack of machine ethics may, in future, cause the relationship between humans and robots to break down.