Introduction
Artificial intelligence (AI) is widely expected to revolutionise the 21st century and drive the fourth industrial revolution, greatly improving people's lives. It exists in various forms, with the common characteristic being the ability of a computer system to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
As businesses begin to implement AI as part of their digital transformation initiatives, or indeed as a critical part of their core business, there is a need to focus on the legal risks and issues related to the use of personal data in AI, the potential bias and unpredictability of its output, and the attendant ethical issues. These legal issues can accumulate into substantial risk for businesses and present potential liabilities.
This newsletter briefly examines the potential legal risks, issues, challenges and ethics of the adoption of AI, and the need to identify and address them.
AI and Ethics
Generally, there is an ethical code that guides human interaction, an understanding of what is right and wrong, and a system of laws to support it. A similar code and set of rules is required for AI. This has become a strategic priority for governments and private industry, hence the approval of the Recommendation on Artificial Intelligence by the Organisation for Economic Co-operation and Development (OECD) in May 2019. The document contains principles which promote AI that is innovative and trustworthy and that respects human rights and democratic values. Similarly, the European Union's High-Level Expert Group on AI has developed a set of ethics guidelines to ensure that trustworthy AI adheres to human rights principles and values.
These recommendations and guidelines on the ethical standards expected of AI are not legally binding, but they establish international standards that should strongly influence national legislation in this field.
AI, Data Privacy and Protection
AI is designed to perform tasks utilizing data, sometimes in an opaque manner, creating an obvious challenge for compliance with data protection rules. Furthermore, the principle of purpose limitation requires that the reason for processing personal data be clearly indicated when the data is collected, yet machine learning often takes place using datasets collected for other purposes. For this processing to comply with the law, the controller must assess the existence of an applicable legal ground, such as legitimate interest or research and statistical purposes, and consider whether the purpose of training the AI algorithm is compatible with the initial purpose of processing or falls within any of the applicable exceptions under existing laws.
Without doubt, data protection frameworks across jurisdictions will apply to controllers and processors, such as those who developed the AI and those who continue to maintain it. As progress is made with AI, the data protection principles and rules currently contained in laws and regulations such as the EU GDPR and the Nigeria Data Protection Regulation will become stretched. Consequently, data protection legislation must continue to evolve to ensure lawfulness and the protection of the interests involved.
Legal Responsibility
With several parties involved in an AI system (data provider, equipment manufacturer, programmer, developer, user and the AI system itself), liability becomes difficult to establish where the AI fails to perform or malfunctions. Specific rules are being formulated in certain sectors to deal with the risks posed by AI. For example, the UK is proposing to introduce rules under which the insurer will generally bear primary liability in the case of accidents caused by autonomous vehicles. In the absence of legislation relating to AI, redress for victims who have suffered damage or losses as a result of a failure of AI would most likely be sought under the tort of negligence.
The ability of AI to accumulate experience and learn from it, as well as to act independently and make individual decisions, creates preconditions for damage. However, there is some degree of consensus, based on the current applicable laws across jurisdictions, that AI lacks the legal personality required for the imposition of liability; its ability to reason like humans remains limited, as it is only a tool used to aid humans and enhance their productivity. Liability for any harm, such as that which may be caused by an autonomous vehicle, will therefore lie with the data provider, equipment/device manufacturer, programmer, developer, or user, depending on the nature of the error leading to the failure and the consequent damage or loss.
It is key, however, for AI systems to be aware of their limitations, and for these limitations to be communicated to the purchaser or user. It is well established that vendors have a duty to inform purchasers of any known flaws in their systems. Furthermore, where a robot programmed with open-source software can be modified by the purchaser, the purchaser may be held liable for damage caused by the modification made to the robot. Hence the need for businesses to understand the nature of, and responsibility attached to, the AI they are adopting, modifying or customizing. Where the error involves incorrect output leading to damage, the data provider or programmer may be held responsible. It is important, therefore, that data protection legislation continues to develop to provide new layers of requirements to regulate the intelligence and autonomy inherent in AI.
Conclusion
We have considered how AI raises important legal, ethical, and data privacy and protection issues. We are presented with an opportunity, and an obligation, to continue to develop principles and accountability tools that encourage responsible practices and respect for human rights, and that dispel fears of any harm the technology may occasion. There is a need for international collaboration in the development of global standards and guidelines for the development and adoption of AI, not only to encourage innovation but also to stimulate adoption.