Hollywood has colorful ideas about artificial intelligence (AI). The popular image is a future where robot armies have turned violent, putting humanity in a war against extinction.
In fact, the risks posed by AI today are more insidious and more difficult to mitigate. They are often a product of the technology’s ever-widening application across modern society and its growing role in everyday life, perhaps best highlighted by Microsoft’s latest multi-billion-dollar investment in ChatGPT developer OpenAI.
Either way, it’s not surprising that AI is generating a lot of debate, especially about how we should create regulatory safeguards to ensure we can master the technology, rather than handing over control to machines.
Currently, we deal with AI using a patchwork of laws and regulations, as well as guidance that does not have the force of law. Against this backdrop, it is clear that current frameworks are likely to change – perhaps significantly.
So, the question that demands an answer: what is the future for a technology that is destined to change the world?
Behavioral problems
As the use of AI-powered tools spreads rapidly across industries, concerns are inevitably raised about the ability of these systems to detrimentally – and unpredictably – affect people’s lives.
A colleague observed recently that there is a growing appreciation among businesses and regulators about the potential effects of AI systems on the rights and well-being of individuals.
This growing awareness has helped identify the risks, but we have not yet reached a consensus on how to address them. Why? In many cases, it’s because those risks are constantly changing and hard to see.
Often, the same tools that are used for benign purposes can also be used for harmful ones. Take facial recognition: the same technology that powers playful filters on social media can be used by oppressive regimes to suppress the rights of citizens.
In short, risks are born not only of the technology itself, but of its use. And with a technology like AI, where the number of new applications keeps growing, the solutions that are suitable today may not be suitable tomorrow.
A prominent example is the Australian Government’s Robodebt scheme, which used an unsophisticated AI algorithm to automatically, and in many cases erroneously, send debt notices to welfare recipients suspected of having received overpayments.
Intended as a cost-saving exercise, the continued attempts to recover debts that were miscalculated or not owed at all have led many to raise concerns about the scheme’s impact on the physical and mental health of those who received the notices.
Add to this the further complication of ‘black box’ AI systems, which can conceal their processes or rely on patterns that are incomprehensible to humans, making it difficult to explain to individuals how or why an AI tool reached a particular outcome. Without this transparency, the ability to recognize and challenge those outcomes is diminished, and any route to redress is effectively closed off.
Filling the gap
Another complication is that in many jurisdictions, these risks are not addressed by an AI-related law or regulation. They are instead subject to a patchwork of existing laws covering areas such as employment, human rights, discrimination, data security and data privacy.
While none of these specifically target AI, they can still be used to address its risks in the short to medium term. But, by themselves, they are not enough.
Many risks fall outside these laws and regulations, so while policymakers grapple with the far-reaching consequences of AI, industry bodies and other groups are pushing for the adoption of guidance, standards and frameworks – some of which may become standard industry practice even without the force of law.
An illustration is the National Institute of Standards and Technology’s AI Risk Management Framework, which is intended “for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems”.
Similarly, the International Organization for Standardization (ISO) joint technical committee for AI is currently adding to its 16 published non-binding standards, with more than twenty further standards in development.
The current focus of many of these initiatives surrounding the ethical use of AI is fairness, of which bias is an important element. The algorithms at the heart of AI decision-making may not be human, but they can still absorb the biases that distort human judgment.
Fortunately, EU policymakers seem to be alive to this risk. The bloc’s draft Artificial Intelligence Act addresses a range of algorithmic bias issues, arguing that the technology should be developed to avoid repeating “historical patterns of discrimination” against minority groups, particularly in contexts such as recruitment and finance.
It is expected that many other jurisdictions will look to address this issue with future AI laws, although views on how to balance regulation and innovation in practice will vary from country to country.
The race to regulate
What is interesting is that the EU looks to put the rights of its citizens at the center of its approach, in apparent contrast to the more laissez-faire attitude to technology and regulation commonly adopted in the US.
The European Commission further supplemented the draft Act in September 2022, with proposals for the AI Liability Directive and a revised Product Liability Directive that would streamline compensation claims where individuals suffer damage related to AI, including discrimination.
In contrast, some commentators argue that it is currently unclear where the UK wants to go. Its ambition to become a global leader in AI regulation remains unfulfilled, partly because of the inherent tension between post-Brexit deregulation and the need to persuade other countries to follow UK-made rules.
There are, however, some signs of the UK seeking global leadership in this space. The Information Commissioner’s Office (ICO) recently fined software business Clearview AI £7.5 million after the company scraped online images of individuals into a global database for a somewhat controversial facial recognition tool.
Clearview has since launched an appeal. However, beyond emphasizing the growing scrutiny of how even publicly available biometric data is used, the ICO’s action sends a clear message to the market: UK regulators will act quickly to address AI risks where they deem it necessary.
Out of the box
The next five years will likely mark an implementation phase in which soft guidance becomes hard law, potentially building on the progress already made through the OECD AI Principles and the UNESCO Recommendation on the Ethics of AI. But many observers expect it will be longer before anything resembling a comprehensive global AI framework emerges.
While some in the industry will be troubled by intrusive oversight from policymakers, as individuals’ appreciation of the ethical implications of the technology expands along with its application, it is hard to see how businesses can maintain public trust without strong and considered AI regulation in place.
In the meantime, discrimination and prejudice will continue to command attention as the most immediate dangers of a technology that can cause harm not only when applied with bad intentions, but also through a simple lack of diligence with unexpected consequences.
But such factors are ultimately only pieces of a larger puzzle. Industry, regulators and professional advisors face years of piecing together the full picture of legal and ethical risks if we want to remain masters of this technology, and not the other way around.