What are lawmakers and regulators doing about AI?
AI has had a big couple of years. Hugely popular and genuinely impressive AI-based tools, from Stable Diffusion and DALL-E 2 to ChatGPT, have captured the world's imagination, with many of us feeling we are on the cusp of something truly massive and revolutionary. AI is now everywhere, too, from your Spotify Wrapped playlist all the way up to government agencies and ministries handing over important powers to automated decision-making systems.
With AI seemingly able to do many of our jobs better and faster than we ever could, and aspects of our human rights being placed in the hands of advanced algorithms, it is not hyperbole to ask: should we be scared of AI, and what is being done to make sure things don't get out of hand? Today, we will look at some of the laws being drafted for exactly this reason and why they are important.
The EU AI Act
The EU AI Act is important as it marks the first attempt to create a fully horizontal law that will regulate all uses of AI. It works on risk-based criteria and categorizes applications of AI into three risk levels. The first category includes applications and systems that pose an unacceptable risk, such as social scoring by the government, which is banned. The second category includes high-risk applications, such as a CV-scanning tool that ranks job applicants, which are subject to legal requirements. The third category includes applications that are not explicitly banned or listed as high-risk, which are largely unregulated.
The EU AI Act is currently being debated by the EU and, if passed, will likely come into force in 2023 or 2024. If it does pass, there will be a grace period of 24–36 months before the main requirements take effect. If you're wondering whether this will matter to you outside the EU, think back to the GDPR. The sheer size and value of the European market, among other things, makes the EU a regulatory superpower, meaning international companies that do business there will almost certainly follow the law.
The Blueprint for an AI Bill of Rights
You might have guessed from the language used, but the Blueprint for an AI Bill of Rights is a US initiative. The White House Office of Science and Technology Policy has released a blueprint for an AI Bill of Rights that outlines five principles to guide the design, use, and deployment of automated systems in the age of artificial intelligence. The principles, designed to protect the rights of the American public, cover safe and effective systems; protections against algorithmic discrimination; data privacy; notice and explanation when automated systems are used; and human alternatives, consideration, and fallback. The blueprint is accompanied by a handbook for incorporating these principles into policy and practice.
It is good news to see the White House discussing these kinds of issues, and in this kind of language, but the substance remains fairly basic, and, more importantly, it is not binding legislation. In that light, it is best seen as an educational tool designed to raise awareness of the issue without going as far as taking binding action in the way the EU AI Act will.
Other examples of AI laws and regulations
There is no doubt that the EU AI Act is the biggest player in this field, but even it falls short in many regards as EU regulators try to shy away from the "the US innovates and the EU regulates" cliché. A risk-based framework has its merits, but human rights and harms-based approaches would better protect everyday citizens from cold algorithmic logic that may not be fully trained on, or aware of, the individual intricacies of their lives. It is heartening, then, that a plethora of national regulations are on the table in a wide variety of countries, including Canada, China, the UK, Brazil, and more. Whether these will take up the yoke of human rights and protect against AI-based abuses remains to be seen. So, when it comes to asking whether we should be scared of AI in the face of so many risks, even as we seek to unleash its mind-blowing potential, the answer is: maybe.