What are lawmakers and regulators doing about AI?

Patrick Devaney
Dec 14, 2022
Updated • Dec 14, 2022

AI has had a big couple of years. Hugely popular and genuinely impressive AI-based tools, from Stable Diffusion and DALL-E 2 to ChatGPT, have captured the world’s imagination, with many of us thinking we are on the cusp of something truly massive and revolutionary. AI is everywhere these days too, from your Spotify Wrapped playlist all the way up to government agencies and ministries handing important powers over to automated decision-making systems.

Do we need to be scared of AI?

With AI seemingly able to do many of our jobs better and faster than we ever could, and with aspects of our human rights being placed in the hands of advanced algorithms, it is not hyperbole to ask: do we need to be scared of AI, and what is being done to make sure things don’t get out of hand? Today, we are going to look at some of the laws being drafted for exactly this reason and why they are important.

The EU AI Act

The EU AI Act is important as it marks the first attempt to create a fully horizontal law that will regulate all uses of AI. It works on risk-based criteria and categorizes applications of AI into three risk levels. The first category includes applications and systems that pose an unacceptable risk, such as social scoring by the government, which is banned. The second category includes high-risk applications, such as a CV-scanning tool that ranks job applicants, which are subject to legal requirements. The third category includes applications that are not explicitly banned or listed as high-risk, which are largely unregulated.


The EU AI Act is currently being debated by the EU and, if passed, will likely come into force in 2023 or 2024. If it does, there will then be a 24-36 month grace period before the main requirements take effect. If you’re wondering whether this will matter to you if you live outside the EU, think back to the GDPR. The sheer size and value of the European market, among other things, makes the EU a regulatory superpower, meaning international companies that do business there will almost certainly follow the law.

The Blueprint for an AI Bill of Rights

You might have guessed it from the language used, but the Blueprint for an AI Bill of Rights is a US initiative. The White House Office of Science and Technology Policy has released a blueprint for an AI Bill of Rights that outlines five principles to guide the design, use, and deployment of automated systems in the age of artificial intelligence. The principles, which are designed to protect the rights of the American public, include ensuring that AI systems are fair, transparent, and accountable; protecting privacy and civil liberties; promoting access to the benefits of AI; encouraging collaboration and innovation; and ensuring that AI systems are sustainable and safe. The blueprint is accompanied by a handbook for incorporating these principles into policy and practice.

It is good news to see the White House discussing these types of issues, and in this type of language, but it covers very basic ground and, on top of that, is not binding legislation. In this light, it could be seen as an educational tool designed to raise awareness of the issue without going as far as taking binding action in the way the EU AI Act will.

Other examples of AI laws and regulations

There is no doubt that the EU AI Act is the biggest player in this field, but even it falls short in many regards as EU regulators try to shy away from the “the US innovates and the EU regulates” cliché. A risk-based framework has its merits, but human rights and harms-based approaches would better protect everyday citizens from cold algorithmic logic that may not be fully trained on, or aware of, the individual intricacies of their lives. It is heartening, then, that a plethora of national regulations is on the table in a wide variety of countries, including Canada, China, the UK, and Brazil. Whether these will take up the mantle of human rights and protect against AI-based abuses remains to be seen. Therefore, when it comes to asking whether we should be scared of AI, in the face of so many risks as we seek to unleash its mind-blowing potential, the answer is: maybe.



  1. John G. said on December 14, 2022 at 4:08 pm

    To regulate artificial intelligence they should have enough natural intelligence first. Thanks for the article.

  2. Tom Hawack said on December 14, 2022 at 5:29 pm

    Quoting this informative article:
    “Therefore, when it comes to asking if we should be scared of AI in the face of so many risks as we seek to unleash its mind-blowing potential, the answer is: maybe.” Indeed, given general awareness and the specific tools described in the article, any other answer would require facts and arguments unless explicitly stated as subjective… and subjectively it will be: make that “maybe” a “likely”.

    Science defines a rational approach to discover the mysteries of the universe, to explain them, and to use them for the best as for the worst. We learn. At the same time, a scientific mind may undoubtedly be carried beyond reasonable risk when perceiving that its quest, its Grail, is one hand away: Dr. Strangelove may concern a mad scientist’s hysteria and not only an army’s dangerous Bozo. Societies have firewalls, as described in the article, but in the end the real power is not at the “rifle’s arm length” as Mao stated it, but in the minds of those who know: knowledge, which means that society will need experts as well as humanists to achieve true protection.

    This said, personally, even if I understand the best AI may allow, I’m not sure it bypasses the worst it could potentially achieve. Up to now we’ve more or less controlled nuclear weapons, so hopefully we will control AI as well, provided the comparison makes any sense.

    1. Tom Hawack said on December 14, 2022 at 5:33 pm

      Edit:
      “I’m not sure it bypasses the worst it could potentially achieve”: balances, not bypasses.

  3. just an Ed said on December 14, 2022 at 6:08 pm

    I find the terminology deceptive. It is indeed artificial, but it is by no stretch of the imagination intelligence. These systems are semi-autonomous computerized aids that are trained by weighting different items presented on specific topics. They are more akin to idiot savants than to an intelligence. To view them as more than aids is to fall prey to marketing hype.
    This is not meant to disparage their usefulness in specific situations; but to view them as incipient Skynets, à la Terminator, is to thoroughly misunderstand their abilities, their very nature.
    My two cents: we’ll have fusion before we have a true AI.

  4. Anonymous said on December 14, 2022 at 8:26 pm

    Intelligence implies learning from experience. How does a CV vetting tool gain experience from all the CVs it rejects?
