The European Union (EU) has just passed a landmark act regulating Artificial Intelligence (AI). Here’s the full text. To work out the basics of this fairly extensive and complicated document, I ran Dave Shapiro’s summarized version of it through OpenAI Playground:

What You Need To Know:
Here’s a high-level debrief created from the summary of the legislation itself:
- Why: The EU made these rules to keep AI systems safe and protect people’s rights, especially when the AI could be harmful or risky.
- When: The rules will start about 2 years after they become official.
- How: The EU will have a group of experts, and each country in the EU will have a team to make sure the rules are followed.
- What: The rules focus on high-risk AI systems and include things like:
- Not allowing AI that tricks people or causes harm.
- Making sure AI systems use good data and handle it correctly.
- Making AI systems clear about what they do and how they work.
- Making sure humans can control and oversee AI systems.
- Helping small companies and encouraging new ideas in AI.
Examples from the legislation:
- AI systems that can recognize people’s faces in public places are considered high-risk and need to follow strict rules.
- AI systems that try to manipulate people’s behavior or exploit their weaknesses are not allowed.
- AI systems used in important areas like education, jobs, and healthcare must be transparent and have human oversight.
ChatGPT’s Take, From Published News:
I also decided to test ChatGPT’s ability to decode the legislation along the same lines using a CNN article:
The European Union (EU) has proposed a new law, the Artificial Intelligence (AI) Act, which will affect everyone who makes and uses AI systems in the EU, even if they are based outside of the EU:
- Who: The AI Act will apply to anyone who develops and deploys AI systems in the EU, even if they are located outside of the EU.
- Why: The goal of the AI Act is to manage the risks that AI can pose. Some AI systems could potentially harm people’s health, safety, basic rights, or the environment, or be used in harmful ways, like influencing elections. The EU wants to make sure these risky AI systems are handled carefully or banned outright.
- When: This law might start to be used from 2026, but it could change a bit before then because AI technology is changing very fast. The law has already been updated several times since they first started writing it in 2021.
- How: The law has different levels of rules based on how risky an AI system is:
- The most risky systems are banned. This includes things like real-time face recognition in public places, tools that predict crime, and systems that give people a score based on their behavior.
- There are also strict rules for “high-risk” AI systems. These are systems that could cause significant harm to people’s health, safety, basic rights, or the environment. For example, AI systems used to influence elections, or social media platforms with more than 45 million users that recommend content to their users, would be considered high-risk.
- The law also requires AI systems to be transparent. This means that AI systems like me, ChatGPT, would have to tell people that the content was made by AI, show the difference between deep-fake images and real ones, and stop the creation of illegal content.
- Systems with less risk, like spam filters, have fewer rules to follow.
- What: If someone uses an AI system that is not allowed, they could be fined a lot of money, up to €40 million ($43 million) or up to 7% of a company’s worldwide annual turnover, whichever is higher. But the law also considers the size of the company, so smaller companies might not have to pay as much if they make a mistake. The AI Act also gives people the right to complain about AI systems and asks EU countries to set up at least one “sandbox” where AI systems can be tested before they are used. This will help ensure that AI is used safely and responsibly.
- Examples: To give you a clearer picture, let’s consider a few examples. If a company outside of the EU, let’s say in the United States, develops an AI system and wants to use it in the EU, they would have to follow this law. This could be a company like Facebook, Twitter, or Instagram which have more than 45 million users and suggest content to their users. If an AI system is used to try to change how people vote in an election, that’s considered high-risk and would have strict rules to follow. Or, if a company wanted to use real-time facial recognition in public spaces, that would be banned altogether. If a company didn’t follow these rules, they could be fined a lot of money.
Remember, this is a simplified explanation, and the actual law is more complex. Also, the law might change a bit before it takes effect, because AI technology is changing very fast.
Your Thoughts?
What are your thoughts on the new EU legislation? It does a good job of moving quickly (which is needed; like GDPR, it is ahead of the curve compared to the US) and of addressing several key types of risk, though it doesn’t explicitly call out existential risk, something we may need to address in the decade(s) ahead if AGI becomes a reality.