EU's Laws on AI - An Explainer
The European Union has often set worldwide precedents by enacting laws for the issues facing the modern world, from carbon pricing to artificial intelligence. In today’s article, we briefly explain what the recently adopted AI laws are all about.
Given the increasing role of AI in everyday life, the European Union set out on a mission to develop a human-centric approach to AI, ensuring that its utility to humans is maximized and its ill effects are limited.
The laws seek to classify all AI systems into four categories through a risk-based approach: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk.

While systems in the ‘Unacceptable Risk’ category are prohibited outright, all other categories are subject to a set of requirements concerning transparency, cybersecurity, and copyright before they can access EU markets. The laws also include provisions for a controlled environment in which path-breaking AI systems can be developed and tested; in technical parlance, this is known as establishing a ‘regulatory sandbox’.
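To make the tiered structure concrete, here is a minimal sketch in Python of one way to model the four risk categories and the obligations attached to each. The category names come from the laws themselves, but the enum, the example obligations, and the helper function are purely illustrative assumptions, not anything defined in the legislation.

```python
from enum import Enum

class RiskTier(Enum):
    # The four tiers named in the EU's risk-based approach
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative (not official) mapping of tiers to market-access requirements
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: ["transparency", "cybersecurity", "copyright compliance"],
    RiskTier.LIMITED: ["transparency"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

for tier in RiskTier:
    print(f"{tier.value:>12}: {obligations_for(tier) or 'no extra requirements'}")
```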
While these laws are a major step toward regulating AI, a few issues have been raised:
Defining ‘AI Systems’
The adopted laws define ‘AI systems’, the subject of all the regulations, very broadly. They also do not clarify which obligations apply to different value-chain participants, such as providers of AI systems versus companies that merely use open-source AI models without developing them. This could lead to legal uncertainty and implementation loopholes.
Compliance Costs
Reports suggest that the cost of complying with the AI laws could be significant enough to deter SMEs from undertaking innovation and research in the field. However, estimates of the exact magnitude of these costs conflict.
Enforcement Efficacy
Currently, preliminary risk classification has been entrusted to the self-assessment of AI developers. Additionally, no clear institutions have been established through which stakeholders such as consumer organizations can participate in the development of standards or file complaints. These problems raise questions about how effectively such laws can be enforced.
The AI laws give the European Commission the power to levy fines of up to $38 million or 7% of a violator’s global revenues, so their scope is indeed huge. Whether or not this will translate into the flourishing of a safe and transparent AI industry is a question that only the future can answer.
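As a quick back-of-the-envelope illustration of that scale, here is a small Python sketch of the fine calculation. It assumes the applicable ceiling is whichever of the two amounts is higher; the company and its revenue figure are entirely hypothetical.

```python
def max_fine(global_revenue_usd: float,
             fixed_cap_usd: float = 38_000_000,
             revenue_share: float = 0.07) -> float:
    """Return the higher of the fixed cap and 7% of global revenues.

    The 'whichever is higher' rule and the dollar figures are assumptions
    for illustration, not a reading of the legal text.
    """
    return max(fixed_cap_usd, revenue_share * global_revenue_usd)

# A hypothetical company with $2 billion in global revenues:
print(f"${max_fine(2_000_000_000):,.0f}")  # $140,000,000
```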
Readers’ Questions
Q. Are Machine Learning and AI the same thing?
A. Machine Learning is the technique of developing algorithms and models that computer systems can use to perform complex tasks without explicit instructions. Instead, the machines draw inferences from past data.
AI, on the other hand, is a wide-ranging concept that includes all techniques for making machines resemble human intelligence. Thus, AI and ML are not the same; ML is one of the branches of AI.
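To make the ‘no explicit instructions’ point concrete, here is a tiny sketch of the machine-learning approach: instead of coding a rule, we fit a model to past data and let it infer the pattern. It uses scikit-learn’s LinearRegression purely as an example, with made-up numbers; any learning algorithm would illustrate the same idea.

```python
from sklearn.linear_model import LinearRegression

# Past data: hours studied vs. exam score (made-up numbers for illustration)
hours = [[1], [2], [3], [4], [5]]
scores = [52, 58, 65, 71, 78]

# No explicit rule is programmed; the model infers the relationship from the data
model = LinearRegression().fit(hours, scores)

# Predict the score for 6 hours of study (roughly 84 for this toy data)
print(model.predict([[6]])[0])
```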
If you want us to answer your burning questions about the world of AI, send them across through this form :)