The AI Act - A New Era of AI Regulation in the EU
- evanschwartz2
- Mar 19, 2024
- 5 min read
Updated: Jan 21

The European Union has always been at the forefront of digital rights and privacy regulations, and its latest move - the AI Act - is no exception. This comprehensive law on artificial intelligence (AI) aims to establish a legal framework for developing and using AI systems while ensuring ethical safeguards and transparency.
The AI Act, officially approved by the European Parliament after three years of deliberation, will come into effect in May. The changes it brings will be noticeable by the end of the year.
The AI Act: What Changes?
The AI Act introduces several critical changes to how AI is used and regulated.
Firstly, it bans specific AI uses that pose a high risk to people's fundamental rights in areas like healthcare, education, and policing. It also bans uses that pose an "unacceptable risk," such as AI systems that deploy manipulative techniques or infer sensitive characteristics. However, law enforcement agencies can still use sensitive biometric data and facial recognition software to combat serious crime in public places.
The AI Act identifies several specific uses of AI as high-risk. These include:
Remote Biometric Identification Systems: All such systems are considered high-risk and are subject to strict requirements. Remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited, though narrow exceptions remain for serious circumstances.
AI Systems for Access, Admission, and Education: AI systems used to determine access or admission or to assign people to education or training institutions are classified as potential high-risk AI systems. Similarly, AI systems used to evaluate learning outcomes or assess the appropriate level of education are also considered high risk.
AI Systems in Specific Areas: AI systems that could negatively affect safety or fundamental rights are considered high risk. These systems are divided into two categories:
AI systems used in products that fall under the EU's product safety legislation, such as toys, aviation, cars, medical devices, and lifts.
AI systems in specific areas that must be registered in an EU database, including the management and operation of critical infrastructure.
Please note that although the European Parliament has approved the AI Act, some formal steps remain before it takes full effect, and implementation details may still be refined. The classification of AI systems as high risk depends on their intended purpose and specific application areas, not necessarily the technology itself.
Secondly, the Act requires tech companies to label deepfakes and AI-generated content and to notify people when they are interacting with a chatbot or other AI system, although the technology to reliably detect AI-generated content is still in development. The AI Act defines AI-generated content as output produced by machine-based systems designed to operate with varying levels of autonomy; such output can include predictions, recommendations, or decisions that influence physical or virtual environments. The Act imposes specific transparency obligations on providers of AI-generated content. For instance, AI-generated text intended to inform the public on matters of public interest must be clearly labeled as artificially generated, a requirement that also extends to audio and video content, including deepfakes.
The AI Act is designed to ensure that AI systems used within the European Union are safe, transparent, traceable, non-discriminatory, and environmentally friendly. It also establishes different rules for AI systems based on their risk levels.
Please note that the AI Act is specific to the European Union, and its regulations may not apply globally.
Thirdly, the AI Act establishes a new European AI Office to coordinate compliance, implementation, and enforcement. Citizens in the EU can submit complaints about AI systems when they suspect they have been harmed by one and receive explanations on why the AI systems made the decisions they did.
Lastly, AI companies developing technologies in "high-risk" sectors will have new obligations, including better data governance, ensuring human oversight, and assessing how these systems will affect people's rights. Companies with the most powerful AI models will face more stringent requirements, such as performing model evaluations, assessing and mitigating risks, ensuring cybersecurity protection, and reporting serious incidents where the AI system failed.
The AI Act: What Doesn't Change?
While the AI Act introduces significant changes, it leaves many aspects of AI use unregulated: applications that are neither explicitly banned nor listed as high-risk face few new obligations. This means that many AI applications, from recommendation algorithms to predictive analytics, will continue to operate as they have been.
Recommendation Algorithms
Recommendation algorithms are a type of information filtering system that predicts the "rating" or "preference" a user would give to an item. Here are a few examples:
Collaborative Filtering: This method predicts a user's interests by collecting preferences from many users. The assumption is that if user A has the same opinion as user B on an issue, A is more likely to have B's opinion on a different issue.
Content-Based Filtering: This method uses information about the description and attributes of the items a user has previously consumed to model the user's preferences. These algorithms recommend items similar to those the user liked in the past.
Hybrid Systems: These systems combine collaborative filtering and content-based filtering. Hybrid approaches can be implemented in several ways, such as by making content-based and collaborative-based predictions separately and then combining them or by unifying the approaches into one model.
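To make collaborative filtering concrete, here is a minimal pure-Python sketch. The `ratings` data and helper names are illustrative, not taken from any library: it computes cosine similarity between users over their commonly rated items, then predicts a missing rating as a similarity-weighted average of other users' ratings.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity over the items both users have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(a[i] ** 2 for i in common))
    norm_b = sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def predict_rating(ratings, user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, their in ratings.items():
        if other == user or item not in their:
            continue
        sim = cosine_similarity(ratings[user], their)
        num += sim * their[item]
        den += abs(sim)
    return num / den if den else None

# Toy user-item rating matrix (1-5 stars), purely illustrative.
ratings = {
    "alice": {"film_a": 5, "film_b": 3},
    "bob":   {"film_a": 4, "film_b": 3, "film_c": 4},
    "carol": {"film_a": 5, "film_b": 2, "film_c": 5},
}

# Predict how alice would rate film_c from bob's and carol's ratings.
print(round(predict_rating(ratings, "alice", "film_c"), 2))
```

Real systems replace this brute-force loop with factorized or approximate-nearest-neighbor methods, but the underlying idea, weighting other users' opinions by how similar they are, is the same.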
Predictive Analytics
Predictive analytics uses statistical techniques from data mining, predictive modeling, and machine learning to analyze current and historical facts to make predictions about future or otherwise unknown events. Here are a few examples:
Regression Analysis: This is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, focusing on the relationship between a dependent variable and one or more independent variables.
Decision Trees: This is a flowchart-like structure in which each internal node represents a "test" on an attribute, each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes).
Neural Networks: These are a set of algorithms, modeled loosely after the human brain, designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input.
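As a concrete illustration of the simplest of these techniques, regression analysis, here is an ordinary least-squares fit of one independent variable in plain Python. The advertising-spend data points are made up for the example:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical historical data: advertising spend vs. sales.
spend = [1.0, 2.0, 3.0, 4.0]
sales = [2.1, 3.9, 6.2, 7.8]

slope, intercept = fit_line(spend, sales)
prediction = slope * 5.0 + intercept  # forecast sales at spend = 5.0
print(round(slope, 2), round(intercept, 2), round(prediction, 2))
```

In practice this is what libraries such as scikit-learn's linear models do under the hood, generalized to many independent variables; decision trees and neural networks trade this closed-form fit for more flexible, iteratively learned models.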
Common Tools
Here are some common tools for recommendation algorithms and predictive analytics:
Python Libraries: Scikit-learn, TensorFlow, PyTorch, Keras, LightFM, Surprise, etc.
R Libraries: Caret, randomForest, rpart, nnet, etc.
SAS: A software suite developed by SAS Institute for advanced analytics, multivariate analyses, business intelligence, data management, and predictive analytics.
SPSS: A software package used for interactive, or batched, statistical analysis.
RapidMiner: A data science software platform that provides an integrated environment for data preparation, machine learning, deep learning, text mining, and predictive analytics.
KNIME: A free and open-source data analytics, reporting, and integration platform.
Weka: A suite of machine learning software applications written in Java.
H2O: Open-source software for data analysis in the area of machine learning.
Tableau: A data visualization tool that provides real-time data insights in a matter of minutes.
Power BI: A business analytics tool developed by Microsoft. It provides interactive visualizations with self-service business intelligence capabilities.
The AI Act: What's Next?
The AI Act is landmark legislation that sets a new standard for AI regulation. It is expected to have a significant impact on the AI industry, both within the EU and globally. As other regions and countries grapple with the challenges of regulating AI, the EU's AI Act could serve as a model for future legislation.
In conclusion, the AI Act represents a significant step forward in regulating AI. It introduces essential safeguards and transparency measures while imposing new obligations on AI providers. As we move into a new era of AI, the AI Act will play a crucial role in shaping the future of this transformative technology.