Beltsys Labs

The European Union Artificial Intelligence Act: “A Regulation for a New Era”

On July 12, 2024, the European Union Artificial Intelligence Act (AIA), Regulation (EU) 2024/1689, was published in the Official Journal. It entered into force on August 1, 2024, with its provisions applying in stages. The regulation sets clear guidelines for using artificial intelligence safely and ethically, and it also aims to drive responsible innovation.

The AIA is the first legislation of its kind in the world. It classifies AI applications into four risk levels: unacceptable, high, limited, and minimal. This classification allows AI to be governed proportionately, with obligations matched to the risk each application poses.

It establishes that certain uses of AI, such as manipulating human behavior or real-time remote biometric identification in publicly accessible spaces, are unacceptable. For high-risk applications, such as those used in critical infrastructure and education, strict rules apply. These include risk assessment, documentation, and transparency.

Fines for non-compliance with the regulation can be enormous, reaching up to 7% of a company’s annual global turnover or 35 million euros, whichever is higher. The EU’s Artificial Intelligence Office oversees the application of these rules, maintaining coordination across all of Europe.

Key Takeaways

  • The European Union Artificial Intelligence Act was published on July 12, 2024, and entered into force on August 1, 2024.
  • The regulation categorizes AI applications according to four risk levels: unacceptable, high, limited, and minimal.
  • The use of AI that manipulates human behavior and real-time facial surveillance in public places are considered unacceptable.
  • High-risk applications include critical infrastructure, education, employment, and essential services.
  • Companies must comply with rigorous requirements such as risk assessment, technical documentation, and transparency.

Introduction to the Artificial Intelligence Act

The European Union Artificial Intelligence Act (AIA) presents a comprehensive regulatory framework for AI. It seeks to balance technological innovation and the protection of fundamental rights. It also addresses the challenges of security and ethics in the development and use of AI.

The European Artificial Intelligence Act (AIA) received final approval from the EU Council on May 21, 2024, clearing the way for its publication in the Official Journal of the European Union.

Proposed in April 2021, the AIA was in development for about three years. This shows how complex and important it is to regulate AI in Europe. The resulting regulation prohibits certain AI uses that can manipulate people or classify them negatively, such as social scoring.

The framework classifies AI applications based on their risk.

  • Unacceptable risk: certain AI practices are prohibited for their negative effects on fundamental freedoms.
  • High risk: systems that could significantly impact health, safety, and fundamental rights.
  • Limited risk: AI with minor impact that needs to demonstrate transparency.
  • Minimal risk: most AI systems only need to follow basic information rules.

The effect of the AI regulation could be as significant as that of the GDPR, underscoring the importance of AI regulation worldwide. Most of its provisions apply from August 2, 2026, positioning the EU as a leader in establishing a safe and ethical path for AI.

Objectives of the AI Act

The Artificial Intelligence Act, Regulation (EU) 2024/1689, was published on July 12, 2024. Its goal is to improve the internal market and promote AI that is safe and human-centric. It seeks to protect safety and people’s fundamental rights while also fostering responsible innovation.

Safety and Fundamental Rights

This regulation classifies AI according to risk, prohibiting things like cognitive manipulation and predictive surveillance. It aims to protect AI safety and our fundamental rights. Additionally, it regulates those who participate in AI in the EU, ensuring extensive protection.

Promoting Responsible Innovation

This regulation drives careful AI innovation, with clear rules for its creation and use in important sectors like healthcare. The AI Office will monitor compliance with these standards. Thus, it seeks to increase the benefits of AI while reducing its risks, generating trust and safety.

Risk Categorization in AI Applications

The new European AI regulation establishes a method for classifying risk. Depending on their level of danger, AI applications fall into four categories: unacceptable, high, limited, and minimal. The rules vary for each category, protecting people’s safety and rights.

Unacceptable Risk

AI systems that pose an unacceptable risk are prohibited. Examples include technology that manipulates how people act, or the use of real-time facial recognition in public places. These practices endanger everyone’s freedom and safety.

High Risk

High-risk applications impact important areas such as health and education. These AI systems must pass many tests. They need to have quality information and clearly explain their operation. Additionally, the use of remote biometric identification is considered high risk and must follow precise European Union rules.

Limited Risk

Limited-risk AIs must be transparent: people need to know when they are interacting with them. This helps people trust these technologies and promotes a responsible use that everyone understands well.

Minimal Risk

Minimal-risk AIs have almost no special restrictions. But managing data well and respecting intellectual property remains key. These AIs can be used freely, such as in video games with AI.

Implications for AI System Developers

The Artificial Intelligence Act (AIA) affects those who develop AI in Europe. It is necessary to follow rigorous standards that ensure ethics and safety. This regulation emphasizes risk assessment and AI documentation.

Risk Assessment

AI creators must carefully evaluate the risks of their systems. These analyses are crucial, especially for high-risk systems. This evaluation involves several stages:

  • Identifying potential threats and vulnerabilities.
  • Applying actions to reduce those risks, following EU guidelines.
  • Ensuring that their developments comply with regulations.

The goal is for AI systems to be safe and not threaten people’s rights. By following the rules, developers avoid large fines, which for the most serious infringements can reach up to 7% of global revenue or 35 million euros.

Documentation and Audits

Detailed documentation is another requirement for developers. They must have files that cover:

  1. How the AI system was designed and how it works.
  2. Summaries of security measures and impact.
  3. Audit reports to verify compliance and performance.

These files are used in periodic audits, keeping everything clear and accountable. Audits promote legality and improve trust in AI.

The AIA seeks an AI design that balances innovation with societal protection. Thus, it ensures that artificial intelligence progress is positive for everyone.

Obligations for Business Users

The EU AI Act defines criteria that business AI users must follow. These criteria ensure the ethical and safe use of technology. We highlight three main areas to focus on and comply with this regulatory framework.

Selection of Compliant Providers

It is essential that business AI users choose their technology providers carefully. They must select those that comply with Regulation (EU) 2024/1689. This selection ensures safe and quality applications. Thus, user welfare and rights are protected.

Oversight and Control

Business AI users must implement strict oversight. Conducting audits regularly and documenting every step of the process is key. This reinforces compliance with standards and fosters trust in technology.

Training and Awareness

Educating the team about AI is a mandate of the new regulation. It is critical that everyone understands the dangers and responsibilities involved. With continuous training, high ethical and safety standards in AI use are maintained.

Supervisory and Control Authority

The EU Artificial Intelligence Office will play a very important role. It will coordinate how AI systems are controlled and supervised in Europe. It will work to ensure that standards are properly applied in all member states. Additionally, it will provide advice and receive reports on AI problems. It will also promote cooperation among agencies at the national and regional levels.

In Spain, Laws 22/2021 and 28/2022 led to the creation of AESIA, the Spanish Agency for the Supervision of Artificial Intelligence. This step made Spain a leader in Europe by establishing a public entity that supervises AI, and a royal decree was quickly issued to launch the agency.

AESIA reports to the Secretary of State for Digitalization and Artificial Intelligence. Its task is to develop and supervise AI projects at the national level and to ensure compliance with European Union requirements. Its projects are guided by the National Artificial Intelligence Strategy. AESIA is headquartered in A Coruña, and it will also work with the Ministry of Defense and leading AI authorities in Europe.

A key part of AESIA’s work is issuing certifications for AI systems that may be risky, ensuring they follow the established rules. It will also promote regulatory sandboxes: controlled environments where, with the participation of private companies, AI can be tested and developed safely. In addition, it will investigate and take measures to reduce AI risks to safety, health, and fundamental rights.

Sanctions Regime

The European Union Artificial Intelligence Act establishes a sanctions regime for AI. Fines of up to 7% of a company’s annual global revenue or 35 million euros, whichever is higher, can be applied, depending on how serious the offense is.

The purpose of these fines is to ensure that companies comply with the laws. They also seek to prevent carelessness with artificial intelligence technology.
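As a rough illustration of how this cap works (a sketch only; the turnover figures below are hypothetical, and the Act applies the higher of the two amounts for the most serious infringements):

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper cap for the most serious AIA infringements:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# Hypothetical company with EUR 1 billion in worldwide turnover:
print(max_fine_eur(1_000_000_000))  # 7% applies: 70 million euros

# Hypothetical company with EUR 100 million in turnover:
print(max_fine_eur(100_000_000))    # the 35-million-euro floor applies
```

For large companies the percentage-based cap dominates, which is what makes the regime dissuasive regardless of company size.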

In Spain, AESIA will be in charge of monitoring how artificial intelligence is used. They will have the authority to apply sanctions. Member states are expected to develop rules describing fines and measures to ensure the regulation is effective.

Additionally, each country in the Union must designate entities for oversight. These entities supervise compliance with the regulation, with the exception of certain AI uses, and may request information from those who provide, use, or import AI systems.

In conclusion, this AI sanctions regime is key to ensuring that artificial intelligence standards are followed in the Union. It helps create a safer and more responsible technological environment.

Implementation Timelines for the Regulation

The European Union Artificial Intelligence Act entered into force on August 1, 2024, a crucial moment for the regulation of AI systems in Europe. Its obligations phase in over the following years, with most provisions fully applicable by August 2, 2026.

Companies in various sectors must prepare for important new deadlines in order to comply with the AI Act. Here are some key dates:

  • August 1, 2024: The Regulation enters into force. This is the start of AIA implementation.
  • February 2, 2025: The prohibitions on unacceptable-risk AI practices begin to apply.
  • August 2, 2025: The obligations for general-purpose AI models and the governance rules take effect.
  • August 2, 2026: Most remaining provisions, including the bulk of the high-risk requirements, become applicable.
  • Article 73: Providers of high-risk AI must report serious incidents to the supervisory authorities.
  • Article 6: The Commission must publish guidelines on classifying high-risk AI within 18 months of entry into force.

There are specific provisions that shape how the AIA is applied. Article 113 sets the staggered application dates, including the earlier ban on prohibited practices. Under Article 70, each member state must designate its national competent authorities, and Article 57 requires them to establish regulatory sandboxes.

The European Commission will carry out periodic evaluations under Article 112, allowing it to revise the list of prohibited practices and adjust the requirements for high-risk AI.

It is a priority for companies to comply with the AI Act before deadlines expire. Compliance with Article 43 on high-risk AI is crucial. This ensures that rules are properly followed.

Finally, the Commission’s delegated powers last five years and can be extended under Article 97. The AI Office’s codes of practice will help AI providers comply with their obligations, keeping the AI supply chain in order.

Conclusion

The European Union has taken a major step in AI regulation in Europe. It was achieved with 523 votes in favor, 46 against, and 49 abstentions. This framework seeks ethical and safe use of artificial intelligence. Its goal is to protect rights and foster innovation.

There are different risk levels in AI, from unacceptable to the least dangerous. This is to protect our European values. High-risk systems require strict evaluations before use.

This regulation involves various actors such as providers and business users. In Spain, it will be the Spanish Agency for the Supervision of Artificial Intelligence that oversees compliance. Thus, the EU leads with a comprehensive approach to AI use.

The AI Act’s obligations apply in stages, between 6 and 36 months after its entry into force. It seeks a future of artificial intelligence that is safe and innovative in Europe, and this regulation is key to maximizing AI’s benefits for everyone.

FAQ

What is the European Union Artificial Intelligence Act (AIA)?

The EU Artificial Intelligence Act (AIA) is a landmark regulation. Its purpose is to guide the safe and ethical development and use of AI. It was published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024, with its obligations phasing in over the following years.

What are the main objectives of the AI Act?

Primarily, the AIA seeks to protect citizens' safety and their basic rights. It also promotes responsible AI innovation. It defines risk control measures for those who develop and use AI systems in business.

How does the AI Act classify artificial intelligence applications?

It divides AI applications into four risk categories: unacceptable, high, limited, and minimal. Each risk level is assigned specific prohibitions and rules.

What must AI developers do to comply with the regulation?

AI creators must thoroughly evaluate risks. Additionally, they must maintain detailed documents and submit their developments to frequent inspections. It is vital that their work follows the ethics and safety the AIA requires.

What obligations do business users of AI systems have under the regulation?

Companies that use AI must choose providers committed to the AIA. They need to oversee AI implementation and educate their employees about the dangers and rules of AI use in the workplace.

What is the role of the EU Artificial Intelligence Office?

The EU Artificial Intelligence Office is responsible for coordinating the regulation and oversight of AI systems. It provides advice and serves as a central hub for incident reports. It seeks effective and uniform implementation of the regulation across all member states.

What types of sanctions can companies face for non-compliance with the AIA?

Companies that violate the AIA could receive considerable fines. These can reach up to 7% of their annual global revenue or 35 million euros. These penalties aim to enforce compliance and prevent careless use of AI.

What are the timelines for implementing the AI Act?

Most of the regulation’s provisions apply 24 months after its entry into force, from August 2, 2026. However, there are exceptions: the bans on prohibited practices apply after 6 months, and the rules for general-purpose AI after 12 months. Companies must adjust to these timelines to adapt their AI use.

