The AI Act: Reshaping LegalTech

The EU AI Act and its impact on LegalTech

After a long negotiation period, the European Parliament, the Council, and the EU member states reached a final agreement on the AI Act, and the regulation was formally adopted in 2024. The Act entered into force on 1 August 2024, and its first provisions, the prohibitions on unacceptable-risk systems, apply from February 2025.

The European Union’s AI Act promises a significant impact on various industries, and LegalTech is no exception, given the sector’s rapid growth and close relationship with artificial intelligence. While the Act aims to ensure ethical and trustworthy AI development, its requirements will change how LegalTech companies in Europe operate. This article explores the potential impact of the AI Act on LegalTech and how it might reshape jobs within this growing field.

Understanding Risk Categories in the EU’s AI Act

The AI Act sets out four risk levels for AI systems: unacceptable, high, limited, and minimal (or no) risk. There will be different regulations and requirements for each class.

Unacceptable Risk

Unacceptable risk is the highest level. AI systems in these areas will be prohibited in the EU, given their incompatibility with EU values and fundamental rights. These are applications related to:

  • Subliminal manipulation: changing a person’s behavior without them being aware of it.
  • Exploitation of the vulnerabilities of persons resulting in harmful behavior: this includes vulnerabilities due to social or economic situation, age, or physical or mental ability.
  • Biometric categorization of persons based on sensitive characteristics.
  • General purpose social scoring: using AI systems to rate individuals based on their characteristics, social behavior, and activities, such as online purchases or social media interactions.
  • Real-time remote biometric identification (in public spaces): these biometric identification systems will be banned, including ex-post identification; narrow exceptions can be made for law enforcement with judicial approval and under the Commission’s supervision.
  • Predictive policing: assessing the risk of persons committing a future crime based on personal traits.
  • Scraping facial images: creating or expanding databases with untargeted scraping of facial images available on the internet or from video surveillance footage.

High Risk

High-risk AI systems will be the most heavily regulated systems allowed on the EU market. In essence, this level covers safety components of already regulated products and stand-alone AI systems in specific areas that could negatively affect people’s health and safety, their fundamental rights, or the environment. This classification is the most controversial, as it imposes a significant compliance burden on companies: high-risk systems must meet strict requirements before they can be placed on the market. High-risk areas include:

  • Biometric and biometrics-based systems, such as biometric identification and categorization of persons.
  • Management and operation of critical infrastructure, for example, traffic and energy supply.
  • Education and vocational training, for instance, assessment of students in educational institutions.
  • Employment and workers management, like recruitment, performance evaluation, or task allocation.
  • Access to essential private and public services and benefits.
  • Law enforcement, e.g. evaluating the reliability of evidence or crime analytics.
  • Migration, asylum, and border control management such as assessing the security risk of a person or the examination of applications for asylum, visa, or residence permits.
  • Administration of justice and democratic processes.

Limited Risk

The third level is limited risk, which covers AI systems that carry a risk of manipulation or deceit. Systems in this category must be transparent: humans must be informed that they are interacting with an AI (unless this is obvious), and deep fakes must be labeled as such. Chatbots, for example, fall under limited risk. This is especially relevant for generative AI systems and their content.

Minimal (or No) Risk

The lowest level is minimal (or no) risk, which covers all AI systems that do not fall under the categories above, such as spam filters. These systems face no restrictions or mandatory obligations, although providers are encouraged to follow general principles such as human oversight, non-discrimination, and fairness.
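To make the four-tier structure concrete, here is a purely illustrative sketch (not legal advice; the use-case mapping is invented for illustration and is not taken from the Act’s annexes) of how a compliance tool might model the tiers described above:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited in the EU
    HIGH = "high"                  # strict requirements before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no mandatory obligations

# Illustrative, non-exhaustive mapping of use cases to tiers,
# loosely following the categories described in this article.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "predictive_policing": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "exam_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known use case; unknown use cases need a
    real legal assessment, so we refuse to guess a default."""
    try:
        return USE_CASE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"use case {use_case!r} needs a legal assessment")

print(classify("chatbot").value)  # prints "limited"
```

The deliberate refusal to default unknown use cases to minimal risk mirrors the point made above: classification drives obligations, so it should be an explicit decision, not a fallback.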

The AI Act’s effect on LegalTech workflows

The AI Act categorizes AI systems by the potential harm they could cause, pushing LegalTech vendors to design “low-risk” AI tools for specific activities. For example, an AI contract review system might examine only certain clauses or flag potential issues rather than make decisions. This reduces bias and makes compliance with the Act easier. The Act also encourages developers to create explainable AI models, which builds trust with users while adhering to its rules.

Data is critical for AI, and the AI Act requires effective data governance. LegalTech companies must have solid data procedures in place to train their AI models on high-quality, representative datasets. By doing so, LegalTech firms can ensure that their models are fair and unbiased and that they adhere to the Act’s data requirements.
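As a minimal sketch of what such a data procedure could check before training (the field names and thresholds are invented for illustration; the Act’s actual data-governance duties, such as provenance and bias examination, go far beyond this):

```python
from collections import Counter

def dataset_health(records, label_key="label"):
    """Illustrative pre-training checks: completeness and label balance.
    Returns a small report a compliance reviewer could inspect."""
    missing = sum(1 for r in records if any(v is None for v in r.values()))
    labels = Counter(r[label_key] for r in records if r[label_key] is not None)
    total = sum(labels.values())
    balance = {k: round(v / total, 2) for k, v in labels.items()} if total else {}
    return {
        "rows": len(records),
        "rows_with_missing": missing,
        "label_share": balance,
    }

# Hypothetical contract-clause training sample.
sample = [
    {"clause": "indemnity", "label": "risky"},
    {"clause": "termination", "label": "safe"},
    {"clause": None, "label": "safe"},
]
report = dataset_health(sample)
print(report)
```

Running checks like these before each training run, and keeping the reports, is one simple way to document that datasets were examined for gaps and imbalance.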

The AI Act brings both challenges and opportunities for LegalTech. Companies that focus on low-risk AI, transparency, and solid data governance can ensure their tools work well and meet the new rules. This approach will lead to a better, fairer use of AI in law.

Finding the balance between the AI Act and LegalTech

While the Act presents exciting opportunities for innovation, it also introduces challenges that legal tech companies must overcome. We explore both sides of the coin, highlighting the importance of responsible AI development for building a trustworthy legal ecosystem.

One major concern is the compliance burden the regulations might impose. Smaller LegalTech companies could struggle to adhere to the new requirements, and the cost of implementing robust compliance measures could restrain their growth. Furthermore, the sheer volume of requirements may make it difficult for them to stay current, potentially leading to unintentional non-compliance and severe fines.

Overly rigid restrictions may also stifle the very innovation the Act aims to promote. The legal field thrives on continuous improvement, and LegalTech provides exciting opportunities to streamline procedures, automate routine tasks, and increase access to justice. However, overly cautious regulation may create unneeded barriers to developing and deploying innovative technology. This could result in a stifling climate in which ground-breaking ideas get stuck in red tape, impeding the advancement of the entire legal tech industry.

Post-AI Act Landscape

The Act’s focus on transparency can also benefit the legal industry: AI-powered legal tools can become more interpretable, allowing lawyers to trust their recommendations.

The future of European LegalTech hinges on collaboration. Regulators, developers, and legal professionals must work together to create a regulatory framework that encourages innovation while safeguarding ethical principles.

Standardization of compliance processes can ease the burden on startups. Additionally, fostering a culture of open communication between developers and regulators can ensure that regulations adapt to the rapid evolution of AI.

The AI Act undoubtedly presents challenges for European LegalTech. However, it also creates a unique opportunity to establish Europe as a global leader in responsible AI development. By embracing the Act’s principles and fostering collaboration, European LegalTech can continue to innovate and transform the legal industry.

How does the EU AI Act categorize AI systems?

The AI Act categorizes AI systems into four risk levels: unacceptable, high, limited, and minimal (or no) risk. Each category has different regulations and requirements. Unacceptable risk systems are prohibited, high-risk systems are highly regulated, limited risk systems require transparency, and minimal risk systems have no mandatory obligations.

What are high-risk AI systems, and how are they regulated?

High-risk AI systems are those that could negatively affect people’s health, safety, or fundamental rights, or the environment. They include biometric systems, critical infrastructure management, education assessment tools, employment management, access to essential services, law enforcement analytics, migration control, and justice administration. These systems must meet stringent requirements to be allowed on the EU market.

What impact will the AI Act have on LegalTech companies?

The AI Act will significantly impact LegalTech companies by requiring them to develop low-risk AI tools, ensure transparency, and maintain robust data management practices. LegalTech firms must adhere to the Act’s regulations to avoid penalties and ensure their AI models are fair and unbiased.

What are the challenges and opportunities presented by the AI Act for LegalTech?

While the AI Act offers opportunities for innovation and responsible AI development, it also presents challenges. Compliance with the Act may be burdensome for smaller LegalTech companies, potentially stifling innovation. However, the Act’s emphasis on transparency and collaboration can lead to a more trustworthy and efficient legal ecosystem. LegalTech companies can thrive by focusing on low-risk AI, solid data governance, and collaboration with regulators.

