The AI Act: Reshaping Legaltech

EU AI Act and its impact on LegalTech

After a long negotiation period, the European Commission, the Council, the Parliament, and the EU member states reached a final agreement in early 2024. The EU AI Act is therefore expected to enter into force in mid-2024, with its first provisions becoming enforceable in late 2024.

The European Union’s AI Act promises a significant impact on various industries, and LegalTech is no exception, given the sector’s rapid advancement, expected growth, and close relationship with artificial intelligence. While the Act aims to ensure ethical and trustworthy AI development, its regulations will likely change how LegalTech companies in Europe operate. This article explores the potential impact of the AI Act on LegalTech and how it might reshape jobs within this growing field.

Understanding Risk Categories in the EU’s AI Act

The AI Act sets out four risk levels for AI systems: unacceptable, high, limited, and minimal (or no) risk. There will be different regulations and requirements for each class.

Unacceptable Risk

Unacceptable risk is the highest level of risk. AI systems in these areas will be prohibited in the EU given their incompatibility with EU values and fundamental rights. These are applications related to:

  • Subliminal manipulation: changing a person’s behavior without them being aware of it.
  • Exploitation of the vulnerabilities of persons resulting in harmful behavior: vulnerabilities include a person’s social or economic situation, age, and physical or mental ability.
  • Biometric categorization of persons based on sensitive characteristics.
  • General purpose social scoring: using AI systems to rate individuals based on their characteristics, social behavior, and activities, such as online purchases or social media interactions.
  • Real-time remote biometric identification in public spaces: such biometric identification systems will be banned, including ex-post identification, with exceptions for law enforcement subject to judicial approval and the Commission’s supervision.
  • Predictive policing: assessing the risk of persons committing a future crime based on personal traits.
  • Scraping facial images: creating or expanding databases with untargeted scraping of facial images available on the internet or from video surveillance footage.

High Risk

High-risk AI systems will be the most heavily regulated systems allowed on the EU market. In essence, this level covers safety components of already regulated products and stand-alone AI systems in specific areas that could negatively affect people’s health and safety, their fundamental rights, or the environment. This classification is the most controversial, as it imposes a significant burden on companies: before such systems can be put on the market, they must meet strict requirements. The high-risk areas include:

  • Biometric and biometrics-based systems, such as biometric identification and categorization of persons.
  • Management and operation of critical infrastructure, for example, traffic and energy supply.
  • Education and vocational training, for instance, assessment of students in educational institutions.
  • Employment and workers management, like recruitment, performance evaluation, or task allocation.
  • Access to essential private and public services and benefits.
  • Law enforcement, e.g. evaluating the reliability of evidence or crime analytics.
  • Migration, asylum, and border control management such as assessing the security risk of a person or the examination of applications for asylum, visa, or residence permits.
  • Administration of justice and democratic processes.

Limited Risk

The third level is limited risk, which covers AI systems carrying a risk of manipulation or deceit. AI systems in this category must be transparent: humans must be informed that they are interacting with an AI (unless this is obvious), and any deepfakes must be labeled as such. Chatbots, for example, are classified as limited risk. This is especially relevant for generative AI systems and the content they produce.

Minimal (or No) Risk

The lowest level of risk is minimal (or no) risk. This level covers all other AI systems that do not fall under the above categories, such as spam filters. AI systems posing minimal risk face no restrictions or mandatory obligations, although providers are encouraged to follow general principles such as human oversight, non-discrimination, and fairness.
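The four tiers above can be sketched as a simple lookup table. The tier names and their broad consequences follow the Act’s categories as described in this article, but the example systems and the mapping itself are purely illustrative assumptions, not a legal determination of any real product’s status.

```python
# Illustrative sketch of the AI Act's four risk tiers.
# The tier names come from the Act; the example mapping below is
# hypothetical and is no substitute for a legal assessment.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited in the EU"
    HIGH = "allowed, subject to strict requirements"
    LIMITED = "allowed, subject to transparency obligations"
    MINIMAL = "allowed, no mandatory obligations"

# Hypothetical examples drawn from the categories discussed above.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "exam assessment tool": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(system: str) -> RiskTier:
    """Look up an example system's risk tier (illustrative only)."""
    return EXAMPLE_SYSTEMS[system]

print(classify("spam filter").value)  # prints "allowed, no mandatory obligations"
```

The point of the sketch is that obligations attach to the tier, not the technology: the same underlying model could land in different tiers depending on its intended use.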

The AI Act’s effect on LegalTech workflows

The AI Act categorizes AI systems based on the potential harm they could cause, pushing LegalTech vendors to develop “low-risk” AI tools scoped to specific activities. For example, an AI contract review system might examine only certain clauses or flag potential issues rather than make decisions. This reduces bias and makes compliance with the Act easier. The Act also encourages developers to create explainable AI models, which builds trust with users while adhering to its rules.

Data is critical for AI, and the AI Act requires effective data management. LegalTech companies must have solid data procedures in place to train their AI models on high-quality, fair datasets. By doing so, LegalTech firms can ensure that their AI models are fair and unbiased and that they comply with the Act’s data governance requirements.
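A data procedure of the kind described above might start with a simple pre-training audit. The sketch below is a minimal, assumed example: the record fields (`outcome`, `region`) and the dominance threshold are hypothetical choices for illustration, not requirements prescribed by the AI Act.

```python
# Minimal sketch of a pre-training dataset audit, assuming records with
# a hypothetical "outcome" label and "region" attribute. The 80% share
# threshold is an illustrative choice, not an AI Act requirement.
from collections import Counter

def audit_dataset(records, group_key="region", max_share=0.8):
    """Flag missing fields and crude group imbalance before training."""
    issues = []
    for i, rec in enumerate(records):
        if any(v is None for v in rec.values()):
            issues.append(f"record {i}: missing value")
    shares = Counter(rec[group_key] for rec in records if rec[group_key])
    total = sum(shares.values())
    for group, count in shares.items():
        if count / total > max_share:
            issues.append(f"group '{group}' dominates ({count}/{total})")
    return issues

data = [
    {"text": "clause A", "outcome": "ok", "region": "DE"},
    {"text": "clause B", "outcome": "ok", "region": "DE"},
    {"text": "clause C", "outcome": None, "region": "FR"},
]
print(audit_dataset(data))  # prints ["record 2: missing value"]
```

Checks like these do not make a dataset “fair” on their own, but they make data quality auditable, which is the spirit of the Act’s governance obligations.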

The AI Act brings both challenges and opportunities for LegalTech. Companies that adapt by focusing on low-risk AI, transparency, and solid data governance can ensure their tools both work well and meet the new rules. This careful approach to innovation will lead to better, fairer use of AI in law.

Finding the balance between the AI Act and LegalTech

While the Act presents exciting opportunities for innovation, it also introduces challenges that legal tech companies must overcome. We explore both sides of the coin, highlighting the importance of responsible AI development for building a trustworthy legal ecosystem.

One major concern lies in the compliance burden the regulations might impose. Smaller LegalTech companies could struggle to adhere to the new requirements, and the cost of implementing robust compliance measures could restrain their growth. Furthermore, the sheer volume of requirements may make it difficult for them to stay current, potentially leading to unintentional non-compliance and severe fines.

Overly rigid restrictions may also stifle the very innovation the Act aims to promote. The legal field thrives on continuous improvement, and LegalTech provides exciting opportunities to streamline procedures, automate routine tasks, and increase access to justice. However, overly cautious regulation may create unneeded barriers to developing and deploying innovative technology. This could result in a suffocating climate in which ground-breaking ideas get stuck in red tape, impeding the advancement of the entire legal tech industry.

Post-AI Act Landscape

The Act’s focus on transparency can also benefit the legal industry. AI-powered legal tools can become more interpretable, allowing lawyers to trust in their recommendations.

The future of European LegalTech hinges on collaboration. Regulators, developers, and legal professionals must work together to create a regulatory framework that encourages innovation while safeguarding ethical principles.

Standardization of compliance processes can ease the burden on startups. Additionally, fostering a culture of open communication between developers and regulators can ensure that regulations adapt to the rapid evolution of AI.

The AI Act undoubtedly presents challenges for European LegalTech. However, it also creates a unique opportunity to establish Europe as a global leader in responsible AI development. By embracing the Act’s principles and fostering collaboration, European LegalTech can continue to innovate and transform the legal industry.

