Artificial Intelligence is radically reshaping our societies and economies, and further transformations are expected in the coming years. The AI market continues to grow, alongside the use of AI technologies in industrial and commercial applications.

Nevertheless, several surveys show that most customers are concerned about misinformation from AI tools. Moreover, only a small share of respondents believe they can tell the difference between content written by a human and content generated by automated chatbots.

Given this context, policy makers have a duty to minimise the risks that AI poses through smart regulation, while preserving the many benefits AI brings. Today, the European Parliament took a major step towards AI regulation by formally voting to adopt the EU AI Act, the world’s first and most comprehensive legal framework on AI. The broad consensus in the European Parliament underlines the importance of, and the need for, this regulation as a step towards truly responsible AI, something BIP xTech has actively promoted for a long time.

The main purpose of the EU AI Act is to ensure a high level of protection for health, safety and the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection. Following these principles, the AI Act seeks to establish harmonised rules for the development, marketing and use of AI systems in the Union through a proportionate, risk-based approach.

To this end, the regulation classifies AI systems according to their associated level of risk:

  • Unacceptable-risk AI systems are prohibited – such systems must be removed from the market;
  • High-risk AI systems are regulated – companies have to comply with obligations regarding human oversight, transparency, data governance, risk management, technical documentation, record keeping, accuracy and cybersecurity;
  • Limited risk AI systems are subject to lighter transparency obligations;
  • Minimal-risk AI systems carry only minor obligations (e.g. adherence to voluntary codes of conduct).

The EU AI Act also introduces obligations for providers of General Purpose AI (GPAI) systems, whether these are used directly or integrated into other AI systems. Some GPAI systems are considered capable of posing systemic risks and are therefore subject to stricter obligations.

The obligations fall on providers and operators of AI systems, who are required to comply with the regulation promptly: building an AI Governance framework, cataloguing and classifying all their AI systems, and defining controls, metrics and remediation plans.
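As a rough illustration of what such cataloguing and classification might look like in practice, the minimal sketch below records a hypothetical AI system in an internal inventory, assigns it to one of the Act's risk tiers and looks up indicative obligations per tier. The class names, fields and obligation lists are illustrative assumptions, not a legal mapping of the regulation.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers mirroring the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strictly regulated
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # voluntary codes of conduct


# Hypothetical, non-exhaustive mapping of tiers to headline obligations.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["withdraw the system from use"],
    RiskTier.HIGH: [
        "human oversight", "transparency", "data governance",
        "risk management", "technical documentation",
        "record keeping", "accuracy and cybersecurity",
    ],
    RiskTier.LIMITED: ["lighter transparency obligations"],
    RiskTier.MINIMAL: ["voluntary code of conduct"],
}


@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (illustrative only)."""
    name: str
    owner: str
    purpose: str
    risk_tier: RiskTier
    remediation_plan: list[str] = field(default_factory=list)

    def required_obligations(self) -> list[str]:
        # Look up the indicative obligations attached to this system's tier.
        return OBLIGATIONS[self.risk_tier]


# Example: catalogue a hypothetical CV-screening tool as high risk.
cv_screener = AISystemRecord(
    name="cv-screening-model",
    owner="HR Analytics",
    purpose="Ranking job applications",
    risk_tier=RiskTier.HIGH,
)
print(cv_screener.required_obligations())
```

In a real governance framework this inventory would of course be richer (controls, metrics, review dates), but the core idea is the same: every AI system is recorded, classified and linked to the obligations and remediation actions that follow from its risk tier.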

A non-compliant company could face fines of up to 35 million euros or 7% of its total worldwide annual turnover for the previous financial year, whichever is higher.

The AI Act is expected to enter into force by early summer 2024. The obligations it imposes will then begin to apply after the following transition periods (see the indicative sketch after this list):

  • 6 months for prohibited AI systems;
  • 12 months for GPAI;
  • 24-36 months for high-risk AI systems (depending on the purpose of the system).
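The short sketch below turns these transition periods into indicative compliance deadlines. The entry-into-force date used here is a placeholder assumption; the actual date depends on publication in the Official Journal of the European Union.

```python
from datetime import date


def add_months(d: date, months: int) -> date:
    """Add whole months to a date (day fixed to the 1st for simplicity)."""
    year = d.year + (d.month - 1 + months) // 12
    month = (d.month - 1 + months) % 12 + 1
    return date(year, month, 1)


# Placeholder entry-into-force date (assumption, not the official date).
ENTRY_INTO_FORCE = date(2024, 7, 1)

# Transition periods, in months, per category as summarised above.
TRANSITION_MONTHS = {
    "prohibited AI systems": 6,
    "general-purpose AI (GPAI)": 12,
    "high-risk AI systems": (24, 36),  # depending on the system's purpose
}

for category, months in TRANSITION_MONTHS.items():
    if isinstance(months, tuple):
        start, end = (add_months(ENTRY_INTO_FORCE, m) for m in months)
        print(f"{category}: between {start} and {end}")
    else:
        print(f"{category}: from {add_months(ENTRY_INTO_FORCE, months)}")
```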

For this reason, companies affected by this regulation should act as soon as possible to assess its impact and implications for their operations.

BIP xTech, with its decade-long experience in developing AI systems and making them responsible, trustworthy, safe and compliant with emerging regulations, can help Clients understand all the implications of the EU AI Act and rapidly guide them towards compliance.

Wondering if your organisation is impacted by the EU AI Act? Take the quick survey below (approx. 5 minutes) and do not hesitate to contact us for more information.


How can we help you?

Get in touch with BIP xTech experts and find out more