Exploring the AI Act

On the 1st of August 2024, the European Union regulation on the use of artificial intelligence (AI), the AI Act, entered into force¹. The regulation aims to promote the uptake of human-centric and trustworthy AI.

The regulation covers in more depth high-risk and general-purpose AI systems, such as those provided by OpenAI, Mistral, Anthropic, Google, and Microsoft, or other generic Large Language Models.

What counts as high-risk can be determined by the degree of human involvement in the process, the application's impact on human rights, and the depth of use in critical (national) infrastructure (biometric identification, emotion recognition, evaluation of natural persons, access to essential services and benefits, cableway installations, personal protective equipment, medical devices, civil aviation, vehicles, marine, rail, ...²).

The more a human is in control of the process, i.e., revising and validating the output and taking the final decision, the less risky the system is considered.
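
As a rough illustration (the flow and names below are my own, not prescribed by the regulation), keeping a human in control can be as simple as gating every AI draft behind an explicit approval step:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated draft awaiting human review."""
    content: str
    approved: bool = False

def human_review(draft: Draft) -> Draft:
    """A human revises and validates the output before any decision is taken."""
    print(f"AI draft:\n{draft.content}\n")
    answer = input("Approve this output? [y/N] ")
    draft.approved = answer.strip().lower() == "y"
    return draft

def finalize(draft: Draft) -> str:
    """Only human-approved output leaves the pipeline."""
    if not draft.approved:
        raise ValueError("Rejected: a human must take the final decision.")
    return draft.content
```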

Let's focus on consumers and users of AI (e.g., classification, recommendation, extraction, ...). If you are creating a Large Language Model (like the ones from OpenAI, Google, ...), you must read the entire AI Act carefully.

I'm using AI created by third parties

When you use AI services hosted by third parties, you are subject to transparency requirements, while the provider must ensure that you receive instructions for use of the provided AI models (e.g., the GPT-4o System Card). Imagine that you pay for an OpenAI subscription to allow your employees to summarize text. OpenAI is then responsible for providing all the information needed for a correct use of their models, while you (the company) are responsible for telling your employees that summaries are generated using AI techniques and should be double-checked before use. From the organizational perspective, you should probably think about establishing processes around what is and is not safe to do with the AI, based on the instructions given by the provider.
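
As a sketch of what such a process could look like in code (the task names and the allow-list are hypothetical and would be derived from whatever the provider's instructions permit):

```python
# Hypothetical allow-list derived from the provider's instructions of use.
APPROVED_USES = {"summarization", "translation", "classification"}

def check_policy(task: str) -> None:
    """Block tasks that fall outside the documented, approved uses."""
    if task not in APPROVED_USES:
        raise PermissionError(
            f"Task '{task}' is not covered by the provider's instructions of use."
        )

check_policy("summarization")    # allowed
# check_policy("medical-triage") # would raise PermissionError
```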

Ask the provider

  • information on the purpose of the AI model(s), the level of accuracy and robustness, as well as information to enable you to interpret the output of the AI system and use it appropriately;
  • if applicable, how to self-host the models, the computational and hardware resources needed, and any necessary maintenance and care measures;
  • where relevant, a description of the mechanisms included within the AI system that allow you to properly collect, store and interpret the logs (a structured way to record these answers is sketched below).
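
One way to keep track of the provider's answers is a small structured record. A sketch with illustrative field names (none of them mandated by the Act):

```python
from dataclasses import dataclass

@dataclass
class ProviderDisclosure:
    """Information collected from an AI provider (illustrative fields)."""
    purpose: str                            # what the model is intended for
    accuracy_and_robustness: str            # metrics and known limitations
    interpretation_guidance: str            # how to read and use the output
    self_hosting: str | None = None         # compute/hardware needs, if applicable
    logging_mechanisms: str | None = None   # how logs are collected and stored

gpt4o = ProviderDisclosure(
    purpose="General-purpose text generation and summarization",
    accuracy_and_robustness="See the GPT-4o System Card",
    interpretation_guidance="Outputs may be inaccurate; review before use.",
)
```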

Provide to end-users

  • Ensure that the AI output is labelled appropriately to state that it was generated by AI (see the sketch below).
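
A minimal sketch of such labelling, assuming the official OpenAI Python SDK and a gpt-4o summarization call (the disclosure wording is my own):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_with_label(text: str) -> str:
    """Summarize text and clearly mark the result as AI-generated."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Summarize the following text:\n\n{text}"}],
    )
    summary = response.choices[0].message.content
    # Transparency requirement: label the output as coming from AI.
    return f"{summary}\n\n[AI-generated summary. Double-check before use.]"
```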

I'm creating or re-selling AI

When you create AI models or offer products that use AI models (e.g., OpenAI GPT models), you are considered a provider. Providers are subject to both transparency and instructions-for-use disclosure. From the transparency perspective, your users must be informed when AI is used to handle their requests. This time it is your turn to provide instructions on what those models are best suited for and the kinds of interactions they understand.

While the AI Act focuses on high-risk systems (e.g., large language models), some of the recommendations apply to any kind of artificial intelligence (or machine learning) model.

Provide to users

  • the intended purpose of the AI model(s): what are they used for? (e.g., classifying text, extracting information, generating text, ...);
  • the level of accuracy, including its metrics, robustness and cybersecurity;
  • information to enable users to interpret the output of the AI system and use it appropriately;
  • when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used (useful to check for possible bias);
  • a description of the mechanisms included within the AI system that allow deployers to properly collect, store and interpret the logs;
  • if self-hosting is possible, the computational and hardware resources needed, and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates (a machine-readable example follows this list).
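
One way to publish these items in machine-readable form is a small model card. The structure and all values below are purely illustrative:

```python
import json

# Illustrative model card covering the disclosure items listed above.
model_card = {
    "intended_purpose": ["text classification", "information extraction"],
    "accuracy": {"f1_macro": 0.91, "evaluated_on": "held-out test set"},
    "robustness_and_cybersecurity": "No adversarial-input guarantees; API is rate-limited.",
    "output_interpretation": "Scores are probabilities; treat values below 0.6 as uncertain.",
    "input_specification": "UTF-8 text, up to 4096 tokens",
    "training_data": "Public web corpus snapshot; see datasheet for bias analysis.",
    "logging": "Requests/responses retained 30 days; exportable as JSONL.",
    "self_hosting": {"gpu": "1x 24 GB", "maintenance": "monthly model updates"},
}

print(json.dumps(model_card, indent=2))
```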

Key takeaways

Prompt: The file contains a blog post on the European Regulation on Artificial Intelligence. From what is written, derive some conclusions to add to the blog post in a special "Key Takeaways" section.

Claude: Certainly. Based on the content of the blog post, I'll derive some key takeaways to add to the article. Here's a "Key Takeaways" section we can add: [...]

Prompt: can you make it shorter

Claude: Certainly, I'll create a more concise version of the key takeaways.

  1. The AI Act entered into force on August 1, 2024, with a phased implementation through 2026.
  2. It focuses on high-risk AI systems, determined by human involvement and application in critical areas.
  3. Users of third-party AI must ensure transparency and establish safe-use processes.
  4. AI providers must offer transparency and usage instructions, including the model's purpose and accuracy.
  5. All AI outputs should be appropriately labeled as AI-generated.
  6. The Act promotes human-centric, trustworthy AI with humans maintaining control.
  7. While focused on high-risk systems, many recommendations apply to all AI models.
  8. Organizations should stay informed about compliance timelines and consult legal experts as needed.

The key takeaways section was generated using Anthropic Claude 3.5 Sonnet³ via the chat interface.

This article does not constitute legal advice


  1. Although the regulation entered into force on the 1st of August 2024, some of the articles and requirements only apply from 2025 and 2026; see Article 113. This does not constitute legal advice; check with your experts which articles apply to you. ↩︎

  2. More examples can be found in Article 6, Annex I and Annex III, and in general in Chapter III. ↩︎

  3. Anthropic Claude 3 model card and system prompt ↩︎