The world of AI is constantly evolving, and the EU AI Act acknowledges this by establishing a framework for General Purpose AI (GPAI).
In this blog, we'll delve into this unique category and explore how providers of these products can navigate their development within the Act's framework.
What is General Purpose AI?
GPAI refers to AI models, and systems built on those models, that are designed to perform a wide range of tasks and can be used across many different applications. These tools, like Large Language Models (LLMs) or advanced robotics, hold immense potential but also raise unique challenges in terms of regulation.
Generative AI products fall into this category under the AI Act.
Because these systems are so widely used across society, the European Commission has singled them out, setting specific responsibilities for their makers (Providers).
The Challenge of Regulating the Unforeseen: Responsibilities for Providers of GPAI Models
The EU AI Act adopts a risk-based approach, but GPAI presents a unique challenge. Its broad applicability makes it difficult to pre-emptively assess all potential risks for every future use case.
As a result, the Act adopts a two-tiered approach to categorising GPAI models, determining the level of scrutiny based on whether:
the model has high-impact capabilities with a foreseeable negative impact on public health, safety, public security, fundamental rights, or society as a whole
the cumulative amount of compute used for its training exceeds 10^25 floating-point operations (FLOPs)
If either criterion is met, the GPAI model is classified as a "GPAI model with systemic risk" and is subject to rigorous evaluation before its use in the EU can be approved.
When neither criterion is met, the GPAI model carries fewer obligations under the Act.
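To make these criteria concrete, here is a minimal Python sketch of the two-tier check. The 10^25 FLOPs threshold comes from the Act itself; the "6 × parameters × tokens" estimate of training compute is a widely used rule of thumb, not part of the Act, and the qualitative impact criterion is reduced here to a simple flag, though in practice it is a regulatory judgement.

```python
# Illustrative sketch of the two-tier GPAI classification.
# The 1e25 FLOPs threshold is taken from the AI Act; the 6 * params * tokens
# estimate of training compute is a common rule of thumb, NOT part of the Act.

SYSTEMIC_RISK_FLOPS = 1e25  # training-compute threshold named in the AI Act

def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training compute: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_tokens

def has_systemic_risk(training_flops: float, high_impact_capabilities: bool) -> bool:
    """A model is classified as systemic risk if either criterion is met.
    `high_impact_capabilities` stands in for the Act's qualitative test
    (foreseeable impact on health, safety, rights, society), which is a
    regulatory judgement rather than something you can compute."""
    return training_flops > SYSTEMIC_RISK_FLOPS or high_impact_capabilities

# Example: a hypothetical 70B-parameter model trained on 15 trillion tokens
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs, under the threshold
print(f"{flops:.1e} FLOPs -> systemic risk: {has_systemic_risk(flops, False)}")
```

Note how close a large frontier-scale training run can come to the threshold: the hypothetical model above lands at roughly 6.3 × 10^24 FLOPs, just below the line, so providers should estimate their training compute early rather than discovering their classification after the fact.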
There are a number of requirements that must be met if you are a provider or user of a high-risk AI system. These are outlined in the Annexes to the AI Act (found here), which may be updated over time.
Requirements for all providers of GPAI models:
Draw up technical documentation, including the training and testing process and evaluation results.
Provide information and documentation to downstream providers that intend to integrate the GPAI model into their own AI systems, so that they understand its capabilities and limitations and can meet their own obligations under the Act.
Establish a policy to respect the Copyright Directive.
Publish a sufficiently detailed summary about the content used for training the GPAI model.
Additional requirements for providers of GPAI models with systemic risk:
Perform model evaluations, including conducting and documenting adversarial testing to identify and mitigate systemic risk.
Assess and mitigate possible systemic risks, including their sources.
Track, document and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay.
Ensure an adequate level of cybersecurity protection.
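As a rough illustration of how a provider might track these obligations internally, here is a small sketch. The field names are our own shorthand for the items listed above, not legal terminology from the Act, and a real compliance process would sit well beyond a simple checklist.

```python
from dataclasses import dataclass

@dataclass
class GPAIComplianceChecklist:
    """Illustrative internal tracker for the provider obligations above.
    Field names are our own shorthand, not terms from the Act."""
    # Obligations for all GPAI providers
    technical_documentation: bool = False   # training/testing process, evaluation results
    downstream_documentation: bool = False  # capabilities & limitations for integrators
    copyright_policy: bool = False          # policy respecting the Copyright Directive
    training_content_summary: bool = False  # public summary of training content
    # Additional obligations for systemic-risk models
    systemic_risk: bool = False
    model_evaluations: bool = False         # incl. documented adversarial testing
    risk_mitigation: bool = False           # assess & mitigate systemic risks and sources
    incident_reporting: bool = False        # serious incidents to the AI Office
    cybersecurity: bool = False             # adequate level of protection

    def outstanding(self) -> list[str]:
        """Return the obligations that apply to this model and are not yet met."""
        base = ["technical_documentation", "downstream_documentation",
                "copyright_policy", "training_content_summary"]
        extra = ["model_evaluations", "risk_mitigation",
                 "incident_reporting", "cybersecurity"]
        required = base + (extra if self.systemic_risk else [])
        return [name for name in required if not getattr(self, name)]

# Example: a systemic-risk model with documentation done but evaluations pending
status = GPAIComplianceChecklist(technical_documentation=True,
                                 downstream_documentation=True,
                                 systemic_risk=True)
print(status.outstanding())
```

The design point this sketch makes is simply that the systemic-risk classification adds obligations on top of the baseline set; it never removes any.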
Remember: Compliance isn't just about avoiding penalties; it's about building trust and contributing to a responsible AI future. Continuously adapt your development practices to ensure your GPAI models and systems operate ethically and responsibly.
Next Up:
Blog 6: Using General Purpose AI in your Business: Understand how to successfully deploy GPAI tools across your day-to-day operations.
----
Throughout this journey, remember: compliance doesn't have to be a burden. By understanding the AI Act and implementing its principles, you can harness the immense potential of AI while fostering trust and ensuring responsible innovation.
If you need help understanding AI compliance in your business, or with anything at all to do with your IT organisation, reach out to us here at CB Navigate.
What are your thoughts? We'd love to hear your feedback, or if you'd like to discuss this topic and others with us in more detail, please reach out to us at info@cbnavigate.com