
Navigating the EU's AI Act: 4 - Limited-Risk AI Systems


Interacting with an AI Chatbot

Not all AI systems are created equal, and the EU AI Act recognizes this.


In this blog, we'll focus on limited-risk AI systems, exploring the lighter regulatory touch they receive compared to their high-risk counterparts, while also outlining the responsibilities that still apply to developers and deployers (users).



A Lighter Touch: What It Means


Limited-risk AI systems, such as chatbots, spam filters, basic image recognition tools, and other tools that interact with your customers or are used day to day in the business, generally pose minimal risks to fundamental rights and safety. As such, the Act adopts a less stringent approach, allowing for faster and easier deployment.


However, this doesn't mean there are no obligations on your business. As these systems become more commonplace, it is important to remember your responsibilities when using such tools.


Key Responsibilities for Developers and Deployers:


  • Risk Management: Conduct a risk assessment to identify and mitigate potential risks associated with your system, even if they are considered limited. This demonstrates awareness and proactive responsibility.

  • Transparency: Ensure users are aware they are interacting with an AI system. This fosters trust and allows users to make informed decisions. For example, customers must be told when they are interacting with a machine, and AI-generated content must be labelled as such (see the short sketch after this list).

  • Record-Keeping: Maintain adequate records of how your AI system works and the data it uses. This can be crucial for addressing any potential issues or concerns in the future.

  • Post-Market Monitoring: Implement mechanisms to monitor your system's performance and address any unexpected issues that could arise after deployment. This shows a commitment to continuous improvement.
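
To make the transparency and record-keeping points above more concrete, here is a minimal sketch in Python of how a customer-facing chatbot might disclose that it is an AI system, label its generated content, and keep a simple record of each interaction. The disclosure wording, function names, and log format are illustrative assumptions of ours, not requirements taken from the Act.

  # A minimal sketch, not a prescribed implementation. The disclosure text,
  # function names and log format below are illustrative assumptions only.
  import json
  from datetime import datetime, timezone

  AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

  def label_ai_content(text: str) -> str:
      """Attach a simple label so readers know the content is AI-generated."""
      return f"{text}\n\n[This content was generated by an AI system.]"

  def log_interaction(user_message: str, ai_response: str,
                      logfile: str = "ai_interactions.jsonl") -> None:
      """Append a basic record of each interaction for later review."""
      record = {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "user_message": user_message,
          "ai_response": ai_response,
      }
      with open(logfile, "a", encoding="utf-8") as f:
          f.write(json.dumps(record) + "\n")

  if __name__ == "__main__":
      print(AI_DISCLOSURE)                        # transparency: disclose the AI up front
      user_message = "What are your opening hours?"
      ai_response = "We are open 9am to 5pm, Monday to Friday."  # placeholder reply
      print(label_ai_content(ai_response))        # transparency: label generated content
      log_interaction(user_message, ai_response)  # record-keeping: keep a simple audit trail

Even a lightweight mechanism like this, if applied consistently, gives you evidence of transparency and an audit trail you can draw on for post-market monitoring.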



Beyond Compliance: Building Trust and Responsible Use


While the regulations are lighter, it's essential to remember that ethical considerations still apply:


  • Bias mitigation: Even low-risk systems can perpetuate bias if not carefully developed. Implement measures to identify and mitigate potential biases in your AI.

  • Data protection: Ensure you comply with relevant data protection regulations (like the General Data Protection Regulation) when collecting, storing, and using personal data in your AI system.

  • Responsible use: Consider the potential impact of your AI system on society and deploy it in a way that benefits individuals and contributes to a positive future.



Remember: AI systems carry the same responsibilities as other existing computer systems. Alongside the AI Act, you need to ensure that your use of these tools remains in accordance with existing EU regulations, such as the GDPR.



Next Up:


Blog 5: Building General-Purpose AI Models and Systems: Uncover the unique considerations for this fast-evolving category of AI and how to approach building these tools within the Act's framework.


----


Throughout this journey, remember: compliance doesn't have to be a burden. By understanding the AI Act and implementing its principles, you can harness the immense potential of AI while fostering trust and ensuring responsible innovation.


If you need help understanding AI compliance in your business, or with anything at all to do with your IT organisation, reach out to us here at CB Navigate.


What are your thoughts? We'd love to hear your feedback, or if you'd like to discuss this topic and others with us in more detail, please reach out to us at info@cbnavigate.com.


