The AI Act is grounded in fundamental rights and, at the same time, aims to protect them, requiring that they be safeguarded, alongside the safety requirements it prescribes, throughout the whole lifecycle of AI systems. Based on a risk classification, the AI Act sets out requirements that each risk class must meet for an AI system to be lawfully offered on the EU market and considered safe. However, despite their classification, some minimal-risk AI systems may still pose risks to fundamental rights and user safety, and therefore require attention. In this paper we explore the assumption that, although the AI Act can find broad ex litteris coverage, the significance of this applicability is limited.