In April 2021, the European Union proposed the Artificial Intelligence Act (AI Act), a framework to regulate the use of artificial intelligence (AI) across all sectors. The Act adopts a risk-based approach, classifying AI systems into four categories: low/minimal risk, limited risk, high risk and unacceptable risk.
Organisations that intend to deploy AI must meet a set of mandatory requirements before doing so, and non-compliance is met with hefty penalties.
Read on to assess your risk level using our AI Act Conformity check.
Unacceptable risk
- AI systems that manipulate human behaviour, for example by exploiting people's emotional or cognitive vulnerabilities.
- Systems that enable social scoring or surveillance, including the scoring and ranking of individuals based on their actions, social interactions or beliefs.
- AI systems that produce or circulate deceptive content, such as deepfakes, with the purpose of misleading or causing harm to individuals or society.
These systems are prohibited under the Act.
High risk
- AI systems, such as facial recognition technology, that are employed for biometric identification and classification.
- AI systems used in employment and human resources, including decisions on hiring, advancement or termination.
These systems are subject to stringent legal requirements.
Limited risk
- AI systems that support human decision-making, such as chatbots or virtual personal assistants.
- AI systems that classify individuals using biometric information.
These systems must comply with the transparency obligations outlined in the EU AI Act.
Low/minimal risk
- Recommendation systems that suggest products and content based on user preferences and behaviour.
- AI-enhanced video games, fraud detection systems and spam filters.
These systems are not subject to legal requirements.
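To make the tiered logic above concrete, here is a minimal, purely illustrative Python sketch of how a self-assessment along these lines might map a described use case to one of the four risk tiers. The tier names and example mappings mirror the lists above, but the names (`RiskTier`, `classify`, the keyword sets) are our own illustrative assumptions; this is not the Statworx conformity check, and a real assessment would apply the Act's legal definitions, not keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # prohibited under the Act
    HIGH = "high risk"                  # stringent legal requirements
    LIMITED = "limited risk"            # transparency obligations
    MINIMAL = "low/minimal risk"        # no legal requirements

# Illustrative keyword buckets drawn from the examples above.
UNACCEPTABLE_USES = {"behavioural manipulation", "social scoring", "harmful deepfakes"}
HIGH_RISK_USES = {"biometric identification", "employment decisions"}
LIMITED_RISK_USES = {"chatbot", "virtual assistant", "biometric classification"}

def classify(use_case: str) -> RiskTier:
    """Map a described use case to a risk tier (toy sketch only)."""
    if use_case in UNACCEPTABLE_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL  # e.g. spam filters, game AI, recommenders

if __name__ == "__main__":
    for uc in ("social scoring", "employment decisions", "chatbot", "spam filter"):
        print(f"{uc!r} -> {classify(uc).value}")
```

The point of the sketch is the ordering: a system is checked against the most severe tier first, so a single prohibited purpose dominates any lower-risk characteristics it may also have.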
This test was created in collaboration with Statworx GmbH.