28 June 2025

OECD Classification of AI Systems

The rapid integration of Artificial Intelligence (AI) across diverse sectors necessitates robust frameworks for understanding and governing these complex systems. Recognizing the varied benefits and risks posed by different AI applications – from virtual assistants to self-driving cars – the Organisation for Economic Co-operation and Development (OECD) developed a comprehensive framework for classifying AI systems. This framework, built upon the foundational OECD AI Principles, serves as a crucial tool for policymakers, regulators, and other stakeholders to characterize AI systems, assess their implications, and foster the development of trustworthy and responsible AI.

At its core, the OECD's classification framework aims to provide a common language and a structured approach for evaluating AI systems from a policy perspective. It recognizes that the impact of an AI system depends not only on the technology itself, but also on the specific context in which it operates and the stakeholders it affects. To address this complexity, the framework classifies AI systems along five key dimensions, sketched as a simple data structure after the list:

  1. People & Planet: This dimension considers the direct and indirect impacts of AI systems on individuals, groups, and the environment. It prompts consideration of aspects such as human rights, well-being, privacy, fairness, and potential for displacement or harm. This dimension is deeply connected to the human-centric values embedded in the OECD AI Principles.

  2. Economic Context: This dimension examines the economic sector in which the AI system is deployed, its business function, and its overall scale and maturity. Understanding the economic environment helps assess market implications, competitive landscapes, and the broader societal value generated or impacted by the AI system.

  3. Data & Input: Acknowledging that data is the lifeblood of most AI systems, this dimension focuses on the characteristics of the data used. This includes its provenance, collection methods, dynamic nature, quality, and issues of rights and identifiability (especially for personal data). Biases in data, for instance, can propagate and amplify biases in AI system outputs, making this a critical area of assessment.

  4. AI Model: This dimension delves into the technical characteristics of the AI system itself. It distinguishes among model types (e.g., symbolic AI, machine learning, or hybrid approaches) and considers how the model is built or trained and how it performs inference or is otherwise used. This helps in understanding the underlying mechanisms and potential technical limitations or vulnerabilities.

  5. Task & Output: Finally, this dimension describes what the AI system does and the results it produces. It considers the specific tasks the system performs (e.g., recognition, personalization, automation) and its level of autonomy in performing these actions. The nature of the output, and how it is consumed or acted upon, has direct implications for policy considerations.
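To give the five dimensions a concrete shape, the sketch below represents them as a single assessment record. The class, field names, and enumeration values are illustrative assumptions about how such a record might be organised in Python; they are not an official OECD schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class ModelType(Enum):
    """Illustrative model categories mentioned under the AI Model dimension."""
    SYMBOLIC = "symbolic"
    MACHINE_LEARNING = "machine_learning"
    HYBRID = "hybrid"


@dataclass
class AISystemClassification:
    """Hypothetical record capturing the five OECD dimensions for one AI system.

    Field names and types are assumptions made for illustration only.
    """
    # People & Planet: who and what the system affects
    affected_stakeholders: list[str] = field(default_factory=list)
    human_rights_considerations: list[str] = field(default_factory=list)

    # Economic Context: where and at what scale the system is deployed
    economic_sector: str = ""
    business_function: str = ""

    # Data & Input: characteristics of the data the system consumes
    data_provenance: str = ""
    uses_personal_data: bool = False

    # AI Model: technical characteristics of the underlying model
    model_type: ModelType = ModelType.MACHINE_LEARNING

    # Task & Output: what the system does and how autonomously it acts
    task: str = ""
    autonomy_level: str = "human_in_the_loop"
```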

The OECD framework is designed to be generic yet powerful, allowing for a nuanced and precise policy debate around AI. It helps identify typical AI-related risks such as bias, lack of explainability, and robustness issues. By linking AI system characteristics with the OECD AI Principles, the framework guides the development of technical and procedural measures for implementation. It also serves as a baseline for creating inventories or registries of AI systems, informing sector-specific regulations (e.g., in healthcare or finance), and developing robust risk assessment and incident reporting mechanisms. Ultimately, the OECD’s classification framework is a vital step towards fostering international collaboration and establishing common standards for trustworthy and beneficial AI worldwide.
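Building on the record sketched after the list of dimensions, the fragment below illustrates what a minimal AI system inventory of the kind mentioned above could look like. The registered system, its values, and the query are invented purely for demonstration.

```python
# A minimal, in-memory inventory of classified AI systems, reusing the
# AISystemClassification record sketched above. The system shown is hypothetical.
inventory: dict[str, AISystemClassification] = {}

inventory["triage-assistant-v1"] = AISystemClassification(
    affected_stakeholders=["patients", "clinicians"],
    human_rights_considerations=["privacy", "non-discrimination"],
    economic_sector="healthcare",
    business_function="patient triage support",
    data_provenance="hospital records, collected with consent",
    uses_personal_data=True,
    model_type=ModelType.MACHINE_LEARNING,
    task="recommendation",
    autonomy_level="human_in_the_loop",
)

# A query a regulator or auditor might run against such an inventory,
# e.g. listing systems in a given sector that process personal data.
flagged = [
    name for name, record in inventory.items()
    if record.economic_sector == "healthcare" and record.uses_personal_data
]
print(flagged)  # ['triage-assistant-v1']
```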