Schedule 1
Date of Last Update: 10 May 2025
Schedule 1 – AI Standards Classification and Implementation
Part A: Imperatives Associated with AI Standards and Principles
(1) Pre-Regulatory Imperatives: Pre-regulatory imperatives encompass standards/principles that anticipate future regulatory needs and help shape emerging governance frameworks. These imperatives categorise approaches that guide innovation while ensuring AI systems remain ethical, scalable, and sustainable as new regulations develop. Standards classified under pre-regulatory imperatives reflect proactive governance, establishing frameworks adaptable to evolving regulatory landscapes.
(2) Regulatory Imperatives: Regulatory imperatives classify standards/principles designed to align with existing legal and regulatory requirements. These imperatives characterise frameworks that help organisations comply with current laws and regulations, ensuring AI systems meet necessary legal, ethical, and operational benchmarks. Standards attributable to regulatory imperatives are particularly significant for industries operating in highly regulated environments.
(3) Post-Regulatory Imperatives: Post-regulatory imperatives apply to standards/principles developed to enhance and refine existing regulatory frameworks. They categorise guidelines for implementation and compliance after regulations have been established, ensuring AI systems operate within legal boundaries while optimising performance. Standards within this imperative address gaps in regulatory frameworks and focus on operational excellence within established legal parameters.
(4) Miscellaneous Imperatives: Miscellaneous imperatives characterise standards/principles that apply universally across all stages of governance (pre-regulatory, regulatory, post-regulatory). They encompass foundational frameworks adaptable across different industries, regions, or use cases, addressing cross-cutting concerns such as interoperability, ethical considerations, and stakeholder inclusion. Standards classified under miscellaneous imperatives ensure consistency in AI governance regardless of regulatory stage.
Part B: Imperative Matrix for AI Standards and Principles
AI Standards and Principles must address one or more of the four key imperatives (technical, commercial, legal, and ethical) at the various stages of governance (pre-regulatory, regulatory, post-regulatory). The following table outlines how each imperative applies to different types of standards and principles; an illustrative, non-normative sketch of how such a classification could be recorded follows the table:
Imperative | Post-Regulatory | Regulatory | Pre-Regulatory | Miscellaneous |
Technical | | | | |
Commercial | | | | |
Legal | | | | |
Ethical | | | | |
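By way of illustration only, the following sketch shows one way a standard's position in the imperative matrix could be recorded. The class names, field names, and the example standard are assumptions made for this sketch and are not prescribed by this Schedule.

```python
from dataclasses import dataclass, field
from enum import Enum


class Imperative(Enum):
    TECHNICAL = "technical"
    COMMERCIAL = "commercial"
    LEGAL = "legal"
    ETHICAL = "ethical"


class GovernanceStage(Enum):
    PRE_REGULATORY = "pre-regulatory"
    REGULATORY = "regulatory"
    POST_REGULATORY = "post-regulatory"
    MISCELLANEOUS = "miscellaneous"


@dataclass
class StandardClassification:
    """One standard's position in the imperative matrix of Part B."""
    standard_name: str
    # Each pair marks that the standard addresses a given imperative at a given stage.
    matrix: set = field(default_factory=set)

    def addresses(self, imperative: Imperative, stage: GovernanceStage) -> bool:
        return (imperative, stage) in self.matrix


# Hypothetical example: a standard addressing the technical imperative at the
# pre-regulatory stage and the legal imperative at the regulatory stage.
example = StandardClassification(
    standard_name="Hypothetical Interoperability Standard",
    matrix={
        (Imperative.TECHNICAL, GovernanceStage.PRE_REGULATORY),
        (Imperative.LEGAL, GovernanceStage.REGULATORY),
    },
)
assert example.addresses(Imperative.TECHNICAL, GovernanceStage.PRE_REGULATORY)
```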
Part C: Stakeholder-centricity
(1) For every principle or standard, it must be specified which facet of the standard or principle is attributable to which stakeholder, as per the table below (an illustrative, non-normative sketch of such a mapping follows the table):
Government | Community | Organisations |
Central Government Ministries / Departments | Academic Institutions / Researchers | Large Enterprises |
Regulatory Bodies | Open-Source Communities | Micro, Small & Medium Enterprises |
Policymakers | Technical Standards Bodies | Start-ups |
Public Sector Organisations | Ethics Committees | Technology Providers |
Parliament/Legislative Bodies | Civil Society Organisations | Intergovernmental Organisations |
National Security Agencies | Industry Consortiums | Financial Institutions |
State/Regional Government Ministries / Departments | Professional Associations | Healthcare Providers |
Sectoral Regulators | Data Science Collectives | Educational Institutions |
Local Administrative Bodies | Digital Rights Groups | Media Organisations |
Advisory Councils | Consumer Advocacy Groups | Industry Associations |
Inter-ministerial Committees | Scientific Research Institutes | Sector-specific Technology Firms |
Policy Implementation Agencies | Traditional / Indigenous Knowledge Groups | Development Partners |
AI Safety Institutes | Regional Technical Communities | Telecommunications Providers |
Judicial Institutions | Legal Experts | Digital Service Providers |
Constitutional Bodies | Capacity Building Networks | Non-Profit Organisations |
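By way of illustration only, the following sketch shows one way a facet-to-stakeholder attribution under clause (1) could be recorded. The stakeholder groups and types are drawn (in abridged form) from the table above; the facet names and the mapping structure are assumptions made for this sketch and are not prescribed by this Schedule.

```python
from dataclasses import dataclass

# Stakeholder groups and a few stakeholder types taken from the table above
# (abridged for brevity).
STAKEHOLDER_GROUPS = {
    "Government": {"Regulatory Bodies", "Policymakers", "AI Safety Institutes"},
    "Community": {"Open-Source Communities", "Technical Standards Bodies", "Digital Rights Groups"},
    "Organisations": {"Large Enterprises", "Start-ups", "Technology Providers"},
}


@dataclass
class FacetAttribution:
    """Maps one facet of a principle or standard to a stakeholder type."""
    facet: str         # e.g. "periodic model documentation review" (hypothetical facet)
    group: str         # one of the three groups in the table
    stakeholder: str   # a stakeholder type listed under that group

    def is_valid(self) -> bool:
        # The named stakeholder must belong to the named group.
        return self.stakeholder in STAKEHOLDER_GROUPS.get(self.group, set())


attribution = FacetAttribution(
    facet="periodic model documentation review",  # hypothetical facet
    group="Government",
    stakeholder="Regulatory Bodies",
)
assert attribution.is_valid()
```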
(2) ISAIL.IN's principles are non-binding and for consultative purposes only. They are distinct from Article 28 of the Charter.
Part D: Cross-Cutting Considerations
(1) AI standards and principles may address multiple imperatives simultaneously, ensuring a holistic approach to standardisation.
(2) The classification of standards may evolve based on:
(a) Technological advancements,
(b) Market conditions,
(c) Regulatory developments,
(d) Ethical considerations.
(3) Flexibility in implementation of AI standards is essential to allow for:
(a) Modular design,
(b) Scalable frameworks,
(c) Adaptable mechanisms,
(d) Context-sensitive approaches.
(4) Regular reviews of AI Standards shall be conducted to assess:
(a) Effectiveness of the standard,
(b) Stakeholder feedback,
(c) Emerging challenges,
(d) Necessary updates or revisions.
Part E: Standard Development Essentials
(1) Each standard must be evaluated against:
(a) Governance stage appropriateness (pre-regulatory, regulatory, post-regulatory),
(b) Alignment with technical, commercial, legal, and ethical imperatives,
(c) Feasibility of implementation,
(d) Stakeholder impact.
(2) The development process should include:
(a) Clear scope definitions,
(b) Stakeholder consultation at relevant stages,
(c) Technical validation through expert review,
(d) Policy alignment with existing or emerging regulations.
(3) Documentation shall specify (see the illustrative sketch after this list):
(a) The primary governance stage (pre-regulatory, regulatory, post-regulatory),
(b) Key imperatives addressed (technical, commercial, legal, ethical),
(c) Implementation requirements,
(d) Review mechanisms for periodic updates.
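By way of illustration only, the following sketch shows one way the documentation record described in clause (3) could be structured. The field names and example values are assumptions made for this sketch and are not prescribed by this Schedule.

```python
from dataclasses import dataclass


@dataclass
class StandardDocumentation:
    """Documentation record covering the four items listed in Part E(3)."""
    primary_governance_stage: str   # "pre-regulatory", "regulatory" or "post-regulatory"
    key_imperatives: list           # any of "technical", "commercial", "legal", "ethical"
    implementation_requirements: list
    review_mechanism: str           # how periodic updates or revisions are triggered


# Hypothetical example values.
doc = StandardDocumentation(
    primary_governance_stage="regulatory",
    key_imperatives=["technical", "ethical"],
    implementation_requirements=["conformity self-assessment", "stakeholder consultation log"],
    review_mechanism="annual review incorporating stakeholder feedback",
)
```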