
[ AiStandard.io Alliance ]

The Alliance Charter


Last Updated

10 May 2025

Article 7 | Use Case Classification, Tracking and Risk Analysis

 

(1)   The AI Development Committee shall classify use cases based on the following categories:

(a)   Start-ups: AI initiatives driven by early-stage companies or entrepreneurs.

(b)   MSMEs (Micro, Small, and Medium Enterprises): AI solutions developed by small to medium-sized businesses.

(c)   Research Labs: AI projects emerging from academic or private research institutions.

(d)   Social Enterprises: AI applications aimed at addressing social challenges or promoting public welfare.

(e)   Developer Community: Contributions from open-source developers or independent AI practitioners.

 

(2)   The Policy Innovation Committee shall classify use cases based on the following problem types:

 

(a)   Regulatory Problems: Issues related to existing or emerging regulatory frameworks.

(b)   Adjudicatory Problems: Issues arising from legal disputes or challenges involving AI systems.

(c)   Policy Problems:

(i)    Commercial Viability Problems: Concerns about the market feasibility of AI solutions.

(ii)   Risk Problems: Potential risks posed by AI systems to society, individuals, or institutions.

(d)   Strategy Problems:

(i)    Knowledge Management Problems: Challenges in managing and disseminating knowledge within AI ecosystems.

(ii)   Technology Management Problems: Issues related to the governance, control, and deployment of AI technologies.

(e)   Technical Problems:

(i)    Technical Viability Problems: Concerns about whether an AI solution is technically feasible or scalable.

 

(3)   Imperative Alignment: The Committees shall document how each use case aligns with the four imperatives outlined in Schedule 1.

(4)   Risk Assessment Framework: Each classified use case shall undergo a comprehensive risk analysis that evaluates:

(a)   Technical Risks:

(i)    Implementation complexity

(ii)   System reliability

(iii)  Scalability challenges

(b)   Operational Risks:

(i)    Resource availability

(ii)   Stakeholder readiness

(iii)  Integration challenges

(c)   Strategic Risks:

(i)    Market adoption barriers

(ii)   Competitive threats

(iii)  Long-term viability concerns

(d)   Compliance Risks:

(i)    Regulatory conflicts

(ii)   Legal liabilities

(5)   The Committees may consider documenting mitigation strategies for any risks identified under paragraph (4).

 

(6)   Cross-Cutting Considerations: The Committees shall consider and document any cross-cutting factors that may influence multiple imperatives or governance stages. These may include:

(a)   Overlapping technical and legal challenges that require multi-disciplinary solutions.

(b)   Ethical concerns that intersect with commercial viability or regulatory compliance.

(c)   Strategic risks that could affect both market adoption and long-term sustainability. 

 

(7)   Documentation Requirements: The Committees may maintain records for each use case, including:

(a)   The rationale behind its classification under the AI Development Committee, the Policy Innovation Committee, or any other designated ISAIL.IN Committee.

(b)   How the use case aligns with the technical, commercial, legal, and ethical imperatives as outlined in Schedule 1.

(c)   A summary of the key risks identified and any proposed mitigation strategies.

(d)   Any cross-cutting considerations that could influence multiple imperatives or governance stages.

 

(8)   Documentation maintained under paragraph (7) should be updated at key milestones in the standardisation process to reflect any significant changes in classification, risk status, or alignment with the imperatives.
