The ISAIL.in-HPNLU Shimla Outcome documentation on AI and IP Law, 2025
- Indian Society of Artificial Intelligence and Law

ISAIL.IN (the Indian Society of Artificial Intelligence and Law) and the Himachal Pradesh National Law University Shimla’s Centre for Artificial Intelligence and Intellectual Property Rights partnered to collect survey data from two key stakeholder groups in an essay competition conducted by HPNLU Shimla:
Participants (Students and Scholars) - 31 Responses
Reviewers (Professionals and Practitioners) - 17 Responses
What did we document?
Their intellectual property law perspectives around AI.
You can now access the outcome document with necessary data at https://indopacific.app/product/the-isail-in-hpnlu-shimla-outcome-documentation-on-ai-and-ip-law-2025/
While these results carry no direct recommendatory value, we believe the documentation may help anyone understand intellectual property law perspectives around AI from the vantage points of the two stakeholder groups described above.
Accordingly, the first segment of the documentation features responses by the participants, while the second segment features responses by the reviewers.
Why Do We Document?
Enforcing AI standards in the Indian and Indo-Pacific markets is hard: it requires collecting data and inputs on market tendencies around three facets of artificial intelligence, namely its adoption, the research around it, and its real-life policy implications, across as many sectors as possible. In fact, in the lingo of public policy, there are three types of realities that markets, government institutions and technical stakeholders face:
The knowns (something that we know)
The unknowns (something that we don’t know)
The unknown unknowns (something that we don’t know that we don’t know)
Any documentation that we produce remains open access under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license and is free to be used for research purposes, provided that it is explicitly cited (attributed).
Since ISAIL.IN, as a trust, is a community, no documentation we publish is guided by commercial expectations. All outcome documents that we produce are intended to provide tangible data, anecdata and inputs on the state of market practices around the adoption of, research in, and impact of AI.
Comparative Analysis: Reviewer vs. Author Perspectives on AI and IP Law
Analysing both datasets reveals striking contrasts between how reviewers evaluate essays and how authors perceive their own work. Here are the key insights:
1. IP Law Focus Distribution: Concentration vs. Diversification
Reviewers saw extreme concentration:
Copyright Law: 85.0% (17/20 essays)
Patent Law: 10.0% (2/20 essays)
Trademark Assessment: 5.0% (1/20 essays)
Authors showed broader diversification:
Copyright Law: 52.7% (29/55 essays)
Patent Law: 30.9% (17/55 essays)
Trademark Assessment: 16.4% (9/55 essays)
Authors believe they are writing about a more diverse range of IP topics, but reviewers primarily see copyright-focused work. This suggests either (1) selection bias in the essays that reached review, or (2) that authors overestimate the breadth of their coverage.
2. Self-Assessment vs. External Evaluation: The Confidence Gap
Reviewers' assessment of essay approaches:
Policy-oriented with innovation: 38.5%
Competent but lacking novelty: 30.8%
Rehashing existing arguments: 30.8%
Authors' self-assessment:
Comparative international approach: 48.0%
Doctrinal analysis: 32.0%
Policy-oriented reform: 20.0%
Key Insight: Authors are more optimistic about their comparative international work (48%), while reviewers focus on evaluating levels of innovation. There is a methodological mismatch: authors describe what they did, whereas reviewers judge the quality of the output.
3. AI-IP Topic Focus: Perfect Alignment
Both groups showed an identical 33.3% share for AI-generated creative works as the leading topic, suggesting strong consensus on what constitutes the most important area of study.
4. Unique Reviewer Concerns vs. Author Blind Spots
Reviewers identified critical gaps authors missed:
36.8% of essays never addressed AI inventorship in patents
26.1% highlighted "practical enforcement mechanisms" as the biggest scholarship gap
26.7% noted rural creators and end-users as most overlooked stakeholders
Authors focused on methodology over impact:
No corresponding questions about stakeholder analysis
More emphasis on technical approach than practical implementation
5. The "Honesty Test" Phenomenon
The author survey explicitly asked respondents to "be honest" about their approach, while reviewers provided an external evaluation. Despite this prompt for honesty, authors still offered a more optimistic self-assessment than the reviewers' evaluations.