
Analyzing the Denmark AI Strategy Report

Varad Mohan,

Research Intern,

Indian Society of Artificial Intelligence and Law.


Following is an analysis of the regulatory framework presented in the European Commission’s AI Watch report, titled ‘Denmark AI Strategy Report’. The report is divided into six sections: Human Capital, From the Lab to the Market, Networking, Regulation, Infrastructure and Update. This analysis is limited to the Regulation portion of the report. The full text may be accessed here.

“The use of AI technologies is expected to raise new ethical and legal issues, calling for the development of a responsible ethical and legal framework for the use of artificial intelligence.”

The report, at the outset, accurately identifies the need for a legal framework for the use of artificial intelligence technologies.

“The Danish strategy foresees to develop an ethical framework based on six principles in order to improve the level of trust and confidence in AI. The six principles for AI relate to self-determination (i.e. ensuring that citizens can make informed and independent decisions) and to human dignity, equality and justice (i.e. ensuring that there is no infringement of human rights and maintaining respect for diversity). It also covers aspects of responsibility and explainability (i.e. openness and transparency). The sixth principle stipulates that AI development should be ethically responsible.”

The six principles mentioned herein guarantee some of the elements that any foundational regulatory mechanism for artificial intelligence must incorporate. However, there is still scope for additional principles that would help establish a reliable and widely acceptable regulatory framework. Reference must be made to frameworks such as the Asilomar AI Principles and the Responsible Machine Learning Principles. Such widely recognized principles have been formulated over the years by leading experts in the field of AI and must be given due consideration.

“To ensure that ethical issues are taken up, the Danish government advocates:

  • In May 2019 an independent Data Ethics Council was launched with the purpose of making recommendations on ethical issues, in particular on responsible and sustainable use of data by the public and private sector;

  • In December 2019 a Data Ethics Toolbox was launched to support companies to adopt and implement data ethics into their business models;

  • A joint cybersecurity and data ethics seal, a labelling scheme, has been agreed upon and is expected to be launched in the first half of 2020 by an independent multi-stakeholder consortium involving consumer groups, research institutes and the industry;

  • The Danish government is going to present a law on disclosure of Data Ethics Policy for the largest Danish companies in spring 2020. This means that companies will have to explain their company data ethics policy and comply with the aforementioned law in a similar way as they are already doing for their corporate social responsibility policies.”

Data ethics and cybersecurity are the pillars on which artificial intelligence stands; without a comprehensive framework for these aspects, the field will remain extremely vulnerable. These are steps in the right direction. Additional reference should be made to the General Data Protection Regulation (GDPR) to ensure that all aspects of data regulation are covered by this model. It must also be ensured that these data regulations are compatible with other data regulatory frameworks recognized at the international level.

“In terms of legislation, the Danish strategy highlights the need to evaluate the current legal framework and to adopt new legislation to guarantee a responsible development of artificial intelligence applications. New legislation relates among others to data ethics and security regulations and include initiatives as:

  • Setting up an inter-ministerial working group to analyse to what extent the existing legislative framework is sufficiently covering the current needs of AI regulations, and to determine if new legislation should be launched;

  • Legislative amendment to the Danish Financial Statements Act with the obligation for businesses to report about their data-ethics policy;

  • Cyber security directive: This directive, properly known as the Directive on security of network and information systems (NIS) requires Member States to adopt a national cyber-security strategy. The Danish cyber-security strategy has been published in May 2018.”

This approach is viable in that it recognizes the need to identify and subsequently rectify the blind spots in the current legislative framework. Careful consideration must be given to the fact that artificial intelligence, much like other technologies, is rapidly evolving and will inevitably require further amendments. Any new legislative framework must therefore be sufficiently flexible to allow for such amendments as and when they become necessary. Additionally, the inter-ministerial group that undertakes such a task must ensure that the needs of AI regulation are measured against domestic as well as international standards.

“The Danish strategy recognises the importance of international standards in AI. In this regard, the Danish government will initiate work to develop national technical specifications based on the specific needs of Danish businesses. In particular, the Strategy for Denmark’s Digital Growth foresees to:

  • Support the development of international standards for small and collaborative robots (cobots).”

Appropriate consideration has been given to international standards, as well as domestic needs. It is imperative to maintain a healthy balance between the two.

The strategy presented herein is comprehensive and covers several aspects of artificial intelligence regulation. However, there remains scope for the additional inputs and considerations noted above.


The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. Our mission as a non-profit industry body for the analytics & AI industry in India is to promote responsible development of artificial intelligence and its standardisation in India.


Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals, namely - the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative - The Bharat Pacific.
