
How "Mind-Reading" AI Systems Are Different than What We Have Seen in the AI Industry

Updated: Dec 30, 2020

Abhishikta Sengupta,

Research Intern,

Indian Society of Artificial Intelligence and Law.


 

Researchers at the University of California have taken a significant step towards turning one of the most popular science-fiction tropes into reality by developing an Artificial Intelligence system that can analyse a person’s brain activity and turn it into text. The system is still miles away from a comprehensive reading of the mind: it is trained to decipher only 250 of the 20,000-35,000 words that form the average active vocabulary of an English speaker, and so far it works only on the neural patterns detected when an individual is speaking out loud. Even so, the research paints a hopeful picture for those who have lost the ability to speak owing to neurological disorders, paralysis and the like. With an accuracy of 97% on that restricted vocabulary, the system, if developed further, would revolutionize the way they communicate. Previous attempts at developing tools to enable such individuals to express themselves include, most notably, the one used by Stephen Hawking, who could select letters with the movement of a single muscle and thereby form words. An AI system that interprets one’s thoughts directly could vastly simplify such a long-drawn process, allowing participation even in rapid conversation, as well as control over other linguistic elements.
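To make the idea concrete, the sketch below shows one way such a brain-to-text system can be framed computationally: a sequence-to-sequence model in which an encoder summarises a recording of neural activity and a decoder predicts a sentence drawn from the 250-word vocabulary. Everything here – the architecture, the feature dimensions and the random stand-in data – is an illustrative assumption, not the researchers’ actual implementation.

```python
# Illustrative sketch only: a toy encoder-decoder that maps a sequence of
# neural-signal feature vectors to a sequence of word indices, in the spirit
# of the brain-to-text system described above. All shapes and names are
# assumptions for illustration, not the published architecture or data.
import torch
import torch.nn as nn

VOCAB_SIZE = 250      # the limited word set mentioned in the article
FEATURE_DIM = 64      # hypothetical number of neural-signal features per time step
HIDDEN_DIM = 128

class BrainToTextModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compresses the neural time series into a fixed summary vector.
        self.encoder = nn.GRU(FEATURE_DIM, HIDDEN_DIM, batch_first=True)
        # Decoder: emits word scores one position at a time, conditioned on that summary.
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN_DIM)
        self.decoder = nn.GRU(HIDDEN_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, neural_signals, target_words):
        # neural_signals: (batch, time_steps, FEATURE_DIM)
        # target_words:   (batch, sentence_length) of word indices
        _, summary = self.encoder(neural_signals)           # (1, batch, HIDDEN_DIM)
        decoded, _ = self.decoder(self.embed(target_words), summary)
        return self.out(decoded)                            # (batch, sentence_length, VOCAB_SIZE)

# A forward pass on random stand-in data, just to show the data flow.
model = BrainToTextModel()
signals = torch.randn(2, 100, FEATURE_DIM)                  # 2 recordings, 100 time steps each
words = torch.randint(0, VOCAB_SIZE, (2, 8))                # 2 target sentences of 8 words
logits = model(signals, words)
print(logits.shape)  # torch.Size([2, 8, 250]): a score for every vocabulary word at each position
```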

From Science Fiction to Science: Current and Possible Applications of Mind-Reading Artificial Intelligence Systems and Related Ethical Concerns

At present, the newly developed AI system is in its elementary stage, confined to a very limited sentence set, with its accuracy declining as new words are added. However, given the rapid and profound advances in the field of Artificial Intelligence, it may not be long before genuine mind-reading technology emerges – a glimpse into our brains to discern our thoughts. In light of this, it is worth noting that autonomy over our thoughts and minds has always been inherent and fundamental to being human. This raises several noteworthy ethical questions, as the development of AI platforms in this domain could have serious and far-reaching implications for our privacy.

The most apparent application of such an AI system lies in the domain of Criminal Law. There has already been mounting interest in Brain Fingerprinting, a technique in which AI interprets brain activity evoked by stimuli relevant to the crime in question and thereby detects concealed information. This would have monumental applications in interrogations by the authorities: objects and phrases related to the crime, which only the perpetrator would be familiar with and which are significant to him or her, would yield a characteristic response in the brain. The same idea could be used in combatting terrorism, by detecting an individual’s brain activity in a public place and perceiving thoughts of using explosives or firearms. Researchers at Carnegie Mellon University have worked on a system that would identify from a person’s brain waves whether they are familiar with a certain place, such as the victim’s residence. Law enforcement could also profile suspects with AI that reconstructs a face from someone’s memory, an area in which research has likewise been conducted.
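In computational terms, the “familiarity” tests described above are commonly framed as a classification problem: given the brain response evoked by a stimulus, decide whether the stimulus is familiar to the subject. The sketch below illustrates that framing on purely synthetic data; the feature counts, the simulated signal and the choice of classifier are assumptions for illustration and do not reproduce any actual brain-fingerprinting system.

```python
# Hypothetical illustration of brain fingerprinting framed as binary
# classification: does a recorded brain response indicate familiarity with a
# stimulus? The data here is synthetic; real systems work on measured
# brain responses, not random numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_TRIALS, N_FEATURES = 400, 32                         # assumed trial and feature counts

# Synthetic stand-in data: "familiar" trials get a small added signal component.
labels = rng.integers(0, 2, size=N_TRIALS)             # 1 = familiar stimulus, 0 = unfamiliar
responses = rng.normal(size=(N_TRIALS, N_FEATURES))
responses[labels == 1, :8] += 0.8                      # pretend familiarity shifts some features

X_train, X_test, y_train, y_test = train_test_split(
    responses, labels, test_size=0.25, random_state=0)

classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {classifier.score(X_test, y_test):.2f}")
```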

This, however, calls into question the protection against self-incrimination that is guaranteed by legal systems across the world. Conversely, if such scans are disallowed at the outset, wrongly accused individuals are robbed of an opportunity to prove their innocence. What must be addressed in order to answer this question – and something researchers have contemplated – is whether brain activity falls into the category of testimony, or into the category of DNA samples, blood and hair, which an individual can be compelled to give. In the absence of precedent, there are no conclusive answers, and the debate on the validity of such technology continues.

The further development of AI systems to give a voice to the speech-impaired and to those afflicted with Locked-In Syndrome, tetraplegia, Alzheimer’s, Parkinson’s disease, multiple sclerosis and so on, although holding the potential to provide enormous benefit, would raise certain ethical questions. How will the system differentiate between what the individual wishes to say and the deeper thoughts they wish to keep private? How can informed consent be obtained from a person who has lost the ability to communicate?

A likely evolution of this system may lead to accessing and modifying memory. Along these lines, another therapeutic application that is equally weighed down by ethical considerations is the treatment of war veterans suffering from Post-Traumatic Stress Disorder (PTSD). Researchers at the University of California, Berkeley have successfully managed to create, reactivate and delete memories in the brains of rats. Although this could provide immense relief to people suffering from PTSD, such a technique, if applied to humans, carries obvious potential for misuse. In the wrong hands, the technology could be put to any degree of malicious use, from planting fabricated information in a suspect’s brain in order to confuse or incriminate them, to wiping their memory of an event entirely.

The very basis of the pharmaceutical industry’s success has been a progressive transformation from treatment to enhancement. If such AI technology becomes widespread as a form of speech prosthesis, it is only a matter of time before it follows suit and becomes the privilege of multi-million-dollar corporations seeking to gauge customer needs. In fact, the emerging field of Neuromarketing is already working to determine the best marketing strategies for various target markets. Allowing such an AI system to develop into a mere capitalist venture, dominated by major market players who invest in it in furtherance of their own business objectives, will no doubt brand this all-important system as a prerogative of the rich and lead to social stratification. This provokes the questions: who is accountable for the use of this AI technology? Is it the government? Neuroscientists? Will parents of future generations be permitted to have brain scans performed on their children to determine whether they are lying? Should it be permitted for commercial purposes at all?

Were such technology to fall into the hands of terrorists, it could be used to extract information from hostages, bypassing the prolonged processes of coercion and torture that most soldiers are trained to resist. Furthermore, according to Artificial Intelligence experts, AI may in the future be able to develop by itself, blurring the lines between human thoughts and machine thoughts altogether.

The possible applications, and the ethical concerns allied to them, are endless. Whatever the implementation, apprehensions of privacy invasion arise. Researchers have stated that covert “mind-reading” technologies are already in development, such as a light beam that need only be projected onto an individual’s forehead to interpret their brain waves. If we are unaware that such tools are being used on our minds, this would surely breach privacy laws. Moreover, what if further developments enable the technology to control our minds without our knowledge and assent? It may sound like dystopian fiction, but if AI can read our thoughts, how far are we from a controlled populace, acting unconsciously in pursuance of a totalitarian regime?


Conclusions and Future Perspective

Like any evolving technology, this Artificial Intelligence system holds the potential to be of colossal benefit if used ethically. In the right hands, it could revolutionize anything from speech prosthesis to the criminal justice system. Although a comprehensive mind-reading technology arising from the system in question seems a tremendously remote possibility at present, as its accuracy and use increase, the issues it raises, ethical and otherwise, are too numerous to ignore. It has become increasingly necessary to discuss where such technology may be used and, more importantly, by whom. There is a pressing need for consensus on ethical guidelines for its development, use and dissemination.




The Indian Learning, e-ISSN: 2582-5631, Volume 1, Issue 1, July 31, 2020.

