Do brain-computer interfaces raise new privacy concerns?

Imagine being able to control machines using your thoughts alone. This idea has inspired popular science fiction franchises such as The Matrix and Pacific Rim, in which humans interface with digital machines that detect brain signals, giving the characters superhuman powers. For many, these brain-computer interfaces (BCIs) seem futuristic and fantastical, which may explain the hype behind recent innovations such as Elon Musk’s Neuralink.

However, BCIs are not new, and neither are most of the concerns that they raise. Humans have been able to control computers with brain signals since the early 1990s. Existing BCIs often rely on non-invasive electroencephalography (EEG), which simply sits on someone’s head, and usually use machine learning to detect changes in brain patterns. These detected patterns have been applied to a limited number of functions, such as spelling applications and video games. Nonetheless, research in non-invasive EEG, including ongoing research in our group at Dalhousie University, has yielded various innovative applications, such as detecting mind wandering while watching long online lectures.
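
To make the machine learning step concrete, here is a minimal sketch of the kind of pipeline an EEG-based BCI might use: spectral band-power features computed over short windows of signal, fed to a simple classifier. The synthetic data, channel count, and the attentive versus mind-wandering framing below are illustrative assumptions, not a description of any particular system.

```python
# A minimal sketch of an EEG-style classification pipeline:
# band-power features from short signal windows -> a simple classifier.
# The "EEG" here is synthetic noise standing in for real recordings.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 256            # sampling rate in Hz (assumed)
N_CHANNELS = 8      # a small consumer-grade headset (assumed)
WINDOW_S = 2        # seconds of signal per classification window

rng = np.random.default_rng(0)

def make_window(attentive: bool) -> np.ndarray:
    """Simulate one window of multi-channel EEG (noise plus an alpha rhythm)."""
    t = np.arange(FS * WINDOW_S) / FS
    noise = rng.normal(scale=1.0, size=(N_CHANNELS, t.size))
    # Illustrative assumption: stronger 10 Hz (alpha) activity in the
    # "mind-wandering" class than in the "attentive" class.
    alpha_amp = 0.3 if attentive else 1.0
    return noise + alpha_amp * np.sin(2 * np.pi * 10 * t)

def band_power(window: np.ndarray, lo=8.0, hi=12.0) -> np.ndarray:
    """Average spectral power in the alpha band, one feature per channel."""
    freqs, psd = welch(window, fs=FS, nperseg=FS)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=1)

# Build a labelled dataset: 0 = attentive, 1 = mind-wandering.
X = np.array([band_power(make_window(attentive=(label == 0)))
              for label in (0, 1) for _ in range(100)])
y = np.repeat([0, 1], 100)

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```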

For some individuals, such as people with locked-in syndrome, BCIs are not just novel technologies but life-enhancing devices through which they can interact with the world. Even a limited EEG-based BCI can enable someone living with paralysis to regain a degree of freedom. This in turn inspired medical researchers and biomedical engineers to pursue much more functional BCIs by leveraging brain implants. The implants that have since been developed have enabled patients who were otherwise paralyzed to command prosthetics, often far more precisely than they could with an EEG-based BCI. While invasive BCI offers a much higher degree of functionality than non-invasive BCI, progress has been limited because these implants require surgeons to drill holes in a patient’s skull, which is a high-risk surgery.

BCI technology is now changing quickly because emerging technologies are both comparatively less invasive and highly functional. On July 6th, a company called Synchron became the second company in history to receive US Food and Drug Administration approval for clinical trials of a permanent, invasive BCI. While not the first permanent surgical BCI (that is, a BCI that sits under the scalp rather than on it), Synchron’s device is implanted through blood vessels, which makes it much easier to deploy and less risky than transcranial surgery. In the coming months, Synchron plans to conduct a wider trial which, if successful, will allow it to sell the implant as a medical device. This would mark a major milestone in the advancement of BCI technology: it would enable digital programs to interface with smaller collections of neurons, and in turn enable a wider range of people to precisely control robotic prosthetics with their brains. It is reasonable to expect a greater proliferation of the technology in medical applications, and possibly in commercial applications in the future.

With this milestone will also come new legal and policy challenges. While these devices are not qualitatively different from existing BCI technologies, I argue that the complexity and sensitivity of the data they collect warrant new privacy considerations that past technologies did not.

This complexity raises specific concerns around privacy and consent, especially with potential commercial applications of implanted BCI. Since the passage of the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada has affirmed principles of informed consent and of limiting the use of data. The recently proposed Consumer Privacy Protection Act, which would supersede PIPEDA, would expand on these principles to include a more rigorous expectation of documented consent, similar to the GDPR’s requirements.

The complexity of the systems that could be built on this new generation of BCI would present challenges for informed and documented consent. Machine learning systems using this type of brain data would be very complex, involving deep learning models that are non-transparent and often trained on other individuals’ data. Companies and watchdogs may wish to consider guidelines for clearly communicating what data would be collected, how it would be used, and the limits on that use. This may also motivate the application of explainable AI techniques to BCIs, which would help users understand exactly what their data is being used for.
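
As a minimal sketch of what one such explainability tool could look like, the snippet below applies permutation importance, a model-agnostic technique that reports how much a classifier’s accuracy depends on each input feature. The classifier and the synthetic “brain data” are illustrative assumptions; a real BCI system would involve far more complex models and signals.

```python
# Sketch of permutation importance applied to a toy classifier, as one way
# to show users which recorded features actually drive a model's decisions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each column is the power recorded from one electrode; only the
# first two columns actually carry information about the (synthetic) state.
n_samples, n_features = 400, 6
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much test accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```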

The new generation of BCI will likely collect data that is more sensitive than what non-invasive BCI collects. With EEG-based BCI, the detected signals come from a limited range of neural patterns that are difficult to link to a particular individual or to use to diagnose an illness. Many participants in EEG studies even consent to having their brain data published openly on the internet, given how difficult it is to link that data back to a specific individual. With an endovascular BCI, however, the signals will be much richer, and it will be more feasible to identify a specific individual from them.

This may raise new issues with respect to Canada’s privacy torts. Following the ruling in Jones v Tsige (2012), Ontario recognized that damages can be incurred from the improper access of personal information. The courts have since established that the possible damages also depend on the degree of sensitivity of the data. While it is not immediately clear to what degree someone’s brain data in this context would be considered sensitive, there would clearly be at least some sensitivity concerns analogous to those around health data. Whether this would be categorically different from, say, the data collected by a Fitbit would likely hinge on precisely how identifiable the data is and what types of health information could be inferred from it.

As the technology develops, it will become increasingly important for legal and policy professionals to understand its nuances. BCIs are no longer a matter of science fiction and are quickly becoming much more sophisticated. Moving forward, Canadians would benefit from considering their privacy frameworks in light of these recent advancements. While these devices will not be categorically different from existing technologies, we should take care to consider the implications of the complexity and sensitivity of their data.

Why cognitive enhancement technologies do not need new regulation

After digging through old files from my PhD, I ran into a copy of a class presentation I worked on with Sarah Macleod and Mohammad Habibnezhad on cognitive enhancement technologies. Examples of cognitive enhancement technologies include cognitive enhancing drugs such as Ritalin, or even whatever the heck Elon Musk is working on at Neuralink; what they have in common is that they make us better at thinking or learning. One of the potential future applications of my thesis work is the development of computer-based education that adapts to users’ attention. Such technologies may one day deliver education radically better than either our current MOOCs or even in-person lectures, and would therefore be an example of a cognitive enhancement technology.

The presentation we gave concerned whether cognitive enhancement technologies should be strictly regulated. We argued that they do not require additional regulation. Our argument was best summarized by a simple two-by-two chart of risk against benefit, roughly:
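
                High benefit                      Low benefit
  High risk     Regulated (Food and Drugs Act)    Likely banned
  Low risk      Rights (universal access)         Goods (market distribution)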

Cognitive enhancement technologies can seem scary, but there are already existing frameworks for understanding them. In Canada, the Food and Drugs Act governs all foods, drugs, and medical devices, including any device that modifies or corrects the body structure of humans. Both cognitive enhancing drugs and devices that modify the functioning of the human brain (as in Musk’s case) would therefore be governed by the Food and Drugs Act. This regulatory regime is fundamentally designed to ensure the safety of such technologies or, failing that, to ensure they offer sufficiently great rewards for the expected risks. If high-risk technologies are unable to offer great benefits to their users, they are likely to be banned.

I find the lower half of the matrix more interesting. When technologies are low risk to humans, we can envision them as either goods or rights. For example, there is pretty good evidence that coffee is a cognitive enhancer. There is some evidence that caffeine provides benefits to learning and memory, and even better evidence that it improves reaction time. However, the benefits of caffeine are slight. In market economies, we normally think of these sorts of things as ‘goods,’ insofar as they satisfy a consumer’s want. The main benefit of coffee is that it satisfies my desire for coffee; potential cognitive enhancement is secondary.

Sometimes, however, goods can be low risk and offer such high benefits that not having them limits a person’s capability to be a functioning member of society. For example, access to primary and secondary education is categorically different from access to coffee. People who do not have access to primary and secondary education are severely limited in their ability to take part in society. Children who have access to education can learn the skills required to participate in the economy or polity, or potentially pursue tertiary education of their choosing. Those who do not have access are severely limited in their capabilities and agency and may never be able to choose how they contribute to society. This idea is better summarized by Amartya Sen and Martha Nussbaum, and is often called the capability approach to rights.

I believe that low-risk, high-benefit cognitive enhancement technologies may fall into this category of ‘rights’ if they truly offer large political or economic advantages to their consumers. If we imagine a radically better way to teach students using learning technology, students who used such a hypothetical technology would have significantly greater capabilities in society. They could potentially learn in a fraction of the time, becoming much more productive and much more competitive. People who did not have access to this hypothetical technology would, by contrast, be severely limited in their capabilities. We should therefore consider universal access to such cognitive enhancement technologies, at least if they become adopted at a large scale. These days, there are even business models that incentivize both innovation and free access to high-benefit technologies, and these might serve as a starting point as such technologies eventually make their way to open access.

Long story short, this is why I believe that cognitive enhancement technologies do not need additional regulation. If they are high-risk technologies, we have existing regulation that covers them. If they are low risk, they are either goods or rights depending on their benefits, and we have existing methods of distributing both. Black Mirror will have to wait on this one.