Concerns about C-27 and the AIDA

What are the implications of Canada’s new Artificial Intelligence and Data Act (AIDA) for researchers? I have a few thoughts to contribute to this conversation, because I see some big problems with the bill.

The AIDA is part of the wider Bill C-27, which is currently nearing its final reading in the Canadian Parliament. A few months back I had the pleasure of being published with Carla Heggie in the Canadian Journal of Law and Technology. We wrote a paper on the implications of the new law for emerging brain-computer interface (BCI) technologies. Reflecting on that discussion, I believe our findings on BCI speak to a larger concern.

The AIDA introduces the concept of “high-impact systems”: artificial intelligence technologies that could cause greater harm than other AI technologies. The Act provides some factors for assessing whether a system would be high impact (see the Companion Document):

  • Evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
  • The severity of potential harms;
  • The scale of use;
  • The nature of harms or adverse impacts that have already taken place;
  • The extent to which for practical or legal reasons it is not reasonably possible to opt-out from that system;
  • Imbalances of economic or social circumstances, or age of impacted persons; and
  • The degree to which the risks are adequately regulated under another law.

I have two questions for lawmakers related to this issue.

The first is “how do lawmakers categorize technologies?”

For example, Elon Musk’s Neuralink is a BCI system, but it involves surgery that can harm an individual. The BCI technologies that we work on in my group are non-invasive and pose no greater risk than everyday life.

If lawmakers regulate technologies as broad categories, I am concerned that it could have negative implications for researchers. Would my research still need an extensive legal review? The uncertainty around this question and the nature of the legislation may themselves have a chilling effect on my work, because research ethics boards may be reluctant to permit activities that could potentially be illegal, even when they are harmless.

The second is “how will lawmakers administer high-impact systems legislation?”

If each system must go through a parliamentary review, it will take years, if not decades, to legislate each technology. If there is an alternative mechanism, how can lawmakers ensure that the process does not get bogged down in red tape?

These tests will also create regulatory uncertainty. They differ from the tests that the European Union uses in its new AI legislation. Will companies need to perform additional compliance checks to operate in Canada?

I challenge lawmakers to consider the risks of this legislation. While I think we should have transparent and trustworthy AI, and that legislation is a way to get there, the current gaps in the proposed law could do much more harm than good.

What do you think about the law? To learn more, check out the AIDA Companion Document.

You may also be interested in our paper (citation below, not yet online).

Conrad, C., and Heggie, C. (2024). Legal and ethical challenges raised by advances in brain-computer interface technology. Canadian Journal of Law and Technology, 21(2).

Do brain-computer interfaces raise new privacy concerns?

Imagine being able to control machines using your thoughts alone. This idea inspired many popular science fiction franchises such as The Matrix or Pacific Rim. In these stories, humans are able to interface with digital machines that detect brain signals, which gives the characters superhuman powers. For many, these brain-computer interfaces (BCIs) seem futuristic and fantastic, which may explain the hype behind recent innovations such as Elon Musk’s Neuralink.

However, BCIs are not new, and neither are most of the concerns that they raise. Humans have been able to control computers with brain signals since the early 1990s. Existing BCIs often leverage non-invasive electroencephalography (EEG), which simply sits on someone’s head, and usually use machine learning to detect changes in brain patterns. These detected brain patterns have been applied to a limited number of functions, such as spelling applications and video games. Nonetheless, research in non-invasive EEG, including ongoing research in our group at Dalhousie University, has yielded various innovative applications, such as detecting mind wandering while watching long online lectures.
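To make this concrete, here is a minimal sketch of how an EEG-based BCI classifier of this kind might work. It is illustrative only: the synthetic data, the band-power features, and the logistic regression model are my assumptions for the example, not the specific pipeline used in our group’s research.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 256  # assumed sampling rate in Hz

def band_power_features(window, fs=FS):
    """Summarize a 1-D EEG window as average power in classic frequency bands."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands.values()])

# Synthetic stand-in for labeled EEG windows (e.g., "attentive" vs. "mind wandering").
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, FS * 2))  # 200 two-second windows
labels = rng.integers(0, 2, size=200)         # hypothetical ground-truth labels

X = np.vstack([band_power_features(w) for w in windows])
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())  # ~0.5 on random data, as expected
```

A real pipeline would swap the synthetic windows for recorded EEG epochs and validate across participants rather than within a single pooled set, but the shape is the same: extract features from brain signals, then train a classifier to detect a mental state.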

For some individuals, such as people with locked-in syndrome, BCIs are not just novel technologies but life-enhancing devices through which they can interact with the world. Even a limited EEG-based BCI can enable someone suffering from paralysis to regain a degree of freedom. This, in turn, inspired medical researchers and biomedical engineers to pursue much more functional BCIs by leveraging brain implants. The implants that have since been developed have enabled patients who were otherwise paralyzed to command prosthetics, often far more precisely than they could with an EEG-based BCI. While invasive BCI offers a much higher degree of functionality than non-invasive BCI, progress has been limited because these brain implants require surgeons to drill holes in a patient’s skull, which is a high-risk surgery.

BCI technology is now changing quickly because emerging technologies are both comparatively less invasive and highly functional. On July 6th, a company called Synchron became the second company in history to receive US Food and Drug Administration approval for clinical trials of a permanent invasive BCI. While not the first to implement permanent surgical BCI (that is, BCI that sits under the scalp, rather than on it), Synchron’s solution is implanted through blood vessels, which makes it much easier to deploy and less risky than transcranial surgery. In the coming months, Synchron plans to conduct a wider trial which, if successful, will allow it to sell the implant as a medical device. This development would mark a major milestone in the advancement of BCI technology: it would enable digital programs to interface with smaller collections of neurons, and in turn enable a wider range of people to precisely control robotic prosthetics with their brains. It is reasonable to expect a greater proliferation of the technology in medical applications, and possibly in commercial applications in the future.

With this milestone will also come new legal and policy challenges. While these devices are not qualitatively different from existing BCI technologies, I argue that the complexity and sensitivity of the data they collect warrant new privacy considerations that past technologies did not.

The complexity raises specific concerns about privacy and consent, especially for potential commercial applications of implanted BCI. Since the enactment of the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada has affirmed the principles of informed consent and limited use of data. The recently proposed Consumer Privacy Protection Act, which will supersede PIPEDA, will expand on these principles to include a more rigorous expectation of documented consent, similar to GDPR guidelines.

The complexity of the systems that could be built on this new generation of BCI would present challenges for informed and documented consent. Machine learning systems that use this type of brain data would be very complex, involving deep learning models that are non-transparent and often developed from other individuals’ data. Companies and watchdogs may wish to consider guidelines to manage the clear communication of what data would be collected, how it would be used, and the limits of its use. It may also motivate the application of explainable AI solutions to BCIs, which would help users understand exactly what their data is being used for.
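As a rough illustration of what one such explainable AI layer could look like, the sketch below scores which input features most influence a hypothetical BCI classifier’s predictions using permutation importance. The feature names, synthetic data, and model choice are assumptions for the example, not a prescription for how a vendor should do this.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical feature matrix: rows are recording windows, columns are
# named signal features a vendor might extract from an implanted BCI.
feature_names = ["motor_band_power", "alpha_power", "signal_variance", "spike_rate"]
rng = np.random.default_rng(1)
X = rng.standard_normal((300, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic "intent detected" label

model = RandomForestClassifier(random_state=1).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# accuracy drops, giving a user-facing account of what the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Importance scores like these could, for instance, be surfaced in a consent dashboard so a user can see which aspects of their brain signal a system actually depends on.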

The new generation of BCI will likely collect a range of data that is more sensitive than that of non-invasive BCI. With EEG-based BCI, the detected signals are generated from a limited range of neural patterns, which are difficult to link to a particular individual or to use to diagnose an illness. Many participants in EEG studies even consent to having their brain data published openly on the internet, given how difficult it is to link this data to a specific individual. With an endovascular BCI, however, the signals will be much less limited, and it would be more feasible to identify a specific individual.

This may raise new issues with respect to Canada’s privacy torts. Following the ruling in Jones v Tsige (2012), Ontario recognized damages that could be incurred from the improper access of personal data. The courts have since established that the available damages are also related to the degree of sensitivity of the data. While it is not immediately clear to what degree someone’s brain data in this context would be considered sensitive, it is clear that there would be at least some sensitivity concerns analogous to those around health data. Whether this would be categorically different from, say, a Fitbit would likely hinge on precisely how identifiable the data is and what types of health information could be inferred from it.

As the technology develops, it will become increasingly important for legal and policy professionals to understand its nuances. BCIs are no longer a matter of science fiction and are quickly becoming much more sophisticated. Moving forward, Canadians would benefit from considering these issues in light of recent advancements. While the new devices will not be categorically different from existing technologies, we should take care to consider the implications of their data’s complexity and sensitivity.

Fundraising AI Forum 2021

I participated in #FUNAI2021. You can find the slides from my presentation here.

Extended Abstract

Over the past five years there has been a rise in public awareness about the efficacy of social media for targeted advertising. Most notoriously, social media was heavily leveraged in both the Obama 2012 (Enli & Naper, 2016) and Trump 2016 political campaigns to generate advertising advantages. In the case of the latter, illegal and unethical use of social media data by the now-defunct Cambridge Analytica facilitated an unprecedented advantage by leveraging intimate personal data acquired from Facebook’s servers (Isaak & Hanna, 2018). Perhaps more than any other case, this has fuelled public skepticism about targeted advertising. Yet, despite such increased public scrutiny and concerns about privacy (Gruzd & Hernández-García, 2018), evidence suggests that targeted advertising decreases advertisement avoidance, potentially decreasing advertising costs for the organizations that employ it (Jung, 2017).

Should charities follow suit? Evidence from two prior studies conducted by researchers at Dalhousie University suggests that social media can similarly be leveraged to conduct targeted advertising. In the first study, publicly accessible Twitter data was used to predict political donations among 438 Twitter users with 70% accuracy (Conrad & Kešelj, 2016). In the second study, Twitter data was used to predict donations to charities with 71% accuracy (Calix Woc, 2020). These results could likely be improved further, lending confidence that such data could be leveraged to conduct targeted donation asks, ultimately decreasing charities’ costs of prospecting online. This would certainly be welcome in the era of Covid-19 and our increasingly digital world. However, the use of such targeted advertising techniques could also increase privacy concerns among donors and stakeholders, even when only leveraging publicly accessible data (Jung, 2017).
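For readers curious about the mechanics, the sketch below shows the general shape of a character n-gram text classifier of the kind used in the first study. The toy tweets, labels, and parameter choices are illustrative assumptions rather than the published pipeline.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for users' aggregated tweet text and whether they donated.
tweets = [
    "volunteering at the food bank this weekend #givingback",
    "big game tonight, who's watching?",
    "proud to support our local shelter #donate",
    "new phone day, loving the camera",
]
donated = [1, 0, 1, 0]  # hypothetical labels

# Character n-grams (2-4 characters, within word boundaries) capture hashtags,
# spelling habits, and short tokens that word-level features often miss.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, donated)
print(model.predict(["happy to give to charity #donate"]))
```

The accuracies in the 70% range reported above came from far larger real samples; a toy set like this only demonstrates the shape of the pipeline, not its predictive power.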

It is now critically important to invest in an open science research programme on how artificial intelligence and social media can be ethically leveraged in the donor prospecting process. Open science can mean many things but has been formally described as “transparent and accessible knowledge that is shared and developed through collaborative networks” (Vicente-Saez & Martinez-Fuentes, 2018). Generally, open science can consist of the transparent publication of data (when possible), as well as the publication of transparent methods for conducting the research, and the publication of open access scientific reports. Such open science research has the potential to generate new insights for all charities and nonprofits, while also increasing transparency, awareness, and control for potential donors.

It is possible to create a collaborative process for conducting this research. Such a process would ask prior charitable donors to explicitly consent to data linkages between past donations and social media profiles. The results of the research could be published publicly, and the artificial intelligence generated could be used solely to improve matching between prospective donors and potential causes. This would ultimately build a virtuous cycle of innovation and trust between donors and charities, preparing the sector for the challenges of the AI-enabled age.

References

Calix Woc, C. (2020). Psychographic Profiling of Charitable Donations Using Twitter Data and Machine Learning Techniques [Master’s thesis]. Dalhousie University.

Conrad, C., & Kešelj, V. (2016). Predicting Political Donations Using Twitter Hashtags and Character N-Grams. 2016 IEEE 18th Conference on Business Informatics (CBI), 2, 1–7.

Enli, G., & Naper, A. A. (2016). Social Media Incumbent Advantage: Barack Obama’s and Mitt Romney’s Tweets in the 2012 U.S. Presidential Election Campaign. In The Routledge Companion to Social Media and Politics (pp. 364–378). Routledge.

Gruzd, A., & Hernández-García, Á. (2018). Privacy Concerns and Self-Disclosure in Private and Public Uses of Social Media. Cyberpsychology, Behavior, and Social Networking, 21(7), 418–428. https://doi.org/10.1089/cyber.2017.0709

Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Jung, A.-R. (2017). The influence of perceived ad relevance on social media advertising: An empirical examination of a mediating role of privacy concern. Computers in Human Behavior, 70, 303–309. https://doi.org/10.1016/j.chb.2017.01.008

Vicente-Saez, R., & Martinez-Fuentes, C. (2018). Open Science now: A systematic literature review for an integrated definition. Journal of Business Research, 88, 428–436. https://doi.org/10.1016/j.jbusres.2017.12.043