What are the implications of Canada’s new Artificial Intelligence and Data Act (AIDA) for researchers? I have a few thoughts to contribute to this conversation, and they point to some big problems.
The AIDA is part of the wider Bill C-27, which is nearing its final reading in the Canadian parliament. A few months back I had the pleasure of being published with Carla Heggie in the Canadian Journal of Law and Technology. We wrote a paper on the implications of the new law for emerging brain-computer interface (BCI) technologies. Reflecting on that work, I believe our BCI-related findings speak to a larger concern.
The AIDA introduces the concept of “high-impact systems”: artificial intelligence technologies that could cause greater harm than other AI technologies. The Act provides some factors for assessing whether a system would be high impact (see the Companion Document):
- Evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
- The severity of potential harms;
- The scale of use;
- The nature of harms or adverse impacts that have already taken place;
- The extent to which, for practical or legal reasons, it is not reasonably possible to opt out of the system;
- Imbalances of economic or social circumstances, or age of impacted persons; and
- The degree to which the risks are adequately regulated under another law.
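To make the ambiguity concrete, here is a toy sketch of what a checklist-style self-assessment against these factors might look like. This is entirely my own construction, not anything defined in the Act or the Companion Document: every field name, score, and threshold below is a placeholder assumption. The legislation supplies the factors but no weights or cutoffs, which is exactly the problem.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """One score per Companion Document factor, on an arbitrary
    0-3 scale (my assumption; the Act defines no such scale)."""
    risk_to_health_safety_or_rights: int
    severity_of_potential_harms: int
    scale_of_use: int
    prior_harms_or_adverse_impacts: int
    difficulty_of_opting_out: int
    vulnerability_of_impacted_persons: int
    gap_in_existing_regulation: int

    def is_high_impact(self, threshold: int = 10) -> bool:
        # The Act specifies no weighting or cutoff; this unweighted
        # sum and threshold are placeholders that a regulator could
        # define very differently.
        total = sum(getattr(self, f) for f in self.__dataclass_fields__)
        return total >= threshold

# A non-invasive research BCI might plausibly score low on every factor.
noninvasive_bci = ImpactAssessment(
    risk_to_health_safety_or_rights=1,
    severity_of_potential_harms=1,
    scale_of_use=0,
    prior_harms_or_adverse_impacts=0,
    difficulty_of_opting_out=0,
    vulnerability_of_impacted_persons=1,
    gap_in_existing_regulation=2,
)
print(noninvasive_bci.is_high_impact())  # False under my arbitrary cutoff
```

Under my arbitrary cutoff the non-invasive system comes out low impact, but nothing in the legislation tells a researcher whether a regulator would reach the same conclusion.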
I have two questions for lawmakers related to this issue.
The first is: how will lawmakers categorize technologies?
For example, Elon Musk’s Neuralink is a BCI system, but it involves surgery that can harm an individual. The BCI technologies we work on in my group are non-invasive and pose no greater risk than everyday life.
If lawmakers treat technologies as broad categories, lumping invasive and non-invasive BCIs together, I am concerned that this could have negative implications for researchers. Would my research still need extensive legal review? The uncertainty of this question, and the nature of the legislation itself, may have a chilling effect on my work, because research ethics boards may be reluctant to permit activities that could potentially be illegal, even if they are harmless.
The second is: how will lawmakers administer high-impact systems legislation?
If each system must go through a parliamentary review, it will take years, if not decades, to legislate each technology. If there is an alternative mechanism, how can lawmakers ensure that the process does not get bogged down in red tape?
These tests will also create regulatory uncertainty. They differ from the risk-based tests in the European Union’s new AI Act, so will companies need to perform additional compliance checks to operate in Canada?
I challenge lawmakers to consider the risks of this legislation. While I think we should have transparent and trustworthy AI, and that legislation is a way to get there, the current gaps in the proposed law could do much more harm than good.
What do you think about the law? To learn more, check out the AIDA Companion Document.
You may also be interested in our paper (citation below, not yet online).
Conrad, C., and Heggie, C. (2024). Legal and ethical challenges raised by advances in brain-computer interface technology. Canadian Journal of Law and Technology, 21(2).